While converting CONFIG_SYS_[DI]CACHE_OFF to Kconfig, there are instances
where these configuration items are conditional on SPL. This commit adds SPL
variants of these configuration items, uses CONFIG_IS_ENABLED(), and updates
the configurations as required.
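For illustration, a rough sketch of the resulting pattern (the enclosing
function is hypothetical; only the CONFIG_IS_ENABLED() usage is the point):

  /*
   * Sketch only: in an SPL build CONFIG_IS_ENABLED(SYS_DCACHE_OFF) tests
   * CONFIG_SPL_SYS_DCACHE_OFF, otherwise it tests CONFIG_SYS_DCACHE_OFF.
   */
  void board_enable_caches(void)
  {
  #if !CONFIG_IS_ENABLED(SYS_DCACHE_OFF)
      dcache_enable();
  #endif
  }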
Acked-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Trevor Woerner <trevor@toganlabs.com>
[trini: Make the default depend on the setting for full U-Boot, update
more zynq hardware]
Signed-off-by: Tom Rini <trini@konsulko.com>
When U-Boot started using SPDX tags we were among the early adopters and
there weren't a lot of other examples to borrow from. So we picked the
area of the file that usually had a full license text and replaced it
with an appropriate SPDX-License-Identifier: entry. Since then, the
Linux Kernel has adopted SPDX tags and places them as the very first
line in a file (except where shebangs are used, in which case it is the
second line) and with slightly different comment styles than ours.
In part due to community overlap, in part due to better tag visibility
and in part for other minor reasons, switch over to that style.
This commit changes all instances where we have a single declared
license in the tag as both the before and after are identical in tag
contents. There are also a few places where I found we did not have a
tag, and I have introduced one.
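For example (the GPL-2.0+ identifier and the copyright line are only
illustrative), a C source file now starts like this:

  // SPDX-License-Identifier: GPL-2.0+
  /*
   * Copyright (C) ...
   */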
Signed-off-by: Tom Rini <trini@konsulko.com>
As part of testing booting Linux kernels on Rockchip devices, it was
discovered by Ziyuan Xu and Sandy Patterson that we had multiple, and in
some cases incomplete, isb definitions. This was causing the Linux
kernel to fail to boot.
In order to solve this problem, as well as cover any corner cases we
may also have, a number of changes are made to consolidate things.
First, <asm/barriers.h> now becomes the source of isb/dsb/dmb
definitions. This, however, introduces another complexity. Due to
needing to build SPL for 32-bit tegra with -march=armv4 we need to
borrow the __LINUX_ARM_ARCH__ logic from the Linux Kernel in a more
complete form. Move this from arch/arm/lib/Makefile to arch/arm/Makefile
and add a comment about it. Now that we always know what the target CPU
is capable of, we can always do the correct thing for the barrier. The
final part of this is that we need to be consistent everywhere and call
isb()/dsb()/dmb(), rather than using ISB/DSB/DMB in some places and the
function names in others.
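A rough sketch of what <asm/barriers.h> can do once __LINUX_ARM_ARCH__
is always available (the pre-v7 fallbacks are omitted here):

  #if __LINUX_ARM_ARCH__ >= 7
  #define ISB  asm volatile ("isb sy" : : : "memory")
  #define DSB  asm volatile ("dsb sy" : : : "memory")
  #define DMB  asm volatile ("dmb sy" : : : "memory")
  #else
  /* older architectures fall back to CP15 operations or compiler barriers */
  #endif

  #define isb()  ISB
  #define dsb()  DSB
  #define dmb()  DMB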
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Ziyuan Xu <xzy.xu@rock-chips.com>
Acked-by: Sandy Patterson <apatterson@sightlogix.com>
Reported-by: Ziyuan Xu <xzy.xu@rock-chips.com>
Reported-by: Sandy Patterson <apatterson@sightlogix.com>
Signed-off-by: Tom Rini <trini@konsulko.com>
At present armv7 will unhappily invalidate a cache region and print an
error message. Make it skip the operation instead, as it does with other
cache operations.
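A minimal sketch of the new behaviour, assuming a check_cache_range()
helper that returns zero for an unsuitable (e.g. misaligned) range:

  void invalidate_dcache_range(unsigned long start, unsigned long stop)
  {
      if (!check_cache_range(start, stop))
          return;    /* skip, as the other cache operations do */

      /* ... invalidate the range line by line, as before ... */
  }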
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Marek Vasut <marex@denx.de>
Adds missing flush_dcache_range() and invalidate_dcache_range() dummy
(empty) placeholder functions to the #else portion of #ifndef
CONFIG_SYS_DCACHE_OFF, complementing the full implementations of these
functions defined when the D-cache is enabled.
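In outline (the bodies are intentionally empty; signatures shown as they
are commonly declared):

  #ifndef CONFIG_SYS_DCACHE_OFF
  /* ... full implementations ... */
  #else
  void flush_dcache_range(unsigned long start, unsigned long stop)
  {
  }

  void invalidate_dcache_range(unsigned long start, unsigned long stop)
  {
  }
  #endif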
Signed-off-by: Daniel Allred <d-allred@ti.com>
Signed-off-by: Andreas Dannenberg <dannenberg@ti.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
v7_maint_dcache_all() does not work reliably when built with gcc6,
see: https://bugzilla.redhat.com/show_bug.cgi?id=1318788
While debugging this I learned that v7_maint_dcache_all() is unreliable
when built with gcc5 too, when it is marked as noinline.
This commit fixes the reliability issues by replacing the C code with
the ready-to-use asm implementation from the kernel.
Given that this code when written as C-code clearly is quite fragile
(also see the existing comments about the C-code being the way it is
to get optimal assembly) and that we have a proven asm alternative,
I believe that this is the best solution.
Note that we actually already had a copy of the kernel's
v7_flush_dcache_all() before this commit in
arch/arm/mach-uniphier/arm32/lowlevel_init.S.
This commit moves that code to arch/arm/cpu/armv7/cache_v7_asm.S,
renames it to __v7_flush_dcache_all(), and adds a v7_flush_dcache_all()
wrapper which saves / restores the clobbered registers for use from C
code.
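From C the wrapper can then be used directly, roughly as follows (the
outer-cache call is shown only to mirror the existing flow):

  void v7_flush_dcache_all(void);    /* asm wrapper; saves/restores clobbers */

  void flush_dcache_all(void)
  {
      v7_flush_dcache_all();
      v7_outer_cache_flush_all();
  }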
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Add code to aid tracking down cache alignment issues.
In case DEBUG is defined in cache.c, this code will
check the alignment of each attempt to flush/invalidate the data
cache and print a warning if the alignment is incorrect.
If DEBUG is not defined, this code is optimized out.
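A sketch of such a check (debug() only produces output when DEBUG is
defined, so the helper compiles down to nothing otherwise):

  static void check_cache_range(unsigned long start, unsigned long stop)
  {
      int ok = 1;

      if (start & (CONFIG_SYS_CACHELINE_SIZE - 1))
          ok = 0;
      if (stop & (CONFIG_SYS_CACHELINE_SIZE - 1))
          ok = 0;
      if (!ok)
          debug("CACHE: misaligned operation at range [%08lx, %08lx]\n",
                start, stop);
  }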
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Dinh Nguyen <dinguyen@opensource.altera.com>
Cc: Albert Aribaud <albert.u.boot@aribaud.net>
Cc: Tom Rini <trini@konsulko.com>
Currently many CPUs use the same flush_cache() function, which just
calls flush_dcache_range().
So implement a weak flush_cache() for all the CPUs to use.
The original weak flush_cache() in arch/arm/lib/cache.c had some
code for ARM1136 & ARM926ejs. But arch/arm/cpu/arm1136/cpu.c and
arch/arm/cpu/arm926ejs/cache.c implement a real flush_cache()
function as well. That means the original code for ARM1136 & ARM926ejs
in the weak flush_cache() of arch/arm/lib/cache.c is totally useless.
So this patch removes that code from flush_cache() and only calls
flush_dcache_range().
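The resulting weak default is just a thin wrapper, roughly:

  /* CPUs such as ARM1136 and ARM926ejs override this with their own version */
  __weak void flush_cache(unsigned long start, unsigned long size)
  {
      flush_dcache_range(start, start + size);
  }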
Signed-off-by: Josh Wu <josh.wu@atmel.com>
Some drivers, like ohci and lcd, use dcache functions, but some ARM
CPUs don't implement the invalidate_dcache_range()/flush_dcache_range()
functions.
To avoid compile errors this patch adds weak, empty stub functions
for all ARM CPUs in arch/arm/lib/cache.c.
Each ARM CPU can still implement its own cache functions in its cpu folder.
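A sketch of the stubs (deliberately empty; CPUs with working cache code
provide strong definitions that override them):

  __weak void invalidate_dcache_range(unsigned long start, unsigned long stop)
  {
  }

  __weak void flush_dcache_range(unsigned long start, unsigned long stop)
  {
  }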
Signed-off-by: Josh Wu <josh.wu@atmel.com>
Reviewed-by: York Sun <yorksun@freescale.com>
This enables ARMv7 barrier operations support when
-march=armv7-a is enabled.
Using CP15 barriers causes the U-Boot bootm command to crash when
transferring control to the loaded image on the Renesas R8A7794
Cortex-A7 CPU. Using ARMv7 barrier operations instead of the deprecated
CP15 barriers helps to avoid these issues.
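For reference, the difference looks roughly like this in C inline
assembly (the CP15 encoding c7, c10, 4 is the pre-ARMv7 way to issue a
DSB):

  /* deprecated CP15 barrier */
  asm volatile ("mcr p15, 0, %0, c7, c10, 4" : : "r" (0) : "memory");

  /* ARMv7 barrier operation, available with -march=armv7-a */
  asm volatile ("dsb sy" : : : "memory");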
Signed-off-by: Valentine Barshak <valentine.barshak+renesas@cogentembedded.com>
Signed-off-by: Vladimir Barinov <vladimir.barinov+renesas@cogentembedded.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
Remove two gratuitous blank lines, use u32 (instead of int) as the
type for values that will be written to a register, move the beginning
of the variable declaration section to a separate line (rather than the
one with the opening brace) and keep the function signature on a single
line where possible.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Simon Glass <sjg@chromium.org>
This is not only more readable but also prevents a warning
about a missing prototype. The prototypes which are actually
missing are added.
cc: Albert Aribaud <albert.u.boot@aribaud.net>
Signed-off-by: Jeroen Hofstee <jeroen@myspectrum.nl>
Reviewed-by: Tom Rini <trini@ti.com>
The 'XN' execute-never bit is set in the pagetables. This will
prevent speculative prefetches to non-executable regions. But the
domain permissions are set to manager in the DACR register,
so the pagetable attribute for 'XN' is not effective. Change the
permissions to client.
This fixes a lot of speculative prefetch aborts seen on OMAP5
secure devices.
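A minimal sketch in terms of the DACR value (two bits per domain,
0b01 = client, 0b11 = manager; setting every domain to client is shown
only for illustration):

  /* client access for all 16 domains, so per-page permissions (incl. XN) apply */
  unsigned int dacr = 0x55555555;

  asm volatile ("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr) : "memory");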
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Tested-by: Vincent Stehle <v-stehle@ti.com>
Cc: Vincent Stehle <v-stehle@ti.com>
Cc: Tom Rini <trini@ti.com>
Cc: Albert ARIBAUD <albert.u.boot@aribaud.net>
Add support for adjusting the L1 cache behavior by updating the MMU
configuration. The mmu_set_region_dcache_behaviour() function allows
drivers to make these changes after the MMU is set up.
It is implemented only for ARMv7 at present.
This is needed for LCD support, where we want to make the LCD frame buffer
write-through (or off) rather than write-back.
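Typical driver usage might look like this (the frame-buffer variables
are placeholders):

  /* make the LCD frame buffer write-through instead of write-back */
  mmu_set_region_dcache_behaviour(fb_base, fb_size, DCACHE_WRITETHROUGH);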
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Remove the flush of boundary cache-lines done as part
of invalidate on a buffer that is not aligned to a cache-line
boundary.
Also, print a warning when this situation is detected.
Signed-off-by: Aneesh V <aneesh@ti.com>
Set/way operations need a DSB after them to ensure the
operation is complete. DMB may not be enough. Use DSB
after all operations instead of DMB.
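Illustrated with inline assembly (DCCISW, encoding c7, c14, 2, as an
example set/way operation; the set/way value is a placeholder):

  unsigned int setway = 0;    /* encodes cache level, set and way */

  /* clean & invalidate one line by set/way */
  asm volatile ("mcr p15, 0, %0, c7, c14, 2" : : "r" (setway));
  /* DSB, not DMB, guarantees completion of the set/way operation */
  asm volatile ("dsb" : : : "memory");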
Signed-off-by: Aneesh V <aneesh@ti.com>
- Add a framework for layered cache maintenance
- Separate out SoC-specific outer cache maintenance from
  maintenance of caches known to the CPU (the SoC hooks are sketched
  after this list)
- Add generic ARMv7 cache maintenance operations that affect all
  caches known to ARMv7 CPUs. For instance, in Cortex-A8 these
  operations will affect both L1 and L2 caches; in Cortex-A9
  they will affect only the L1 cache
- D-cache operations supported:
  - Invalidate entire D-cache
  - Invalidate D-cache range
  - Flush (clean & invalidate) entire D-cache
  - Flush D-cache range
- I-cache operations supported:
  - Invalidate entire I-cache
- Add maintenance functions for TLB, branch predictor array etc.
- Enable -march=armv7-a so that armv7 assembly instructions can be
  used
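The SoC-specific hooks referred to above are weak no-ops by default,
roughly:

  /* outer (e.g. L2) cache maintenance; SoC code overrides these as needed */
  __weak void v7_outer_cache_enable(void) {}
  __weak void v7_outer_cache_disable(void) {}
  __weak void v7_outer_cache_flush_all(void) {}
  __weak void v7_outer_cache_inval_all(void) {}
  __weak void v7_outer_cache_flush_range(u32 start, u32 end) {}
  __weak void v7_outer_cache_inval_range(u32 start, u32 end) {}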
Signed-off-by: Aneesh V <aneesh@ti.com>