Add code to aid tracking down cache alignment issues.
If DEBUG is defined in cache.c, this code checks the alignment of
every attempt to flush/invalidate the data cache and prints a warning
if the alignment is incorrect.
If DEBUG is not defined, this code is optimized out.
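A minimal sketch of such a check (the function name and message text
are illustrative, not necessarily what the patch adds; debug() only
prints when DEBUG is defined):

    static int check_cache_range(unsigned long start, unsigned long stop)
    {
        int ok = 1;

        /* Both ends of the range must be cache-line aligned. */
        if (start & (CONFIG_SYS_CACHELINE_SIZE - 1))
            ok = 0;
        if (stop & (CONFIG_SYS_CACHELINE_SIZE - 1))
            ok = 0;

        if (!ok)
            debug("CACHE: misaligned operation at range [%08lx, %08lx]\n",
                  start, stop);

        return ok;
    }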
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Dinh Nguyen <dinguyen@opensource.altera.com>
Cc: Albert Aribaud <albert.u.boot@aribaud.net>
Cc: Tom Rini <trini@konsulko.com>
Currently, many CPUs use the same flush_cache() function, which simply
calls flush_dcache_range().
So implement a weak flush_cache() for all CPUs to use.
The original weak flush_cache() in arch/arm/lib/cache.c contains some
code for ARM1136 & ARM926ejs. However, arch/arm/cpu/arm1136/cpu.c and
arch/arm/cpu/arm926ejs/cache.c already implement real flush_cache()
functions of their own, so the ARM1136 & ARM926ejs code in the weak
flush_cache() of arch/arm/lib/cache.c is never used.
So this patch removes that code from flush_cache() and only calls
flush_dcache_range().
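The resulting weak implementation is essentially (sketch):

    __weak void flush_cache(unsigned long start, unsigned long size)
    {
        flush_dcache_range(start, start + size);
    }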
Signed-off-by: Josh Wu <josh.wu@atmel.com>
Some drivers, such as ohci and lcd, use the dcache functions, but some
ARM CPUs do not implement the invalidate_dcache_range()/
flush_dcache_range() functions.
To avoid compile errors, this patch adds weak, empty stub functions
for all ARM CPUs in arch/arm/lib/cache.c.
An ARM CPU can still implement its own cache functions in its cpu folder.
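The stubs are roughly of this form (sketch):

    __weak void invalidate_dcache_range(unsigned long start, unsigned long stop)
    {
        /* Empty: overridden by CPUs that implement a data cache. */
    }

    __weak void flush_dcache_range(unsigned long start, unsigned long stop)
    {
        /* Empty: overridden by CPUs that implement a data cache. */
    }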
Signed-off-by: Josh Wu <josh.wu@atmel.com>
Reviewed-by: York Sun <yorksun@freescale.com>
This enables ARMv7 barrier operations support when
-march=armv7-a is enabled.
Using CP15 barriers causes the U-Boot bootm command to crash when
transferring control to the loaded image on the Renesas R8A7794
Cortex-A7 CPU.
Using ARMv7 barrier operations instead of the deprecated CP15 barriers
helps to avoid these issues.
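The idea, roughly (the exact guard macro is an assumption):

    #if __LINUX_ARM_ARCH__ >= 7
    /* Native ARMv7 barrier instructions, available with -march=armv7-a */
    #define DSB    asm volatile ("dsb sy" : : : "memory")
    #define ISB    asm volatile ("isb sy" : : : "memory")
    #else
    /* Deprecated CP15 c7 barrier encodings */
    #define DSB    CP15DSB
    #define ISB    CP15ISB
    #endif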
Signed-off-by: Valentine Barshak <valentine.barshak+renesas@cogentembedded.com>
Signed-off-by: Vladimir Barinov <vladimir.barinov+renesas@cogentembedded.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
Remove two gratuitous blank lines, use u32 (instead of int) as the
type for values that will be written to a register, move the beginning
of the variable declaration section to a separate line (rather than the
one with the opening brace) and keep the function signature on a single
line where possible.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Simon Glass <sjg@chromium.org>
This is not only more readable but also prevents a warning
about a missing prototype. The prototypes which are actually
missing are added.
cc: Albert Aribaud <albert.u.boot@aribaud.net>
Signed-off-by: Jeroen Hofstee <jeroen@myspectrum.nl>
Reviewed-by: Tom Rini <trini@ti.com>
The 'XN' execute-never bit is set in the page tables. This will
prevent speculative prefetches to non-executable regions. But the
domain permissions are set to manager in the DACR register, so the
page-table attribute for 'XN' is not effective. Change the
permissions to client.
This fixes a lot of speculative prefetch aborts seen on OMAP5
secure devices.
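A sketch of what programming the DACR for client access could look
like (the helper name is hypothetical):

    static inline void set_domains_client(void)
    {
        /* 0b01 ("client") for all 16 domains: the page-table AP/XN bits
         * are checked, unlike "manager" (0b11), which bypasses them. */
        unsigned int dacr = 0x55555555;

        asm volatile ("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr));
    }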
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Tested-by: Vincent Stehle <v-stehle@ti.com>
Cc: Vincent Stehle <v-stehle@ti.com>
Cc: Tom Rini <trini@ti.com>
Cc: Albert ARIBAUD <albert.u.boot@aribaud.net>
Add support for adjusting the L1 cache behavior by updating the MMU
configuration. The mmu_set_region_dcache_behaviour() function allows
drivers to make these changes after the MMU is set up.
It is implemented only for ARMv7 at present.
This is needed for LCD support, where we want to make the LCD frame buffer
write-through (or off) rather than write-back.
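Hypothetical use from an LCD driver, after the MMU has been set up
(frame-buffer address and size are placeholders):

    /* Make the frame buffer write-through instead of write-back. */
    mmu_set_region_dcache_behaviour(fb_base, fb_size, DCACHE_WRITETHROUGH);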
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Remove the flush of boundary cache lines that was done as part
of invalidating a buffer that is not aligned to a cache-line
boundary.
Also, print a warning when this situation is detected.
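The range invalidate then behaves roughly as follows (helper names and
constants are illustrative, not the exact U-Boot symbols):

    void invalidate_dcache_range(unsigned long start, unsigned long stop)
    {
        /* Unaligned ends are no longer cleaned first; just warn. */
        if ((start & (CONFIG_SYS_CACHELINE_SIZE - 1)) ||
            (stop & (CONFIG_SYS_CACHELINE_SIZE - 1)))
            printf("WARNING: %s - misaligned buffer\n", __func__);

        v7_dcache_maint_range(start, stop, DCACHE_INVALIDATE_RANGE);
    }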
Signed-off-by: Aneesh V <aneesh@ti.com>
set-way operations need a DSB after them to ensure the
operation is complete. DMB may not be enough. Use DSB
after all operations instead of DMB.
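For illustration (CP15 barrier encodings; the surrounding function is
only sketched):

    #define CP15DMB    asm volatile ("mcr p15, 0, %0, c7, c10, 5" : : "r" (0))
    #define CP15DSB    asm volatile ("mcr p15, 0, %0, c7, c10, 4" : : "r" (0))

    static void v7_maint_dcache_all(u32 operation)
    {
        /* ... loop over all cache levels, sets and ways using
         * DCISW/DCCISW for the requested operation ... */

        CP15DSB;    /* previously CP15DMB: only DSB waits for the
                     * set/way maintenance itself to complete */
    }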
Signed-off-by: Aneesh V <aneesh@ti.com>
- Add a framework for layered cache maintenance (sketched below)
  - separate out SoC-specific outer cache maintenance from
    maintenance of caches known to the CPU
- Add generic ARMv7 cache maintenance operations that affect all
  caches known to ARMv7 CPUs. For instance, in Cortex-A8 these
  operations will affect both L1 and L2 caches; in Cortex-A9
  they will affect only the L1 cache.
  - D-cache operations supported:
    - Invalidate entire D-cache
    - Invalidate D-cache range
    - Flush (clean & invalidate) entire D-cache
    - Flush D-cache range
  - I-cache operations supported:
    - Invalidate entire I-cache
- Add maintenance functions for TLB, branch predictor array, etc.
- Enable -march=armv7-a so that ARMv7 assembly instructions can be
  used
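A sketch of the layering, assuming weak SoC hooks (names follow the
generic/outer split described above and should be treated as
illustrative):

    /* Default outer-cache hook: an SoC with an outer cache (e.g. PL310)
     * overrides this; otherwise it stays an empty weak stub. */
    void __weak v7_outer_cache_flush_all(void)
    {
    }

    void flush_dcache_all(void)
    {
        /* Caches known to the CPU (L1, and L2 on Cortex-A8). */
        v7_maint_dcache_all(ARMV7_DCACHE_CLEAN_INVAL_ALL);

        /* SoC-specific outer cache, if any. */
        v7_outer_cache_flush_all();
    }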
Signed-off-by: Aneesh V <aneesh@ti.com>