Mirror of https://github.com/AsahiLinux/u-boot (synced 2024-11-15 09:27:35 +00:00)
c27814be33
We intentionally don't implement a separate invalidate_dcache_all(), as entire
data cache invalidation is a dangerous operation even if we flush the data
cache right before invalidation.

Here is a real example: we may get stuck in the following code if we store
any context (like the BLINK register) on the stack in the
invalidate_dcache_all() function. BLINK is the register where the return
address is automatically saved when we do a function call with instructions
like 'bl'.

    void flush_dcache_all() {
        __dc_entire_op(OP_FLUSH);
        // Other code //
    }

    void invalidate_dcache_all() {
        __dc_entire_op(OP_INV);
        // Other code //
    }

    void foo(void) {
        flush_dcache_all();
        invalidate_dcache_all();
    }

Now let's see what really happens during that code execution:

    foo()
    |->> call flush_dcache_all
         [return address is saved to BLINK register]
         [push BLINK] (save to stack)                        ![point 1]
         |->> call __dc_entire_op(OP_FLUSH)
              [return address is saved to BLINK register]
              [flush L1 D$]
              return [jump to BLINK] <<------
         [other flush_dcache_all code]
         [pop BLINK] (get from stack)
         return [jump to BLINK] <<------
    |->> call invalidate_dcache_all
         [return address is saved to BLINK register]
         [push BLINK] (save to stack)                        ![point 2]
         |->> call __dc_entire_op(OP_INV)
              [return address is saved to BLINK register]
              [invalidate L1 D$]                             ![point 3]
              // Oops!!!
              // We lose the return address of the invalidate_dcache_all
              // function: we saved it to the stack and invalidated the
              // L1 D$ right after that!
              return [jump to BLINK] <<------
         [other invalidate_dcache_all code]
         [pop BLINK] (get from stack)
         // We no longer have this data in the L1 dcache as we invalidated
         // it at [point 3], so we fetch it from the next memory level
         // (for example DDR memory). But in memory we have the value saved
         // at [point 1], which is the return address of flush_dcache_all
         // (instead of the address of the current invalidate_dcache_all
         // function, which we saved at [point 2]!).
         return [jump to BLINK] <<------
         // As BLINK points into invalidate_dcache_all, we call it again
         // and loop forever.

Fortunately we may do flush and invalidation of the D$ with one single
instruction, which automatically mitigates the situation described above.
And because invalidate_dcache_all() isn't used in common U-Boot code, we
implement "flush and invalidate dcache all" instead.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
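To make the fix concrete, here is a minimal editorial sketch (not the literal
driver code) of the safe combined operation. The OP_* values and the
__dc_entire_op() name follow the convention used in the commit message above;
the aux-register programming is stubbed out and should be treated as an
assumption. Because the flush and the invalidation happen as one whole-D$
operation, the cache line holding the just-pushed BLINK is written back to
memory by the very operation that discards it, so the later pop reads the
correct return address from the next memory level.

    /* Editorial sketch; OP_* codes assumed in the style of the ARC cache code. */
    #define OP_INV          0x1                     /* discard cache lines */
    #define OP_FLUSH        0x2                     /* write dirty lines back */
    #define OP_FLUSH_N_INV  (OP_FLUSH | OP_INV)     /* both, in one operation */

    static void __dc_entire_op(const int cacheop)
    {
            /* Stub: the real implementation programs the ARC auxiliary
             * registers that trigger the whole-D$ operation. */
            (void)cacheop;
    }

    /*
     * Safe: even though the return address (BLINK) is pushed onto the stack
     * inside this function, the combined flush-and-invalidate writes that
     * dirty line back as part of the same operation that invalidates it,
     * so the subsequent pop re-reads the correct value from memory.
     */
    void flush_n_invalidate_dcache_all(void)
    {
            __dc_entire_op(OP_FLUSH_N_INV);
    }

    /*
     * Unsafe pattern described above: a bare invalidate would discard the
     * line holding the just-pushed BLINK before it was ever written back.
     *
     *      void invalidate_dcache_all(void) { __dc_entire_op(OP_INV); }
     */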
37 lines · 902 B · C
/*
 * Copyright (C) 2013-2014 Synopsys, Inc. All rights reserved.
 *
 * SPDX-License-Identifier: GPL-2.0+
 */

#ifndef __ASM_ARC_CACHE_H
#define __ASM_ARC_CACHE_H

#include <config.h>

/*
 * As of today we may handle any L1 cache line length right in software.
 * For that essentially cache line length is a variable not constant.
 * And to satisfy users of ARCH_DMA_MINALIGN we just use largest line length
 * that may exist in either L1 or L2 (AKA SLC) caches on ARC.
 */
#define ARCH_DMA_MINALIGN	128

#if defined(ARC_MMU_ABSENT)
#define CONFIG_ARC_MMU_VER 0
#elif defined(CONFIG_ARC_MMU_V2)
#define CONFIG_ARC_MMU_VER 2
#elif defined(CONFIG_ARC_MMU_V3)
#define CONFIG_ARC_MMU_VER 3
#elif defined(CONFIG_ARC_MMU_V4)
#define CONFIG_ARC_MMU_VER 4
#endif

#ifndef __ASSEMBLY__

void cache_init(void);
void flush_n_invalidate_dcache_all(void);

#endif /* __ASSEMBLY__ */

#endif /* __ASM_ARC_CACHE_H */
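For context on the ARCH_DMA_MINALIGN comment in the header above: this is the
alignment drivers rely on when carving out DMA buffers, since padding to the
largest possible cache line (128 bytes here) guarantees that flushing or
invalidating the buffer can never touch unrelated data sharing a line. Below
is a minimal usage sketch, assuming U-Boot's ALLOC_CACHE_ALIGN_BUFFER helper
from <memalign.h> and the generic flush/invalidate range calls from
<cpu_func.h>; the driver routine itself is hypothetical and only illustrates
the pattern.

    #include <common.h>
    #include <cpu_func.h>
    #include <memalign.h>

    /* Hypothetical driver routine: DMA 'len' bytes from a device and
     * return the first byte (e.g. a status code). Names are illustrative. */
    static int sketch_dma_read_status(size_t len)
    {
            /* Stack buffer aligned and padded to ARCH_DMA_MINALIGN
             * (128 bytes on ARC), so the cache maintenance below stays
             * inside the buffer and cannot evict or discard neighbouring
             * stack data that would otherwise share a cache line. */
            ALLOC_CACHE_ALIGN_BUFFER(u8, buf, len);
            ulong start = (ulong)buf;
            ulong end = start + ALIGN(len, ARCH_DMA_MINALIGN);

            /* Write back any dirty lines before the device owns the memory... */
            flush_dcache_range(start, end);

            /* ...the device DMAs into buf here (elided)... */

            /* ...then drop now-stale lines before the CPU reads the DMA data. */
            invalidate_dcache_range(start, end);

            return buf[0];
    }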