The current code in reserve_noncached() has two issues (see the sketch after this list):
1) The first update of gd->start_addr_sp always rounds down to a section
start. However, the equivalent calculation in cache.c:noncached_init()
always first rounds up to a section start, then subtracts a section size.
These two calculations differ if the initial value is already aligned to
a section boundary.
2) The second update of gd->start_addr_sp subtracts exactly
CONFIG_SYS_NONCACHED_MEMORY, whereas the equivalent calculation in
cache.c:noncached_init() rounds the noncached size up to section
alignment before subtracting it. The two calculations differ if the
noncached region size is not a multiple of the MMU section size.
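Both mismatches can be reproduced with the alignment arithmetic alone. The
following standalone sketch is not U-Boot code: SECTION_SIZE and the ALIGN
helpers stand in for U-Boot's MMU_SECTION_SIZE and alignment macros, and the
input values are hypothetical.

    #include <stdio.h>

    #define SECTION_SIZE     0x200000UL                     /* e.g. a 2 MiB section */
    #define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1)) /* round up */
    #define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))             /* round down */

    int main(void)
    {
        /* Already section-aligned start, size not a section multiple. */
        unsigned long start_addr_sp = 0x80000000UL;
        unsigned long noncached_size = 0x180000UL;

        /* Issue 1: board_f.c rounds down ... */
        unsigned long top_board_f = ALIGN_DOWN(start_addr_sp, SECTION_SIZE);
        /* ... but cache.c rounds up and then steps back one section. */
        unsigned long top_cache = ALIGN(start_addr_sp, SECTION_SIZE) - SECTION_SIZE;

        /* Issue 2: board_f.c subtracts the exact size ... */
        unsigned long bottom_board_f = top_board_f - noncached_size;
        /* ... but cache.c subtracts the section-aligned size. */
        unsigned long bottom_cache = top_cache - ALIGN(noncached_size, SECTION_SIZE);

        printf("top:    board_f %#lx vs cache.c %#lx\n", top_board_f, top_cache);
        printf("bottom: board_f %#lx vs cache.c %#lx\n", bottom_board_f, bottom_cache);

        return 0;
    }

With these hypothetical inputs, the two code paths disagree about both the top
and the bottom of the noncached region, so the space reserved in board_f.c
does not necessarily cover the region that noncached_init() later maps and
hands out.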
In practice, one or both of these issues causes a real problem on Jetson
TX1: U-Boot triggers a synchronous abort during initialization, likely due
to overlapping use of some memory region.
This change fixes both these issues by duplicating the exact calculations
from noncached_init() into reserve_noncached().
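As a rough sketch of what the mirrored calculation looks like (assuming
U-Boot's ALIGN() macro, the architecture's MMU_SECTION_SIZE, and the usual gd
global data pointer; the real function may differ in details):

    static int reserve_noncached(void)
    {
        /* These calculations must match cache.c:noncached_init(). */
        gd->start_addr_sp = ALIGN(gd->start_addr_sp, MMU_SECTION_SIZE) -
                MMU_SECTION_SIZE;
        gd->start_addr_sp -= ALIGN(CONFIG_SYS_NONCACHED_MEMORY,
                       MMU_SECTION_SIZE);

        return 0;
    }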
However, this fix assumes that gd->start_addr_sp on entry to
reserve_noncached() exactly matches mem_malloc_start on entry to
noncached_init(). I haven't traced the code to see whether this is
absolutely guaranteed in all (or indeed any!) cases. Consequently, I added
some comments in the hope that this condition continues to hold.
Fixes: 5f7adb5b1c ("board_f: reserve noncached space below malloc area")
Cc: Vikas Manocha <vikas.manocha@st.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>