There is no justification for this function, especially in exported
form.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add code to detect timeouts when waiting for HW events such as PLL
lock done. Any errors are logged and trigger an error return code.
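As a hedged illustration of the pattern (the PLL_LOCK field name and the
timeout value are made up for this sketch, not taken from the driver):

/* Bounded wait for a lock bit instead of looping forever. */
int timeout = 1000;

while (!(readl(&pll->pll_base) & PLL_LOCK)) {
	if (--timeout <= 0) {
		error("timeout waiting for PLL lock\n");
		return -ETIMEDOUT;
	}
	udelay(1);
}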
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add the tables defining which pads and mux options exist in the Tegra210
XUSB padctl hardware.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This change simply deletes code from the Tegra210 XUSB padctl driver that
is already present in the common XUSB padctl code. Since all the arrays
in tegra210_socdata are empty, this update may leave the Tegra210 XUSB
padctl driver non-functional at run-time. However, (a) this driver is not
used yet so no regression can be observed and (b) the next commit will
immediately fix this up.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There are some differences between the Tegra124 and Tegra210 XUSB padctl
code. So far, the common XUSB padctl code only supports Tegra124. Add
some parameters etc. so that it can work for both chips.
This also allows moving Tegra124's process_nodes() into the common file;
something that would have required edits during the move if done in the
previous commit.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
A fair amount of the XUSB padctl driver will be common between Tegra124
and Tegra210. To avoid cut/paste between the two chips, create a new
file that will contain the common code, and convert the Tegra124 code to
use it. This change doesn't move every last piece of code that can/will be
shared, but rather concentrates on moving code that can be moved with zero
changes, so there are no other diffs mixed in.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This file defines pr_fmt(), so the individual error() calls don't need to
include the prefix in their format strings. Doing so results in duplicate
text in any error messages. Remove the duplication.
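For illustration only (the prefix string below is assumed, not quoted from
the file), the duplication looked like this:

#define pr_fmt(fmt) "tegra-xusb-padctl: " fmt	/* assumed prefix */

/* Before: prints "tegra-xusb-padctl: tegra-xusb-padctl: invalid lane 5" */
error("tegra-xusb-padctl: invalid lane %d\n", lane);

/* After: pr_fmt() already supplies the prefix once. */
error("invalid lane %d\n", lane);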
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
A future patch will soon move some of the XUSB padctl code into a common
file in arch/arm/mach-tegra. Rename the existing dummy XUSB padctl file
to avoid conflicting with that, or being confusing.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Implement the procedure that the TRM mandates to initialize PLLREFE and
PLLE. This makes the PLL actually lock.
Note that this section of the TRM is being cleaned up to remove some
confusion. The set of register accesses in this patch should be final,
although the step numbers/descriptions might still change.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This sets up a fine-grained page table, which is a requirement for
noncached_init() to operate correctly.
MMU setup code currently exists in a number of places:
- A version in the core ARMv8 support code that sets up page tables that
use very large block sizes that CONFIG_SYS_NONCACHED_MEMORY doesn't
support.
- Enhanced versions for fsl-lsch3 and zynqmp that set up finer-grained
page tables.
Ideally, rather than duplicating the MMU setup code yet again, this patch
would instead consolidate all the different routines into the core ARMv8
code so that it supported all use-cases. However, this will require
significant effort since there appear to be a number of discrepancies[1]
between different versions of the code, and between the defines/values that
some copies of the MMU setup code use and the architectural MMU
documentation. Some reverse engineering will be required to determine the
intent of the current code.
[1] For example, in the core ARMv8 MMU setup code, three defines named
TCR_EL[123]_IPS_BITS exist, but only one of them sets the IPS field and
the others set a different field (T1SZ) in the TCR. As far as I
can tell so far, there should be no need to set different values per
exception level nor to modify the T1SZ field at all, since TTBR1 shouldn't
be enabled anyway. Another example is inconsistent values for *_VA_BITS
between the current core ARMv8 MMU setup code and the various SoC-
specific MMU setup code. Another example is that asm/armv8/mmu.h's value
for SECTION_SHIFT doesn't match asm/system.h's MMU_SECTION_SHIFT;
research is needed to determine which code relies on which of those
values and why, and whether fixing the incorrect value will cause any
regression.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
After consulting with some of the SPDX team, the conclusion is that
Makefiles are worth adding SPDX-License-Identifier tags to as well, and most of
ours already have one. This adds tags to the ones that lack them and converts a few
that had full (or in one case, very partial) license blobs into the
equivalent tag.
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Enabling a PLL while IDDQ is high is not allowed. The Linux kernel checks for
this
condition and warns about it verbosely, so while this seems to work
fine, fix it up according to the programming guidelines provided in
the Tegra K1 TRM (v02p), Section 5.3.8.1 ("PLLC and PLLC4 Startup
Sequence"). The Tegra114 TRM doesn't contain this information, but
the programming of PLLC is the same on Tegra114 and Tegra124.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Enabling a PLL while IDDQ is high is not allowed. The Linux kernel checks for
this
condition and warns about it verbosely, so while this seems to work
fine, fix it up according to the programming guidelines provided in
the Tegra K1 TRM (v02p), Section 5.3.8.1 ("PLLC and PLLC4 Startup
Sequence").
Reported-by: Nicolas Chauvet <kwizart@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
While clk_m and the oscillator run at the same frequencies on Tegra114
and Tegra124, clk_m is the proper source for the architected timer. On
more recent Tegra generations, Tegra210 and later, both the oscillator
and clk_m can run at different frequencies. clk_m will be divided down
from the oscillator.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On currently supported SoCs, clk_m always runs at the same frequency as
the oscillator input. However newer SoC generations such as Tegra210 no
longer have that restriction. Prepare for that by separating clk_m from
the oscillator clock and allow SoC code to override the clk_m rate.
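One possible shape of such an override, purely as a sketch (the hook name and
the weak-default pattern are assumptions, not necessarily what this series
uses):

/* Weak default: clk_m runs at the oscillator rate. */
u32 __weak clk_m_get_rate(u32 parent_rate)
{
	return parent_rate;
}

A later SoC (e.g. Tegra210) could then override this to return the
divided-down clk_m rate instead.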
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
AFAIK, for all PLLs on all Tegra SoCs, the primary PLL output frequency
is (input * n) / (m * p). However, PLLP's primary output (pllP_out0) on
T210 is the VCO output, and divp is not applied. pllP_out2 does have divp
applied. All other pllP_outN are divided down from pllP_out0. We only
support pllP_out0 in U-Boot at the time of writing.
Fix clock_get_rate() to handle this special case.
This corrects the returned rate for PLLP to be 408MHz rather than 204MHz.
In turn, this causes high enough dividers to be calculated for the various
peripheral clocks that feed off of PLLP. Without this, some peripherals
failed to operate correctly. For instance, one of my SD cards worked
perfectly but an older (presumably slower) card could not be read.
Note that prior to commit 722e000ccd "Tegra: PLL: use per-SoC pllinfo
table instead of PLL_DIVM/N/P, etc.", the calculated PLL frequency was
816MHz since the wrong values were being extracted from the PLLP divider
register. This caused overly large peripheral dividers to be calculated,
which while wrong, didn't cause any correctness issues; things simply ran
slower than they could.
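A hedged sketch of the special case (variable names are illustrative and the
divider handling is simplified; the real code pulls the dividers out of the
PLL base register via the pllinfo table):

u64 rate = (u64)parent_rate * divn;	/* feedback divider */
do_div(rate, divm);			/* input divider */
/* pllP_out0 on Tegra210 is the VCO output, so skip the post divider. */
if (!(is_tegra210 && clkid == CLOCK_ID_PERIPH))	/* is_tegra210: illustrative */
	do_div(rate, divp);		/* post divider */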
Reported-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
P2371-2180 is a P2180 CPU board married to a P2597 I/O board. The
combination contains SoC, DRAM, eMMC, SD card slot, HDMI, USB
micro-B port, Ethernet via USB3, USB3 host port, SATA, PCIe, and
two GPIO expansion headers.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
We have flipped CONFIG_SPL_DISABLE_OF_CONTROL. We now have the cleanup
helpers $(SPL_) and CONFIG_IS_ENABLED(), so we are ready to clear
away the ugly logic in include/fdtdec.h:
#ifdef CONFIG_OF_CONTROL
# if defined(CONFIG_SPL_BUILD) && !defined(SPL_OF_CONTROL)
# define OF_CONTROL 0
# else
# define OF_CONTROL 1
# endif
#else
# define OF_CONTROL 0
#endif
Now CONFIG_IS_ENABLED(OF_CONTROL) is the substitute. It refers to
CONFIG_OF_CONTROL for U-Boot proper and CONFIG_SPL_OF_CONTROL for
SPL.
Also, we no longer have to cancel CONFIG_OF_CONTROL in
include/config_uncmd_spl.h and scripts/Makefile.spl.
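For illustration, a usage sketch of the replacement (the callees are made up):

/* Compiles and works in both U-Boot proper and SPL: the macro evaluates
 * CONFIG_OF_CONTROL or CONFIG_SPL_OF_CONTROL as appropriate. */
if (CONFIG_IS_ENABLED(OF_CONTROL))
	setup_from_device_tree();	/* illustrative */
else
	setup_from_platform_data();	/* illustrative */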
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
As we discussed a couple of times, negative CONFIG options make our
life difficult; CONFIG_SYS_NO_FLASH, CONFIG_SYS_DCACHE_OFF, ...
and here is another one.
Now, there are three boards enabling OF_CONTROL on SPL:
- socfpga_arria5_defconfig
- socfpga_cyclone5_defconfig
- socfpga_socrates_defconfig
This commit adds CONFIG_SPL_OF_CONTROL for them and deletes
CONFIG_SPL_DISABLE_OF_CONTROL from the other boards to invert
the logic.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Represent all available RAM in either one or two banks. The first bank
describes any RAM below 4GB. The second bank describes any RAM above 4GB.
This split is driven by the following requirements:
- The NVIDIA L4T kernel requires separate entries in the DT /memory/reg
property for memory below and above the 4GB boundary. The layout of that
DT property is directly driven by the entries in the U-Boot bank array.
- On systems with RAM beyond a physical address of 4GB, the potential
existence of a carve-out at the end of RAM below 4GB can only be
represented using multiple banks, since usable RAM is not contiguous.
While making this change, add a lot more comments re: how and why RAM is
represented in banks, and implement a few more "semantic" functions that
define (and perhaps later detect at run-time) the size of any carve-out.
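A minimal sketch of the two-bank layout, assuming the standard
gd->bd->bi_dram[] array and illustrative helper names:

/* Bank 0: all usable RAM below the 4GB boundary (minus any carve-out). */
gd->bd->bi_dram[0].start = CONFIG_SYS_SDRAM_BASE;
gd->bd->bi_dram[0].size  = usable_ram_below_4g();	/* illustrative */

/* Bank 1: any RAM above the 4GB boundary (zero size if none). */
gd->bd->bi_dram[1].start = 0x100000000ULL;
gd->bd->bi_dram[1].size  = ram_above_4g();		/* illustrative */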
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
The return value of query_sdram_size() is assigned directly to
gd->ram_size in dram_init(). Adjust the return type to match the field
it's assigned to. This has the beneficial effect that on 64-bit systems,
the return value can correctly represent large RAM sizes over 4GB.
For similar reasons, change the type of variable size_bytes in the same
way.
query_sdram_size() would previously clip the detected RAM size to at most
just under 4GB in all cases, since on 32-bit systems, larger values could
not be represented. Disable this feature on 64-bit systems since the
representation restriction does not exist.
On 64-bit systems, never call get_ram_size() to validate the detected/
calculated RAM size. On any system with a secure OS/... carve-out, RAM
may not have a single contiguous usable area, and this can confuse
get_ram_size(). Ideally, we'd make this call conditional upon some other
flag that indicates specifically that a carve-out is actually in use. At
present, building for a 64-bit system is the best indication we have of
this fact. In fact, the call to get_ram_size() is not useful by the time
U-Boot runs on any system, since U-Boot (and potentially much other early
boot software) always runs from RAM on Tegra, so any mistakes in memory
controller register programming will already have manifested themselves
and prevented U-Boot from running to this point. In the future, we may
simply delete the call to get_ram_size() in all cases.
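A hedged before/after sketch of the prototype change (the exact types in the
tree may differ slightly):

/* Before: the return type cannot represent sizes of 4GB or more. */
static unsigned int query_sdram_size(void);

/* After: matches gd->ram_size, so large sizes survive on 64-bit builds. */
static phys_size_t query_sdram_size(void);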
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
The logic for simple PLLs on T124 was broken by this commit:
722e000c Tegra: PLL: use per-SoC pllinfo table instead of PLL_DIVM/N/P, etc.
Correct it by reading from the same pll_misc register that it writes to and
adding an entry for the DP PLL in the pllinfo table.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
P2371-0000 is a P2581 or P2530 CPU board married to a P2595 I/O
board. The combination contains SoC, DRAM, eMMC, SD card slot,
HDMI, USB micro-B port, Ethernet via USB3, USB3 host port, SATA,
a GPIO expansion header, and an analog audio jack.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
E2220-1170 is a Tegra210 bringup board with onboard SoC, DRAM,
eMMC, SD card slot, HDMI, USB micro-B port, and sockets for various
expansion modules.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
T124/210 requires some specific configuration (VPR setup) to
be performed by the bootloader before the GPU can be used.
For this reason, the GPU node in the device tree is disabled
by default. This patch enables the node if U-Boot has performed
VPR configuration.
Boards enabled by this patch are T124's Jetson TK1 and Venice2
and T210's P2571.
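A hedged sketch of the fixup step (the helper name and compatible string are
assumptions used only for illustration):

/* Only expose the GPU to the kernel once VPR has been programmed. */
if (tegra_gpu_vpr_configured())		/* assumed helper */
	fdt_status_okay_by_compatible(blob, "nvidia,gk20a");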
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
U-Boot is responsible for enabling the GPU DT node after all necessary
configuration (VPR setup for T124) is performed. In order to be able to
check whether this configuration has been performed right before booting
the kernel, make it happen during board_init().
Also move VPR configuration into the more generic gpu.c file, which will
also host other GPU-related functions, and let boards specify
individually whether they need VPR setup or not.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Additionally, ARM64 devices typically run a secure monitor in EL3 and
U-Boot in EL2, and set up some secure RAM carve-outs to contain the EL3
code and data. These carve-outs are located at the top of 32-bit address
space. Restrict U-Boot's RAM usage to well below the location of those
carve-outs. Ideally, the secure monitor would inform U-Boot of
exactly which RAM it could use at run-time. However, I'm not sure how to
do that at present (and even if such a mechanism does exist, it would
likely not be generic across all forms of secure monitor).
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Added PLL variables (divider masks/shifts, lock enable/detect, etc.)
to a new pllinfo struct for each SoC PLL (PLLA/C/D/E/M/P/U/X).
Used pllinfo struct in all clock functions, validated on T210.
Should be equivalent to prior code on T124/114/30/20. Thanks
to Marcel Ziswiler for corrections to the T20/T30 values.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Added 38.4MHz/48MHz entries to pll_x_table for CPU PLL. Needs
to be measured - should be close to 700MHz (1.4G/2).
Note that some freqs aren't in the PLLU table in T210 TRM
(13, 26MHz), so I used the 12MHz table entry for them. They
shouldn't be selected since they're not viable T210 OSC freqs.
Since there are now 2 new OSC defines, all tables (pll_x_table,
PLLU) had to increase by two entries, but since 38.4/48MHz are
not viable osc freqs on T20/30/114, etc., they're just set to 0.
Signed-off-by: Tom Warren <twarren@nvidia.com>
CPU board (E2530) has a fan - turn it on via GPIO to keep
the SoC cool.
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Based on Venice2, incorporates Stephen Warren's
latest P2571 pinmux table.
With Thierry Reding's 64-bit build fixes, this
will build and boot in 64-bit on my P2571
(when used with a 32-bit AVP loader).
Signed-off-by: Tom Warren <twarren@nvidia.com>
Derived from Tegra124, modified as appropriate during T210
board bringup. Cleaned up debug statements to conserve
string space, too. This also adds misc 64-bit changes
from Thierry Reding/Stephen Warren.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
All based off of Tegra124. As a Tegra210 board is brought
up, these may change a bit to match the HW more closely,
but probably 90% of this is identical to T124.
Note that since T210 is a 64-bit build, it has no SPL
component, and hence no cpu.c for Tegra210.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Moved Tegra config options to mach-tegra/Kconfig so that both
32-bit and 64-bit builds can co-exist for Tegra SoCs.
T210 will be 64-bit only (no SPL) and will require a 32-bit
AVP/BPMP loader.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Simon's 'tegra124: Implement spl_was_boot_source()' needs
a prototype for save_boot_params_ret() to build cleanly
for 64-bit Tegra210.
Signed-off-by: Tom Warren <twarren@nvidia.com>
A subsequent patch will enable the use of the architected timer on
ARMv8. Doing so implies that udelay() will be backed by this timer
implementation, and hence the architected timer must be ready when
udelay() is first called. The first time udelay() is used is while
resetting the debug UART, which happens very early. Make sure that
arch_timer_init() is called before that.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On 64-bit SoCs the I-cache isn't enabled in early code, so the default
cache enable functions for 64-bit ARM can be used.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Most peripherals on Tegra can do DMA only to the lower 32-bit
address space, even on 64-bit SoCs. This limitation is
typically overcome by the use of an IOMMU. Since the IOMMU is
not entirely trivial to set up and serves no other purpose
(I/O protection, ...) in U-Boot, restrict 64-bit Tegra SoCs to
the lower 32-bit address space for RAM. This ensures that the
physical addresses of buffers that are programmed into the
various DMA engines are valid and don't alias to lower addresses.
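A hedged sketch of one way to express the restriction (this is not claimed to
be the actual implementation in this patch):

/* Never let U-Boot relocate to or allocate from RAM above 4GB, so that
 * buffer addresses handed to DMA engines stay 32-bit. */
ulong board_get_usable_ram_top(ulong total_size)
{
	if (gd->ram_top > 0x100000000ULL)
		return 0x100000000ULL;
	return gd->ram_top;
}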
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
[swarren, stripped out changes not strictly related to warnings]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Some archs/boards specify their own default by pre-defining the config
which causes the Kconfig system to mix up the order of the configs in
the defconfigs... This will cause merge pain if allowed to proliferate.
Remove the configs that behave this way from the archs.
A few configs still remain, but that is because they only exist as
defaults and do not have a proper Kconfig entry. Those appear to be:
SPIFLASH
DISPLAY_BOARDINFO
Signed-off-by: Joe Hershberger <joe.hershberger@ni.com>
[trini: rastaban, am43xx_evm_usbhost_boot, am43xx_evm_ethboot updates,
drop DM_USB from MSI_Primo81 as USB_MUSB_SUNXI isn't converted yet to DM]
Signed-off-by: Tom Rini <trini@konsulko.com>
We plan to enable device tree in SPL by default. Before doing this,
explicitly disable it for all boards.
Signed-off-by: Simon Glass <sjg@chromium.org>
Somehow this change was dropped in the various merges. I noticed when I
came to turn off the non-driver-model support for Tegra. We need to make
this change (and deal with any problems) before going further.
Change-Id: Ib9389a0d41008014eb0df0df98c27be65bc79ce6
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Marek Vasut <marex@denx.de>
With the rename, the MAINTAINERS file was not updated. Fix it and the
'Chrombook' typo in Kconfig.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add a hook that allows boards to add their own init to board_init().
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is required in order to avoid instability when running from caches
after the kernel starts.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
A harmless but confusing warning is displayed when looking up the
DisplayPort PLL. Correct this.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
All the Tegra boards borrow files from the board/nvidia/common/
directory, i.e., board/nvidia/common/* are not vendor-common files,
but SoC-common files.
Move NVIDIA common files to arch/arm/mach-tegra/ to clean up
Makefiles.
As arch/arm/mach-tegra/board.c already exists, this commit renames
board/nvidia/common/board.c to arch/arm/mach-tegra/board2.c,
expecting they will be consolidated as a second step.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Cc: Simon Glass <sjg@chromium.org>
Acked-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
The secure world code is relocated to the MB just below the top of 4G. We
reserve it in the FDT (by setting CONFIG_ARMV7_SECURE_RESERVE_SIZE), but it is
not protected in h/w.
Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Make sure to enable the SMMU when booting the kernel in non-secure mode.
This is necessary because some of the SMMU registers are restricted to
TrustZone-secured requestors, hence the kernel wouldn't be able to turn
the SMMU on. At the same time, enable translation for all memory clients
for the same reasons. The kernel will still be able to control SMMU IOVA
translation using the per-SWGROUP enable bits.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
We only set CNTFRQ in arch_timer_init for the boot CPU. But this has to
happen for all cores.
Fixing this resolves problems of KVM with emulating the generic
timer/counter.
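For reference, a hedged sketch of the per-CPU CNTFRQ write on ARMv7 (the
helper name is illustrative; CNTFRQ is CP15 c14/c0/0):

/* Program the architected timer frequency; must run on every core. */
static inline void set_cntfrq(unsigned int freq_hz)
{
	asm volatile("mcr p15, 0, %0, c14, c0, 0" : : "r" (freq_hz));
}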
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
These registers can be used to prevent the non-secure world from accessing a
megabyte-aligned region of RAM; use them to protect the U-Boot secure monitor
code.
At first I tried to do this from s_init(); however, this inexplicably causes
U-Boot's networking (e.g. DHCP) to fail, while networking under Linux was fine.
So instead I have added a new weak arch function protect_secure_section()
called from relocate_secure_section() and reserved the region there. This is
better overall since it defers the reservation until after the sec vs. non-sec
decision (which can be influenced by an envvar) has been made when booting the
OS.
Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
[Jan: tiny style adjustment]
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is based on Thierry Reding's work and uses Ian Campbell's
preparatory patches. It comes with full support for CPU_ON/OFF PSCI
services. The algorithm used in this version for turning CPUs on and
off was proposed by Peter De Schrijver and Thierry Reding in
http://thread.gmane.org/gmane.comp.boot-loaders.u-boot/210881. It
consists of first enabling CPU1..3 via the PMC, just to powergate them
again with the help of the Flow Controller. Once the Flow Controller is
in place, we can leave the PMC alone while processing CPU_ON and CPU_OFF
PSCI requests.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Will be used for unpowergating CPUs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add functions to provide access to the display clocks on Tegra124 including
setting the clock rate for an EDP display.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Create a function which sets the source clock for a peripheral, given
the number of mux bits to adjust. This can then be used more generally.
For now, don't export it.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
The get_pll() function can do the wrong thing if passed values that are
out of range. Add checks for this and add a function which can return
a 'simple' PLL. This can be defined by SoCs with their own clocks.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is only used by Nvidia boards, so move it into nvidia/common to
simplify things.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
When enabling a PWM, allow the existing clock rate and source to stand
unchanged.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is needed for tegra124 also, so make it common and add a header file
for tegra124.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
By making the board selections optional, every defconfig will include
the board selection when running savedefconfig, so if a new board is
added to the top of the list of choices, the former top's defconfig will
still be correct.
Signed-off-by: Joe Hershberger <joe.hershberger@ni.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Stephen Warren <swarren@wwwdotorg.org>
Cc: Tom Rini <trini@konsulko.com>
As mentioned in the previous commit, adding default values in each
Kconfig causes problems because it does not co-exist with the
"depends on" syntax. (Please note this is not a bug of Kconfig.)
We should not do so unless we have a special reason. Actually,
for CONFIG_DM*, we have no good reason to do so.
Generally, CONFIG_DM is not a user-configurable option. Once we
convert a driver into Driver Model, the board only works with Driver
Model, i.e. CONFIG_DM must be always enabled for that board.
So, using "select DM" is more suitable rather than allowing users to
modify it. Another good thing is that Kconfig warns about unmet dependencies
for the "select" syntax, so we easily notice bugs.
Actually, CONFIG_DM and other related options have been added
without consistency: some into arch/*/Kconfig, some into
board/*/Kconfig, and some into configs/*_defconfig.
This commit prefers "select" and cleans up the following issues.
[1] Never use "CONFIG_DM=n" in defconfig files
It is really rare to add "CONFIG_FOO=n" to disable CONFIG options.
It is more common to use "# CONFIG_FOO is not set". But here, we
do not even have to do it.
Less than half of OMAP3 boards have been converted to Driver Model.
Adding the default values to arch/arm/cpu/armv7/omap3/Kconfig is
weird. Instead, add "select DM" only to appropriate boards, which
eventually eliminates "CONFIG_DM=n", etc.
[2] Delete redundant CONFIGs
Sandbox sets CONFIG_DM in arch/sandbox/Kconfig and defines it again
in configs/sandbox_defconfig.
Likewise, OMAP3 sets CONFIG_DM in arch/arm/cpu/armv7/omap3/Kconfig and
defines it also in omap3_beagle_defconfig and devkit8000_defconfig.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Even the 8-bit case needs KBCB configured, as pin D7 is located in this
pingroup.
Please note that pingroup ATC seems to come out of reset with its
config set to NAND so one needs to explicitly configure some other
function to this group in order to avoid clashing settings which is
outside the scope of this patch.
Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Marcel Ziswiler <marcel@ziswiler.com>
Tested-by: Marcel Ziswiler <marcel@ziswiler.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
In accordance with our other modules supported by U-Boot and as agreed
upon for Apalis/Colibri T30, get rid of the carrier board in the board/
configuration/device-tree naming.
While at it also bring the prompt more in line with our other products.
Signed-off-by: Marcel Ziswiler <marcel@ziswiler.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This allows selection between CSI and DSI_B on the MIPI pads.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Some pinmux controls are in a different register set. Add support for
manipulating those in a similar way to existing pins/groups.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Move the struct pmux_pingrp_desc type and tegra_soc_pingroups variable
declaration together with the other pin/mux-level definitions. Now the whole
file is grouped/ordered as pin/mux-related definitions first, then
drvgrp-related definitions.
Also fix a typo in an ifdef comment.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Patches that added the Tegra210 pinctrl driver and renamed directories
arch/arm/cpu/tegra{$soc}-common -> arch/arm/mach-tegra/tegra-${soc}
crossed. Move the Tegra210 pinctrl driver to the correct location. This
wasn't detected since Tegra210 support is in the process of being added,
and isn't buildable yet.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This option has a bool type, not hex.
Fix it and enable it if CONFIG_DM is on because Driver Model always
requires malloc memory. Devices are scanned twice, before/after
relocation. CONFIG_SYS_MALLOC_F should be enabled to use malloc
memory before relocation. As it is board-independent, handle it
globally.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Stephen Warren <swarren@wwwdotorg.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Robert Baldyga <r.baldyga@samsung.com>
Various files are needlessly rebuilt every time due to the version and
build time changing. As version.h is not actually needed, remove the
include.
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Albert Aribaud <albert.u.boot@aribaud.net>
Cc: Stefano Babic <sbabic@denx.de>
Cc: Minkyu Kang <mk7.kang@samsung.com>
Cc: Marek Vasut <marex@denx.de>
Cc: Tom Warren <twarren@nvidia.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Macpaul Lin <macpaul@andestech.com>
Cc: Wolfgang Denk <wd@denx.de>
Cc: York Sun <yorksun@freescale.com>
Cc: Stefan Roese <sr@denx.de>
Cc: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
Cc: Simon Glass <sjg@chromium.org>
Cc: Philippe Reynes <tremyfr@yahoo.fr>
Cc: Eric Jarrige <eric.jarrige@armadeus.org>
Cc: "David Müller" <d.mueller@elsoft.ch>
Cc: Phil Edworthy <phil.edworthy@renesas.com>
Cc: Robert Baldyga <r.baldyga@samsung.com>
Cc: Torsten Koschorrek <koschorrek@synertronixx.de>
Cc: Anatolij Gustschin <agust@denx.de>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Łukasz Majewski <l.majewski@samsung.com>
Use the full driver model GPIO and serial drivers in SPL now that these are
supported. Since device tree is not available they will use platform data.
Remove the special SPL GPIO function as it is no longer needed.
This is all in one commit to maintain bisectability.
Signed-off-by: Simon Glass <sjg@chromium.org>
Tegra210 has a per-pin option named e_io_hv, which indicates that the
pin's input path should be configured to be 3.3V-tolerant. Add support
for this.
Note that this is very similar to the previous chips' rcv_sel option.
However, since the Tegra TRM names this option differently for the
different chips, we support the new name so that the code exactly matches
the naming in the TRM, to avoid confusion.
This patch incorporates a few fixes from Tom Warren <twarren@nvidia.com>.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Tegra210 starts its drive group registers at a different offset within the
APB MISC register block than other SoCs do. Update the code to handle this.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
T210 supports HSM and Schmitt options in the pinmux register (previous
chips placed these options in the drive group register). Update the
code to handle this.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Tegra210 moves some bits around in the pinmux registers. Update the code
to handle this.
This doesn't attempt to address the issues with the group-to-group varying
drive group register layout mentioned earlier. This patch handles the
SoC-to-SoC differences in the mux register layout.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On some future SoCs, some per-drive-group features become per-pin
features. Move all type definitions early in the header so they can
be enabled irrespective of the setting of TEGRA_PMX_SOC_HAS_DRVGRPS.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On some future SoCs, some of the per-drive-group features no longer
exist. Add some ifdefs to support this.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Future SoCs have a slightly different combination of pinmux options per
pin. This will be simpler to handle if we simply have one define per
option, rather than grouping various options together, in combinations
that don't align with future chips.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Tegra's drive group registers have a remarkably inconsistent layout. The
current U-Boot driver doesn't take this into account at all. Add a
comment to describe the issue, so at least anyone debugging the driver
will be aware of this. To solve this, we'd need to add a per-drive-group
data structure describing the layout for the individual register. Since
we don't set up too many drive groups in U-Boot at present, this
hopefully isn't causing much of a practical issue. Still, we probably need
to fix this sometime.
With Tegra210, the register layout becomes almost entirely consistent, so
this problem partially solves itself over time.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is needed to correctly apply the new Jetson TK1 pinmux config.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
When the CPU is in non-secure (NS) mode (when running U-Boot under a
secure monitor), certain actions cannot be taken, since they would need
to write to secure-only registers. One example is configuring the ARM
architectural timer's CNTFRQ register.
We could support this in one of two ways:
1) Compile twice, once for secure mode (in which case anything goes) and
once for non-secure mode (in which case certain actions are disabled).
This complicates things, since everyone needs to keep track of
different U-Boot binaries for different situations.
2) Detect NS mode at run-time, and optionally skip any impossible actions.
This has the advantage of a single U-Boot binary working in all cases.
(2) is not possible on ARM in general, since there's no architectural way
to detect secure-vs-non-secure. However, there is a Tegra-specific way to
detect this.
This patch uses that feature to detect secure vs. NS mode on Tegra, and
uses that to:
* Skip the ARM arch timer initialization.
* Set/clear an environment variable so that boot scripts can take
different action depending on which mode the CPU is in. This might be
something like:
if CPU is secure:
    load secure monitor code into RAM.
    boot secure monitor.
    secure monitor will restart (a new copy of) U-Boot in NS mode.
else:
    execute normal boot process
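A hedged sketch of the environment-variable part (the helper and the variable
name are assumptions, not necessarily what this patch uses):

/* Let boot scripts branch on the CPU security state. */
if (tegra_cpu_is_non_secure())		/* assumed helper */
	setenv("cpu_ns_mode", "1");
else
	setenv("cpu_ns_mode", NULL);	/* unset in secure mode */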
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Some systems have so much RAM that the end of RAM is beyond 4GB. An
example would be a Tegra124 system (where RAM starts at 2GB physical)
that has more than 2GB of RAM.
In this case, we want gd->ram_size to represent the actual RAM size, so
that the actual RAM size is passed to the OS. This is useful if the OS
implements LPAE, and can actually use the "extra" RAM.
However, we can't use get_ram_size() to verify the actual amount of RAM
present on such systems, since some of the RAM can't be accesses, which
confuses that function. Avoid calling get_ram_size() when the RAM size
is too large for it to work correctly. It's never actually needed anyway,
since there's no reason for the BCT to report the wrong RAM size.
In systems with >=4GB RAM, we still need to clip the reported RAM size
since U-Boot uses a 32-bit variable to represent the RAM size in bytes.
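A minimal sketch of the idea, with illustrative constants (the real code works
in terms of the size reported by the BCT):

if (size_bytes >= 0x100000000ULL)
	/* >= 4GB: clip (ram_size is 32-bit here) and skip get_ram_size(),
	 * which cannot probe that far. */
	gd->ram_size = 0x100000000ULL - CONFIG_SYS_SDRAM_BASE;
else
	gd->ram_size = get_ram_size((long *)CONFIG_SYS_SDRAM_BASE, size_bytes);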
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
size_mb is used to hold a value that's sometimes KB, sometimes MB,
and sometimes bytes. Use separate correctly named variables to avoid
confusion here. Also fix indentation of a conditional statement.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Now CONFIG_SPL_BUILD is not defined in Kconfig, so
"!depends on SPL_BUILD" and "if !SPL_BUILD" are redundant.
Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
When Kconfig for U-Boot was examined, one of the biggest issues was
how to support multiple images (Normal, SPL, TPL). There were
actually two options, "single .config" and "multiple .config".
After some discussions and thought experiments, I chose the latter,
i.e. to create ".config", "spl/.config", "tpl/.config" for Normal,
SPL, TPL, respectively.
It is true that the "multiple .config" strategy provided us the
maximum flexibility and helped to avoid duplicating CONFIGs among
Normal, SPL, TPL, but I have noticed some fatal problems:
[1] It is impossible to share CONFIG options across the images.
If you change the configuration of Main image, you often have to
adjust some SPL configurations correspondingly. Currently, we
cannot handle the dependencies between them. It means one of the
biggest advantages of Kconfig is lost.
[2] It is too painful to change both ".config" and "spl/.config".
Sunxi guys started to work around this problem by creating a new
configuration target. Commit cbdd9a9737 (sunxi: kconfig: Add
%_felconfig rule to enable FEL build of sunxi platforms.) added
"make *_felconfig" to enable CONFIG_SPL_FEL on both images.
Changing the configuration of multiple images in one command is a
generic demand. The current implementation cannot propose any
good solution for this.
[3] Kconfig files are getting ugly and difficult to understand.
Commit b724bd7d63 (dm: Kconfig: Move CONFIG_SYS_MALLOC_F_LEN to
Kconfig) has sprinkled "if !SPL_BUILD" over the Kconfig files.
[4] The build system got more complicated than it should be.
To adjust Linux-originated Kconfig to U-Boot, the helper script
"scripts/multiconfig.sh" was introduced. Writing a complicated
text processor in a shell script sometimes caused problems.
Now I believe the "single .config" will serve us better. With it,
all the problems above would go away. Instead, we will have to add
some CONFIG_SPL_* (and CONFIG_TPL_*) options such as CONFIG_SPL_DM,
but we will not have many. Anyway, this is what we do now in
scripts/Makefile.spl.
I admit my mistake with my apology and this commit switches to the
single .config configuration.
It is not so difficult to do that:
- Remove unnecessary processing from scripts/multiconfig.sh
This file will remain for a while to support the current defconfig
format. It will be removed after more cleanups are done.
- Adjust some makefiles and Kconfigs
- Add some entries to include/config_uncmd_spl.h and the new file
scripts/Makefile.uncmd_spl. Some CONFIG options that are not
supported on SPL must be disabled because one .config is shared
between SPL and U-Boot proper going forward. I know this is not
a beautiful solution and I think we can do better, but let's see
how much we will have to describe them.
- update doc/README.kconfig
More cleaning up patches will follow this.
Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
Reviewed-by: Simon Glass <sjg@chromium.org>