According to the HW team, for some reason the normal clock select code
picks what appears to be a perfectly valid 375KHz SD card clock, based
on the CAR clock source and SDMMC1 controller register settings (CAR =
408MHz PLLP0 divided by 68 for 6MHz, then an SD Clock Control register
divisor of 16 = 375KHz). But the resulting SD card clock, as measured by
the HW team, is 700KHz, which is out-of-spec. So the WAR is to use the
values given in the TRM PLLP table to generate a 400KHz SD-clock (CAR
clock of 24.7MHz, SD Clock Control divisor of 62) only for SDMMC1 on
T210 when the requested clock is <= 400KHz. Note that as far as I can
tell, the other requests for clocks in the Tegra MMC driver result in
valid SD clocks.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Jaehoon Chung <jh80.chung@samsung.com>
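
A minimal sketch of how such a workaround might sit in the MMC clock-setup
path. The constants come from the commit text above; the helper name and
everything else are illustrative, not the actual driver code:

#include <stdbool.h>

/* Values from the TRM PLLP table, per the commit above. */
#define T210_SDMMC1_LOW_CLK_PARENT_HZ	24700000U	/* ~24.7MHz CAR clock */
#define T210_SDMMC1_LOW_CLK_DIVISOR	62U		/* ~400KHz at the card */

/* Hypothetical helper: pick the SD clock to program for a request. */
static unsigned int tegra_mmc_pick_sd_clock(unsigned int requested_hz,
					    bool is_t210_sdmmc1)
{
	/*
	 * The normal path (408MHz PLLP0 / 68 = 6MHz, then SD Clock Control
	 * divisor 16 = 375KHz) measures ~700KHz on T210 SDMMC1, so use the
	 * TRM PLLP values whenever a <= 400KHz clock is requested there.
	 */
	if (is_t210_sdmmc1 && requested_hz <= 400000)
		return T210_SDMMC1_LOW_CLK_PARENT_HZ /
		       T210_SDMMC1_LOW_CLK_DIVISOR;

	return requested_hz;
}
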
As per the T210 TRM, when running at 3.3V, the SDMMC1 tap/trim and
autocal values need to be set to condition the signals correctly before
talking to the SD-card. This is the same as what's being done in CBoot,
but it gets reset when the SDMMC1 HW is soft-reset during SD driver
init, so it needs to be repeated here. Also set autocal and tap/trim for
SDMMC3, although no T210 boards use it for SD-card at this time.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Jaehoon Chung <jh80.chung@samsung.com>
This commit removes the programming sequence that enables the PLLE and
UPHY PLL hardware power sequencers. Per the TRM, boot software should
enable PLLE and the UPHY PLLs in the software-controlled power-on state
and should power down the PLLs before jumping into the kernel or the
next-stage boot software. Add a call to board_cleanup_before_linux() to
facilitate this.
Signed-off-by: JC Kuo <jckuo@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
This function will attempt to look up an Ethernet address in the DTB
that was passed in from cboot. It does so by first trying to locate the
default Ethernet device for the board (identified by the "ethernet"
alias) and if found, reads the "local-mac-address" property. If the
"ethernet" alias does not exist, or if it points to a device tree node
that doesn't exist, or if the device tree node that it points to does
not have a "local-mac-address" property or if the value is invalid, it
will fall back to the legacy mechanism of looking for the MAC address
stored in the "nvidia,ethernet-mac" or "nvidia,ether-mac" properties of
the "/chosen" node.
The MAC address is then written to the default Ethernet device for the
board (again identified by the "ethernet" alias) in U-Boot's control
DTB. This allows the device driver for that device to read the MAC
address from the standard location in device tree.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
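
A minimal sketch of the lookup order described above, using libfdt
directly; the function name is hypothetical, the legacy /chosen
properties are assumed to hold a MAC string, and error handling is
trimmed:

#include <stdio.h>
#include <string.h>
#include <libfdt.h>

/* Hypothetical helper illustrating the lookup order described above. */
static int cboot_get_ethaddr(const void *fdt, unsigned char mac[6])
{
	unsigned int b[6];
	const void *prop;
	const char *path;
	int node, len, i;

	/* Preferred: "ethernet" alias -> "local-mac-address" property. */
	path = fdt_get_alias(fdt, "ethernet");
	if (path) {
		node = fdt_path_offset(fdt, path);
		if (node >= 0) {
			prop = fdt_getprop(fdt, node, "local-mac-address", &len);
			if (prop && len == 6) {
				memcpy(mac, prop, 6);
				return 0;
			}
		}
	}

	/* Legacy fallback: MAC stored as a string under /chosen. */
	node = fdt_path_offset(fdt, "/chosen");
	if (node < 0)
		return node;

	prop = fdt_getprop(fdt, node, "nvidia,ethernet-mac", &len);
	if (!prop)
		prop = fdt_getprop(fdt, node, "nvidia,ether-mac", &len);
	if (!prop)
		return -FDT_ERR_NOTFOUND;

	if (sscanf(prop, "%x:%x:%x:%x:%x:%x",
		   &b[0], &b[1], &b[2], &b[3], &b[4], &b[5]) != 6)
		return -FDT_ERR_BADVALUE;

	for (i = 0; i < 6; i++)
		mac[i] = b[i];

	return 0;
}
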
Tegra186 builds are currently dealt with in very special ways because
Tegra186 is fundamentally different in many respects. It is no longer
necessary to do much of the low-level programming because early boot
firmware will already have taken care of it.
Unfortunately, separating Tegra186 builds from the rest in this way
makes it difficult to share code with prior generations of Tegra. With
all of the low-level programming code behind Kconfig guards, the build
for Tegra186 can again be unified.
As a side-effect, and partial reason for this change, other Tegra SoC
generations can now make use of the code that deals with taking over a
boot from earlier bootloaders. This used to be nvtboot, but has been
replaced by cboot nowadays. Rename the files and functions related to
this to avoid confusion. The implemented protocols are unchanged.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Some devices may restrict access to the PMC to TrustZone software only.
Non-TZ software can detect this and use SMC calls to the firmware that
runs in the TrustZone to perform accesses to PMC registers.
Note that this also fixes reset_cpu() and the enterrcm command on
Tegra186 where they were previously trying to access the PMC at a wrong
physical address.
Based on work by Kalyani Chidambaram <kalyanic@nvidia.com> and Tom
Warren <twarren@nvidia.com>.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
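
The resulting access pattern looks roughly like the sketch below; the
detection flag, the SMC helper, and the function ID are placeholders for
the actual firmware ABI, not quoted from the patch:

#include <stdbool.h>
#include <stdint.h>

/* Placeholder SiP function ID; the real value is defined by the firmware. */
#define SMC_SIP_PMC_READ	0x82000000UL

/* Assumed to exist elsewhere: issues an SMC and returns its result. */
extern unsigned long invoke_smc(unsigned long func, unsigned long arg0,
				unsigned long arg1, unsigned long arg2);

static bool pmc_is_tz_only;		/* set once during PMC detection */
static volatile uint32_t *pmc_base;	/* MMIO base when directly accessible */

static uint32_t tegra_pmc_readl(unsigned long offset)
{
	/* TZ-only devices go through the firmware; others use plain MMIO. */
	if (pmc_is_tz_only)
		return (uint32_t)invoke_smc(SMC_SIP_PMC_READ, offset, 0, 0);

	return pmc_base[offset / 4];
}
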
There's no need to replicate the pmu.h header file for every Tegra SoC
generation. Use a single header that is shared across generations.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add a driver which supports transmitting digital sound to an audio codec.
This uses fixed parameters as a device-tree binding is not currently
defined.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add a driver for the audio hub. This is modelled as a misc device which
supports writing audio data from I2S.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
When U-Boot started using SPDX tags we were among the early adopters and
there weren't a lot of other examples to borrow from. So we picked the
area of the file that usually had a full license text and replaced it
with an appropriate SPDX-License-Identifier: entry. Since then, the
Linux Kernel has adopted SPDX tags and they place it as the very first
line in a file (except where shebangs are used, then it's the second
line) and with slightly different comment styles than ours.
In part due to community overlap, in part due to better tag visibility
and in part for other minor reasons, switch over to that style.
This commit changes all instances where we have a single declared
license in the tag as both the before and after are identical in tag
contents. There are also a few places where I found we did not have a
tag and have introduced one.
Signed-off-by: Tom Rini <trini@konsulko.com>
Adjust this to take a device as a parameter instead of a node.
Signed-off-by: Simon Glass <sjg@chromium.org>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-on: Beaver, Jetson-TK1
Tested-by: Stephen Warren <swarren@nvidia.com>
Adjust this code to support a live device tree. This should be implemented
as a PHY driver but that is left as an exercise for the maintainer.
Signed-off-by: Simon Glass <sjg@chromium.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
The PMC can be modelled as a syscon peripheral. Add a driver for this
so that it can be accessed by drivers when needed. Enable it for tegra124
boards.
Signed-off-by: Simon Glass <sjg@chromium.org>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-on: Beaver, Jetson-TK1
Tested-by: Stephen Warren <swarren@nvidia.com>
At present early clock init happens in SPL. If SPL did not run (because
for example U-Boot is chain-loaded from another boot loader) then the
clocks are not set as U-Boot expects.
Add a function to detect this and call the early clock init in U-Boot
proper.
Signed-off-by: Simon Glass <sjg@chromium.org>
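
A sketch of the idea; the register checked and its reset default are
assumptions, and the real code would test something SPL is known to
program:

#include <stdint.h>

/* Assumed helpers: read a CAR register that SPL programs, and the
 * existing early clock init that normally runs in SPL. */
extern uint32_t tegra_read_spl_programmed_clk_reg(void);
extern void tegra_early_clock_init(void);

#define CLK_REG_RESET_DEFAULT	0x0	/* assumed power-on value */

void tegra_clock_early_init_if_needed(void)
{
	/*
	 * If U-Boot was chain-loaded and SPL never ran, the register still
	 * holds its power-on default, so redo the early clock setup in
	 * U-Boot proper.
	 */
	if (tegra_read_spl_programmed_clk_reg() == CLK_REG_RESET_DEFAULT)
		tegra_early_clock_init();
}
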
Introduce CONFIG_TEGRA124_MMC_DISABLE_EXT_LOOPBACK to disable the external
clock loopback and use the internal one on SDMMC3, by setting the SDMMC_SPARE1
bits of the SDMMC_VENDOR_MISC_CNTRL_0 register to 0xfffd as per the TRM.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Marcel Ziswiler <marcel@ziswiler.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
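
The option boils down to a guarded register update along these lines;
the offset and bit position below are placeholders, not the TRM values:

#include <stdint.h>

/* Placeholder offset/bit for illustration; see the TRM for the real
 * SDMMC_VENDOR_MISC_CNTRL_0 / SDMMC_SPARE1 definitions. */
#define SDMMC_VENDOR_MISC_CNTRL_0		0x120
#define  VENDOR_MISC_CNTRL_EXT_LOOPBACK		(1U << 1)

/* Assumed MMIO accessors for the SDMMC3 controller. */
extern uint32_t sdmmc_readl(unsigned long offset);
extern void sdmmc_writel(uint32_t val, unsigned long offset);

static void tegra_mmc_set_loopback(void)
{
#ifdef CONFIG_TEGRA124_MMC_DISABLE_EXT_LOOPBACK
	uint32_t val = sdmmc_readl(SDMMC_VENDOR_MISC_CNTRL_0);

	/* Clear the external-loopback bit so the internal one is used. */
	val &= ~VENDOR_MISC_CNTRL_EXT_LOOPBACK;
	sdmmc_writel(val, SDMMC_VENDOR_MISC_CNTRL_0);
#endif
}
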
A future patch will implement a clock uclass driver for Tegra. That driver
will call into Tegra's existing clock code to simplify the transition;
this avoids tying the clock uclass patches into significant refactoring
of the existing custom clock API implementation.
Some of the Tegra clock APIs that manipulate peripheral clocks require
both the peripheral clock ID and parent clock ID to be passed in together.
However, the clock uclass API does not require any such "parent"
parameter, so the clock driver must determine this information itself.
This patch implements a new Tegra-specific clock API,
clock_get_periph_parent(), for this purpose.
The new API is implemented in the core Tegra clock code rather than SoC-
specific clock code. The implementation uses various SoC-/clock-specific
data. That data is only available in SoC-specific clock code.
Consequently, two new internal APIs are added that enable the core clock
code to retrieve this information from the SoC-specific clock code. Due to
the structure of the Tegra clock code, this leads to some unfortunate code
duplication. However, this situation predates this patch.
Ideally, future work will de-duplicate the Tegra clock code, and migrate
it into drivers/clk/tegra. However, such refactoring is kept separate from
this series.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
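
Its shape is roughly the following; the enum names mirror the existing
Tegra clock code, while the two internal helpers stand in for the new
SoC-specific lookups and are named here only for illustration:

/* Placeholder enum values; the real periph_id/clock_id lists live in the
 * Tegra clock headers. */
enum periph_id { PERIPH_ID_SDMMC1, PERIPH_ID_UART1 };
enum clock_id  { CLOCK_ID_OSC, CLOCK_ID_PERIPH, CLOCK_ID_NONE };

/* Assumed internal APIs provided by the SoC-specific clock code. */
extern int get_periph_clock_source_mux(enum periph_id periph);
extern enum clock_id map_mux_to_clock_id(enum periph_id periph, int mux);

/*
 * New core API: report which parent currently feeds a peripheral clock,
 * so the clock uclass driver never needs a "parent" parameter from its
 * callers.
 */
enum clock_id clock_get_periph_parent(enum periph_id periph)
{
	int mux = get_periph_clock_source_mux(periph);

	return map_mux_to_clock_id(periph, mux);
}
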
Currently, Tegra peripheral drivers control two aspects of their HW module
clock(s):
1) The clock enable/rate for the peripheral clock itself.
2) The system-level clock tree setup, i.e. the clock parent.
Aspect 1 is reasonable, but aspect 2 is a system-level decision, not
something that an individual peripheral driver should in general know
about or influence. Such system-level knowledge ties the driver to a
specific SoC implementation, even when they use generic APIs for clock
manipulation, since they must have SoC-specific knowledge such as parent
clock IDs. Limited exceptions exist, such as where peripheral HW is
expected to dynamically switch between clock sources at run-time, such
as CPU clock scaling or display clock conflict management in a multi-head
scenario.
This patch enhances the Tegra core code to perform system-level clock
tree setup, in a similar fashion to the Linux kernel Tegra clock driver.
This will allow future patches to simplify peripheral drivers by removing
the clock parent setup logic.
This change is required prior to converting peripheral drivers to use the
standard clock APIs, since:
1) The clock uclass doesn't currently support a set_parent() operation.
Adding one is possible, but not necessary at the moment.
2) The clock APIs retrieve all clock IDs from device tree, and the DT
bindings for almost all peripherals only include information about the
relevant peripheral clocks, and not any potential parent clocks.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
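
The system-level setup can be pictured as a small table walked once at
SoC init, much like the kernel's Tegra clock driver; the IDs and helper
name below are illustrative:

/* Placeholder IDs; the real values come from the Tegra clock headers. */
#define PERIPH_ID_SDMMC1	14
#define PERIPH_ID_I2C1		12
#define CLOCK_ID_PERIPH		4	/* e.g. PLLP */

struct periph_parent {
	int periph_id;
	int parent_id;
};

static const struct periph_parent tegra_clock_tree[] = {
	{ PERIPH_ID_SDMMC1, CLOCK_ID_PERIPH },
	{ PERIPH_ID_I2C1,   CLOCK_ID_PERIPH },
};

/* Assumed helper that programs the clock-source mux for one peripheral. */
extern void clock_set_periph_source(int periph_id, int parent_id);

/* Called once from core Tegra init, so peripheral drivers no longer need
 * SoC-specific parent knowledge. */
void tegra_clock_tree_init(void)
{
	unsigned int i;

	for (i = 0; i < sizeof(tegra_clock_tree) / sizeof(tegra_clock_tree[0]); i++)
		clock_set_periph_source(tegra_clock_tree[i].periph_id,
					tegra_clock_tree[i].parent_id);
}
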
Convert the Tegra MMC driver to DM_MMC. Support for non-DM is removed
to avoid ifdefs in the code. DM_MMC is now enabled for all Tegra builds.
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
(swarren, fixed some NULL pointer dereferences, removed extraneous
changes, rebased on various other changes, removed non-DM support etc.)
Signed-off-by: Stephen Warren <swarren@nvidia.com>
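
The conversion centers on a driver declaration along these lines (a
trimmed sketch in the DM style of that era, with the probe body, ops and
most compatibles omitted; field names such as priv_auto_alloc_size are
assumptions about the U-Boot version in use, not the merged driver):

#include <common.h>
#include <dm.h>
#include <mmc.h>

struct tegra_mmc_priv {
	void *reg_base;		/* plus clocks, GPIOs, mmc config, ... */
};

static int tegra_mmc_probe(struct udevice *dev)
{
	/* Set up registers/clocks and register with the MMC core. */
	return 0;
}

static const struct udevice_id tegra_mmc_ids[] = {
	{ .compatible = "nvidia,tegra124-sdhci" },
	{ .compatible = "nvidia,tegra210-sdhci" },
	{ }
};

U_BOOT_DRIVER(tegra_mmc_drv) = {
	.name	= "tegra_mmc",
	.id	= UCLASS_MMC,
	.of_match = tegra_mmc_ids,
	.probe	= tegra_mmc_probe,
	.priv_auto_alloc_size = sizeof(struct tegra_mmc_priv),
};

With the non-DM path removed, the driver is instantiated purely from the
device tree nodes matching the compatible list.
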
struct mmc_host is a Tegra-specific structure, but the name implies it's
something defined by core MMC code, which is confusing. Rename it to
struct tegra_mmc_priv to make its purpose more obvious. The new name is
also more appropriate for a DM driver private data structure, which will
be relevant later in this series.
Nothing needs access to this type except the MMC driver itself. Move the
definition into the driver C file.
Make sure all Tegra MMC functions are named tegra_mmc_*. Even though
they're all static, it's useful to have good naming so that symbol tables
are easy to interpret. A few functions aren't renamed by this patch since
they'll be deleted by a subsequent patch in this series.
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
pad_init_mmc() is performing an SoC-specific operation, using registers
within the MMC controller. There's no reason to implement this code
outside the MMC driver, so move it inside the driver.
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Tegra186 supports the new standard clock and reset APIs. Older Tegra SoCs
still use custom APIs. Enhance the Tegra MMC driver so that it can operate
with either set of APIs.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
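
In practice the clock/reset handling is compiled one of two ways,
roughly as sketched below (simplified; error paths, rate programming and
the exact call sites differ in the real driver):

#ifdef CONFIG_TEGRA186
#include <clk.h>
#include <reset.h>

/* Standard uclass APIs: clocks/resets come from device tree via DM. */
static int tegra_mmc_enable_clocks(struct clk *clk, struct reset_ctl *reset)
{
	int ret;

	ret = reset_assert(reset);
	if (ret)
		return ret;

	ret = clk_enable(clk);
	if (ret)
		return ret;

	return reset_deassert(reset);
}
#else
#include <asm/arch/clock.h>

/* Legacy custom Tegra API: enable the peripheral clock at the given rate
 * and take the module out of reset. */
static void tegra_mmc_enable_clocks(int periph_id, unsigned int rate)
{
	clock_start_periph_pll(periph_id, CLOCK_ID_PERIPH, rate);
}
#endif
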
The Tegra BPMP (Boot and Power Management Processor) is a separate
auxiliary CPU embedded into Tegra to perform power management work, and
controls related features such as clocks, resets, power domains, PMIC I2C
bus, etc. This driver provides the core low-level communication path by
which feature-specific drivers (such as clock) can make requests to the
BPMP. This driver is similar to an MFD driver in the Linux kernel. It is
unconditionally selected by CONFIG_TEGRA186 since virtually any Tegra186
build of U-Boot will need the feature.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
IVC (Inter-VM Communication) protocol is a Tegra-specific IPC (Inter
Processor Communication) framework. Within the context of U-Boot, it is
typically used for communication between the main CPU and various
auxiliary processors. In particular, it will be used to communicate with
the BPMP (Boot and Power Management Processor) on Tegra186 in order to
manipulate clocks and reset signals.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Tegra186's MMC controller needs to be explicitly identified. Add another
compatible value for it.
Tegra186 will use an entirely different clock/reset control mechanism to
existing chips, and will use standard clock/reset APIs rather than the
existing Tegra-specific custom APIs. The driver support for that isn't
ready yet, so simply disable all clock/reset usage if compiling for
Tegra186. This must happen at compile time rather than run-time since the
custom APIs won't even be compiled in on Tegra186. In the long term, the
plan would be to convert the existing custom APIs to standard APIs and get
rid of the ifdefs completely.
The system's main eMMC will work without any clock/reset support, since
the firmware will have already initialized the controller in order to
load U-Boot. Hence the driver is useful even in this apparently crippled
state.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There are currently multiple places that define the list of all Tegra
GPIOs: the DT binding header and the custom Tegra-specific header file
gpio.h. Fix the redundancy by replacing everything with the DT binding
header file.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
In current Linux kernel Tegra DT files, 64-bit addresses are represented
in unit addresses as a pair of comma-separated 32-bit values. Apparently
this is no longer the correct representation for simple busses, and the
unit address should be represented as a single 64-bit value. If this is
changed in the DTs, arm/arm/mach-tegra/board2.c:ft_system_setup() will no
longer be able to find and enable the GPU node, since it looks up the node
by name.
Fix that function to enable nodes based on their compatible value rather
than their node name. This will work no matter what the node name is,
i.e. for DTs both before and after any rename operation.
Cc: Thierry Reding <treding@nvidia.com>
Cc: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
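
The change essentially replaces a lookup by node name with a lookup by
compatible, along the lines of this libfdt sketch (the helper name is
made up; "nvidia,gk20a" is the Tegra124 GPU compatible and would differ
per SoC):

#include <libfdt.h>

/* Sketch: enable a node located by compatible rather than by name, e.g.
 * called from ft_system_setup() on the kernel's device tree blob. */
static int enable_node_by_compatible(void *blob, const char *compat)
{
	int offset;

	offset = fdt_node_offset_by_compatible(blob, -1, compat);
	if (offset < 0)
		return offset;

	return fdt_setprop_string(blob, offset, "status", "okay");
}
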
Remove the old PWM code. Remove calls to CONFIG_LCD functions now that we
are using driver model for video.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There isn't a lot of benefit in having two separate files. With driver model
the code needs to be in the same driver, so it's better to have it in the
same file.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This PWM supports four channels. The driver always uses the 32KHz clock,
and adjusts the duty cycle accordingly.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
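
With a fixed 32KHz source, programming a channel mostly reduces to
scaling the requested duty cycle into the width of the duty field; a
sketch follows (the register layout and accessor are assumptions, not
the TRM definition):

#include <stdint.h>

/* Assumed layout: enable bit plus an 8-bit duty field; check the TRM for
 * the real per-channel PWM control register. */
#define PWM_ENABLE	(1U << 31)
#define PWM_DUTY_SHIFT	16
#define PWM_DUTY_MAX	255U

/* Assumed per-channel MMIO accessor. */
extern void pwm_writel(uint32_t val, int channel);

/* duty_ns/period_ns as passed to the standard PWM uclass set_config op. */
static void tegra_pwm_set_config(int channel, unsigned int duty_ns,
				 unsigned int period_ns)
{
	uint32_t duty = (uint32_t)((uint64_t)duty_ns * PWM_DUTY_MAX / period_ns);

	pwm_writel(PWM_ENABLE | (duty << PWM_DUTY_SHIFT), channel);
}
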
In a number of places we had wordings of the GPL (or LGPL in a few
cases) license text that were split in such a way that it wasn't caught
previously. Convert all of these to the correct SPDX-License-Identifier
tag.
Signed-off-by: Tom Rini <trini@konsulko.com>
Rename GPU functions to less generic names to avoid potential name
collisions.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There is no justification for this function, especially in exported
form.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On currently supported SoCs, clk_m always runs at the same frequency as
the oscillator input. However newer SoC generations such as Tegra210 no
longer have that restriction. Prepare for that by separating clk_m from
the oscillator clock and allow SoC code to override the clk_m rate.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This function is deleted by commit 2fccd2d96b
"tegra: Convert tegra GPIO driver to use driver model".
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Introduce a BIT() definition, used in the at91_udc gadget
driver.
Signed-off-by: Heiko Schocher <hs@denx.de>
[remove all other occurrences of BIT(x) definition]
Signed-off-by: Andreas Bießmann <andreas.devel@googlemail.com>
Acked-by: Stefan Roese <sr@denx.de>
Acked-by: Anatolij Gustschin <agust@denx.de>
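
For reference, the definition being centralized is the usual single-bit
mask macro (the exact integer type, 1 versus 1UL, varies between the
copies being removed):

/* As in the Linux kernel: a single-bit mask for bit position nr. */
#define BIT(nr)		(1UL << (nr))
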
This header file uses type definitions (u8, u32) from linux/types.h but
doesn't include it. If includes aren't carefully ordered this can cause
build failures.
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add defines to allow reading recovery mode (RCM) boot type from the boot
information table (BIT) written by the boot ROM (BR) to the IRAM.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
T124/210 requires some specific configuration (VPR setup) to
be performed by the bootloader before the GPU can be used.
For this reason, the GPU node in the device tree is disabled
by default. This patch enables the node if U-Boot has performed
VPR configuration.
Boards enabled by this patch are T124's Jetson TK1 and Venice2
and T210's P2571.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
U-Boot is responsible for enabling the GPU DT node after all necessary
configuration (VPR setup for T124) is performed. In order to be able to
check whether this configuration has been performed right before booting
the kernel, make it happen during board_init().
Also move VPR configuration into the more generic gpu.c file, which will
also host other GPU-related functions, and let boards specify
individually whether they need VPR setup or not.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Added PLL variables (divider mask/shift, lock enable/detect, etc.)
to a new pllinfo struct for each SoC/PLL (PLLA/C/D/E/M/P/U/X).
Used the pllinfo struct in all clock functions; validated on T210.
Should be equivalent to the prior code on T124/114/30/20. Thanks
to Marcel Ziswiler for corrections to the T20/T30 values.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
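
The pllinfo struct is essentially a per-PLL table of field positions and
lock bits; a sketch of what it can contain (member names here are
illustrative, not the merged definition):

#include <stdint.h>

/* Illustrative per-PLL description; one such entry exists for each of
 * PLLA/C/D/E/M/P/U/X on every supported SoC. */
struct pllinfo {
	uint32_t m_shift, m_mask;	/* input divider field */
	uint32_t n_shift, n_mask;	/* feedback divider field */
	uint32_t p_shift, p_mask;	/* post divider field */
	uint32_t lock_ena;		/* bit position enabling lock detect */
	uint32_t lock_det;		/* bit position reporting PLL lock */
};

The clock functions then index into the SoC's pllinfo array instead of
hard-coding shifts and masks per PLL.
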
Added 38.4MHz/48MHz entries to pll_x_table for the CPU PLL. These need
to be measured, but should be close to 700MHz (1.4G/2).
Note that some freqs aren't in the PLLU table in the T210 TRM
(13MHz, 26MHz), so I used the 12MHz table entry for them. They
shouldn't be selected since they're not viable T210 OSC freqs.
Since there are now 2 new OSC defines, all tables (pll_x_table,
PLLU) had to increase by two entries, but since 38.4/48MHz are
not viable osc freqs on T20/30/114, etc., they're just set to 0.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Derived from Tegra124, modified as appropriate during T210
board bringup. Cleaned up debug statements to conserve
string space, too. This also adds misc 64-bit changes
from Thierry Reding/Stephen Warren.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Add a hook to allow boards to add their own init to board_init().
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Will be used for unpowergating CPUs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add full link training as a fallback in case the fast link training
fails.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add functions to provide access to the display clocks on Tegra124 including
setting the clock rate for an EDP display.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Create a function which sets the source clock for a peripheral, given
the number of mux bits to adjust. This can then be used more generally.
For now, don't export it.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
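
The helper boils down to masking and rewriting the mux field of a
peripheral's clock-source register; a simplified sketch, with assumed
accessors and the assumption that the mux sits in the top bits of the
register:

#include <stdint.h>

/* Assumed accessors for the CLK_SOURCE_<periph> register in the CAR. */
extern uint32_t periph_src_readl(int periph_id);
extern void periph_src_writel(uint32_t val, int periph_id);

/*
 * Set the clock-source mux for a peripheral; mux_bits is the width of
 * the source field.
 */
static void set_periph_source(int periph_id, unsigned int source,
			      unsigned int mux_bits)
{
	uint32_t mask = ((1U << mux_bits) - 1) << (32 - mux_bits);
	uint32_t val = periph_src_readl(periph_id);

	val &= ~mask;
	val |= (source << (32 - mux_bits)) & mask;
	periph_src_writel(val, periph_id);
}
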