Tegra186's MMC controller needs to be explicitly identified. Add another
compatible value for it.
Tegra186 will use an entirely different clock/reset control mechanism to
existing chips, and will use standard clock/reset APIs rather than the
existing Tegra-specific custom APIs. The driver support for that isn't
ready yet, so simply disable all clock/reset usage if compiling for
Tegra186. This must happen at compile time rather than run-time since the
custom APIs won't even be compiled in on Tegra186. In the long term, the
plan would be to convert the existing custom APIs to standard APIs and get
rid of the ifdefs completely.
The system's main eMMC will work without any clock/reset support, since
the firmware will have already initialized the controller in order to
load U-Boot. Hence the driver is useful even in this apparently crippled
state.
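As an illustration of the compile-time guard described above (the helper
name and the specific clock rate are assumptions for this sketch;
reset_periph() and clock_start_periph_pll() are the existing
Tegra-specific APIs being guarded):

  #ifdef CONFIG_TEGRA186
  static void mmc_reset_and_clock(enum periph_id periph_id)
  {
          /* Nothing to do: firmware has already set up the controller */
  }
  #else
  static void mmc_reset_and_clock(enum periph_id periph_id)
  {
          /* Legacy Tegra-specific APIs, not compiled in on Tegra186 */
          reset_periph(periph_id, 2);
          clock_start_periph_pll(periph_id, CLOCK_ID_PERIPH, 20000000);
  }
  #endif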
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There are currently two places that define the list of all Tegra GPIOs:
the DT binding header and the custom Tegra-specific header file gpio.h.
Remove the redundancy by using the DT binding header file everywhere.
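For illustration, board code then refers to GPIOs solely via the binding
header's TEGRA_GPIO() macro (the pin picked here is arbitrary):

  #include <dt-bindings/gpio/tegra-gpio.h>

  /* Bank A, offset 2 - an arbitrary example pin */
  #define EXAMPLE_GPIO    TEGRA_GPIO(A, 2)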
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
In current Linux kernel Tegra DT files, 64-bit addresses are represented
in unit addresses as a pair of comma-separated 32-bit values. Apparently
this is no longer the correct representation for simple busses, and the
unit address should be represented as a single 64-bit value. If this is
changed in the DTs, arch/arm/mach-tegra/board2.c:ft_system_setup() will no
longer be able to find and enable the GPU node, since it looks up the node
by name.
Fix that function to enable nodes based on their compatible value rather
than their node name. This will work no matter what the node name is, i.e.
for DTs both before and after any rename operation.
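A hedged sketch of the lookup-by-compatible approach (the compatible
string and the helper name are illustrative; the real ft_system_setup()
differs in detail):

  #include <libfdt.h>

  static void enable_node_by_compat(void *blob, const char *compat)
  {
          int offset;

          /* Find the node by compatible value, whatever its name is */
          offset = fdt_node_offset_by_compatible(blob, -1, compat);
          if (offset < 0)
                  return;

          fdt_setprop_string(blob, offset, "status", "okay");
  }

  /* e.g. enable_node_by_compat(blob, "nvidia,gk20a"); */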
Cc: Thierry Reding <treding@nvidia.com>
Cc: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Remove the old PWM code. Remove calls to CONFIG_LCD functions now that we
are using driver model for video.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There isn't a lot of benefit in having two separate files. With driver model
the code needs to be in the same driver, so it's better to have it in the
same file.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This PWM supports four channels. The driver always uses the 32 kHz clock
and adjusts the duty cycle accordingly.
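Roughly, the duty handling amounts to the following (the 256-step width
value is an assumption for this sketch, not the documented register
layout):

  #include <linux/types.h>

  /* Convert a requested duty/period pair into a 0..256 width value */
  static u32 pwm_duty_steps(uint duty_ns, uint period_ns)
  {
          return (u32)(((u64)duty_ns * 256) / period_ns);
  }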
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
In a number of places the GPL (or, in a few cases, LGPL) license text was
worded or split in such a way that it wasn't caught previously. Convert
all of these to the correct SPDX-License-Identifier tag.
Signed-off-by: Tom Rini <trini@konsulko.com>
Rename GPU functions to less generic names to avoid potential name
collisions.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
There is no justification for this function, especially in exported
form.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On currently supported SoCs, clk_m always runs at the same frequency as
the oscillator input. However, newer SoC generations such as Tegra210 no
longer have that restriction. Prepare for that by separating clk_m from
the oscillator clock and allow SoC code to override the clk_m rate.
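The override point can be sketched as a weak default that SoC-specific
code may replace (the exact function name may differ):

  /* Default: clk_m runs at the oscillator (parent) rate */
  unsigned int __weak clk_m_get_rate(unsigned int parent_rate)
  {
          return parent_rate;
  }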
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This function was deleted by commit 2fccd2d96b
"tegra: Convert tegra GPIO driver to use driver model".
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Introduce a common BIT() definition, which is used in the at91_udc gadget
driver.
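For reference, the definition in question is the usual single-bit mask
helper (whether the constant is int or unsigned long is a detail of the
final header):

  #define BIT(nr)    (1UL << (nr))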
Signed-off-by: Heiko Schocher <hs@denx.de>
[remove all other occurrences of BIT(x) definition]
Signed-off-by: Andreas Bießmann <andreas.devel@googlemail.com>
Acked-by: Stefan Roese <sr@denx.de>
Acked-by: Anatolij Gustschin <agust@denx.de>
This header file uses type definitions (u8, u32) from linux/types.h but
doesn't include it. If includes aren't carefully ordered this can cause
build failures.
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add defines to allow reading recovery mode (RCM) boot type from the boot
information table (BIT) written by the boot ROM (BR) to the IRAM.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
T124/210 requires some specific configuration (VPR setup) to
be performed by the bootloader before the GPU can be used.
For this reason, the GPU node in the device tree is disabled
by default. This patch enables the node if U-Boot has performed
VPR configuration.
Boards enabled by this patch are T124's Jetson TK1 and Venice2
and T210's P2571.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
U-Boot is responsible for enabling the GPU DT node after all necessary
configuration (VPR setup for T124) is performed. In order to be able to
check whether this configuration has been performed right before booting
the kernel, make it happen during board_init().
Also move VPR configuration into the more generic gpu.c file, which will
also host other GPU-related functions, and let boards specify
individually whether they need VPR setup or not.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Added PLL variables (divider masks/shifts, lock enable/detect bits, etc.)
to a new pllinfo struct for each SoC PLL (PLLA/C/D/E/M/P/U/X).
Used the pllinfo struct in all clock functions; validated on T210.
Should be equivalent to the prior code on T124/114/30/20. Thanks
to Marcel Ziswiler for corrections to the T20/T30 values.
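A hedged sketch of the per-PLL description (field names are illustrative
and may not match the final struct exactly):

  struct clk_pll_info {
          u32 m_shift;    /* DIVM shift */
          u32 n_shift;    /* DIVN shift */
          u32 p_shift;    /* DIVP shift */
          u32 m_mask;     /* DIVM mask */
          u32 n_mask;     /* DIVN mask */
          u32 p_mask;     /* DIVP mask */
          u32 lock_ena;   /* lock-enable bit position */
          u32 lock_det;   /* lock-detect bit position */
          u32 kcp_shift;  /* charge pump / CPCON shift */
          u32 kvco_shift; /* VCO gain / LFCON shift */
  };

  /* One instance per PLL (PLLA/C/D/E/M/P/U/X) per SoC generation */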
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Added 38.4MHz/48MHz entries to pll_x_table for the CPU PLL. These still
need to be measured, but should be close to 700MHz (1.4G/2).
Note that some frequencies (13, 26MHz) aren't in the PLLU table in the
T210 TRM, so the 12MHz table entry is used for them. They shouldn't be
selected anyway, since they're not viable T210 OSC frequencies.
Since there are now two new OSC defines, all tables (pll_x_table, PLLU)
had to grow by two entries; since 38.4/48MHz are not viable OSC
frequencies on T20/30/114, etc., those entries are simply set to 0.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Derived from Tegra124, modified as appropriate during T210
board bringup. Cleaned up debug statements to conserve
string space, too. This also adds misc 64-bit changes
from Thierry Reding/Stephen Warren.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Add a hook that allows boards to add their own init to board_init().
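A sketch of the hook pattern (the hook name here is a placeholder; the
actual function added may be named differently):

  /* Weak default: boards needing extra init override this */
  void __weak board_extra_init(void)
  {
  }

  int board_init(void)
  {
          /* existing common Tegra init runs here */
          board_extra_init();

          return 0;
  }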
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Will be used for unpowergating CPUs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add full link training as a fallback in case the fast link training
fails.
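The fallback flow amounts to something like this (the struct and function
names are placeholders, not the driver's real internal API):

  static int train_link(struct tegra_dp_priv *priv)
  {
          int ret;

          ret = fast_link_training(priv);         /* placeholder */
          if (ret) {
                  debug("eDP: fast link training failed, trying full training\n");
                  ret = full_link_training(priv); /* placeholder */
          }

          return ret;
  }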
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Add functions to provide access to the display clocks on Tegra124,
including setting the clock rate for an eDP display.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Create a function which sets the source clock for a peripheral, given
the number of mux bits to adjust. This can then be used more generally.
For now, don't export it.
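A hedged sketch of such a helper (the name, and the assumption that the
source-select field occupies the top mux_bits bits of the register, are
illustrative):

  #include <asm/io.h>

  /* mux_bits is the width of the source-select field (e.g. 2 or 4) */
  static void set_periph_source(u32 *reg, unsigned source, int mux_bits)
  {
          int shift = 32 - mux_bits;
          u32 mask = ((1U << mux_bits) - 1) << shift;

          clrsetbits_le32(reg, mask, source << shift);
  }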
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
The get_pll() function can do the wrong thing if passed values that are
out of range. Add checks for this and add a function which can return
a 'simple' PLL. This can be defined by SoCs with their own clocks.
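A sketch of the added range check (simplified; the real function may
differ slightly):

  struct clk_pll *get_pll(enum clock_id clkid)
  {
          struct clk_rst_ctlr *clkrst =
                  (struct clk_rst_ctlr *)NV_PA_CLK_RST_BASE;

          assert(clock_id_is_pll(clkid));
          if (clkid >= (enum clock_id)TEGRA_CLK_PLLS) {
                  debug("%s: Invalid PLL %d\n", __func__, clkid);
                  return NULL;
          }

          return &clkrst->crc_pll[clkid];
  }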
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Some LCDs require a PMIC to be set up - add a function for this.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
When enabling a PWM, allow the existing clock rate and source to stand
unchanged.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is needed for tegra124 also, so make it common and add a header file
for tegra124.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Some pinmux controls are in a different register set. Add support for
manipulating those in a similar way to existing pins/groups.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Move struct pmux_pingrp_desc type and tegra_soc_pingroups variable
declaration together with other pin/mux level definitions. Now the whole
file is grouped/ordered with pin/mux-related definitions first, then
drvgrp-related definitions.
Fix typo in ifdef comment.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Tegra210 has a per-pin option named e_io_hv, which indicates that the
pin's input path should be configured to be 3.3v-tolerant. Add support
for this.
Note that this is very similar to previous chips' rcv_sel option.
However, since the Tegra TRM names this option differently for the
different chips, we support the new name so that the code exactly matches
the naming in the TRM, to avoid confusion.
This patch incorporates a few fixes from Tom Warren <twarren@nvidia.com>.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
T210 supports HSM and Schmitt options in the pinmux register (previous
chips placed these options in the drive group register). Update the
code to handle this.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On some future SoCs, some per-drive-group features become per-pin
features. Move all type definitions early in the header so they are
available irrespective of the setting of TEGRA_PMX_SOC_HAS_DRVGRPS.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
On some future SoCs, some of the per-drive-group features no longer
exist. Add some ifdefs to support this.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Future SoCs have a slightly different combination of pinmux options per
pin. This will be simpler to handle if we have one define per option,
rather than grouping various options together in combinations that don't
align with future chips.
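For illustration, the per-option scheme ends up looking roughly like this
in each SoC's pinmux header (macro names show the pattern and may not
match the header exactly):

  /* Tegra124 example: enable exactly the options its pins have */
  #define TEGRA_PMX_PINS_HAVE_E_INPUT
  #define TEGRA_PMX_PINS_HAVE_LOCK
  #define TEGRA_PMX_PINS_HAVE_OD
  #define TEGRA_PMX_PINS_HAVE_IO_RESET
  #define TEGRA_PMX_PINS_HAVE_RCV_SEL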
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This is needed to correctly apply the new Jetson TK1 pinmux config.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
When the CPU is in non-secure (NS) mode (when running U-Boot under a
secure monitor), certain actions cannot be taken, since they would need
to write to secure-only registers. One example is configuring the ARM
architectural timer's CNTFRQ register.
We could support this in one of two ways:
1) Compile twice, once for secure mode (in which case anything goes) and
once for non-secure mode (in which case certain actions are disabled).
This complicates things, since everyone needs to keep track of
different U-Boot binaries for different situations.
2) Detect NS mode at run-time, and optionally skip any impossible actions.
This has the advantage of a single U-Boot binary working in all cases.
(2) is not possible on ARM in general, since there's no architectural way
to detect secure-vs-non-secure. However, there is a Tegra-specific way to
detect this.
This patch uses that feature to detect secure vs. NS mode on Tegra, and
uses that to:
* Skip the ARM arch timer initialization.
* Set/clear an environment variable so that boot scripts can take
different action depending on which mode the CPU is in. This might be
something like:
  if CPU is secure:
      load secure monitor code into RAM.
      boot secure monitor.
      secure monitor will restart (a new copy of) U-Boot in NS mode.
  else:
      execute normal boot process
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This controller was introduced on Tegra114 to handle XUSB pads. On
Tegra124 it is also used for PCIe and SATA pin muxing and PHY control.
Only the Tegra124 PCIe and SATA functionality is currently implemented,
with weak symbols on Tegra114.
Tegra20 and Tegra30 also provide weak symbols for these functions so
that drivers can use the same API irrespective of which SoC they're
being built for.
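The weak-symbol arrangement can be sketched as follows (the prototypes
are illustrative; the real API names may differ):

  #include <errno.h>

  /* Default (weak) stubs for SoCs without an XUSB pad controller */
  void __weak tegra_xusb_padctl_init(const void *fdt)
  {
  }

  int __weak tegra_xusb_phy_enable(void *phy)
  {
          return -ENOSYS;  /* no PHY support on this SoC */
  }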
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Implement the powergate API that allows various power partitions to be
powered up and down.
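Sketch of the intended interface (the exact enum contents and prototypes
may not match the header verbatim):

  enum tegra_powergate {
          TEGRA_POWERGATE_CPU,
          TEGRA_POWERGATE_PCIE,
          TEGRA_POWERGATE_SATA,
          /* ... one ID per power partition ... */
  };

  int tegra_powergate_power_on(enum tegra_powergate id);
  int tegra_powergate_power_off(enum tegra_powergate id);
  int tegra_powergate_sequence_power_up(enum tegra_powergate id,
                                        enum periph_id periph);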
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
This converts all Tegra boards over to use driver model for I2C. The driver
is adjusted to use driver model and the following obsolete CONFIGs are
removed:
- CONFIG_SYS_I2C_INIT_BOARD
- CONFIG_I2C_MULTI_BUS
- CONFIG_SYS_MAX_I2C_BUS
- CONFIG_SYS_I2C_SPEED
- CONFIG_SYS_I2C
This has been tested on:
- trimslice (no I2C)
- beaver
- Jetson-TK1
It has not been tested on Tegra114 as I don't have that board.
Acked-by: Heiko Schocher <hs@denx.de>
Signed-off-by: Simon Glass <sjg@chromium.org>
This is an implementation of GPIOs for Tegra that uses driver model. It has
been tested on trimslice and also using the new iotrace feature.
The implementation uses a top-level GPIO device (which has no actual GPIOs).
Under this all the banks are created as separate GPIO devices.
The GPIOs are named as per the Tegra datasheet/header files: A0..A7, B0..B7,
..., Z0..Z7, AA0..AA7, etc.
Since driver model is not yet available before relocation, or in SPL, a
special function is provided for Seaboard's SPL code.
Signed-off-by: Simon Glass <sjg@chromium.org>
On Tegra114 and Tegra124 platforms, certain display-related registers cannot
be accessed unless the VPR registers are programmed. For a bootloader, we
probably don't care about VPR, so we disable it (which counts as programming
it, and allows those display-related registers to be accessed).
This patch is based on the commit 5f499646c83ba08079f3fdff6591f638a0ce4c0c
in Chromium OS U-Boot project.
Signed-off-by: Andrew Chew <achew@nvidia.com>
Signed-off-by: Jimmy Zhang <jimmzhang@nvidia.com>
Signed-off-by: Bryan Wu <pengw@nvidia.com>
[acourbot: ensure write went through, vpr.c style changes]
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <TWarren@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
I2C read transactions are typically implemented as follows:
START(write) address REPEATED_START(read) data... STOP
However, Tegra's I2C driver currently implements reads as follows:
START(write) address STOP START(read) data... STOP
This sequence confuses at least the AS3722 PMIC on the Jetson TK1 board,
leading to corrupted read data in some cases. Fix the driver to chain
the transactions together using repeated starts to solve this.
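Conceptually (generic driver-model I2C usage shown for illustration, not
the Tegra driver's internals), issuing both halves as one combined
transfer is what yields the repeated start instead of a stop/start pair:

  #include <i2c.h>

  static int read_reg(struct udevice *dev, uint chip, u8 reg,
                      u8 *data, int len)
  {
          struct i2c_msg msgs[] = {
                  /* address write, no stop afterwards */
                  { .addr = chip, .flags = 0, .len = 1, .buf = &reg },
                  /* repeated start, then the data read */
                  { .addr = chip, .flags = I2C_M_RD, .len = len, .buf = data },
          };

          return dm_i2c_xfer(dev, msgs, 2);
  }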
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Yen Lin <yelin@nvidia.com>
A few changes are made to the Tegra EHCI driver so that it can set
everything up for device-mode operation on the first USB controller.
This can be used in conjunction with ci_udc.c to operate as a USB
device.
Detailed changes are:
* Rename set_host_mode() to set_up_vbus() since that's really what it
does.
* Modify set_up_vbus() to know whether it's initializing in host or
device mode, and:
- Skip the external VBUS check in device mode, since external VBUS is
expected in this case.
- Disable VBUS output in device mode.
* Modify init_phy_mux() to know whether it's initializing in host or
device mode, and hence skip setting USBMODE_CM_HC (which enables host
mode) in device mode. See the comments in that function for why this
is safe w.r.t. the ordering requirements of PHY selection.
* Modify init_utmi_usb_controller() to force "b session valid" in device
mode, since the HW requires this. This is done in UTMI-specific code,
since we only support device mode on the first USB controller, and that
controller can only talk to a UTMI PHY.
* Enhance ehci_hcd_init() to error-check the requested host-/device-mode
vs. the dr_mode (dual-role mode) value present in the device tree, and the
HW configurations which support device mode.
* Enhance ehci_hcd_init() not to skip HW initialization when switching
between host and device mode on a controller. This requires remembering
which mode the last initialization used.
Cc: Jim Lin <jilin@nvidia.com>
Cc: Stefan Agner <stefan@agner.ch>
Signed-off-by: Stephen Warren <swarren@nvidia.com>