Based on the Tegra TRM, the system clock (which is the AVP clock) can
run up to 275MHz. At power-on, the default system clock source is
PLLP_OUT0. In clock_early_init(), PLLP_OUT0 will be set to 408MHz,
which is beyond the system clock's upper limit.
The fix is to switch the system clock to CLK_M before initializing
PLLP, and then, once PLLP has been configured, switch back to
PLLP_OUT4, which has an appropriate divider set up.
Implement this logic in a new function, tegra30_set_up_pllp(), which
sets up PLLP and all of the PLLP_OUT* dividers, and handles the AVP
clock switching. Remove the now-duplicated PLLP setup from
pllx_set_rate() and adjust_pllp_out_freqs().
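As a rough sketch of the sequence (the helper and enum names below
are illustrative stand-ins, not the actual U-Boot symbols):

    /* Illustrative prototypes; the real U-Boot helpers differ */
    enum sclk_source { SCLK_SOURCE_CLKM, SCLK_SOURCE_PLLP_OUT4 };
    void select_avp_clock(enum sclk_source src);
    void init_pllp(unsigned long rate_hz);
    void setup_pllp_out_dividers(void);

    void tegra30_set_up_pllp(void)
    {
            /* Run the AVP from CLK_M while PLLP is reprogrammed */
            select_avp_clock(SCLK_SOURCE_CLKM);

            /* Program PLLP to 408MHz, then all PLLP_OUT* dividers */
            init_pllp(408 * 1000000);
            setup_pllp_out_dividers();

            /* PLLP_OUT4's divider keeps the AVP within 275MHz */
            select_avp_clock(SCLK_SOURCE_PLLP_OUT4);
    }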
Signed-off-by: Jimmy Zhang <jimmzhang@nvidia.com>
[swarren, significantly refactored the change]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Some clock sources have 3-bit muxes in bits 31:29. Implement core
support for this mux field.
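For illustration, handling a 3-bit mux field in bits 31:29 would look
along these lines (the macro and helper names follow the existing
naming pattern but are assumptions, not the actual definitions):

    #include <common.h>     /* u32 */
    #include <asm/io.h>     /* readl(), writel() */

    /* Names follow the OUT_CLK_SOURCE_<hi>_<lo>_ pattern; illustrative */
    #define OUT_CLK_SOURCE_31_29_SHIFT      29
    #define OUT_CLK_SOURCE_31_29_MASK       (7U << OUT_CLK_SOURCE_31_29_SHIFT)

    static void set_mux_31_29(u32 *reg, unsigned int source)
    {
            u32 value = readl(reg);

            /* Clear the 3-bit field, then install the new source */
            value &= ~OUT_CLK_SOURCE_31_29_MASK;
            value |= source << OUT_CLK_SOURCE_31_29_SHIFT;
            writel(value, reg);
    }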
Signed-off-by: Tom Warren <twarren@nvidia.com>
[swarren, extracted from a larger patch by Tom]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
Not all code that set or interpreted "mux_bits" used the named
macros; some simply used hard-coded integer constants instead. This
makes it hard to determine which pieces of code are affected by
changes to those constants.
Replace the integer constants with the equivalent macro definitions so
that everything is nicely tied together.
Note that I'm not convinced all the code was using the correct integer
constants, and hence I'm not convinced that all the code is now using
the desired macros. However, this change is a purely mechanical
replacement and should have no functional change. Fixing any bugs will
come later, separately.
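The flavor of the replacement, as a sketch (the macro names match the
OUT_CLK_SOURCE_* pattern used elsewhere but are illustrative here):

    #include <common.h>     /* u32 */

    #define OUT_CLK_SOURCE_SHIFT    30
    #define OUT_CLK_SOURCE_MASK     (3U << OUT_CLK_SOURCE_SHIFT)

    /* Before: magic numbers; unclear which mux field is meant */
    static u32 set_source_old(u32 value, unsigned int source)
    {
            return (value & ~(3U << 30)) | (source << 30);
    }

    /* After: tied to the shared, searchable definitions */
    static u32 set_source_new(u32 value, unsigned int source)
    {
            return (value & ~OUT_CLK_SOURCE_MASK) |
                   (source << OUT_CLK_SOURCE_SHIFT);
    }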
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
The OUT_CLK_SOURCE_ macros are currently named after the number of
bits in the mask they represent. However, bit count is not the only
possible variable; bit position may also vary. Rename OUT_CLK_SOURCE_
to OUT_CLK_SOURCE_31_30_ and OUT_CLK_SOURCE4_ to OUT_CLK_SOURCE_31_28_
so the names completely describe exactly what they represent, without
having to go look up the definitions.
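Concretely, the rename looks like this (the shift/mask values shown
are the plausible ones for a 31:30 and a 31:28 field, reconstructed
here rather than copied from the header):

    /* Before: the name only hints at the field width */
    #define OUT_CLK_SOURCE_SHIFT            30
    #define OUT_CLK_SOURCE_MASK             (3U << OUT_CLK_SOURCE_SHIFT)
    #define OUT_CLK_SOURCE4_SHIFT           28
    #define OUT_CLK_SOURCE4_MASK            (15U << OUT_CLK_SOURCE4_SHIFT)

    /* After: the name states the exact bit range */
    #define OUT_CLK_SOURCE_31_30_SHIFT      30
    #define OUT_CLK_SOURCE_31_30_MASK       (3U << OUT_CLK_SOURCE_31_30_SHIFT)
    #define OUT_CLK_SOURCE_31_28_SHIFT      28
    #define OUT_CLK_SOURCE_31_28_MASK       (15U << OUT_CLK_SOURCE_31_28_SHIFT)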
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
When adjusting peripheral clocks, always use find_best_divider()
instead of clk_get_divider(), even when a secondary divider is not
available. In the case where the requested clock is too slow to be
derived from the parent clock, this allows a best effort to get close
to the requested rate.
This comes up for commands like "sf", where the user can pass a clock
speed on the command line, or "sspi", where the clock is hardcoded to
1MHz, but the Tegra114 SPI controller can't go that low.
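The best-effort idea, as a self-contained sketch (this is not
U-Boot's actual find_best_divider(), whose signature and handling of
the secondary divider differ):

    /*
     * Conceptual sketch only: pick the divider closest to the
     * requested rate, clamping at the hardware maximum instead of
     * failing when the target is unreachable. Assumes target != 0.
     */
    static int best_effort_divider(unsigned long parent,
                                   unsigned long target,
                                   int divider_bits)
    {
            unsigned long max_div = 1UL << divider_bits;
            unsigned long div = (parent + target - 1) / target;

            if (div < 1)
                    div = 1;
            if (div > max_div)
                    div = max_div;  /* closest reachable rate */

            return (int)div;
    }

For "sspi" on Tegra114, a 24MHz parent and a 1MHz target would yield
a divider of 24 rather than an outright failure.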
Signed-off-by: Allen Martin <amartin@nvidia.com>
Acked-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
T114 needs the SYSCTR0 counter initialized so the TSC can be
read by the kernel. Do it in the bootloader since it's a write-once
deal (secure/non-secure mode dependent).
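A minimal sketch of such an init, assuming the register layout of the
ARM generic-timer system counter for SYSCTR0 (the base address and
bit names here are assumptions, not verified against the TRM):

    #include <common.h>     /* u32 */
    #include <asm/io.h>     /* writel() */

    /* Offsets per the ARM system counter spec; base is assumed */
    #define SYSCTR0_BASE    0x700f0000UL
    #define SYSCTR0_CNTCR   (SYSCTR0_BASE + 0x00)  /* counter control */
    #define SYSCTR0_CNTFID0 (SYSCTR0_BASE + 0x20)  /* base frequency */
    #define CNTCR_EN        (1U << 0)              /* enable counter */
    #define CNTCR_HDBG      (1U << 1)              /* halt on debug */

    static void sysctr0_init(u32 osc_freq_hz)
    {
            /* Publish the tick rate so the kernel can read the TSC */
            writel(osc_freq_hz, SYSCTR0_CNTFID0);

            /* Enable; effectively write-once depending on secure mode */
            writel(CNTCR_EN | CNTCR_HDBG, SYSCTR0_CNTCR);
    }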
Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
This 'commonizes' much of the clock/PLL code. SoC-dependent code and
tables are left in arch/cpu/tegraXXX-common/clock.c.
Some T30 tables needed whitespace fixes due to checkpatch complaints.
Signed-off-by: Tom Warren <twarren@nvidia.com>