Turns out we had a bunch of silly dependencies on libc headers that are
not included with freestanding compilers. Fix all this and change the
CFLAGS to exclude libc headers and only include the compiler's built-in
include path.
Add our own versions of assert.h, errno.h, limits.h, and move malloc.h
and string.h together into a new path used as -isystem, so these headers
can be included using #include <>.
Remove a bunch of other dependencies in third-party code.
Add a strnlen function.
Disable building the libfdt overlay code for now, as it needs a strtoul
implementation. We can throw that in if/when we decide to use overlays.
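The new strnlen only needs the prototype from our own string.h; a minimal
freestanding sketch (the exact placement in the tree is an assumption):

    /* Freestanding strnlen(): length of s, capped at maxlen. */
    #include <string.h>

    size_t strnlen(const char *s, size_t maxlen)
    {
        size_t len = 0;

        while (len < maxlen && s[len])
            len++;

        return len;
    }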
Signed-off-by: Hector Martin <marcan@marcan.st>
This works by clearing HCR_EL2.TGE, and then doing essentially the same
thunk/return dance as for EL0 calls. However, since most EL1 exceptions
are not routed to EL2, we install hypercall vectors in EL1 to forward
them to EL2, and then short-circuit the exception return to whatever
triggered the original exception.
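A rough sketch of the TGE dance, assuming mrs/msr-style sysreg helpers and
the tree's u64 typedef; el1_call_thunk is a hypothetical name for the
assembly thunk that erets into EL1 and comes back via the hypercall
vectors:

    #define HCR_TGE (1UL << 27)

    u64 call_el1(u64 func, u64 a, u64 b, u64 c, u64 d)
    {
        u64 hcr, ret;

        /* Clear TGE so that EL1 can actually run code. */
        hcr = mrs(HCR_EL2);
        msr(HCR_EL2, hcr & ~HCR_TGE);

        /* eret into EL1, return via the installed hypercall vectors. */
        ret = el1_call_thunk(func, a, b, c, d);

        /* Restore TGE for normal EL2 operation. */
        msr(HCR_EL2, hcr);
        return ret;
    }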
Signed-off-by: Hector Martin <marcan@marcan.st>
Since we're in VHE mode, we can pretend to be in EL1, but this will
allow us to really run in EL1 if we want to in the future.
Signed-off-by: Hector Martin <marcan@marcan.st>
call allows the caller to override the function used to perform the
call, e.g. to use EL0.
Use silent=True for find_all_regs.py
Signed-off-by: Hector Martin <marcan@marcan.st>
This lets us test register access and other features from EL0.
No serious attempt at security is made, but at least EL0 runs off of a
separate stack and can return to EL2 at any time with `brk`; we can
easily implement a guard mode to break straight to EL2 on exception
later if needed.
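The escape hatch from EL0 is just a BRK instruction, which (with TGE set)
traps straight into the EL2 vectors; a minimal sketch of the EL0-side
helper, with an illustrative name:

    /* Runs at EL0: BRK raises a synchronous exception that is taken to
     * EL2, which unwinds the EL0 call. */
    static inline void el0_return_to_el2(void)
    {
        __asm__ volatile("brk #0" ::: "memory");
    }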
Signed-off-by: Hector Martin <marcan@marcan.st>
Enable EL0 access to MMIO/etc, but not main RAM, because AArch64
architecturally enforces EL0w ^ EL2x: a page that is writable from EL0
can never be executable at EL2.
Instead, create an alias of main RAM to grant EL0 full permissions,
at 0x80_0000_0000.
Grant EL0 full access to MMIO stuff, since EL2 will never execute
from there.
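A sketch of what the alias amounts to, with hypothetical names: the EL0
view of RAM is the physical range re-mapped at a fixed offset, here
assumed to be 0x80_0000_0000:

    #define RAM_EL0_ALIAS 0x8000000000UL

    static void map_ram_for_el0(u64 ram_base, u64 ram_size)
    {
        /* Hypothetical helper: (virt, phys, size, attributes).
         * The alias carries full EL0 permissions; the primary EL2
         * mapping of RAM stays inaccessible to EL0. */
        map_range(ram_base + RAM_EL0_ALIAS, ram_base, ram_size,
                  MAP_NORMAL_EL0_RWX);
    }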
Signed-off-by: Hector Martin <marcan@marcan.st>
Previously, all MMIO was mapped twice with different attributes, which
may or may not lead to strange behaviour when the same physical range
is accessed from both mappings.
We now have a better idea which ranges require nGnRE and nGnRnE
and can just do it correctly instead.
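For reference, the two device types differ only in the
Early-write-acknowledgement part of their MAIR encoding; the attribute
index assignment below is just an example layout, the encodings
themselves are architectural:

    #define MAIR_ATTR_DEVICE_nGnRnE 0x00UL /* no gathering/reordering/early ack */
    #define MAIR_ATTR_DEVICE_nGnRE  0x04UL /* same, but early write ack allowed */

    #define MAIR_IDX_DEVICE_nGnRnE  0
    #define MAIR_IDX_DEVICE_nGnRE   1

    #define MAIR_VALUE \
        ((MAIR_ATTR_DEVICE_nGnRnE << (8 * MAIR_IDX_DEVICE_nGnRnE)) | \
         (MAIR_ATTR_DEVICE_nGnRE  << (8 * MAIR_IDX_DEVICE_nGnRE)))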
Signed-off-by: Sven Peter <sven@svenpeter.dev>
These functions all perform a store directly followed by a load.
This is useful, e.g., to find busy bits which might already be cleared
a few cycles after a write.
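A sketch of what such a helper looks like, assuming the existing
read32/write32 MMIO accessors and u32/u64 typedefs; the name here is
illustrative:

    /* Write a register, then immediately read it back, e.g. to catch
     * busy bits that clear within a few cycles of the write. */
    static inline u32 write32_read(u64 addr, u32 val)
    {
        write32(addr, val);
        return read32(addr);
    }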
Signed-off-by: Sven Peter <sven@svenpeter.dev>
I can't remember why I used vmalls12e1is but this leads to
the following bug:
1. Load m1n1 with normal MMU setup
2. Disable all mappings, recompile and chainload to that m1n1
3. Everything will work fine for a while even though it should explode
when enabling the MMU.
This happens because there are still stale TLB entries in some cache.
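vmalls12e1is only invalidates stage 1/2 entries for the EL1&0 regime, so
EL2 entries can stay cached. A sketch of an invalidation scoped to EL2,
assuming that is the regime whose mappings changed (whether this matches
the actual fix is an assumption):

    static inline void tlb_flush_el2(void)
    {
        __asm__ volatile("dsb ishst\n\t"     /* make table updates visible */
                         "tlbi alle2is\n\t"  /* drop all EL2 TLB entries */
                         "dsb ish\n\t"
                         "isb" ::: "memory");
    }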
Signed-off-by: Sven Peter <sven@svenpeter.dev>
This can be used when the input file size is unknown: the decompression
functions will keep track of it and return it to the caller instead.
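A sketch of the kind of interface this implies, with hypothetical names:
the decompressor is handed the number of input bytes available and
reports how many it actually consumed:

    #include <stddef.h>

    /* Decompress up to in_avail bytes from in into out. Returns the
     * decompressed length (negative on error) and stores the number of
     * compressed bytes actually consumed in *in_used, so callers that
     * do not know the input size up front can recover it. */
    int decompress(void *out, size_t out_size,
                   const void *in, size_t in_avail, size_t *in_used);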
Signed-off-by: Hector Martin <marcan@marcan.st>