Output audio format is still unknown; not sure if it's garbage (see lpai
commit) or some weird packed float encoding I haven't figured out yet.
Signed-off-by: Eileen Yoon <eyn@gmx.com>
Not much to see here; most of the juice is over at:
https://github.com/eiln/avd.git
The kernel driver (m1n1.fw.avd) only really pipes the instruction stream
into the respective hardware FIFOs and then hushes the interrupt lines.
Most of the work (bitstream syntax parsing and instruction generation)
is done in the avid repo above.
I'm hoping to keep this userland-kernel separation in the very imminent
actual driver.
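As an illustration of that split, here's a rough sketch (all names below
are made up and match neither the real m1n1.fw.avd nor avid code):

def build_inst_stream(bitstream: bytes) -> list[int]:
    # Userspace side (the avid repo): parse the bitstream syntax and
    # generate the hardware instruction stream. Stubbed out here.
    return []

class AVDKernelSide:
    # Kernel side: no codec knowledge, just FIFO plumbing and IRQ handling.
    def run(self, inst_stream: list[int]) -> None:
        for word in inst_stream:
            self.push_fifo(word)  # pipe the instruction stream into the FIFO
        self.ack_irq()            # hush the interrupt line once decode ends

    def push_fifo(self, word: int) -> None: ...
    def ack_irq(self) -> None: ...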
experiments/avd.py: Decode on the command line. Read file for usage.
experiments/avd_e.py: Decode via emulated instruction stream.
experiments/avd_f.py: Decode via Cortex-M3 firmware (for debugging).
hv/trace_avd.py: Tracer. Read file for usage.
m1n1/fw/avd/__init__.py: Driver base class (power, tunables, etc.).
m1n1/fw/avd/decoder.py: Codec-specific decode logic + mini media player.
Signed-off-by: Eileen Yoon <eyn@gmx.com>
Extra pstates only for J416c, drop the second set of pstate fields (still
no idea what it does), fix a mask.
Signed-off-by: Hector Martin <marcan@marcan.st>
Initializes the display if not already done by iboot/m1n1. Not expected
to change anything for disp0 but might be helpful in bringing up DP alt
mode for dispext[0-7].
Signed-off-by: Janne Grunau <j@jannau.net>
The AOP uses an 'EPIC' protocol similar to, but not the exact same
version as, the one used by other coprocessor firmware. Add code for
tracing the AOP calls and extend the aop.py experiment with the client
side of the protocol. Include a description of the audio calls and of
some other calls related to sensor discovery.
Furthermore, in experiments/aop.py, do some AOP audio setup. Once that
is done, we can start streaming samples from the internal microphones by
making what the AOP considers power state adjustment calls. That is, we
adjust the power state of a 'hpai' device, first to a 'pw1 ' stage,
then to a 'pwrd' stage.
So, to see microphone samples, enter the AOP experiment shell first:
$ M1N1DEVICE=/dev/ttyACM0 experiments/aop.py
Within the shell, adjust the power state of 'hpai':
>>> aop_set_audio_pstate('hpai', 'pw1 ')
At that point /arm-io/admac-aop-audio powers up. In parallel to the AOP
shell, we can start tools/admac_stream.py on the just powered-up ADMAC
instance:
$ M1N1HEAP=0x10010000000 M1N1DEVICE=/dev/ttyACM1 tools/admac_stream.py \
--node admac-aop-audio --channel 1 -v | xxd -g 4 -c 12 -e
Returning to the AOP shell, we can then set 'hpai' to the 'pwrd' state
to kick off the streaming:
>>> aop_set_audio_pstate('hpai', 'pwrd')
At that point, we should see samples coming out on the ADMAC end. The
samples are 32-bit floats packed in groups of three per frame, e.g.
00000000: ba7ac6a7 ba32d3c3 baa17ae2 ..z...2..z..
0000000c: 38ccea5f b99c1a37 ba0c4bb1 _..87....K..
00000018: 39d2354f 3964b5ff 39b209fb O5.9..d9...9
00000024: b96a1d1f 39c8503f 3958fc4f ..j.?P.9O.X9
00000030: b6b1f5ff 39c72b8f 39bbe017 .....+.9...9
0000003c: 3a912de5 36dd4f7f 37f1147f .-.:.O.6...7
This has been tested on, and will to some degree be specific to, the 2021
MacBook Pro (t6000). Differences on other models are TBD (at the very
least the number of microphones can be presumed to differ).
Signed-off-by: Martin Povišer <povik@protonmail.com>
We need to allocate a buffer for the AOP on the OSLog endpoint for it to
fully boot. Copy in a modified version of the general OSLog endpoint
driver to do that.
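Roughly the shape of it (illustrative only; the class and method names
below are made up and don't match the actual driver code):

class AOPOSLogEndpoint:
    # At boot the AOP firmware requests a log buffer on the OSLog endpoint
    # and won't come up fully until it gets one.
    def __init__(self, dma_alloc):
        self.dma_alloc = dma_alloc  # whatever hands out device-visible memory
        self.buf_iova = None

    def handle_buffer_request(self, size: int) -> int:
        self.buf_iova = self.dma_alloc(size)
        return self.buf_iova  # handed back to the firmware in the reply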
Signed-off-by: Martin Povišer <povik@protonmail.com>
In the t6000 ADT, the AOP SRAM base is specified *including* the bus
offset in a place where the bus offset isn't expected, so our decoding of
it ends up adding the bus offset twice. Patch it.
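In effect the fix boils down to arithmetic like this (a sketch, not the
actual patch; the names are made up):

def fixup_aop_sram_base(adt_sram_base: int, bus_offset: int) -> int:
    # The ADT value already bakes in the bus offset, but the generic
    # decoding adds the offset again, so strip one copy before use.
    return adt_sram_base - bus_offset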
Signed-off-by: Martin Povišer <povik@protonmail.com>