It is now enough to change the variable dcp_name to one of dcpext[0-7].
'dcpext' on t8103/t8112 is handled by "/aliases" in the ADT.
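As a sketch of the lookup this relies on (the helper name and the
/arm-io fallback are assumptions for illustration, using m1n1's
proxyclient ADT accessors):

def resolve_dcp_path(u, dcp_name):
    # Sketch only: 'dcpext' names resolve through /aliases on
    # t8103/t8112; the direct /arm-io fallback is an assumption.
    aliases = u.adt["/aliases"]
    if hasattr(aliases, dcp_name):
        return getattr(aliases, dcp_name)  # e.g. "/arm-io/dcpext0"
    return f"/arm-io/{dcp_name}"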
Signed-off-by: Janne Grunau <j@jannau.net>
Allows m1n1 experiments and tracers to ignore the DART variant.
Required to trace and experiment with DCP sanely on M1 and M2.
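A loose sketch of the variant dispatch this makes possible; the class
names and the compatible-string matching are assumptions, not the
actual helper:

def dart_from_adt(u, path):
    # Sketch: pick the register layout from the node's compatible
    # string so callers never name the DART variant themselves.
    node = u.adt[path]
    if "t8110" in node.compatible[0]:
        return DART(u, DART8110Regs, node)  # placeholder class names
    return DART(u, DART8020Regs, node)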
Signed-off-by: Janne Grunau <j@jannau.net>
The AOP uses an 'EPIC' protocol similar to the one used by other
coprocessor firmware, but not in exactly the same version. Add code for
tracing the AOP calls and extend the aop.py experiment with the client
side of the protocol. Include descriptions of the audio calls and of
some other calls related to sensor discovery.
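For orientation, a rough construct-style sketch of an EPIC framing; the
field names and widths below are illustrative assumptions (pinning down
the AOP's actual variant is exactly what the tracer is for):

from construct import Struct, Int8ul, Int16ul, Int32ul, Int64ul, Padding

# Illustrative EPIC-style header only; the AOP's real layout differs
# in detail from other coprocessors' EPIC versions.
EPICHeader = Struct(
    "version" / Int8ul,
    "seq" / Int16ul,
    Padding(1),
    "unk" / Int32ul,
    "timestamp" / Int64ul,
)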
Furthermore, in experiments/aop.py, do some AOP audio setup. Once that
is done, we can start streaming samples from the internal microphones by
making what the AOP considers power state adjustment calls. That is, we
adjust the power state of the 'hpai' device, first to the 'pw1 ' stage,
then to the 'pwrd' stage.
So, to see microphone samples, enter the AOP experiment shell first:
$ M1N1DEVICE=/dev/ttyACM0 experiments/aop.py
Within the shell, adjust the power state of 'hpai':
>>> aop_set_audio_pstate('hpai', 'pw1 ')
At that point /arm-io/admac-aop-audio powers up. In parallel to the AOP
shell, we can start tools/admac_stream.py on the just powered-up ADMAC
instance:
$ M1N1HEAP=0x10010000000 M1N1DEVICE=/dev/ttyACM1 tools/admac_stream.py \
--node admac-aop-audio --channel 1 -v | xxd -g 4 -c 12 -e
Back in the AOP shell, we can then set 'hpai' to the 'pwrd' state to
kick off the streaming:
>>> aop_set_audio_pstate('hpai', 'pwrd')
By that point, we should see samples coming out on the ADMAC end. The
samples are 32-bit floats packed in groups of three in a frame, e.g.
00000000: ba7ac6a7 ba32d3c3 baa17ae2 ..z...2..z..
0000000c: 38ccea5f b99c1a37 ba0c4bb1 _..87....K..
00000018: 39d2354f 3964b5ff 39b209fb O5.9..d9...9
00000024: b96a1d1f 39c8503f 3958fc4f ..j.?P.9O.X9
00000030: b6b1f5ff 39c72b8f 39bbe017 .....+.9...9
0000003c: 3a912de5 36dd4f7f 37f1147f .-.:.O.6...7
This has been tested on, and will to some degree be specific to, the
2021 MacBook Pro (t6000). Differences on other models are TBD (at the
very least, the number of microphones can be presumed to differ).
Signed-off-by: Martin Povišer <povik@protonmail.com>
The Image Signal Processor (ISP) is a co-processor in charge of the
FaceTime camera in Apple laptops. This tracer is based on the ISP found
in the M1 processor.
The ISP in the M1 SoC exposes seven inter-process communication
channels. Using those channels, the Application Processor (AP) can
perform a variety of tasks in the ISP and vice versa. All channels seem
to be bidirectional, with the sole exception of the TERMINAL channel.
DART translation is often used for the control structures passed over
the IPC channels, which are referenced by IO virtual addresses.
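As a sketch, a tracer can follow such a reference by reading through
the DART; the ADT path, stream index and size here are assumptions:

from m1n1.hw.dart import DART

# Sketch: resolve an IOVA via the ISP's DART and read the control
# structure behind it (stream 0 and 0x100 bytes are guesses).
dart = DART.from_adt(u, "/arm-io/dart-isp")
blob = dart.ioread(0, iova, 0x100)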
How those channels are used is still unclear, but the tracer should
allow us to dive deep into the high-level protocol used to communicate
with the FaceTime camera.
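For instance, a hypervisor trace script might enable it along these
lines (the ADT path and verbosity are assumptions; _reloadcls is
m1n1's usual live-reload pattern for tracers):

from m1n1.trace.isp import ISPTracer

# Sketch: instantiate the tracer against the ISP node and start it.
ISPTracer = ISPTracer._reloadcls()
isp_tracer = ISPTracer(hv, "/arm-io/isp", verbose=2)
isp_tracer.start()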
The ISP also exposes some I2C peripheral and SPMI registers, but I
haven't dived into their workings or purpose.
Signed-off-by: Kellerman Rivero <krsloco@gmail.com>
Trace async because we don't care about DMA buffers yet, and disable
dwc3 tracing by default since that's relatively boring.
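Sketched as trace-script defaults, assuming m1n1's TraceMode enum (the
tracer class names are placeholders, not the actual classes):

from m1n1.hv import TraceMode

# Sketch: ASYNC logs MMIO traffic without synchronously stalling the
# guest, which suffices while DMA buffers are ignored.
XHCITracer.DEFAULT_MODE = TraceMode.ASYNC  # placeholder tracer class
Dwc3Tracer.DEFAULT_MODE = TraceMode.OFF    # dwc3 off by default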
Signed-off-by: Sven Peter <sven@svenpeter.dev>
We weren't tracing the SPI control registers before, and the output
was borked due to changes in m1n1. These have been fixed. It still
doesn't show much useful information.
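The shape of the fix, sketched with assumed names following m1n1's
ADTDevTracer convention (SPIRegs stands in for the real register map
and its import path is a guess):

from m1n1.trace import ADTDevTracer
from m1n1.hw.spi import SPIRegs  # assumed location of the regmap

# Sketch: registering the control block's regmap is what makes the
# tracer decode and print SPI register accesses.
class SPITracer(ADTDevTracer):
    REGMAPS = [SPIRegs]
    NAMES = ["spi"]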
Signed-off-by: James Calligeros <jcalligeros99@gmail.com>