mirror of https://github.com/carlospolop/hacktricks (synced 2024-11-21 20:23:18 +00:00)

GITBOOK-4335: No subject
commit 0c8c5b0ede (parent 5ef62472bf): 8 changed files with 628 additions and 117 deletions
@@ -721,7 +721,11 @@

* [Format Strings - Arbitrary Read Example](binary-exploitation/format-strings/format-strings-arbitrary-read-example.md)
* [Format Strings Template](binary-exploitation/format-strings/format-strings-template.md)
* [Heap](binary-exploitation/heap/README.md)
* [Bins & Memory Allocations](binary-exploitation/heap/bins-and-memory-allocations.md)
* [Heap Functions Security Checks](binary-exploitation/heap/heap-functions-security-checks.md)
* [Use After Free](binary-exploitation/heap/use-after-free/README.md)
* [First Fit](binary-exploitation/heap/use-after-free/first-fit.md)
* [Double Free](binary-exploitation/heap/double-free.md)
* [Heap Overflow](binary-exploitation/heap/heap-overflow.md)
* [Common Binary Exploitation Protections & Bypasses](binary-exploitation/common-binary-protections-and-bypasses/README.md)
* [ASLR](binary-exploitation/common-binary-protections-and-bypasses/aslr/README.md)
@@ -48,7 +48,77 @@ Subheaps serve as memory reserves for secondary arenas in multithreaded applications

* To "grow" the subheap, the heap manager uses `mprotect` to change page permissions from `PROT_NONE` to `PROT_READ | PROT_WRITE`, prompting the kernel to allocate physical memory to the previously reserved addresses. This step-by-step approach allows the subheap to expand as needed.
* Once the entire subheap is exhausted, the heap manager creates a new subheap to continue allocation.
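As a minimal sketch of this reserve-then-commit pattern (not glibc's actual code; the sizes and flags below are illustrative):

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    // Reserve a large address range with no physical backing (like a subheap)
    size_t reserved = 64 * 1024 * 1024;
    unsigned char *base = mmap(NULL, reserved, PROT_NONE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    // "Grow" the usable part on demand, as the heap manager does with mprotect
    size_t committed = 1024 * 1024;
    if (mprotect(base, committed, PROT_READ | PROT_WRITE) != 0)
        return 1;

    base[0] = 0x41; // this page is now backed by real memory
    printf("reserved %zu bytes at %p, committed the first %zu\n",
           reserved, (void *)base, committed);

    munmap(base, reserved);
    return 0;
}
```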
### malloc\_state

**Each heap** (main arena or other threads' arenas) has a **`malloc_state` structure.**\
It’s important to notice that the **main arena `malloc_state`** structure is a **global variable in the libc** (therefore located in the libc memory space).\
In the case of **`malloc_state`** structures of the heaps of threads, they are located **inside the thread's own "heap"**.

There are some interesting things to note from this structure (see the C code below):

* The `mchunkptr bins[NBINS * 2 - 2];` contains **pointers** to the **first and last chunks** of the small, large and unsorted **bins** (the -2 is because the index 0 is not used).
* Therefore, the **first chunk** of these bins will have a **backwards pointer to this structure** and the **last chunk** of these bins will have a **forward pointer** to this structure, which basically means that if you can **leak these addresses in the main arena** you will have a pointer to the structure in the **libc**.
* The structs `struct malloc_state *next;` and `struct malloc_state *next_free;` are linked lists of arenas.
* The `top` chunk is the last "chunk", which is basically **all the remaining heap space**. Once the top chunk is "empty", the heap is completely used and it needs to request more space.
* The `last_remainder` chunk comes from cases where a chunk of the exact requested size is not available and therefore a bigger chunk is split; a pointer to the remaining part is placed here.
```c
// From https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/malloc_state
struct malloc_state
{
  /* Serialize access.  */
  __libc_lock_define (, mutex);
  /* Flags (formerly in max_fast).  */
  int flags;

  /* Fastbins */
  mfastbinptr fastbinsY[NFASTBINS];
  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr top;
  /* The remainder from the most recent split of a small request */
  mchunkptr last_remainder;
  /* Normal bins packed as described above */
  mchunkptr bins[NBINS * 2 - 2];

  /* Bitmap of bins */
  unsigned int binmap[BINMAPSIZE];

  /* Linked list */
  struct malloc_state *next;
  /* Linked list for free arenas.  Access to this field is serialized
     by free_list_lock in arena.c.  */
  struct malloc_state *next_free;
  /* Number of threads attached to this arena.  0 if the arena is on
     the free list.  Access to this field is serialized by
     free_list_lock in arena.c.  */
  INTERNAL_SIZE_T attached_threads;
  /* Memory allocated from the system in this arena.  */
  INTERNAL_SIZE_T system_mem;
  INTERNAL_SIZE_T max_system_mem;
};

typedef struct malloc_state *mstate;
```
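This is the classic libc-leak primitive derived from the previous points. A minimal sketch, assuming a modern glibc with tcache (the chunk sizes are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Too big for the tcache/fast bins, so freeing it sends it to the unsorted bin
    void *a = malloc(0x500);
    void *guard = malloc(0x20); // avoid consolidation with the top chunk

    free(a); // a's fd/bk now point into main_arena's malloc_state (in libc)

    // Reading the freed chunk's fd (the first 8 bytes of the old user data)
    // is exactly the kind of leak described above
    printf("unsorted bin fd: %p (inside main_arena, i.e. libc)\n", *(void **)a);

    (void)guard;
    return 0;
}
```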
### malloc\_chunk

This structure represents a particular chunk of memory. The various fields have different meanings for allocated and unallocated chunks.

```c
// From https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/malloc_chunk
struct malloc_chunk {
  INTERNAL_SIZE_T      mchunk_prev_size;  /* Size of previous chunk, if it is free. */
  INTERNAL_SIZE_T      mchunk_size;       /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;                /* double links -- used only if this chunk is free. */
  struct malloc_chunk* bk;
  /* Only used for large blocks: pointer to next larger size.  */
  struct malloc_chunk* fd_nextsize;       /* double links -- used only if this chunk is free. */
  struct malloc_chunk* bk_nextsize;
};

typedef struct malloc_chunk* mchunkptr;
```
As commented previously, these chunks also have some metadata, very well represented in this image:

@@ -73,120 +73,11 @@ Moreover, when available, the user data is used to contain also some data:

<figure><img src="../../.gitbook/assets/image (1243).png" alt=""><figcaption><p><a href="https://azeria-labs.com/wp-content/uploads/2019/03/chunk-allocated-CS.png">https://azeria-labs.com/wp-content/uploads/2019/03/chunk-allocated-CS.png</a></p></figcaption></figure>

{% hint style="info" %}
Note how linking the list this way prevents the need for an array where every single chunk is registered.
{% endhint %}
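To see this metadata in practice, a small sketch (not an official API) that reads the size field stored right before the returned pointer; its low 3 bits are the A/M/P flags shown in the image:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *p = malloc(24);

    // The chunk's size field sits right before the user data
    size_t size_field = ((size_t *)p)[-1];

    printf("requested 24 bytes, size field: %#zx\n", size_field);
    printf("chunk size: %#zx, P (prev in use): %zu, M (mmapped): %zu, A (non-main arena): %zu\n",
           size_field & ~(size_t)0x7,
           size_field & 0x1, (size_field >> 1) & 0x1, (size_field >> 2) & 0x1);

    free(p);
    return 0;
}
```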
## Free Protections

In order to protect from accidental or intended abuse of the free function, before executing its actions it performs some checks:

* It checks that the address [is aligned](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=blob;f=malloc/malloc.c;h=6e766d11bc85b6480fa5c9f2a76559f8acf9deb5;hb=HEAD#l4182) on an 8-byte (or 16-byte on 64-bit) boundary (`(address % 16) == 0`), since _malloc_ ensures all allocations are aligned.
* It checks that the chunk’s size field isn’t impossible: either because it is [too small](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=blob;f=malloc/malloc.c;h=6e766d11bc85b6480fa5c9f2a76559f8acf9deb5;hb=HEAD#l4318), too large, not an aligned size, or [would overlap the end of the process’ address space](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=blob;f=malloc/malloc.c;h=6e766d11bc85b6480fa5c9f2a76559f8acf9deb5;hb=HEAD#l4175).
* It checks that the chunk lies [within the boundaries of the arena](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=blob;f=malloc/malloc.c;h=6e766d11bc85b6480fa5c9f2a76559f8acf9deb5;hb=HEAD#l4318).
* It checks that the chunk is [not already marked as free](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=blob;f=malloc/malloc.c;h=6e766d11bc85b6480fa5c9f2a76559f8acf9deb5;hb=HEAD#l4182) by checking the corresponding “P” bit that lies in the metadata at the start of the next chunk.
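For example, this minimal snippet trips those checks (the exact abort message depends on the glibc version):

```c
#include <stdlib.h>

int main(void) {
    char *p = malloc(32);
    // Not a pointer returned by malloc: misaligned and not a chunk start,
    // so glibc aborts with something like "free(): invalid pointer"
    free(p + 8);
    return 0;
}
```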
### Bins order

#### For allocating:

1. If there is an available chunk of that size in the Tcache, use the Tcache
2. If super big, use `mmap`
3. Obtain the arena heap lock and:
   1. If small enough and a fast bin chunk of the requested size is available, use it and prefill the tcache from the fast bin
   2. Check each entry in the unsorted list searching for one chunk big enough, and prefill the tcache if possible
   3. Check the small bins or large bins (according to the requested size) and prefill the tcache if possible
   4. Create a new chunk from available memory
      1. If there isn't available memory, get more using `sbrk`
      2. If the main heap memory can't grow more, create a new space using `mmap`
   5. If nothing worked, return null

**For freeing:**

1. If the pointer is NULL, finish
2. Perform `free` sanity checks on the chunk to try to verify it's a legit chunk
   1. If small enough and the tcache is not full, put it there
   2. If the bit M is set (not heap), use `munmap`
3. Get the arena heap lock:
   1. If it fits in a fastbin, put it there
   2. If the chunk is > 64KB, consolidate the fastbins immediately and put the resulting merged chunks in the unsorted bin
   3. Merge the chunk backwards and forwards with neighboring freed chunks in the small, large and unsorted bins if any
   4. If it's at the top of the heap, merge it into the unused memory
   5. If none of the previous apply, store it in the unsorted list

### Quick Heap Example

Quick heap example from [https://guyinatuxedo.github.io/25-heap/index.html](https://guyinatuxedo.github.io/25-heap/index.html) but in arm64:
@@ -217,7 +178,21 @@ The extra spaces reserved (0x21-0x10=0x11) comes from the **added headers** (0x1

```
0x4: Non Main Arena - Specifies that the chunk was obtained from outside of the main arena
```

## Bins & Memory Allocations/Frees

Check what the bins are, how they are organized and how memory is allocated and freed in:

{% content-ref url="bins-and-memory-allocations.md" %}
[bins-and-memory-allocations.md](bins-and-memory-allocations.md)
{% endcontent-ref %}

## Heap Functions Security Checks

The functions involved in the heap will perform certain checks before performing their actions to try to make sure the heap wasn't corrupted:

{% content-ref url="heap-functions-security-checks.md" %}
[heap-functions-security-checks.md](heap-functions-security-checks.md)
{% endcontent-ref %}

## References
binary-exploitation/heap/bins-and-memory-allocations.md (new file, 212 lines)
@@ -0,0 +1,212 @@
# Bins & Memory Allocations
## Basic Information

In order to improve the efficiency of how chunks are stored, every chunk is not just in one linked list: there are several types. These are the bins, and there are 5 types of bins: [62](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=blob;f=malloc/malloc.c;h=6e766d11bc85b6480fa5c9f2a76559f8acf9deb5;hb=HEAD#l1407) small bins, 63 large bins, 1 unsorted bin, 10 fast bins and 64 tcache bins per thread.

The initial address of the unsorted, small and large bins is inside the same array. Index 0 is unused, 1 is the unsorted bin, bins 2-64 are small bins and bins 65-127 are large bins.

### Small Bins

Small bins are faster than large bins but slower than fast bins.

Each of the 62 bins will have **chunks of the same size**: 16, 24, ... (with a max size of 504 bytes in 32-bit and 1024 in 64-bit). This helps with the speed of finding the bin where a space should be allocated and with inserting and removing entries on these lists.
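As a sketch of that size-to-bin relation, assuming 64-bit glibc defaults where small-bin chunk sizes advance in 16-byte steps:

```c
#include <stdio.h>

// Mirror of the idea behind glibc's smallbin_index on 64-bit:
// chunk size -> index is simply size / 16 (assumption: SMALLBIN_WIDTH == 16)
static unsigned smallbin_index_64(unsigned size) {
    return size >> 4;
}

int main(void) {
    for (unsigned size = 0x20; size <= 0x80; size += 0x10)
        printf("chunk size %#x -> small bin index %u\n",
               size, smallbin_index_64(size));
    return 0;
}
```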
### Large bins

Unlike small bins, which manage chunks of fixed sizes, each **large bin handles a range of chunk sizes**. This is more flexible, allowing the system to accommodate **various sizes** without needing a separate bin for each size.

In a memory allocator, large bins start where small bins end. The ranges for large bins grow progressively larger, meaning the first bin might cover chunks from 512 to 576 bytes, while the next covers 576 to 640 bytes. This pattern continues, with the largest bin containing all chunks above 1MB.

Large bins are slower to operate compared to small bins because they must **sort and search through a list of varying chunk sizes to find the best fit** for an allocation. When a chunk is inserted into a large bin, it has to be sorted, and when memory is allocated, the system must find the right chunk. This extra work makes them **slower**, but since large allocations are less common than small ones, it's an acceptable trade-off.

There are:

* 32 bins of 64B range
* 16 bins of 512B range
* 8 bins of 4096B range
* 4 bins of 32768B range
* 2 bins of 262144B range
* 1 bin for the remaining sizes
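The following sketch just expands that list into concrete ranges; the starting size assumes 64-bit (where small bins end around 1024B) and the exact glibc boundaries may differ slightly:

```c
#include <stdio.h>

int main(void) {
    struct { int count; long width; } groups[] = {
        {32, 64}, {16, 512}, {8, 4096}, {4, 32768}, {2, 262144},
    };
    long start = 1024; // where small bins end on 64-bit (assumption)

    for (int g = 0; g < 5; g++)
        for (int i = 0; i < groups[g].count; i++) {
            printf("large bin: [%ld, %ld)\n", start, start + groups[g].width);
            start += groups[g].width;
        }
    printf("last large bin: [%ld, ...)\n", start); // everything bigger
    return 0;
}
```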
### Unsorted bin

The unsorted bin is a **fast cache** used by the heap manager to make memory allocation quicker. Here's how it works: When a program frees memory, the heap manager doesn't immediately put it in a specific bin. Instead, it first tries to **merge it with any neighbouring free chunks** to create a larger block of free memory. Then, it places this new chunk in a general bin called the "unsorted bin."

When a program **asks for memory**, the heap manager **checks the unsorted bin** to see if there's a big enough chunk. If it finds one, it uses it right away. If it doesn't find a suitable chunk, it moves the freed chunks to their corresponding bins, either small or large, based on their size.

So, the unsorted bin is a way to speed up memory allocation by quickly reusing recently freed memory and reducing the need for time-consuming searches and merges.

{% hint style="danger" %}
Note that even if chunks are of different categories, if an available chunk is colliding with another available chunk (even if they are of different categories), they will be merged.
{% endhint %}
### Fast bins

Fast bins are designed to **speed up memory allocation for small chunks** by keeping recently freed chunks in a quick-access structure. These bins use a Last-In, First-Out (LIFO) approach, which means that the **most recently freed chunk is the first** to be reused when there's a new allocation request. This behavior is advantageous for speed, as it's faster to insert and remove from the top of a stack (LIFO) compared to a queue (FIFO).

Additionally, **fast bins use singly linked lists**, not doubly linked, which further improves speed. Since chunks in fast bins aren't merged with neighbours, there's no need for a complex structure that allows removal from the middle. A singly linked list is simpler and quicker for these operations.

Basically, what happens here is that the header (the pointer to the first chunk to check) is always pointing to the latest freed chunk of that size. So:

* When a new chunk of that size is allocated, the header is pointing to a free chunk to use. As this free chunk points to the next one to use, that address is stored in the header so the next allocation knows where to get an available chunk.
* When a chunk is freed, the freed chunk will save the address of the currently available chunk, and the address of this newly freed chunk will be put in the header.

{% hint style="danger" %}
Chunks in fast bins aren't automatically set as available, so they remain fast bin chunks for some time instead of being merged with other chunks.
{% endhint %}
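The LIFO behaviour is easy to observe (a sketch; on a modern glibc these small frees land in the tcache first, which is also LIFO):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(32);
    void *b = malloc(32);
    printf("a=%p b=%p\n", a, b);

    free(a);
    free(b); // b is now at the head of the bin

    printf("1st malloc(32): %p (b, the last freed)\n", malloc(32));
    printf("2nd malloc(32): %p (a)\n", malloc(32));
    return 0;
}
```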
### Tcache (Per-Thread Cache) Bins

Even though threads try to have their own heap (see [Arenas](bins-and-memory-allocations.md#arenas) and [Subheaps](bins-and-memory-allocations.md#subheaps)), there is the possibility that a process with a lot of threads (like a web server) **will end up sharing the heap with other threads**. In this case, the main solution is the use of **locks**, which might **significantly slow down the threads**.

Therefore, a tcache is similar to a per-thread fast bin in the way that it's a **singly linked list** that doesn't merge chunks. Each thread has **64 singly-linked tcache bins**. Each bin can have a maximum of [7 same-size chunks](https://sourceware.org/git/?p=glibc.git;a=blob;f=malloc/malloc.c;h=2527e2504761744df2bdb1abdc02d936ff907ad2;hb=d5c3fafc4307c9b7a4c7d5cb381fcdbfad340bcc#l323) ranging from [24 to 1032B on 64-bit systems and 12 to 516B on 32-bit systems](https://sourceware.org/git/?p=glibc.git;a=blob;f=malloc/malloc.c;h=2527e2504761744df2bdb1abdc02d936ff907ad2;hb=d5c3fafc4307c9b7a4c7d5cb381fcdbfad340bcc#l315).

**When a thread frees** a chunk, **if it isn't too big** to be stored in the tcache and the respective tcache bin **isn't full** (already 7 chunks), **it'll be stored in there**. If it cannot go to the tcache, it'll need to wait for the heap lock to be able to perform the free operation globally.

When a **chunk is allocated**, if there is a free chunk of the needed size in the **Tcache it'll use it**; if not, it'll need to wait for the heap lock to be able to find one in the global bins or create a new one.\
There is also an optimization: in this case, while holding the heap lock, the thread **will fill its Tcache with heap chunks (7) of the requested size**, so in case it needs more, it'll find them in the Tcache.
## Allocation Flow

{% hint style="success" %}
(This current explanation is from [https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions](https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions). TODO: Check last version and update it)
{% endhint %}

Allocations are finally performed with the function `void * _int_malloc (mstate av, size_t bytes)`, which follows this order:
1. Updates `bytes` to take care of **alignments**, etc.
2. Checks if `av` is **NULL** or not.
3. In the case of absence of a **usable arena** (when `av` is NULL), calls `sysmalloc` to obtain a chunk using mmap. If successful, calls `alloc_perturb`. Returns the pointer.
4. Depending on the size:
   * \[Addition to the original] Use the tcache before checking the next fastbin.
   * \[Addition to the original] If there is no tcache but a different bin is used (see later step), try to fill the tcache from that bin.
   * If size falls in the **fastbin** range:
     1. Get index into the fastbin array to access an appropriate bin according to the request size.
     2. Remove the first chunk in that bin and make `victim` point to it.
     3. If `victim` is NULL, move on to the next case (smallbin).
     4. If `victim` is not NULL, check the size of the chunk to ensure that it belongs to that particular bin. An error ("malloc(): memory corruption (fast)") is thrown otherwise.
     5. Calls `alloc_perturb` and then returns the pointer.
   * If size falls in the **smallbin** range:
     1. Get index into the smallbin array to access an appropriate bin according to the request size.
     2. If there are no chunks in this bin, move on to the next case. This is checked by comparing the pointers `bin` and `bin->bk`.
     3. `victim` is made equal to `bin->bk` (the last chunk in the bin). If it is NULL (happens during `initialization`), call `malloc_consolidate` and skip this complete step of checking into different bins.
     4. Otherwise, when `victim` is non NULL, check if `victim->bk->fd` and `victim` are equal or not. If they are not equal, an error (`malloc(): smallbin double linked list corrupted`) is thrown.
     5. Sets the PREV\_INUSE bit for the next chunk (in memory, not in the doubly linked list) for `victim`.
     6. Remove this chunk from the bin list.
     7. Set the appropriate arena bit for this chunk depending on `av`.
     8. Calls `alloc_perturb` and then returns the pointer.
   * If size does not fall in the smallbin range:
     1. Get index into the largebin array to access an appropriate bin according to the request size.
     2. See if `av` has fastchunks or not. This is done by checking the `FASTCHUNKS_BIT` in `av->flags`. If so, call `malloc_consolidate` on `av`.
5. If no pointer has yet been returned, this signifies one or more of the following cases:
   1. Size falls into 'fastbin' range but no fastchunk is available.
   2. Size falls into 'smallbin' range but no smallchunk is available (calls `malloc_consolidate` during initialization).
   3. Size falls into 'largebin' range.
6. Next, **unsorted chunks** are checked and traversed chunks are placed into bins. This is the only place where chunks are placed into bins. Iterate the unsorted bin from the 'TAIL'.
   1. `victim` points to the current chunk being considered.
   2. Check if `victim`'s chunk size is within minimum (`2*SIZE_SZ`) and maximum (`av->system_mem`) range. Throw an error (`malloc(): memory corruption`) otherwise.
   3. If (the size of the requested chunk falls in smallbin range) and (`victim` is the last remainder chunk) and (it is the only chunk in the unsorted bin) and (the chunk's size >= the one requested): **break the chunk into 2 chunks**:
      * The first chunk matches the size requested and is returned.
      * The leftover chunk becomes the new last remainder chunk. It is inserted back into the unsorted bin.
        1. Set `chunk_size` and `chunk_prev_size` fields appropriately for both chunks.
        2. The first chunk is returned after calling `alloc_perturb`.
   4. If the above condition is false, control reaches here. Remove `victim` from the unsorted bin. If the size of `victim` matches the size requested exactly, return this chunk after calling `alloc_perturb`.
   5. If `victim`'s size falls in smallbin range, add the chunk to the appropriate smallbin at the `HEAD`.
   6. Else insert into the appropriate largebin while maintaining sorted order:
      1. First check the last chunk (smallest). If `victim` is smaller than the last chunk, insert it at the last.
      2. Otherwise, loop to find a chunk with size >= size of `victim`. If the size is exactly the same, always insert in the second position.
   7. Repeat this whole step a maximum of `MAX_ITERS` (10000) times or till all chunks in the unsorted bin get exhausted.
7. After checking unsorted chunks, check if the requested size does not fall in the smallbin range; if so, then check the **largebins**.
   1. Get index into the largebin array to access an appropriate bin according to the request size.
   2. If the size of the largest chunk (the first chunk in the bin) is greater than the size requested:
      1. Iterate from 'TAIL' to find a chunk (`victim`) with the smallest size >= the requested size.
      2. Call `unlink` to remove the `victim` chunk from the bin.
      3. Calculate `remainder_size` for the `victim`'s chunk (this will be `victim`'s chunk size - requested size).
      4. If this `remainder_size` >= `MINSIZE` (the minimum chunk size including the headers), split the chunk into two chunks. Otherwise, the entire `victim` chunk will be returned. Insert the remainder chunk in the unsorted bin (at the 'TAIL' end). A check is made in the unsorted bin whether `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`. An error is thrown otherwise ("malloc(): corrupted unsorted chunks").
      5. Return the `victim` chunk after calling `alloc_perturb`.
8. Till now, we have checked the unsorted bin and also the respective fast, small or large bin. Note that a single bin (fast or small) was checked using the **exact** size of the requested chunk. Repeat the following steps till all bins are exhausted:
   1. The index into the bin array is incremented to check the next bin.
   2. Use the `av->binmap` map to skip over bins that are empty.
   3. `victim` is pointed to the 'TAIL' of the current bin.
   4. Using the binmap ensures that if a bin is skipped (in the above 2nd step), it is definitely empty. However, it does not ensure that all empty bins will be skipped. Check if the victim is empty or not. If empty, again skip the bin and repeat the above process (or 'continue' this loop) till we arrive at a nonempty bin.
   5. Split the chunk (`victim` points to the last chunk of a nonempty bin) into two chunks. Insert the remainder chunk in the unsorted bin (at the 'TAIL' end). A check is made in the unsorted bin whether `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`. An error is thrown otherwise ("malloc(): corrupted unsorted chunks 2").
   6. Return the `victim` chunk after calling `alloc_perturb`.
9. If still no empty bin is found, the 'top' chunk will be used to service the request:
   1. `victim` points to `av->top`.
   2. If the size of the 'top' chunk >= 'requested size' + `MINSIZE`, split it into two chunks. In this case, the remainder chunk becomes the new 'top' chunk and the other chunk is returned to the user after calling `alloc_perturb`.
   3. See if `av` has fastchunks or not. This is done by checking the `FASTCHUNKS_BIT` in `av->flags`. If so, call `malloc_consolidate` on `av`. Return to step 6 (where we check the unsorted bin).
   4. If `av` does not have fastchunks, call `sysmalloc` and return the pointer obtained after calling `alloc_perturb`.
## Free Flow

{% hint style="success" %}
(This current explanation is from [https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions](https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions). TODO: Check last version and update it)
{% endhint %}

The final function freeing chunks of memory is `_int_free (mstate av, mchunkptr p, int have_lock)`:
1. Check whether `p` is before `p + chunksize(p)` in memory (to avoid wrapping). An error (`free(): invalid pointer`) is thrown otherwise.
2. Check whether the chunk is at least of size `MINSIZE` or a multiple of `MALLOC_ALIGNMENT`. An error (`free(): invalid size`) is thrown otherwise.
3. If the chunk's size falls in the fastbin list:
   1. Check if the next chunk's size is between the minimum and maximum size (`av->system_mem`); throw an error (`free(): invalid next size (fast)`) otherwise.
   2. Calls `free_perturb` on the chunk.
   3. Set `FASTCHUNKS_BIT` for `av`.
   4. Get index into the fastbin array according to chunk size.
   5. Check if the top of the bin is not the chunk we are going to add. Otherwise, throw an error (`double free or corruption (fasttop)`).
   6. Check if the size of the fastbin chunk at the top is the same as the chunk we are adding. Otherwise, throw an error (`invalid fastbin entry (free)`).
   7. Insert the chunk at the top of the fastbin list and return.
4. If the chunk is not mmapped:
   1. Check if the chunk is the top chunk or not. If yes, an error (`double free or corruption (top)`) is thrown.
   2. Check whether the next chunk (by memory) is within the boundaries of the arena. If not, an error (`double free or corruption (out)`) is thrown.
   3. Check whether the next chunk's (by memory) previous-in-use bit is marked or not. If not, an error (`double free or corruption (!prev)`) is thrown.
   4. Check whether the size of the next chunk is between the minimum and maximum size (`av->system_mem`). If not, an error (`free(): invalid next size (normal)`) is thrown.
   5. Call `free_perturb` on the chunk.
   6. If the previous chunk (by memory) is not in use, call `unlink` on the previous chunk.
   7. If the next chunk (by memory) is not the top chunk:
      1. If the next chunk is not in use, call `unlink` on the next chunk.
      2. Merge the chunk with the previous and next (by memory), if either is free, and add it to the head of the unsorted bin. Before inserting, check whether `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)` or not. If not, an error ("free(): corrupted unsorted chunks") is thrown.
   8. If the next chunk (by memory) was the top chunk, merge the chunks appropriately into a single top chunk.
5. If the chunk was mmapped, call `munmap_chunk`.
## Heap Functions Security Checks

Check the security checks performed by heavily used heap functions in:

{% content-ref url="heap-functions-security-checks.md" %}
[heap-functions-security-checks.md](heap-functions-security-checks.md)
{% endcontent-ref %}
## References

* [https://azeria-labs.com/heap-exploitation-part-1-understanding-the-glibc-heap-implementation/](https://azeria-labs.com/heap-exploitation-part-1-understanding-the-glibc-heap-implementation/)
* [https://azeria-labs.com/heap-exploitation-part-2-glibc-heap-free-bins/](https://azeria-labs.com/heap-exploitation-part-2-glibc-heap-free-bins/)
* [https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions](https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions)
binary-exploitation/heap/double-free.md (new file, 140 lines)
@@ -0,0 +1,140 @@
# Double Free
## Basic Information

If you free a block of memory more than once, it can mess up the allocator's data and open the door to attacks. Here's how it happens: when you free a block of memory, it goes back into a list of free chunks (e.g. the "fastbin"). If you free the same block twice in a row, the allocator detects this and throws an error. But if you **free another chunk in between, the double-free check is bypassed**, causing corruption.

Now, when you ask for new memory (using `malloc`), the allocator might give you a **block that's been freed twice**. This can lead to two different pointers pointing to the same memory location. If an attacker controls one of those pointers, they can change the contents of that memory, which can cause security issues or even allow them to execute code.

Example:
```c
#include <stdio.h>
#include <stdlib.h>

int main() {
    // Allocate memory for nine chunks
    char *a = (char *)malloc(10);
    char *b = (char *)malloc(10);
    char *c = (char *)malloc(10);
    char *d = (char *)malloc(10);
    char *e = (char *)malloc(10);
    char *f = (char *)malloc(10);
    char *g = (char *)malloc(10);
    char *h = (char *)malloc(10);
    char *i = (char *)malloc(10);

    // Print initial memory addresses
    printf("Initial allocations:\n");
    printf("a: %p\n", (void *)a);
    printf("b: %p\n", (void *)b);
    printf("c: %p\n", (void *)c);
    printf("d: %p\n", (void *)d);
    printf("e: %p\n", (void *)e);
    printf("f: %p\n", (void *)f);
    printf("g: %p\n", (void *)g);
    printf("h: %p\n", (void *)h);
    printf("i: %p\n", (void *)i);

    // Fill the tcache bin (7 chunks max) so the next frees go to the fast bin
    free(a);
    free(b);
    free(c);
    free(d);
    free(e);
    free(f);
    free(g);

    // Introduce double-free vulnerability in the fast bin
    // (freeing i in between bypasses the "fasttop" double-free check)
    free(h);
    free(i);
    free(h);

    // Reallocate memory and print the addresses
    char *a1 = (char *)malloc(10);
    char *b1 = (char *)malloc(10);
    char *c1 = (char *)malloc(10);
    char *d1 = (char *)malloc(10);
    char *e1 = (char *)malloc(10);
    char *f1 = (char *)malloc(10);
    char *g1 = (char *)malloc(10);
    char *h1 = (char *)malloc(10);
    char *i1 = (char *)malloc(10);
    char *i2 = (char *)malloc(10);

    // Print the addresses after reallocation
    printf("After reallocations:\n");
    printf("a1: %p\n", (void *)a1);
    printf("b1: %p\n", (void *)b1);
    printf("c1: %p\n", (void *)c1);
    printf("d1: %p\n", (void *)d1);
    printf("e1: %p\n", (void *)e1);
    printf("f1: %p\n", (void *)f1);
    printf("g1: %p\n", (void *)g1);
    printf("h1: %p\n", (void *)h1);
    printf("i1: %p\n", (void *)i1);
    printf("i2: %p\n", (void *)i2);

    return 0;
}
```
In this example, after filling the tcache with several freed chunks, the code **frees chunk `h`, then chunk `i`, and then `h` again, causing a double free that goes undetected** (the fastbin top check only compares against the most recently freed chunk, which is `i`). This opens the possibility of receiving overlapping memory addresses when reallocating, meaning two or more pointers can point to the same memory location. Manipulating data through one pointer can then affect the other, creating a critical security risk and potential for exploitation.

Executing it, note how **`i1` and `i2` got the same address**:
<pre><code>Initial allocations:
a: 0xaaab0f0c22a0
b: 0xaaab0f0c22c0
c: 0xaaab0f0c22e0
d: 0xaaab0f0c2300
e: 0xaaab0f0c2320
f: 0xaaab0f0c2340
g: 0xaaab0f0c2360
h: 0xaaab0f0c2380
i: 0xaaab0f0c23a0
After reallocations:
a1: 0xaaab0f0c2360
b1: 0xaaab0f0c2340
c1: 0xaaab0f0c2320
d1: 0xaaab0f0c2300
e1: 0xaaab0f0c22e0
f1: 0xaaab0f0c22c0
g1: 0xaaab0f0c22a0
h1: 0xaaab0f0c2380
<strong>i1: 0xaaab0f0c23a0
</strong><strong>i2: 0xaaab0f0c23a0
</strong></code></pre>
## References

* [https://heap-exploitation.dhavalkapil.com/attacks/double\_free](https://heap-exploitation.dhavalkapil.com/attacks/double\_free)
binary-exploitation/heap/heap-functions-security-checks.md (new file, 92 lines)
@@ -0,0 +1,92 @@
# Heap Functions Security Checks
## unlink

This function removes a chunk from a doubly linked list. Common checks ensure that the linked list structure remains consistent when unlinking chunks.

* **Consistency Checks**:
  * Check if `P->fd->bk == P` and `P->bk->fd == P`.
  * Error message: `corrupted double-linked list`
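A simplified model of that check (based on glibc's unlink logic, not a verbatim copy):

```c
#include <stdio.h>
#include <stdlib.h>

struct chunk { struct chunk *fd, *bk; };

// Safe unlinking: P's neighbours must still point back at P
// before it is removed from the list
static void unlink_chunk(struct chunk *p) {
    if (p->fd->bk != p || p->bk->fd != p) {
        fputs("corrupted double-linked list\n", stderr);
        abort();
    }
    p->fd->bk = p->bk;
    p->bk->fd = p->fd;
}

int main(void) {
    // Build a small circular doubly linked list a <-> b <-> c
    struct chunk a, b, c;
    a.fd = &b; b.bk = &a; b.fd = &c; c.bk = &b; c.fd = &a; a.bk = &c;

    unlink_chunk(&b);    // fine: the list is consistent

    b.fd = &a;           // corrupt one pointer
    // unlink_chunk(&b); // would now abort with the message above
    return 0;
}
```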
## \_int\_malloc

This function is responsible for allocating memory from the heap. Checks here ensure memory is not corrupted during allocation.

* **Fastbin Size Check**:
  * When removing a chunk from a fastbin, ensure the chunk's size is within the fastbin range.
  * Error message: `malloc(): memory corruption (fast)`
* **Smallbin Consistency Check**:
  * When removing a chunk from a smallbin, ensure the previous and next links in the doubly linked list are consistent.
  * Error message: `malloc(): smallbin double linked list corrupted`
* **Unsorted Bin Memory Range Check**:
  * Ensure the size of chunks in the unsorted bin is within minimum and maximum limits.
  * Error message: `malloc(): memory corruption`
* **Unsorted Bin Consistency Check (First Scenario)**:
  * When inserting a remainder chunk into the unsorted bin, check if `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`.
  * Error message: `malloc(): corrupted unsorted chunks`
* **Unsorted Bin Consistency Check (Second Scenario)**:
  * Same as the previous check, but triggered when inserting after splitting a fast or small chunk.
  * Error message: `malloc(): corrupted unsorted chunks 2`
## \_int\_free

This function frees previously allocated memory. The checks here help ensure proper memory deallocation and prevent memory corruption.

* **Pointer Boundary Check**:
  * Ensure the pointer being freed isn't wrapping around the memory.
  * Error message: `free(): invalid pointer`
* **Size Check**:
  * Ensure the size of the chunk being freed is at least `MINSIZE` or a multiple of `MALLOC_ALIGNMENT`.
  * Error message: `free(): invalid size`
* **Fastbin Size Check**:
  * For fastbin chunks, ensure the next chunk's size is within the minimum and maximum limits.
  * Error message: `free(): invalid next size (fast)`
* **Fastbin Double Free Check**:
  * When inserting a chunk into a fastbin, ensure the chunk at the head isn't the same as the one being inserted.
  * Error message: `double free or corruption (fasttop)`
* **Fastbin Consistency Check**:
  * When inserting into a fastbin, ensure the sizes of the head chunk and the chunk being inserted are the same.
  * Error message: `invalid fastbin entry (free)`
* **Top Chunk Consistency Check**:
  * For non-fastbin chunks, ensure the chunk isn't the same as the top chunk.
  * Error message: `double free or corruption (top)`
* **Memory Boundaries Check**:
  * Ensure the next chunk by memory is within the boundaries of the arena.
  * Error message: `double free or corruption (out)`
* **Prev\_inuse Bit Check**:
  * Ensure the previous-in-use bit in the next chunk is marked.
  * Error message: `double free or corruption (!prev)`
* **Normal Size Check**:
  * Ensure the size of the next chunk is within valid ranges.
  * Error message: `free(): invalid next size (normal)`
* **Unsorted Bin Consistency Check**:
  * When inserting a coalesced chunk into the unsorted bin, check if `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`.
  * Error message: `free(): corrupted unsorted chunks`
@@ -24,6 +24,20 @@ In stack overflows we know that some registers like the instruction pointer or t

In order to find overflow offsets you can use the same patterns as in [**stack overflows**](../stack-overflow/#finding-stack-overflows-offsets).
{% endhint %}
### Stack Overflows vs Heap Overflows
|
||||||
|
|
||||||
|
In stack overflows, the arrangement and the data that will be present on the stack at the moment the vulnerability can be triggered are fairly reliable. This is because the stack is linear and always grows into colliding memory: at **specific points of the program's execution, the stack usually stores a similar kind of data**, with a specific structure including some pointers at the end of the stack portion used by each function.

However, in the case of a heap overflow, the used memory isn't linear: **allocated chunks usually sit in separate positions of memory** (not one next to the other), because **bins and zones** separate allocations by size and because **previously freed memory is reused** before new chunks are allocated. It's therefore **complicated to know which object will be colliding with the one vulnerable** to a heap overflow. So, when a heap overflow is found, it's necessary to find a **reliable way to make the desired object be next in memory** to the one that can be overflowed.

One of the techniques used for this is **Heap Grooming**, which is used for example [**in this post**](https://azeria-labs.com/grooming-the-ios-kernel-heap/). The post explains how, in the iOS kernel, when a zone runs out of memory to store chunks, it's expanded by a kernel page, and this page is split into chunks of the expected sizes, which are used in order (until iOS 9.2; since then these chunks are used in a randomised way to make exploitation of these attacks harder).

Therefore, in the previous post where a heap overflow is happening, in order to force the overflowed object to collide with a victim object, several **`kallocs` are forced by several threads to try to ensure that all the free chunks are filled and that a new page is created**.

In order to force this filling with objects of a specific size, the **out-of-line allocation associated with an iOS mach port** is an ideal candidate. By crafting the size of the message, it's possible to specify exactly the size of the `kalloc` allocation, and when the corresponding mach port is destroyed, the corresponding allocation is immediately released back to `kfree`.

Then, some of these placeholders can be **freed**. The **`kalloc.4096` free list releases elements in a last-in-first-out order**, which basically means that if some placeholders are freed while the exploit tries to allocate several victim objects along with the object vulnerable to the overflow, it's probable that this object will be followed by a victim object.

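The same grooming idea can be sketched in userland C (a hypothetical illustration of the concept, not the iOS kernel technique from the post): spray same-sized allocations to plug existing holes, then punch a hole at a known position so the vulnerable object lands right before a victim object.

```c
#include <stdlib.h>

#define SPRAY 256
#define SZ    0x100

int main(void) {
    void *spray[SPRAY];

    // 1. Plug existing holes: after enough same-sized allocations,
    //    new chunks start coming from fresh, contiguous memory.
    for (int i = 0; i < SPRAY; i++)
        spray[i] = malloc(SZ);

    // 2. Free one slot to create a hole at a known position.
    free(spray[100]);

    // 3. The next same-sized allocation reuses that hole, so the
    //    vulnerable object now sits directly before spray[101],
    //    the victim object a heap overflow would corrupt.
    void *vulnerable = malloc(SZ);

    (void)vulnerable;
    return 0;
}
```
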
## Example ARM64

In the page [https://8ksec.io/arm64-reversing-and-exploitation-part-1-arm-instruction-set-simple-heap-overflow/](https://8ksec.io/arm64-reversing-and-exploitation-part-1-arm-instruction-set-simple-heap-overflow/) you can find a heap overflow example where a command that is going to be executed is stored in the chunk following the overflowed chunk. So, it's possible to modify the executed command by overwriting it with an easy exploit such as the one modelled below:

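The post's actual exploit code is elided from this diff; the following is a hypothetical minimal C model of the vulnerable pattern. The `0x30` offset assumes 64-bit glibc, where a `malloc(0x20)` request yields a 0x30-byte chunk, so it's illustrative only and depends on the allocator.

```c
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(0x20); // chunk vulnerable to the overflow
    char *cmd = malloc(0x20); // adjacent chunk holding the command
    strcpy(cmd, "id");

    // 0x30 bytes of padding cross buf's user data and the next chunk's
    // header, landing on cmd's user data (offset is allocator-dependent).
    char payload[0x50];
    memset(payload, 'A', 0x30);
    strcpy(payload + 0x30, "touch /tmp/pwned");
    memcpy(buf, payload, 0x30 + strlen("touch /tmp/pwned") + 1); // unbounded copy

    system(cmd); // now runs the attacker's command
    return 0;
}
```
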
@ -20,10 +20,16 @@ As the name implies, this vulnerability occurs when a program **stores some spac

The problem here is that it's not illegal (there **won't be errors**) when **freed memory is accessed**. So, if the program (or the attacker) manages to **allocate the freed memory and store arbitrary data** there, then when the freed memory is accessed through the initial pointer, that **data will have been overwritten**, causing a **vulnerability whose impact depends on the sensitivity of the data** originally stored there (if it was a pointer to a function that was going to be called, an attacker could control it).

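A minimal hypothetical illustration of that pattern (the chunk reuse relies on the allocator handing the same freed chunk back for a same-sized request, which glibc's tcache and fastbins do):

```c
#include <stdio.h>
#include <stdlib.h>

struct handler {
    void (*fn)(void);
};

void expected(void) { puts("expected function"); }
void hijacked(void) { puts("attacker-controlled function"); }

int main(void) {
    struct handler *h = malloc(sizeof(*h));
    h->fn = expected;
    free(h); // 'h' is now a dangling pointer

    // An attacker-controlled allocation of the same size reuses the chunk
    void **p = malloc(sizeof(*h));
    *p = (void *)hijacked; // overwrites what used to be h->fn

    h->fn(); // use after free: calls hijacked()
    return 0;
}
```
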
## Other References & Examples
|
### First Fit attack
|
||||||
|
|
||||||
A first fit attack targets the way some memory allocators, like glibc's, manage freed memory. When you free a block of memory, it gets added to a list, and new memory requests pull from that list. Attackers can abuse this behaviour to manipulate **which memory blocks get reused, potentially gaining control over them**. This can lead to use-after-free issues, where an attacker could **change the contents of memory that gets reallocated**, creating a security risk.\
Check more info in:

{% content-ref url="first-fit.md" %}
[first-fit.md](first-fit.md)
{% endcontent-ref %}

<details>

68 binary-exploitation/heap/use-after-free/first-fit.md Normal file

@ -0,0 +1,68 @@

# First Fit

<details>

<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>

Other ways to support HackTricks:

* If you want to see your **company advertised in HackTricks** or **download HackTricks in PDF** Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
* Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
* Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks\_live)**.**
* **Share your hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.

</details>

## **First Fit**

When you free memory in a program using glibc, different "bins" are used to manage the memory chunks. Here's a simplified explanation of two common scenarios: unsorted bins and fastbins.

### Unsorted Bins

When you free a memory chunk that's not a fast chunk, it goes to the unsorted bin. This bin acts like a list where newly freed chunks are added to the front (the "head"). When you request a new chunk of memory, the allocator looks at the unsorted bin from the back (the "tail") to find a chunk that's big enough. If a chunk from the unsorted bin is bigger than what you need, it gets split, with the front part being returned and the remaining part staying in the bin.

Example:

* You allocate 300 bytes (`a`), then 250 bytes (`b`), then free `a` and request 250 bytes again (`c`).
* When you free `a`, it goes to the unsorted bin.
* If you then request 250 bytes, the allocator finds the freed `a` at the tail and splits it, returning the part that fits your request and keeping the rest in the bin.
* `c` will point to the same memory as the previous `a`, and it will contain `a`'s stale content.

```c
char *a = malloc(300);
char *b = malloc(250);
free(a);                // 'a' goes to the unsorted bin
char *c = malloc(250);  // served from the freed 'a' chunk
```

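Note that on modern glibc (≥ 2.26) the per-thread tcache sits in front of the unsorted bin, so the exact 300/250 reuse above may not happen. A self-contained sketch that keeps the freed chunk out of tcache by using a larger size (> 0x408 bytes of user data on 64-bit) and prints the reuse:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *a = malloc(0x420); // too big for tcache/fastbins on 64-bit glibc
    char *b = malloc(0x100); // keeps 'a' from merging with the top chunk
    free(a);                 // 'a' goes to the unsorted bin
    char *c = malloc(0x400); // carved out of the freed 'a' chunk

    printf("a=%p c=%p\n", (void *)a, (void *)c); // same address expected
    (void)b;
    return 0;
}
```
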
### Fastbins

Fastbins are used for small memory chunks. Unlike unsorted bins, fastbins add new chunks to the head, creating a last-in-first-out (LIFO) behavior. If you request a small chunk of memory, the allocator will pull from the fastbin's head.

Example:

* You allocate four chunks of 20 bytes each (`a`, `b`, `c`, `d`).
* When you free them in any order, the freed chunks are added to the fastbin's head.
* If you then request a 20-byte chunk, the allocator will return the most recently freed chunk from the head of the fastbin.

```c
char *a = malloc(20);
char *b = malloc(20);
char *c = malloc(20);
char *d = malloc(20);
free(a);
free(b);
free(c);
free(d);
a = malloc(20); // reuses the chunk of 'd' (LIFO: last freed, first reused)
b = malloc(20); // reuses the chunk of 'c'
c = malloc(20); // reuses the chunk of 'b'
d = malloc(20); // reuses the chunk of 'a'
```

## Other References & Examples
|
||||||
|
|
||||||
|
* [https://heap-exploitation.dhavalkapil.com/attacks/first\_fit](https://heap-exploitation.dhavalkapil.com/attacks/first\_fit)
* [https://8ksec.io/arm64-reversing-and-exploitation-part-2-use-after-free/](https://8ksec.io/arm64-reversing-and-exploitation-part-2-use-after-free/)
* ARM64. Use after free: generate a user object, free it, then allocate an object that reuses the freed chunk and lets you write into it, **overwriting the value of user->password** from the previous object. Finally, reuse the user to **bypass the password check**.