GITBOOK-4338: No subject

This commit is contained in:
CPol 2024-05-15 16:38:37 +00:00 committed by gitbook-bot
parent e0650f6fb2
commit a25b11ae80
38 changed files with 3080 additions and 284 deletions

(Several binary image files were updated in this commit; previews not shown.)
View file

@ -722,7 +722,11 @@
* [Format Strings Template](binary-exploitation/format-strings/format-strings-template.md)
* [Heap](binary-exploitation/heap/README.md)
* [Bins & Memory Allocations](binary-exploitation/heap/bins-and-memory-allocations.md)
* [Heap Functions Security Checks](binary-exploitation/heap/heap-functions-security-checks.md)
* [Heap Memory Functions](binary-exploitation/heap/heap-memory-functions/README.md)
* [free](binary-exploitation/heap/heap-memory-functions/free.md)
* [malloc & sysmalloc](binary-exploitation/heap/heap-memory-functions/malloc-and-sysmalloc.md)
* [unlink](binary-exploitation/heap/heap-memory-functions/unlink.md)
* [Heap Functions Security Checks](binary-exploitation/heap/heap-memory-functions/heap-functions-security-checks.md)
* [Use After Free](binary-exploitation/heap/use-after-free/README.md)
* [First Fit](binary-exploitation/heap/use-after-free/first-fit.md)
* [Double Free](binary-exploitation/heap/double-free.md)

View file

@ -23,15 +23,15 @@ There are different ways to reserve the space, mainly depending on the used bin,
Note that if the requested **memory passes a threshold**, **`mmap`** will be used to map the requested memory.
### Arenas
## Arenas
In **multithreaded** applications, the heap manager must prevent **race conditions** that could lead to crashes. Initially, this was done using a **global mutex** to ensure that only one thread could access the heap at a time, but this caused **performance issues** due to the mutex-induced bottleneck.
To address this, the ptmalloc2 heap allocator introduced "arenas," where **each arena** acts as a **separate heap** with its **own** data **structures** and **mutex**, allowing multiple threads to perform heap operations without interfering with each other, as long as they use different arenas.
The default "main" arena handles heap operations for single-threaded applications. When **new threads** are added, the heap manager assigns them **secondary arenas** to reduce contention. It first attempts to attach each new thread to an unused arena, creating new ones if needed, up to a limit of 2 times the CPU cores for 32-bit systems and 8 times for 64-bit systems. Once the limit is reached, **threads must share arenas**, leading to potential contention.
The default "main" arena handles heap operations for single-threaded applications. When **new threads** are added, the heap manager assigns them **secondary arenas** to reduce contention. It first attempts to attach each new thread to an unused arena, creating new ones if needed, up to a limit of 2 times the number of CPU cores for 32-bit systems and 8 times for 64-bit systems. Once the limit is reached, **threads must share arenas**, leading to potential contention.
Unlike the main arena, which expands using the `brk` system call, secondary arenas create "subheaps" using `mmap` and `mprotect` to simulate the heap behavior, allowing flexibility in managing memory for multithreaded operations.
Unlike the main arena, which expands using the `brk` system call, secondary arenas create "subheaps" using `mmap` and `mprotect` to simulate the heap behaviour, allowing flexibility in managing memory for multithreaded operations.
### Subheaps
@ -48,14 +48,46 @@ Subheaps serve as memory reserves for secondary arenas in multithreaded applicat
* To "grow" the subheap, the heap manager uses `mprotect` to change page permissions from `PROT_NONE` to `PROT_READ | PROT_WRITE`, prompting the kernel to allocate physical memory to the previously reserved addresses. This step-by-step approach allows the subheap to expand as needed.
* Once the entire subheap is exhausted, the heap manager creates a new subheap to continue allocation.
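The following is a minimal sketch (not glibc code) of that reserve-then-commit pattern: a large region is reserved with `PROT_NONE` and a slice of it is later made usable with `mprotect`, which is essentially how a subheap grows. The sizes are arbitrary examples.
```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define RESERVE (64 * 1024 * 1024)  /* hypothetical subheap reservation */
#define COMMIT  (4 * 1024 * 1024)   /* first slice actually made usable */

int main(void) {
    /* Reserve a big region with no access permissions and no physical backing. */
    void *sub = mmap(NULL, RESERVE, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (sub == MAP_FAILED) { perror("mmap"); return 1; }

    /* "Grow" the subheap by flipping permissions on the first slice. */
    if (mprotect(sub, COMMIT, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect"); return 1;
    }
    memset(sub, 'A', COMMIT);  /* the kernel now backs these pages */
    printf("reserved %d bytes at %p, first %d bytes usable\n", RESERVE, sub, COMMIT);

    munmap(sub, RESERVE);
    return 0;
}
```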
### heap\_info <a href="#heap_info" id="heap_info"></a>
This struct stores relevant information about a heap. Moreover, heap memory might not be contiguous after further allocations; this struct also stores that information.
```c
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/malloc/arena.c#L837
typedef struct _heap_info
{
mstate ar_ptr; /* Arena for this heap. */
struct _heap_info *prev; /* Previous heap. */
size_t size; /* Current size in bytes. */
size_t mprotect_size; /* Size in bytes that has been mprotected
PROT_READ|PROT_WRITE. */
size_t pagesize; /* Page size used when allocating the arena. */
/* Make sure the following data is properly aligned, particularly
that sizeof (heap_info) + 2 * SIZE_SZ is a multiple of
MALLOC_ALIGNMENT. */
char pad[-3 * SIZE_SZ & MALLOC_ALIGN_MASK];
} heap_info;
```
### malloc\_state
**Each heap** (the main arena or other threads' arenas) has a **`malloc_state` structure.**\
Its important to notice that the **main arena `malloc_stat`**`e` structure is a **global variable in the libc** (therefore located in the libc memory space).\
It's important to notice that the **main arena `malloc_state`** structure is a **global variable in the libc** (therefore located in the libc memory space).\
In the case of **`malloc_state`** structures of the heaps of threads, they are located **inside the own thread's "heap"**.
There are some interesting things to note from this structure (see the C code below):
* `__libc_lock_define (, mutex);` is there to make sure this structure from the heap is accessed by one thread at a time
* Flags:
* ```c
#define NONCONTIGUOUS_BIT (2U)
#define contiguous(M) (((M)->flags & NONCONTIGUOUS_BIT) == 0)
#define noncontiguous(M) (((M)->flags & NONCONTIGUOUS_BIT) != 0)
#define set_noncontiguous(M) ((M)->flags |= NONCONTIGUOUS_BIT)
#define set_contiguous(M) ((M)->flags &= ~NONCONTIGUOUS_BIT)
```
* The `mchunkptr bins[NBINS * 2 - 2];` contains **pointers** to the **first and last chunks** of the small, large and unsorted **bins** (the -2 is because the index 0 is not used)
* Therefore, the **first chunk** of these bins will have a **backwards pointer to this structure** and the **last chunk** of these bins will have a **forward pointer** to this structure. This basically means that if you can **leak these addresses in the main arena** you will have a pointer to the structure in **libc** (see the small sketch after the `malloc_state` code below).
* The structs `struct malloc_state *next;` and `struct malloc_state *next_free;` are linked lists of arenas
@ -63,20 +95,29 @@ There some interesting things to note from this structure (see C code below):
* The `last remainder` chunk comes from cases where an exact-size chunk is not available and therefore a bigger chunk is split; a pointer to the remaining part is placed here.
```c
// From https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/malloc_state
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/malloc/malloc.c#L1812
struct malloc_state
{
/* Serialize access. */
__libc_lock_define (, mutex);
/* Flags (formerly in max_fast). */
int flags;
/* Set if the fastbin chunks contain recently inserted free blocks. */
/* Note this is a bool but not all targets support atomics on booleans. */
int have_fastchunks;
/* Fastbins */
mfastbinptr fastbinsY[NFASTBINS];
/* Base of the topmost chunk -- not otherwise kept in a bin */
mchunkptr top;
/* The remainder from the most recent split of a small request */
mchunkptr last_remainder;
/* Normal bins packed as described above */
mchunkptr bins[NBINS * 2 - 2];
@ -85,20 +126,20 @@ struct malloc_state
/* Linked list */
struct malloc_state *next;
/* Linked list for free arenas. Access to this field is serialized
by free_list_lock in arena.c. */
struct malloc_state *next_free;
/* Number of threads attached to this arena. 0 if the arena is on
the free list. Access to this field is serialized by
free_list_lock in arena.c. */
INTERNAL_SIZE_T attached_threads;
/* Memory allocated from the system in this arena. */
INTERNAL_SIZE_T system_mem;
INTERNAL_SIZE_T max_system_mem;
};
typedef struct malloc_state *mstate;
```
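As a quick illustration of the libc leak mentioned above, this sketch (assuming a modern glibc where a 0x500-byte request bypasses the tcache and fast bins; reading freed memory is undefined behaviour and is only done here for inspection) frees a chunk into the unsorted bin and prints its `fd`/`bk`, which point back into `main_arena` inside libc:
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(0x500);      /* too big for tcache/fast bins on common builds */
    void *guard = malloc(0x20);   /* prevents consolidation with the top chunk */
    free(a);                      /* a's chunk ends up in the unsorted bin */

    /* fd and bk of the only unsorted chunk point to the bins array of the
       main arena's malloc_state, which lives inside libc's data section. */
    void **freed = (void **)a;
    printf("fd -> %p (inside libc's main_arena)\n", freed[0]);
    printf("bk -> %p\n", freed[1]);
    (void)guard;
    return 0;
}
```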
### malloc\_chunk
@ -106,7 +147,7 @@ typedef struct malloc_state *mstate;
This structure represents a particular chunk of memory. The various fields have different meanings for allocated and unallocated chunks.
```c
// From https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/malloc_chunk
// https://github.com/bminor/glibc/blob/master/malloc/malloc.c
struct malloc_chunk {
INTERNAL_SIZE_T mchunk_prev_size; /* Size of previous chunk, if it is free. */
INTERNAL_SIZE_T mchunk_size; /* Size in bytes, including overhead. */
@ -134,10 +175,10 @@ Then, the space for the user data, and finally 0x08B to indicate the previous ch
Moreover, when available, the user data is used to contain also some data:
* Pointer to the next chunk
* Pointer to the previous chunk
* Size of the next chunk in the list
* Size of the previous chunk in the list
* **`fd`**: Pointer to the next chunk
* **`bk`**: Pointer to the previous chunk
* **`fd_nextsize`**: Pointer to the first chunk in the list that is smaller than itself
* **`bk_nextsize`:** Pointer to the first chunk in the list that is larger than itself
@ -147,6 +188,233 @@ Moreover, when available, the user data is used to contain also some data:
Note how linking the list this way removes the need for an array where every single chunk is registered.
{% endhint %}
### Chunk Pointers
When malloc is called, a pointer to the content that can be written is returned (just after the headers); however, when managing chunks, a pointer to the beginning of the headers (metadata) is needed.\
For these conversions these functions are used:
```c
// https://github.com/bminor/glibc/blob/master/malloc/malloc.c
/* Convert a chunk address to a user mem pointer without correcting the tag. */
#define chunk2mem(p) ((void*)((char*)(p) + CHUNK_HDR_SZ))
/* Convert a user mem pointer to a chunk address and extract the right tag. */
#define mem2chunk(mem) ((mchunkptr)tag_at (((char*)(mem) - CHUNK_HDR_SZ)))
/* The smallest possible chunk */
#define MIN_CHUNK_SIZE (offsetof(struct malloc_chunk, fd_nextsize))
/* The smallest size we can malloc is an aligned minimal chunk */
#define MINSIZE \
(unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
```
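A small usage sketch of those conversions (assuming a typical 64-bit build where `SIZE_SZ` is 8 and the chunk header is 16 bytes; `MEM2CHUNK` is a hypothetical local macro, not glibc's): it walks back from the pointer returned by `malloc` to the chunk header and reads the size field.
```c
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_HDR_SZ (2 * sizeof(size_t))             /* prev_size + size */
#define MEM2CHUNK(mem) ((char *)(mem) - CHUNK_HDR_SZ) /* user pointer -> chunk header */

int main(void) {
    char *mem = malloc(24);
    char *chunk = MEM2CHUNK(mem);                      /* start of the metadata */
    size_t size_field = *(size_t *)(chunk + sizeof(size_t));

    printf("user pointer : %p\n", (void *)mem);
    printf("chunk pointer: %p\n", (void *)chunk);
    printf("size field   : %#zx (low bits are flags, e.g. PREV_INUSE)\n", size_field);
    free(mem);
    return 0;
}
```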
### Alignment & min size
The chunk pointer AND-ed with `0x0f` must be 0, i.e. chunks are aligned to `MALLOC_ALIGNMENT` (16 bytes).
```c
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/sysdeps/generic/malloc-size.h#L61
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
// https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/sysdeps/i386/malloc-alignment.h
#define MALLOC_ALIGNMENT 16
// https://github.com/bminor/glibc/blob/master/malloc/malloc.c
/* Check if m has acceptable alignment */
#define aligned_OK(m) (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)
#define misaligned_chunk(p) \
((uintptr_t)(MALLOC_ALIGNMENT == CHUNK_HDR_SZ ? (p) : chunk2mem (p)) \
& MALLOC_ALIGN_MASK)
/* pad request bytes into a usable size -- internal version */
/* Note: This must be a macro that evaluates to a compile time constant
if passed a literal constant. */
#define request2size(req) \
(((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
MINSIZE : \
((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
/* Check if REQ overflows when padded and aligned and if the resulting
value is less than PTRDIFF_T. Returns the requested size or
MINSIZE in case the value is less than MINSIZE, or 0 if any of the
previous checks fail. */
static inline size_t
checked_request2size (size_t req) __nonnull (1)
{
if (__glibc_unlikely (req > PTRDIFF_MAX))
return 0;
/* When using tagged memory, we cannot share the end of the user
block with the header for the next chunk, so ensure that we
allocate blocks that are rounded up to the granule size. Take
care not to overflow from close to MAX_SIZE_T to a small
number. Ideally, this would be part of request2size(), but that
must be a macro that produces a compile time constant if passed
a constant literal. */
if (__glibc_unlikely (mtag_enabled))
{
/* Ensure this is not evaluated if !mtag_enabled, see gcc PR 99551. */
asm ("");
req = (req + (__MTAG_GRANULE_SIZE - 1)) &
~(size_t)(__MTAG_GRANULE_SIZE - 1);
}
return request2size (req);
}
```
Note that when calculating the total space needed, `SIZE_SZ` is only added once because the `prev_size` field of the following chunk can be used to store data; therefore only the initial header is needed.
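For example, the following sketch re-implements `request2size` with typical 64-bit values (`SIZE_SZ` = 8, `MALLOC_ALIGNMENT` = 16, `MINSIZE` = 32; these are assumptions about the build) and prints the real chunk size obtained for a few request sizes:
```c
#include <stdio.h>
#include <stddef.h>

#define SIZE_SZ            sizeof(size_t)
#define MALLOC_ALIGNMENT   16
#define MALLOC_ALIGN_MASK  (MALLOC_ALIGNMENT - 1)
#define MINSIZE            32
#define request2size(req)                               \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ?    \
   MINSIZE :                                            \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

int main(void) {
    size_t reqs[] = { 0, 8, 24, 25, 100, 0x408 };
    for (size_t i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++)
        printf("malloc(%#zx) -> chunk of %#zx bytes\n",
               reqs[i], (size_t)request2size(reqs[i]));
    return 0;
}
```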
### Get Chunk data and alter metadata
These functions work by receiving a pointer to a chunk and are useful to check/set metadata (a small usage sketch follows after these snippets):
* Check chunk flags
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c
/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
#define PREV_INUSE 0x1
/* extract inuse bit of previous chunk */
#define prev_inuse(p) ((p)->mchunk_size & PREV_INUSE)
/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
#define IS_MMAPPED 0x2
/* check for mmap()'ed chunk */
#define chunk_is_mmapped(p) ((p)->mchunk_size & IS_MMAPPED)
/* size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
from a non-main arena. This is only set immediately before handing
the chunk to the user, if necessary. */
#define NON_MAIN_ARENA 0x4
/* Check for chunk from main arena. */
#define chunk_main_arena(p) (((p)->mchunk_size & NON_MAIN_ARENA) == 0)
/* Mark a chunk as not being on the main arena. */
#define set_non_main_arena(p) ((p)->mchunk_size |= NON_MAIN_ARENA)
```
* Sizes and pointers to other chunks
```c
/*
Bits to mask off when extracting size
Note: IS_MMAPPED is intentionally not masked off from size field in
macros for which mmapped chunks should never be seen. This should
cause helpful core dumps to occur if it is tried by accident by
people extending or adapting this malloc.
*/
#define SIZE_BITS (PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
/* Get size, ignoring use bits */
#define chunksize(p) (chunksize_nomask (p) & ~(SIZE_BITS))
/* Like chunksize, but do not mask SIZE_BITS. */
#define chunksize_nomask(p) ((p)->mchunk_size)
/* Ptr to next physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr) (((char *) (p)) + chunksize (p)))
/* Size of the chunk below P. Only valid if !prev_inuse (P). */
#define prev_size(p) ((p)->mchunk_prev_size)
/* Set the size of the chunk below P. Only valid if !prev_inuse (P). */
#define set_prev_size(p, sz) ((p)->mchunk_prev_size = (sz))
/* Ptr to previous physical malloc_chunk. Only valid if !prev_inuse (P). */
#define prev_chunk(p) ((mchunkptr) (((char *) (p)) - prev_size (p)))
/* Treat space at ptr + offset as a chunk */
#define chunk_at_offset(p, s) ((mchunkptr) (((char *) (p)) + (s)))
```
* Inuse bit
```c
/* extract p's inuse bit */
#define inuse(p) \
((((mchunkptr) (((char *) (p)) + chunksize (p)))->mchunk_size) & PREV_INUSE)
/* set/clear chunk as being inuse without otherwise disturbing */
#define set_inuse(p) \
((mchunkptr) (((char *) (p)) + chunksize (p)))->mchunk_size |= PREV_INUSE
#define clear_inuse(p) \
((mchunkptr) (((char *) (p)) + chunksize (p)))->mchunk_size &= ~(PREV_INUSE)
/* check/set/clear inuse bits in known places */
#define inuse_bit_at_offset(p, s) \
(((mchunkptr) (((char *) (p)) + (s)))->mchunk_size & PREV_INUSE)
#define set_inuse_bit_at_offset(p, s) \
(((mchunkptr) (((char *) (p)) + (s)))->mchunk_size |= PREV_INUSE)
#define clear_inuse_bit_at_offset(p, s) \
(((mchunkptr) (((char *) (p)) + (s)))->mchunk_size &= ~(PREV_INUSE))
```
* Set head and footer (when the chunk is not in use)
```c
/* Set size at head, without disturbing its use bit */
#define set_head_size(p, s) ((p)->mchunk_size = (((p)->mchunk_size & SIZE_BITS) | (s)))
/* Set size/use field */
#define set_head(p, s) ((p)->mchunk_size = (s))
/* Set size at footer (only when chunk is not in use) */
#define set_foot(p, s) (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size = (s))
```
* Get the size of the real usable data inside the chunk
```c
#pragma GCC poison mchunk_size
#pragma GCC poison mchunk_prev_size
/* This is the size of the real usable data in the chunk. Not valid for
dumped heap chunks. */
#define memsize(p) \
(__MTAG_GRANULE_SIZE > SIZE_SZ && __glibc_unlikely (mtag_enabled) ? \
chunksize (p) - CHUNK_HDR_SZ : \
chunksize (p) - CHUNK_HDR_SZ + (chunk_is_mmapped (p) ? 0 : SIZE_SZ))
/* If memory tagging is enabled the layout changes to accommodate the granule
size, this is wasteful for small allocations so not done by default.
Both the chunk header and user data has to be granule aligned. */
_Static_assert (__MTAG_GRANULE_SIZE <= CHUNK_HDR_SZ,
"memory tagging is not supported with large granule.");
static __always_inline void *
tag_new_usable (void *ptr)
{
if (__glibc_unlikely (mtag_enabled) && ptr)
{
mchunkptr cp = mem2chunk(ptr);
ptr = __libc_mtag_tag_region (__libc_mtag_new_tag (ptr), memsize (cp));
}
return ptr;
}
```
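The sketch below shows the effect of some of the previous metadata fields without using glibc internals directly (assuming a typical 64-bit glibc where a 0x500-byte request bypasses the tcache and fast bins): freeing a chunk clears the `PREV_INUSE` bit of the next physical chunk and writes the freed size into its `prev_size` field.
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *a = malloc(0x500);   /* big enough to skip tcache/fast bins */
    char *b = malloc(0x20);    /* next physical chunk after a */

    size_t *b_size = (size_t *)(b - sizeof(size_t));       /* b's size field */
    size_t *b_prev = (size_t *)(b - 2 * sizeof(size_t));   /* b's prev_size field */
    printf("b size before free(a): %#zx (PREV_INUSE set)\n", *b_size);

    free(a);  /* a goes to the unsorted bin; the next chunk's metadata is updated */
    printf("b size after  free(a): %#zx (PREV_INUSE cleared)\n", *b_size);
    printf("b prev_size          : %#zx (size of the freed chunk)\n", *b_prev);
    return 0;
}
```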
## Examples
### Quick Heap Example
Quick heap example from [https://guyinatuxedo.github.io/25-heap/index.html](https://guyinatuxedo.github.io/25-heap/index.html) but in arm64:
@ -178,6 +446,68 @@ The extra spaces reserved (0x21-0x10=0x11) comes from the **added headers** (0x1
0x4: Non Main Arena - Specifies that the chunk was obtained from outside of the main arena
```
### Multithreading Example
<details>
<summary>Multithread</summary>
```c
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/types.h>
void* threadFuncMalloc(void* arg) {
printf("Hello from thread 1\n");
char* addr = (char*) malloc(1000);
printf("After malloc and before free in thread 1\n");
free(addr);
printf("After free in thread 1\n");
}
void* threadFuncNoMalloc(void* arg) {
printf("Hello from thread 2\n");
}
int main() {
pthread_t t1;
void* s;
int ret;
char* addr;
printf("Before creating thread 1\n");
getchar();
ret = pthread_create(&t1, NULL, threadFuncMalloc, NULL);
getchar();
printf("Before creating thread 2\n");
ret = pthread_create(&t1, NULL, threadFuncNoMalloc, NULL);
printf("Before exit\n");
getchar();
return 0;
}
```
</details>
Debugging the previous example it's possible to see how at the beginning there is only 1 arena:
<figure><img src="../../.gitbook/assets/image.png" alt=""><figcaption></figcaption></figure>
Then, after calling the first thread, the one that calls malloc, a new arena is created:
<figure><img src="../../.gitbook/assets/image (1).png" alt=""><figcaption></figcaption></figure>
and inside of it some chunks can be found:
<figure><img src="../../.gitbook/assets/image (2).png" alt=""><figcaption></figcaption></figure>
## Bins & Memory Allocations/Frees
Check what the bins are, how they are organized and how memory is allocated and freed in:
@ -190,8 +520,8 @@ Check what are the bins and how are they organized and how memory is allocated a
Functions involved in the heap will perform certain checks before performing their actions to try to make sure the heap wasn't corrupted:
{% content-ref url="heap-functions-security-checks.md" %}
[heap-functions-security-checks.md](heap-functions-security-checks.md)
{% content-ref url="heap-memory-functions/heap-functions-security-checks.md" %}
[heap-functions-security-checks.md](heap-memory-functions/heap-functions-security-checks.md)
{% endcontent-ref %}
## References

View file

@ -20,56 +20,6 @@ In order to improve the efficiency on how chunks are stored every chunk is not j
The initial address to each unsorted, small and large bins is inside the same array. The index 0 is unused, 1 is the unsorted bin, bins 2-64 are small bins and bins 65-127 are large bins.
### Small Bins
Small bins are faster than large bins but slower than fast bins.
Each bin of the 62 will have **chunks of the same size**: 16, 24, ... (with a max size of 504 bytes in 32bits and 1024 in 64bits). This helps in the speed on finding the bin where a space should be allocated and inserting and removing of entries on these lists.
### Large bins
Unlike small bins, which manage chunks of fixed sizes, each **large bin handle a range of chunk sizes**. This is more flexible, allowing the system to accommodate **various sizes** without needing a separate bin for each size.
In a memory allocator, large bins start where small bins end. The ranges for large bins grow progressively larger, meaning the first bin might cover chunks from 512 to 576 bytes, while the next covers 576 to 640 bytes. This pattern continues, with the largest bin containing all chunks above 1MB.
Large bins are slower to operate compared to small bins because they must **sort and search through a list of varying chunk sizes to find the best fit** for an allocation. When a chunk is inserted into a large bin, it has to be sorted, and when memory is allocated, the system must find the right chunk. This extra work makes them **slower**, but since large allocations are less common than small ones, it's an acceptable trade-off.
There are:
* 32 bins of 64B range
* 16 bins of 512B range
* 8bins of 4096B range
* 4bins of 32768B range
* 2bins of 262144B range
* 1bins of for reminding sizes
### Unsorted bin
The unsorted bin is a **fast cache** used by the heap manager to make memory allocation quicker. Here's how it works: When a program frees memory, the heap manager doesn't immediately put it in a specific bin. Instead, it first tries to **merge it with any neighbouring free chunks** to create a larger block of free memory. Then, it places this new chunk in a general bin called the "unsorted bin."
When a program **asks for memory**, the heap manager **checks the unsorted bin** to see if there's a chunk of enough size. If it finds one, it uses it right away. If it doesn't find a suitable chunk, it moves the freed chunks to their corresponding bins, either small or large, based on their size.
So, the unsorted bin is a way to speed up memory allocation by quickly reusing recently freed memory and reducing the need for time-consuming searches and merges.
{% hint style="danger" %}
Note that even in chunks are of different categories, if an available chunk is colliding with another available chunk (even if they are of different categories), they will be merged.
{% endhint %}
### Fast bins
Fast bins are designed to **speed up memory allocation for small chunks** by keeping recently freed chunks in a quick-access structure. These bins use a Last-In, First-Out (LIFO) approach, which means that the **most recently freed chunk is the first** to be reused when there's a new allocation request. This behavior is advantageous for speed, as it's faster to insert and remove from the top of a stack (LIFO) compared to a queue (FIFO).
Additionally, **fast bins use singly linked lists**, not double linked, which further improves speed. Since chunks in fast bins aren't merged with neighbours, there's no need for a complex structure that allows removal from the middle. A singly linked list is simpler and quicker for these operations.
Basically, what happens here is that the header (the pointer to the first chunk to check) is always pointing to the latest freed chunk of that size. So:
* When a new chunk is allocated of that size, the header is pointing to a free chunk to use. As this free chunk is pointing to the next one to use, this address is stored in the header so the next allocation knows where to get ana available chunk
* When a chunk is freed, the free chunk will save the address to the current available chunk and the address to this newly freed chunk will be pu in the header
{% hint style="danger" %}
Chunks in fast bins aren't automatically set as available so they keep as fast bin chunks for some time instead of being able to merge with other chunks.
{% endhint %}
### Tcache (Per-Thread Cache) Bins
Even though threads try to have their own heap (see [Arenas](bins-and-memory-allocations.md#arenas) and [Subheaps](bins-and-memory-allocations.md#subheaps)), there is the possibility that a process with a lot of threads (like a web server) **will end up sharing the heap with other threads**. In this case, the main solution is the use of **lockers**, which might **slow down the threads significantly**.
@ -81,114 +31,213 @@ Therefore, a tcache is similar to a fast bin per thread in the way that it's a *
When a **chunk is allocated**, if there is a free chunk of the needed size in the **Tcache it'll use it**, if not, it'll need to wait for the heap lock to be able to find one in the global bins or create a new one.\
There is also an optimization: in this case, while holding the heap lock, the thread **will fill its Tcache with heap chunks (7) of the requested size**, so in case it needs more, it'll find them in the Tcache.
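A trivial way to observe the tcache (assuming a glibc ≥ 2.26 with the tcache enabled) is that freeing a small chunk and immediately requesting the same size returns the very same pointer, without ever touching the arena bins:
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(0x40);
    free(a);                  /* stored in this thread's tcache bin for that size */
    void *b = malloc(0x40);   /* served straight from the tcache */
    printf("a = %p\nb = %p (same address expected)\n", a, b);
    return 0;
}
```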
## Allocation Flow
### Fast bins
{% hint style="success" %}
(This current explanation is from [https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions](https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions). TODO: Check last version and update it)
Fast bins are designed to **speed up memory allocation for small chunks** by keeping recently freed chunks in a quick-access structure. These bins use a Last-In, First-Out (LIFO) approach, which means that the **most recently freed chunk is the first** to be reused when there's a new allocation request. This behavior is advantageous for speed, as it's faster to insert and remove from the top of a stack (LIFO) compared to a queue (FIFO).
Additionally, **fast bins use singly linked lists**, not double linked, which further improves speed. Since chunks in fast bins aren't merged with neighbours, there's no need for a complex structure that allows removal from the middle. A singly linked list is simpler and quicker for these operations.
Basically, what happens here is that the header (the pointer to the first chunk to check) is always pointing to the latest freed chunk of that size. So:
* When a new chunk of that size is allocated, the header is pointing to a free chunk to use. As this free chunk points to the next one to use, this address is stored in the header so the next allocation knows where to get an available chunk
* When a chunk is freed, the free chunk will save the address to the current available chunk and the address to this newly freed chunk will be put in the header
{% hint style="danger" %}
Chunks in fast bins aren't set as available, so they are kept as fast bin chunks for some time instead of being merged with other free chunks surrounding them.
{% endhint %}
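The LIFO behaviour can be observed with a sketch like the following (assumptions: a glibc with tcache, so the 7 tcache slots of that size must be filled first for the frees to actually reach the fast bin; exact behaviour depends on the glibc version and tunables):
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *tc[7];
    for (int i = 0; i < 7; i++) tc[i] = malloc(0x20);
    void *a = malloc(0x20), *b = malloc(0x20);

    for (int i = 0; i < 7; i++) free(tc[i]);  /* fills the 0x30 tcache bin */
    free(a);                                  /* goes to the 0x30 fast bin */
    free(b);                                  /* fast bin head now points to b */

    for (int i = 0; i < 7; i++) malloc(0x20); /* drain the tcache again */
    printf("a = %p, b = %p\n", a, b);
    printf("1st malloc: %p (expected b, the last freed)\n", malloc(0x20));
    printf("2nd malloc: %p (expected a)\n", malloc(0x20));
    return 0;
}
```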
Allocations are finally performed with the function `void * _int_malloc (mstate av, size_t bytes)`, which follows this order:
```c
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/malloc/malloc.c#L1711
1. Updates `bytes` to take care of **alignments**, etc.
2. Checks if `av` is **NULL** or not.
3. In the case of absence of **usable arena** (when `av` is NULL), calls `sysmalloc` to obtain chunk using mmap. If successful, calls `alloc_perturb`. Returns the pointer.
4. Depending on the size:
* \[Addition to the original] Use tcache before checking the next fastbin.
* \[Addition to the original] If no tcache but a different bin is used (see later step), try to fill tcache from that bin
* If size falls in the **fastbin** range:&#x20;
1. Get index into the fastbin array to access an appropriate bin according to the request size.&#x20;
2. Removes the first chunk in that bin and make `victim` point to it.
3. If `victim` is NULL, move on to the next case (smallbin).
4. If `victim` is not NULL, check the size of the chunk to ensure that it belongs to that particular bin. An error ("malloc(): memory corruption (fast)") is thrown otherwise.
5. Calls `alloc_perturb` and then returns the pointer.
* If size falls in the **smallbin** range:
1. Get index into the smallbin array to access an appropriate bin according to the request size.
2. If there are no chunks in this bin, move on to the next case. This is checked by comparing the pointers `bin` and `bin->bk`.
3. `victim` is made equal to `bin->bk` (the last chunk in the bin). If it is NULL (happens during `initialization`), call `malloc_consolidate` and skip this complete step of checking into different bins.
4. Otherwise, when `victim` is non NULL, check if `victim->bk->fd` and `victim` are equal or not. If they are not equal, an error (`malloc(): smallbin double linked list corrupted`) is thrown.
5. Sets the PREV\_INUSE bit for the next chunk (in memory, not in the doubly linked list) for `victim`.
6. Remove this chunk from the bin list.
7. Set the appropriate arena bit for this chunk depending on `av`.
8. Calls `alloc_perturb` and then returns the pointer.
* If size does not fall in the smallbin range:
1. Get index into the largebin array to access an appropriate bin according to the request size.
2. See if `av` has fastchunks or not. This is done by checking the `FASTCHUNKS_BIT` in `av->flags`. If so, call `malloc_consolidate` on `av`.
5. If no pointer has yet been returned, this signifies one or more of the following cases:
1. Size falls into 'fastbin' range but no fastchunk is available.
2. Size falls into 'smallbin' range but no smallchunk is available (calls `malloc_consolidate` during initialization).
3. Size falls into 'largebin' range.
6. Next, **unsorted chunks** are checked and traversed chunks are placed into bins. This is the only place where chunks are placed into bins. Iterate the unsorted bin from the 'TAIL'.
1. `victim` points to the current chunk being considered.
2. Check if `victim`'s chunk size is within minimum (`2*SIZE_SZ`) and maximum (`av->system_mem`) range. Throw an error (`malloc(): memory corruption`) otherwise.
3. If (size of requested chunk falls in smallbin range) and (`victim` is the last remainder chunk) and (it is the only chunk in the unsorted bin) and (the chunks size >= the one requested): **Break the chunk into 2 chunks**:
* The first chunk matches the size requested and is returned.
* Left over chunk becomes the new last remainder chunk. It is inserted back into the unsorted bin.
1. Set `chunk_size` and `chunk_prev_size` fields appropriately for both chunks.
2. The first chunk is returned after calling `alloc_perturb`.
3. If the above condition is false, control reaches here. Remove `victim` from the unsorted bin. If the size of `victim` matches the size requested exactly, return this chunk after calling `alloc_perturb`.
4. If `victim`'s size falls in smallbin range, add the chunk in the appropriate smallbin at the `HEAD`.
5. Else insert into appropriate largebin while maintaining sorted order:
6. First checks the last chunk (smallest). If `victim` is smaller than the last chunk, insert it at the last.
7. Otherwise, loop to find a chunk with size >= size of `victim`. If size is exactly same, always insert in the second position.
8. Repeat this whole step a maximum of `MAX_ITERS` (10000) times or till all chunks in unsorted bin get exhausted.
7. After checking unsorted chunks, check if requested size does not fall in the smallbin range, if so then check **largebins**.
1. Get index into largebin array to access an appropriate bin according to the request size.
2. If the size of the largest chunk (the first chunk in the bin) is greater than the size requested:
1. Iterate from 'TAIL' to find a chunk (`victim`) with the smallest size >= the requested size.
2. Call `unlink` to remove the `victim` chunk from the bin.
3. Calculate `remainder_size` for the `victim`'s chunk (this will be `victim`'s chunk size - requested size).
4. If this `remainder_size` >= `MINSIZE` (the minimum chunk size including the headers), split the chunk into two chunks. Otherwise, the entire `victim` chunk will be returned. Insert the remainder chunk in the unsorted bin (at the 'TAIL' end). A check is made in unsorted bin whether `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`. An error is thrown otherwise ("malloc(): corrupted unsorted chunks").
5. Return the `victim` chunk after calling `alloc_perturb`.
8. Till now, we have checked unsorted bin and also the respective fast, small or large bin. Note that a single bin (fast or small) was checked using the **exact** size of the requested chunk. Repeat the following steps till all bins are exhausted:
1. The index into bin array is incremented to check the next bin.
2. Use `av->binmap` map to skip over bins that are empty.
3. `victim` is pointed to the 'TAIL' of the current bin.
4. Using the binmap ensures that if a bin is skipped (in the above 2nd step), it is definitely empty. However, it does not ensure that all empty bins will be skipped. Check if the victim is empty or not. If empty, again skip the bin and repeat the above process (or 'continue' this loop) till we arrive at a nonempty bin.
5. Split the chunk (`victim` points to the last chunk of a nonempty bin) into two chunks. Insert the remainder chunk in unsorted bin (at the 'TAIL' end). A check is made in the unsorted bin whether `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`. An error is thrown otherwise ("malloc(): corrupted unsorted chunks 2").
6. Return the `victim` chunk after calling `alloc_perturb`.
9. If still no empty bin is found, 'top' chunk will be used to service the request:
1. `victim` points to `av->top`.
2. If size of 'top' chunk >= 'requested size' + `MINSIZE`, split it into two chunks. In this case, the remainder chunk becomes the new 'top' chunk and the other chunk is returned to the user after calling `alloc_perturb`.
3. See if `av` has fastchunks or not. This is done by checking the `FASTCHUNKS_BIT` in `av->flags`. If so, call `malloc_consolidate` on `av`. Return to step 6 (where we check unsorted bin).
4. If `av` does not have fastchunks, call `sysmalloc` and return the pointer obtained after calling `alloc_perturb`.
/*
Fastbins
An array of lists holding recently freed small chunks. Fastbins
are not doubly linked. It is faster to single-link them, and
since chunks are never removed from the middles of these lists,
double linking is not necessary. Also, unlike regular bins, they
are not even processed in FIFO order (they use faster LIFO) since
ordering doesn't much matter in the transient contexts in which
fastbins are normally used.
Chunks in fastbins keep their inuse bit set, so they cannot
be consolidated with other free chunks. malloc_consolidate
releases all chunks in fastbins and consolidates them with
other free chunks.
*/
typedef struct malloc_chunk *mfastbinptr;
#define fastbin(ar_ptr, idx) ((ar_ptr)->fastbinsY[idx])
/* offset 2 to use otherwise unindexable first 2 bins */
#define fastbin_index(sz) \
((((unsigned int) (sz)) >> (SIZE_SZ == 8 ? 4 : 3)) - 2)
/* The maximum fastbin request size we support */
#define MAX_FAST_SIZE (80 * SIZE_SZ / 4)
#define NFASTBINS (fastbin_index (request2size (MAX_FAST_SIZE)) + 1)
```
### Unsorted bin
The unsorted bin is a **cache** used by the heap manager to make memory allocation quicker. Here's how it works: When a program frees a chunk, and if this chunk cannot be allocated in a tcache or fast bin and is not colliding with the top chunk, the heap manager doesn't immediately put it in a specific small or large bin. Instead, it first tries to **merge it with any neighbouring free chunks** to create a larger block of free memory. Then, it places this new chunk in a general bin called the "unsorted bin."
When a program **asks for memory**, the heap manager **checks the unsorted bin** to see if there's a chunk of enough size. If it finds one, it uses it right away. If it doesn't find a suitable chunk in the unsorted bin, it moves all the chunks in this list to their corresponding bins, either small or large, based on their size.
Note that if a larger chunk is split in two and the rest is larger than MINSIZE, it'll be placed back into the unsorted bin.
So, the unsorted bin is a way to speed up memory allocation by quickly reusing recently freed memory and reducing the need for time-consuming searches and merges.
{% hint style="danger" %}
Note that even if chunks are of different categories, if an available chunk is colliding with another available chunk (even if they belong originally to different bins), they will be merged.
{% endhint %}
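This quick reuse can be seen with a sketch like this one (assuming a glibc where a 0x500-byte request is too big for the tcache and fast bins, so the freed chunk really goes through the unsorted bin):
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(0x500);
    void *guard = malloc(0x20);  /* keeps the freed chunk away from the top chunk */
    free(a);                     /* a's chunk lands in the unsorted bin */

    void *b = malloc(0x500);     /* exact fit found in the unsorted bin */
    printf("a = %p\nb = %p (same address expected)\n", a, b);
    (void)guard;
    return 0;
}
```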
### Small Bins
Small bins are faster than large bins but slower than fast bins.
Each of the 62 bins will have **chunks of the same size**: 16, 24, ... (with a max size of 504 bytes in 32 bits and 1024 in 64 bits). This helps with the speed of finding the bin where a space should be allocated and of inserting and removing entries on these lists.
This is how the size of the small bin is calculated according to the index of the bin:
* 32 bits: 2\*4\*index (e.g. index 5 -> 40)
* 64 bits: 2\*8\*index (e.g. index 5 -> 80)
```c
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/malloc/malloc.c#L1711
#define NSMALLBINS 64
#define SMALLBIN_WIDTH MALLOC_ALIGNMENT
#define SMALLBIN_CORRECTION (MALLOC_ALIGNMENT > CHUNK_HDR_SZ)
#define MIN_LARGE_SIZE ((NSMALLBINS - SMALLBIN_CORRECTION) * SMALLBIN_WIDTH)
#define in_smallbin_range(sz) \
((unsigned long) (sz) < (unsigned long) MIN_LARGE_SIZE)
#define smallbin_index(sz) \
((SMALLBIN_WIDTH == 16 ? (((unsigned) (sz)) >> 4) : (((unsigned) (sz)) >> 3))\
+ SMALLBIN_CORRECTION)
```
Macro used to choose between small and large bins:
```c
#define bin_index(sz) \
((in_smallbin_range (sz)) ? smallbin_index (sz) : largebin_index (sz))
```
### Large bins
Unlike small bins, which manage chunks of fixed sizes, each **large bin handle a range of chunk sizes**. This is more flexible, allowing the system to accommodate **various sizes** without needing a separate bin for each size.
In a memory allocator, large bins start where small bins end. The ranges for large bins grow progressively larger, meaning the first bin might cover chunks from 512 to 576 bytes, while the next covers 576 to 640 bytes. This pattern continues, with the largest bin containing all chunks above 1MB.
Large bins are slower to operate compared to small bins because they must **sort and search through a list of varying chunk sizes to find the best fit** for an allocation. When a chunk is inserted into a large bin, it has to be sorted, and when memory is allocated, the system must find the right chunk. This extra work makes them **slower**, but since large allocations are less common than small ones, it's an acceptable trade-off.
There are:
* 32 bins of 64B range (collide with small bins)
* 16 bins of 512B range (collide with small bins)
* 8 bins of 4096B range (partly colliding with small bins)
* 4 bins of 32768B range
* 2 bins of 262144B range
* 1 bin for remaining sizes
```c
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/malloc/malloc.c#L1711
#define largebin_index_32(sz) \
(((((unsigned long) (sz)) >> 6) <= 38) ? 56 + (((unsigned long) (sz)) >> 6) :\
((((unsigned long) (sz)) >> 9) <= 20) ? 91 + (((unsigned long) (sz)) >> 9) :\
((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
126)
#define largebin_index_32_big(sz) \
(((((unsigned long) (sz)) >> 6) <= 45) ? 49 + (((unsigned long) (sz)) >> 6) :\
((((unsigned long) (sz)) >> 9) <= 20) ? 91 + (((unsigned long) (sz)) >> 9) :\
((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
126)
// XXX It remains to be seen whether it is good to keep the widths of
// XXX the buckets the same or whether it should be scaled by a factor
// XXX of two as well.
#define largebin_index_64(sz) \
(((((unsigned long) (sz)) >> 6) <= 48) ? 48 + (((unsigned long) (sz)) >> 6) :\
((((unsigned long) (sz)) >> 9) <= 20) ? 91 + (((unsigned long) (sz)) >> 9) :\
((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
126)
#define largebin_index(sz) \
(SIZE_SZ == 8 ? largebin_index_64 (sz) \
: MALLOC_ALIGNMENT == 16 ? largebin_index_32_big (sz) \
: largebin_index_32 (sz))
```
### Top Chunk
```c
// From https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/malloc/malloc.c#L1711
/*
Top
The top-most available chunk (i.e., the one bordering the end of
available memory) is treated specially. It is never included in
any bin, is used only if no other chunk is available, and is
released back to the system if it is very large (see
M_TRIM_THRESHOLD). Because top initially
points to its own bin with initial zero size, thus forcing
extension on the first malloc request, we avoid having any special
code in malloc to check whether it even exists yet. But we still
need to do so when getting memory from system, so we make
initial_top treat the bin as a legal but unusable chunk during the
interval between initialization and the first call to
sysmalloc. (This is somewhat delicate, since it relies on
the 2 preceding words to be zero during this interval as well.)
*/
/* Conveniently, the unsorted bin can be used as dummy top on first call */
#define initial_top(M) (unsorted_chunks (M))
```
Basically, this is a chunk containing all the currently available heap. When a malloc is performed, if there isn't any available free chunk to use, this top chunk will reduce its size to give the necessary space.\
The pointer to the Top Chunk is stored in the `malloc_state` struct.
Moreover, at the beginning, it's possible to use the unsorted chunk as the top chunk.
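This carving from the top chunk is easy to observe: with no free chunks around, consecutive allocations return adjacent addresses, each one request-plus-header bytes after the previous one (a small sketch, assuming a fresh 64-bit heap):
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* With an empty heap every request is carved from the top chunk,
       so the pointers are adjacent (0x30 apart for a 0x28-byte request). */
    char *a = malloc(0x28);
    char *b = malloc(0x28);
    char *c = malloc(0x28);
    printf("a = %p\nb = %p (a + %#lx)\nc = %p (b + %#lx)\n",
           (void *)a, (void *)b, (unsigned long)(b - a),
           (void *)c, (unsigned long)(c - b));
    return 0;
}
```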
### Last Remainder
When malloc is used and a chunk is divided (from the unsorted bin or from the top chunk, for example), the chunk created from the rest of the divided chunk is called the last remainder and its pointer is stored in the `malloc_state` struct.
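A sketch of how the last remainder appears (assuming a typical 64-bit glibc; the 0x500/0x100 sizes are arbitrary examples): a big free chunk is split by a smaller request, the first part is returned and the rest becomes `last_remainder`, so the next similar request is carved right after the previous one.
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *big = malloc(0x500);
    void *guard = malloc(0x20);   /* avoid merging with the top chunk */
    free(big);                    /* 0x510-byte chunk goes to the unsorted bin */

    void *s1 = malloc(0x100);     /* splits the big chunk; the rest becomes last_remainder */
    void *s2 = malloc(0x100);     /* usually carved from that remainder */
    printf("big = %p\n", big);
    printf("s1  = %p (often the same address as big)\n", s1);
    printf("s2  = %p (usually right after s1)\n", s2);
    (void)guard;
    return 0;
}
```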
## Allocation Flow
Check out:
{% content-ref url="heap-memory-functions/malloc-and-sysmalloc.md" %}
[malloc-and-sysmalloc.md](heap-memory-functions/malloc-and-sysmalloc.md)
{% endcontent-ref %}
## Free Flow
{% hint style="success" %}
(This current explanation is from [https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions](https://heap-exploitation.dhavalkapil.com/diving\_into\_glibc\_heap/core\_functions). TODO: Check last version and update it)
{% endhint %}
Check out:
The final function freeing chunks of memory is `_int_free (mstate av, mchunkptr p, int have_lock)` :
1. Check whether `p` is before `p + chunksize(p)` in the memory (to avoid wrapping). An error (`free(): invalid pointer`) is thrown otherwise.
2. Check whether the chunk is at least of size `MINSIZE` or a multiple of `MALLOC_ALIGNMENT`. An error (`free(): invalid size`) is thrown otherwise.
3. If the chunk's size falls in fastbin list:
1. Check if next chunk's size is between minimum and maximum size (`av->system_mem`), throw an error (`free(): invalid next size (fast)`) otherwise.
2. Calls `free_perturb` on the chunk.
3. Set `FASTCHUNKS_BIT` for `av`.
4. Get index into fastbin array according to chunk size.
5. Check if the top of the bin is not the chunk we are going to add. Otherwise, throw an error (`double free or corruption (fasttop)`).
6. Check if the size of the fastbin chunk at the top is the same as the chunk we are adding. Otherwise, throw an error (`invalid fastbin entry (free)`).
7. Insert the chunk at the top of the fastbin list and return.
4. If the chunk is not mmapped:
1. Check if the chunk is the top chunk or not. If yes, an error (`double free or corruption (top)`) is thrown.
2. Check whether next chunk (by memory) is within the boundaries of the arena. If not, an error (`double free or corruption (out)`) is thrown.
3. Check whether next chunk's (by memory) previous in use bit is marked or not. If not, an error (`double free or corruption (!prev)`) is thrown.
4. Check whether the size of next chunk is between the minimum and maximum size (`av->system_mem`). If not, an error (`free(): invalid next size (normal)`) is thrown.
5. Call `free_perturb` on the chunk.
6. If previous chunk (by memory) is not in use, call `unlink` on the previous chunk.
7. If next chunk (by memory) is not top chunk:
1. If next chunk is not in use, call `unlink` on the next chunk.
2. Merge the chunk with previous, next (by memory), if any is free and add it to the head of unsorted bin. Before inserting, check whether `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)` or not. If not, an error ("free(): corrupted unsorted chunks") is thrown.
8. If next chunk (by memory) was a top chunk, merge the chunks appropriately into a single top chunk.
5. If the chunk was mmapped, call `munmap_chunk`.
{% content-ref url="heap-memory-functions/free.md" %}
[free.md](heap-memory-functions/free.md)
{% endcontent-ref %}
## Heap Functions Security Checks
Check the security checks performed by heavily used functions in heap in:
{% content-ref url="heap-functions-security-checks.md" %}
[heap-functions-security-checks.md](heap-functions-security-checks.md)
{% content-ref url="heap-memory-functions/heap-functions-security-checks.md" %}
[heap-functions-security-checks.md](heap-memory-functions/heap-functions-security-checks.md)
{% endcontent-ref %}
## References

View file

@ -1,92 +0,0 @@
# Heap Functions Security Checks
## unlink
This function removes a chunk from a doubly linked list. Common checks ensure that the linked list structure remains consistent when unlinking chunks.
* **Consistency Checks**:
* Check if `P->fd->bk == P` and `P->bk->fd == P`.
* Error message: `corrupted double-linked list`
## \_int\_malloc
This function is responsible for allocating memory from the heap. Checks here ensure memory is not corrupted during allocation.
* **Fastbin Size Check**:
* When removing a chunk from a fastbin, ensure the chunk's size is within the fastbin range.
* Error message: `malloc(): memory corruption (fast)`
* **Smallbin Consistency Check**:
* When removing a chunk from a smallbin, ensure the previous and next links in the doubly linked list are consistent.
* Error message: `malloc(): smallbin double linked list corrupted`
* **Unsorted Bin Memory Range Check**:
* Ensure the size of chunks in the unsorted bin is within minimum and maximum limits.
* Error message: `malloc(): memory corruption | malloc(): invalid next size (unsorted)`
* **Unsorted Bin Consistency Check (First Scenario)**:
* When inserting a remainder chunk into the unsorted bin, check if `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`.
* Error message: `malloc(): corrupted unsorted chunks`
* **Unsorted Bin Consistency Check (Second Scenario)**:
* Same as the previous check, but triggered when inserting after splitting a fast or small chunk.
* Error message: `malloc(): corrupted unsorted chunks 2`
## \_int\_free
This function frees previously allocated memory. The checks here help ensure proper memory deallocation and prevent memory corruption.
* **Pointer Boundary Check**:
* Ensure the pointer being freed isn't wrapping around the memory.
* Error message: `free(): invalid pointer`
* **Size Check**:
* Ensure the size of the chunk being freed is at least `MINSIZE` or a multiple of `MALLOC_ALIGNMENT`.
* Error message: `free(): invalid size`
* **Fastbin Size Check**:
* For fastbin chunks, ensure the next chunk's size is within the minimum and maximum limits.
* Error message: `free(): invalid next size (fast)`
* **Fastbin Double Free Check**:
* When inserting a chunk into a fastbin, ensure the chunk at the head isn't the same as the one being inserted.
* Error message: `double free or corruption (fasttop)`
* **Fastbin Consistency Check**:
* When inserting into a fastbin, ensure the sizes of the head chunk and the chunk being inserted are the same.
* Error message: `invalid fastbin entry (free)`
* **Top Chunk Consistency Check**:
* For non-fastbin chunks, ensure the chunk isn't the same as the top chunk.
* Error message: `double free or corruption (top)`
* **Memory Boundaries Check**:
* Ensure the next chunk by memory is within the boundaries of the arena.
* Error message: `double free or corruption (out)`
* **Prev\_inuse Bit Check**:
* Ensure the previous-in-use bit in the next chunk is marked.
* Error message: `double free or corruption (!prev)`
* **Normal Size Check**:
* Ensure the size of the next chunk is within valid ranges.
* Error message: `free(): invalid next size (normal)`
* **Unsorted Bin Consistency Check**:
* When inserting a coalesced chunk into the unsorted bin, check if `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`.
* Error message: `free(): corrupted unsorted chunks`

View file

@ -0,0 +1,31 @@
# Heap Memory Functions

View file

@ -0,0 +1,410 @@
# free
## Free Order Summary <a href="#libc_free" id="libc_free"></a>
(No checks are explained in this summary and some cases have been omitted for brevity)
1. If the address is null don't do anything
2. If the chunk was mmaped, munmap it and finish
3. Call `_int_free`:
1. If possible, add the chunk to the tcache
2. If possible, add the chunk to the fast bin
3. Call `_int_free_merge_chunk` to consolidate the chunk if needed and add it to the unsorted list
## \_\_libc\_free <a href="#libc_free" id="libc_free"></a>
`free` calls `__libc_free`.
* If the address passed is NULL (0) don't do anything.
* Check the pointer tag (memory tagging)
* If the chunk is `mmaped`, `munmap` it and that's all
* If not, re-tag the memory (mark it as belonging to the library again) and call `_int_free` over it
<details>
<summary>__libc_free code</summary>
```c
void
__libc_free (void *mem)
{
mstate ar_ptr;
mchunkptr p; /* chunk corresponding to mem */
if (mem == 0) /* free(0) has no effect */
return;
/* Quickly check that the freed pointer matches the tag for the memory.
This gives a useful double-free detection. */
if (__glibc_unlikely (mtag_enabled))
*(volatile char *)mem;
int err = errno;
p = mem2chunk (mem);
if (chunk_is_mmapped (p)) /* release mmapped memory. */
{
/* See if the dynamic brk/mmap threshold needs adjusting.
Dumped fake mmapped chunks do not affect the threshold. */
if (!mp_.no_dyn_threshold
&& chunksize_nomask (p) > mp_.mmap_threshold
&& chunksize_nomask (p) <= DEFAULT_MMAP_THRESHOLD_MAX)
{
mp_.mmap_threshold = chunksize (p);
mp_.trim_threshold = 2 * mp_.mmap_threshold;
LIBC_PROBE (memory_mallopt_free_dyn_thresholds, 2,
mp_.mmap_threshold, mp_.trim_threshold);
}
munmap_chunk (p);
}
else
{
MAYBE_INIT_TCACHE ();
/* Mark the chunk as belonging to the library again. */
(void)tag_region (chunk2mem (p), memsize (p));
ar_ptr = arena_for_chunk (p);
_int_free (ar_ptr, p, 0);
}
__set_errno (err);
}
libc_hidden_def (__libc_free)
```
</details>
## \_int\_free <a href="#int_free" id="int_free"></a>
### \_int\_free start <a href="#int_free" id="int_free"></a>
It starts with some checks making sure:
* the **pointer** is **aligned**, otherwise trigger the error `free(): invalid pointer`
* the **size** isn't less than the minimum and the **size** is also **aligned**, otherwise trigger the error `free(): invalid size`
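For instance, freeing a misaligned pointer trips the first check (a minimal sketch, assuming a modern glibc):

```c
#include <stdlib.h>

int main(void)
{
    char *p = malloc(0x40);
    free(p + 1);   // misaligned chunk pointer -> aborts with "free(): invalid pointer"
    return 0;
}
```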
<details>
<summary>_int_free start</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L4493C1-L4513C28
#define aligned_OK(m) (((unsigned long) (m) &MALLOC_ALIGN_MASK) == 0)
static void
_int_free (mstate av, mchunkptr p, int have_lock)
{
INTERNAL_SIZE_T size; /* its size */
mfastbinptr *fb; /* associated fastbin */
size = chunksize (p);
/* Little security check which won't hurt performance: the
allocator never wraps around at the end of the address space.
Therefore we can exclude some size values which might appear
here by accident or by "design" from some intruder. */
if (__builtin_expect ((uintptr_t) p > (uintptr_t) -size, 0)
|| __builtin_expect (misaligned_chunk (p), 0))
malloc_printerr ("free(): invalid pointer");
/* We know that each chunk is at least MINSIZE bytes in size or a
multiple of MALLOC_ALIGNMENT. */
if (__glibc_unlikely (size < MINSIZE || !aligned_OK (size)))
malloc_printerr ("free(): invalid size");
check_inuse_chunk(av, p);
```
</details>
### \_int\_free tcache <a href="#int_free" id="int_free"></a>
It'll first try to store this chunk in the related tcache bin. However, some checks are performed first: it loops through the chunks of the tcache bin with the same index as the freed chunk and:
* If there are more entries than `mp_.tcache_count`: `free(): too many chunks detected in tcache`
* If the entry is not aligned: `free(): unaligned chunk detected in tcache 2`
* If the freed chunk was already freed and is present in the tcache: `free(): double free detected in tcache 2`
If all goes well, the chunk is added to the tcache and the function returns.
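This is the check that makes a naive double free abort on modern glibc (2.29 and later). A minimal sketch assuming default tcache settings:

```c
#include <stdlib.h>

int main(void)
{
    void *p = malloc(0x40);
    free(p);   // chunk goes into the tcache and its key field is set to tcache_key
    free(p);   // the key matches and the chunk is found in the bin
               // -> aborts with "free(): double free detected in tcache 2"
    return 0;
}
```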
<details>
<summary>_int_free tcache</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L4515C1-L4554C7
#if USE_TCACHE
{
size_t tc_idx = csize2tidx (size);
if (tcache != NULL && tc_idx < mp_.tcache_bins)
{
/* Check to see if it's already in the tcache. */
tcache_entry *e = (tcache_entry *) chunk2mem (p);
/* This test succeeds on double free. However, we don't 100%
trust it (it also matches random payload data at a 1 in
2^<size_t> chance), so verify it's not an unlikely
coincidence before aborting. */
if (__glibc_unlikely (e->key == tcache_key))
{
tcache_entry *tmp;
size_t cnt = 0;
LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
for (tmp = tcache->entries[tc_idx];
tmp;
tmp = REVEAL_PTR (tmp->next), ++cnt)
{
if (cnt >= mp_.tcache_count)
malloc_printerr ("free(): too many chunks detected in tcache");
if (__glibc_unlikely (!aligned_OK (tmp)))
malloc_printerr ("free(): unaligned chunk detected in tcache 2");
if (tmp == e)
malloc_printerr ("free(): double free detected in tcache 2");
/* If we get here, it was a coincidence. We've wasted a
few cycles, but don't abort. */
}
}
if (tcache->counts[tc_idx] < mp_.tcache_count)
{
tcache_put (p, tc_idx);
return;
}
}
}
#endif
```
</details>
### \_int\_free fast bin <a href="#int_free" id="int_free"></a>
It starts by checking that the size is suitable for a fast bin and, if `TRIM_FASTBINS` is set, that the chunk doesn't border the top chunk.
Then, add the freed chunk at the top of the fast bin while performing some checks:
* If the size of the next chunk is invalid (too big or too small), trigger: `free(): invalid next size (fast)`
* If the added chunk was already the top of the fast bin: `double free or corruption (fasttop)`
* If the chunk at the top of the fast bin has a size different from the chunk we are adding: `invalid fastbin entry (free)`
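A minimal sketch that reaches the `fasttop` check (assuming default tcache settings, so the 7 previous frees fill the tcache bin and the victim chunk lands in the fast bin):

```c
#include <stdlib.h>

int main(void)
{
    void *victim = malloc(0x40);
    void *slots[7];

    for (int i = 0; i < 7; i++)    // fill the tcache bin for this size
        slots[i] = malloc(0x40);
    for (int i = 0; i < 7; i++)
        free(slots[i]);

    free(victim);   // tcache is full, so the chunk goes to the fast bin
    free(victim);   // victim is already at the top of the fast bin
                    // -> aborts with "double free or corruption (fasttop)"
    return 0;
}
```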
<details>
<summary>_int_free Fast Bin</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L4556C2-L4631C4
/*
If eligible, place chunk on a fastbin so it can be found
and used quickly in malloc.
*/
if ((unsigned long)(size) <= (unsigned long)(get_max_fast ())
#if TRIM_FASTBINS
/*
If TRIM_FASTBINS set, don't place chunks
bordering top into fastbins
*/
&& (chunk_at_offset(p, size) != av->top)
#endif
) {
if (__builtin_expect (chunksize_nomask (chunk_at_offset (p, size))
<= CHUNK_HDR_SZ, 0)
|| __builtin_expect (chunksize (chunk_at_offset (p, size))
>= av->system_mem, 0))
{
bool fail = true;
/* We might not have a lock at this point and concurrent modifications
of system_mem might result in a false positive. Redo the test after
getting the lock. */
if (!have_lock)
{
__libc_lock_lock (av->mutex);
fail = (chunksize_nomask (chunk_at_offset (p, size)) <= CHUNK_HDR_SZ
|| chunksize (chunk_at_offset (p, size)) >= av->system_mem);
__libc_lock_unlock (av->mutex);
}
if (fail)
malloc_printerr ("free(): invalid next size (fast)");
}
free_perturb (chunk2mem(p), size - CHUNK_HDR_SZ);
atomic_store_relaxed (&av->have_fastchunks, true);
unsigned int idx = fastbin_index(size);
fb = &fastbin (av, idx);
/* Atomically link P to its fastbin: P->FD = *FB; *FB = P; */
mchunkptr old = *fb, old2;
if (SINGLE_THREAD_P)
{
/* Check that the top of the bin is not the record we are going to
add (i.e., double free). */
if (__builtin_expect (old == p, 0))
malloc_printerr ("double free or corruption (fasttop)");
p->fd = PROTECT_PTR (&p->fd, old);
*fb = p;
}
else
do
{
/* Check that the top of the bin is not the record we are going to
add (i.e., double free). */
if (__builtin_expect (old == p, 0))
malloc_printerr ("double free or corruption (fasttop)");
old2 = old;
p->fd = PROTECT_PTR (&p->fd, old);
}
while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
!= old2);
/* Check that size of fastbin chunk at the top is the same as
size of the chunk that we are adding. We can dereference OLD
only if we have the lock, otherwise it might have already been
allocated again. */
if (have_lock && old != NULL
&& __builtin_expect (fastbin_index (chunksize (old)) != idx, 0))
malloc_printerr ("invalid fastbin entry (free)");
}
```
</details>
### \_int\_free finale <a href="#int_free" id="int_free"></a>
If the chunk wasn't placed in any bin yet, call `_int_free_merge_chunk`
<details>
<summary>_int_free finale</summary>
```c
/*
Consolidate other non-mmapped chunks as they arrive.
*/
else if (!chunk_is_mmapped(p)) {
/* If we're single-threaded, don't lock the arena. */
if (SINGLE_THREAD_P)
have_lock = true;
if (!have_lock)
__libc_lock_lock (av->mutex);
_int_free_merge_chunk (av, p, size);
if (!have_lock)
__libc_lock_unlock (av->mutex);
}
/*
If the chunk was allocated via mmap, release via munmap().
*/
else {
munmap_chunk (p);
}
}
```
</details>
## \_int\_free\_merge\_chunk
This function will try to merge chunk P of SIZE bytes with its neighbours and put the resulting chunk on the unsorted bin list.
Some checks are performed:
* If the chunk is the top chunk: `double free or corruption (top)`
* If the next chunk is outside of the boundaries of the arena: `double free or corruption (out)`
* If the chunk is not marked as used (in the `prev_inuse` from the following chunk): `double free or corruption (!prev)`
* If the size of the next chunk is too small or too big: `free(): invalid next size (normal)`
* If the previous chunk is not in use, it will try to consolidate backwards. But if the `prev_size` differs from the size indicated in the previous chunk: `corrupted size vs. prev_size while consolidating`
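For example, a double free of a chunk that is too big for the tcache and fast bins reaches this function twice and trips the `!prev` check. A sketch assuming default tunables; the guard allocation prevents the chunk from being merged into the top chunk:

```c
#include <stdlib.h>

int main(void)
{
    void *p = malloc(0x500);      // too big for the tcache and fast bins
    void *guard = malloc(0x20);   // keeps p away from the top chunk
    (void)guard;

    free(p);   // p goes to the unsorted bin; the next chunk's prev_inuse bit is cleared
    free(p);   // !prev_inuse(nextchunk) -> aborts with "double free or corruption (!prev)"
    return 0;
}
```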
<details>
<summary>_int_free_merge_chunk code</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L4660C1-L4702C2
/* Try to merge chunk P of SIZE bytes with its neighbors. Put the
resulting chunk on the appropriate bin list. P must not be on a
bin list yet, and it can be in use. */
static void
_int_free_merge_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T size)
{
mchunkptr nextchunk = chunk_at_offset(p, size);
/* Lightweight tests: check whether the block is already the
top block. */
if (__glibc_unlikely (p == av->top))
malloc_printerr ("double free or corruption (top)");
/* Or whether the next chunk is beyond the boundaries of the arena. */
if (__builtin_expect (contiguous (av)
&& (char *) nextchunk
>= ((char *) av->top + chunksize(av->top)), 0))
malloc_printerr ("double free or corruption (out)");
/* Or whether the block is actually not marked used. */
if (__glibc_unlikely (!prev_inuse(nextchunk)))
malloc_printerr ("double free or corruption (!prev)");
INTERNAL_SIZE_T nextsize = chunksize(nextchunk);
if (__builtin_expect (chunksize_nomask (nextchunk) <= CHUNK_HDR_SZ, 0)
|| __builtin_expect (nextsize >= av->system_mem, 0))
malloc_printerr ("free(): invalid next size (normal)");
free_perturb (chunk2mem(p), size - CHUNK_HDR_SZ);
/* Consolidate backward. */
if (!prev_inuse(p))
{
INTERNAL_SIZE_T prevsize = prev_size (p);
size += prevsize;
p = chunk_at_offset(p, -((long) prevsize));
if (__glibc_unlikely (chunksize(p) != prevsize))
malloc_printerr ("corrupted size vs. prev_size while consolidating");
unlink_chunk (av, p);
}
/* Write the chunk header, maybe after merging with the following chunk. */
size = _int_free_create_chunk (av, p, size, nextchunk, nextsize);
_int_free_maybe_consolidate (av, size);
}
```
</details>
<details>
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
Other ways to support HackTricks:
* If you want to see your **company advertised in HackTricks** or **download HackTricks in PDF** Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
* Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
* Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks\_live)**.**
* **Share your hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
</details>

View file

@ -0,0 +1,187 @@
# Heap Functions Security Checks
<details>
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
Other ways to support HackTricks:
* If you want to see your **company advertised in HackTricks** or **download HackTricks in PDF** Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
* Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
* Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks\_live)**.**
* **Share your hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
</details>
## unlink
For more info check:
{% content-ref url="unlink.md" %}
[unlink.md](unlink.md)
{% endcontent-ref %}
This is a summary of the performed checks:
* Check if the indicated size of the chunk is the same as the `prev_size` indicated in the next chunk
* Error message: `corrupted size vs. prev_size`
* Check also that `P->fd->bk == P` and `P->bk->fd == P`
* Error message: `corrupted double-linked list`
* If the chunk is not small, check that `P->fd_nextsize->bk_nextsize == P` and `P->bk_nextsize->fd_nextsize == P`
* Error message: `corrupted double-linked list (not small)`
## \_int\_malloc
For more info check:
{% content-ref url="malloc-and-sysmalloc.md" %}
[malloc-and-sysmalloc.md](malloc-and-sysmalloc.md)
{% endcontent-ref %}
* **Checks during fast bin search:**
* If the chunk is misaligned:
* Error message: `malloc(): unaligned fastbin chunk detected 2`
* If the forward chunk is misaligned:
* Error message: `malloc(): unaligned fastbin chunk detected`
* If the returned chunk has a size that isn't correct because of its index in the fast bin:
* Error message: `malloc(): memory corruption (fast)`
* If any chunk used to fill the tcache is misaligned:
* Error message: `malloc(): unaligned fastbin chunk detected 3`
* **Checks during small bin search:**
* If `victim->bk->fd != victim`:
* Error message: `malloc(): smallbin double linked list corrupted`
* **Checks during consolidate** performed for each fast bin chunk:
* If the chunk is unaligned trigger:
* Error message: `malloc_consolidate(): unaligned fastbin chunk detected`
* If the chunk has a different size than the one it should have because of the index it's in:
* Error message: `malloc_consolidate(): invalid chunk size`
* If the previous chunk is not in use and has a size different from the one indicated by `prev_size`:
* Error message: `corrupted size vs. prev_size in fastbins`
* **Checks during unsorted bin search**:
* If the chunk size is weird (too small or too big):
* Error message: `malloc(): invalid size (unsorted)`
* If the next chunk size is weird (too small or too big):
* Error message: `malloc(): invalid next size (unsorted)`
* If the previous size indicated by the next chunk differs from the size of the chunk:
* Error message: `malloc(): mismatching next->prev_size (unsorted)`
* If not `victim->bck->fd == victim` or not `victim->fd == av (arena)`:
* Error message: `malloc(): unsorted double linked list corrupted`
* As we are always checking the last one, its `fd` should always be pointing to the arena struct.
* If the next chunk isn't indicating that the previous is in use:
* Error message: `malloc(): invalid next->prev_inuse (unsorted)`
* If `fwd->bk_nextsize->fd_nextsize != fwd`:
* Error message: `malloc(): largebin double linked list corrupted (nextsize)`
* If `fwd->bk->fd != fwd`:
* Error message: `malloc(): largebin double linked list corrupted (bk)`
* **Checks during large bin (by index) search:**
* `bck->fd->bk != bck`:
* Error message: `malloc(): corrupted unsorted chunks`
* **Checks during large bin (next bigger) search:**
* `bck->fd->bk != bck`:
* Error message: `malloc(): corrupted unsorted chunks2`
* **Checks during Top chunk use:**
* `chunksize(av->top) > av->system_mem`:
* Error message: `malloc(): corrupted top size`
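For instance, the top chunk check catches a heap overflow that smashes the size field of the top chunk (the basis of the old House of Force technique). A minimal sketch assuming a 64-bit glibc 2.29 or later, where the first allocated chunk sits right before the top chunk:

```c
#include <stdlib.h>

int main(void)
{
    char *a = malloc(0x28);                 // chunk of size 0x30, followed by the top chunk

    // Simulated heap overflow: overwrite the size field of the top chunk
    size_t *top_size = (size_t *)(a + 0x28);
    *top_size = 0xdeadbeef;                 // bogus size, larger than av->system_mem

    malloc(0x5000);                         // request that must be served from the top chunk
                                            // -> aborts with "malloc(): corrupted top size"
    return 0;
}
```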
## `tcache_get_n`
* **Checks in `tcache_get_n`:**
* If chunk is misaligned:
* Error message: `malloc(): unaligned tcache chunk detected`
## `tcache_thread_shutdown`
* **Checks in `tcache_thread_shutdown`:**
* If chunk is misaligned:
* Error message: `tcache_thread_shutdown(): unaligned tcache chunk detected`
## `__libc_realloc`
* **Checks in `__libc_realloc`:**
* If old pointer is misaligned or the size was incorrect:
* Error message: `realloc(): invalid pointer`
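A minimal sketch of that check firing (assuming a modern glibc):

```c
#include <stdlib.h>

int main(void)
{
    char *p = malloc(0x40);
    realloc(p + 1, 0x80);   // misaligned old pointer -> aborts with "realloc(): invalid pointer"
    return 0;
}
```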
## `_int_free`
For more info check:
{% content-ref url="free.md" %}
[free.md](free.md)
{% endcontent-ref %}
* **Checks during the start of `_int_free`:**
* Pointer is aligned:
* Error message: `free(): invalid pointer`
* Size larger than `MINSIZE` and size also aligned:
* Error message: `free(): invalid size`
* **Checks in `_int_free` tcache:**
* If there are more entries than `mp_.tcache_count`:
* Error message: `free(): too many chunks detected in tcache`
* If the entry is not aligned:
* Error message: `free(): unaligned chunk detected in tcache 2`
* If the freed chunk was already freed and is present as chunk in the tcache:
* Error message: `free(): double free detected in tcache 2`
* **Checks in `_int_free` fast bin:**
* If the size of the next chunk is invalid (too big or too small), trigger:
* Error message: `free(): invalid next size (fast)`
* If the added chunk was already the top of the fast bin:
* Error message: `double free or corruption (fasttop)`
* If the chunk at the top of the fast bin has a size different from the chunk we are adding:
* Error message: `invalid fastbin entry (free)`
## **`_int_free_merge_chunk`**
* **Checks in `_int_free_merge_chunk`:**
* If the chunk is the top chunk:
* Error message: `double free or corruption (top)`
* If the next chunk is outside of the boundaries of the arena:
* Error message: `double free or corruption (out)`
* If the chunk is not marked as used (in the prev\_inuse from the following chunk):
* Error message: `double free or corruption (!prev)`
* If the size of the next chunk is too small or too big:
* Error message: `free(): invalid next size (normal)`
* If the previous chunk is not in use, it will try to consolidate. But, if the `prev_size` differs from the size indicated in the previous chunk:
* Error message: `corrupted size vs. prev_size while consolidating`
## **`_int_free_create_chunk`**
* **Checks in `_int_free_create_chunk`:**
* Adding a chunk into the unsorted bin, check if `unsorted_chunks(av)->fd->bk == unsorted_chunks(av)`:
* Error message: `free(): corrupted unsorted chunks`
## `do_check_malloc_state`
* **Checks in `do_check_malloc_state`:**
* If misaligned fast bin chunk:
* Error message: `do_check_malloc_state(): unaligned fastbin chunk detected`
## `malloc_consolidate`
* **Checks in `malloc_consolidate`:**
* If misaligned fast bin chunk:
* Error message: `malloc_consolidate(): unaligned fastbin chunk detected`
* If incorrect fast bin chunk size:
* Error message: `malloc_consolidate(): invalid chunk size`
## `_int_realloc`
* **Checks in `_int_realloc`:**
* Size is too big or too small:
* Error message: `realloc(): invalid old size`
* Size of the next chunk is too big or too small:
* Error message: `realloc(): invalid next size`
<details>
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
Other ways to support HackTricks:
* If you want to see your **company advertised in HackTricks** or **download HackTricks in PDF** Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
* Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
* Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks\_live)**.**
* **Share your hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
</details>

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,107 @@
# unlink
<details>
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
Other ways to support HackTricks:
* If you want to see your **company advertised in HackTricks** or **download HackTricks in PDF** Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
* Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
* Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks\_live)**.**
* **Share your hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
</details>
### Code
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c
/* Take a chunk off a bin list. */
static void
unlink_chunk (mstate av, mchunkptr p)
{
if (chunksize (p) != prev_size (next_chunk (p)))
malloc_printerr ("corrupted size vs. prev_size");
mchunkptr fd = p->fd;
mchunkptr bk = p->bk;
if (__builtin_expect (fd->bk != p || bk->fd != p, 0))
malloc_printerr ("corrupted double-linked list");
fd->bk = bk;
bk->fd = fd;
if (!in_smallbin_range (chunksize_nomask (p)) && p->fd_nextsize != NULL)
{
if (p->fd_nextsize->bk_nextsize != p
|| p->bk_nextsize->fd_nextsize != p)
malloc_printerr ("corrupted double-linked list (not small)");
// Added: If the FD is not in the nextsize list
if (fd->fd_nextsize == NULL)
{
if (p->fd_nextsize == p)
fd->fd_nextsize = fd->bk_nextsize = fd;
else
// Link the nextsize list in when removing the new chunk
{
fd->fd_nextsize = p->fd_nextsize;
fd->bk_nextsize = p->bk_nextsize;
p->fd_nextsize->bk_nextsize = fd;
p->bk_nextsize->fd_nextsize = fd;
}
}
else
{
p->fd_nextsize->bk_nextsize = p->bk_nextsize;
p->bk_nextsize->fd_nextsize = p->fd_nextsize;
}
}
}
```
### Graphical Explanation
Check this great graphical explanation of the unlink process:
<figure><img src="../../../.gitbook/assets/image (3).png" alt=""><figcaption><p><a href="https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/implementation/figure/unlink_smallbin_intro.png">https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/implementation/figure/unlink_smallbin_intro.png</a></p></figcaption></figure>
### Security Checks
* Check if the indicated size of the chunk is the same as the prev\_size indicated in the next chunk
* Check also that `P->fd->bk == P` and `P->bk->fd == P`
* If the chunk is not small, check that `P->fd_nextsize->bk_nextsize == P` and `P->bk_nextsize->fd_nextsize == P`
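These are exactly the constraints the classic unsafe-unlink technique has to satisfy: if a pointer to the forged chunk `P` is stored at a known address (e.g. a global array entry), `fd` and `bk` can point back through that address so both checks pass. A hedged, self-contained sketch of the pointer layout (64-bit offsets; `target` is a hypothetical global that stores the pointer to the fake chunk):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for the first fields of a glibc malloc_chunk (64-bit layout) */
struct chunk {
    size_t prev_size;
    size_t size;
    struct chunk *fd;   /* offset 0x10 */
    struct chunk *bk;   /* offset 0x18 */
};

struct chunk fake;               /* the forged chunk P */
struct chunk *target = &fake;    /* global pointer that stores P (e.g. a chunk list entry) */

int main(void)
{
    /* Make P->fd->bk == P and P->bk->fd == P hold by pointing back through &target */
    fake.fd = (struct chunk *)((char *)&target - 0x18);  /* fd->bk lands on target */
    fake.bk = (struct chunk *)((char *)&target - 0x10);  /* bk->fd lands on target */

    assert(fake.fd->bk == &fake);   /* first check passes */
    assert(fake.bk->fd == &fake);   /* second check passes */

    /* unlink_chunk would then do fd->bk = bk; bk->fd = fd;
       leaving *(&target) == (char *)&target - 0x18, i.e. target points near itself. */
    puts("unlink checks satisfied");
    return 0;
}
```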
### Leaks
An unlinked chunk does not clear the pointers stored in it, so if it's possible to read it, some interesting addresses can be leaked:
Libc Leaks:
* If P is located in the head of the doubly linked list, `bk` will be pointing to `malloc_state` in libc
* If P is located at the end of the doubly linked list, `fd` will be pointing to `malloc_state` in libc
* When the doubly linked list contains only one free chunk, P is in the doubly linked list, and both `fd` and `bk` can leak the address inside `malloc_state`.
Heap leaks:
* If P is located in the head of the doubly linked list, `fd` will be pointing to an available chunk in the heap
* If P is located at the end of the doubly linked list, `bk` will be pointing to an available chunk in the heap
* If P is in the doubly linked list, both `fd` and `bk` will be pointing to an available chunk in the heap
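A minimal sketch of the libc leak (assuming default tunables and the main arena): a chunk that skips the tcache and fast bins lands in the unsorted bin with `fd`/`bk` pointing inside `malloc_state`, so a dangling-pointer read leaks libc addresses.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t *p = malloc(0x500);    // too big for the tcache and fast bins
    void *guard = malloc(0x20);   // prevents consolidation with the top chunk
    (void)guard;

    free(p);   // p goes to the unsorted bin: fd and bk point into main_arena (inside libc)

    // Use-after-free read of the dangling pointer leaks addresses inside malloc_state
    printf("fd = %p\n", (void *)p[0]);
    printf("bk = %p\n", (void *)p[1]);
    return 0;
}
```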
<details>
<summary><strong>Learn AWS hacking from zero to hero with</strong> <a href="https://training.hacktricks.xyz/courses/arte"><strong>htARTE (HackTricks AWS Red Team Expert)</strong></a><strong>!</strong></summary>
Other ways to support HackTricks:
* If you want to see your **company advertised in HackTricks** or **download HackTricks in PDF** Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
* Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
* Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
* **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks\_live**](https://twitter.com/hacktricks\_live)**.**
* **Share your hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
</details>

View file

@ -40,7 +40,7 @@ This gadget basically allows to confirm that something interesting was executed
This technique uses the [**ret2csu**](ret2csu.md) gadget. And this is because if you access this gadget in the middle of some instructions you get gadgets to control **`rsi`** and **`rdi`**:
<figure><img src="../../.gitbook/assets/image (1) (1).png" alt="" width="278"><figcaption><p><a href="https://www.scs.stanford.edu/brop/bittau-brop.pdf">https://www.scs.stanford.edu/brop/bittau-brop.pdf</a></p></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt="" width="278"><figcaption><p><a href="https://www.scs.stanford.edu/brop/bittau-brop.pdf">https://www.scs.stanford.edu/brop/bittau-brop.pdf</a></p></figcaption></figure>
These would be the gadgets:

View file

@ -87,7 +87,7 @@ gef➤ search-pattern 0x400560
Another way to control **`rdi`** and **`rsi`** from the ret2csu gadget is by accessing it specific offsets:
<figure><img src="../../.gitbook/assets/image (2) (1).png" alt="" width="283"><figcaption><p><a href="https://www.scs.stanford.edu/brop/bittau-brop.pdf">https://www.scs.stanford.edu/brop/bittau-brop.pdf</a></p></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (2) (1) (1).png" alt="" width="283"><figcaption><p><a href="https://www.scs.stanford.edu/brop/bittau-brop.pdf">https://www.scs.stanford.edu/brop/bittau-brop.pdf</a></p></figcaption></figure>
Check this page for more info:

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -730,7 +730,7 @@ There are several tools out there that will perform part of the proposed actions
* All free courses of [**@Jhaddix**](https://twitter.com/Jhaddix) like [**The Bug Hunter's Methodology v4.0 - Recon Edition**](https://www.youtube.com/watch?v=p4JgIu1mceI)
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -150,7 +150,7 @@ Check also the page about [**NTLM**](../windows-hardening/ntlm/), it could be ve
* [**CBC-MAC**](../crypto-and-stego/cipher-block-chaining-cbc-mac-priv.md)
* [**Padding Oracle**](../crypto-and-stego/padding-oracle-priv.md)
<figure><img src="../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -133,7 +133,7 @@ However, in this kind of containers these protections will usually exist, but yo
You can find **examples** on how to **exploit some RCE vulnerabilities** to get scripting languages **reverse shells** and execute binaries from memory in [**https://github.com/carlospolop/DistrolessRCE**](https://github.com/carlospolop/DistrolessRCE).
<figure><img src="../../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -260,7 +260,7 @@ If there is an ACL that only allows some IPs to query the SMNP service, you can
* snmpd.conf
* snmp-config.xml
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -12,7 +12,7 @@
</details>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -53,7 +53,7 @@ msf6 auxiliary(scanner/snmp/snmp_enum) > exploit
* [https://medium.com/@in9uz/cisco-nightmare-pentesting-cisco-networks-like-a-devil-f4032eb437b9](https://medium.com/@in9uz/cisco-nightmare-pentesting-cisco-networks-like-a-devil-f4032eb437b9)
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -365,7 +365,7 @@ Find more info about web vulns in:
You can use tools such as [https://github.com/dgtlmoon/changedetection.io](https://github.com/dgtlmoon/changedetection.io) to monitor pages for modifications that might insert vulnerabilities.
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -102,13 +102,13 @@ In the _Extend_ menu (/admin/modules), you can activate what appear to be plugin
Before activation:
<figure><img src="../../../.gitbook/assets/image.png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (4).png" alt=""><figcaption></figcaption></figure>
After activation:
<figure><img src="../../../.gitbook/assets/image (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (2).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>
### Part 2 (leveraging feature _Configuration synchronization_) <a href="#part-2-leveraging-feature-configuration-synchronization" id="part-2-leveraging-feature-configuration-synchronization"></a>
@ -133,7 +133,7 @@ allow_insecure_uploads: false
```
<figure><img src="../../../.gitbook/assets/image (3).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (3) (1).png" alt=""><figcaption></figcaption></figure>
To:
@ -149,7 +149,7 @@ allow_insecure_uploads: true
```
<figure><img src="../../../.gitbook/assets/image (4).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (4) (1).png" alt=""><figcaption></figcaption></figure>
**Patch field.field.media.document.field\_media\_document.yml**

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -80,7 +80,7 @@ curl https://jira.some.example.com/rest/api/2/mypermissions | jq | grep -iB6 '"h
* [https://github.com/0x48piraj/Jiraffe](https://github.com/0x48piraj/Jiraffe)
* [https://github.com/bcoles/jira\_scan](https://github.com/bcoles/jira\_scan)
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -340,7 +340,7 @@ More information in: [https://medium.com/swlh/polyglot-files-a-hackers-best-frie
* [https://www.idontplaydarts.com/2012/06/encoding-web-shells-in-png-idat-chunks/](https://www.idontplaydarts.com/2012/06/encoding-web-shells-in-png-idat-chunks/)
* [https://medium.com/swlh/polyglot-files-a-hackers-best-friend-850bf812dd8a](https://medium.com/swlh/polyglot-files-a-hackers-best-friend-850bf812dd8a)
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -279,7 +279,7 @@ The token's expiry is checked using the "exp" Payload claim. Given that JWTs are
{% embed url="https://github.com/ticarpi/jwt_tool" %}
<figure><img src="../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -16,7 +16,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -235,7 +235,7 @@ intitle:"phpLDAPadmin" inurl:cmd.php
{% embed url="https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/LDAP%20Injection" %}
<figure><img src="../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -14,7 +14,7 @@ Other ways to support HackTricks:
</details>
<figure><img src="../../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -106,7 +106,7 @@ SELECT $$hacktricks$$;
SELECT $TAG$hacktricks$TAG$;
```
<figure><img src="../../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).

View file

@ -1,6 +1,6 @@
# XSS (Cross Site Scripting)
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).
@ -1544,7 +1544,7 @@ Find **more SVG payloads in** [**https://github.com/allanlw/svg-cheatsheet**](ht
* [https://gist.github.com/rvrsh3ll/09a8b933291f9f98e8ec](https://gist.github.com/rvrsh3ll/09a8b933291f9f98e8ec)
* [https://netsec.expert/2020/02/01/xss-in-2020.html](https://netsec.expert/2020/02/01/xss-in-2020.html)
<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you are interested in **hacking career** and hack the unhackable - **we are hiring!** (_fluent polish written and spoken required_).