
# malloc & sysmalloc


## Allocation Order Summary

(No checks are explained in this summary and some cases have been omitted for brevity)

1. `__libc_malloc` tries to get a chunk from the tcache, if not it calls `_int_malloc`
2. `_int_malloc`:
   1. Tries to generate the arena if there isn't any
   2. If there is any fast bin chunk of the correct size, use it
   3. Fill the tcache with other fast chunks
   4. If there is any small bin chunk of the correct size, use it
   5. Fill the tcache with other chunks of that size
   6. If the requested size isn't for small bins, consolidate the fast bins into the unsorted bin
   7. Check the unsorted bin, use the first chunk with enough space
      1. If the found chunk is bigger, divide it to return a part and add the remainder back to the unsorted bin
      2. If a chunk is of the same size as the requested size, use it to fill the tcache instead of returning it (until the tcache is full, then return the next one)
      3. For each chunk of smaller size checked, put it in its respective small or large bin
   8. Check the large bin at the index of the requested size
      1. Start looking from the first chunk that is bigger than the requested size; if any is found, return it and add the remainder to the unsorted bin
   9. Check the large bins from the next indexes until the end
      1. From the next bigger index check for any chunk, divide the first found chunk to use it for the requested size and add the remainder to the unsorted bin
   10. If nothing is found in the previous bins, get a chunk from the top chunk
   11. If the top chunk wasn't big enough, enlarge it with `sysmalloc`
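To make the tcache-first step of this order tangible, here is a minimal sketch (assuming glibc >= 2.26 with default tcache settings; the `0x40` size is arbitrary): freed chunks of the same size are handed back from the tcache in LIFO order before any other bin is consulted:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x40);
    void *b = malloc(0x40);

    free(a);                  /* tcache bin: a             */
    free(b);                  /* tcache bin: b -> a (LIFO) */

    void *c = malloc(0x40);   /* expected: b (tcache head) */
    void *d = malloc(0x40);   /* expected: a               */

    printf("a=%p b=%p\nc=%p d=%p\n", a, b, c, d);
    return 0;
}
```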

## __libc_malloc

The `malloc` function actually calls `__libc_malloc`. This function will check the tcache to see if there is any available chunk of the desired size. If there is, it'll use it; if not, it'll check whether it's a single-threaded process and in that case it'll call `_int_malloc` in the main arena, otherwise it'll call `_int_malloc` in the arena of the thread.

<details>

<summary>__libc_malloc code</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c

#if IS_IN (libc)
void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  _Static_assert (PTRDIFF_MAX <= SIZE_MAX / 2,
                  "PTRDIFF_MAX is not more than half of SIZE_MAX");

  if (!__malloc_initialized)
    ptmalloc_init ();
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes = checked_request2size (bytes);
  if (tbytes == 0)
    {
      __set_errno (ENOMEM);
      return NULL;
    }
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      && tcache != NULL
      && tcache->counts[tc_idx] > 0)
    {
      victim = tcache_get (tc_idx);
      return tag_new_usable (victim);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif

  if (SINGLE_THREAD_P)
    {
      victim = tag_new_usable (_int_malloc (&main_arena, bytes));
      assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
              &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }

  arena_get (ar_ptr, bytes);

  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  victim = tag_new_usable (victim);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
```

</details>

Note how it will always tag the returned pointer with `tag_new_usable`, from the code:
```c
/* Allocate a new random color and use it to color the user region of
   a chunk; this may include data from the subsequent chunk's header
   if tagging is sufficiently fine grained.  Returns PTR suitably
   recolored for accessing the memory there.  */
static __always_inline void *
tag_new_usable (void *ptr)
```

## _int_malloc

This is the function that allocates memory using the other bins and the top chunk.

### Start

It starts by defining some variables and getting the real size that the requested memory space needs:

<details>

<summary>_int_malloc start</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3847
static void *
_int_malloc (mstate av, size_t bytes)
{
  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int idx;                 /* associated bin index */
  mbinptr bin;                      /* associated bin */

  mchunkptr victim;                 /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int victim_index;                 /* its bin index */

  mchunkptr remainder;              /* remainder from a split */
  unsigned long remainder_size;     /* its size */

  unsigned int block;               /* bit map traverser */
  unsigned int bit;                 /* bit map traverser */
  unsigned int map;                 /* current word of binmap */

  mchunkptr fwd;                    /* misc temp for linking */
  mchunkptr bck;                    /* misc temp for linking */

#if USE_TCACHE
  size_t tcache_unsorted_count;     /* count of unsorted chunks processed */
#endif

  /*
     Convert request size to internal form by adding SIZE_SZ bytes
     overhead plus possibly more to obtain necessary alignment and/or
     to obtain a size of at least MINSIZE, the smallest allocatable
     size. Also, checked_request2size returns false for request sizes
     that are so large that they wrap around zero when padded and
     aligned.
   */

  nb = checked_request2size (bytes);
  if (nb == 0)
    {
      __set_errno (ENOMEM);
      return NULL;
    }
```

</details>

### Arena

In the unlikely case that there are no usable arenas, it uses `sysmalloc` to get a chunk from `mmap`:

<details>

<summary>_int_malloc no arena</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3885C3-L3893C6
  /* There are no usable arenas.  Fall back to sysmalloc to get a chunk from
     mmap.  */
  if (__glibc_unlikely (av == NULL))
    {
      void *p = sysmalloc (nb, av);
      if (p != NULL)
        alloc_perturb (p, bytes);
      return p;
    }
```

</details>

### Fast Bin

If the needed size is inside the fast bin sizes, try to use a chunk from the fast bin. Basically, based on the size, it'll find the fast bin index where the valid chunks should be located, and if there are any, it'll return one of those.\
Moreover, if the tcache is enabled, it'll **fill the tcache bin of that size with fast bins** (a usage sketch follows the source below).

While performing these actions, some security checks are executed here:

* If the chunk is misaligned: `malloc(): unaligned fastbin chunk detected 2`
* If the forward chunk is misaligned: `malloc(): unaligned fastbin chunk detected`
* If the returned chunk has a size that isn't correct because of its index in the fast bin: `malloc(): memory corruption (fast)`
* If any chunk used to fill the tcache is misaligned: `malloc(): unaligned fastbin chunk detected 3`
<details>

<summary>_int_malloc fast bin</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3895C3-L3967C6
  /*
     If the size qualifies as a fastbin, first check corresponding bin.
     This code is safe to execute even if av is not yet initialized, so we
     can try it without checking, which saves some time on this fast path.
   */

#define REMOVE_FB(fb, victim, pp)                                       \
  do                                                                    \
    {                                                                   \
      victim = pp;                                                      \
      if (victim == NULL)                                               \
        break;                                                          \
      pp = REVEAL_PTR (victim->fd);                                     \
      if (__glibc_unlikely (pp != NULL && misaligned_chunk (pp)))       \
        malloc_printerr ("malloc(): unaligned fastbin chunk detected"); \
    }                                                                   \
  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim))  \
         != victim);                                                    \

  if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
    {
      idx = fastbin_index (nb);
      mfastbinptr *fb = &fastbin (av, idx);
      mchunkptr pp;
      victim = *fb;

      if (victim != NULL)
        {
          if (__glibc_unlikely (misaligned_chunk (victim)))
            malloc_printerr ("malloc(): unaligned fastbin chunk detected 2");

          if (SINGLE_THREAD_P)
            *fb = REVEAL_PTR (victim->fd);
          else
            REMOVE_FB (fb, pp, victim);
          if (__glibc_likely (victim != NULL))
            {
              size_t victim_idx = fastbin_index (chunksize (victim));
              if (__builtin_expect (victim_idx != idx, 0))
                malloc_printerr ("malloc(): memory corruption (fast)");
              check_remalloced_chunk (av, victim, nb);
#if USE_TCACHE
              /* While we're here, if we see other chunks of the same size,
                 stash them in the tcache.  */
              size_t tc_idx = csize2tidx (nb);
              if (tcache != NULL && tc_idx < mp_.tcache_bins)
                {
                  mchunkptr tc_victim;

                  /* While bin not empty and tcache not full, copy chunks.  */
                  while (tcache->counts[tc_idx] < mp_.tcache_count
                         && (tc_victim = *fb) != NULL)
                    {
                      if (__glibc_unlikely (misaligned_chunk (tc_victim)))
                        malloc_printerr ("malloc(): unaligned fastbin chunk detected 3");
                      if (SINGLE_THREAD_P)
                        *fb = REVEAL_PTR (tc_victim->fd);
                      else
                        {
                          REMOVE_FB (fb, pp, tc_victim);
                          if (__glibc_unlikely (tc_victim == NULL))
                            break;
                        }
                      tcache_put (tc_victim, tc_idx);
                    }
                }
#endif
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }
    }
```

</details>
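As a hedged illustration of this path (assuming default `tcache_count == 7` and a fastbin-sized request), freeing 9 chunks fills the tcache with 7 and leaves 2 in the fast bin; the first allocation that reaches the fast bin returns its head and stashes the rest of that bin into the tcache:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p[9];
    for (int i = 0; i < 9; i++)
        p[i] = malloc(0x20);      /* 0x30 chunks: fastbin-sized            */
    for (int i = 0; i < 9; i++)
        free(p[i]);               /* 7 -> tcache, p[7]/p[8] -> fast bin    */

    for (int i = 0; i < 7; i++)
        malloc(0x20);             /* drain the tcache bin                  */

    void *q = malloc(0x20);       /* fast bin head: expected p[8]; while
                                     there, p[7] is stashed into the tcache */
    void *r = malloc(0x20);       /* expected p[7], now served from tcache  */

    printf("p[8]=%p q=%p\np[7]=%p r=%p\n", p[8], q, p[7], r);
    return 0;
}
```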

### Small Bin

As indicated in a comment, small bins hold one size per index, so checking whether a valid chunk is available is very fast; therefore, after the fast bins, the small bins are checked.

The first check is to find out if the requested size could be inside a small bin. In that case, get the corresponding **index** inside the small bin and see if there is **any available chunk**.

Then, a security check is performed checking:

* if `victim->bk->fd == victim`, to see that both chunks are correctly linked.

In that case, the chunk **gets the `inuse` bit**, the doubly linked list is fixed so this chunk disappears from it (as it's going to be used), and the non-main-arena bit is set if needed.

Finally, **fill the tcache index of the requested size** with other chunks inside the small bin (if any). A usage sketch follows the source below.
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3895C3-L3967C6

/*
If a small request, check regular bin.  Since these "smallbins"
hold one size each, no searching within bins is necessary.
(For a large request, we need to wait until unsorted chunks are
processed to find best fit. But for small ones, fits are exact
anyway, so we can check now, which is faster.)
*/

if (in_smallbin_range (nb))
{
idx = smallbin_index (nb);
bin = bin_at (av, idx);

if ((victim = last (bin)) != bin)
{
bck = victim->bk;
if (__glibc_unlikely (bck->fd != victim))
malloc_printerr ("malloc(): smallbin double linked list corrupted");
set_inuse_bit_at_offset (victim, nb);
bin->bk = bck;
bck->fd = bin;

if (av != &main_arena)
set_non_main_arena (victim);
check_malloced_chunk (av, victim, nb);
#if USE_TCACHE
/* While we're here, if we see other chunks of the same size,
stash them in the tcache.  */
size_t tc_idx = csize2tidx (nb);
if (tcache != NULL && tc_idx < mp_.tcache_bins)
{
mchunkptr tc_victim;

/* While bin not empty and tcache not full, copy chunks over.  */
while (tcache->counts[tc_idx] < mp_.tcache_count
&& (tc_victim = last (bin)) != bin)
{
if (tc_victim != 0)
{
bck = tc_victim->bk;
set_inuse_bit_at_offset (tc_victim, nb);
if (av != &main_arena)
set_non_main_arena (tc_victim);
bin->bk = bck;
bck->fd = bin;

tcache_put (tc_victim, tc_idx);
}
}
}
#endif
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}
```
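A hedged sketch of this path (assuming default glibc tcache settings; `0x80` requests map to `0x90` chunks, too big for the fast bins; the guards prevent freed neighbours from consolidating):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p[9];
    for (int i = 0; i < 9; i++)
      {
        p[i] = malloc(0x80);      /* 0x90 chunks: beyond the fast bins   */
        malloc(0x10);             /* guard: keep freed chunks separate   */
      }
    for (int i = 0; i < 9; i++)
        free(p[i]);               /* 7 -> tcache, p[7]/p[8] -> unsorted  */

    malloc(0x400);                /* sorting pass: p[7]/p[8] -> small bin */

    for (int i = 0; i < 7; i++)
        malloc(0x80);             /* drain the tcache bin                 */

    void *q = malloc(0x80);       /* small bin (FIFO): expected p[7];
                                     p[8] is stashed into the tcache      */
    void *r = malloc(0x80);       /* expected p[8], now from the tcache   */

    printf("p[7]=%p q=%p\np[8]=%p r=%p\n", p[7], q, p[8], r);
    return 0;
}
```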

### malloc_consolidate

If it wasn't a small chunk, it's a large one, and in this case `malloc_consolidate` is called to avoid memory fragmentation.

<details>

<summary>malloc_consolidate call</summary>
```c
  /*
     If this is a large request, consolidate fastbins before continuing.
     While it might look excessive to kill all fastbins before
     even seeing if there is space available, this avoids
     fragmentation problems normally associated with fastbins.
     Also, in practice, programs tend to have runs of either small or
     large requests, but less often mixtures, so consolidation is not
     invoked all that often in most programs. And the programs that
     it is called frequently in otherwise tend to fragment.
   */

  else
    {
      idx = largebin_index (nb);
      if (atomic_load_relaxed (&av->have_fastchunks))
        malloc_consolidate (av);
    }
```

</details>

The malloc consolidate function basically removes chunks from the fast bin and places them into the unsorted bin. After the next malloc these chunks will be organized into their respective small/large bins.

Note that while removing these chunks, if they are found with previous or next chunks that aren't in use, they will be **unlinked and merged** before placing the final chunk in the **unsorted** bin.

For each fast bin chunk a couple of security checks are performed:

* If the chunk is misaligned, trigger: `malloc_consolidate(): unaligned fastbin chunk detected`
* If the chunk has a different size than the one it should have because of the index it's in: `malloc_consolidate(): invalid chunk size`
* If the previous chunk is not in use and the previous chunk's size differs from the one indicated by `prev_size`: `corrupted size vs. prev_size in fastbins`

A consolidation sketch follows the source below.

<details>

<summary>malloc_consolidate function</summary>
```c
// https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L4810C1-L4905C2

static void malloc_consolidate(mstate av)
{
mfastbinptr*    fb;                 /* current fastbin being consolidated */
mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
mchunkptr       p;                  /* current chunk being consolidated */
mchunkptr       nextp;              /* next chunk to consolidate */
mchunkptr       unsorted_bin;       /* bin header */
mchunkptr       first_unsorted;     /* chunk to link to */

/* These have same use as in free() */
mchunkptr       nextchunk;
INTERNAL_SIZE_T size;
INTERNAL_SIZE_T nextsize;
INTERNAL_SIZE_T prevsize;
int             nextinuse;

atomic_store_relaxed (&av->have_fastchunks, false);

unsorted_bin = unsorted_chunks(av);

/*
Remove each chunk from fast bin and consolidate it, placing it
then in unsorted bin. Among other reasons for doing this,
placing in unsorted bin avoids needing to calculate actual bins
until malloc is sure that chunks aren't immediately going to be
reused anyway.
*/

maxfb = &fastbin (av, NFASTBINS - 1);
fb = &fastbin (av, 0);
do {
p = atomic_exchange_acquire (fb, NULL);
if (p != 0) {
do {
{
if (__glibc_unlikely (misaligned_chunk (p)))
malloc_printerr ("malloc_consolidate(): "
"unaligned fastbin chunk detected");

unsigned int idx = fastbin_index (chunksize (p));
if ((&fastbin (av, idx)) != fb)
malloc_printerr ("malloc_consolidate(): invalid chunk size");
}

check_inuse_chunk(av, p);
nextp = REVEAL_PTR (p->fd);

/* Slightly streamlined version of consolidation code in free() */
size = chunksize (p);
nextchunk = chunk_at_offset(p, size);
nextsize = chunksize(nextchunk);

if (!prev_inuse(p)) {
prevsize = prev_size (p);
size += prevsize;
p = chunk_at_offset(p, -((long) prevsize));
if (__glibc_unlikely (chunksize(p) != prevsize))
malloc_printerr ("corrupted size vs. prev_size in fastbins");
unlink_chunk (av, p);
}

if (nextchunk != av->top) {
nextinuse = inuse_bit_at_offset(nextchunk, nextsize);

if (!nextinuse) {
size += nextsize;
unlink_chunk (av, nextchunk);
} else
clear_inuse_bit_at_offset(nextchunk, 0);

first_unsorted = unsorted_bin->fd;
unsorted_bin->fd = p;
first_unsorted->bk = p;

if (!in_smallbin_range (size)) {
p->fd_nextsize = NULL;
p->bk_nextsize = NULL;
}

set_head(p, size | PREV_INUSE);
p->bk = unsorted_bin;
p->fd = first_unsorted;
set_foot(p, size);
}

else {
size += nextsize;
set_head(p, size | PREV_INUSE);
av->top = p;
}

} while ( (p = nextp) != 0);

}
} while (fb++ != maxfb);
}
```

</details>
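A hedged sketch (default tcache/fastbin settings assumed) of consolidation triggered by a large request: `p[7]` and `p[8]` are adjacent freed fast chunks, so they merge into a single `0x60` chunk on their way to the unsorted bin, and that merged chunk later backs a matching request (the intermediate bin states are easiest to confirm with gdb/pwndbg `heap bins`):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p[9];
    for (int i = 0; i < 9; i++)
        p[i] = malloc(0x20);      /* adjacent 0x30 chunks                  */
    malloc(0x10);                 /* guard: stop merge with the top chunk  */
    for (int i = 0; i < 9; i++)
        free(p[i]);               /* 7 -> tcache, p[7]/p[8] -> fast bin    */

    /* Large request (not in smallbin range): malloc_consolidate() merges
       p[7] and p[8] into one 0x60 chunk placed in the unsorted bin, which
       is then sorted into its small bin.                                  */
    void *big = malloc(0x400);

    void *q = malloc(0x50);       /* 0x60 chunk: expected to reuse p[7]    */
    printf("p[7]=%p q=%p big=%p\n", p[7], q, big);
    return 0;
}
```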

### Unsorted bin

It's time to check the unsorted bin for a potential valid chunk to use.

#### Start

This starts with a big loop that will traverse the unsorted bin in the `bk` direction until it arrives to the end (the arena struct) with `while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))`

Moreover, some security checks are performed every time a new chunk is considered:

* If the chunk size is weird (too small or too big): `malloc(): invalid size (unsorted)`
* If the next chunk size is weird (too small or too big): `malloc(): invalid next size (unsorted)`
* If the previous size indicated by the next chunk differs from the size of the chunk: `malloc(): mismatching next->prev_size (unsorted)`
* If not `victim->bk->fd == victim` or not `victim->fd == unsorted_chunks (av)` (the arena): `malloc(): unsorted double linked list corrupted`
  * As we are always checking the last one, its `fd` should always be pointing to the arena struct.
* If the next chunk isn't indicating that the previous is in use: `malloc(): invalid next->prev_inuse (unsorted)`
<details>

<summary>_int_malloc unsorted bin start</summary>
```c
  /*
     Process recently freed or remaindered chunks, taking one only if
     it is exact fit, or, if this a small request, the chunk is remainder from
     the most recent non-exact fit.  Place other traversed chunks in
     bins.  Note that this step is the only place in any routine where
     chunks are placed in bins.

     The outer loop here is needed because we might not realize until
     near the end of malloc that we should have consolidated, so must
     do so and retry. This happens at most once, and only when we would
     otherwise need to expand memory to service a "small" request.
   */

#if USE_TCACHE
  INTERNAL_SIZE_T tcache_nb = 0;
  size_t tc_idx = csize2tidx (nb);
  if (tcache != NULL && tc_idx < mp_.tcache_bins)
    tcache_nb = nb;
  int return_cached = 0;

  tcache_unsorted_count = 0;
#endif

  for (;; )
    {
      int iters = 0;
      while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
        {
          bck = victim->bk;
          size = chunksize (victim);
          mchunkptr next = chunk_at_offset (victim, size);

          if (__glibc_unlikely (size <= CHUNK_HDR_SZ)
              || __glibc_unlikely (size > av->system_mem))
            malloc_printerr ("malloc(): invalid size (unsorted)");
          if (__glibc_unlikely (chunksize_nomask (next) < CHUNK_HDR_SZ)
              || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
            malloc_printerr ("malloc(): invalid next size (unsorted)");
          if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
            malloc_printerr ("malloc(): mismatching next->prev_size (unsorted)");
          if (__glibc_unlikely (bck->fd != victim)
              || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
            malloc_printerr ("malloc(): unsorted double linked list corrupted");
          if (__glibc_unlikely (prev_inuse (next)))
            malloc_printerr ("malloc(): invalid next->prev_inuse (unsorted)");
```

</details>

#### if `in_smallbin_range`

If the request is small, the chunk is the only one in the unsorted bin, it is the `last_remainder` and it's bigger than the requested size: use it, set the rest of the chunk's space back into the unsorted list and update `last_remainder` with it.

<details>

<summary><code>_int_malloc</code> unsorted bin <code>in_smallbin_range</code></summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4090C11-L4124C14

/*
If a small request, try to use last remainder if it is the
only chunk in unsorted bin.  This helps promote locality for
runs of consecutive small requests. This is the only
exception to best-fit, and applies only when there is
no exact fit for a small chunk.
*/

if (in_smallbin_range (nb) &&
bck == unsorted_chunks (av) &&
victim == av->last_remainder &&
(unsigned long) (size) > (unsigned long) (nb + MINSIZE))
{
/* split and reattach remainder */
remainder_size = size - nb;
remainder = chunk_at_offset (victim, nb);
unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
av->last_remainder = remainder;
remainder->bk = remainder->fd = unsorted_chunks (av);
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}

set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);

check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
```

</details>

If this was successful, return the chunk and it's over; if not, continue executing the function...
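Before moving on, here is a hedged sketch of the `last_remainder` split just described (glibc defaults assumed; `0x420` is chosen to be too big for the tcache, so the freed chunk lands in the unsorted bin): consecutive small requests are carved from the same remainder, giving contiguous allocations:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x420);   /* too big for tcache -> unsorted on free */
    malloc(0x10);              /* guard: stop merge with the top chunk   */
    free(a);

    void *b = malloc(0x100);   /* splits a's chunk; expected b == a      */
    void *c = malloc(0x100);   /* carved from last_remainder:
                                  expected c == a + 0x110                */

    printf("a=%p\nb=%p\nc=%p (expect a+0x110)\n", a, b, c);
    return 0;
}
```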

#### if the size is equal

Continue removing the chunk from the bin, in case the requested size is exactly the one of the chunk:

* If the tcache is not filled, add it to the tcache and continue, indicating that there is a tcache chunk that could be used
* If the tcache is full, just use it, returning it
<details>

<summary>_int_malloc unsorted bin equal size</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4126C11-L4157C14

          /* remove from unsorted list */
          unsorted_chunks (av)->bk = bck;
          bck->fd = unsorted_chunks (av);

          /* Take now instead of binning if exact fit */

          if (size == nb)
            {
              set_inuse_bit_at_offset (victim, size);
              if (av != &main_arena)
                set_non_main_arena (victim);
#if USE_TCACHE
              /* Fill cache first, return to user only if cache fills.
                 We may return one of these chunks later.  */
              if (tcache_nb > 0
                  && tcache->counts[tc_idx] < mp_.tcache_count)
                {
                  tcache_put (victim, tc_idx);
                  return_cached = 1;
                  continue;
                }
              else
                {
#endif
                  check_malloced_chunk (av, victim, nb);
                  void *p = chunk2mem (victim);
                  alloc_perturb (p, bytes);
                  return p;
#if USE_TCACHE
                }
#endif
            }
```

</details>

If the chunk isn't returned or added to the tcache, continue with the code...

#### place chunk in a bin

Store the checked chunk in the small bin or in the large bin according to the size of the chunk (keeping the large bin properly organized).

There are security checks being performed to make sure the large bin doubly linked lists aren't corrupted:

* If `fwd->bk_nextsize->fd_nextsize != fwd`: `malloc(): largebin double linked list corrupted (nextsize)`
* If `fwd->bk->fd != fwd`: `malloc(): largebin double linked list corrupted (bk)`

<details>

<summary><code>_int_malloc</code> place chunk in a bin</summary>
```c
/* place chunk in bin */

if (in_smallbin_range (size))
{
victim_index = smallbin_index (size);
bck = bin_at (av, victim_index);
fwd = bck->fd;
}
else
{
victim_index = largebin_index (size);
bck = bin_at (av, victim_index);
fwd = bck->fd;

/* maintain large bins in sorted order */
if (fwd != bck)
{
/* Or with inuse bit to speed comparisons */
size |= PREV_INUSE;
/* if smaller than smallest, bypass loop below */
assert (chunk_main_arena (bck->bk));
if ((unsigned long) (size)
< (unsigned long) chunksize_nomask (bck->bk))
{
fwd = bck;
bck = bck->bk;

victim->fd_nextsize = fwd->fd;
victim->bk_nextsize = fwd->fd->bk_nextsize;
fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
}
else
{
assert (chunk_main_arena (fwd));
while ((unsigned long) size < chunksize_nomask (fwd))
{
fwd = fwd->fd_nextsize;
assert (chunk_main_arena (fwd));
}

if ((unsigned long) size
== (unsigned long) chunksize_nomask (fwd))
/* Always insert in the second position.  */
fwd = fwd->fd;
else
{
victim->fd_nextsize = fwd;
victim->bk_nextsize = fwd->bk_nextsize;
if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");
fwd->bk_nextsize = victim;
victim->bk_nextsize->fd_nextsize = victim;
}
bck = fwd->bk;
if (bck->fd != fwd)
malloc_printerr ("malloc(): largebin double linked list corrupted (bk)");
}
}
else
victim->fd_nextsize = victim->bk_nextsize = victim;
}

mark_bin (av, victim_index);
victim->bk = bck;
victim->fd = fwd;
fwd->bk = victim;
bck->fd = victim;
```

</details>

### `_int_malloc` limits

At this point, if some chunk was stored in the tcache that can be used and the limit is reached, just **return a tcache chunk**.

Moreover, if **MAX_ITERS** is reached, break from the loop and get a chunk in a different way (top chunk).

If `return_cached` was set, just return a chunk from the tcache to avoid larger searches.

<details>

<summary>_int_malloc limits</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4227C1-L4250C7

#if USE_TCACHE
      /* If we've processed as many chunks as we're allowed while
         filling the cache, return one of the cached ones.  */
      ++tcache_unsorted_count;
      if (return_cached
          && mp_.tcache_unsorted_limit > 0
          && tcache_unsorted_count > mp_.tcache_unsorted_limit)
        {
          return tcache_get (tc_idx);
        }
#endif

#define MAX_ITERS       10000
      if (++iters >= MAX_ITERS)
        break;
    }

#if USE_TCACHE
  /* If all the small chunks we found ended up cached, return one now.  */
  if (return_cached)
    {
      return tcache_get (tc_idx);
    }
#endif
```

</details>

If the limits aren't reached, continue with the code...

### Large bin (by index)

If the request is big (not in the small bin range) and we haven't yet returned any chunk, get the **index** of the requested size in the **large bin**, check if it's **not empty** and if the **biggest chunk in that bin is bigger** than the requested size, and in that case find the **smallest chunk that can be used** for the requested size.

If the remaining space from the finally used chunk can be a new chunk, add it to the unsorted bin and `last_remainder` is updated.

A security check is performed when adding the remainder to the unsorted bin:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks`

<details>

<summary><code>_int_malloc</code> Large bin (by index)</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4252C7-L4317C10

/*
If a large request, scan through the chunks of current bin in
sorted order to find smallest that fits.  Use the skip list for this.
*/

if (!in_smallbin_range (nb))
{
bin = bin_at (av, idx);

/* skip scan if empty or largest chunk is too small */
if ((victim = first (bin)) != bin
&& (unsigned long) chunksize_nomask (victim)
>= (unsigned long) (nb))
{
victim = victim->bk_nextsize;
while (((unsigned long) (size = chunksize (victim)) <
(unsigned long) (nb)))
victim = victim->bk_nextsize;

/* Avoid removing the first entry for a size so that the skip
list does not have to be rerouted.  */
if (victim != last (bin)
&& chunksize_nomask (victim)
== chunksize_nomask (victim->fd))
victim = victim->fd;

remainder_size = size - nb;
unlink_chunk (av, victim);

/* Exhaust */
if (remainder_size < MINSIZE)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
}
/* Split */
else
{
remainder = chunk_at_offset (victim, nb);
/* We cannot assume the unsorted list is empty and therefore
have to perform a complete insert here.  */
bck = unsorted_chunks (av);
fwd = bck->fd;
if (__glibc_unlikely (fwd->bk != bck))
malloc_printerr ("malloc(): corrupted unsorted chunks");
remainder->bk = bck;
remainder->fd = fwd;
bck->fd = remainder;
fwd->bk = remainder;
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}
set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);
}
check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}
```

</details>
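A hedged sketch of the best-fit scan (glibc defaults; sizes chosen so both chunks are beyond tcache range and land in the same large bin after a sorting pass): the smallest bin entry that satisfies the request is picked, not the biggest:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x420);   /* 0x430 chunk                        */
    malloc(0x10);              /* guard                              */
    void *b = malloc(0x410);   /* 0x420 chunk, same large bin as a   */
    malloc(0x10);              /* guard                              */

    free(a);
    free(b);                   /* both too big for tcache -> unsorted */

    malloc(0x500);             /* sorting pass: both into the large bin,
                                  kept in descending size order        */

    void *c = malloc(0x3f8);   /* best fit: expected c == b (0x420),
                                  not the bigger a (0x430)             */
    printf("a=%p b=%p c=%p\n", a, b, c);
    return 0;
}
```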

If a chunk isn't found suitable for this, continue

### Large bin (next bigger)

If in the exact large bin there wasn't any chunk that could be used, start looping through all the next large bins (starting from the immediately bigger one) until one is found (if any).

The remainder of the split chunk is added to the unsorted bin, `last_remainder` is updated and the same security check is performed:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks 2`
<details>

<summary>_int_malloc Large bin (next bigger)</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4319C7-L4425C10

  /*
     Search for a chunk by scanning bins, starting with next largest
     bin. This search is strictly by best-fit; i.e., the smallest
     (with ties going to approximately the least recently used) chunk
     that fits is selected.

     The bitmap avoids needing to check that most blocks are nonempty.
     The particular case of skipping all bins during warm-up phases
     when no chunks have been returned yet is faster than it might look.
   */

  ++idx;
  bin = bin_at (av, idx);
  block = idx2block (idx);
  map = av->binmap[block];
  bit = idx2bit (idx);

  for (;; )
    {
      /* Skip rest of block if there are no more set bits in this block.  */
      if (bit > map || bit == 0)
        {
          do
            {
              if (++block >= BINMAPSIZE) /* out of bins */
                goto use_top;
            }
          while ((map = av->binmap[block]) == 0);

          bin = bin_at (av, (block << BINMAPSHIFT));
          bit = 1;
        }

      /* Advance to bin with set bit. There must be one. */
      while ((bit & map) == 0)
        {
          bin = next_bin (bin);
          bit <<= 1;
          assert (bit != 0);
        }

      /* Inspect the bin. It is likely to be non-empty */
      victim = last (bin);

      /*  If a false alarm (empty bin), clear the bit. */
      if (victim == bin)
        {
          av->binmap[block] = map &= ~bit; /* Write through */
          bin = next_bin (bin);
          bit <<= 1;
        }

      else
        {
          size = chunksize (victim);

          /*  We know the first chunk in this bin is big enough to use. */
          assert ((unsigned long) (size) >= (unsigned long) (nb));

          remainder_size = size - nb;

          /* unlink */
          unlink_chunk (av, victim);

          /* Exhaust */
          if (remainder_size < MINSIZE)
            {
              set_inuse_bit_at_offset (victim, size);
              if (av != &main_arena)
                set_non_main_arena (victim);
            }

          /* Split */
          else
            {
              remainder = chunk_at_offset (victim, nb);

              /* We cannot assume the unsorted list is empty and therefore
                 have to perform a complete insert here.  */
              bck = unsorted_chunks (av);
              fwd = bck->fd;
              if (__glibc_unlikely (fwd->bk != bck))
                malloc_printerr ("malloc(): corrupted unsorted chunks 2");
              remainder->bk = bck;
              remainder->fd = fwd;
              bck->fd = remainder;
              fwd->bk = remainder;

              /* advertise as last remainder */
              if (in_smallbin_range (nb))
                av->last_remainder = remainder;
              if (!in_smallbin_range (remainder_size))
                {
                  remainder->fd_nextsize = NULL;
                  remainder->bk_nextsize = NULL;
                }
              set_head (victim, nb | PREV_INUSE |
                        (av != &main_arena ? NON_MAIN_ARENA : 0));
              set_head (remainder, remainder_size | PREV_INUSE);
              set_foot (remainder, remainder_size);
            }
          check_malloced_chunk (av, victim, nb);
          void *p = chunk2mem (victim);
          alloc_perturb (p, bytes);
          return p;
        }
    }
```

</details>
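A hedged sketch of the binmap scan (glibc defaults assumed): a small request with empty small bins is eventually served from a previously sorted large-bin chunk found through the binmap, and the split remainder becomes the new `last_remainder`:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x420);   /* 0x430 chunk: beyond tcache range    */
    malloc(0x10);              /* guard                               */
    free(a);                   /* -> unsorted bin                     */

    malloc(0x500);             /* sorting pass: a -> its large bin;
                                  the request itself comes from top   */

    /* Small request: tcache/fast/small/unsorted bins are all empty,
       so the binmap walk finds a's large bin and splits it.          */
    void *c = malloc(0x1f8);   /* expected: c == a                    */
    printf("a=%p c=%p\n", a, c);
    return 0;
}
```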

### Top Chunk

At this point, it's time to get a new chunk from the Top chunk (if big enough).

It starts with a security check making sure that the size of the chunk is not too big (corrupted):

* `chunksize(av->top) > av->system_mem`: `malloc(): corrupted top size`

Then, it'll use the top chunk space if it's big enough to create a chunk of the requested size.\
If not, if there are fast chunks, consolidate them and try again.\
Finally, if there isn't enough space, use `sysmalloc` to allocate enough size.

<details>

<summary><code>_int_malloc</code> Top chunk</summary>
```c
use_top:
/*
If large enough, split off the chunk bordering the end of memory
(held in av->top). Note that this is in accord with the best-fit
search rule.  In effect, av->top is treated as larger (and thus
less well fitting) than any other available chunk since it can
be extended to be as large as necessary (up to system
limitations).

We require that av->top always exists (i.e., has size >=
MINSIZE) after initialization, so if it would otherwise be
exhausted by current request, it is replenished. (The main
reason for ensuring it exists is that we may need MINSIZE space
to put in fenceposts in sysmalloc.)
*/

victim = av->top;
size = chunksize (victim);

if (__glibc_unlikely (size > av->system_mem))
malloc_printerr ("malloc(): corrupted top size");

if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
{
remainder_size = size - nb;
remainder = chunk_at_offset (victim, nb);
av->top = remainder;
set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);

check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}

/* When we are using atomic ops to free fast chunks we can get
here for all block sizes.  */
else if (atomic_load_relaxed (&av->have_fastchunks))
{
malloc_consolidate (av);
/* restore original bin index */
if (in_smallbin_range (nb))
idx = smallbin_index (nb);
else
idx = largebin_index (nb);
}

/*
Otherwise, relay to handle system-dependent cases
*/
else
{
void *p = sysmalloc (nb, av);
if (p != NULL)
alloc_perturb (p, bytes);
return p;
}
}
}
```

</details>
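A minimal sketch of the top chunk path: in a fresh heap with empty bins, requests are split off the top chunk, so successive allocations are adjacent in memory (assuming nothing else allocates in between):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x100);   /* 0x110 chunk split off the top        */
    void *b = malloc(0x100);   /* next split: expected b == a + 0x110  */

    printf("a=%p b=%p delta=0x%lx\n", a, b,
           (unsigned long)((char *)b - (char *)a));
    return 0;
}
```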

## sysmalloc

### sysmalloc start

If arena is null or the requested size is too big (and there are mmaps left permitted), use `sysmalloc_mmap` to allocate space and return it. A demo of the mmap path follows the source below.

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2531

/*
   sysmalloc handles malloc cases requiring more memory from the
   system.  On entry, it is assumed that av->top does not have enough
   space to service request for nb bytes, thus requiring that av->top
   be extended or replaced.
 */

static void *
sysmalloc (INTERNAL_SIZE_T nb, mstate av)
{
  mchunkptr old_top;              /* incoming value of av->top */
  INTERNAL_SIZE_T old_size;       /* its size */
  char *old_end;                  /* its end address */

  long size;                      /* arg to first MORECORE or mmap call */
  char *brk;                      /* return value from MORECORE */

  long correction;                /* arg to 2nd MORECORE call */
  char *snd_brk;                  /* 2nd return val */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
  INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
  char *aligned_brk;              /* aligned offset into brk */

  mchunkptr p;                    /* the allocated/returned chunk */
  mchunkptr remainder;            /* remainder from allocation */
  unsigned long remainder_size;   /* its size */

  size_t pagesize = GLRO (dl_pagesize);
  bool tried_mmap = false;

  /*
     If have mmap, and the request size meets the mmap threshold, and
     the system supports mmap, and there are few enough currently
     allocated mmapped regions, try to directly map this request
     rather than expanding top.
   */

  if (av == NULL
      || ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
          && (mp_.n_mmaps < mp_.n_mmaps_max)))
    {
      char *mm;
      if (mp_.hp_pagesize > 0 && nb >= mp_.hp_pagesize)
        {
          /* There is no need to issue the THP madvise call if Huge Pages are
             used directly.  */
          mm = sysmalloc_mmap (nb, mp_.hp_pagesize, mp_.hp_flags, av);
          if (mm != MAP_FAILED)
            return mm;
        }
      mm = sysmalloc_mmap (nb, pagesize, 0, av);
      if (mm != MAP_FAILED)
        return mm;
      tried_mmap = true;
    }

  /* There are no usable arenas and mmap also failed.  */
  if (av == NULL)
    return 0;
```
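A minimal sketch of the mmap threshold (default `mp_.mmap_threshold` starts at 128 KiB): requests at or above it are served by `sysmalloc_mmap`, and the returned chunk carries the `IS_MMAPPED` bit (0x2) in its size field:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *small = malloc(0x100);
    void *big   = malloc(0x40000);   /* 256 KiB >= default mmap threshold */

    size_t hdr = ((size_t *)big)[-1]; /* chunk size field of big           */
    printf("small=%p big=%p IS_MMAPPED=%zu\n",
           small, big, hdr & 2);      /* bit 0x2 set => mmapped chunk      */
    return 0;
}
```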

<details>

<summary>sysmalloc checks</summary>

It starts by getting old top chunk information and checking that some of the following conditions are true:

* The old heap size is 0 (new heap)
* The size of the previous heap is greater than MINSIZE and the old top is in use
* The heap is aligned to page size (0x1000, so the lower 12 bits need to be 0)

Then it also checks that:

* The old size hasn't enough space to create a chunk of the requested size

</details>
```c
/* Record incoming configuration of top */

old_top = av->top;
old_size = chunksize (old_top);
old_end = (char *) (chunk_at_offset (old_top, old_size));

brk = snd_brk = (char *) (MORECORE_FAILURE);

/*
If not the first time through, we require old_size to be
at least MINSIZE and to have prev_inuse set.
*/

assert ((old_top == initial_top (av) && old_size == 0) ||
((unsigned long) (old_size) >= MINSIZE &&
prev_inuse (old_top) &&
((unsigned long) old_end & (pagesize - 1)) == 0));

/* Precondition: not enough current space to satisfy nb request */
assert ((unsigned long) (old_size) < (unsigned long) (nb + MINSIZE));
```

### sysmalloc not main arena

It'll first try to **extend** the previous heap for this arena. If that's not possible, it'll try to **allocate a new heap** and update the pointers to be able to use it.\
Finally, if that didn't work, try calling `sysmalloc_mmap`.

<details>

<summary>sysmalloc not main arena</summary>
```c
  if (av != &main_arena)
    {
      heap_info *old_heap, *heap;
      size_t old_heap_size;

      /* First try to extend the current heap. */
      old_heap = heap_for_ptr (old_top);
      old_heap_size = old_heap->size;
      if ((long) (MINSIZE + nb - old_size) > 0
          && grow_heap (old_heap, MINSIZE + nb - old_size) == 0)
        {
          av->system_mem += old_heap->size - old_heap_size;
          set_head (old_top, (((char *) old_heap + old_heap->size) - (char *) old_top)
                    | PREV_INUSE);
        }
      else if ((heap = new_heap (nb + (MINSIZE + sizeof (*heap)), mp_.top_pad)))
        {
          /* Use a newly allocated heap.  */
          heap->ar_ptr = av;
          heap->prev = old_heap;
          av->system_mem += heap->size;
          /* Set up the new top.  */
          top (av) = chunk_at_offset (heap, sizeof (*heap));
          set_head (top (av), (heap->size - sizeof (*heap)) | PREV_INUSE);

          /* Setup fencepost and free the old top chunk with a multiple of
             MALLOC_ALIGNMENT in size. */
          /* The fencepost takes at least MINSIZE bytes, because it might
             become the top chunk again later.  Note that a footer is set
             up, too, although the chunk is marked in use. */
          old_size = (old_size - MINSIZE) & ~MALLOC_ALIGN_MASK;
          set_head (chunk_at_offset (old_top, old_size + CHUNK_HDR_SZ),
                    0 | PREV_INUSE);
          if (old_size >= MINSIZE)
            {
              set_head (chunk_at_offset (old_top, old_size),
                        CHUNK_HDR_SZ | PREV_INUSE);
              set_foot (chunk_at_offset (old_top, old_size), CHUNK_HDR_SZ);
              set_head (old_top, old_size | PREV_INUSE | NON_MAIN_ARENA);
              _int_free (av, old_top, 1);
            }
          else
            {
              set_head (old_top, (old_size + CHUNK_HDR_SZ) | PREV_INUSE);
              set_foot (old_top, (old_size + CHUNK_HDR_SZ));
            }
        }
      else if (!tried_mmap)
        {
          /* We can at least try to use to mmap memory.  If new_heap fails
             it is unlikely that trying to allocate huge pages will
             succeed.  */
          char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
          if (mm != MAP_FAILED)
            return mm;
        }
    }
```

</details>
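A hedged sketch of where these per-thread heaps show up in practice (build with `-pthread`): the main thread's chunk comes from the brk heap of the main arena, while a secondary thread's chunk typically lives in an mmap'ed heap of its own arena, far away in the address space:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *worker(void *arg)
{
    void *p = malloc(0x100);
    printf("thread arena chunk: %p\n", p);   /* usually an mmap'ed heap */
    return NULL;
}

int main(void)
{
    void *m = malloc(0x100);
    printf("main arena chunk:   %p\n", m);   /* brk heap */

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}
```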

### sysmalloc main arena

It starts calculating the amount of memory needed. It'll start by requesting contiguous memory so in this case it'll be possible to reuse the old memory not being used. Also some alignment operations are performed.

<details>

<summary>sysmalloc main arena</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2665C1-L2713C10

else     /* av == main_arena */


{ /* Request enough space for nb + pad + overhead */
size = nb + mp_.top_pad + MINSIZE;

/*
If contiguous, we can subtract out existing space that we hope to
combine with new space. We add it back later only if
we don't actually get contiguous space.
*/

if (contiguous (av))
size -= old_size;

/*
Round to a multiple of page size or huge page size.
If MORECORE is not contiguous, this ensures that we only call it
with whole-page arguments.  And if MORECORE is contiguous and
this is not first time through, this preserves page-alignment of
previous calls. Otherwise, we correct to page-align below.
*/

#ifdef MADV_HUGEPAGE
/* Defined in brk.c.  */
extern void *__curbrk;
if (__glibc_unlikely (mp_.thp_pagesize != 0))
{
uintptr_t top = ALIGN_UP ((uintptr_t) __curbrk + size,
mp_.thp_pagesize);
size = top - (uintptr_t) __curbrk;
}
else
#endif
size = ALIGN_UP (size, GLRO(dl_pagesize));

/*
Don't try to call MORECORE if argument is so big as to appear
negative. Note that since mmap takes size_t arg, it may succeed
below even if we cannot call MORECORE.
*/

if (size > 0)
{
brk = (char *) (MORECORE (size));
if (brk != (char *) (MORECORE_FAILURE))
madvise_thp (brk, size);
LIBC_PROBE (memory_sbrk_more, 2, brk, size);
}
```

</details>

### sysmalloc main arena previous error 1

If the previous returned `MORECORE_FAILURE`, try again to allocate memory using `sysmalloc_mmap_fallback`

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2715C7-L2740C10

if (brk == (char *) (MORECORE_FAILURE))
{
/*
If have mmap, try using it as a backup when MORECORE fails or
cannot be used. This is worth doing on systems that have "holes" in
address space, so sbrk cannot extend to give contiguous space, but
space is available elsewhere.  Note that we ignore mmap max count
and threshold limits, since the space will not be used as a
segregated mmap region.
*/

char *mbrk = MAP_FAILED;
if (mp_.hp_pagesize > 0)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size,
mp_.hp_pagesize, mp_.hp_pagesize,
mp_.hp_flags, av);
if (mbrk == MAP_FAILED)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, MMAP_AS_MORECORE_SIZE,
pagesize, 0, av);
if (mbrk != MAP_FAILED)
{
/* We do not need, and cannot use, another sbrk call to find end */
brk = mbrk;
snd_brk = brk + size;
}
}
```

### sysmalloc main arena continue

If the previous call didn't return `MORECORE_FAILURE`, and it worked, perform some alignment adjustments:

<details>

<summary>sysmalloc main arena previous error 2</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2742

  if (brk != (char *) (MORECORE_FAILURE))
    {
      if (mp_.sbrk_base == 0)
        mp_.sbrk_base = brk;
      av->system_mem += size;

      /*
         If MORECORE extends previous space, we can likewise extend top size.
       */

      if (brk == old_end && snd_brk == (char *) (MORECORE_FAILURE))
        set_head (old_top, (size + old_size) | PREV_INUSE);

      else if (contiguous (av) && old_size && brk < old_end)
        /* Oops!  Someone else killed our space..  Can't touch anything.  */
        malloc_printerr ("break adjusted to free malloc space");

      /*
         Otherwise, make adjustments:

       * If the first time through or noncontiguous, we need to call sbrk
          just to find out where the end of memory lies.

       * We need to ensure that all returned chunks from malloc will meet
          MALLOC_ALIGNMENT

       * If there was an intervening foreign sbrk, we need to adjust sbrk
          request size to account for fact that we will not be able to
          combine new space with existing space in old_top.

       * Almost all systems internally allocate whole pages at a time, in
          which case we might as well use the whole last page of request.
          So we allocate enough more memory to hit a page boundary now,
          which in turn causes future contiguous calls to page-align.
       */

      else
        {
          front_misalign = 0;
          end_misalign = 0;
          correction = 0;
          aligned_brk = brk;

          /* handle contiguous cases */
          if (contiguous (av))
            {
              /* Count foreign sbrk as system_mem.  */
              if (old_size)
                av->system_mem += brk - old_end;

              /* Guarantee alignment of first new chunk made from this space */

              front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
              if (front_misalign > 0)
                {
                  /*
                     Skip over some bytes to arrive at an aligned position.
                     We don't need to specially mark these wasted front bytes.
                     They will never be accessed anyway because
                     prev_inuse of av->top (and any chunk created from its start)
                     is always true after initialization.
                   */

                  correction = MALLOC_ALIGNMENT - front_misalign;
                  aligned_brk += correction;
                }

              /*
                 If this isn't adjacent to existing space, then we will not
                 be able to merge with old_top space, so must add to 2nd request.
               */

              correction += old_size;

              /* Extend the end address to hit a page boundary */
              end_misalign = (INTERNAL_SIZE_T) (brk + size + correction);
              correction += (ALIGN_UP (end_misalign, pagesize)) - end_misalign;

              assert (correction >= 0);
              snd_brk = (char *) (MORECORE (correction));

              /*
                 If can't allocate correction, try to at least find out current
                 brk.  It might be enough to proceed without failing.

                 Note that if second sbrk did NOT fail, we assume that space
                 is contiguous with first sbrk. This is a safe assumption unless
                 program is multithreaded but doesn't use locks and a foreign sbrk
                 occurred between our first and second calls.
               */

              if (snd_brk == (char *) (MORECORE_FAILURE))
                {
                  correction = 0;
                  snd_brk = (char *) (MORECORE (0));
                }
              else
                madvise_thp (snd_brk, correction);
            }

          /* handle non-contiguous cases */
          else
            {
              if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
                /* MORECORE/mmap must correctly align */
                assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0);
              else
                {
                  front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
                  if (front_misalign > 0)
                    {
                      /*
                         Skip over some bytes to arrive at an aligned position.
                         We don't need to specially mark these wasted front bytes.
                         They will never be accessed anyway because
                         prev_inuse of av->top (and any chunk created from its start)
                         is always true after initialization.
                       */

                      aligned_brk += MALLOC_ALIGNMENT - front_misalign;
                    }
                }

              /* Find out current end of memory */
              if (snd_brk == (char *) (MORECORE_FAILURE))
                {
                  snd_brk = (char *) (MORECORE (0));
                }
            }

          /* Adjust top based on results of second sbrk */
          if (snd_brk != (char *) (MORECORE_FAILURE))
            {
              av->top = (mchunkptr) aligned_brk;
              set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
              av->system_mem += correction;

              /*
                 If not the first time through, we either have a
                 gap due to foreign sbrk or a non-contiguous region.  Insert a
                 double fencepost at old_top to prevent consolidation with space
                 we don't own. These fenceposts are artificial chunks that are
                 marked as inuse and are in any case too small to use.  We need
                 two to make sizes and alignments work out.
               */

              if (old_size != 0)
                {
                  /*
                     Shrink old_top to insert fenceposts, keeping size a
                     multiple of MALLOC_ALIGNMENT. We know there is at least
                     enough space in old_top to do this.
                   */
                  old_size = (old_size - 2 * CHUNK_HDR_SZ) & ~MALLOC_ALIGN_MASK;
                  set_head (old_top, old_size | PREV_INUSE);

                  /*
                     Note that the following assignments completely overwrite
                     old_top when old_size was previously MINSIZE.  This is
                     intentional. We need the fencepost, even if old_top otherwise gets
                     lost.
                   */
                  set_head (chunk_at_offset (old_top, old_size),
                            CHUNK_HDR_SZ | PREV_INUSE);
                  set_head (chunk_at_offset (old_top,
                                             old_size + CHUNK_HDR_SZ),
                            CHUNK_HDR_SZ | PREV_INUSE);

                  /* If possible, release the rest. */
                  if (old_size >= MINSIZE)
                    {
                      _int_free (av, old_top, 1);
                    }
                }
            }
        }
    }
} /* if (av != &main_arena) */
```

</details>

### sysmalloc final allocation

Finish the allocation updating the arena information
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2921C3-L2943C12

if ((unsigned long) av->system_mem > (unsigned long) (av->max_system_mem))
av->max_system_mem = av->system_mem;
check_malloc_state (av);

/* finally, do the allocation */
p = av->top;
size = chunksize (p);

/* check that one of the above allocation paths succeeded */
if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
{
remainder_size = size - nb;
remainder = chunk_at_offset (p, nb);
av->top = remainder;
set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
check_malloced_chunk (av, p, nb);
return chunk2mem (p);
}

/* catch all failure paths */
__set_errno (ENOMEM);
return 0;
```

## sysmalloc_mmap

<details>

<summary>sysmalloc_mmap code</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2392C1-L2481C2

static void *
sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
{
  long int size;

  /*
     Round up size to nearest page.  For mmapped chunks, the overhead is one
     SIZE_SZ unit larger than for normal chunks, because there is no
     following chunk whose prev_size field could be used.

     See the front_misalign handling below, for glibc there is no need for
     further alignments unless we have have high alignment.
   */
  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
  else
    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);

  /* Don't try if size wraps around 0.  */
  if ((unsigned long) (size) <= (unsigned long) (nb))
    return MAP_FAILED;

  char *mm = (char *) MMAP (0, size,
                            mtag_mmap_flags | PROT_READ | PROT_WRITE,
                            extra_flags);
  if (mm == MAP_FAILED)
    return mm;

#ifdef MAP_HUGETLB
  if (!(extra_flags & MAP_HUGETLB))
    madvise_thp (mm, size);
#endif

  __set_vma_name (mm, size, " glibc: malloc");

  /*
     The offset to the start of the mmapped region is stored in the prev_size
     field of the chunk.  This allows us to adjust returned start address to
     meet alignment requirements here and in memalign(), and still be able to
     compute proper address argument for later munmap in free() and realloc().
   */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */

  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    {
      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
      front_misalign = 0;
    }
  else
    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;

  mchunkptr p;                    /* the allocated/returned chunk */

  if (front_misalign > 0)
    {
      ptrdiff_t correction = MALLOC_ALIGNMENT - front_misalign;
      p = (mchunkptr) (mm + correction);
      set_prev_size (p, correction);
      set_head (p, (size - correction) | IS_MMAPPED);
    }
  else
    {
      p = (mchunkptr) mm;
      set_prev_size (p, 0);
      set_head (p, size | IS_MMAPPED);
    }

  /* update statistics */
  int new = atomic_fetch_add_relaxed (&mp_.n_mmaps, 1) + 1;
  atomic_max (&mp_.max_n_mmaps, new);

  unsigned long sum;
  sum = atomic_fetch_add_relaxed (&mp_.mmapped_mem, size) + size;
  atomic_max (&mp_.max_mmapped_mem, sum);

  check_chunk (av, p);

  return chunk2mem (p);
}
```

</details>
