GITBOOK-4325: No subject

This commit is contained in:
CPol 2024-04-29 23:17:49 +00:00 committed by gitbook-bot
parent 7fffdef994
commit 11e343c5e7
3 changed files with 216 additions and 5 deletions

@@ -159,7 +159,7 @@
* [macOS Universal binaries & Mach-O Format](macos-hardening/macos-security-and-privilege-escalation/macos-files-folders-and-binaries/universal-binaries-and-mach-o-format.md)
* [macOS Objective-C](macos-hardening/macos-security-and-privilege-escalation/macos-basic-objective-c.md)
* [macOS Privilege Escalation](macos-hardening/macos-security-and-privilege-escalation/macos-privilege-escalation.md)
* [macOS Process Abuse](macos-hardening/macos-security-and-privilege-escalation/macos-proces-abuse/README.md)
* [macOS Dirty NIB](macos-hardening/macos-security-and-privilege-escalation/macos-proces-abuse/macos-dirty-nib.md)
* [macOS Chromium Injection](macos-hardening/macos-security-and-privilege-escalation/macos-proces-abuse/macos-chromium-injection.md)
* [macOS Electron Applications Injection](macos-hardening/macos-security-and-privilege-escalation/macos-proces-abuse/macos-electron-applications-injection.md)

@@ -16,15 +16,80 @@ Other ways to support HackTricks:
## Basic Information
**Grand Central Dispatch (GCD),** also known as **libdispatch** (`libdispatch.dylib`), is available in both macOS and iOS. It's a technology developed by Apple to optimize application support for concurrent (multithreaded) execution on multicore hardware.
**GCD** provides and manages **FIFO queues** to which your application can **submit tasks** in the form of **block objects**. Blocks submitted to dispatch queues are **executed on a pool of threads** fully managed by the system. GCD automatically creates threads for executing the tasks in the dispatch queues and schedules those tasks to run on the available cores.
{% hint style="success" %}
In summary, to execute code in **parallel**, processes can send **blocks of code to GCD**, which will take care of their execution. Therefore, processes don't create new threads; **GCD executes the given code with its own pool of threads** (which might increase or decrease as necessary).
{% endhint %}
This is very helpful to manage parallel execution successfully, greatly reducing the number of threads processes create and optimising parallel execution. It's ideal for tasks that require **great parallelism** (brute-forcing?) or for tasks that shouldn't block the main thread: for example, the main thread on iOS handles UI interactions, so any other functionality that could make the app hang (searching, accessing the network, reading a file...) is managed this way.
### Blocks
A block is a **self-contained section of code** (like a function with arguments returning a value) and can also specify bound variables.\
However, at the compiler level blocks don't exist; they are `os_object`s. Each of these objects is formed by two structures (sketched in the code after the following list):
* **block literal**:
  * It starts with the **`isa`** field, pointing to the block's class:
    * `NSConcreteGlobalBlock` (blocks from `__DATA.__const`)
    * `NSConcreteMallocBlock` (blocks in the heap)
    * `NSConcreteStackBlock` (blocks on the stack)
  * It has **`flags`** (indicating fields present in the block descriptor) and some reserved bytes
  * The function pointer to call
  * A pointer to the block descriptor
  * Imported block variables (if any)
* **block descriptor**: Its size depends on the data that is present (as indicated in the previous flags)
  * It has some reserved bytes
  * The size of the block
  * It'll usually have a pointer to an Objective-C style signature to know how much space is needed for the params (flag `BLOCK_HAS_SIGNATURE`)
  * If variables are referenced, this block will also have pointers to a copy helper (copying the value at the beginning) and a dispose helper (freeing it)
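A minimal sketch of these two structures, following the layout documented in Clang's public blocks ABI (field names are taken from that spec and are illustrative, not Apple's exact private headers):

```c
// Sketch based on the Clang blocks ABI; field names are illustrative
struct Block_descriptor {
    unsigned long reserved;
    unsigned long size;                          // size of the block literal
    // Optional fields, present only if the matching flags are set:
    void (*copy_helper)(void *dst, void *src);   // BLOCK_HAS_COPY_DISPOSE
    void (*dispose_helper)(void *src);           // BLOCK_HAS_COPY_DISPOSE
    const char *signature;                       // BLOCK_HAS_SIGNATURE (ObjC-style type encoding)
};

struct Block_literal {
    void *isa;                        // _NSConcreteStackBlock / Global / Malloc
    int flags;                        // indicates which descriptor fields exist
    int reserved;
    void (*invoke)(void *, ...);      // function pointer that is actually called
    struct Block_descriptor *descriptor;
    // captured (imported) variables follow here, if any
};
```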
### Queues
A dispatch queue is a named object providing FIFO ordering of blocks for execution.
Blocks are placed in queues to be executed, and queues support 2 modes: `DISPATCH_QUEUE_SERIAL` and `DISPATCH_QUEUE_CONCURRENT`. Of course the **serial** one **won't have race condition** problems, as a block won't be executed until the previous one has finished. But **the other type of queue might have them**.
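A minimal sketch showing the difference between both modes (assuming a C or Objective-C file built with blocks support, e.g. `clang -fblocks`):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    // Serial queue: blocks run strictly one after another
    dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    // Concurrent queue: blocks may run in parallel on GCD's thread pool
    dispatch_queue_t conc = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(serial, ^{ puts("serial 1"); });
    dispatch_async(serial, ^{ puts("serial 2 (always after 1)"); });

    dispatch_async(conc, ^{ puts("concurrent A"); });
    dispatch_async(conc, ^{ puts("concurrent B (may interleave with A)"); });

    // Drain both queues before exiting
    dispatch_sync(serial, ^{});
    dispatch_barrier_sync(conc, ^{});
    return 0;
}
```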
Default queues:
* `.main-thread`: From `dispatch_get_main_queue()`
* `.libdispatch-manager`: GCD's queue manager
* `.root.libdispatch-manager`: GCD's queue manager
* `.root.maintenance-qos`: Lowest priority tasks
* `.root.maintenance-qos.overcommit`
* `.root.background-qos`: Available as `DISPATCH_QUEUE_PRIORITY_BACKGROUND`
* `.root.background-qos.overcommit`
* `.root.utility-qos`: Available as `DISPATCH_QUEUE_PRIORITY_NON_INTERACTIVE`
* `.root.utility-qos.overcommit`
* `.root.default-qos`: Available as `DISPATCH_QUEUE_PRIORITY_DEFAULT`
* `.root.default-qos.overcommit`
* `.root.user-initiated-qos`: Available as `DISPATCH_QUEUE_PRIORITY_HIGH`
* `.root.user-initiated-qos.overcommit`
* `.root.user-interactive-qos`: Highest priority
* `.root.user-interactive-qos.overcommit`
Notice that it's the system that decides **which threads handle which queues at each time** (multiple threads might work on the same queue, or the same thread might work on different queues at some point).
#### Attributes
When creating a queue with **`dispatch_queue_create`**, the third argument is a `dispatch_queue_attr_t`, which is usually either `DISPATCH_QUEUE_SERIAL` (which is actually NULL) or `DISPATCH_QUEUE_CONCURRENT`, a pointer to a `dispatch_queue_attr_t` struct that allows controlling some parameters of the queue.
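A short sketch using the public `dispatch_queue_attr_make_with_qos_class` helper to derive an attribute that also carries a QoS class (the queue labels are illustrative):

```c
#include <dispatch/dispatch.h>

void make_queues(void) {
    // DISPATCH_QUEUE_SERIAL is literally NULL
    dispatch_queue_t q1 = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

    // Derive an attribute from DISPATCH_QUEUE_CONCURRENT that also encodes a QoS class
    dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(
        DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_UTILITY, 0);
    dispatch_queue_t q2 = dispatch_queue_create("com.example.utility", attr);

    (void)q1; (void)q2;
}
```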
### Dispatch objects
Queues and blocks are just 2 of the several kinds of objects that libdispatch uses. It's possible to create these objects with `dispatch_object_create` (a short usage sketch follows the list):
* `block`
* `data`: Data blocks
* `group`: Group of blocks
* `io`: Async I/O requests
* `mach`: Mach ports
* `mach_msg`: Mach messages
* `pthread_root_queue`: A queue with a pthread thread pool and not workqueues
* `queue`
* `semaphore`
* `source`: Event source
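A small sketch using two of these object types, a `source` (here listening for a signal) and a `semaphore`:

```c
#include <dispatch/dispatch.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    dispatch_queue_t q = dispatch_queue_create("com.example.src", DISPATCH_QUEUE_SERIAL);
    dispatch_semaphore_t done = dispatch_semaphore_create(0);      // "semaphore" object

    // "source" object: delivers SIGUSR1 events as blocks on the queue
    dispatch_source_t src = dispatch_source_create(DISPATCH_SOURCE_TYPE_SIGNAL, SIGUSR1, 0, q);
    dispatch_source_set_event_handler(src, ^{
        puts("got SIGUSR1");
        dispatch_semaphore_signal(done);
    });
    dispatch_resume(src);

    signal(SIGUSR1, SIG_IGN);        // let the dispatch source own the signal
    kill(getpid(), SIGUSR1);         // trigger it ourselves
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}
```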
## Objective-C
@@ -167,6 +232,10 @@ Ghidra will automatically rewrite everything:
<figure><img src="../../.gitbook/assets/image (1163).png" alt="" width="563"><figcaption></figcaption></figure>
## References
* [**\*OS Internals, Volume I: User Mode. By Jonathan Levin**](https://www.amazon.com/MacOS-iOS-Internals-User-Mode/dp/099105556X)

@@ -1,4 +1,4 @@
# macOS Process Abuse
<details>
@@ -14,6 +14,148 @@ Other ways to support HackTricks:
</details>
## Processes Basic Information
A process is an instance of a running executable; however, processes don't run code, threads do. Therefore **processes are just containers for running threads**, providing the memory, descriptors, ports, permissions...
Traditionally, processes were started within other processes (except PID 1) by calling **`fork`**, which would create an exact copy of the current process, and then the **child process** would generally call **`execve`** to load the new executable and run it. Then, **`vfork`** was introduced to make this process faster without any memory copying.\
Then **`posix_spawn`** was introduced, combining **`vfork`** and **`execve`** in one call and accepting flags:
* `POSIX_SPAWN_RESETIDS`: Reset effective ids to real ids
* `POSIX_SPAWN_SETPGROUP`: Set process group affiliation
* `POSIX_SPAWN_SETSIGDEF`: Set signal default behaviour
* `POSIX_SPAWN_SETSIGMASK`: Set signal mask
* `POSIX_SPAWN_SETEXEC`: Exec in the same process (like `execve` with more options)
* `POSIX_SPAWN_START_SUSPENDED`: Start suspended
* `_POSIX_SPAWN_DISABLE_ASLR`: Start without ASLR
* `_POSIX_SPAWN_NANO_ALLOCATOR:` Use libmalloc's Nano allocator
* `_POSIX_SPAWN_ALLOW_DATA_EXEC:` Allow `rwx` on data segments
* `POSIX_SPAWN_CLOEXEC_DEFAULT`: Close all file descriptors on exec(2) by default
* `_POSIX_SPAWN_HIGH_BITS_ASLR:` Randomize high bits of ASLR slide
Moreover, `posix_spawn` allows specifying an array of **`posix_spawnattr`** that controls some aspects of the spawned process, and **`posix_spawn_file_actions`** to modify the state of the descriptors.
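A minimal sketch combining an attribute (starting the child suspended) and a file action (redirecting its stdout); the paths and values are just illustrative:

```c
#include <fcntl.h>
#include <signal.h>
#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

int main(void) {
    pid_t pid;
    posix_spawnattr_t attr;
    posix_spawn_file_actions_t actions;

    posix_spawnattr_init(&attr);
    posix_spawn_file_actions_init(&actions);

    // Attribute: start the child suspended (it must be resumed with SIGCONT)
    posix_spawnattr_setflags(&attr, POSIX_SPAWN_START_SUSPENDED);
    // File action: redirect the child's stdout (fd 1) to a file
    posix_spawn_file_actions_addopen(&actions, 1, "/tmp/out.txt",
                                     O_WRONLY | O_CREAT | O_TRUNC, 0644);

    char *argv[] = { "/bin/ls", "-l", NULL };
    if (posix_spawn(&pid, "/bin/ls", &actions, &attr, argv, environ) == 0) {
        kill(pid, SIGCONT);        // resume the suspended child
        waitpid(pid, NULL, 0);     // reap it so it doesn't stay a zombie
    }

    posix_spawn_file_actions_destroy(&actions);
    posix_spawnattr_destroy(&attr);
    return 0;
}
```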
When a process dies it sends the **exit code to the parent process** (if the parent died, the new parent is PID 1) with the signal `SIGCHLD`. The parent needs to retrieve this value by calling `wait4()` or `waitid()`, and until that happens the child stays in a zombie state, where it's still listed but doesn't consume resources.
### PIDs
PIDs, process identifiers, uniquely identify a process. In XNU, **PIDs** are **64 bits**, increase monotonically and **never wrap** (to avoid abuses).
### Process Groups, Sessions & Coalitions
**Processes** can be inserted in **groups** to make it easier to handle them. For example, commands in a shell script will be in the same process group, so it's possible to **signal them together** using kill, for example.\
It's also possible to **group processes in sessions**. When a process starts a session (`setsid(2)`), the child processes are placed inside the session, unless they start their own session.
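A minimal sketch of a child detaching into its own session with `setsid(2)`:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: start a new session (it becomes session and process group leader)
        pid_t sid = setsid();
        printf("child  pid=%d pgid=%d sid=%d\n", getpid(), getpgrp(), sid);
        _exit(0);
    }
    printf("parent pid=%d pgid=%d sid=%d\n", getpid(), getpgrp(), getsid(0));
    waitpid(pid, NULL, 0);   // reap the child so it doesn't linger as a zombie
    return 0;
}
```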
Coalitions are another way to group processes in Darwin. Joining a coalition allows a process to access pooled resources, share a ledger or face Jetsam. Coalitions have different roles: leader, XPC service, extension.
### Credentials & Personae
Each process holds **credentials** that **identify its privileges** in the system. Each process will have one primary `uid` and one primary `gid` (although it might belong to several groups).\
It's also possible to change the user and group id if the binary has the `setuid/setgid` bit.\
There are several functions to **set new uids/gids**.
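For instance, a minimal sketch of a setuid binary permanently dropping its elevated credentials (the conventional order is groups first, then the uid):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("ruid=%d euid=%d rgid=%d egid=%d\n", getuid(), geteuid(), getgid(), getegid());

    // In a setuid/setgid binary, drop the elevated ids back to the real ones
    if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
        perror("failed to drop privileges");
        return 1;
    }
    printf("after drop: euid=%d egid=%d\n", geteuid(), getegid());
    return 0;
}
```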
The syscall **`persona`** provides an **alternate** set of **credentials**. Adopting a persona assumes its uid, gid and group memberships **at once**. In the [**source code**](https://github.com/apple/darwin-xnu/blob/main/bsd/sys/persona.h) it's possible to find the struct:
```c
struct kpersona_info {
    uint32_t persona_info_version;
    uid_t    persona_id; /* overlaps with UID */
    int      persona_type;
    gid_t    persona_gid;
    uint32_t persona_ngroups;
    gid_t    persona_groups[NGROUPS];
    uid_t    persona_gmuid;
    char     persona_name[MAXLOGNAME + 1];

    /* TODO: MAC policies?! */
};
```
## Threads Basic Information
1. **POSIX Threads (pthreads):** macOS supports POSIX threads (`pthreads`), which are part of a standard threading API for C/C++. The implementation of pthreads in macOS is found in `/usr/lib/system/libsystem_pthread.dylib`, which comes from the publicly available `libpthread` project. This library provides the necessary functions to create and manage threads.
2. **Creating Threads:** The `pthread_create()` function is used to create new threads. Internally, this function calls `bsdthread_create()`, which is a lower-level system call specific to the XNU kernel (the kernel macOS is based on). This system call takes various flags derived from `pthread_attr` (attributes) that specify thread behavior, including scheduling policies and stack size.
* **Default Stack Size:** The default stack size for new threads is 512 KB, which is sufficient for typical operations but can be adjusted via thread attributes if more or less space is needed.
3. **Thread Initialization:** The `__pthread_init()` function is crucial during thread setup, utilizing the `env[]` argument to parse environment variables that can include details about the stack's location and size.
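A minimal `pthread_create` sketch illustrating the creation flow described above (the worker function and stack size are just illustrative):

```c
#include <pthread.h>
#include <stdio.h>

// Illustrative worker: just prints its argument
static void *worker(void *arg) {
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);   // override the 512 KB default stack

    pthread_t t;
    pthread_create(&t, &attr, worker, (void *)1);    // internally ends up in bsdthread_create()
    pthread_join(t, NULL);                           // wait for it and collect its return value
    pthread_attr_destroy(&attr);
    return 0;
}
```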
#### Thread Termination in macOS
1. **Exiting Threads:** Threads are typically terminated by calling `pthread_exit()`. This function allows a thread to exit cleanly, performing necessary cleanup and allowing the thread to send a return value back to any joiners.
2. **Thread Cleanup:** Upon calling `pthread_exit()`, the function `pthread_terminate()` is invoked, which handles the removal of all associated thread structures. It deallocates Mach thread ports (Mach is the communication subsystem in the XNU kernel) and calls `bsdthread_terminate`, a syscall that removes the kernel-level structures associated with the thread.
#### Synchronization Mechanisms
To manage access to shared resources and avoid race conditions, macOS provides several synchronization primitives. These are critical in multi-threading environments to ensure data integrity and system stability:
1. **Mutexes:**
* **Regular Mutex (Signature: 0x4D555458):** Standard mutex with a memory footprint of 60 bytes (56 bytes for the mutex and 4 bytes for the signature).
* **Fast Mutex (Signature: 0x4d55545A):** Similar to a regular mutex but optimized for faster operations, also 60 bytes in size.
2. **Condition Variables:**
* Used for waiting for certain conditions to occur, with a size of 44 bytes (40 bytes plus a 4-byte signature).
* **Condition Variable Attributes (Signature: 0x434e4441):** Configuration attributes for condition variables, sized at 12 bytes.
3. **Once Variable (Signature: 0x4f4e4345):**
* Ensures that a piece of initialization code is executed only once. Its size is 12 bytes.
4. **Read-Write Locks:**
* Allows multiple readers or one writer at a time, facilitating efficient access to shared data.
* **Read Write Lock (Signature: 0x52574c4b):** Sized at 196 bytes.
* **Read Write Lock Attributes (Signature: 0x52574c41):** Attributes for read-write locks, 20 bytes in size.
{% hint style="success" %}
The last 4 bytes of those objects are used to detect overflows.
{% endhint %}
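A small sketch using a mutex and a once token from the list above to protect a shared counter across threads:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   // regular mutex
static pthread_once_t  once = PTHREAD_ONCE_INIT;           // "once" variable
static int shared_counter = 0;

static void init_once(void) { puts("runs exactly once"); }

static void *worker(void *arg) {
    (void)arg;
    pthread_once(&once, init_once);
    pthread_mutex_lock(&lock);       // serialize access to avoid a race condition
    shared_counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}
```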
### Thread Local Variables (TLV)
**Thread Local Variables (TLV)** in the context of Mach-O files (the format for executables in macOS) are used to declare variables that are specific to **each thread** in a multi-threaded application. This ensures that each thread has its own separate instance of a variable, providing a way to avoid conflicts and maintain data integrity without needing explicit synchronization mechanisms like mutexes.
In C and related languages, you can declare a thread-local variable using the **`__thread`** keyword. Here's how it works:
```c
__thread int tlv_var;

int main (int argc, char **argv){
    tlv_var = 10;
}
```
This snippet defines `tlv_var` as a thread-local variable. Each thread running this code will have its own `tlv_var`, and changes one thread makes to `tlv_var` will not affect `tlv_var` in another thread.
In the Mach-O binary, the data related to thread local variables is organized into specific sections:
* **`__DATA.__thread_vars`**: This section contains the metadata about the thread-local variables, like their types and initialization status.
* **`__DATA.__thread_bss`**: This section is used for thread-local variables that are not explicitly initialized. It's a part of memory set aside for zero-initialized data.
Mach-O also provides a specific API called **`tlv_atexit`** to manage thread-local variables when a thread exits. This API allows you to **register destructors**—special functions that clean up thread-local data when a thread terminates.
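A short sketch showing that each thread (and `main`) sees its own copy of the variable; the zero-initialized copies live in `__DATA.__thread_bss`:

```c
#include <pthread.h>
#include <stdio.h>

__thread int tlv_var;   // one instance per thread, zero-initialized

static void *worker(void *arg) {
    tlv_var = (int)(long)arg;                    // writes only this thread's copy
    printf("thread %ld sees tlv_var = %d\n", (long)arg, tlv_var);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)1);
    pthread_create(&b, NULL, worker, (void *)2);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("main still sees its own tlv_var = %d\n", tlv_var);   // prints 0
    return 0;
}
```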
### Threading Priorities
Understanding thread priorities involves looking at how the operating system decides which threads to run and when. This decision is influenced by the priority level assigned to each thread. In macOS and Unix-like systems, this is handled using concepts like `nice`, `renice`, and Quality of Service (QoS) classes.
#### Nice and Renice
1. **Nice:**
* The `nice` value of a process is a number that affects its priority. Every process has a nice value ranging from -20 (the highest priority) to 19 (the lowest priority). The default nice value when a process is created is typically 0.
* A lower nice value (closer to -20) makes a process more "selfish," giving it more CPU time compared to other processes with higher nice values.
2. **Renice:**
* `renice` is a command used to change the nice value of an already running process. This can be used to dynamically adjust the priority of processes, either increasing or decreasing their CPU time allocation based on new nice values.
* For example, if a process needs more CPU resources temporarily, you might lower its nice value using `renice`.
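A sketch of reading and lowering a process's own priority with `getpriority`/`setpriority`, the calls that back `nice`/`renice` (the increment is arbitrary):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    // Current nice value of this process: -20 (highest priority) .. 19 (lowest)
    int prio = getpriority(PRIO_PROCESS, 0);
    printf("current nice value: %d\n", prio);

    // Make ourselves "nicer" (lower priority); raising priority back needs root
    if (setpriority(PRIO_PROCESS, 0, prio + 5) == 0)
        printf("new nice value: %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}
```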
#### Quality of Service (QoS) Classes
QoS classes are a more modern approach to handling thread priorities, particularly in systems like macOS that support **Grand Central Dispatch (GCD)**. QoS classes allow developers to **categorize** work into different levels based on their importance or urgency. macOS manages thread prioritization automatically based on these QoS classes:
1. **User Interactive:**
* This class is for tasks that are currently interacting with the user or require immediate results to provide a good user experience. These tasks are given the highest priority to keep the interface responsive (e.g., animations or event handling).
2. **User Initiated:**
* Tasks that the user initiates and expects immediate results, such as opening a document or clicking a button that requires computations. These are high priority but below user interactive.
3. **Utility:**
* These tasks are long-running and typically show a progress indicator (e.g., downloading files, importing data). They are lower in priority than user-initiated tasks and do not need to finish immediately.
4. **Background:**
* This class is for tasks that operate in the background and are not visible to the user. These can be tasks like indexing, syncing, or backups. They have the lowest priority and minimal impact on system performance.
Using QoS classes, developers do not need to manage the exact priority numbers but rather focus on the nature of the task, and the system optimizes the CPU resources accordingly.
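A minimal sketch requesting work at explicit QoS classes, both for a GCD queue and for the current thread (built with blocks support):

```c
#include <dispatch/dispatch.h>
#include <pthread/qos.h>
#include <stdio.h>

int main(void) {
    // Submit a block to a global queue with an explicit QoS class
    dispatch_queue_t bg = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
    dispatch_sync(bg, ^{ puts("background-QoS work"); });

    // A thread can also tag itself with a QoS class
    pthread_set_qos_class_self_np(QOS_CLASS_UTILITY, 0);
    puts("main thread now runs at utility QoS");
    return 0;
}
```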
Moreover, there are different **thread scheduling policies** that allow specifying a set of scheduling parameters the scheduler will take into consideration. This can be done using `thread_policy_[set/get]`. This might be useful in race condition attacks.
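For example, a sketch raising the calling thread's importance via the Mach `thread_policy_set` API (the importance value is arbitrary):

```c
#include <mach/mach.h>
#include <stdio.h>

int main(void) {
    thread_t self = mach_thread_self();

    // Raise the relative importance of this thread within the task
    thread_precedence_policy_data_t policy = { .importance = 10 };
    kern_return_t kr = thread_policy_set(self, THREAD_PRECEDENCE_POLICY,
                                         (thread_policy_t)&policy,
                                         THREAD_PRECEDENCE_POLICY_COUNT);
    printf("thread_policy_set returned %d\n", kr);

    mach_port_deallocate(mach_task_self(), self);   // mach_thread_self() returns a new ref
    return 0;
}
```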
## MacOS Process Abuse
MacOS, like any other operating system, provides a variety of methods and mechanisms for **processes to interact, communicate, and share data**. While these techniques are essential for efficient system functioning, they can also be abused by threat actors to **perform malicious activities**.