# 80,443 - Pentesting Web Methodology
If you want to know about my latest modifications/additions or you have any suggestion for HackTricks or PEASS, join the 💬 PEASS & HackTricks telegram group here, or follow me on Twitter 🐦@carlospolopm.
If you want to share some tricks with the community you can also submit pull requests to https://github.com/carlospolop/hacktricks that will be reflected in this book.
Don't forget to give ⭐ on GitHub to motivate me to continue developing this book.
## Basic Info
The web service is the most common and extensive service, and many different types of vulnerabilities exist.
Default port: 80 (HTTP), 443 (HTTPS)
PORT STATE SERVICE
80/tcp open http
443/tcp open ssl/https
nc -v domain.com 80 # GET / HTTP/1.0
openssl s_client -connect domain.com:443 # GET / HTTP/1.0
## Web API Guidance
{% page-ref page="web-api-pentesting.md" %}
## Methodology summary
In this methodology we are going to suppose that you are going to attack a domain (or subdomain) and only that. So, you should apply this methodology to each discovered domain, subdomain or IP with an undetermined web server inside the scope.
- Start by identifying the technologies used by the web server. Look for tricks to keep in mind during the rest of the test if you can successfully identify the tech.
- Any known vulnerability of the version of the technology?
- Using any well known tech? Any useful trick to extract more information?
- Any specialised scanner to run (like wpscan)?
- Launch general purpose scanners. You never know if they are going to find something or if they are going to find some interesting information.
- Start with the initial checks: robots, sitemap, 404 error and SSL/TLS scan (if HTTPS).
- Start spidering the web page: it's time to find all the possible files, folders and parameters being used. Also, check for special findings.
- Note that anytime a new directory is discovered during brute-forcing or spidering, it should be spidered.
- Directory Brute-Forcing: Try to brute force all the discovered folders searching for new files and directories.
- Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
- Backups checking: Test if you can find backups of discovered files appending common backup extensions.
- Brute-Force parameters: Try to find hidden parameters.
- Once you have identified all the possible endpoints accepting user input, check for all kinds of vulnerabilities related to them.
## Server Version (vulnerable?)
### Identify
Check if there are known vulnerabilities for the server version that is running.
The HTTP headers and cookies of the response could be very useful to identify the technologies and/or version being used. An Nmap scan can identify the server version, but the tools whatweb, webtech or https://builtwith.com/ could also be useful:
whatweb -a 1 <URL> #Stealthy
whatweb -a 3 <URL> #Aggressive
webtech -u <URL>
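Beyond these tools, a quick manual look at the response headers often reveals the server and framework. A minimal sketch, assuming a placeholder target example.com:

```bash
# Response headers and cookies frequently leak versions (Server, X-Powered-By,
# X-AspNet-Version, framework-specific Set-Cookie names...)
curl -sk -I https://example.com | grep -iE "server|x-powered-by|x-aspnet|set-cookie"
```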
### Search for vulnerabilities of the web application version
### Check if any WAF
- https://github.com/EnableSecurity/wafw00f
- https://github.com/Ekultek/WhatWaf.git
- https://nmap.org/nsedoc/scripts/http-waf-detect.html
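For example, a hedged sketch of running the detectors listed above against a placeholder target:

```bash
# Fingerprint a possible WAF/CDN in front of the application
wafw00f https://example.com
nmap -p 80,443 --script=http-waf-detect example.com
```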
## Web tech tricks
Some tricks for finding vulnerabilities in different well known technologies being used:
- AEM - Adobe Experience Cloud
- Apache
- Artifactory
- Buckets
- CGI
- Drupal
- Flask
- Git
- Golang
- GraphQL
- H2 - Java SQL database
- IIS tricks
- JBOSS
- Jenkins
- Jira
- Joomla
- JSP
- Laravel
- Moodle
- Nginx
- PHP (php has a lot of interesting tricks that could be exploited)
- Python
- Spring Actuators
- Symphony
- Tomcat
- VMWare
- Web API Pentesting
- WebDav
- Werkzeug
- Wordpress
- Electron Desktop (XSS to RCE)
Take into account that the same domain can be using different technologies in different ports, folders and subdomains.
If the web application is using any well known tech/platform listed before (or any other), don't forget to search on the Internet for new tricks (and let me know!).
## Source Code Review
If the source code of the application is available on GitHub, apart from performing a white-box test of the application by yourself, there is some information that could be useful for the current black-box testing:
- Is there a Change-log or Readme or Version file or anything with version info accessible via web?
- How and where are the credentials saved? Is there any (accessible?) file with credentials (usernames or passwords)?
- Are passwords in plain text, encrypted, or which hashing algorithm is used?
- Is it using any master key for encrypting something? Which algorithm is used?
- Can you access any of these files exploiting some vulnerability?
- Is there any interesting information in the GitHub issues (solved and not solved)? Or in the commit history (maybe some **password introduced inside an old commit**)?
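If the repository is public you can clone it and grep the whole history for secrets. A minimal sketch, assuming a placeholder repo URL and keyword list (tools like truffleHog mentioned later automate this much better):

```bash
# Clone the target repo and search every commit for credential-looking strings
git clone https://github.com/victim/app.git && cd app
git log -p | grep -inE "password|passwd|secret|api[_-]?key|token" | head -n 50
# Search all blobs reachable from any ref
git grep -iE "password|secret" $(git rev-list --all) | head -n 50
```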
{% page-ref page="code-review-tools.md" %}
## Automatic scanners
### General purpose automatic scanners
nikto -h <URL>
whatweb -a 4 <URL>
wapiti -u <URL>
W3af
zaproxy #You can use an API
nuclei -t nuclei-templates
### CMS scanners
If a CMS is used don't forget to run a scanner, maybe something juicy is found:
Clusterd: JBoss, ColdFusion, WebLogic, Tomcat, Railo, Axis2, Glassfish
CMSScan: WordPress, Drupal, Joomla, vBulletin websites for Security issues. GUI
VulnX: Joomla, Wordpress, Drupal, PrestaShop, Opencart
CMSMap: (W)ordpress, (J)oomla, (D)rupal or (M)oodle
droopescan: Drupal, Joomla, Moodle, Silverstripe, WordPress
cmsmap [-f W] -F -d <URL>
wpscan --force update -e --url <URL>
joomscan --ec -u <URL>
joomlavs.rb #https://github.com/rastating/joomlavs
At this point you should already have some information about the web server being used by the client (if any data was given) and some tricks to keep in mind during the test. If you are lucky, you have even found a CMS and run some scanner.
## Step-by-step Web Application Discovery
From this point we are going to start interacting with the web application.
### Initial checks
Default pages with interesting info:
- /robots.txt
- /sitemap.xml
- /crossdomain.xml
- /clientaccesspolicy.xml
- /.well-known/
- Check also comments in the main and secondary pages.
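A quick hedged loop to fetch the default pages listed above (the domain is a placeholder):

```bash
for p in robots.txt sitemap.xml crossdomain.xml clientaccesspolicy.xml .well-known/security.txt; do
  echo "== /$p =="
  curl -sk "https://example.com/$p" | head -n 20
done
```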
### Forcing errors
Web servers may behave unexpectedly when weird data is sent to them. This may open vulnerabilities or disclose sensitive information.
- Access fake pages like /whatever_fake.php (.aspx, .html, etc.)
- Add "[]", "]]", and "[[" in cookie values and parameter values to create errors
- Generate errors by giving input such as /~randomthing/%s at the end of the URL
- Try different HTTP verbs like PATCH, DEBUG, or a wrong one like FAKE
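A few hedged examples of forcing errors with curl (example.com is a placeholder; watch the status codes and any verbose error pages):

```bash
curl -sk -o /dev/null -w "%{http_code}\n" "https://example.com/whatever_fake.php"
curl -sk -o /dev/null -w "%{http_code}\n" -b "session=[[]]" "https://example.com/"   # broken cookie value
curl -sk -o /dev/null -w "%{http_code}\n" -X FAKE "https://example.com/"             # invented HTTP verb
```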
### Check if you can upload files ([PUT verb, WebDav](put-method-webdav.md))
If you find that WebDav is enabled but you don't have enough permissions for uploading files in the root folder try to:
- Brute Force credentials
- Upload files via WebDav to the rest of found folders inside the web page. You may have permissions to upload files in other folders.
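A minimal sketch of testing uploads with the PUT verb (paths and credentials are placeholders; cadaver can also be used for an interactive WebDav session):

```bash
# Try the web root first, then every discovered folder
curl -sk -o /dev/null -w "%{http_code}\n" -X PUT -d "webdav test" "http://example.com/test.txt"
curl -sk -o /dev/null -w "%{http_code}\n" -u admin:admin -T shell.txt "http://example.com/uploads/"
```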
### SSL/TLS vulnerabilities
- If the application isn't forcing the use of HTTPS in any part, then it's vulnerable to MitM.
- If the application is sending sensitive data (passwords) over HTTP, then it's a high vulnerability.
Use testssl.sh to check for vulnerabilities (in Bug Bounty programs these kinds of vulnerabilities probably won't be accepted) and use a2sv to recheck the vulnerabilities:
./testssl.sh [--htmlfile] 10.10.10.10:443
#Use the --htmlfile to save the output inside an htmlfile also
## You can also use other tools, but testssl.sh at this moment is the best one (I think)
sslscan <host:port>
sslyze --regular <ip:port>
Information about SSL/TLS vulnerabilities:
- https://www.gracefulsecurity.com/tls-ssl-vulnerabilities/
- https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/
### Spidering
Launch some kind of spider inside the web. The goal of the spider is to find as many paths as possible from the tested application. Therefore, web crawling and external sources should be used to find as many valid paths as possible.
- gospider (go): HTML spider, LinkFinder in JS files and external sources (Archive.org, CommonCrawl.org, VirusTotal.com, AlienVault.com).
- hakrawler (go): HTML spider, with LinkFinder for JS files and Archive.org as external source.
- dirhunt (python): HTML spider, also indicates "juicy files".
- evine (go): Interactive CLI HTML spider. It also searches in Archive.org.
- meg (go): This tool isn't a spider but it can be useful. You can just indicate a file with hosts and a file with paths and meg will fetch each path on each host and save the response.
- urlgrab (go): HTML spider with JS rendering capabilities. However, it looks like it's unmaintained, the precompiled version is old and the current code doesn't compile.
- gau (go): HTML spider that uses external providers (wayback, otx, commoncrawl).
- ParamSpider: This script will find URLs with parameters and will list them.
- galer (go): HTML spider with JS rendering capabilities.
- LinkFinder (python): HTML spider, with JS beautify capabilities, capable of searching for new paths in JS files. It could also be worth taking a look at JSScanner, which is a wrapper of LinkFinder.
- JSParser (python2.7): A python 2.7 script using Tornado and JSBeautifier to parse relative URLs from JavaScript files. Useful for easily discovering AJAX requests. Looks unmaintained.
- relative-url-extractor (ruby): Given a file (HTML) it will extract URLs from it using nifty regular expressions to find and extract the relative URLs from ugly (minified) files.
- JSFScan (bash, several tools): Gather interesting information from JS files using several tools.
- subjs (go): Find JS files.
- page-fetch (go): Load a page in a headless browser and print out all the URLs loaded to load the page.
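For example, a hedged gospider run (flags may differ between versions; the URL and output folder are placeholders):

```bash
# Crawl 2 levels deep, also pulling paths from external sources and JS files
gospider -s "https://example.com" -d 2 -c 10 --other-source -o gospider-out
```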
### Brute Force directories and files
Start brute-forcing from the root folder and be sure to brute-force all the directories found using this method and all the directories discovered by the Spidering (you can do this brute-forcing **recursively**, appending at the beginning of the used wordlist the names of the found directories; see the ffuf sketch after the tools list below).
Tools:
- Dirb / Dirbuster - Included in Kali, old (and **slow**) but functional. Allows auto-signed certificates and recursive search. Too slow compared with the other options.
- Dirsearch (python): It doesn't allow auto-signed certificates but allows recursive search.
- Gobuster (go): It allows auto-signed certificates; it doesn't have recursive search.
- Feroxbuster - Fast, supports recursive search.
- wfuzz
wfuzz -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt https://domain.com/api/FUZZ
- ffuf - Fast:
ffuf -c -w /usr/share/wordlists/dirb/big.txt -u http://10.10.10.10/FUZZ
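A hedged recursive variant of the ffuf one-liner above (wordlist path, target and extensions are placeholders):

```bash
# Recurse into every discovered directory, also appending common extensions
ffuf -c -w /usr/share/wordlists/dirb/big.txt -u http://10.10.10.10/FUZZ \
     -recursion -recursion-depth 2 -e .php,.bak,.txt
```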
Recommended dictionaries:
- https://github.com/carlospolop/Auto_Wordlists/blob/main/wordlists/bf_directories.txt
- Dirsearch included dictionary
- http://gist.github.com/jhaddix/b80ea67d85c13206125806f0828f4d10
- Assetnote wordlists
- https://github.com/danielmiessler/SecLists/tree/master/Discovery/Web-Content
- raft-large-directories-lowercase.txt
- directory-list-2.3-medium.txt
- RobotsDisallowed/top10000.txt
- https://github.com/random-robbie/bruteforce-lists
- https://github.com/google/fuzzing/tree/master/dictionaries
- https://github.com/six2dez/OneListForAll
- /usr/share/wordlists/dirb/common.txt
- /usr/share/wordlists/dirb/big.txt
- /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
### What to check on each file found
- Broken link checker: Find broken links inside HTMLs that may be prone to takeovers
- File Backups: Once you have found all the files, look for backups of all the executable files ("_.php_", "_.aspx_"...). Common variations for naming a backup are: file.ext~, #file.ext#, ~file.ext, file.ext.bak, file.ext.tmp, file.ext.old, file.bak, file.tmp and file.old. You can also use the tool bfac (see the sketch after this list).
- Discover new parameters: You can use tools like Arjun, parameth, x8 and Param Miner to discover hidden parameters. If you can, you could try to search for hidden parameters in each executable web file.
- Arjun all default wordlists: https://github.com/s0md3v/Arjun/tree/master/arjun/db
- Param-miner "params": https://github.com/PortSwigger/param-miner/blob/master/resources/params
- Assetnote "parameters_top_1m": https://wordlists.assetnote.io/
- nullenc0de “params.txt”: https://gist.github.com/nullenc0de/9cb36260207924f8e1787279a05eb773
- Comments: Check the comments of all the files, you can find credentials or hidden functionality.
- If you are playing CTFs, a "common" trick is to hide information inside comments at the right of the page (using **hundreds** of **spaces** so you don't see the data if you open the source code with the browser). Another possibility is to use several new lines and hide information in a comment at the bottom of the web page.
- API keys: If you find any API key there are guides that indicate how to use API keys of different platforms: keyhacks, zile, truffleHog, SecretFinder, [RegHex](https://github.com/l4yton/RegHex).
- Google API keys: If you find any API key looking like AIzaSyA-qLheq6xjDiEIRisP_ujUseYLQCHUjik you can use the project gmapapiscanner to check which APIs the key can access.
- S3 Buckets: While spidering look if any subdomain or any link is related with some S3 bucket. In that case, check the permissions of the bucket.
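As referenced in the File Backups bullet above, a minimal sketch that tries the common backup-name variations for a single file (the file name and host are placeholders; bfac automates this):

```bash
f="index.php"
for b in "$f~" "#$f#" "~$f" "$f.bak" "$f.tmp" "$f.old" "${f%.*}.bak" "${f%.*}.tmp" "${f%.*}.old"; do
  code=$(curl -sk -o /dev/null -w "%{http_code}" "https://example.com/$b")
  echo "$code  /$b"
done
```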
### Special findings
While performing the spidering and brute-forcing you may find interesting things that you should take note of.
**Interesting files**
- Look for links to other files inside the CSS files.
- If you find a .git file some information can be extracted
- If you find a .env, information such as API keys, DB passwords and other secrets can be found.
- If you find API endpoints you should also test them. These aren't files, but will probably "look like" them.
- JS files: In the spidering section several tools that can extract paths from JS files were mentioned. Also, it would be interesting to monitor each JS file found, as on some occasions a change may indicate that a potential vulnerability was introduced in the code. You could use for example JSMon.
- You should also check discovered JS files with RetireJS or JSHole to find out whether they are vulnerable.
- Javascript Deobfuscator and Unpacker: [https://lelinhtinh.github.io/de4js/](https://lelinhtinh.github.io/de4js/)
- Javascript Beautifier: [http://jsbeautifier.org/](https://beautifier.io/)
- JsFuck deobfuscation (javascript with the chars "[]!+"): [https://ooze.ninja/javascript/poisonjs/](https://ooze.ninja/javascript/poisonjs/)
- On several occasions you will need to understand the regular expressions used; this will be useful: https://regex101.com/
- You could also monitor the files where forms were detected, as a change in a parameter or the appearance of a new form may indicate a potential new vulnerable functionality.
### 403 Forbidden/Basic Authentication/401 Unauthorized (bypass)
- Try using different verbs to access the file: GET, POST, INVENTED
- If /path is blocked, try using /%2e/path (if the access is blocked by a proxy, this could bypass the protection). Try also /%252e/path (double URL encode).
- Try Unicode bypass: /%ef%bc%8fpath (the URL-encoded chars are like "/"), so when decoded back it will be //path and maybe you will have already bypassed the /path name check.
- Try to stress the server sending common GET requests ([it worked for this guy with Facebook](https://medium.com/@amineaboud/story-of-a-weird-vulnerability-i-found-on-facebook-fc0875eb5125)).
- Change the protocol: from http to https, or from https to http.
- Change the Host header to some arbitrary value ([that worked here](https://medium.com/@sechunter/exploiting-admin-panel-like-a-boss-fc2dd2499d31)).
- Other path bypasses:
- site.com/secret –> HTTP 403 Forbidden
- site.com/SECRET –> HTTP 200 OK
- site.com/secret/ –> HTTP 200 OK
- site.com/secret/. –> HTTP 200 OK
- site.com//secret// –> HTTP 200 OK
- site.com/./secret/.. –> HTTP 200 OK
- site.com/;/secret –> HTTP 200 OK
- site.com/.;/secret –> HTTP 200 OK
- site.com//;//secret –> HTTP 200 OK
- site.com/secret.json –> HTTP 200 OK (ruby)
- Use all this list in the following situations:
- /FUZZsecret
- /FUZZ/secret
- /secretFUZZ
- Other bypasses:
- /v3/users_data/1234 --> 403 Forbidden
- /v1/users_data/1234 --> 200 OK
- {"id":111} --> 401 Unauthorized
- {"id":[111]} --> 200 OK
- {"id":111} --> 401 Unauthorized
- {"id":{"id":111}} --> 200 OK
- {"user_id":"<legit_id>","user_id":"<victims_id>"} (JSON Parameter Pollution)
- user_id=ATTACKER_ID&user_id=VICTIM_ID (Parameter Pollution)
- Go to https://archive.org/web/ and check if in the past that file was worldwide accessible.
- Try to use other User-Agents to access the resource.
- Fuzz the page: Try using HTTP Proxy Headers, HTTP Basic Authentication and NTLM brute-force (with a few combinations only) and other techniques. To do all of this I have created the tool fuzzhttpbypass (see the curl sketch at the end of this section).
X-Originating-IP: 127.0.0.1
X-Forwarded-For: 127.0.0.1
X-Remote-IP: 127.0.0.1
X-Remote-Addr: 127.0.0.1
X-ProxyUser-Ip: 127.0.0.1
X-Original-URL: 127.0.0.1
- If the path is protected you can try to bypass the path protection using these other headers:
X-Original-URL: /admin/console
X-Rewrite-URL: /admin/console
- Guess the password: Test the following common credentials. Do you know something about the victim? Or the CTF challenge name?
- Brute force: Try basic, digest and NTLM auth.
{% code title="Common creds" %}
admin    admin
admin    password
admin    1234
admin    admin1234
admin    123456
root     toor
test     test
guest    guest
{% endcode %}
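As mentioned in the fuzzing bullet above, a minimal sketch that replays a forbidden path with the IP-spoofing and URL-override headers (target and path are placeholders):

```bash
for h in X-Originating-IP X-Forwarded-For X-Remote-IP X-Remote-Addr X-ProxyUser-Ip; do
  code=$(curl -sk -o /dev/null -w "%{http_code}" -H "$h: 127.0.0.1" "https://example.com/admin")
  echo "$code  $h: 127.0.0.1"
done
curl -sk -o /dev/null -w "%{http_code}  X-Original-URL\n" -H "X-Original-URL: /admin/console" "https://example.com/"
```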
### 502 Proxy Error
If any page responds with that code, it's probably a badly configured proxy. If you send an HTTP request like GET https://google.com HTTP/1.1 (with the Host header and other common headers), the proxy will try to access google.com and you will have found an SSRF.
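A hedged way to test it by hand (the target host is a placeholder):

```bash
# Ask the suspected proxy for an absolute URL; if Google's page comes back,
# the proxy fetched it for you (SSRF)
printf 'GET https://google.com/ HTTP/1.1\r\nHost: google.com\r\nConnection: close\r\n\r\n' | nc example.com 80
```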
### NTLM Authentication - Info disclosure
If the running server asking for authentication is Windows, or you find a login asking for your credentials (and asking for the **domain name**), you can provoke an information disclosure.
Send the header "Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=" and, due to how NTLM authentication works, the server will respond with internal info (IIS version, Windows version...) inside the header "WWW-Authenticate".
You can automate this using the nmap plugin "http-ntlm-info.nse".
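For example (the target is a placeholder):

```bash
nmap -p 80,443 --script http-ntlm-info example.com
# Or send the header manually and inspect the WWW-Authenticate response header
curl -sk -I -H "Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=" "https://example.com/"
```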
### HTTP Redirect (CTF)
It is possible to put content inside a redirection. This content won't be shown to the user (as the browser will execute the redirection), but something could be hidden in there.
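Since curl doesn't follow redirects by default, the hidden body of a 30x response is easy to inspect (the URL is a placeholder):

```bash
curl -sk -D - "https://example.com/redirecting-page"   # prints headers + the (normally hidden) body
```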
## Web Vulnerabilities Checking
Now that a comprehensive enumeration of the web application has been performed it's time to check for a lot of possible vulnerabilities. You can find the checklist here:
{% page-ref page="../../pentesting-web/web-vulnerabilities-methodology.md" %}
TODO: Complete the list of vulnerabilities and techniques with https://six2dez.gitbook.io/pentest-book/others/web-checklist and https://kennel209.gitbooks.io/owasp-testing-guide-v4/content/en/web_application_security_testing/configuration_and_deployment_management_testing.html, https://owasp-skf.gitbook.io/asvs-write-ups/kbid-111-client-side-template-injection
## HackTricks Automatic Commands
Protocol_Name: Web #Protocol Abbreviation if there is one.
Port_Number: 80,443 #Comma separated if there is more than one.
Protocol_Description: Web #Protocol Abbreviation Spelled out
Entry_1:
Name: Notes
Description: Notes for Web
Note: |
https://book.hacktricks.xyz/pentesting/pentesting-web
Entry_2:
Name: Quick Web Scan
Description: Nikto and GoBuster
Command: nikto -host {Web_Proto}://{IP}:{Web_Port} &&&& gobuster dir -w {Small_Dirlist} -u {Web_Proto}://{IP}:{Web_Port} && gobuster dir -w {Big_Dirlist} -u {Web_Proto}://{IP}:{Web_Port}
Entry_3:
Name: Nikto
Description: Basic Site Info via Nikto
Command: nikto -host {Web_Proto}://{IP}:{Web_Port}
Entry_4:
Name: WhatWeb
Description: General purpose auto scanner
Command: whatweb -a 4 {IP}
Entry_5:
Name: Directory Brute Force Non-Recursive
Description: Non-Recursive Directory Brute Force
Command: gobuster dir -w {Big_Dirlist} -u {Web_Proto}://{IP}:{Web_Port}
Entry_6:
Name: Directory Brute Force Recursive
Description: Recursive Directory Brute Force
Command: python3 {Tool_Dir}dirsearch/dirsearch.py -w {Small_Dirlist} -e php,exe,sh,py,html,pl -f -t 20 -u {Web_Proto}://{IP}:{Web_Port} -r 10
Entry_7:
Name: Directory Brute Force CGI
Description: Common Gateway Interface Brute Force
Command: gobuster dir -u {Web_Proto}://{IP}:{Web_Port}/ -w /usr/share/seclists/Discovery/Web-Content/CGIs.txt -s 200
Entry_8:
Name: Nmap Web Vuln Scan
Description: Tailored Nmap Scan for web Vulnerabilities
Command: nmap -vv --reason -Pn -sV -p {Web_Port} --script=`banner,(http* or ssl*) and not (brute or broadcast or dos or external or http-slowloris* or fuzzer)` {IP}
Entry_9:
Name: Drupal
Description: Drupal Enumeration Notes
Note: |
git clone https://github.com/immunIT/drupwn.git for low hanging fruit and git clone https://github.com/droope/droopescan.git for deeper enumeration
Entry_10:
Name: WordPress
Description: WordPress Enumeration with WPScan
Command: |
?What is the location of the wp-login.php? Example: /Yeet/cannon/wp-login.php
wpscan --url {Web_Proto}://{IP}{1} --enumerate ap,at,cb,dbe && wpscan --url {Web_Proto}://{IP}{1} --enumerate u,tt,t,vp --passwords {Big_Passwordlist} -e