From 65d511c12242e70199446455ad4aa89ab0907cc6 Mon Sep 17 00:00:00 2001
From: Xalgord
Date: Sun, 16 Jul 2023 06:23:01 +0000
Subject: [PATCH] GITBOOK-46: change request with no subject merged in GitBook
---
 SUMMARY.md                              |   1 +
 .../simple-recon-methodology.md         | 160 ++++++++++++++++++
 2 files changed, 161 insertions(+)
 create mode 100644 recon-strategies-by-other-hackers/simple-recon-methodology.md

diff --git a/SUMMARY.md b/SUMMARY.md
index 4950266..1ec9a4a 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -11,4 +11,5 @@
 * [🔎 Extensive Recon Guide For Bug Hunting](recon-strategies-by-other-hackers/extensive-recon-guide-for-bug-hunting.md)
 * [🔎 Deep-Subdomains-Enumeration-Methodology](recon-strategies-by-other-hackers/deep-subdomains-enumeration-methodology.md)
 * [🔎 How I hacked NASA and got 8 bugs ?](recon-strategies-by-other-hackers/how-i-hacked-nasa-and-got-8-bugs.md)
+ * [Simple Recon Methodology](recon-strategies-by-other-hackers/simple-recon-methodology.md)
 * [🌀 Possible "Content-Type" Header values](possible-content-type-header-values.md)

diff --git a/recon-strategies-by-other-hackers/simple-recon-methodology.md b/recon-strategies-by-other-hackers/simple-recon-methodology.md
new file mode 100644
index 0000000..4540422
--- /dev/null
+++ b/recon-strategies-by-other-hackers/simple-recon-methodology.md
@@ -0,0 +1,160 @@
---
description: 'source: https://infosecwriteups.com/simple-recon-methodology-920f5c5936d4'
---

# Simple Recon Methodology

Hey folks, we're back again with one of the most important topics in penetration testing and bug bounty hunting: “Recon”, or “information gathering”.

### Content

1. What’s Recon ?
2. Recon based scope
3. Simple steps to collect all the information in a short time
4. Recommended tools and automation frameworks
5. Recommended blogs and streams to follow

### What’s Recon ?

Before we start, let’s define recon.

**Recon** is the process of collecting information about your target: subdomains, links, open ports, hidden directories, service information, and so on.

To see what recon gives you, just look at this picture showing what you know before and after recon…
*Figure: information before recon and after recon*

So the question in your mind now is: how will we collect all this information, and what kind of tools will we use?\
To collect it, you need to follow a methodology. I’ll show you my own, and after a few minutes you’ll know how it works.
*Figure: my own methodology — the 3klcon automation framework (src: https://github.com/eslam3kl/3klCon/blob/v2.0/3klcon-MEthedology.png)*

> The recon process should be based on scope: you should collect information depending on your scope area **(small, medium, or large)**. The difference lies in the **amount** and **type** of data you collect, so let’s get started.

### Recon based scope

We will divide scopes into three types: small, medium, and large.

**A. Small Scope**

In this type of scope, you are allowed to test only a single _subdomain_, such as `sub.domain.com`, and you have no permission to test any other subdomain. The information you should collect looks like this…

*Figure: small scope required information*

As you can see, the information you collect is tied only to the subdomain you have permission to test: directory discovery, service information, JS files, GitHub dorks, wayback URLs, etc.

**B. Medium scope**

Here your testing area grows to include all subdomains of a specific domain. For example, you have a domain like `example.com` and, on your program page, you’re allowed to test all subdomains, i.e. `*.example.com`. The information you collect here is a superset of the small scope: for example, you enumerate all subdomains and then treat every subdomain as a small scope of its own (_we will talk more about this point_); for now, just note the types of information.
*Figure: medium scope required information*

**C. Large scope**

In this type of scope, you have permission to test all websites belonging to the main company. For example, if you start testing the `IBM` company, you need to collect all domains, subdomains, acquisitions, and ASNs related to the company, and treat every domain as a medium scope. This type of scope is the best ❤
*Figure: large scope required information*

Now we know all the information you need to collect for every scope; let’s see how to collect it!

### Simple steps to collect all information

We will work through a medium scope here, to keep things simple to understand.

> All the tools used here are free and open source on GitHub

* Collect all subdomains with tools like `subfinder`, `amass`, `crtfinder`, `sublist3r` (use more than one tool)
* Use Google dorks, for example `site:ibm.com -www`
* Combine all the results from `subfinder + amass + crtfinder + sublist3r + google_dorks` into one text file, `all_subdomains.txt`

**\[\*] Now we have 1 text file containing all subdomains,** `all_subdomains.txt`**; let’s continue…**

* Pass the text file to `httpx` or `httprobe`; these tools filter the subdomains and return only the live ones that respond on ports 80 and 443
* Save these live subdomains into a separate file, `live_subdomains.txt`

**\[\*] Now we have 2 text files:** `all_subdomains.txt + live_subdomains.txt`

* Pass `live_subdomains.txt` to the `waybackurls` tool to collect all archived links related to the live subdomains
* Save all these links into a new file, `waybackurls.txt`

**\[\*] Now we have 3 text files:** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt`

* Pass the subdomains file to `dirsearch` or `ffuf` to discover hidden directories and files, like `https://community.ibm.com/database_conf.txt`
* Filter the results to show only 2xx, 3xx, and 403 response codes, from the tool itself (use `-h` to see how to filter the results)
* Save all this information into a text file, `hidden_directories.txt`, then try to find leaked data and forbidden pages, and try to bypass them

**\[\*] Now we have 4 text files:** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt + hidden_directories.txt`

* Pass `all_subdomains.txt` to `nmap` or `masscan` to scan all ports and discover the open ones; if an open service looks like it can be brute-forced, use `brutespray` to brute-force its credentials
* Save all the results into a text file, `nmap_results.txt`

**\[\*] Now we have 5 text files:** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt + hidden_directories.txt + nmap_results.txt`

* Use `live_subdomains.txt` to search GitHub for leaked credentials, either with automated tools like `GitHound` or by manual search (I’ll put a good reference in the credits section)
* Save all this information into a text file, `GitHub_search.txt`

**\[\*] Now we have 6 text files:** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt + hidden_directories.txt + nmap_results.txt + GitHub_search.txt`

* Use `altdns` to discover subdomains of subdomains, for example `sub.sub.sub.domain.com`
* As usual :) save all this info into a text file, `altdns_subdomain.txt`

**\[\*] Now we have 7 text files:** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt + hidden_directories.txt + nmap_results.txt + GitHub_search.txt + altdns_subdomain.txt`

* Pass `waybackurls.txt` through the `gf` tool with `gf-patterns` to filter the links down to possibly vulnerable ones. For example, if a link has a parameter like `?user_id=`, it may be vulnerable to **SQLi** or **IDOR**; if it has a parameter like `?page=`, it may be vulnerable to **LFI**
* Save these possibly vulnerable links into a directory, `vulnerable_links/`, with a separate text file for each pattern: `gf_sqli.txt`, `gf_idor.txt`, etc.

**\[\*] Now we have 7 text files** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt + hidden_directories.txt + nmap_results.txt + GitHub_search.txt + altdns_subdomain.txt` **and one directory** `vulnerable_links/`

* Use `grep` to collect all JS files from `waybackurls.txt`, e.g. `cat waybackurls.txt | grep '\.js' > js_files.txt`
* You can analyze these files manually or with automated tools (I recommend a manual review; see the credits section)
* Save all the results to `js_files.txt`

**\[\*] Now we have 8 text files** `all_subdomains.txt + live_subdomains.txt + waybackurls.txt + hidden_directories.txt + nmap_results.txt + GitHub_search.txt + altdns_subdomain.txt + js_files.txt` **+ one directory** `vulnerable_links/`

* Pass `all_subdomains.txt + waybackurls.txt + vulnerable_links/` to `nuclei`, an automated scanner, to scan all of them.

Next step!! Don’t worry, no more steps :)
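The merge-and-deduplicate bookkeeping behind `all_subdomains.txt` can be sketched in plain shell. The per-tool output files below are hypothetical stand-ins for real `subfinder`/`amass`/`sublist3r` runs:

```shell
# Hypothetical per-tool outputs — in practice these come from
# subfinder, amass, crtfinder, sublist3r, and Google dorking.
printf 'blog.example.com\nmail.example.com\n' > subfinder.txt
printf 'mail.example.com\ndev.example.com\n'  > amass.txt
printf 'dev.example.com\nvpn.example.com\n'   > sublist3r.txt

# Merge every tool's output into one deduplicated master list,
# exactly the bookkeeping the methodology describes.
cat subfinder.txt amass.txt sublist3r.txt | sort -u > all_subdomains.txt

cat all_subdomains.txt
```

`sort -u` removes the overlap between tools, so `all_subdomains.txt` ends up with four unique hosts; the same one-liner scales to however many tool outputs you collect.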
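Similarly, the wayback-URL triage (extracting JS files and the `gf`-style parameter filtering) boils down to pattern matching. This is only a `grep` sketch of what `gf` + `gf-patterns` automate, run against a hypothetical `waybackurls.txt`:

```shell
# Hypothetical waybackurls.txt — in practice this comes from waybackurls or gau.
cat > waybackurls.txt <<'EOF'
https://example.com/app/main.js
https://example.com/profile?user_id=17
https://example.com/view?page=about
https://example.com/static/style.css
EOF

# Pull out JavaScript files for manual review.
grep '\.js$' waybackurls.txt > js_files.txt

# gf-style triage: bucket URLs by suspicious parameter names.
mkdir -p vulnerable_links
grep 'user_id=' waybackurls.txt > vulnerable_links/gf_idor.txt
grep 'page='    waybackurls.txt > vulnerable_links/gf_lfi.txt
```

The real `gf` patterns match many more parameter names per vulnerability class; the point is only that each bucket (`gf_idor.txt`, `gf_lfi.txt`, …) is a filtered view of the same wayback list.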

**Congratulations,** you have finished the biggest part of your recon ❤

Now that you know all these steps well, go back to the methodology diagram above, check it again, and see whether you understand it!

Good! Let’s move to the next step…

### Recommended tools and automation frameworks

**> For automation frameworks, I recommend 2 frameworks**

* `3klcon` [https://github.com/eslam3kl/3klCon](https://github.com/eslam3kl/3klCon) — my own framework; it follows the methodology above
* `Bheem` [https://github.com/harsh-bothra/Bheem](https://github.com/harsh-bothra/Bheem)

**> For the tools**

* `3klector` [https://github.com/eslam3kl/3klector](https://github.com/eslam3kl/3klector)
* `crtfinder` [https://github.com/eslam3kl/crtfinder](https://github.com/eslam3kl/crtfinder)
* `Subfinder` [https://github.com/projectdiscovery/subfinder](https://github.com/projectdiscovery/subfinder)
* `Assetfinder` [https://github.com/tomnomnom/assetfinder](https://github.com/tomnomnom/assetfinder)
* `Altdns` [https://github.com/infosec-au/altdns](https://github.com/infosec-au/altdns)
* `Dirsearch` [https://github.com/maurosoria/dirsearch](https://github.com/maurosoria/dirsearch)
* `Httpx` [https://github.com/projectdiscovery/httpx](https://github.com/projectdiscovery/httpx)
* `Waybackurls` [https://github.com/tomnomnom/waybackurls](https://github.com/tomnomnom/waybackurls)
* `Gau` [https://github.com/lc/gau](https://github.com/lc/gau)
* `Git-hound` [https://github.com/tillson/git-hound](https://github.com/tillson/git-hound)
* `Gf` [https://github.com/tomnomnom/gf](https://github.com/tomnomnom/gf)
* `Gf-patterns` [https://github.com/1ndianl33t/Gf-Patterns](https://github.com/1ndianl33t/Gf-Patterns)
* `Nuclei` [https://github.com/projectdiscovery/nuclei](https://github.com/projectdiscovery/nuclei)
* `Nuclei-templates` [https://github.com/projectdiscovery/nuclei-templates](https://github.com/projectdiscovery/nuclei-templates)
* `Subjack` [https://github.com/haccer/subjack](https://github.com/haccer/subjack)

### Credits

Harsh Bothra, “[Scope based recon](https://cobalt.io/blog/scope-based-recon-smart-recon-tactics)”\
Jhaddix, “[Just another recon guide for pentesters and bug bounty hunters](https://www.offensity.com/en/blog/just-another-recon-guide-pentesters-and-bug-bounty-hunters/)” (Offensity)