How to start Bug Bounty?

1. Scope domain

Find root/seed domains by reviewing the in-scope targets (domains and subdomains) listed on bug bounty platforms such as HackerOne and Bugcrowd.

2. Acquisitions

Understand the company.

We want to keep gathering seed/root domains, and acquisitions are often an easy way to expand our available assets, provided they are in scope. We can investigate a company's acquisitions on sites like https://crunchbase.com, Wikipedia, and Google.

It is important to do some googling on these acquisitions to see if they’re still owned by the parent company. Many times, acquisitions will split back out or get sold to another company.

3. ASN Enumeration

Autonomous system numbers (ASNs) are assigned to sufficiently large networks, and they help us track down some semblance of an entity's IT infrastructure. The most reliable way to find them is manually, through Hurricane Electric's free-form search, e.g.: http://bgp.he.net

Some automation is available for fetching ASNs. One such tool is the 'net' module of Metabigor, which fetches ASN data for a keyword from http://bgp.he.net and http://asnlookup.com

One problem with command-line enumeration is that you can accidentally return records from an unrelated organization whose name merely contains your keyword (for example, a search for "tesla" could also match "Testa"), so always sanity-check the organization names in the results.
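
As a hedged illustration, the sketch below maps a keyword to candidate ASNs using the public BGPView API (my own choice of source; Metabigor itself pulls from bgp.he.net and asnlookup.com), printing each organization's name so false matches can be filtered out:

```python
# A minimal sketch of keyword-to-ASN lookup using the public BGPView
# API (an assumed source; Metabigor pulls from bgp.he.net and
# asnlookup.com instead). Printing each org's name and description
# makes the accidental keyword matches described above easy to spot.
import json
import urllib.request

keyword = "tesla"  # hypothetical search keyword

url = f"https://api.bgpview.io/search?query_term={keyword}"
with urllib.request.urlopen(url, timeout=15) as resp:
    data = json.load(resp)

for asn in data["data"]["asns"]:
    print(asn["asn"], asn["name"], asn["description"])
```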

Because of the advent of cloud infrastructure, ASNs don't always give a complete picture of a network. Many assets now live in cloud environments like AWS and Azure, in IP ranges that belong to the cloud provider rather than to the target's own ASNs.

To discover more seed domains, we can port-scan an entire ASN and extract any root domains that appear in SSL certificates and other service banners.
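
A minimal sketch of that idea, assuming the third-party `cryptography` package for certificate parsing (Masscan plus a certificate grabber would be the production approach): connect to each IP in a range on port 443 and collect the domain names from the certificate's CN and SAN fields.

```python
# A minimal sketch, assuming the third-party `cryptography` package:
# grab the TLS certificate from each IP in a range and collect the
# domain names in its CN and SAN fields. The CIDR is a hypothetical
# stand-in for an in-scope ASN range.
import ipaddress
import socket
import ssl

from cryptography import x509
from cryptography.x509.oid import NameOID

socket.setdefaulttimeout(3)

def cert_names(ip):
    """Return the domain names found in the certificate served on ip:443."""
    pem = ssl.get_server_certificate((ip, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    names = {attr.value for attr in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)}
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        names.update(san.value.get_values_for_type(x509.DNSName))
    except x509.ExtensionNotFound:
        pass
    return names

for ip in ipaddress.ip_network("192.0.2.0/28"):
    try:
        print(ip, sorted(cert_names(str(ip))))
    except OSError:
        pass  # port closed, timeout, or TLS handshake failure
```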

>Ad/Analytics Relationships

  • We can also glean related domains and subdomains by looking at a target's ad/analytics tracker codes. Many sites reuse the same codes across all of their domains; Google Analytics and New Relic codes are the most common. We can explore these relationships through a site called BuiltWith, which also has Chrome and Firefox extensions for doing this on the fly. Extracting the codes yourself is easy too; see the sketch below.
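
A minimal sketch of extracting those codes yourself (the target URL is a hypothetical stand-in); the IDs can then be fed into BuiltWith's relationship search:

```python
# A minimal sketch: pull Google Analytics / Tag Manager IDs out of a
# page so they can be searched on BuiltWith or in source-code search
# engines. The target URL is a hypothetical stand-in.
import re
import urllib.request

html = urllib.request.urlopen("https://example.com", timeout=10).read().decode("utf-8", "ignore")
ids = set(re.findall(r"\b(?:UA-\d{4,10}(?:-\d{1,4})?|G-[A-Z0-9]{6,12}|GTM-[A-Z0-9]{4,9})\b", html))
print(ids)
```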

>Google-Fu

We can Google the

  • copyright text
  • terms of service text
  • privacy policy text

from the main target to glean related hosts. For example, googling the footer string "© 2020 Example Corp. All rights reserved." together with -site:example.com (a hypothetical example) can surface other domains that reuse the same boilerplate.

>Shodan

Shodan is a service that continuously scans infrastructure on the internet. It is much more verbose than regular spiders: it captures response data, certificate data, and more. It requires registration. e.g.: https://www.shodan.io
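
Shodan searches can also be scripted. A minimal sketch with the official `shodan` Python library follows; the API key is a placeholder, and some filters such as `ssl:` require a paid plan:

```python
# A minimal sketch using the official `shodan` Python library
# (pip install shodan). The API key is a hypothetical placeholder.
import shodan

api = shodan.Shodan("YOUR_API_KEY")
results = api.search('ssl:"example.com"')
for match in results["matches"]:
    print(match["ip_str"], match.get("hostnames", []))
```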


4. Subdomain Enumeration

i. Linked and JS Discovery

Another way to widen our scope is to examine all the links of our main target. We can visit a seed/root and recursively spider all of its links, filtering them with a regex for our target's name, then examine those links… and their links, and so on… until we have found all the sites that could be in our scope.

Linked discovery simply relies on running a spider recursively.

One of the most extensible spiders for general automation is Gospider, which can be used for many things and parses JavaScript very well. In addition, hakrawler implements many parsing strategies of interest to bug hunters.

This is a hybrid technique that will find both roots/seeds and subdomains.
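
A minimal sketch of the core idea, with example.com and the scope regex as hypothetical stand-ins (Gospider or hakrawler do this far more thoroughly, with JS parsing, sitemaps, robots.txt, and more):

```python
# A minimal sketch of recursive linked discovery. example.com and the
# scope regex are hypothetical stand-ins.
import re
import urllib.request
from urllib.parse import urljoin, urlparse

SCOPE = re.compile(r"(^|\.)example\.com$")  # hypothetical scope pattern
seen_urls = set()
in_scope_hosts = set()

def spider(url, depth=2):
    if depth == 0 or url in seen_urls:
        return
    seen_urls.add(url)
    try:
        html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
    except (OSError, ValueError):
        return
    for link in re.findall(r'(?:href|src)=["\']([^"\']+)', html):
        absolute = urljoin(url, link)
        host = urlparse(absolute).hostname or ""
        if SCOPE.search(host):
            in_scope_hosts.add(host)
            spider(absolute, depth - 1)

spider("https://example.com")
print(sorted(in_scope_hosts))
```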

ii. Subdomain Scraping

The next set of tools scrape domain information from all sorts of projects that expose databases of URLs or domains. For scraping subdomain data there are two industry-leading tools at the moment: Amass and Subfinder. They parse all of the commonly referenced scraping sources (certificate transparency logs, search engines, passive DNS datasets) and more.

Subfinder is another best-in-breed tool: it incorporates multiple sources, has extensible output formats, and more.

Github Clone: https://github.com/projectdiscovery/subfinder.git
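
Under the hood these tools aggregate many sources. As a hedged illustration of just one of them, the sketch below scrapes subdomains from certificate transparency logs via crt.sh's JSON endpoint (example.com is a hypothetical target):

```python
# A minimal sketch of one scraping source: certificate transparency
# logs via crt.sh's JSON endpoint. example.com is a hypothetical
# target; Amass/Subfinder combine dozens of such sources.
import json
import urllib.request

def crtsh_subdomains(domain):
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    names = set()
    for entry in entries:
        for name in entry["name_value"].splitlines():
            if name.endswith(domain) and "*" not in name:
                names.add(name.lower())
    return names

print(sorted(crtsh_subdomains("example.com")))
```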

A highly valuable technique is to monitor the entire cloud ranges of AWS, GCP, and Azure for SSL-enabled hosts and parse their certificates to match your targets.

iii. Subdomain Bruteforce

At this point, we move on to guessing for live subdomains: we take a large list of common subdomain names and simply try to resolve them, keeping the ones that succeed.
The problem with this method is that using a single DNS server for all queries takes forever. Tools now exist that are both threaded and spread queries across many DNS resolvers simultaneously, which speeds the process up significantly (see the sketch after the list below).
A multi-resolver, threaded subdomain brute-force is only as good as its wordlist. There are two schools of thought here:

  • Tailored word-lists
  • Massive word-lists
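
A minimal sketch of the threaded-resolution idea using only the standard library (dedicated massdns-based tools with large resolver lists are far faster; the wordlist and domain are hypothetical stand-ins):

```python
# A minimal sketch of threaded subdomain brute-forcing with the
# standard-library resolver. The wordlist and domain are hypothetical.
import socket
from concurrent.futures import ThreadPoolExecutor

socket.setdefaulttimeout(2)

def resolve(name):
    try:
        return name, socket.gethostbyname(name)
    except OSError:
        return None  # did not resolve

words = ["www", "mail", "dev", "staging", "api"]  # stand-in wordlist
candidates = [f"{word}.example.com" for word in words]

with ThreadPoolExecutor(max_workers=50) as pool:
    for hit in filter(None, pool.map(resolve, candidates)):
        print(*hit)
```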

>Alteration Scanning

  • When brute-forcing or gathering subdomains via scraping, you may come across a naming pattern (e.g. dev1, dev2).
  • Even though we may not have found them yet, other hosts may exist that conform to the same naming convention.
  • Moreover, hosts that follow these conventions are sometimes not explicitly protected the way the well-known ones are. The sketch after this list shows a simple way to generate such alterations.
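
A minimal sketch of alteration generation in the spirit of tools like altdns; the known subdomains and affix list are hypothetical, and the candidates would be fed back into your resolver:

```python
# A minimal sketch of alteration generation in the spirit of altdns.
# The known names and affixes are hypothetical; resolve the output
# with your brute-forcer to find live hosts.
known = ["dev1.example.com", "api.example.com"]
affixes = ["dev", "staging", "test", "2"]

candidates = set()
for name in known:
    sub, _, domain = name.partition(".")
    for affix in affixes:
        candidates.add(f"{sub}-{affix}.{domain}")  # e.g. api-staging.example.com
        candidates.add(f"{affix}-{sub}.{domain}")  # e.g. staging-api.example.com
        candidates.add(f"{affix}.{name}")          # e.g. staging.api.example.com

print(sorted(candidates))
```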

5. Port analysis

Masscan or Nmap can be used for this step. Masscan achieves its speed with a rewritten, asynchronous TCP/IP stack and true multithreading. Once ports are mapped, we can check exposed remote-administration protocols (SSH, RDP, VNC, Telnet, and so on) for default passwords.
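
A minimal sketch of the idea as a plain TCP connect scan over common remote-administration ports (for whole ASNs you would use Masscan or Nmap instead; the address range is hypothetical):

```python
# A minimal sketch: a threaded TCP connect scan over common
# remote-administration ports. The address range is hypothetical.
import socket
from concurrent.futures import ThreadPoolExecutor

PORTS = [21, 22, 23, 445, 3306, 3389, 5900]  # ftp, ssh, telnet, smb, mysql, rdp, vnc

def probe(target):
    host, port = target
    try:
        with socket.create_connection((host, port), timeout=1):
            return f"{host}:{port} open"
    except OSError:
        return None  # closed, filtered, or unreachable

hosts = [f"192.0.2.{i}" for i in range(1, 15)]  # stand-in range
targets = [(host, port) for host in hosts for port in PORTS]

with ThreadPoolExecutor(max_workers=100) as pool:
    for line in filter(None, pool.map(probe, targets)):
        print(line)
```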

> GitHub Dorking

Many organizations grow their engineering teams quickly. Sooner or later a new developer, intern, or other staff member will leak source code online, usually through a public GitHub repository they mistakenly thought was private. Hunting for such leaks is scriptable; see the sketch below.
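
A minimal sketch of automating such searches through GitHub's code-search API, which requires an authenticated request; the token and query are hypothetical placeholders:

```python
# A minimal sketch of GitHub dorking via the code-search API. The
# token and query are hypothetical placeholders; code search requires
# an authenticated request.
import json
import urllib.request
from urllib.parse import quote

TOKEN = "ghp_..."  # hypothetical personal access token
query = '"example.com" password'

req = urllib.request.Request(
    "https://api.github.com/search/code?q=" + quote(query),
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
with urllib.request.urlopen(req, timeout=15) as resp:
    for item in json.load(resp)["items"]:
        print(item["repository"]["full_name"], item["path"])
```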

>Screenshotting

At this point, we have a lot of attack surface. We can feed the discovered domains to a screenshotting tool (e.g. Aquatone, EyeWitness, or gowitness) and eyeball the results for anything that looks interesting.
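
A minimal sketch of bulk screenshotting with Playwright (assumes `pip install playwright` and `playwright install chromium`; the host list is hypothetical):

```python
# A minimal sketch of bulk screenshotting with Playwright. The host
# list is a hypothetical stand-in.
from playwright.sync_api import sync_playwright

hosts = ["https://dev.example.com", "https://staging.example.com"]

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for url in hosts:
        try:
            page.goto(url, timeout=10000)  # 10s per host
            page.screenshot(path=url.split("//", 1)[1].replace("/", "_") + ".png")
        except Exception:
            pass  # dead host, TLS error, or timeout; skip it
    browser.close()
```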

6. Automation++

Eventually, we will want to build a script or recon framework of our own. We could rewrite each tool ourselves to handle these issues, but help already exists here: a wrapper such as Interlace can take existing tools and add support for CIDR input, glob input, threading, proxying, queued commands, and more.
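
A minimal sketch of the queued-and-threaded idea (Interlace does this far more robustly, with CIDR/glob expansion and proxying; the target list and command template are hypothetical):

```python
# A minimal sketch of queued, threaded command execution. The targets
# and command template are hypothetical stand-ins.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

os.makedirs("scans", exist_ok=True)
targets = ["dev.example.com", "api.example.com"]
template = "nmap -p 80,443 -oN scans/{t}.txt {t}"

def run(target):
    subprocess.run(template.format(t=target), shell=True, check=False)

with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(run, targets))
```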

>Frameworks

It could be that recon is not your thing. That's all right! :-)
Several hunters have open-sourced their automation at this point, and you can pick one that fits you and use it without worrying too much. I usually classify recon frameworks into rough tiers.

