# Bug Bounty — Lessons Learned

Universal lessons from real hunting sessions. No target-specific references. Applicable to any program.

---

## Table of Contents

1. [The Staging Environment Trap](#1-the-staging-environment-trap)
2. [How Modern SPAs Leak Secrets](#2-how-modern-spas-leak-secrets)
3. [Azure AD / Okta Public Clients — The Enterprise Misconfiguration](#3-azure-ad--okta-public-clients--the-enterprise-misconfiguration)
4. [Device Code Phishing — What It Is and How to Prove It](#4-device-code-phishing--what-it-is-and-how-to-prove-it)
5. [ROPC — The OAuth Grant Nobody Disabled](#5-ropc--the-oauth-grant-nobody-disabled)
6. [The A→B Signal — One Developer, Many Mistakes](#6-the-ab-signal--one-developer-many-mistakes)
7. [Open Redirect Bypasses — Why Simple Validation Always Fails](#7-open-redirect-bypasses--why-simple-validation-always-fails)
8. [Error Messages Are Intelligence, Not Failures](#8-error-messages-are-intelligence-not-failures)
9. [CSP Headers as a Recon Tool](#9-csp-headers-as-a-recon-tool)
10. [Sentry DSN Exposure — From Config Leak to Write Access](#10-sentry-dsn-exposure--from-config-leak-to-write-access)
11. [Reading Minified JavaScript for Vulnerabilities](#11-reading-minified-javascript-for-vulnerabilities)
12. [Subdomain Enumeration That Finds What Others Miss](#12-subdomain-enumeration-that-finds-what-others-miss)
13. [Rate Limiting as Evidence, Not Just a Defense Check](#13-rate-limiting-as-evidence-not-just-a-defense-check)
14. [Attack Chaining — Turning Low into Critical](#14-attack-chaining--turning-low-into-critical)
15. [Dead End Management — The Cost of Rabbit Holes](#15-dead-end-management--the-cost-of-rabbit-holes)
16. [Writing Reports That Actually Get Paid](#16-writing-reports-that-actually-get-paid)
17. [How to Think Like the Developer Who Made the Bug](#17-how-to-think-like-the-developer-who-made-the-bug)
18. [The Session Summary Habit](#18-the-session-summary-habit)
19. [Scope Awareness vs Scope Creep](#19-scope-awareness-vs-scope-creep)
20. [Why Authentication Bugs Cluster](#20-why-authentication-bugs-cluster)

---

## 1. The Staging Environment Trap

### What Staging Is

A staging environment is a copy of production used for testing new features before release. It runs the same codebase, usually with relaxed security settings, because developers need to test without friction.

### Why Staging Is Almost Always Weaker

Developers deliberately relax controls on staging:

| Control | Production | Staging |
|---|---|---|
| CAPTCHA | Enabled | Often disabled (`captcha.enabled: false`) |
| Email verification | Required | Often disabled |
| MFA | Enforced | Often optional or skipped |
| Rate limiting | Active | Often absent |
| WAF rules | Strict | Relaxed or absent |
| TLS certificate | Valid | Often self-signed or wildcard |
| Logging / alerting | Full | Minimal |

The mistake is not that staging is weaker — that is intentional. The mistake is when staging **shares infrastructure with production**: same user database, same OAuth server, same internal APIs, same tokens.

### What to Test on Staging

1. Register an account — is email verification required?
2. Log in — is MFA enforced?
3. Attempt to use a staging-issued OAuth token against production APIs
4. Check if the staging environment references production API hostnames in its config
5. Check if session cookies set on staging are accepted on production (same domain suffix)
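The token-replay check in step 3 can be sketched as follows. The hostname, path, and token are placeholders to substitute from your own staging session; the live request is commented out, with a sample invocation of the verdict helper in its place:

```shell
# Sketch: replay a staging-issued token against production (step 3).
# STAGING_TOKEN and PROD_API are placeholders, not real values.
STAGING_TOKEN="<token obtained by logging in on staging>"
PROD_API="https://api.target.com/v1/me"   # hypothetical production endpoint

verdict() {
    # Interpret the production API's status code for the report
    case "$1" in
        200)     echo "ESCALATE: production accepts staging-issued tokens" ;;
        401|403) echo "OK: production rejects staging tokens" ;;
        *)       echo "UNCLEAR: status $1, inspect the response body" ;;
    esac
}

# status=$(curl -s -o /dev/null -w "%{http_code}" \
#          -H "Authorization: Bearer $STAGING_TOKEN" "$PROD_API")
# verdict "$status"
verdict 401
```

A 200 here is the staging → production connection described below, and the finding escalates accordingly.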

### The Key Question

"If I take something I obtain on staging — a session token, an OAuth code, a registered account — and use it on production, does it work?"

If yes: the finding escalates. Staging → production connection is almost always P1 or P2.

---

## 2. How Modern SPAs Leak Secrets

### Why This Happens

Single-page applications (SPAs) run entirely in the browser. Any code or configuration they need must be served to the client — including configuration that developers prefer to keep quiet.

The standard pattern:
1. SPA bundle is compiled once and deployed to staging and production
2. To support different environments, runtime config is loaded from a **separate file** at startup
3. That file must be accessible without authentication (the app hasn't authenticated the user yet)
4. The file contains API client IDs, backend hostnames, sometimes Sentry DSNs

### Common Runtime Config File Paths

Every framework has its own convention:

| Framework | Common config paths |
|---|---|
| Angular | `/assets/app-constants.js`, `/configs/app-constants.js`, `/en-US/assets/app-constants.js` |
| React (CRA) | `REACT_APP_*` vars compiled in; but `/env-config.js`, `/config.json` are common patterns |
| Next.js | `/_next/static/chunks/` (grep for config objects), or runtime `window.__config` |
| SvelteKit | `window.__ENV` injected into the HTML |
| Vue CLI | `/config.js`, `/app.config.js` |
| Custom / rolled | `/env.js`, `/app-constants.js`, `/static/config.js` |

### What to Extract from Config Files

| Key type | Example | Why it matters |
|---|---|---|
| OAuth `client_id` | `"client_id": "abc123"` | Enables device code, ROPC testing |
| OAuth `tenant_id` or `authority` | `"tenant": "9026c5f4-..."` | Tells you which IdP to attack |
| API hostnames | `"apiUrl": "https://internal-api.target.com"` | New attack surface |
| WebSocket URLs | `"wsUrl": "wss://agent.target.com/admin/"` | Admin surfaces |
| Sentry DSN | `"dsn": "https://key@sentry.target.com/1"` | Write access to error logs |
| Feature flags | `"enableDebug": true` | Debug endpoints, verbose logging |
| App version | `"version": "2026.04.02"` | CVE matching |
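Once a config file is in hand, the high-value keys can be pulled out mechanically. A minimal sketch; the sample file and its values are invented, so adjust the key alternation to what the target actually ships:

```shell
# Sample runtime config standing in for a downloaded file (invented values)
cat > /tmp/sample_config.js <<'EOF'
window.__config = {
  "client_id": "abc123",
  "tenant": "9026c5f4-0000-0000-0000-000000000000",
  "apiUrl": "https://internal-api.target.com",
  "dsn": "https://key@sentry.target.com/1"
};
EOF

# grep works even when the file is JS rather than strict JSON
grep -oE '"(client_id|tenant|authority|apiUrl|wsUrl|dsn|version)"[[:space:]]*:[[:space:]]*"[^"]*"' \
    /tmp/sample_config.js
```

On a real target, point the same grep at each config file found by the enumeration loop in the next subsection.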

### How to Enumerate These Files at Scale

```bash
CONFIG_PATHS=(
    "/assets/app-constants.js"
    "/configs/app-constants.js"
    "/config/app-constants.js"
    "/env.js"
    "/config.json"
    "/static/config.js"
)

while read -r host; do
    for path in "${CONFIG_PATHS[@]}"; do
        url="https://${host}${path}"
        status=$(curl -s -o /tmp/cfg_response -w "%{http_code}" --max-time 5 "$url")
        # Skip SPA catch-alls that answer every path with the HTML shell
        if [[ "$status" == "200" ]] && ! grep -qi "<html" /tmp/cfg_response; then
            size=$(wc -c < /tmp/cfg_response)
            echo "[FOUND] $url ($size bytes)"
        fi
    done
done < live_subdomains.txt
```

### What Is NOT a Vulnerability (to avoid false positives)

A public `client_id` alone is not a vulnerability; it is designed to be public. The vulnerability is what the `client_id` enables: if the associated OAuth app has dangerous grant types enabled (device code, ROPC), that is the finding.

---

## 3. Azure AD / Okta Public Clients — The Enterprise Misconfiguration

### Background

Enterprise applications that require employee login use an Identity Provider (IdP) — most commonly Azure AD (Microsoft Entra ID) or Okta. Web apps built on Angular or React register as OAuth clients in the IdP to handle authentication.

There are two client types:

| Type | Has `client_secret`? | Where it runs |
|---|---|---|
| Confidential client | Yes | Server-side only |
| Public client | No | Browser, mobile, CLI |

SPAs must be public clients because they run in the browser and cannot safely store a secret.

### Why Public Clients Are Not Dangerous by Default

A `client_id` without a secret is like a username without a password. It identifies the application but does not grant access. The IdP will still require the user to authenticate.

### When Public Clients Become Dangerous

The danger is in which **grant types** are enabled for the public client.

| Grant type | What it allows | Risk if enabled on public client |
|---|---|---|
| Authorization Code + PKCE | Normal browser login flow | Low risk (intended) |
| Device Code | Login from devices with no browser | Any attacker can initiate phishing |
| ROPC (password grant) | Send username + password directly to IdP | Enables credential stuffing without MFA |
| Client Credentials | App-to-app with secret | N/A — requires secret, not applicable to public client |
| Implicit | Legacy; token in URL fragment | Medium risk; deprecated but still seen |

### How to Test for Dangerous Grant Types

**Test Device Code:**
```bash
curl -s "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/devicecode" \
  -X POST -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=<client_id>&scope=<scope_from_config>"
```
If response contains `user_code` → Device Code is enabled → P1  
If response contains `unsupported_grant_type` → disabled, safe

**Test ROPC:**
```bash
curl -s "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token" \
  -X POST -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&client_id=<client_id>&username=anyone@target.com&password=wrong&scope=<scope>"
```
Azure AD error code interpretation:

| Error code | Meaning |
|---|---|
| `50034` | User not found — ROPC **is enabled** |
| `50126` | Wrong password — ROPC **is enabled** (and user exists) |
| `70011` | `unsupported_grant_type` — ROPC **disabled** |
| `7000218` | Public client cannot use password grant — ROPC disabled for public clients |
| `50076` | MFA required — ROPC enabled but MFA blocks completion |

Error 50034 is the key one: it confirms ROPC is enabled without you needing a valid username.
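Classifying the response can be automated with a small helper built from the table above. A sketch; the live request is commented out, and the final call feeds in a truncated sample body rather than a real transcript:

```shell
# Map an Azure AD token-endpoint error body to a verdict
classify_ropc() {
    case "$1" in
        *AADSTS50034*)   echo "ROPC ENABLED (user not found)" ;;
        *AADSTS50126*)   echo "ROPC ENABLED (user exists, wrong password)" ;;
        *AADSTS50076*)   echo "ROPC ENABLED (MFA blocks completion)" ;;
        *AADSTS7000218*) echo "ROPC disabled for public clients" ;;
        *AADSTS70011*|*unsupported_grant_type*) echo "ROPC disabled" ;;
        *) echo "UNKNOWN: inspect manually" ;;
    esac
}

# response=$(curl -s "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token" \
#   -X POST -d "grant_type=password&client_id=<client_id>&username=probe@target.com&password=wrong&scope=<scope>")
# classify_ropc "$response"
classify_ropc '{"error":"invalid_grant","error_description":"AADSTS50034: The user account does not exist."}'
# prints: ROPC ENABLED (user not found)
```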

### Severity Assessment

The severity depends on what the token grants access to:
- Access to internal HR, payroll, or identity data → Critical
- Access to a marketing dashboard → Low
- Access to employee directory → Medium

Always test what the token can actually do against the API backend.

---

## 4. Device Code Phishing — What It Is and How to Prove It

### The RFC 8628 Design Intent

The Device Code grant was designed for input-constrained devices like smart TVs, printers, and IoT sensors that cannot run a browser. The user types a short code on their phone or computer to authenticate on the device's behalf.

### How It Becomes a Phishing Vector

```
1. Attacker requests device code from IdP
   POST /oauth2/v2.0/devicecode
   → {"user_code": "ABCDE-FGHIJ", "verification_uri": "https://microsoft.com/devicelogin"}

2. Attacker sends user_code to victim
   "IT Security: Please authenticate to verify your device. 
    Go to https://microsoft.com/devicelogin and enter: ABCDE-FGHIJ"

3. Victim authenticates at Microsoft's real login page — no phishing site required
   The URL is legitimate. MFA is triggered and completed by the victim.

4. Attacker polls for token
   POST /oauth2/v2.0/token with device_code
   → {"access_token": "eyJ...", "refresh_token": "0.AXoA..."}
```

### Key Properties That Make This Severe

- **No phishing site required**: The authentication happens on a legitimate Microsoft/Okta URL
- **MFA is bypassed in effect**: The victim completes MFA — but the attacker receives the token
- **Refresh tokens persist**: Depending on configuration, the attacker may have access for hours or days
- **Zero technical prerequisites**: No CVE, no exploit code, just an HTTP request and a message

### Proof of Concept Without Social Engineering

You do not need to actually trick anyone to prove this works. The PoC is:

1. Show the `/devicecode` request returns `user_code` (not `400 unsupported_grant_type`)
2. Show the polling request returns `authorization_pending` (not `400`)

This is sufficient to prove the flow is enabled. The triager understands that the social-engineering step is the only remaining prerequisite.
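A minimal sketch of that two-step PoC; the tenant, client_id, and scope are placeholders from the config file, the live requests are commented out, and the final call feeds in a canned response so nothing is sent anywhere:

```shell
# Interpret one poll of the token endpoint (RFC 8628 polling step)
poll_verdict() {
    case "$1" in
        *authorization_pending*) echo "FLOW ENABLED: polling accepted, finding stands" ;;
        *unsupported_grant_type*) echo "Flow disabled" ;;
        *access_token*)          echo "TOKEN ISSUED: a user entered the code" ;;
        *)                       echo "Inspect the response manually" ;;
    esac
}

# init=$(curl -s -X POST "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/devicecode" \
#        -d "client_id=<client_id>&scope=<scope>")
# device_code=$(echo "$init" | jq -r .device_code)
# poll=$(curl -s -X POST "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token" \
#        -d "grant_type=urn:ietf:params:oauth:grant-type:device_code&client_id=<client_id>&device_code=$device_code")
# poll_verdict "$poll"
poll_verdict '{"error":"authorization_pending"}'
# prints: FLOW ENABLED: polling accepted, finding stands
```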

### Remediation

In Azure AD: **App registrations → Authentication → Advanced settings → set "Allow public client flows" to No**. This single toggle disables the device code and ROPC grants for the app.

Add a Conditional Access policy requiring compliant/managed devices if the device code grant is needed for legitimate CLI tooling.

---

## 5. ROPC — The OAuth Grant Nobody Disabled

### Why ROPC Exists

Resource Owner Password Credentials (`grant_type=password`) was included in OAuth 2.0 to ease migration from legacy basic-auth systems. The idea: instead of refactoring everything at once, you can POST credentials to the OAuth server the same way you did to the old API.

### Why It Was Deprecated

The OAuth 2.0 Security Best Current Practice (RFC 9700) explicitly recommends against ROPC because:
- The application handles the user's password directly — violating the OAuth model's separation
- It cannot trigger device-bound MFA (the MFA challenge is tied to the browser session)
- It enables programmatic credential attacks at the speed of an API, not a login form

### What "No Rate Limiting" Means in Practice

Without rate limiting on the ROPC endpoint, an attacker can:
- Test 100,000 username/password pairs from a breach database in minutes
- Password spray (one common password against thousands of emails) with no throttle

With rate limiting and MFA, ROPC is annoying but mitigated. Without either: it's a straight path to credential-stuffing-based ATO at scale.

### Documenting the Rate Limit Test in Your Report

Always include timing evidence:
```bash
for i in {1..5}; do
    time curl -s "https://idp.target.com/oauth2/v2.0/token" \
      -X POST -d "grant_type=password&client_id=...&username=test@target.com&password=wrong&scope=..." \
      > /dev/null
done
```

Five consecutive requests with uniformly fast response times and no CAPTCHA challenge demonstrate the absence of throttling.

---

## 6. The A→B Signal — One Developer, Many Mistakes

### The Observation

Bugs are not uniformly distributed across an application. They cluster around specific modules, specific developers, and specific time windows. When you find one bug, the probability of finding another nearby is significantly higher than average.

### Why This Is True

Developers apply security inconsistently because:
- Security knowledge varies across the team
- Features shipped in the same sprint share code patterns (and their flaws)
- Copy-paste architecture propagates both the functionality and the bug
- New developers onboard to existing patterns and inherit their weaknesses

### Types of Siblings

**Same endpoint, different resource type:**  
Bug on `/api/users/{id}` → test `/api/orders/{id}`, `/api/invoices/{id}`, `/api/messages/{id}`

**Same auth pattern, different app:**  
Bug on `app1.target.com` using Azure AD public client → check all other internal Angular apps for the same client_id pattern

**Same config exposure, different path:**  
Config exposed at `/assets/app-constants.js` on one subdomain → check the same path on all live subdomains

**Same developer, different version:**  
Bug in `/api/v2/` → check `/api/v1/` (often has the same bug plus weaker auth)

**Same logic, different flow:**  
Rate limit bypass on login → check password reset, OTP verification, and 2FA confirmation for the same bypass

### The 20-Minute A→B Sprint

After confirming a finding:
1. Write down the root cause in one sentence ("auth check happens after data is returned")
2. List all places in the app where the same root cause might apply
3. Spend exactly 20 minutes testing those places
4. If nothing found in 20 minutes, move on

The A→B signal produces more findings per hour than continued manual testing on a fresh surface.

---

## 7. Open Redirect Bypasses — Why Simple Validation Always Fails

### The Problem with String-Based Validation

Most open redirect fixes look like this:
```python
if not redirect_url.startswith("https://trusted.com"):
    return 403
```

This fails because there are too many ways to form a valid URL that doesn't start with the expected string.

### Common Bypass Techniques

| Technique | Payload | Why it bypasses |
|---|---|---|
| Protocol-relative | `//evil.com` | Does not start with `https://` |
| Double slash | `///evil.com` | Parsed as relative path by some frameworks |
| URL encoding | `%2f%2fevil.com` | Decoded after validation |
| Double encoding | `%252f%252fevil.com` | Double-decoded after validation |
| @ delimiter | `https://trusted.com@evil.com` | `evil.com` is the host; `trusted.com` is user info |
| Subdomain confusion | `https://trusted.com.evil.com` | Passes prefix check |
| Backslash | `https://trusted.com\evil.com` | Browsers follow the WHATWG URL spec and treat `\` as `/`; many server-side validators don't |
| Newline | `https://trusted.com%0d%0a` | CRLF injection; some parsers split |
| Tab / null byte | `https://evil.com%09` | Stripped by some validators |
| IPv6 | `https://[::1]` | Bypasses IP-based blocklists |

### How to Test Systematically

```
Start with: https://evil.com
Try each bypass in order.
For each bypass:
  - Send the request
  - Check if the server returns 200 (or non-403)
  - Check if the Location header or page state stores the bypassed URL
  - Confirm the redirect actually resolves to the external domain
```
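The loop above can be made concrete. The payload list is a subset of the bypass table; the judgement helper runs against the `Location` header, and the curl portion is commented out because the target URL and parameter name are placeholders:

```shell
# Subset of the bypass table to iterate over
PAYLOADS=( "//evil.com" "///evil.com" "%2f%2fevil.com" "%252f%252fevil.com"
           "https://trusted.com@evil.com" "https://trusted.com.evil.com" )

external_redirect() {
    # True (exit 0) when a Location header resolves outside trusted.com
    local location="$1"
    case "$location" in
        https://trusted.com/*|https://trusted.com) return 1 ;;  # stayed internal
        *evil.com*|//*)                            return 0 ;;  # left the origin
        *)                                         return 1 ;;
    esac
}

# for p in "${PAYLOADS[@]}"; do
#     loc=$(curl -s -o /dev/null -D - "https://target.com/login?next=$p" \
#           | awk -F': ' 'tolower($1)=="location"{print $2}')
#     external_redirect "$loc" && echo "[BYPASS] $p -> $loc"
# done
external_redirect "//evil.com" && echo "flagged"
```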

### Why This Matters Beyond "Just a Redirect"

An open redirect is the prerequisite for:
- **OAuth code theft**: If a redirect parameter is used as the OAuth callback URI and the server accepts `//evil.com`, the authorization code is sent to the attacker
- **Phishing chain**: The initial URL looks legitimate; only the destination is attacker-controlled
- **XSS via `javascript:`**: If the `javascript:` scheme is accepted, it is XSS

Always check if the redirect parameter is involved in an OAuth flow. If yes, the finding escalates from P4 to P2 or P1.

---

## 8. Error Messages Are Intelligence, Not Failures

### The Rule

A server error is not a test failure. It is a response containing information the server did not intend to share.

### What High-Value Error Messages Look Like

**Stack trace:**
```
java.lang.NullPointerException
    at com.target.api.UserController.getUser(UserController.java:142)
    at com.target.api.UserController$$FastClassBySpringCGLIB$$...
```
Reveals: language (Java), framework (Spring), class hierarchy, exact file and line number.

**Database error:**
```
ERROR 1064 (42000): You have an error in your SQL syntax; 
check the manual for MySQL 5.7 ... near ''test'' at line 1
```
Reveals: database engine (MySQL 5.7); the injected value is reflected back in the error, confirming the injection point.

**Authentication scheme list (ASP.NET Core):**
```
No authentication handler is registered for the scheme 'Bearer'. 
The registered schemes are: EmployeeScheme, AppBearer, AppTestPackageBearer, CxIssuedScheme.
```
Reveals: all registered auth schemes, including test/pre-release ones (`AppTestPackageBearer`) that may have weaker validation.

**Internal hostname:**
```
ECONNREFUSED: connection refused at 10.0.0.15:5432
```
Reveals: internal IP address, port 5432 (PostgreSQL), the backend cannot connect to its database.

**File path:**
```
TemplateNotFound: /var/www/html/templates/dashboard.html
```
Reveals: absolute server path, directory structure.

### What to Do With Each Error Type

| Error type | Action |
|---|---|
| SQL syntax error | Confirm injection point, escalate to data extraction |
| Stack trace with class names | Search class name in GitHub for source code |
| Internal IP address | Add to SSRF target list |
| Auth scheme names | Try JWT with each scheme name; probe test schemes |
| File path disclosure | Try path traversal relative to disclosed path |
| Version number | Check CVE databases for that version |

### How to Trigger Useful Errors

Send inputs the server doesn't expect:
- Single quote `'` → SQL injection
- `{{7*7}}` → SSTI
- Very long string (1000+ chars) → buffer-related errors
- Null byte `%00` → file system interaction
- Missing required fields → validation errors that reveal expected schema
- Wrong content type (`text/plain` instead of `application/json`) → framework error pages

---

## 9. CSP Headers as a Recon Tool

### What CSP Is

Content Security Policy is a browser security mechanism. The server sends a `Content-Security-Policy` header telling the browser which origins are allowed to load resources.

```
Content-Security-Policy: 
  script-src 'self' https://cdn.target.com;
  connect-src 'self' https://api.target.com https://internal-service.target.net;
  img-src 'self' data: https://images.target.com;
```

### Why `connect-src` Leaks Backend Hostnames

The `connect-src` directive must list every API hostname the SPA calls via `fetch()` or `XMLHttpRequest`. This includes:
- Production API backends
- Staging API backends (sometimes)
- Third-party services (Sentry, analytics)
- Internal microservices that happen to be accessible from the internet

These hostnames are disclosed to every visitor but are invisible to subdomain enumeration tools — they may be on different TLDs or use internal naming conventions.

### How to Harvest CSP Headers at Scale

```bash
while read host; do
    # GET with -D - rather than HEAD (-I): some servers omit CSP on HEAD responses
    csp=$(curl -s -o /dev/null -D - --max-time 5 "https://$host" | grep -i "content-security-policy" | tr ';' '\n')
    if [[ -n "$csp" ]]; then
        echo "=== $host ===" && echo "$csp"
    fi
done < live_hosts.txt
```

### What to Extract

- New subdomains not in your enumeration results
- Backend API hostnames (`api.internal.target.com`)
- Internal tooling (`logging.target.net`, `metrics.target.internal`)
- Third-party services the app depends on (check if those services are misconfigured too)
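Turning harvested CSP values into a target list is a one-liner. A sketch with an invented CSP value standing in for real harvested headers; extend the TLD alternation to whatever suffixes are in scope:

```shell
# Sample CSP value (invented hostnames)
csp="script-src 'self' https://cdn.target.com; connect-src 'self' https://api.target.com https://internal-service.target.net"

# Extract in-scope hostnames into a deduplicated target list
echo "$csp" | grep -oE '[a-zA-Z0-9.-]+\.(target\.com|target\.net)' | sort -u
# prints:
# api.target.com
# cdn.target.com
# internal-service.target.net
```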

### Using CSP as an SSRF Target List

If you find an SSRF, the `connect-src` list tells you exactly which internal services the app is allowed to talk to — those are the services worth probing via the SSRF.

---

## 10. Sentry DSN Exposure — From Config Leak to Write Access

### What Sentry Is

Sentry is an error tracking platform. Applications capture exceptions and send them to Sentry for developers to review. A DSN (Data Source Name) identifies where to send errors:

```
https://<public_key>@sentry.target.com/<project_id>
```

### Why DSNs Are in Client-Side Code

The browser-side Sentry SDK needs the DSN to report JavaScript errors from the client. Developers embed the DSN in their JS bundle or runtime config. This is intentional — the public key is supposed to be safe for client-side use because it only allows writing errors.

### What You Can Do With a DSN

**Write access** (intended design):
```bash
curl -s "https://sentry.target.com/api/<project_id>/store/" \
  -H "X-Sentry-Auth: Sentry sentry_version=7, sentry_key=<public_key>, sentry_client=test/1.0" \
  -H "Content-Type: application/json" \
  -d '{"message":"test","level":"info","platform":"python","extra":{"test":true}}'
```
If the response contains an event ID, write access is confirmed.

**Read access** (not intended; escalation):
```bash
# Try API endpoints with DSN-based authentication (rarely accepted, but worth a probe)
curl -s "https://sentry.target.com/api/0/projects/" \
  -H "Authorization: DSN https://<public_key>@sentry.target.com/<project_id>"
```

**Why Read Access Is Critical:**  
Sentry events contain the full context of what the application was doing when the error occurred: request parameters, response bodies, user session data, stack traces, environment variables, and sometimes tokens or credentials accidentally included in error context.

### Severity Tiers

| Access level | Severity | Why |
|---|---|---|
| Write only | P3–P4 | Log injection, noise generation, minor integrity issue |
| Read (own project) | P2 | Production stack traces, user session data in error context |
| Read (other projects) | P1 | Full read access to errors from other apps |
| Admin (DSN = admin token) | Critical | Manage all projects, create users |

---

## 11. Reading Minified JavaScript for Vulnerabilities

### The Challenge

Modern JS build tools (webpack, Rollup, Vite) minify and bundle code into single large files. Variable names are single letters. Whitespace is stripped. Logic is compressed. It looks unreadable.

But the logic is still there.

### Step 1: Prettify First

```bash
# Using js-beautify
npx js-beautify bundle.js > bundle_pretty.js

# Or in browser: F12 → Sources → find the file → {} button (pretty print)
```
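Before prettifying you need the bundle URLs themselves. One way to collect them, with a small sample HTML file standing in for a curl of the target page:

```shell
# Sample page standing in for: curl -s https://target.com > /tmp/page.html
cat > /tmp/page.html <<'EOF'
<html><head>
<script src="/static/js/main.8f3a1c.js"></script>
<script src="https://cdn.target.com/vendor.js"></script>
</head></html>
EOF

# Pull every script src, stripping the attribute wrapper
grep -oE 'src="[^"]+\.js"' /tmp/page.html | sed 's/^src="//; s/"$//'
# prints:
# /static/js/main.8f3a1c.js
# https://cdn.target.com/vendor.js
```

Download each URL (resolving relative paths against the page origin) and prettify it as above.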

### Step 2: Find the Relevant Code

Search for the behavior you are testing:

```bash
# Redirect logic
grep -n "location\|redirect\|navigate\|router\.push\|history\.push" bundle_pretty.js

# Storage operations
grep -n "localStorage\|sessionStorage\|setItem\|getItem" bundle_pretty.js

# postMessage handlers (DOM XSS source)
grep -n "addEventListener.*message\|postMessage" bundle_pretty.js

# Dangerous sinks (for setTimeout/setInterval, check for string arguments)
grep -n "innerHTML\|outerHTML\|document\.write\|eval(\|setTimeout(\|setInterval(" bundle_pretty.js

# Auth / token handling
grep -n "token\|Authorization\|Bearer\|client_id\|client_secret" bundle_pretty.js

# URLs with parameters
grep -n "fetch\|XMLHttpRequest\|axios\.\|\.get(\|\.post(" bundle_pretty.js
```

### Step 3: Trace the Data Flow

When you find a sink (a dangerous function), trace backwards to find the source:
1. What variable is passed to the sink?
2. Where is that variable set?
3. Is it user-controlled (URL parameter, query string, postMessage data)?

Example of a vulnerable pattern:
```javascript
// Sink: window.location is set
window.location.href = redirectUrl;

// Trace back: where is redirectUrl set?
const redirectUrl = new URLSearchParams(window.location.search).get('redirect');
// → redirectUrl is user-controlled → open redirect
```

### Step 4: Confirm in Burp

Once you identify the code path, replay the request in Burp with your controlled input to confirm it works in the real application — not just in theory from reading the code.

---

## 12. Subdomain Enumeration That Finds What Others Miss

### Why Generic Wordlists Produce Duplicate Findings

Most hunters run the same tools with the same default wordlists. If a subdomain is findable with `subfinder`, thousands of others have already found it. The edge is in finding subdomains that generic tools miss.

### Sources That Find Unique Subdomains

**Certificate Transparency (CT) logs:**
Every TLS certificate is publicly logged. CT logs contain subdomains that never appear in any public wordlist.
```bash
curl -s "https://crt.sh/?q=%.target.com&output=json" | jq -r '.[].name_value' | sort -u
```

**JavaScript bundle analysis:**
The app's own code references its backend hostnames. Tools don't find these — you have to look.
```bash
# Download the bundle and extract hostnames
curl -s "https://target.com/bundle.js" | grep -oE "[a-z0-9-]+\.target\.com" | sort -u
```

**Runtime config files:**
Config files list every API backend. One config file often contains 5–10 new subdomains.

**Android / iOS app analysis:**
```bash
# Extract hostnames from APK strings
strings app.apk | grep "target\.com" | sort -u
```

**CSP header analysis:**
(See Section 9)

**HTTP response body content:**
```bash
# Download all live hosts' root pages and grep for internal hostnames
for host in $(cat live_hosts.txt); do
    curl -s --max-time 5 "https://$host" | grep -oE "[a-z0-9-]+\.target\.com"
done | sort -u
```

### Processing Subdomains Efficiently

```
1. Enumerate (passive sources + active brute force)
   ↓
2. Resolve DNS (puredns) — removes dead hosts
   ↓
3. HTTP probe (httpx) — status code, title, tech stack
   ↓
4. Triage by tech stack — group Angular apps, Next.js apps, etc.
   ↓
5. Config file check on each Angular/React app
   ↓
6. Continue deeper on anything that returns 200 with interesting content
```

### The 5-Minute Rule

If an HTTP probe returns nothing useful (all 404, 401, 502) in 5 minutes of trying common paths, move to the next subdomain. Don't spend 30 minutes brute-forcing paths on a host that isn't serving anything.

---

## 13. Rate Limiting as Evidence, Not Just a Defense Check

### Two Reasons to Test Rate Limiting

**Reason 1 (the obvious one):**  
No rate limiting on a login or OTP endpoint enables brute force and credential stuffing attacks.

**Reason 2 (the underutilized one):**  
The absence of rate limiting is **evidence that makes your finding more severe**. Include it in every report where it is relevant.

Without rate limiting evidence, a triager might think: "ROPC is technically enabled, but surely there's rate limiting that makes it impractical." Your timing data eliminates that objection.

### What to Measure

```bash
# 5 rapid requests, measure each one
for i in {1..5}; do
    time curl -s -o /dev/null "https://idp.target.com/oauth2/token" \
      -X POST -d "grant_type=password&username=test@target.com&password=wrong&..."
done
```

Look for:
- Consistent response times (no slowdown as requests increase)
- No `429 Too Many Requests` response
- No `X-RateLimit-*` headers
- No CAPTCHA challenge
- No account lockout warning in the response body

### What to Include in the Report

```
Rate Limiting Test:
5 rapid consecutive ROPC requests — avg response time 469ms — no throttling, 
no lockout, no CAPTCHA, no progressive delay.

At 469ms per request, an attacker can test ~7,700 credentials per hour 
against a single thread. With 10 parallel threads: ~77,000/hour.
```

Quantifying the attack throughput makes the risk concrete. Triagers respond to numbers.
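The throughput figures are worth deriving rather than asserting. At the measured 469ms per attempt:

```shell
# 3600 seconds/hour divided by the measured per-request time
awk 'BEGIN {
    rt = 0.469                     # seconds per ROPC attempt (measured)
    per_hour = int(3600 / rt)      # single-threaded attempts per hour
    print per_hour, per_hour * 10  # 1 thread vs 10 parallel threads
}'
# prints: 7675 76750
```

Swap in your own measured average; showing the arithmetic in the report preempts any "impractical in practice" objection.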

---

## 14. Attack Chaining — Turning Low into Critical

### The Core Principle

Most web vulnerabilities in isolation are Low or Medium severity. The jump to Critical happens when two or three weaknesses combine to produce an end-to-end attack scenario.

### The Chaining Framework

For every finding, ask these questions:

**What does this give me?**
- Information (internal hostname, username, error message)
- Trust (a token, a session, an OAuth code)
- Redirection (an open redirect)
- Execution (a script in a page)

**What requires that as a prerequisite?**
- Information → look up what that service does; is there a known vuln?
- Trust → what APIs accept this token; what data can be accessed?
- Redirection → is this used in an OAuth flow; can I steal a code?
- Execution → whose context does this execute in; can it steal a session?

**Can I close the gap with another finding?**

### Common Chains

| Finding A | Finding B | Result |
|---|---|---|
| Open redirect | OAuth `redirect_uri` uses the same parameter | Authorization code theft → ATO |
| Self-XSS | CSRF to trigger the XSS on another user | Stored XSS → session theft |
| SSRF (DNS callback only) | Cloud metadata at `169.254.169.254` | AWS credential theft → lateral movement |
| Username enumeration | No rate limit on login | Credential stuffing at scale |
| Exposed client_id | Device code grant enabled | Phishing-based ATO |
| IDOR (read) | UUID leaked in search result / email | PII exfiltration at scale |
| Staging → prod connection | Staging auth bypass | Production ATO |
| Exposed admin hostname in config | No auth on admin WebSocket | Unauthenticated admin access |

### Reporting Chains

File each component separately (if it has standalone value) and file the chain as a third report with escalated severity. Most programs pay the chain at its full severity plus smaller awards for the components. Three reports, not one.

---

## 15. Dead End Management — The Cost of Rabbit Holes

### What a Rabbit Hole Costs

Forty-five minutes spent on a dead end is 45 minutes not spent on a finding. Over a month of hunting, rabbit holes cost more time than any other factor.

### Dead End Taxonomy

**Hard dead end:** The endpoint does not exist, returns consistent errors, has no interesting behavior, and there is no realistic path to exploitation.

**Soft dead end:** The endpoint exists and looks interesting, but you lack a prerequisite (a valid token, an internal network position, a second vulnerability) that you don't currently have.

Write them differently in your notes. Hard dead ends should never be revisited. Soft dead ends should be revisited when you acquire the missing prerequisite.

### The Dead End Entry Format

```
DEAD: <host or endpoint>
Tried: <exactly what you tested>
Result: <exactly what happened>
Why dead: <the reason this cannot be exploited>
Condition to revisit: <what would make this viable>
```

Example:
```
DEAD: api.target.com/v1/internal/*
Tried: GET/POST all paths, fuzzing with common API wordlist
Result: All 401 with {"error":"unauthorized"}
Why dead: Requires bearer token; no unauthenticated surface
Condition to revisit: If you obtain a valid user token from another finding
```

### The 20-Minute / 45-Minute Rules

- **20 minutes on a parameter:** No progress → rotate vuln class or endpoint
- **45 minutes on a single hypothesis:** No progress → hard stop, add to dead ends, move on

Progress means new information. Trying the same payload five different ways is not progress.

---

## 16. Writing Reports That Actually Get Paid

### Why Reports Get N/A'd

The most common reasons a valid bug gets N/A'd (rejected):
1. Missing HTTP evidence — no actual requests and responses
2. Unclear impact — triager cannot tell what the attacker gains
3. Only affects your own account — self-XSS, self-IDOR
4. Theoretical impact — "could be used to..." without a working PoC
5. Out of scope — not reading the program rules carefully
6. Duplicate — existing report covers the same vulnerability

### Report Title Formula

```
[Bug class] in [endpoint or feature] allows [attacker role] to [impact]
```

Good examples:
- `IDOR in /api/v1/invoices/{id} allows unauthenticated attacker to download any customer invoice`
- `ROPC enabled on public Azure AD client allows credential stuffing against employee login`
- `Stored XSS in profile bio executes in admin dashboard, enabling admin session theft`

Bad examples:
- `Security issue on your website` (no specifics)
- `Missing access control` (no asset, no impact)
- `XSS vulnerability` (no endpoint, no impact)

### Summary Paragraph Structure

```
Sentence 1: What the attacker can do (impact first)
Sentence 2: How (the mechanism, one sentence)
Sentence 3: Business consequence (data type, scale, affected users)
```

Example:
```
An unauthenticated attacker can download any customer invoice by 
incrementing the numeric ID in /api/v1/invoices/{id} — the endpoint 
returns invoice data without verifying that the requesting session 
owns the referenced invoice. Invoices contain customer name, billing 
address, payment method type, and order details for all ~2 million users.
```

### The Evidence Standard

Every step in the PoC must have:
- The exact HTTP request (method, URL, headers, body)
- The exact HTTP response (status code, relevant headers, body)
- No screenshots as primary evidence — raw text is searchable, copy-pasteable, and unambiguous
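Under this standard, a minimal evidence pair for the hypothetical invoice IDOR from the title examples above might look like:

```
Request:
GET /api/v1/invoices/1338 HTTP/1.1
Host: api.target.com
(no Authorization header sent)

Response:
HTTP/1.1 200 OK
Content-Type: application/json

{"invoice_id": 1338, "customer_name": "...", "billing_address": "..."}
```

Redact the PII in the body, but keep the structure: the triager must be able to replay the request verbatim.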

### CVSS 3.1 Quick Reference

Base metrics that matter most for web bugs:

| Metric | High impact choice | Low impact choice |
|---|---|---|
| Attack Vector | Network (N) | Local (L) |
| Attack Complexity | Low (L) | High (H) |
| Privileges Required | None (N) | High (H) |
| User Interaction | None (N) | Required (R) |
| Confidentiality | High (H) | Low (L) or None (N) |
| Integrity | High (H) | Low (L) or None (N) |
| Availability | High (H) | None (N) |

A critical web vuln (unauthenticated RCE): `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H` = 10.0  
A typical IDOR with PII read: `CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N` = 6.5
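The table maps directly onto the CVSS 3.1 base-score equations. A minimal calculator sketch, with metric weights and the rounding rule taken from the FIRST CVSS v3.1 specification (verify scores against the official calculator before filing):

```python
# Minimal CVSS 3.1 base-score calculator (weights per the v3.1 spec).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
    # PR weight depends on scope: (Scope Unchanged, Scope Changed)
    "PR": {"N": (0.85, 0.85), "L": (0.62, 0.68), "H": (0.27, 0.50)},
}

def roundup(x):
    """Spec rounding: ceiling to one decimal, with a floating-point guard."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector):
    m = dict(p.split(":") for p in vector.split("/")[1:])  # skip "CVSS:3.1"
    changed = m["S"] == "C"
    cia = WEIGHTS["CIA"]
    iss = 1 - (1 - cia[m["C"]]) * (1 - cia[m["I"]]) * (1 - cia[m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed \
             else 6.42 * iss
    if impact <= 0:
        return 0.0
    expl = 8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] \
               * WEIGHTS["PR"][m["PR"]][changed] * WEIGHTS["UI"][m["UI"]]
    raw = min(1.08 * (impact + expl), 10) if changed else min(impact + expl, 10)
    return roundup(raw)

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H"))  # 10.0
print(base_score("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N"))  # 6.5
```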

---

## 17. How to Think Like the Developer Who Made the Bug

### The Core Mental Model

Every vulnerability was created by a developer who was:
1. Trying to ship a feature on a deadline
2. Following an existing pattern in the codebase
3. Trusting that another layer would handle the security concern
4. Not thinking about adversarial inputs for this particular field

Your job is not to find bugs — it is to find where developer assumptions break down.

### The Five Developer Assumptions That Break Most Often

**Assumption 1: "The frontend will validate it"**  
Backend developer removes validation because the frontend form already checks it.  
Attack: Send the request directly, bypassing the frontend.

**Assumption 2: "Only our app will call this endpoint"**  
Internal API endpoint with no auth, assumed to be unreachable from the internet.  
Attack: Find the hostname in a config file, JS bundle, or CSP header. Call it directly.

**Assumption 3: "The user can only enter valid values through the UI"**  
Business logic only tested through the UI flow; edge cases only reachable via direct API calls.  
Attack: Skip UI constraints, send invalid combinations directly.

**Assumption 4: "This is staging, nobody will test it seriously"**  
Relaxed security controls on staging with the assumption that only the internal team uses it.  
Attack: Discover staging via subdomain enumeration; exploit its weaker controls.

**Assumption 5: "This was already reviewed"**  
Newly added feature copy-pasted from an existing endpoint, including the auth check — but the auth check on the original was for a different resource type.  
Attack: Test the new endpoint with another user's resource IDs.

### The "What If" Technique

For every security control you encounter, ask: "What if this wasn't here?"

- CAPTCHA on registration → "What if there's a staging endpoint without this?"
- Bearer token required → "What if there's a v0 version of this API?"
- UUID IDs → "What if there's a lookup endpoint that maps sequential IDs to UUIDs?"
- MFA on login → "What if there's a legacy mobile API that accepts username/password?"

The "what if" question is your hunting hypothesis. Write it down. Test it. If true: file a report.

---

## 18. The Session Summary Habit

### The Problem Without Notes

Without structured notes, you will:
- Re-test dead ends you already investigated weeks ago
- Forget a half-confirmed hypothesis that was one step away from a finding
- Lose context on which evidence belongs to which finding
- Start a new session with no idea where you left off

### Minimum Required Notes Per Session

```
DATE: <date>
TARGET: <subdomain or feature>
GOAL: <what you were trying to achieve>

CONFIRMED FINDINGS:
- [ID] Description | Severity | Status

ACTIVE LEADS:
- <hypothesis> | <specific endpoint> | <what needs to be tested next>

DEAD ENDS:
- <what you tested> | <what happened> | <why it's dead>

NEW SURFACE DISCOVERED:
- <subdomain or endpoint> | <tech> | <initial observation>

WEIRD BEHAVIOR (not yet exploitable):
- <observation> — <what it might mean>
```

### The Weird Behavior Entry

This is the most underused section. When you see something that doesn't fit (a response that differs slightly from the others, a parameter that seems to do nothing, an unusual error message), write it down.

These observations become gadgets. In a future session, after you understand the app better, you will see how two "weird" observations combine into a finding.

---

## 19. Scope Awareness vs Scope Creep

### Scope Creep (Bad)

Testing assets that are explicitly out of scope or clearly not owned by the target. This wastes time, produces N/A reports, and can get you banned from the program.

### Scope Awareness (Good)

Reading the program's scope rules carefully enough to find in-scope assets that other hunters overlook.

### Questions to Answer Before Starting Any Program

1. Is the scope `target.com` only, or `*.target.com` (wildcard)?
2. Are staging environments explicitly in or out of scope?
3. Are employee-facing portals in scope, or only customer-facing?
4. Are mobile apps in scope?
5. Are third-party integrations (using the target's SSO, subdomain, or API) in scope?
6. Is the target's open-source code in scope for code review?

### Wildcard Scope Is the Best Scope

A wildcard scope (`*.target.com`) means every subdomain you discover is automatically in scope. This rewards thorough recon directly — subdomains others haven't tested are your exclusive attack surface.

With a wildcard scope, more time on recon (finding unique subdomains) often produces more findings than more time on exploitation of the main domain.
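A recon pipeline can enforce this mechanically: every discovered hostname passes through a scope filter before it is touched. A minimal sketch (the hostnames and wildcard entry are hypothetical; whether a wildcard covers the apex domain itself varies by program, so confirm in the rules):

```python
# Filter discovered hostnames through the program's scope list before testing.
def in_scope(host: str, scope_entries: list[str]) -> bool:
    host = host.lower().rstrip(".")
    for entry in scope_entries:
        entry = entry.lower()
        if entry.startswith("*."):
            apex = entry[2:]
            # Match the apex and any subdomain depth. The leading "." in the
            # suffix check prevents lookalike matches such as nottarget.com.
            if host == apex or host.endswith("." + apex):
                return True
        elif host == entry:
            return True
    return False

scope = ["*.target.com"]
print(in_scope("staging.api.target.com", scope))  # True
print(in_scope("target.com.evil.net", scope))     # False
```

The suffix check matters: a naive `host.endswith("target.com")` would wrongly treat `nottarget.com` or `target.com.evil.net` as in scope.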

### When You're Not Sure if Something Is In Scope

Ask before testing, not after. Most programs have a way to ask questions. A 24-hour wait for scope clarification is better than a P1 report getting N/A'd because the asset is out of scope.

---

## 20. Why Authentication Bugs Cluster

### The Observation

Authentication misconfigurations do not appear randomly across an organization's applications. They cluster: across apps built by the same team, apps set up in the same time period, or apps created from the same infrastructure template.

### Why Clusters Form

**Same IdP setup, same misconfiguration:**  
When a company uses Azure AD or Okta, one person (or one team) usually registers all the OAuth applications. If they misconfigure one app (e.g., leaving ROPC enabled), they often misconfigure all apps the same way.

**Shared infrastructure template:**  
If developers use a starter template or infrastructure-as-code module for "create a new web app with Azure AD auth," and that template has a misconfiguration, every app created from it inherits the flaw.

**Same onboarding documentation:**  
If the internal wiki says "set up your Azure AD app like this," and that documentation is wrong, everyone who follows it creates a vulnerable app.

### Practical Implication

When you find an authentication misconfiguration in one app:
1. Write down the exact mechanism (grant type, IdP, scope, client_id pattern)
2. Look for all other apps on the same IdP (same tenant_id in their configs)
3. Extract client_ids from their config files
4. Test each client_id for the same misconfiguration

If the root cause is a shared template or IdP policy, every app will have the same flaw. One finding becomes five.
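For the Azure AD case, steps 3 and 4 can be sketched against the standard v2.0 token endpoint. This is an assumption-laden sketch: it uses a deliberately wrong password (never spray real credentials), and it interprets AADSTS error codes as they are commonly documented; treat an unrecognized code as new intelligence, not a failure.

```python
# Probe a list of client_ids for the ROPC grant on an Azure AD tenant.
import json
import urllib.error
import urllib.parse
import urllib.request

TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

def interpret_aadsts(body: str) -> str:
    """Map common AADSTS error codes to what they reveal about the client."""
    if "AADSTS50126" in body:
        # The grant processed our (wrong) credentials: ROPC is enabled.
        return "ROPC ENABLED: grant processed the credentials (password wrong)"
    if "AADSTS7000218" in body:
        return "confidential client: client_secret required, ROPC not public"
    if "AADSTS50034" in body:
        return "grant reachable, but test username not found in tenant"
    if "AADSTS700016" in body:
        return "client_id not found in this tenant"
    return "unrecognized response: record it verbatim in your notes"

def probe_ropc(tenant: str, client_id: str, username: str) -> str:
    data = urllib.parse.urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "scope": "openid",
        "username": username,
        "password": "definitely-wrong-password",  # never spray real passwords
    }).encode()
    try:
        with urllib.request.urlopen(TOKEN_URL.format(tenant=tenant), data) as r:
            json.load(r)
            return "token issued: ROPC enabled and credentials valid"
    except urllib.error.HTTPError as e:
        return interpret_aadsts(e.read().decode())
```

Run one probe per client_id extracted in step 3; the interpretation of the error body, not the HTTP status code, carries the signal.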

### Reporting Multiple Instances

File each instance as a separate report. Each affected application is a separate asset with separate impact. The fact that they share a root cause does not mean they should be filed as one report — programs pay per finding, not per root cause.

---

*This document contains general security concepts and techniques for authorized bug bounty testing. Always ensure you have explicit authorization before testing any system.*
