Knowledge base

Resources

Everything you need to run disciplined agentic bug bounty operations and submit findings that actually get accepted.

In this knowledge base

Disclosure workflow (Workflow)
Bug class reference (Bug classes)
Trust and security model (Trust)
Operational guidelines (Operations)
Platform deep-dive (Platform)

Guides, references, and platform docs

Workflow

Disclosure workflow

The four-stage disclosure process: scope definition, agentic mission execution, deterministic validation, and PoC report delivery. Covers every step from scope import to copy-paste-submit output.

Bug classes

Bug class reference

IDOR, auth bypass, privilege escalation, race conditions, prompt injection. What each class means, why it matters, which tier covers it, and what validation mode is used.

Trust

Trust and security model

What RaSEC Hunt enforces at runtime: scope, validation, data retention, audit trails. What it does not guarantee: 100% coverage, program acceptance, results on hardened targets.

Operations

Operational guidelines

Scope discipline, reproducibility standards, target stability principles, and responsible disclosure ethics. Specific dos and don'ts grounded in what triage teams actually see.

Platform

Platform deep-dive

The agentic AI execution surface: scope guardrail engine, ReAct coordinator, specialist agents (ReconAgent, LogicAgent, ValidationAgent, PoCAgent), SSE streaming, and overnight background runs.
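SSE streaming means the coordinator emits progress as Server-Sent Events: `event:` names the event type, `data:` lines carry the payload, and a blank line dispatches it. A minimal parser sketch, assuming hypothetical event names and payloads (neither is RaSEC Hunt's documented wire format):

```python
def parse_sse(lines):
    """Parse SSE framing: 'event:' names an event, 'data:' lines
    accumulate its payload, and a blank line dispatches it."""
    event_name, data = "message", []
    for line in lines:
        if line == "":                        # blank line: dispatch the event
            if data:
                yield event_name, "\n".join(data)
            event_name, data = "message", []
        elif line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

# A made-up stream such as an overnight run might emit:
stream = [
    "event: agent_step",
    'data: {"agent": "ReconAgent", "action": "enumerate endpoints"}',
    "",
    "event: finding",
    'data: {"class": "IDOR", "state": "validated"}',
    "",
]
events = list(parse_sse(stream))
```

This is what makes the platform glass-box rather than black-box: each specialist agent's step arrives as a discrete, inspectable event rather than a final opaque report.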

FAQ

Frequently asked questions

Common questions about how the agent works, what counts as a confirmed finding, how data is handled, what the free tier covers, and when to upgrade.

Why agentic bug bounty matters now

The AI slop problem

Programs are rejecting AI-assisted submissions at scale. HackerOne data from 2025 shows "AI slop" as a top triage complaint — shallow reports with no evidence, wrong scope, or unverified findings. Curl shut its public bounty program entirely. The hunters who succeed with AI tooling are those who treat validation as non-negotiable.

Where the money is moving

IDOR report growth on HackerOne hit +116% over 5 years. Prompt injection for AI-powered apps rose +540% year-over-year. XSS report growth has flatlined. Business logic bugs — access control failures, auth bypass, session abuse — are where programs are paying most, and where generic scanners cannot compete.

The XBOW gap

XBOW proved autonomous hunting works (raised $120M, hit #1 on HackerOne). But it costs $4,000-8,000 per test, is explicitly enterprise-only, and is a black box — hunters cannot see its reasoning or steer it. RaSEC Hunt fills the gap XBOW deliberately left: an affordable, hunter-facing, glass-box agentic co-pilot.

Market data sourced from HackerOne 2025 annual report, Bugcrowd 2026 Inside the Mind of a Hacker report, and Grok/Deep Research synthesis from March 2026.

Ready to run your first hunt?

If you are new: start with the platform overview to understand how agentic execution works, then read the disclosure process to understand the four-stage workflow before you start a session.

If you are already running hunts: the operational guidelines cover what to check at review time and the evidence required for each finding state before you submit.