Knowledge base
Everything you need to run disciplined agentic bug bounty operations and submit findings that actually get accepted.
In this knowledge base
Resource library
The four-stage disclosure process: scope definition, agentic mission execution, deterministic validation, and PoC report delivery. Covers every step from scope import to copy-paste-submit output.
IDOR, auth bypass, privilege escalation, race conditions, prompt injection. What each class means, why it matters, which tier covers it, and what validation mode is used.
What RaSEC Hunt enforces at runtime: scope, validation, data retention, audit trails. What it does not guarantee: 100% coverage, program acceptance, results on hardened targets.
Scope discipline, reproducibility standards, target stability principles, and responsible disclosure ethics. Specific dos and don'ts grounded in what triage teams actually see.
The agentic AI execution surface: scope guardrail engine, ReAct coordinator, specialist agents (ReconAgent, LogicAgent, ValidationAgent, PoCAgent), SSE streaming, and overnight background runs.
Common questions about how the agent works, what counts as a confirmed finding, how data is handled, what the free tier covers, and when to upgrade.
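The four-stage disclosure process above can be pictured as a simple state machine in which deterministic validation gates report delivery. This is an illustrative sketch only; the stage names and the send-back rule are assumptions, not RaSEC Hunt's actual API:

```python
from enum import Enum, auto

class Stage(Enum):
    SCOPE_DEFINITION = auto()    # import and pin the program scope
    MISSION_EXECUTION = auto()   # agentic hunting within scope
    VALIDATION = auto()          # deterministic reproduction of the finding
    REPORT_DELIVERY = auto()     # copy-paste-submit PoC output

ORDER = [Stage.SCOPE_DEFINITION, Stage.MISSION_EXECUTION,
         Stage.VALIDATION, Stage.REPORT_DELIVERY]

def advance(current: Stage, validated: bool = True) -> Stage:
    """Move to the next stage. A finding that fails deterministic
    validation is sent back for more evidence; it never reaches
    report delivery unvalidated."""
    if current is Stage.VALIDATION and not validated:
        return Stage.MISSION_EXECUTION
    i = ORDER.index(current)
    return current if i == len(ORDER) - 1 else ORDER[i + 1]
```

The key design point is that there is no path from `VALIDATION` to `REPORT_DELIVERY` without `validated=True`, which is what separates a confirmed finding from slop.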
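Runtime scope enforcement, mentioned in the guarantees above, amounts to a hard allow-list check before any request leaves the agent. A minimal sketch, assuming a hypothetical program scope of two hosts (the domain names and helper are illustrative, not part of RaSEC Hunt):

```python
from urllib.parse import urlparse

# Hypothetical program scope imported at stage one
IN_SCOPE = {"app.example.com", "api.example.com"}

def is_in_scope(url: str) -> bool:
    """Scope guardrail: refuse any target whose host is not on the
    allow-list (or a subdomain of one) before a request is sent."""
    host = urlparse(url).hostname or ""
    return host in IN_SCOPE or any(host.endswith("." + d) for d in IN_SCOPE)
```

Checking the parsed hostname rather than doing substring matching on the raw URL avoids the classic bypass where an out-of-scope host embeds an in-scope string in its path or query.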
Market context
Programs are rejecting AI-assisted submissions at scale. HackerOne data from 2025 lists "AI slop" among the top triage complaints: shallow reports with no evidence, out-of-scope targets, or unverified findings. Curl's maintainers have publicly weighed shutting their bounty program entirely over the flood of slop. The hunters who succeed with AI tooling are those who treat validation as non-negotiable.
IDOR report volume on HackerOne grew +116% over five years. Prompt injection reports against AI-powered apps rose +540% year-over-year. XSS has flatlined. Business logic bugs (access control failures, auth bypass, session abuse) are where programs are paying most, and where generic scanners fail to compete.
XBOW proved that autonomous hunting works: it raised $120M and hit #1 on HackerOne. But it costs $4,000-8,000 per test, is explicitly enterprise-only, and is a black box: hunters cannot see its reasoning or steer it. RaSEC Hunt fills the gap XBOW deliberately left: an affordable, hunter-facing, glass-box agentic co-pilot.
Get started
If you are new: start with the platform overview to understand how agentic execution works, then read the disclosure process to learn the four-stage workflow before starting a session.
If you are already running hunts: the operational guidelines cover what to check at review time and the evidence requirements for each finding state before you submit.