
Operator culture

Community

Built for hunters who care about craft: scope-safe execution, validated evidence, and submissions that hold up in triage.

“Hunters who operate at a high bar need tools that operate at the same bar.”


The moment AI and bug bounty converge

$81M+ in bounty payouts on HackerOne per year — the market we hunt in (Source: HackerOne 2025)

70-82% of bug bounty hunters already using AI tools in their workflow (Source: HackerOne/Bugcrowd 2025-26)

+116% IDOR report growth on HackerOne over 5 years — the core focus area (Source: HackerOne 2025)

+540% prompt injection growth YoY — the emerging AI target class (Source: HackerOne 2025)

The bug bounty market is growing faster than the pool of skilled hunters who can produce high-quality submissions. 70-82% of hunters already use AI tools, but the dominant use case is still chat assistance — not autonomous execution. The gap between AI-assisted discovery and submission-ready output remains large.

At the same time, programs are raising their quality bar in response to AI spam. Triage teams are rejecting more submissions, requiring more evidence, and in some cases (like Curl) shutting programs entirely. The hunters who succeed in this environment are those who use AI to do more disciplined work, not more volume.

IDOR and access control failures — the core focus of RaSEC Hunt — grew +116% in report volume over 5 years. Prompt injection (relevant to AI-powered application targets) rose +540% year-over-year. These are not niche bug classes. They are where the market is moving, and they require methodical, logic-aware testing that scanners cannot replicate.

This community is built around hunters who understand this context: who prioritize business-logic bugs over low-hanging XSS, who validate before they submit, and who treat scope as a hard constraint, not a soft guideline.

How we operate

Scope discipline first

The community norm is simple: you test what the program authorizes, and nothing else. Scope edge-running — intentionally testing assets that are ambiguously defined — is not a grey area here. It is disqualifying behavior. One out-of-scope test can get a researcher permanently banned from a program.

RaSEC Hunt enforces scope at the transport layer. But community enforcement is cultural: hunters who demonstrate consistent scope discipline get early access to new features, program integrations, and cohort participation.
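The details of RaSEC Hunt's transport-layer enforcement are not described here, but the general shape of such a gate is simple: every outbound request passes an allowlist check before anything touches the wire. The sketch below is illustrative only — the scope entries, function names, and exact-match policy are assumptions, not the product's actual implementation.

```python
from urllib.parse import urlsplit

# Hypothetical program scope; a real tool would load this from the
# program's published scope definition.
IN_SCOPE = {"app.example.com", "api.example.com"}

def is_in_scope(url: str) -> bool:
    """Allow a request only if its exact host is on the program allowlist."""
    host = (urlsplit(url).hostname or "").lower()
    # Exact-match only: ambiguous subdomains are treated as out of scope,
    # consistent with the "scope edge-running is disqualifying" norm above.
    return host in IN_SCOPE

def guarded_request(url: str) -> str:
    """Placeholder for a real HTTP client wrapped by the scope gate."""
    if not is_in_scope(url):
        raise PermissionError(f"BLOCKED out-of-scope target: {url}")
    return f"would send request to {url}"
```

The design choice worth noting is that the check sits in front of the transport, not in the agent's reasoning: even a misbehaving prompt or plan cannot produce an out-of-scope request.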

Validation over volume

The dominant metric in this community is not the number of bugs submitted — it is the ratio of accepted to rejected submissions. One CONFIRMED finding with a working curl command, a clean diff, and a platform-ready report is worth more than twenty "potential" alerts with no evidence.
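What "validated before submission" means in practice for an IDOR: reproduce the cross-account read with two sessions and compare the responses. The harness below is a minimal sketch — the endpoint, tokens, and `fetch` callable are hypothetical stand-ins for an authorized HTTP call (the curl-with-two-tokens equivalent), and the toy backend exists only to make the output visible.

```python
def validate_idor(fetch, resource_id, attacker_token, victim_token):
    """CONFIRMED only if the attacker session reads the victim's resource.

    `fetch` stands in for an authorized HTTP request, roughly:
      curl -H "Authorization: Bearer <token>" https://target/api/orders/<id>
    """
    victim = fetch(resource_id, victim_token)      # baseline: owner can read
    attacker = fetch(resource_id, attacker_token)  # cross-account attempt
    # A 403/404 or a differing body is a rejected finding, not a report.
    return attacker["status"] == 200 and attacker["body"] == victim["body"]

# Toy backend with a missing ownership check, purely for illustration.
DB = {1001: {"owner": "victim-token", "body": "victim's order"}}

def broken_fetch(rid, token):
    rec = DB[rid]
    return {"status": 200, "body": rec["body"]}  # no ownership check: IDOR
```

Running `validate_idor(broken_fetch, 1001, "attacker-token", "victim-token")` confirms the flaw; against a backend that enforces ownership, the same harness returns False and the finding stays out of the submission queue.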

HackerOne data shows that IDOR reports grew +116% in 5 years. Programs are flooded with low-quality submissions. The hunters who stand out are those who demonstrate that their output has been verified before submission.

Glass-box mindset

The RaSEC Hunt model is explicit about what the agent did: every tool call, every reasoning step, every validation attempt is logged and reviewable. The community practices the same transparency. Writeups include the full methodology, not just the punchline.

Hunters who share methodology — hunt replays, validation chains, scope-aware recon techniques — progress faster than those who treat their process as proprietary. The goal is a community of practice, not a set of isolated solo operators.

Learn from every mission

Every hunt session is a learning artifact. Rejected findings are as informative as confirmed ones — they tell you where the application does enforce access control correctly, which is useful context for the next session. Hunt history, session replay, and the memory system are tools for compound improvement over time.

The memory system retains episodic context across hunts: what was found, what was patched, what vectors were tried. Elite tier uses pgvector RAG on Neon DB for semantic cross-hunt deduplication. Hunters who use this context iteratively consistently surface higher-quality findings.
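The dedup logic behind that pgvector query can be sketched without a database: embed each finding, then flag a new one whose embedding sits within a similarity threshold of any prior finding. In the hosted setup described above this would be a nearest-neighbor query (pgvector exposes a cosine-distance operator, `<=>`); the plain-Python version below is an assumption-laden sketch — the threshold and function names are illustrative, not the product's actual parameters.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_duplicate(new_vec, stored_vecs, threshold=0.92):
    """Flag a new finding whose embedding is near any prior finding's.

    Equivalent in spirit to a pgvector nearest-neighbor lookup, e.g.:
      SELECT id FROM findings ORDER BY embedding <=> $1 LIMIT 1;
    followed by the same threshold check. Threshold is illustrative.
    """
    return any(cosine(new_vec, v) >= threshold for v in stored_vecs)
```

The point of doing this semantically rather than by string match is that two write-ups of the same access-control flaw rarely share exact wording, but their embeddings land close together.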

What the community is. What it is not yet.

The RaSEC Hunt community is an early-stage cohort of hunters taking agentic AI tooling seriously. The tooling is real and shipping. The community practices are being built now. This is not a pre-launch waitlist that pretends a community exists before it does.

What exists today: a shared platform, shared standards, shared tooling, and early access to features as they ship. What is being built: community programs, cohort communication channels, and shared learning artifacts from real hunts.

We publish concrete community details only when they are confirmed. No vaporware program announcements. No fake engagement numbers. No Discord server that is actually just a drip campaign in disguise.

If you are joining now, you are joining an early cohort. That means you get direct influence on what the tooling prioritizes, early access to new capabilities, and a real seat at the table when the community programs launch.