Operational trust

Security & trust

Security is a runtime constraint, not a marketing badge. Here is exactly what we enforce, what we guarantee, and what we do not.

Runtime trust guarantees

  • Scope gated
  • No data training
  • Full audit trail
  • No false positives
  • Session isolation
  • No scan traffic logged

Why trust is harder with agentic AI

Traditional security tools have a simple trust model: you press Scan, it fires payloads, you review the report. The tool is passive. You are in control of every step.

Agentic AI changes this. The agent makes autonomous multi-step decisions about what to test, how to test it, and what counts as a valid finding. That autonomy is the source of its power — and the source of its trust challenge. A poorly controlled agent could test out-of-scope targets, promote false positives as confirmed findings, or log data it should not retain.

RaSEC Hunt is designed with these trust concerns as first-class design constraints: scope is enforced at the execution layer (not just checked in the UI), findings require deterministic reproduction before promotion, and the activity log is complete and unfiltered.

We also recognize that trust must be earned, not claimed. That is why we document our residual risks explicitly below and do not pretend that every possible risk has been eliminated.

What we enforce at runtime

01

Scope safety

Scope enforced in the execution layer, not the UI

Every agent action is checked against the session scope policy before any network call is made. Scope rules are immutable once a session starts — they cannot be overridden by agent reasoning, user steering, or intermediate findings. If the agent discovers a redirect that leads out of scope, it stops, logs the event, and waits for operator intervention.

  • Scope rules locked at session start and version-tracked
  • Out-of-scope network calls are blocked at the transport layer
  • Scope violations surface as first-class events in the activity log
  • Agent cannot self-modify scope constraints during a hunt
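The execution-layer gate described above can be sketched as follows. This is an illustrative model only, not the actual RaSEC Hunt implementation: `ScopePolicy`, `guarded_request`, and the log shape are hypothetical names standing in for a frozen scope policy, a transport-layer check that runs before every network call, and first-class violation events.

```python
# Illustrative sketch only -- not the actual RaSEC Hunt implementation.
from dataclasses import dataclass
from urllib.parse import urlparse


@dataclass(frozen=True)  # frozen: the policy cannot be mutated after session start
class ScopePolicy:
    allowed_hosts: frozenset

    def permits(self, url: str) -> bool:
        return urlparse(url).hostname in self.allowed_hosts


class ScopeViolation(Exception):
    pass


def guarded_request(policy: ScopePolicy, url: str, activity_log: list) -> None:
    """Transport-layer gate: every network call is checked before it is made."""
    if not policy.permits(url):
        # Violations surface as first-class events, then execution halts
        # until an operator intervenes.
        activity_log.append({"event": "scope_violation", "url": url})
        raise ScopeViolation(f"blocked out-of-scope call to {url}")
    activity_log.append({"event": "request", "url": url})
    # ... the actual HTTP call would happen here ...
```

Because the policy object is immutable and the check sits in front of the transport, neither agent reasoning nor user steering can widen scope mid-session.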
02

Validation honesty

No finding is "AI-confirmed" without evidence

Findings go through a mandatory three-stage validation pipeline before being promoted to confirmed status: initial detection, deterministic reproduction (up to 3 attempts), and evidence assembly. If reproduction fails at any attempt, the finding stays in "pending" state with a clear explanation of why — it is never silently dropped or promoted anyway.

  • CONFIRMED state requires successful HTTP reproduction, not just heuristic detection
  • Pending and rejected states are permanently logged with reasons
  • CVSS severity assigned only after confirmation, not at detection
  • HTTP evidence bundle (request, response, diff) included with every confirmed finding
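The promotion rule above can be modeled as a small state machine. This is a hedged sketch of the behavior described, not the platform's code: `validate`, `reproduce`, and the state names are illustrative stand-ins for the detection, reproduction (up to 3 attempts), and evidence stages.

```python
# Hypothetical sketch of the promotion rule -- names are illustrative.
from enum import Enum


class FindingState(Enum):
    DETECTED = "detected"
    PENDING = "pending"
    CONFIRMED = "confirmed"


MAX_ATTEMPTS = 3  # deterministic reproduction gets up to 3 tries


def validate(finding: dict, reproduce) -> tuple:
    """Promote to CONFIRMED only after an HTTP reproduction succeeds."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if reproduce(finding):
            return FindingState.CONFIRMED, f"reproduced on attempt {attempt}"
    # Never silently dropped or promoted: the pending state
    # carries an explicit reason.
    return FindingState.PENDING, f"reproduction failed after {MAX_ATTEMPTS} attempts"
```

The key property is that heuristic detection alone can never return `CONFIRMED`; the only path to that state runs through a successful reproduction.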
03

Data privacy

Your hunt data is never used to train models

We do not train on your target data, findings, reports, or agent activity logs. Raw HTTP scan traffic is not stored beyond the session request/response pair needed for evidence. Pro and Elite tiers include a no-log mode, which disables all server-side logging of HTTP payloads and prevents retention of any raw request or response bodies.

  • Zero training on your hunt data, findings, or scope configurations
  • No-log mode disables HTTP payload storage entirely (Pro and Elite)
  • 30-day automatic data purge for Free tier
  • 1-year retention on Pro/Elite — exportable and deletable on demand
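The tier rules above can be captured in a small policy table. This is an illustrative sketch assembled from the bullets, not a real configuration format: the `RETENTION` table and `should_store_payload` helper are assumed names.

```python
# Illustrative retention policy table based on the tiers described above.
RETENTION = {
    "free":  {"purge_days": 30,  "no_log_mode_available": False},
    "pro":   {"purge_days": 365, "no_log_mode_available": True},
    "elite": {"purge_days": 365, "no_log_mode_available": True},
}


def should_store_payload(tier: str, no_log_enabled: bool) -> bool:
    """No-log mode (Pro/Elite only) disables HTTP payload storage entirely."""
    available = RETENTION[tier]["no_log_mode_available"]
    return not (available and no_log_enabled)
```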
04

Audit trail

Every agent action is logged and reviewable

The agent activity log is a complete, unfiltered record of every action the AI took during your hunt: every tool call, every decision, every network request, and every validation attempt. Logs are not summarized, truncated, or filtered before display. What the agent did is what you see — including failed attempts and out-of-scope rejections.

  • Complete activity log includes tool calls, reasoning steps, and reproduction attempts
  • Activity log export is included for Pro and Elite tiers
  • Logs are never modified or summarized post-hoc
  • Session replay available to reconstruct the full hunt timeline
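An append-only log with verbatim replay, as described above, might look like this minimal in-memory sketch. `ActivityLog`, `record`, and `replay` are hypothetical names; the real platform's storage is not documented here.

```python
# Minimal sketch of an append-only activity log (in-memory, illustrative).
import json
import time


class ActivityLog:
    """Append-only: entries are never modified, summarized, or filtered."""

    def __init__(self):
        self._entries = []

    def record(self, kind: str, **detail) -> None:
        # Every tool call, decision, network request, and validation
        # attempt gets its own timestamped entry.
        self._entries.append({"ts": time.time(), "kind": kind, **detail})

    def replay(self) -> list:
        # Export entries verbatim, in order, to reconstruct the hunt
        # timeline -- including failures and out-of-scope rejections.
        return [json.loads(json.dumps(e)) for e in self._entries]
```

The design choice that matters is the absence of any update or delete path: what the agent did is exactly what replay shows.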

What we do not guarantee

We are explicit about residual risks rather than hiding them in footnotes.

Production hardening is your responsibility

RaSEC Hunt is a bug bounty co-pilot, not a compliance framework. Identity policy, secrets governance, deployment guardrails, and production hardening on your own infrastructure remain entirely your responsibility. We do not assess your internal security posture.

Coverage is not a guarantee

The agent covers the surface you give it within the timeframe of the hunt. It cannot guarantee it found all vulnerabilities in your program scope. A clean hunt result means the defined surface was covered — not that no vulnerabilities exist.

Target environment validation is yours

Verifying findings end-to-end against your own live environment before submission is your final acceptance bar. The agent validates deterministically against the session target, but your environment may differ from the test state at submission time.

Responsible disclosure for the platform itself

If you find a vulnerability in the RaSEC Hunt platform itself — not a bug you found through the platform on a third-party target, but a bug in our own application — we ask you to report it to us directly before public disclosure.

We follow a coordinated disclosure model. We respond to verified vulnerability reports within 5 business days and aim to patch critical findings within 30 days. We acknowledge researchers who report valid findings in our acknowledgement log.

To report a platform vulnerability: contact us via the contact page with subject line "Security: [brief description]". Include reproduction steps, your environment, and any evidence. Do not scan our infrastructure with automated tools without written consent.

We do not offer a public bounty program for the platform at this time, but we do provide acknowledgement and early access credits at our discretion for valid high-severity reports.