How We Audit

Seven specialist AI agents. Transparent scoring. On-chain verification. Every audit is reproducible and independently verifiable.

Current Prompt Template Hash

9fb1952f314f8591f96d835ff790953c55dafdedf79558e652a15557f2ae54e7

This hash identifies the current prompt version. It is included in the promptTemplateHash field of on-chain attestations when enabled. If we change our prompts, this hash changes, and you can see exactly when in the version history below.
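As a sketch of how such a composite hash can be reproduced: the function below hashes a set of prompt templates with SHA-256 in a deterministic order. The canonicalization (sorting, separator-free concatenation) and the template strings are assumptions for illustration; the production pipeline may canonicalize differently.

```python
import hashlib

def composite_prompt_hash(templates: list[str]) -> str:
    """SHA-256 over all agent prompt templates in a deterministic order.

    The canonicalization here (sorted, concatenated) is an assumption;
    the real pipeline may combine templates differently.
    """
    h = hashlib.sha256()
    for template in sorted(templates):  # order-independent input
        h.update(template.encode("utf-8"))
    return h.hexdigest()

# Hypothetical stand-in templates, not the real prompts.
digest = composite_prompt_hash(["reentrancy agent prompt", "access control agent prompt"])
print(digest)  # 64 hex characters
```

Because the inputs are sorted before hashing, the digest does not depend on the order templates are supplied in.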

Pending on-chain attestation

Agent Roster

Each specialist agent focuses on a distinct vulnerability class. Click to expand and see exactly what each agent looks for.

Deep Analysis Agents (Cipher Family)

Two additional specialized agents extend the Reentrancy Agent above with deeper access control and economic exploit analysis, running multi-iteration RALPH loops with independent peer review.

Security Score

Measures vulnerability risk from reentrancy, access control, economic, upgrade, and compliance findings.

Reentrancy · Access Control · Economic · Upgrade Safety · Standard Compliance

Security Score (1.0 - 10.0)

9.0 - 10.0
Excellent
No significant issues. Follows best practices.
7.0 - 8.9
Good
Minor issues. Generally well-written.
5.0 - 6.9
Moderate
Notable issues. Review before deployment.
3.0 - 4.9
Concerning
Significant vulnerabilities. Do not deploy without fixes.
1.0 - 2.9
Critical
Severe vulnerabilities. Unsafe for production.

Finding Penalties

Each target starts at 10. Findings subtract from this based on severity, weighted by confidence. Caps prevent any single severity tier from dominating.

Severity        Per Finding   Max Total
Critical        -2.5          -7.5
High            -1.0          -4.0
Medium          -0.3          -1.5
Low             -0.1          -0.5
Informational    0             0

Score Modifiers

Confidence
Findings below 0.4 confidence are excluded from scoring (they still appear in the report). Penalty = base penalty × confidence. Example: a critical at 0.9 confidence scores -2.5 × 0.9 = -2.25.
Agreement
When 2+ agents flag the same category on the same function, confidence is boosted by +0.15 (capped at 1.0). This rewards corroboration without double-counting the penalty itself.
Coverage
Fewer than 4 agents: -0.5 penalty + "Partial Audit" badge. Fewer than 3: no score issued.
Concentration
3+ findings in the same category: -0.5 "systemic risk" penalty per category.
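The penalty, cap, coverage, and concentration rules above can be sketched as a single scoring function. This is an illustrative model of the published rules, not the production implementation; the finding tuple shape and rounding are assumptions, and the agreement boost is assumed to have been applied to confidence values upstream.

```python
from collections import Counter

PENALTY = {"critical": 2.5, "high": 1.0, "medium": 0.3, "low": 0.1, "informational": 0.0}
CAP = {"critical": 7.5, "high": 4.0, "medium": 1.5, "low": 0.5, "informational": 0.0}

def security_score(findings, agents_reporting):
    """Score from a list of (severity, category, confidence) tuples.

    Sketch of the published rules; the production scorer may differ
    in rounding and edge cases.
    """
    if agents_reporting < 3:
        return None  # fewer than 3 agents: no score issued
    scored = [f for f in findings if f[2] >= 0.4]  # low-confidence excluded
    per_severity = Counter()
    for severity, _category, confidence in scored:
        per_severity[severity] += PENALTY[severity] * confidence
    total = sum(min(p, CAP[sev]) for sev, p in per_severity.items())  # per-tier caps
    by_category = Counter(cat for _s, cat, _c in scored)
    total += sum(0.5 for n in by_category.values() if n >= 3)  # systemic risk
    if agents_reporting < 4:
        total += 0.5  # partial-audit penalty
    return max(1.0, round(10.0 - total, 2))

# One critical at 0.9 confidence: 10 - (2.5 * 0.9) = 7.75
print(security_score([("critical", "reentrancy", 0.9)], agents_reporting=7))
```

Note how the cap binds: three criticals at full confidence would be -7.5, not -7.5 minus more, before the systemic-risk penalty is added.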

Efficiency Score

Measures gas optimization and code efficiency. Does not affect security risk assessment.

Gas Optimization

Efficiency Score (1.0 - 10.0)

9.0 - 10.0
Optimized
Minimal gas improvements available.
7.0 - 8.9
Good
Minor optimizations possible.
5.0 - 6.9
Fair
Notable gas savings available.
3.0 - 4.9
Inefficient
Significant optimization needed.
1.0 - 2.9
Very Inefficient
Major gas issues throughout.

Finding Penalties

Each target starts at 10. Findings subtract from this based on severity, weighted by confidence. Caps prevent any single severity tier from dominating.

Severity        Per Finding   Max Total
High            -0.5          -2.0
Medium          -0.2          -1.0
Low             -0.1          -0.5
Informational    0             0

Score Modifiers

Confidence
Findings below 0.4 confidence are excluded from scoring (they still appear in the report). Penalty = base penalty × confidence. Example: a high at 0.9 confidence scores -0.5 × 0.9 = -0.45.
Concentration
5+ findings in the same category: -0.3 "systemic risk" penalty per category.

Prompt Version History

When we update our agent prompts, the composite hash changes. Every on-chain attestation references the hash that was active at audit time, so you can verify which prompt version produced any given audit.

Version History

Version   Composite Hash        Date          Changes
v1.0      9fb1952f31...ae54e7   Mar 24, 2026  Initial specialist prompts
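Since the history table truncates hashes for display, checking a full digest against a displayed entry is a simple prefix/suffix comparison. A minimal sketch (the `matches_truncated` helper is hypothetical, not part of any published tooling):

```python
def matches_truncated(full_hash: str, truncated: str) -> bool:
    """Check a full hex digest against its 'prefix...suffix' display form."""
    prefix, _, suffix = truncated.partition("...")
    return full_hash.startswith(prefix) and full_hash.endswith(suffix)

full = "9fb1952f314f8591f96d835ff790953c55dafdedf79558e652a15557f2ae54e7"
print(matches_truncated(full, "9fb1952f31...ae54e7"))  # True
```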

Limitations

Automated audits are powerful but not omniscient. Understanding what we cannot detect is part of honest security.

  • Business logic correctness. We match vulnerability patterns, but we cannot verify the intent behind the code. A function that sends funds to the owner is a feature or a backdoor depending on context we do not have.
  • Off-chain components. Oracle infrastructure, keeper bots, admin key management, and multisig governance are outside the scope of on-chain source analysis.
  • Economic modeling. Tokenomics viability, liquidity depth, and market dynamics require simulation, not static analysis. We flag known economic attack patterns, not novel ones.
  • Cross-protocol composability. We analyze the submitted contract in isolation. Interactions with external protocols (flash loan sources, LP positions, governance tokens) are flagged when detected but not exhaustively traced.
  • When to get a manual audit. If your contract handles significant value (>$1M TVL), involves novel cryptographic primitives, or implements custom governance, automated analysis should supplement (not replace) a manual audit from a reputable firm.

Prompt Version Integrity

Every audit is tagged with the composite hash of the prompt templates used by all agents. This hash uniquely identifies the exact instructions each agent followed during the audit.

01 / Hash

When an audit completes, we compute a SHA-256 hash of all agent prompt templates. This hash is stored with the audit report and visible on the report page.

02 / Trace

Any change to agent prompts produces a new hash. The version history below shows exactly when prompts changed and what was updated.

03 / Attest

This system is designed for on-chain attestation via EAS on Base L2. When enabled, each audit's hash is included in a tamper-proof on-chain record.

Attestation schema (used when on-chain verification is enabled)

promptTemplateHash   SHA-256 of all agent prompts
contractAudited      Contract address + chain
score                0-10.0 security score
reportHash           SHA-256 of full report

EAS is deployed on Base at predeploy addresses (part of the OP Stack). Attestations cost less than $0.01 each and are indexed by EASScan. No custom smart contracts are needed. When attestation is configured, reports that include on-chain verification display a "Verified on Base" badge.
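Assembling the four schema fields before submission might look like the sketch below. The field names follow the schema table above; the contract address, chain label, and report text are placeholders, and the actual ABI encoding and submission via the EAS SDK on Base are out of scope here.

```python
import hashlib
import json

def build_attestation_payload(prompt_hash: str, contract: str, chain: str,
                              score: float, report_text: str) -> dict:
    """Collect the four schema fields for an audit attestation.

    Sketch only: encoding and on-chain submission via EAS happen
    in a later step not shown here.
    """
    return {
        "promptTemplateHash": prompt_hash,
        "contractAudited": f"{contract}@{chain}",
        "score": score,
        "reportHash": hashlib.sha256(report_text.encode("utf-8")).hexdigest(),
    }

payload = build_attestation_payload(
    "9fb1952f314f8591f96d835ff790953c55dafdedf79558e652a15557f2ae54e7",
    "0x0000000000000000000000000000000000000000",  # placeholder address
    "base",
    7.75,
    "full report text",
)
print(json.dumps(payload, indent=2))
```

Hashing the full report text, rather than storing it, is what keeps the attestation small enough to cost under a cent while still making the report tamper-evident.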

Audit a Contract

Every audit is produced by the methodology described above.