# ADR-0004: Exa is the recommended (not mandatory) triangulation primitive

## Status
Accepted

## Date
2026-05-07

## Context

The Reference Discipline (introduced in `standard.md` between Rule 5 and the Finding Packet template) requires every load-bearing claim to triangulate to ≥ 2 independent Tier-1/2/3 sources, and requires a `verified_via` / `verified_on` audit trail. To make that requirement *operational* rather than aspirational, the standard has to name a concrete tool that produces the audit trail.

The methodology for the stablecoin specimen v1.2 was developed and proven using [Exa](https://exa.ai)'s `web_search_exa` MCP: the eight highest-leverage claims were cross-referenced via Exa, all eight were confirmed, and five enrichments surfaced. The methodology document at `/methodology` describes the Exa workflow concretely and reproducibly.

The risk: naming Exa specifically locks the standard to one vendor's tool, in violation of Rule 4 (standards as vocabulary, not ceremony). Tool agnosticism is non-negotiable for a standard that wants to outlast any one search vendor.

The competing risk: declining to name a tool at all leaves the Reference Discipline as a vague exhortation. Adopters won't operationalise *"triangulate"* without a concrete first-step tool.

## Decision

Exa is named as the **recommended** (not mandatory) triangulation primitive. Equivalents are explicitly accepted under the same triangulation rules:

- Brave Search MCP
- Tavily
- Perplexity (with citations)
- Direct WebFetch chains
- Any other web-search MCP that returns canonical URLs and supports keyword + neural retrieval

The `references.json` schema's `verified_via` field accepts any string (`"exa"`, `"brave"`, `"tavily"`, `"manual"`, …). Adopters declare which tool they used; the discipline is the same.
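A minimal sketch of what a `references.json` entry might look like under this decision. Only `verified_via` and `verified_on` are named by the standard; the other field names here (`claim`, `sources`, `tier`) and all values are illustrative assumptions, not part of the schema as documented:

```json
{
  "claim": "example load-bearing claim",
  "sources": [
    { "url": "https://example.com/primary-source", "tier": 1 },
    { "url": "https://example.com/independent-confirmation", "tier": 2 }
  ],
  "verified_via": "exa",
  "verified_on": "2026-05-07"
}
```

An adopter using a different tool would change only the `verified_via` string (e.g. `"brave"`, `"tavily"`, `"manual"`); the triangulation discipline and the rest of the entry stay the same.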

## Alternatives Considered

### Mandate Exa
- Pros: Simplest possible adoption story.
- Cons: Vendor lock-in; the first time Exa changes pricing, accuracy, or terms, the standard is dragged along with it. Violates Rule 4.
- Rejected: tool agnosticism is non-negotiable.

### Name no tool ("use any web search")
- Pros: Maximum vendor neutrality.
- Cons: Vague exhortation; adopters don't operationalise. Methodology doc has nothing concrete to anchor against.
- Rejected: vague guidance does not produce the audit trail the schema requires.

### List 5–10 tools with comparison table
- Pros: Maximum information.
- Cons: Stale within months as vendors change features and pricing; the standard becomes a tool review rather than an operating discipline.
- Rejected: not the standard's job.

### Recommend a meta-search aggregator (e.g., LangChain web-search abstraction)
- Pros: One adapter, many backends.
- Cons: Adds a dependency layer; some adopters won't run a Python framework just to satisfy a documentation rule.
- Rejected: extra layer doesn't earn its place.

## Consequences

- Methodology document and `standard.md` Reference Discipline section both name Exa as recommended *and* explicitly cite equivalents. Adopters see "Exa or equivalent" everywhere.
- The 8-claim cross-reference replayed in `methodology.md` uses Exa queries verbatim. Adopters can reproduce the worked example with Exa or substitute their preferred tool against the same queries.
- `references.json`'s `verified_via` field stays string-typed and open. A mass migration to a different tool would require only re-running the queries and updating the field; the schema itself does not need to change.
- If Exa exits the market, the methodology survives. Adopters substitute another tool against the same query patterns and tier classifications.
- Naming a tool, even non-mandatorily, makes the Reference Discipline operational on day one rather than aspirational forever.
