Three agents. Six guardrails. Zero hallucination surface.
Most AI tools generate freely, then check what they made. We took the opposite approach.
The design system — tokens, components, patterns — is loaded into the AI's schema before generation starts. The AI cannot reference a color that isn't in the approved palette. It cannot use a component that isn't in the library. It cannot skip a required section like the ISI (Important Safety Information) or disclaimers.
This isn't prompt engineering. It's schema engineering. The Zod types that define the AI's output literally cannot represent an unapproved component. Compliance is structural, not aspirational.
Agent 1: converts natural language briefs into structured requirements
Model: Gemini 2.5 Flash (configurable via env var)
Method: generateObject with Zod schema validation — structured output, not free-text parsing
Prompt strategy: Design system context (patterns, markets, components) injected into system prompt — the AI sees ONLY what’s approved
Guardrails
Agent 2: generates two PageSpec variants from structured requirements
Model: Same configurable LLM
Method: generateObject with constrained Zod enum schema
Prompt strategy: Full component prop shapes, market-specific requirements, and required disclosures injected
Guardrails
Agent 3: applies natural language edits to existing page specs
Model: Same configurable LLM
Method: generateObject with single-variant constrained schema
Prompt strategy: Current page spec + edit instruction, with explicit compliance preservation rules
Guardrails
This is critical for trust: see the live audit chain at /evidence.