Keep your UI honest.
DesignLatch catches the gap between your design tokens and your live UI — automatically. It compares expected token values against actual computed styles, then surfaces every drift as a scored compliance report.
Your Figma spec says --color-primary: #2dd4a0. Your live site? Nobody checked. A hardcoded hex slipped in. Three sprints later, half your buttons are the wrong shade and nobody noticed until a stakeholder screenshot landed in Slack.
DesignLatch compares your token definitions against computed CSS captured from a real browser — rule by rule — and gives you a score. Run it in CI to catch drift before it ships.
Terminal workflow. Collects snapshots via Playwright, generates HTML reports, works entirely with files on disk.
AI agent workflow. Accepts inline JSON over stdio, returns JSON-only results. No files, no Playwright.
Write a .designlatch.json config listing selectors, rules, and expected token values — your compliance spec.
The CLI uses Playwright to open your live site and record every computed CSS value. MCP receives snapshots as inline JSON.
The engine diffs expected vs actual, scores compliance, and returns an HTML report (CLI) or structured JSON (MCP).
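For orientation, a minimal config might look like the sketch below. The field names are illustrative guesses, since this page doesn't show the schema; treat the starter file from npx designlatch init as the source of truth.

```json
{
  "url": "https://app.example.com",
  "selectors": [".btn-primary"],
  "rules": [
    {
      "selector": ".btn-primary",
      "property": "background-color",
      "token": "color-primary"
    }
  ]
}
```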
npm install -D @designlatch/cli
npx designlatch init
npx designlatch collect --config .designlatch.json --url https://app.example.com --out snapshots.json
npx designlatch scan --tokens tokens.json --snapshots snapshots.json --report-dir report
Design drift is invisible until it's embarrassing. Here's what DesignLatch fixes.
Manual visual QA is slow, inconsistent, and doesn't scale.
DesignLatch captures computed CSS from a real browser, diffs it against your token spec rule-by-rule, and returns a scored report.
Run it in CI to fail builds on drift. Objective, repeatable compliance.
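As one sketch of the CI idea, a GitHub Actions job could collect snapshots from a preview deployment and then scan them. Everything beyond the two designlatch commands is illustrative, and it assumes scan exits non-zero on drift; verify that against your CLI version.

```yaml
name: design-compliance
on: [pull_request]
jobs:
  designlatch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Playwright needs a browser binary in CI.
      - run: npx playwright install --with-deps chromium
      # STAGING_URL is a placeholder secret for your preview deployment.
      - run: npx designlatch collect --config .designlatch.json --url "${{ secrets.STAGING_URL }}" --out snapshots.json
      - run: npx designlatch scan --tokens tokens.json --snapshots snapshots.json --report-dir report
      # Keep the HTML report around for debugging failed runs.
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: designlatch-report
          path: report
```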
The same core engine runs under both adapters.
New to this? Start with the CLI. Install it, run npx designlatch init, then follow the commands below step by step.
Three packages, one engine. Both adapters share the same core.
@designlatch/core — Pure engine: validation logic, token resolution, style comparison, scoring, report utilities. No I/O.
@designlatch/app — Shared workflows: validateInputs() and scanCompliance(). Both adapters call these.
CLI and MCP — Sibling packages. The CLI adds Playwright collection and HTML reporting; MCP adds a stdio JSON interface for AI agents.
CLI — init, collect, validate, scan, and serve. MCP — a stdio server with two tools: validate_inputs and scan_compliance.

The CLI is the right choice when you want to run compliance checks locally, collect real snapshots from a live site, and generate readable HTML reports.
npm install -D @designlatch/cli
npx designlatch help

What it does: Scaffolds your project with a starter .designlatch.json config, an example tokens.json, and a snapshots template. Run once when setting up.
npx designlatch init
# Overwrite existing config:
npx designlatch init --force

Writes .designlatch.json and template files; --force overwrites existing ones for a fresh start. After init, open .designlatch.json and set the url field to your site's address.
What it does: Opens your site in a real browser via Playwright and captures the computed CSS for every selector in your config. Saves results as snapshots.json — the "actual reality" half of the comparison.
npx designlatch collect \
--config .designlatch.json \
--url https://app.example.com \
  --out designlatch/snapshots.json

The snapshots file feeds validate and scan later. Use --wait-for to delay capture until a specific selector appears.

What it does: Checks that your config, tokens, and snapshots are well-formed before a full scan. A dry run for your inputs — catches typos and missing fields early.
npx designlatch validate \
--tokens designlatch/tokens.json \
  --snapshots designlatch/snapshots.json

tokens.json — the expected design values to validate.
snapshots.json — actual computed styles from collect.
Config — defaults to .designlatch.json in the current directory.
URL override — replaces url in your config for this run only. Useful for staging URLs.

What it does: Runs the full compliance engine. Compares tokens vs snapshots rule-by-rule, scores the result, and writes report.json + index.html to your report directory.
npx designlatch scan \
--tokens designlatch/tokens.json \
--snapshots designlatch/snapshots.json \
  --report-dir report

tokens.json — the expected design token values.
snapshots.json — actual computed styles from your live site.
Config — defaults to .designlatch.json.
Report directory — defaults to report/. Creates it if needed.
--baseline — a previous report.json to compare against. Enables regression diffing.

After scan, report/report.json holds machine-readable data and report/index.html is the human-readable view. Use serve to open it instantly.
What it does: Starts a local web server and opens your HTML report in the browser. Run serve after a scan and it appears instantly.
npx designlatch serve --report-dir report
# Custom port, no auto-open:
npx designlatch serve \
--report-dir report \
--port 4321 \
  --no-open

--report-dir — the same directory you passed to scan --report-dir.
--port — defaults to 3000.
Host — defaults to localhost. Set to 0.0.0.0 to expose on your network.

The MCP adapter is for AI agents that hold your config, tokens, and snapshots as in-memory JSON. It runs as a stdio server — JSON in, JSON out. No files, no Playwright.
Not sure if you need MCP? If you're working in the terminal, use the CLI. MCP is for AI assistants orchestrating the workflow and holding data in memory.
JSON in, JSON out through standard input/output. No HTTP, no WebSockets.
Mirrors the CLI's validate and scan commands but accepts inline JSON instead of file paths.
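To wire the server into an MCP-capable client, you would register it in the client's server configuration. The shape below follows the common mcpServers convention; the package name @designlatch/mcp and the launch command are assumptions, since this page doesn't show how the server is started.

```json
{
  "mcpServers": {
    "designlatch": {
      "command": "npx",
      "args": ["-y", "@designlatch/mcp"]
    }
  }
}
```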
{
"config": { /* contents of .designlatch.json */ },
"tokens": { /* contents of tokens.json */ },
"snapshots": [ /* array from snapshots.json */ ],
"urlOverride": "https://app.example.com"
}

What it does: Validates config structure, surfaces lint warnings, and optionally parses tokens and snapshots — without running a full compliance scan.
config — the contents of .designlatch.json. Required.
urlOverride — replaces config.url for this call only.

What it does: Runs the full compliance engine inline. Returns normalized config, the full compliance report, and evaluation data as a JSON tool response.
Returns { config, report, evaluation } as JSON. Errors are serialized as tool failures — no exceptions thrown. urlOverride replaces config.url for this scan.

Both adapters share the same four input types — CLI reads from disk, MCP receives inline JSON.
.designlatch.json — Defines the page URL, selectors, rules, scoring thresholds, and token-backed property expectations.
tokens.json — Map of token definitions (kind, value, rgba). CLI reads from disk; MCP receives inline. The "what should it be" half.
snapshots.json — Array of selector captures: selector, URL, computed CSS, text. CLI collects them; MCP expects them from the agent.
URL override — Overrides config.url for a single run. Useful for pointing at staging without editing your config.
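Concretely, the "expected" and "actual" halves might look like the sketch below, with two separate files shown together for illustration. The field names (kind, value, rgba; selector, url, computed, text) follow the descriptions above, but the exact shapes are assumptions; use the templates from npx designlatch init as the real reference.

```json
{
  "tokens.json": {
    "color-primary": { "kind": "color", "value": "#2dd4a0", "rgba": [45, 212, 160, 1] }
  },
  "snapshots.json": [
    {
      "selector": ".btn-primary",
      "url": "https://app.example.com",
      "computed": { "background-color": "rgb(45, 212, 160)" },
      "text": "Save changes"
    }
  ]
}
```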
CLI and MCP produce different output formats — humans vs agents.
report/report.json — Machine-readable compliance data. Useful for CI pipelines and diff comparisons.
report/index.html — Human-readable report. Open it in a browser (or with serve) to see scores and failing rules.
Structured JSON with config, report, and evaluation. No files written.
Errors returned as tool failure responses — not thrown. The agent can inspect them gracefully.