DesignLatch
Design Compliance Toolkit

Keep your UI honest.

DesignLatch catches the gap between your design tokens and your live UI — automatically. It compares expected token values against actual computed styles, then surfaces every drift as a scored compliance report.

Problem 😬

Design tokens drift — silently

Your Figma spec says --color-primary: #2dd4a0. Your live site? Nobody checked. A hardcoded hex slipped in. Three sprints later, half your buttons are the wrong shade and nobody noticed until a stakeholder screenshot landed in Slack.

Solution

Automated, scored compliance

DesignLatch compares your token definitions against computed CSS captured from a real browser — rule by rule — and gives you a score. Run it in CI to catch drift before it ships.

CLI

@designlatch/cli

Terminal workflow. Collects snapshots via Playwright, generates HTML reports, works entirely with files on disk.

MCP

@designlatch/mcp

AI agent workflow. Accepts inline JSON over stdio, returns JSON-only results. No files, no Playwright.

Quick Start

3 steps
1

Define expectations

Write a .designlatch.json config listing selectors, rules, and expected token values — your compliance spec.

2

Capture reality

The CLI uses Playwright to open your live site and record every computed CSS value. MCP receives snapshots as inline JSON.

3

Compare & report

The engine diffs expected vs actual, scores compliance, and returns an HTML report (CLI) or structured JSON (MCP).

CLI — Install
bash
npm install -D @designlatch/cli
npx designlatch init
CLI — Full scan
bash
npx designlatch collect --config .designlatch.json --url https://app.example.com --out snapshots.json
npx designlatch scan --tokens tokens.json --snapshots snapshots.json --report-dir report
Docs · v1

DesignLatch docs

DesignLatch keeps your live UI honest against your design system. It compares expected token values with actual computed styles and gives you a scored compliance report — through a terminal CLI or an AI-agent MCP server.

CLI

@designlatch/cli

Terminal workflow — Playwright collection, file-based inputs, offline HTML reports. Great for CI and local dev.

MCP

@designlatch/mcp

AI agent workflow — accepts inline JSON over stdio, returns JSON-only results. No files or Playwright needed.

Why it exists

The problem & the solution

Design drift is invisible until it's embarrassing. Here's what DesignLatch fixes.

Problem 😬

Tokens drift silently

Your Figma says --color-primary: #2dd4a0. Your live site? Nobody checked. A hardcoded hex slipped in. Three sprints later, half your buttons are the wrong shade.

Manual visual QA is slow, inconsistent, and doesn't scale.

Solution

Automated compliance

DesignLatch captures computed CSS from a real browser, diffs it against your token spec rule-by-rule, and returns a scored report.

Run it in CI to fail builds on drift. Objective, repeatable compliance.

Concept

How it works in 3 steps

The same core engine runs under both adapters.

1

Define expectations

Write a .designlatch.json config listing selectors, rules, and expected token values — your compliance spec.

2

Capture reality

CLI uses Playwright to open your live site and record computed CSS. MCP receives snapshots as inline JSON from the agent.

3

Compare & report

The engine diffs expected vs actual, scores compliance, and returns an HTML report (CLI) or structured JSON (MCP).
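As a concrete sketch, a minimal .designlatch.json might look like the following. The field names below (selectors, scoring, the {color.primary} token reference) are illustrative assumptions, not a documented schema — check the file generated by npx designlatch init for the real structure:

```json
{
  "url": "https://app.example.com",
  "selectors": {
    ".btn-primary": {
      "background-color": "{color.primary}",
      "border-radius": "{radius.md}"
    }
  },
  "scoring": {
    "threshold": 0.9
  }
}
```

The init command scaffolds a starter version of this file, so you rarely need to write it from scratch.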

💡 New to this? Start with the CLI. Install it, run npx designlatch init, then follow the commands below step by step.

Architecture

Package layout

Three packages, one engine. Both adapters share the same core.

Core

@designlatch/core

Pure engine — validation logic, token resolution, style comparison, scoring, report utilities. No I/O.

App

@designlatch/app

Shared workflows: validateInputs() and scanCompliance(). Both adapters call these.

Adapters

CLI + MCP

Sibling packages. CLI adds Playwright + HTML. MCP adds a stdio JSON interface for AI agents.

Status

What's shipped in v1

CLI is live with all five commands: init, collect, validate, scan, and serve.
MCP v1 is live as a stdio server exposing exactly two tools: validate_inputs and scan_compliance.
Playwright collection & HTML reports are CLI-only — not part of the MCP surface in v1.
MCP accepts inline JSON and returns JSON-only results. No files are written.
▸ CLI Adapter
CLI

CLI adapter — file-based workflow

@designlatch/cli

The CLI is the right choice when you want to run compliance checks locally, collect real snapshots from a live site, and generate readable HTML reports.

Install

Get started

bash
npm install -D @designlatch/cli
npx designlatch help
5 commands

Full CLI flow

init → Scaffold project files
collect → Capture live computed styles
validate → Check inputs are well-formed
scan → Run the compliance engine
serve → Open the HTML report
CLI · 1 / 5

init

first run

What it does: Scaffolds your project with a starter .designlatch.json config, an example tokens.json, and a snapshots template. Run once when setting up.

bash
npx designlatch init
# Overwrite existing config:
npx designlatch init --force
Flags
--force: Overwrites existing .designlatch.json and template files. Use for a fresh start.
💡 After init, open .designlatch.json and set the url field to your site's address.

CLI · 2 / 5

collect

requires Playwright

What it does: Opens your site in a real browser via Playwright and captures the computed CSS for every selector in your config. Saves results as snapshots.json — the "actual reality" half of the comparison.

bash — typical collect
npx designlatch collect \
  --config  .designlatch.json \
  --url     https://app.example.com \
  --out     designlatch/snapshots.json
Flags
--config*: Path to your config file. Defines which selectors and properties to capture.
--url*: The URL of the page to open. Playwright navigates here and captures computed styles.
--out*: Where to write the snapshots JSON. Pass this path to validate and scan later.
--wait-for: CSS selector to wait for before capturing — ensures the page is fully rendered.
--timeout-ms: Max milliseconds to wait for the --wait-for selector.
--headed: Run the browser in visible (non-headless) mode. Useful for debugging capture issues.
CLI · 3 / 5

validate

optional but recommended

What it does: Checks that your config, tokens, and snapshots are well-formed before a full scan. A dry run for your inputs — catches typos and missing fields early.

bash
npx designlatch validate \
  --tokens    designlatch/tokens.json \
  --snapshots designlatch/snapshots.json
Flags
--tokens*: Path to your tokens.json — the expected design values to validate.
--snapshots*: Path to your snapshots.json — actual computed styles from collect.
--config: Config file path. Defaults to .designlatch.json in the current directory.
--url: Overrides the url in your config for this run only. Useful for staging URLs.
CLI · 4 / 5

scan

main command

What it does: Runs the full compliance engine. Compares tokens vs snapshots rule-by-rule, scores the result, and writes report.json + index.html to your report directory.

bash — full scan
npx designlatch scan \
  --tokens     designlatch/tokens.json \
  --snapshots  designlatch/snapshots.json \
  --report-dir report
Flags
--tokens*: Path to your tokens.json — the expected design token values.
--snapshots*: Path to your snapshots.json — actual computed styles from your live site.
--config: Config file path. Defaults to .designlatch.json.
--report-dir: Directory to write output reports. Defaults to report/. Creates it if needed.
--url: Overrides the page URL in config for this scan only.
--baseline: Path to a previous report.json to compare against. Enables regression diffing.
--diff-out: Where to write the diff JSON when using --baseline.
📁 After scan, report/report.json holds machine-readable data and report/index.html is the human-readable view. Use serve to open it instantly.
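Because report/report.json is machine-readable, you can gate CI on it. The sketch below is self-contained (it writes its own stand-in report) and assumes a top-level fractional "score" field, which is a guess at the report shape, not a documented schema. Verify the field names against your own report.json before wiring this into a pipeline:

```shell
# Stand-in report so the sketch runs on its own.
# NOTE: the "score" field is a hypothetical shape, not a documented schema.
mkdir -p report
cat > report/report.json <<'EOF'
{ "score": 0.92 }
EOF

threshold=90
# Convert the fractional score to a whole-number percentage for comparison.
score_pct=$(python3 -c "import json; print(int(json.load(open('report/report.json'))['score'] * 100))")

if [ "$score_pct" -lt "$threshold" ]; then
  echo "Compliance ${score_pct}% is below ${threshold}%, failing the build"
  exit 1
fi
echo "Compliance ${score_pct}%, OK"
```

In a real pipeline, the stand-in heredoc goes away and the scan command produces report/report.json for you.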

CLI · 5 / 5

serve

local preview

What it does: Starts a local web server and opens your HTML report in the browser. Run serve after a scan and it appears instantly.

bash
npx designlatch serve --report-dir report

# Custom port, no auto-open:
npx designlatch serve \
  --report-dir report \
  --port       4321 \
  --no-open
Flags
--report-dir*: Directory containing your report files. Same value you passed to scan --report-dir.
--port: Port for the local server. Defaults to 3000.
--host: Hostname to bind to. Defaults to localhost. Set to 0.0.0.0 to expose on your network.
--file: Serve a specific HTML file instead of the report directory.
--no-open: Don't auto-open the browser. Useful in CI or headless environments.
▸ MCP Adapter
MCP

MCP adapter — AI agent workflow

@designlatch/mcp

The MCP adapter is for AI agents that hold your config, tokens, and snapshots as in-memory JSON. It runs as a stdio server — JSON in, JSON out. No files, no Playwright.

🤖 Not sure if you need MCP? If you're working in the terminal, use the CLI. MCP is for AI assistants orchestrating the workflow and holding data in memory.

Transport

stdio only (v1)

JSON in, JSON out through standard input/output. No HTTP, no WebSockets.

2 tools in v1

validate + scan

Mirrors the CLI's validate and scan commands but accepts inline JSON instead of file paths.

JSON — example payload
{
  "config":      { /* contents of .designlatch.json */ },
  "tokens":      { /* contents of tokens.json */ },
  "snapshots":   [ /* array from snapshots.json */ ],
  "urlOverride": "https://app.example.com"
}
MCP · Tool 1 / 2

validate_inputs

config check

What it does: Validates config structure, surfaces lint warnings, and optionally parses tokens and snapshots — without running a full compliance scan.

validate_inputs
Returns a normalized validation result as JSON. Errors are returned as tool failures.
config*: The full config object (same structure as .designlatch.json). Required.
tokens: Token definitions object. Optional — omit to validate only config structure.
snapshots: Array of snapshot objects. Optional.
urlOverride: String URL that overrides config.url for this call only.
MCP · Tool 2 / 2

scan_compliance

full scan

What it does: Runs the full compliance engine inline. Returns normalized config, the full compliance report, and evaluation data as a JSON tool response.

scan_compliance
Returns { config, report, evaluation } as JSON. Errors are serialized as tool failures — no exceptions thrown.
config*: Full config object. Required.
tokens*: Token definitions — the expected design values. Required.
snapshots*: Array of snapshot objects — actual computed styles. Required.
urlOverride: Optional URL string that overrides config.url for this scan.
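On the wire, MCP tool calls use the standard JSON-RPC tools/call envelope over stdio. A sketch of a scan_compliance request follows; the envelope is generic MCP protocol rather than anything DesignLatch-specific, and the empty objects stand in for your real config, tokens, and snapshots:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scan_compliance",
    "arguments": {
      "config": { },
      "tokens": { },
      "snapshots": [ ],
      "urlOverride": "https://staging.example.com"
    }
  }
}
```

Most MCP client libraries and AI agents build this envelope for you; you only supply the arguments object.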
Reference

Inputs

Both adapters share the same four input types — CLI reads from disk, MCP receives inline JSON.

config

.designlatch.json

Defines page URL, selectors, rules, scoring thresholds, and token-backed property expectations.

tokens

tokens.json

Map of token definitions (kind, value, rgba). CLI reads from disk; MCP receives inline. The "what should it be" half.

snapshots

snapshots.json

Array of selector captures: selector, URL, computed CSS, text. CLI collects them; MCP expects them from the agent.

urlOverride

Optional URL string

Overrides config.url for a single run. Useful for pointing at staging without editing your config.
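To make the shapes above concrete, here is a hypothetical sketch of the tokens and snapshots inputs. The field names follow the descriptions in this section (kind/value/rgba for tokens; selector, URL, computed CSS, and text for snapshots), but the exact keys are assumptions, so compare against the files produced by init and collect:

```json
{
  "tokens": {
    "color.primary": { "kind": "color", "value": "#2dd4a0", "rgba": [45, 212, 160, 1] }
  },
  "snapshots": [
    {
      "selector": ".btn-primary",
      "url": "https://app.example.com",
      "css": { "background-color": "rgb(45, 212, 160)" },
      "text": "Save changes"
    }
  ]
}
```

In practice these live in two separate files (tokens.json and snapshots.json); they are combined into one object here only for illustration.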

Reference

Outputs

CLI and MCP produce different output formats — humans vs agents.

CLI · JSON

report/report.json

Machine-readable compliance data. Useful for CI pipelines and diff comparisons.

CLI · HTML

report/index.html

Human-readable report. Open in a browser (or with serve) to see scores and failing rules.

MCP · JSON

Tool response objects

Structured JSON with config, report, and evaluation. No files written.

MCP · Errors

Serialized tool failures

Errors returned as tool failure responses — not thrown. The agent can inspect them gracefully.