SchemaStruct Documentation
SchemaStruct is a warehouse schema workflow for analytics, data platform, and warehouse teams. It gives you one place to profile raw tables, model dimensional structures in a shared DSL, review them against live warehouse state, hand off deployable artifacts for GitHub, dbt, or dashboard delivery, publish data contracts, and keep living docs aligned with the same model.
Platform Workflow
Profile
Start from Data Lab to ingest and profile source files, import SQL or dbt metadata, or connect to a live Snowflake, Databricks, or PostgreSQL warehouse for introspection.
Model
Generate dimensional models, trace column lineage from Bronze through Gold, compare fact-grain candidates, and review star-schema drafts on the canvas before handoff.
Sync
Push reviewed DSL changes back to the warehouse as DDL using the pre-flight dry-run flow. Background drift detection flags when production diverges from your design so reconciliation happens before deployment, not after.
Deploy
Export warehouse-ready SQL, sync migration artifacts to GitHub, load sample data into connected tables for integration testing, and prepare dashboard-ready handoffs without rebuilding the metric layer.
Publish
Publish a data dictionary, publish a contract page with live validation health, launch the dedicated dashboard workspace, and flow CI results back into the same source of truth.
Quick Start
Start from the right source
Open schemastruct.com/editor and choose the fastest path for your workflow: profile source files in Data Lab, model from scratch, connect Snowflake or Databricks, paste SQL, or import dbt metadata.
Design the change visually
Modify the DSL, edit directly on the diagram, or use AI to scaffold the first version. You can also convert source models into star schemas and refine fact grain or dimensions visually.
Review diff and deployment output
Use Diff, validation, and version history to inspect what changed. Then hand off SQL, dbt metadata, GitHub sync artifacts, or a dashboard-ready schema into the dedicated dashboard workspace.
Publish docs, contracts, and dashboard deliverables
Click Publish Docs for a shareable dictionary, use the Contracts panel to publish contract pages, and open Build Dashboard from a dashboard-ready handoff when you need KPI tiles, chart specs, or BI export bundles.
Schema DSL Quick Reference
SchemaStruct's DSL is the shared language between Data Lab, the editor, warehouse export, and documentation. You can write it directly in the editor, generate it from Data Lab, or refine it after visual edits.
database analytics
schema analytics.silver
schema analytics.gold
enum billing_status {
draft
active
cancelled
}
table analytics.silver.subscriptions {
subscription_id integer [pk]
account_id integer [not null]
plan_id integer [not null]
mrr_amount decimal [note: "Monthly recurring revenue at the subscription grain"]
loaded_at timestamp [note: "Audit column added in Silver"]
discount_amount decimal [note: "Derived from: list_price - net_price"]
}
table analytics.gold.fact_subscription {
subscription_key bigint [pk]
account_key bigint [not null]
plan_key bigint [not null]
mrr_amount decimal
}
ref: analytics.silver.subscriptions.account_id > analytics.silver.accounts.account_id
ref: analytics.gold.fact_subscription.account_key > analytics.gold.dim_account.account_key

Use database and schema declarations when you want SQL export to target the correct warehouse namespace. This is especially useful for Snowflake, Databricks, and multi-schema delivery workflows where generated DDL should preserve the intended database and schema names.
| Element | Syntax | Use |
|---|---|---|
| Database | database analytics | Declares a warehouse database name available to the model and SQL export |
| Schema | schema analytics.silver | Declares a schema or namespace so exported SQL can keep the correct target path |
| Table | table schema.table_name { ... } | Define a modeled table, optionally namespaced by layer or schema |
| Column | column_name type [constraints] | Declare warehouse-friendly names, types, and annotations |
| Constraints | [pk], [not null], [unique], [default: ...] | Describe key, nullability, uniqueness, and defaults inline |
| Structured note | [note: "text"] | Show hover documentation in the diagram and preserve business logic context |
| Relationship | ref: orders.customer_id > customers.id | Connect tables with explicit join direction and cardinality meaning |
| Enum | enum status { active inactive } | Model controlled value sets shared across columns |
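Pulling these elements together, the following sketch combines an enum, constraints, a structured note, and a relationship in one small model. The plans/subscriptions names are illustrative, and using the enum name as a column type is an assumption based on the "shared across columns" behavior described above:

```
enum billing_status {
  active
  cancelled
}

table plans {
  plan_id integer [pk]
  plan_name varchar [not null, unique]
}

table subscriptions {
  subscription_id integer [pk]
  plan_id integer [not null]
  status billing_status [default: 'active'] [note: "Controlled by the shared billing_status enum"]
}

// Many-to-One: many subscriptions reference one plan
ref: subscriptions.plan_id > plans.plan_id
```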
A practical authoring pattern is: use Data Lab to curate Bronze into Silver, review the generated Silver DSL, then let Gold modeling or direct editor work continue from that cleaned schema. If you already know the target warehouse namespace, declare the database and schema up front so export SQL lands in the right place without manual edits.
Tables
Tables are the core building block. Define them using the table keyword followed by the table name and curly braces:
table users {
id integer [pk]
email varchar [unique, not null]
name varchar
created_at timestamp
}

Table names should be lowercase with underscores. Each line inside the braces defines a column. You can also create tables visually by double-clicking on the diagram canvas.
Columns & Types
Each column is defined as: column_name type [constraints]
Supported Types
Numeric
integer int bigint smallint serial float double decimal numeric

Text
varchar text char

Date/Time
timestamp datetime date time

Other
boolean bool json jsonb uuid bytea

Constraints
Add constraints inside square brackets after the column type:
table products {
id integer [pk] // Primary Key
sku varchar [unique] // Unique constraint
name varchar [not null] // NOT NULL
price decimal [not null, default: 0.00] // Default value
}

| Constraint | Syntax | Description |
|---|---|---|
| Primary Key | [pk] | Marks column as the primary key |
| Not Null | [not null] | Column cannot be NULL |
| Unique | [unique] | Column values must be unique |
| Default | [default: value] | Sets a default value |
| Multiple | [pk, not null] | Combine constraints with commas |
You can also toggle constraints visually by clicking the PK or UQ badges on the diagram.
Notes & Comments
SchemaStruct supports two ways to annotate columns:
Comments (editor only)
Use // or # for inline comments. These are visible only in the code editor and are not shown in the diagram.
table users {
id integer [pk]
email varchar [not null] // Primary contact email
role varchar # admin, editor, viewer
}

Notes (shown in diagram on hover)
Use [note: "..."] to add structured notes. These appear as a popup tooltip when you hover over the column in the diagram.
table users {
id integer [pk]
username varchar [not null] [note: "User's unique login name"]
email varchar [unique] [note: "Must be verified before login"]
status varchar [default: 'active'] [note: "active, suspended, deleted"]
}

| Syntax | Visible In | Purpose |
|---|---|---|
| // text or # text | Editor only | Code comments, dev notes |
| [note: "text"] | Editor + Diagram (hover) | Column descriptions, business logic |
Relationships
Define foreign key relationships using the ref: keyword, or drag from one column handle to another on the diagram:
// Many-to-One: many orders reference one user
ref: orders.user_id > users.id

// One-to-Many: one user has many orders
ref: users.id < orders.user_id

// One-to-One
ref: profiles.user_id - users.id

// Many-to-Many
ref: students.id <> courses.id
| Operator | Cardinality | Meaning |
|---|---|---|
| > | Many-to-One | Left table has FK referencing right table |
| < | One-to-Many | Right table has FK referencing left table |
| - | One-to-One | Unique foreign key relationship |
| <> | Many-to-Many | Junction table relationship |
You can also click on an edge in the diagram to change its type or delete the relationship.
Inline Diagram Editing
Edit your schema directly on the diagram without touching the DSL code. All visual edits automatically sync back to the DSL editor.
- Rename table — Double-click the table header to edit the name
- Edit column — Double-click any column to edit its name and data type
- Add column — Click the + Add Column button at the bottom of any table
- Delete column — Hover over a column and click the ✕ button
- Toggle constraints — Click PK or UQ badges to toggle constraints
- Create table — Double-click anywhere on the empty canvas
When editing a column inline, you can Tab between the name and type fields. Press Enter to save or Escape to cancel.
Column Drag-to-Reorder
Rearrange columns within a table by dragging them up or down:
- Hover over a column to reveal the drag handle (⠿) on the left side
- Click and drag the handle to move the column to a new position
- A blue indicator line shows where the column will be inserted
- The DSL code updates automatically to match the new order
Note: Only the left-side grip handle initiates column reorder — dragging from the table header moves the entire table node.
Table Colors & Domain Groups
Visually organize complex schemas by color-coding tables by domain. Right-click a table and select Color / Group:
Domain Groups
Assign tables to predefined groups like Auth, Orders, Payments, or Products. Each group gets:
- A colored header matching the group
- A group label badge on the table header (e.g. AUTH, ORDERS)
Custom Groups
Type any custom group name (e.g. "Shipping", "Notifications") and click Add. A color is automatically assigned based on the name.
Color Only
If you just want a colored header without a label, pick from the 11-color palette in the "Color Only" section. Use the Reset to Default button to remove any color or group.
Sticky Notes
Add floating notes to annotate your diagram. Click Add Note in the diagram toolbar to create a sticky note. Each note supports:
- 5 colors — Click the color dots to switch
- Inline editing — Click to type, press outside to save
- Drag to reposition — Move notes anywhere on the canvas
- Delete — Click the ✕ button to remove
Search & Zoom to Table
Quickly find and navigate to any table in large schemas:
- Click Search Table in the diagram toolbar, or press Cmd+F
- Type a table name — results filter in real-time
- Use ↑ ↓ arrow keys to navigate results
- Press Enter to zoom the camera to the selected table
- Press Escape to close the search panel
Undo / Redo
Full undo/redo history for all schema changes — whether made in the DSL editor or via inline diagram editing:
- Cmd+Z — Undo the last change
- Cmd+Shift+Z — Redo
- History stores up to 50 steps
- Keystroke changes are debounced (500ms) to group rapid typing
- Discrete actions (add/delete/rename) are saved immediately
A toast notification appears confirming the undo/redo action.
Schema Diff
Compare your current schema against the last saved version or an introspected warehouse state. Click the Diff button in the diagram toolbar to open a side-by-side, color-coded review view before you ship changes:
- Green lines — Added
- Red lines — Removed
- Line numbers for both old and new versions
- Summary showing total additions and removals
Snap to Grid
Enable snap-to-grid for pixel-perfect, professional-looking diagrams. When enabled, table nodes snap to a 20px grid that aligns with the background dots.
- Open the Layout dropdown in the toolbar
- Toggle Snap to Grid on or off
- Setting is persisted across sessions
Data Lab
Data Lab is the fastest way to turn raw files into modeling-ready schemas. Import CSV, JSON, or Parquet data into Bronze, profile the source automatically, curate the Silver layer, then use Gold to preview dimensional modeling outcomes.
Ingestion
- Upload multiple related CSV, JSON, or Parquet files into a single staging workspace
- Files are staged as Bronze tables with their original shape preserved
Data profiling
SchemaStruct automatically profiles each ingested source table so you understand data shape before modeling begins:
- Column type detection — Infers integer, decimal, date, timestamp, boolean, and string types from raw values
- Distribution analysis — Null rates, distinct counts, and value range summaries per column
- Primary key candidates — Identifies columns with high cardinality and no nulls as likely PK candidates
- Relation candidates — Detects columns that appear to reference other tables based on naming patterns and value overlap
- Industry classification — Infers the likely domain (e-commerce, SaaS, fintech, healthcare, logistics, etc.) from table and column names to seed KPI suggestions downstream
Silver curation
- Rename columns into warehouse-friendly names and cast loose raw types into clean analytics types
- Add derived columns and audit fields before sending the schema to the editor
- Run guided checks like duplicate keys, null rates, and distinct summaries
- Preview and accept candidate joins before sending Silver schemas into the editor
Gold modeling & handoff
- Review fact grain, dimensions, and aggregate mart recommendations from the curated Silver schema
- Promote query results into new modeling tables when needed
- Send only the Silver schema into the editor, or build a full Bronze-Silver-Gold medallion draft in one step
- Mark the workspace as dashboard-ready to unlock the dedicated Dashboard Builder
Silver Designer
The Silver layer is where raw source shape becomes analytics-ready structure. Use it to standardize names, cast types, remove noise, add audit columns, and create derived fields before dimensional modeling begins.
- Rename columns into warehouse-friendly names like order_date or account_key
- Cast types from loose raw inputs into dates, timestamps, numerics, and booleans
- Add derived columns such as gross_margin, discount_amount, or seats_available
- Apply audit suggestions like loaded_at, updated_at, source_name, and batch_id
- Send only the curated Silver schema into the editor when you want to continue relational design first
Derived Silver columns are preserved in the editor with structured notes so the DSL still tells you how each field was calculated.
Gold Modeling
Gold modeling runs on the curated Silver schema, not the raw upload. SchemaStruct uses the accepted Silver joins, cleaned types, and derived columns to recommend fact grain, dimensions, and aggregate marts.
- Preview the primary fact candidate and selected dimensions
- Review Gold quality checks like additive measure coverage and grain mismatch risks
- See KPI suggestions and aggregate mart opportunities before applying the model
- Build a full medallion draft or send only the Silver schema into the editor, depending on the workflow stage
Column Lineage & Dependency Visualization
SchemaStruct tracks how curated Silver columns flow into Gold facts and dimensions, and renders that lineage visually on the canvas so teams can trace data paths before any pipeline is built.
Canvas lineage visualization
- Animated dashed lineage edges distinguish data flow from foreign-key relationships on the canvas
- Dependency-flow overlays weight connection thickness based on inbound and outbound relation counts — highly connected "hub" tables are visually prominent
- Toggle the lineage overlay from the Layout menu to show or hide dependency flow edges without changing the schema
- Pulsing indicators on high-connectivity tables make critical data paths easy to spot in large schemas
Column-level lineage
- See how a Silver expression like total_amount = qty * unit_price propagates into Gold measure columns
- Derivation notes from Silver are preserved in the editor DSL as structured comments, keeping the lineage path always traceable
- Use the medallion lineage view to understand Bronze → Silver → Gold column-level dependencies before building pipelines
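In DSL terms, a preserved derivation typically surfaces as a structured note, following the same "Derived from:" pattern used in the Silver example earlier. This is an illustrative sketch with hypothetical order_items and fact_order tables:

```
table analytics.silver.order_items {
  order_item_id integer [pk]
  qty integer [not null]
  unit_price decimal [not null]
  total_amount decimal [note: "Derived from: qty * unit_price"]
}

table analytics.gold.fact_order {
  order_key bigint [pk]
  total_amount decimal [note: "Measure sourced from silver.order_items.total_amount"]
}
```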
Impact analysis
Before dropping a column or renaming a table, use Impact Analysis from the table context menu to preview downstream breakage:
- Shows all tables and columns that reference the affected object
- Severity is labeled: breaking (FK references), affected (downstream marts or views), warning (derivation-linked columns)
- Helps teams estimate the blast radius of a schema change before review
Query Path Finder
Select any two tables and use Find Query Path to trace the multi-hop join chain between them. Useful for understanding indirect relationships in large or layered schemas.
Normalization Advisor
In the editor overview panel, SchemaStruct can flag likely 1NF, 2NF, and 3NF issues and suggest how to decompose the model more cleanly.
- Spot repeating groups and array-like fields that should be separated
- Detect columns that depend on only part of a composite key
- Highlight transitive dependencies that belong in their own lookup or parent table
- Apply one-click normalization fixes for supported cases directly in the editor
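As an example of the decompositions the advisor suggests, a transitive dependency such as customer → city → country can be split into its own lookup table. The table names here are hypothetical, written in the same DSL:

```
// Before: country_name depends on city_id, not on the customer key (3NF violation)
table customers_flat {
  customer_id integer [pk]
  city_id integer [not null]
  country_name varchar
}

// After: the transitive dependency moves into a lookup table
table customers {
  customer_id integer [pk]
  city_id integer [not null]
}

table cities {
  city_id integer [pk]
  country_name varchar [not null]
}

ref: customers.city_id > cities.city_id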
AI Schema Generation
Click the ✨ AI Generate button in the toolbar to describe your project in plain English. SchemaStruct uses built-in AI to generate a complete schema draft you can refine visually.
Quick-start Templates
- E-commerce — Products, orders, customers, payments
- Blog / CMS — Posts, authors, categories, comments
- SaaS — Users, organizations, subscriptions, billing
- Social Media — Profiles, posts, followers, messages
- Healthcare — Patients, doctors, appointments, records
- Education — Students, courses, enrollments, grades
Or type an analytics-focused prompt like: "A Snowflake customer 360 warehouse with staging tables, conformed dimensions, fact orders, and gold marts"
Star Schema Builder
Click Star Schema in the workspace toolbar to turn a source model into a dimensional draft. SchemaStruct scores fact candidates, suggests dimensions, and lets you apply a star schema with an auto-arranged fact-and-dimension layout.
- Choose a business process and review fact-grain candidates
- Select, deselect, or bulk-select dimensions
- Mark eligible dimensions as SCD Type 2 where history matters
- Review generated DSL and dbt model previews before applying
- Use AI Assist to recommend the best grain and dimension set for a business goal
- Apply or revert the star schema directly from the same toolbar action
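The applied result resembles the Gold layer from the DSL quick reference: a central fact at a declared grain, with surrogate keys pointing at each selected dimension. This sketch reuses the fact_subscription and dim_account names from that example; dim_plan and its columns are illustrative additions:

```
table analytics.gold.fact_subscription {
  subscription_key bigint [pk]
  account_key bigint [not null]
  plan_key bigint [not null]
  mrr_amount decimal [note: "Monthly recurring revenue at the subscription grain"]
}

table analytics.gold.dim_account {
  account_key bigint [pk]
  account_name varchar
}

table analytics.gold.dim_plan {
  plan_key bigint [pk]
  plan_name varchar
}

ref: analytics.gold.fact_subscription.account_key > analytics.gold.dim_account.account_key
ref: analytics.gold.fact_subscription.plan_key > analytics.gold.dim_plan.plan_key
```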
Dashboard Builder
SchemaStruct includes a dedicated dashboard workspace at /dashboard. It is designed for dashboard-ready dimensional models, reporting views, and Data Lab handoffs that already have the fact, date, and filtering structure needed for KPI delivery.
You do not open the dashboard builder from the canvas toolbar by default. Instead, finish a dimensional-model, reporting-view, or Data Lab handoff in the editor, then use the handoff summary action to launch the dedicated dashboard workspace with the current schema context attached.
- Generate KPI starters and chart specs from the reviewed warehouse model
- Run live preview queries against the active warehouse connection when available
- Regenerate individual widgets without rebuilding the entire dashboard story
- Adjust theme variant, accent color, density, radius, KPI card style, and brand header fields
- Export SQL bundles, PowerPoint decks, Power BI projects, Tableau workbooks, Streamlit apps, React components, Metabase JSON, or Markdown docs
- Save dashboard snapshots and detect schema drift against the source model
This keeps dashboard delivery tied to the same schema review flow as SQL, dbt, contracts, and docs instead of forcing BI teams to rebuild metric definitions after the model is approved.
AI Database Copilot
Click the ⚡ Copilot button in the toolbar to open an interactive AI sidebar. The Copilot is aware of your entire schema context and can:
- Write complex SQL queries based on your tables and relationships
- Answer architectural questions
- Suggest DSL modifications to add new features
- Explain dimensional tradeoffs like fact grain, conformed dimensions, and SCD strategy
SQL Import
Click Import SQL in the toolbar and paste any SQL DDL to reverse-engineer it into a visual diagram. This is useful when you want to bring an existing schema into the design, diff, deploy, document workflow. Supported databases:
- PostgreSQL
- MySQL
- SQLite
- SQL Server
The importer understands CREATE TABLE, PRIMARY KEY, FOREIGN KEY, REFERENCES, NOT NULL, UNIQUE, and DEFAULT clauses.
Live Warehouse Connection
Click 🔗 Connect DB to introspect a live database or warehouse securely. SchemaStruct connects temporarily, reads metadata, and converts your live tables, constraints, and relationships into visual DSL. Once connected, the session is reused for bidirectional sync, live query preview, and direct sample data load.
Supported Databases
| Database | Introspect | Push DDL | Live Query Preview |
|---|---|---|---|
| PostgreSQL | ✓ | ✓ | ✓ |
| MySQL | ✓ | ✓ | ✓ |
| Snowflake | ✓ | ✓ | ✓ |
| Databricks | ✓ | ✓ | ✓ |
| Iceberg (REST) | ✓ | — | — |
Connection profiles
Signed-in users can save and restore full connection profiles. The profile picker opens with empty credential fields and only hydrates after a saved profile is selected, so credentials are never pre-filled in the UI. A successful live connection automatically refreshes the saved profile so returning users do not need to re-enter credentials.
Once connected, use Warehouse Sync to push reviewed DDL back, and Drift Detection to monitor production state continuously.
Live Warehouse Sync
SchemaStruct supports bidirectional warehouse sync — pull live schema state in, and push reviewed DSL changes back to your warehouse as DDL. This closes the design-to-deploy loop without a separate migration tool.
Pulling schema from the warehouse
Connect to a live warehouse (see Live Warehouse Connection) and SchemaStruct converts your existing tables, constraints, and relationships into editable DSL. You can then model on top of the introspected schema, review changes with diff, and sync back.
Pushing DDL back to the warehouse
After reviewing your DSL changes, click Sync to DB from the drift panel or the connection toolbar entry. SchemaStruct generates the DDL needed to bring the warehouse in line with your design and presents a pre-flight dry-run before any changes are applied:
- Review the exact DDL grouped by table with operation labels (CREATE TABLE, ALTER TABLE … ADD COLUMN, DROP COLUMN, etc.)
- Severity indicators flag safe operations, warnings, and breaking changes (e.g. DROP COLUMN, DROP TABLE)
- Confirm the full DDL set or apply selected blocks after review
- Execution results are shown inline so you can verify each step succeeded before proceeding
Star schema target-schema override
For PostgreSQL, Snowflake, and Databricks, the sync flow supports an optional target-schema override so dimensional marts can be previewed and created in a dedicated schema (e.g. analytics.gold) instead of the active connection default. The dry-run preview prepends an ensure-schema statement before the selected DDL.
Supported targets
| Database | Push DDL | Schema Override |
|---|---|---|
| PostgreSQL | ✓ | ✓ |
| MySQL | ✓ | — |
| Snowflake | ✓ | ✓ |
| Databricks | ✓ | ✓ |
Schema Drift Detection
When you have an active warehouse connection, SchemaStruct continuously monitors for drift — any difference between the live warehouse schema and the DSL in your editor. Drift is detected automatically in the background so you always know when production has diverged from your design.
How drift detection works
- The drift watcher polls the connected warehouse every 5 minutes by default
- When drift is detected, a notification appears in the editor toolbar with a per-table grouped summary
- Each change is labeled by operation: added column, dropped column, type change, new table, removed table
- Operations are tagged with severity: safe, warning, or breaking
- The last-checked timestamp is shown in the panel header (e.g. "checked 3m ago")
Responding to drift
From the drift panel, you have two options:
- Pull from DB — Overwrites the editor DSL with the live warehouse state. Use this when production is the source of truth and your design needs to catch up.
- Push to DB — Opens the Warehouse Sync modal with a pre-flight DDL preview. Use this when your DSL design is the source of truth and the warehouse needs to be updated.
Drift panel controls
- Click Refresh to trigger an immediate drift check without waiting for the next poll
- Dismiss a drift notification to suppress it until the schema changes again
- Toggle the drift watcher off from the connection panel to disable background polling for the session
| Drift status | Meaning |
|---|---|
| Idle | No active connection or watcher is paused |
| Checking | Poll in progress |
| In sync | Live schema matches DSL — no action needed |
| Drifted | Differences detected — review and pull or push |
| Error | Connection failed during poll — check credentials |
Sample Data Generation
SchemaStruct can generate realistic sample data from your schema design and export it as CSV or SQL, or load it directly into a connected warehouse for integration testing before shipping to production.
Generating sample data
- Open Sample Data from the editor toolbar
- Choose row count per table, a scenario preset, and realism level
- Enable business-context prompts to generate realistic domain values (customer names, product codes, statuses)
- Free tier — Up to 5 rows per table (preview)
- Pro tier — Up to 10,000 rows per table with schema-aware dependency ordering
- AI-curated recipe flow (Pro/Team) — Combines your schema with a business prompt to produce realistic labels, statuses, and codes before deterministic row generation
Loading data directly into a connected warehouse
When you have an active warehouse connection, the sample data generator can load rows directly into your connected tables instead of just exporting files:
- Reuses the active connected database session — no separate credentials required
- Pre-flight checks verify target tables exist and are empty before loading
- Tables are loaded in dependency order (parent tables first) to satisfy FK constraints
- Supported targets for direct load: PostgreSQL, MySQL, Snowflake, Databricks
Export options
| Format | Description | Plan |
|---|---|---|
| CSV | One file per table, seeded for reproducibility | Free |
| SQL INSERT | Dependency-ordered INSERT statements | Free |
| Direct DB load | Push rows into a connected warehouse table | Pro |
Deploy-Ready SQL Export
Click Export → SQL and choose your target database. SchemaStruct generates deployment-ready DDL across supported dialects with warehouse-aware type mapping:
- PostgreSQL — SERIAL, VARCHAR, JSONB, etc.
- MySQL — AUTO_INCREMENT, VARCHAR(255), TINYINT(1), etc.
- SQL Server — IDENTITY(1,1), NVARCHAR(255), BIT, etc.
- SQLite — INTEGER, TEXT, REAL, BLOB
- Oracle — NUMBER(10), VARCHAR2(255), CLOB, sequences
- Snowflake — NUMBER, VARCHAR, VARIANT, AUTOINCREMENT
Each dialect uses proper identifier quoting and type mapping so the design artifact is ready for review, commit, and downstream deployment workflows.
GitHub Deploy Sync
Keep your warehouse change workflow tied to the codebase. Click ♲ Sync to push generated SQL migrations or schema artifacts directly to a GitHub repository.
- Requires a GitHub Personal Access Token (PAT) with repo scope
- Commits your schema directly to any branch
- Generates a direct link to the commit diff
CLI
SchemaStruct now includes a thin CLI package for engineering workflows that need linting, contract validation, schema diffing, or authenticated cloud push outside the browser editor.
schemastruct lint schema.dsl
schemastruct diff schema.dsl --base HEAD~1
schemastruct contracts validate schema.dsl
schemastruct push schema.dsl --name "Warehouse Core"
- lint runs parser checks, validator rules, naming, index, referential, and normalization advisors
- diff compares a DSL file against a git ref or local file and includes contract change summaries
- contracts validate focuses on contract coverage and governance issues
- push sends the reviewed DSL into the authenticated cloud workspace via the project API
For push, set SCHEMASTRUCT_ACCESS_TOKEN and optionally SCHEMASTRUCT_API_URL if you are not targeting the local dev server.
GitHub PR Bot
SchemaStruct includes a PR-comment workflow that watches for .dsl changes and posts a warehouse-aware summary directly into pull requests. This keeps schema review visible in the same thread where application and dbt changes are already being reviewed.
- Highlights tables and columns added or removed
- Includes contract-aware change summaries using the same diff logic as the editor
- Updates the existing PR comment instead of adding duplicate noise on every push
The workflow lives in .github/workflows/schemastruct-pr-diff.yml and uses scripts/post-schema-pr-diff.cjs to generate the comment body.
Public Schema Gallery
Public projects can now become reusable gallery entries at /gallery. The gallery is designed for discoverable warehouse schema starting points, reusable examples, and lightweight internal or external sharing.
- Browse public schemas by inferred domain such as SaaS, Fintech, or E-commerce
- See table counts, relationship counts, documentation density, and contract coverage before opening
- Open a gallery detail page and fork it directly into your workspace
- Open the same schema as an unsaved editor working copy if you are not signed in yet
Schema Embed Widget
Use the public embed script when you want a live schema preview inside docs, blogs, or internal wiki pages. The widget renders a read-only visual preview and links back to the full gallery page.
<script src="https://schemastruct.com/embed.js" data-schema-id="your-share-id" data-theme="light" data-height="420"> </script>
- data-schema-id accepts a public project share id
- data-theme supports light or dark
- data-height controls the iframe height while keeping the preview responsive
dbt Workflow
SchemaStruct works well as a visual planning layer for dbt teams. You can bring dbt metadata into the editor, review relationships and constraints visually, and export artifacts back out for your project.
- Import schema.yml and sources.yml into visual DSL
- Export updated schema.yml metadata from the same model
- Generate starter dbt model stubs for delivery workflows
- Preview dbt-ready fact and dimension SQL from the star schema builder
- Carry the approved mart into the dashboard workspace when the modeled output is dashboard-ready
- Use diff and docs to review model changes with non-dbt stakeholders
Data Contracts
Click Contracts in the editor toolbar to author and publish table-level contracts from the same schema you already review for delivery. The contract workflow is designed for warehouse teams that need enforceable metadata, not a separate governance silo.
The contract panel supports two working modes:
- Rules — Add owner, domain, SLA, freshness, criticality, version, contact, and per-column rules
- Export — Generate ODCS YAML, dbt tests, Soda checks, Great Expectations JSON, a GitHub Action, or a contract bundle ZIP
Contracts are stored in the schema DSL using structured comment directives so they round-trip through project save, version history, share links, and exports.
table analytics.gold.fact_subscription {
// contract.meta: {"owner":"data-platform","domain":"billing","sla":"daily","version":"1.0.0","contactEmail":"data@example.com","criticality":"high","freshnessMaxHours":24}
// contract.rules: [{"type":"not_null","severity":"error","description":"Key must always be present"}]
subscription_key bigint [pk]
// contract.rules: [{"type":"min","value":0,"severity":"error","description":"MRR cannot be negative"}]
mrr_amount decimal
}

Governance readiness is built into the editor. The contract panel shows what metadata is still missing before publish, and validation flags missing owners, versions, contact emails, criticality, or SLA/freshness expectations.
ORM Code Generation
Export your schema as ORM model code for popular frameworks:
- Prisma — schema.prisma with models, relations, and @@map
- TypeORM — TypeScript entities with decorators
- SQLAlchemy — Python models with Column, ForeignKey, relationship
- Django — Python models with fields and Meta class
Data Dictionary Portal
Documentation is the last step of the workflow, not a separate artifact. Click Publish Docs in the toolbar to get a shareable URL to your project's data dictionary, export it as Markdown or HTML, or click Copy as Markdown to move the schema directly into GitHub, Notion, RFCs, or team docs.
The documentation portal includes:
- Summary (total tables, columns, relationships)
- Table of contents with links
- Per-table column details (name, type, constraints, default, notes)
- Relationship maps per table
- Full relationship summary
- Model context that stays aligned with downstream dashboard and contract handoff workflows
- Clipboard-ready Markdown for one-click documentation handoff
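As a rough illustration of what the clipboard-ready Markdown looks like, per-table column details reduce to a Markdown table. The dict shape and helper below are assumptions for the sketch, not SchemaStruct's actual export format:

```python
def columns_to_markdown(table: str, columns: list[dict]) -> str:
    """Render per-table column details as a Markdown table (illustrative only)."""
    lines = [
        f"## {table}",
        "",
        "| Column | Type | Constraints |",
        "|---|---|---|",
    ]
    for col in columns:
        constraints = ", ".join(col.get("constraints", []))
        lines.append(f"| {col['name']} | {col['type']} | {constraints} |")
    return "\n".join(lines)

md = columns_to_markdown("customers", [
    {"name": "id", "type": "serial", "constraints": ["pk"]},
    {"name": "email", "type": "varchar", "constraints": ["unique", "not null"]},
])
```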
Contract Registry & Validation Health
Published contracts are discoverable inside the authenticated registry at /account/contracts. This is the in-product dashboard for contract discovery, lifecycle control, and CI status monitoring.
The registry dashboard lets you:
- Search by table name or workspace
- Filter by lifecycle state: active, draft, deprecated
- Filter by health state: passed, failed, warning, awaiting run
- Open public contract pages and download contract bundles
- Inspect the latest validation time, trigger source, and violation count
To send CI results back into SchemaStruct, post run results to the contract run endpoint for a published contract:
POST /api/contracts/:id/run
{
"status": "failed",
"triggered_by": "ci",
"violations": [
{
"table": "fact_subscription",
"column": "mrr_amount",
"rule": "min",
"message": "Found rows where mrr_amount < 0"
}
]
}
Once a run is posted, the registry and the public contract page update to show the latest health badge, validation timestamp, and recent violations.
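Any HTTP client in your CI job can report the run. A hedged Python sketch using only the standard library — the host, auth header, and helper names are assumptions for illustration; only the endpoint path and payload shape come from the docs above:

```python
import json
import urllib.request

def build_run_payload(status: str, violations: list[dict]) -> bytes:
    """Assemble the contract-run request body documented above."""
    return json.dumps({
        "status": status,
        "triggered_by": "ci",
        "violations": violations,
    }).encode()

def post_run(contract_id: str, payload: bytes, token: str) -> None:
    # Host and Bearer auth are assumed; adjust to your deployment.
    req = urllib.request.Request(
        f"https://schemastruct.com/api/contracts/{contract_id}/run",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses

payload = build_run_payload("failed", [
    {"table": "fact_subscription", "column": "mrr_amount",
     "rule": "min", "message": "Found rows where mrr_amount < 0"},
])
```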
Schema Templates
Click Templates in the toolbar to load a pre-built schema instantly. Available templates:
- E-commerce — Users, products, categories, orders, reviews, coupons
- Blog / CMS — Posts, categories, tags, comments, media
- Project Management — Teams, projects, tasks, time tracking
- CRM — Contacts, companies, deals, pipelines, activities
- SaaS Multi-tenant — Organizations, roles, subscriptions, billing
- Social Network — Posts, likes, follows, messages, notifications
- Healthcare / EHR — Patients, doctors, appointments, prescriptions
- Education / LMS — Courses, lessons, enrollments, grades
- Inventory Management — Warehouses, stock, purchase orders, shipments
- Authentication System — Roles, permissions, sessions, audit logs
Version History
SchemaStruct auto-saves your schema every 5 seconds. Click History in the toolbar to:
- View a timeline of all saved versions
- Restore any previous version instantly
- Save labeled snapshots (e.g. "Before adding payments")
- Compare two versions with a colored diff view
- Review contract-aware change summaries, including breaking and non-breaking contract changes
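Conceptually, comparing two versions is a line diff over the schema DSL. A minimal sketch with Python's `difflib` — SchemaStruct's actual diff view is richer (colored, contract-aware), so this only illustrates the idea:

```python
import difflib

# Two versions of the same table definition
old = "table users {\nid serial [pk]\nemail varchar\n}"
new = "table users {\nid serial [pk]\nemail varchar [unique, not null]\n}"

# Unified diff: '-' lines were removed, '+' lines were added
diff = list(difflib.unified_diff(
    old.splitlines(), new.splitlines(),
    fromfile="v1", tofile="v2", lineterm="",
))
```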
Schema Validation
Click Validate to lint your schema before review or deployment. Built-in rules catch structural issues, naming problems, and warehouse compatibility risks:
| Rule | Severity | Description |
|---|---|---|
| No Primary Key | Warning | Table has no primary key |
| Contract Definitions | Warning | Checks contract rules and metadata for invalid values |
| Contract Governance | Warning | Checks for missing owner, version, contact, criticality, and SLA/freshness data |
| Naming Convention | Warning | Names should be snake_case |
| Require Timestamps | Info | Tables should have created_at/updated_at |
| NOT NULL Foreign Keys | Warning | FK columns should be NOT NULL |
| No Duplicate Columns | Error | Duplicate column names |
| Lonely Table | Info | Table has no relationships |
| Missing Referenced Table | Error | Ref points to non-existent table |
| Missing Referenced Column | Error | Ref points to non-existent column |
| Reserved Words | Warning | Column name is a SQL reserved word |
| No Type Defined | Warning | Column has no type specified |
Rules can be individually enabled or disabled in the validation panel, which is useful when different warehouses or dbt conventions require different standards.
Contract-focused rules are especially useful before publishing a public contract page or handing a contract bundle to downstream engineering teams.
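For a sense of what a rule like Naming Convention checks, here is a minimal snake_case lint in Python. The actual rule implementation is internal to SchemaStruct; this regex is an assumption:

```python
import re

# Lowercase start, then lowercase letters, digits, or underscores
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9_]*$")

def check_naming(names: list[str]) -> list[str]:
    """Return the names that would trigger a Naming Convention warning."""
    return [n for n in names if not SNAKE_CASE.fullmatch(n)]

violations = check_naming(["order_items", "unitPrice", "created_at", "Status"])
```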
Auto Layout
Use the Layout dropdown in the toolbar to adjust canvas presentation controls:
- Edge Style — Switch between smooth, straight, bezier, and step connectors
Real-time Collaboration
When signed in with Supabase authentication, SchemaStruct supports real-time collaboration, so schema delivery is reviewable rather than siloed:
- See who is currently editing (colored avatars in the toolbar)
- Schema changes sync instantly across all connected users
- Share projects via unique URLs
- Keep docs, diff, and deployment artifacts aligned for the whole team
Data Modeling Types
Switch between three model views using the selector in the toolbar. Your table positions are preserved when switching views.
Conceptual Model
A high-level overview of your data. Shows only entity names and their relationships — no columns, types, or constraints. Great for early-stage planning and stakeholder communication.
Logical Model
Shows tables with only PK and FK columns. Defines the relational key structure without implementation details — database-agnostic.
Physical Model
The most detailed model. Includes all columns, specific database types, defaults, constraints, and notes. Ready to export to SQL and deploy.
table customers {
id serial [pk]
name varchar [not null]
email varchar [unique, not null]
phone varchar
created_at timestamp [not null, default: 'now()']
updated_at timestamp
}
table orders {
id serial [pk]
customer_id integer [not null]
total decimal [not null, default: 0.00]
status varchar [not null, default: 'pending']
notes text
created_at timestamp [not null, default: 'now()']
}
ref: orders.customer_id > customers.id
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Cmd+Z | Undo |
| Cmd+Shift+Z | Redo |
| Cmd+S | Save project |
| Cmd+F or / | Search tables |
| Delete / Backspace | Delete selected table |
| Double-click canvas | Create new table |
| Double-click header | Rename table |
| Double-click column | Edit column name & type |
| Right-click table | Open context menu |
| Enter (while editing) | Save edit |
| Escape (while editing) | Cancel edit |
Full Example: E-commerce Schema
// E-commerce Database Schema
table users {
id serial [pk]
email varchar [unique, not null]
password_hash varchar [not null]
full_name varchar [not null]
role varchar [default: 'customer']
created_at timestamp [default: 'now()']
}
table products {
id serial [pk]
name varchar [not null]
description text
price decimal [not null]
stock integer [default: 0]
category_id integer
is_active boolean [default: true]
}
table categories {
id serial [pk]
name varchar [unique, not null]
parent_id integer
}
table orders {
id serial [pk]
user_id integer [not null]
status varchar [default: 'pending']
total decimal [not null]
shipping_address text
created_at timestamp [default: 'now()']
}
table order_items {
id serial [pk]
order_id integer [not null]
product_id integer [not null]
quantity integer [not null, default: 1]
unit_price decimal [not null]
}
// Relationships
ref: orders.user_id > users.id
ref: order_items.order_id > orders.id
ref: order_items.product_id > products.id
ref: products.category_id > categories.id
ref: categories.parent_id > categories.id