## Best use case
Malloy — Semantic Data Language is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using Malloy — Semantic Data Language should expect more consistent output, faster repeated execution, and less prompt rewriting.
### When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
### When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Manual Installation (Claude Code / Cursor / Codex)

- Download SKILL.md from GitHub
- Place it in `.claude/skills/malloy/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
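If you prefer to script those steps, here is a minimal sketch assuming a Unix shell run from the project root; the download URL is a placeholder that must be replaced with the raw SKILL.md link from the repository:

```bash
# Run from your project root; <raw-skill-md-url> is a placeholder —
# use the raw SKILL.md link from the GitHub repository
mkdir -p .claude/skills/malloy
curl -fsSL -o .claude/skills/malloy/SKILL.md "<raw-skill-md-url>"
# Restart your AI agent so it rediscovers available skills
```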
## How Malloy — Semantic Data Language Compares
| Feature | Malloy — Semantic Data Language | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions
### What does this skill do?

It guides your AI agent in writing Malloy models, building nested queries, and exploring data with Malloy's VS Code extension and notebook interface.
### Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Malloy — Semantic Data Language
## Overview
Malloy is an experimental data language from Google that replaces SQL for analytics with a composable, reusable, and more readable syntax. This skill helps developers write Malloy models, build nested queries, and explore data with Malloy's VS Code extension and notebook interface.
## Instructions
### Source Definition
Define reusable data models:
```malloy
// models/ecommerce.malloy — Ecommerce data model
source: orders is duckdb.table('orders.parquet') extend {
  // Dimensions (attributes to group by)
  dimension:
    order_date is created_at::date
    order_month is created_at.month
    order_year is year(created_at) // numeric year, so filters like order_year = 2026 work
    is_high_value is amount > 100
    order_size is
      pick 'small' when items_count < 3
      pick 'medium' when items_count < 10
      else 'large'

  // Measures (aggregations)
  measure:
    order_count is count()
    total_revenue is sum(amount)
    avg_order_value is avg(amount)
    unique_customers is count(distinct customer_id)
    revenue_per_customer is total_revenue / unique_customers

  // Reusable named queries (views)
  view: revenue_by_month is {
    group_by: order_month
    aggregate:
      total_revenue
      order_count
      avg_order_value
    order_by: order_month
  }

  view: top_customers is {
    group_by: customer_id
    aggregate:
      total_revenue
      order_count
    order_by: total_revenue desc
    limit: 20
  }

  view: daily_dashboard is {
    group_by: order_date
    aggregate:
      total_revenue
      order_count
      unique_customers
      avg_order_value
    order_by: order_date desc
    limit: 30
  }
}
```
### Queries
Write composable, readable analytics queries:
```malloy
// queries/analysis.malloy — Analytics queries using the model
import "models/ecommerce.malloy"
// Simple aggregation
run: orders -> {
  aggregate:
    total_revenue
    order_count
    avg_order_value
}

// Group by with filters
run: orders -> {
  where: order_year = 2026
  group_by: order_month
  aggregate:
    total_revenue
    order_count
  order_by: order_month
}

// Nested queries — multiple levels of aggregation in one query
run: orders -> {
  group_by: order_size
  aggregate:
    total_revenue
    order_count
    avg_order_value

  // Nested: for each order_size, show monthly breakdown
  nest: monthly_trend is {
    group_by: order_month
    aggregate: total_revenue
    order_by: order_month
  }

  // Nested: for each order_size, show top customers
  nest: top_customers is {
    group_by: customer_id
    aggregate: total_revenue, order_count
    order_by: total_revenue desc
    limit: 5
  }
}

// Pipeline: chain transformations
run: orders
  -> { where: status = 'completed' }
  -> revenue_by_month // Reuse named view
  -> { where: total_revenue > 10000 } // Filter the result
```
### Joins and Relationships
```malloy
// models/full_model.malloy — Multi-table model with joins
// Bring the orders source into scope for the join below
import "ecommerce.malloy"

source: customers is duckdb.table('customers.parquet') extend {
  dimension: signup_month is created_at.month
  measure:
    customer_count is count()
    avg_lifetime_value is avg(lifetime_value)
}

source: products is duckdb.table('products.parquet') extend {
  dimension: price_tier is
    pick 'budget' when price < 25
    pick 'mid-range' when price < 100
    else 'premium'
  measure: product_count is count()
}

source: order_items is duckdb.table('order_items.parquet') extend {
  // Join to related tables
  join_one: orders on order_id = orders.id
  join_one: products on product_id = products.id
  join_one: customers on orders.customer_id = customers.id

  measure:
    total_quantity is sum(quantity)
    item_revenue is sum(quantity * unit_price)

  // Query across joined tables
  view: revenue_by_category is {
    group_by: products.category
    aggregate:
      item_revenue
      total_quantity
    order_by: item_revenue desc
  }

  view: customer_product_matrix is {
    group_by: customers.signup_month
    aggregate: item_revenue
    nest: by_category is {
      group_by: products.category
      aggregate: item_revenue
    }
  }
}
```
### Notebooks and Visualization
```malloy
// In a Malloy notebook (.malloynb) or the VS Code extension,
// queries tagged with a renderer annotation render as charts

// Bar chart — group by with a single measure
# bar_chart
run: orders -> {
  group_by: status
  aggregate: order_count
}

// Line chart — time series
# line_chart
run: orders -> revenue_by_month

// Dashboard — multiple visualizations from one query
# dashboard
run: orders -> {
  group_by: order_size
  aggregate: total_revenue, order_count
  nest: trend is {
    group_by: order_month
    aggregate: total_revenue
  }
}
```
### DuckDB and BigQuery Connections
```malloy
// Connection configuration
// DuckDB (local files)
connection: duckdb is duckdb [
  parquet_path: "./data/"
]

// BigQuery
connection: bq is bigquery [
  project_id: "my-gcp-project"
  dataset: "analytics"
]

// Use BigQuery tables in models
source: events is bq.table('analytics.events') extend {
  measure: event_count is count()
}
```
## Installation
```bash
# VS Code Extension (recommended)
# Install "Malloy" from VS Code Marketplace
# CLI
npm install -g @malloydata/malloy-cli
# Python package
pip install malloy
# Run a Malloy file
malloy run analysis.malloy
```
## Examples
### Example 1: Integrating Malloy into an existing analytics project
**User request:**
```
Model our ecommerce data in Malloy. We have Parquet files locally and BigQuery in production.
```
The agent sets up the Malloy tooling (VS Code extension or CLI), defines sources with dimensions and measures for the core tables, adds reusable named views for the team's common analyses, and configures a DuckDB connection for the local Parquet files alongside a BigQuery connection for production data.
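A compact sketch of the kind of starting model such a request could produce; the `sessions.parquet` table and its column names are hypothetical, not part of the skill:

```malloy
// models/sessions.malloy — hypothetical first model the agent might draft
source: sessions is duckdb.table('sessions.parquet') extend {
  dimension: session_day is started_at::date
  measure:
    session_count is count()
    avg_duration is avg(duration_seconds)

  view: daily_sessions is {
    group_by: session_day
    aggregate: session_count, avg_duration
    order_by: session_day desc
  }
}
```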
### Example 2: Optimizing query performance
**User request:**
```
My Malloy queries are slow and expensive. Help me optimize the setup.
```
The agent reviews the current models and queries, identifies issues (ad hoc queries duplicated across files, no reusable views, related analyses run as separate scans, exploratory work hitting the production warehouse), and applies optimizations specific to Malloy — moving shared logic into named views, nesting related analyses into a single query, and running exploration against a local DuckDB connection before switching back to BigQuery.
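As a rough illustration of the consolidation step, assuming the `orders` source and fields defined earlier on this page (the exact queries are hypothetical):

```malloy
// Before: two separate scans over the same source
run: orders -> {
  group_by: order_size
  aggregate: total_revenue
}
run: orders -> {
  group_by: order_size, order_month
  aggregate: total_revenue
}

// After: one scan, with the monthly breakdown nested per order_size
run: orders -> {
  group_by: order_size
  aggregate: total_revenue
  nest: monthly_trend is {
    group_by: order_month
    aggregate: total_revenue
  }
}
```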
## Guidelines
1. **Models separate from queries** — Define sources and views in model files; write queries in separate files or notebooks
2. **Name your views** — Reusable views (named queries) are Malloy's superpower; define common analyses once, use everywhere
3. **Nested queries for rich analysis** — Instead of multiple separate queries, nest related analyses into a single query
4. **Use pick for categorization** — The `pick` expression replaces SQL's verbose CASE WHEN for creating dimensions; see the sketch after this list
5. **Pipeline for progressive filtering** — Chain queries with `->` to progressively refine results; each step is readable
6. **DuckDB for local analysis** — Use DuckDB connection with Parquet files for fast local analytics; switch to BigQuery for production
7. **Malloy notebooks for exploration** — Use `.malloynb` files for iterative data exploration with inline visualization
8. **Version your models** — Malloy models are code; store them in Git alongside your data pipelines
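To make guideline 4 concrete, here is a minimal sketch of a `pick` dimension with the SQL it replaces shown as a comment; the `subscriptions` table, the `mrr` column, and the tier thresholds are hypothetical:

```malloy
source: subscriptions is duckdb.table('subscriptions.parquet') extend {
  // SQL equivalent: CASE WHEN mrr < 50 THEN 'starter'
  //                      WHEN mrr < 500 THEN 'growth' ELSE 'enterprise' END
  dimension: plan_tier is
    pick 'starter' when mrr < 50
    pick 'growth' when mrr < 500
    else 'enterprise'
}
```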
## Related Skills

- **College Football Data (CFB)**: Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.
- **College Basketball Data (CBB)**: Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.
- **validating-database-integrity**: Use when you need to ensure database integrity through comprehensive data validation. Validates data types, ranges, formats, referential integrity, and business rules. Trigger with phrases like "validate database data", "implement data validation rules", "enforce data integrity constraints", or "validate data formats".
- **forecasting-time-series-data**: Forecasts future values based on historical time series data, identifying trends, seasonality, and other patterns. Use when asked to predict future values of a time series, analyze trends over time, or gain insight into time-dependent data. Trigger terms include "forecast", "predict", "time series analysis", and "future values".
- **generating-test-data**: Generates realistic test data for software development using the test-data-generator plugin, creating users, products, orders, and custom schemas. Use to populate databases, simulate user behavior, or create fixtures for automated tests. Trigger phrases include "generate test data", "create fake users", "populate database", "create test orders", or "generate data based on schema".
- **test-data-builder**: Auto-activating skill for test automation. Triggers on "test data builder". Part of the Test Automation skill category.
- **splitting-datasets**: Splits datasets into training, validation, and testing sets for ML model development. Trigger with "split dataset", "train-test split", or "data partitioning".
- **scanning-database-security**: Provides security scanning and vulnerability detection for databases. Trigger with phrases like "scan for vulnerabilities", "implement security controls", or "audit security".
- **preprocessing-data-with-automated-pipelines**: Automates data cleaning, transformation, and validation for ML tasks. Trigger with "preprocess data", "clean data", "ETL pipeline", or "data transformation".
- **optimizing-database-connection-pooling**: Provides database connection pooling and management guidance. Trigger with phrases like "manage connections", "configure pooling", or "optimize connection usage".
- **modeling-nosql-data**: Designs NoSQL data models, including schema creation for MongoDB or DynamoDB and document structure design, covering embedding vs. referencing, access-pattern optimization, and shard key selection. Trigger with "NoSQL data model", "design MongoDB schema", or "create DynamoDB table".
- **monitoring-database-transactions**: Provides health monitoring and alerting for database transactions. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".