model-discovery

Fetch current model names from AI providers (Anthropic, OpenAI, Gemini, Ollama), classify them into tiers (fast/default/heavy), and detect new models. Use when needing up-to-date model IDs for API calls or when other skills reference model names.

242 stars

Best use case

model-discovery is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It fetches current model names from AI providers (Anthropic, OpenAI, Gemini, Ollama), classifies them into tiers (fast/default/heavy), and detects new models, which is especially useful when other skills or API calls need up-to-date model IDs.


Users can expect more consistent workflow output, faster repeated runs, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "model-discovery" skill to help with this workflow task. Context: Fetch current model names from AI providers (Anthropic, OpenAI, Gemini, Ollama), classify them into tiers (fast/default/heavy), and detect new models. Use when needing up-to-date model IDs for API calls or when other skills reference model names.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/model-discovery/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/consiliency/model-discovery/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/model-discovery/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How model-discovery Compares

Feature / Agent          model-discovery    Standard Approach
Platform Support         Not specified      Limited / Varies
Context Awareness        High               Baseline
Installation Complexity  Unknown            N/A

Frequently Asked Questions

What does this skill do?

Fetch current model names from AI providers (Anthropic, OpenAI, Gemini, Ollama), classify them into tiers (fast/default/heavy), and detect new models. Use when needing up-to-date model IDs for API calls or when other skills reference model names.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Model Discovery Skill

Fetch the most recent model names from AI providers using their APIs. Includes tier classification (fast/default/heavy) for routing decisions and automatic detection of new models.

## Variables

| Variable | Default | Description |
|----------|---------|-------------|
| CACHE_TTL_HOURS | 24 | How long to cache model lists before refreshing |
| ENABLED_ANTHROPIC | true | Fetch Claude models from Anthropic API |
| ENABLED_OPENAI | true | Fetch GPT models from OpenAI API |
| ENABLED_GEMINI | true | Fetch Gemini models from Google API |
| ENABLED_OLLAMA | true | Fetch local models from Ollama |
| OLLAMA_HOST | http://localhost:11434 | Ollama API endpoint |
| AUTO_CLASSIFY | true | Auto-classify new models using pattern matching |

## Instructions

**MANDATORY** - Follow the Workflow steps below in order. Do not skip steps.

- Before referencing model names in any skill, check if fresh data exists
- Use tier mappings to select appropriate models (fast for speed, heavy for capability)
- Check for new models periodically and classify them

## Red Flags - STOP and Reconsider

If you're about to:
- Hardcode a model version like `gpt-5.2` or `claude-sonnet-4-5`
- Use model names from memory without checking current availability
- Call APIs without checking if API keys are configured
- Skip new model classification when prompted

**STOP** -> Read the appropriate cookbook file -> Use the fetch script

## Workflow

### Fetching Models

1. [ ] Determine which provider(s) you need models from
2. [ ] Check if cached model list exists: `cache/models.json`
3. [ ] If cache is fresh (< CACHE_TTL_HOURS old), use cached data
4. [ ] If stale/missing, run: `uv run python scripts/fetch_models.py --force`
5. [ ] **CHECKPOINT**: Verify no API errors in output
6. [ ] Use the model IDs as needed
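
The cache-freshness logic in steps 2–4 could be sketched as follows. This is a minimal illustration, assuming the cache layout described above; `scripts/fetch_models.py` is the real implementation and may behave differently:

```python
import json
import subprocess
import time
from pathlib import Path

CACHE = Path("cache/models.json")
CACHE_TTL_HOURS = 24  # matches the Variables table default

def get_models(force: bool = False) -> dict:
    """Return cached model data if fresh; otherwise refresh via the fetch script."""
    if not force and CACHE.exists():
        age_hours = (time.time() - CACHE.stat().st_mtime) / 3600
        if age_hours < CACHE_TTL_HOURS:
            return json.loads(CACHE.read_text())
    # Cache stale or missing: re-fetch from the provider APIs
    subprocess.run(
        ["uv", "run", "python", "scripts/fetch_models.py", "--force"],
        check=True,
    )
    return json.loads(CACHE.read_text())
```

Checking the file's modification time keeps the freshness test cheap; the subprocess is only spawned when the TTL has expired or `--force` semantics are requested.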

### Checking for New Models

1. [ ] Run: `uv run python scripts/check_new_models.py --json`
2. [ ] If new models found, review the output
3. [ ] For auto-classification: `uv run python scripts/check_new_models.py --auto`
4. [ ] For interactive classification: `uv run python scripts/check_new_models.py`
5. [ ] **CHECKPOINT**: All models assigned to tiers (fast/default/heavy)

### Getting Tier Recommendations

1. [ ] Read: `config/model_tiers.json` for current tier mappings
2. [ ] Use the appropriate model for task complexity:
   - **fast**: Simple tasks, high throughput, cost-sensitive
   - **default**: General purpose, balanced
   - **heavy**: Complex reasoning, research, difficult tasks
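
Selecting a model from the tier file might look like the sketch below. The `EXAMPLE_TIERS` shape is a hypothetical stand-in for `config/model_tiers.json`, whose actual schema may differ:

```python
# Hypothetical shape of config/model_tiers.json (the real file may differ)
EXAMPLE_TIERS = {
    "anthropic": {
        "fast": "claude-haiku-4-5",
        "default": "claude-sonnet-4-5",
        "heavy": "claude-opus-4-5",
    },
}

def pick_model(tiers: dict, provider: str, complexity: str = "default") -> str:
    """Map a task-complexity tier to a concrete model ID, falling back to default."""
    provider_tiers = tiers[provider]
    return provider_tiers.get(complexity, provider_tiers["default"])
```

Falling back to `default` for unknown tier names keeps callers working even if a skill requests a tier the config does not define.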

## Model Tier Reference

### Anthropic Claude

| Tier | Model | CLI Name |
|------|-------|----------|
| fast | claude-haiku-4-5 | haiku |
| default | claude-sonnet-4-5 | sonnet |
| heavy | claude-opus-4-5 | opus |

### OpenAI

| Tier | Model | Notes |
|------|-------|-------|
| fast | gpt-5.2-mini | Speed optimized |
| default | gpt-5.2 | Balanced flagship |
| heavy | gpt-5.2-pro | Maximum capability |

**Codex (for coding)**:

| Tier | Model |
|------|-------|
| fast | gpt-5.2-codex-mini |
| default | gpt-5.2-codex |
| heavy | gpt-5.2-codex-max |

### Google Gemini

| Tier | Model | Context |
|------|-------|---------|
| fast | gemini-3-flash-lite | See API output |
| default | gemini-3-pro | See API output |
| heavy | gemini-3-deep-think | See API output |

### Ollama (Local)

| Tier | Suggested Model | Notes |
|------|-----------------|-------|
| fast | phi3.5:latest | Small; fast |
| default | llama3.2:latest | Balanced |
| heavy | llama3.3:70b | Large; requires GPU |

## CLI Mappings (for spawn:agent skill)

| CLI Tool | Fast | Default | Heavy |
|----------|------|---------|-------|
| claude-code | haiku | sonnet | opus |
| codex-cli | gpt-5.2-codex-mini | gpt-5.2-codex | gpt-5.2-codex-max |
| gemini-cli | gemini-3-flash-lite | gemini-3-pro | gemini-3-deep-think |
| cursor-cli | gpt-5.2 | sonnet-4.5 | sonnet-4.5-thinking |
| opencode-cli | anthropic/claude-haiku-4-5 | anthropic/claude-sonnet-4-5 | anthropic/claude-opus-4-5 |
| copilot-cli | claude-sonnet-4.5 | claude-sonnet-4.5 | claude-sonnet-4.5 |

## Quick Reference

### Scripts

```bash
# Fetch all models (uses cache if fresh)
uv run python scripts/fetch_models.py

# Force refresh from APIs
uv run python scripts/fetch_models.py --force

# Fetch and check for new models
uv run python scripts/fetch_models.py --force --check-new

# Check for new unclassified models (JSON output for agents)
uv run python scripts/check_new_models.py --json

# Auto-classify new models using patterns
uv run python scripts/check_new_models.py --auto

# Interactive classification
uv run python scripts/check_new_models.py
```

### Config Files

| File | Purpose |
|------|---------|
| `config/model_tiers.json` | Static tier mappings and CLI model names |
| `config/known_models.json` | Registry of all classified models with timestamps |
| `cache/models.json` | Cached API responses |

### API Endpoints

| Provider | Endpoint | Auth |
|----------|----------|------|
| Anthropic | `GET /v1/models` | `x-api-key` header |
| OpenAI | `GET /v1/models` | Bearer token |
| Gemini | `GET /v1beta/models` | `?key=` param |
| Ollama | `GET /api/tags` | None |
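
The differing auth schemes in the table can be captured in one helper. This is a sketch of the header construction only; the Gemini base URL shown is the public `generativelanguage.googleapis.com` endpoint and is included as an assumption, not taken from the skill's scripts:

```python
from typing import Dict

def auth_headers(provider: str, api_key: str) -> Dict[str, str]:
    """Build the request headers each provider expects, per the table above."""
    if provider == "anthropic":
        return {"x-api-key": api_key}                  # Anthropic: x-api-key header
    if provider == "openai":
        return {"Authorization": f"Bearer {api_key}"}  # OpenAI: Bearer token
    return {}  # Gemini authenticates via ?key= query param; Ollama needs no auth

def gemini_models_url(api_key: str) -> str:
    """Gemini passes the key as a query parameter instead of a header."""
    return f"https://generativelanguage.googleapis.com/v1beta/models?key={api_key}"
```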

## Output Examples

### Fetch Models Output

```json
{
  "fetched_at": "2025-12-17T05:53:25Z",
  "providers": {
    "anthropic": [{"id": "claude-opus-4-5", "name": "Claude Opus 4.5"}],
    "openai": [{"id": "gpt-5.2", "name": "gpt-5.2"}],
    "gemini": [{"id": "models/gemini-3-pro", "name": "Gemini 3 Pro"}],
    "ollama": [{"id": "phi3.5:latest", "name": "phi3.5:latest"}]
  }
}
```
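
An agent consuming this payload might flatten it into plain ID lists before use; a minimal sketch against the shape shown above:

```python
def model_ids(fetch_output: dict) -> dict:
    """Flatten fetch_models.py output into {provider: [model_id, ...]}."""
    return {
        provider: [m["id"] for m in models]
        for provider, models in fetch_output["providers"].items()
    }

example = {
    "fetched_at": "2025-12-17T05:53:25Z",
    "providers": {
        "anthropic": [{"id": "claude-opus-4-5", "name": "Claude Opus 4.5"}],
        "ollama": [{"id": "phi3.5:latest", "name": "phi3.5:latest"}],
    },
}
```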

### Check New Models Output (--json)

```json
{
  "timestamp": "2025-12-17T06:00:00Z",
  "has_new_models": true,
  "total_new": 2,
  "by_provider": {
    "openai": {
      "count": 2,
      "models": [
        {"id": "gpt-5.2-mini", "inferred_tier": "fast", "needs_classification": false},
        {"id": "gpt-5.2-pro", "inferred_tier": "heavy", "needs_classification": false}
      ]
    }
  }
}
```

## Integration

Other skills should reference this skill for model names:

```markdown
## Model Names

For current model names and tiers, use the `model-discovery` skill:
- Tiers: Read `config/model_tiers.json`
- Fresh data: Run `uv run python scripts/fetch_models.py`
- New models: Run `uv run python scripts/check_new_models.py --json`

**Do not hardcode model version numbers** - they become stale quickly.
```

## New Model Detection

When new models are detected:

1. The script will report them with suggested tiers based on naming patterns
2. Models matching these patterns are auto-classified:
   - **heavy**: `-pro`, `-opus`, `-max`, `thinking`, `deep-research`
   - **fast**: `-mini`, `-nano`, `-flash`, `-lite`, `-haiku`
   - **default**: Base model names without modifiers
3. Models not matching patterns require manual classification
4. Specialty models (TTS, audio, transcribe) are auto-excluded
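
The pattern matching in steps 2 and 4 might look like the sketch below. It implements exactly the substring rules listed above; the real `check_new_models.py` may use different or additional patterns:

```python
# Substring patterns taken from the classification rules above
HEAVY_PATTERNS = ("-pro", "-opus", "-max", "thinking", "deep-research")
FAST_PATTERNS = ("-mini", "-nano", "-flash", "-lite", "-haiku")
SPECIALTY_PATTERNS = ("tts", "audio", "transcribe")  # auto-excluded

def infer_tier(model_id: str):
    """Infer a tier from the model ID, or None for excluded specialty models."""
    mid = model_id.lower()
    if any(p in mid for p in SPECIALTY_PATTERNS):
        return None  # specialty model, excluded from tier routing
    if any(p in mid for p in HEAVY_PATTERNS):
        return "heavy"
    if any(p in mid for p in FAST_PATTERNS):
        return "fast"
    return "default"  # base model names without modifiers
```

Specialty exclusion runs first so that, for example, an audio-capable mini model is excluded rather than classified as fast.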

### Agent Query for New Models

When checking for new models programmatically:

```bash
# Returns exit code 1 if new models need attention
uv run python scripts/check_new_models.py --json

# Example agent workflow
if ! uv run python scripts/check_new_models.py --json > /tmp/new_models.json 2>&1; then
    echo "New models detected - review /tmp/new_models.json"
fi
```

Related Skills

threat-modeling-expert

from aiskillstore/marketplace

Expert in threat modeling methodologies, security architecture review, and risk assessment. Masters STRIDE, PASTA, attack trees, and security requirement extraction. Use for security architecture reviews, threat identification, and secure-by-design planning.

startup-financial-modeling

from aiskillstore/marketplace

This skill should be used when the user asks to "create financial projections", "build a financial model", "forecast revenue", "calculate burn rate", "estimate runway", "model cash flow", or requests 3-5 year financial planning for a startup.

pydantic-models-py

from aiskillstore/marketplace

Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. Use when defining API request/response schemas, database models, or data validation in Python applications using Pydantic v2.

avalonia-viewmodels-zafiro

from aiskillstore/marketplace

Optimal ViewModel and Wizard creation patterns for Avalonia using Zafiro and ReactiveUI.

deploy-model

from aiskillstore/marketplace

Unified Azure OpenAI model deployment skill with intelligent intent-based routing. Handles quick preset deployments, fully customized deployments (version/SKU/capacity/RAI policy), and capacity discovery across regions and projects. USE FOR: deploy model, deploy gpt, create deployment, model deployment, deploy openai model, set up model, provision model, find capacity, check model availability, where can I deploy, best region for model, capacity analysis. DO NOT USE FOR: listing existing deployments (use foundry_models_deployments_list MCP tool), deleting deployments, agent creation (use agent/create), project creation (use project/create).

harness-model-protocol

from aiskillstore/marketplace

Analyze the protocol layer between agent harness and LLM model. Use when (1) understanding message wire formats and API contracts, (2) examining tool call encoding/decoding mechanisms, (3) evaluating streaming protocols and partial response handling, (4) identifying agentic chat primitives (system prompts, scratchpads, interrupts), (5) comparing multi-provider abstraction strategies, or (6) understanding how frameworks translate between native LLM APIs and internal representations.

component-model-analysis

from aiskillstore/marketplace

Evaluate extensibility patterns, abstraction layers, and configuration approaches in frameworks. Use when (1) assessing base class/protocol design, (2) understanding dependency injection patterns, (3) evaluating plugin/extension systems, (4) comparing code-first vs config-first approaches, or (5) determining framework flexibility for customization.

when-developing-ml-models-use-ml-expert

from aiskillstore/marketplace

Specialized ML model development, training, and deployment workflow

backend-models

from aiskillstore/marketplace

Define and configure database models with proper naming, relationships, timestamps, data types, constraints, and validation. Use this skill when creating or editing model files in app/Models/, Eloquent model classes, model relationships (hasMany, belongsTo, etc.), database table structures, model attributes and casts, model factories, or seeders. Use when working on model validation logic, database constraints, foreign key relationships, indexes, scopes, accessors, mutators, or any ORM-related model configuration.

statsmodels

from aiskillstore/marketplace

Statistical modeling toolkit. OLS, GLM, logistic, ARIMA, time series, hypothesis tests, diagnostics, AIC/BIC, for rigorous statistical inference and econometric analysis.

pymc-bayesian-modeling

from aiskillstore/marketplace

Bayesian modeling with PyMC. Build hierarchical models, MCMC (NUTS), variational inference, LOO/WAIC comparison, posterior checks, for probabilistic programming and inference.

skool-money-model-strategist

from aiskillstore/marketplace

Applies Alex Hormozi's $100M Money Models frameworks to design, evaluate, and improve Skool community monetization strategies. Uses CAC-based stage diagnosis (5 stages), 30-day cash maximization formulas, and sequential implementation to create actionable roadmaps grounded in Hormozi's 15 money model mechanisms and Skool's 5 business models (Free, Subscription, Freemium, Tiers, One-Time). Helps Skool community owners identify which mechanisms to implement, validate money models against Hormozi principles, and create step-by-step Skool setup instructions for maximum revenue per customer in 30 days.