
clarvia-aeo-check

Score any MCP server, API, or CLI for agent-readiness using Clarvia AEO (Agent Experience Optimization). Search 15,400+ indexed tools before adding them to your workflow.

28,273 stars

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/clarvia-aeo-check/SKILL.md --create-dirs "https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/plugins/antigravity-awesome-skills-claude/skills/clarvia-aeo-check/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/clarvia-aeo-check/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How clarvia-aeo-check Compares

| Feature / Agent | clarvia-aeo-check | Standard Approach |
|-----------------|-------------------|-------------------|
| Platform Support | Multi (Claude Code, Cursor, Codex) | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Score any MCP server, API, or CLI for agent-readiness using Clarvia AEO (Agent Experience Optimization). Search 15,400+ indexed tools before adding them to your workflow.

Which AI agents support this skill?

This skill is multi-platform: it works with Claude Code, Cursor, and Codex.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Clarvia AEO Check

## Overview

Before adding any MCP server, API, or CLI tool to your agent workflow, use Clarvia to score its agent-readiness. Clarvia evaluates 15,400+ AI tools across four AEO dimensions: API accessibility, data structuring, agent compatibility, and trust signals.

## Prerequisites

Add Clarvia MCP server to your config:

```json
{
  "mcpServers": {
    "clarvia": {
      "command": "npx",
      "args": ["-y", "clarvia-mcp-server"]
    }
  }
}
```

## When to Use This Skill

- Use when evaluating a new MCP server before adding it to your config
- Use when comparing two tools for the same job
- Use when building an agent that selects tools dynamically
- Use when you want to find the highest-quality tool in a category

## How It Works

### Step 1: Score a specific tool

Ask Claude to score any tool by URL or name:

```
Score https://github.com/example/my-mcp-server for agent-readiness
```

Clarvia returns a 0-100 AEO score with breakdown across four dimensions.
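A minimal sketch of how such a breakdown could be combined into a single score. The dimension names come from the Overview above, but the response shape and the equal weighting are assumptions for illustration, not documented Clarvia behavior:

```python
# Sketch: combine the four AEO dimension scores (0-100 each) into a
# composite 0-100 score. Equal weighting is an assumption; Clarvia's
# actual weights and response shape may differ.
from statistics import mean

AEO_DIMENSIONS = (
    "api_accessibility",
    "data_structuring",
    "agent_compatibility",
    "trust_signals",
)

def composite_aeo_score(breakdown: dict[str, float]) -> float:
    """Average the four dimension scores into a single 0-100 value."""
    missing = [d for d in AEO_DIMENSIONS if d not in breakdown]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return round(mean(breakdown[d] for d in AEO_DIMENSIONS), 1)

print(composite_aeo_score({
    "api_accessibility": 92,
    "data_structuring": 78,
    "agent_compatibility": 85,
    "trust_signals": 61,
}))  # → 79.0
```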

### Step 2: Search tools by category

```
Find the top-rated database MCP servers using Clarvia
```

Returns ranked results from 15,400+ indexed tools.

### Step 3: Compare tools head-to-head

```
Compare supabase-mcp vs firebase-mcp using Clarvia
```

Returns side-by-side score breakdown with a recommendation.

### Step 4: Check leaderboard

```
Show me the top 10 MCP servers for authentication using Clarvia
```

## Examples

### Example 1: Evaluate before installing

```
Before I add this MCP server to my config, score it:
https://github.com/example/new-tool

Use the clarvia aeo_score tool and tell me if it's agent-ready.
```

### Example 2: Find best tool in category

```
I need an MCP server for web scraping. Use Clarvia to find the 
top-rated options and compare the top 3.
```

### Example 3: CI/CD quality gate

Add to your CI pipeline using the GitHub Action:

```yaml
- uses: clarvia-project/clarvia-action@v1
  with:
    url: https://your-api.com
    fail-under: 70
```
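The `fail-under` input above suggests a simple threshold gate. A sketch of that logic, assuming the build fails only when the score drops strictly below the threshold (the action's real implementation is not shown here, and the function name is illustrative):

```python
def passes_gate(score: int, fail_under: int = 70) -> bool:
    """Return True when an AEO score meets the CI threshold.

    Assumed `fail-under` semantics: pass at or above the threshold,
    fail strictly below it.
    """
    if not 0 <= score <= 100:
        raise ValueError(f"score must be 0-100, got {score}")
    return score >= fail_under

print(passes_gate(70))  # → True
print(passes_gate(69))  # → False
```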

## AEO Score Interpretation

| Score | Rating | Meaning |
|-------|--------|---------|
| 90-100 | Agent Native | Built specifically for agent use |
| 70-89 | Agent Friendly | Works well, minor gaps |
| 50-69 | Agent Compatible | Works but needs improvement |
| 30-49 | Agent Partial | Significant limitations |
| 0-29 | Not Agent Ready | Avoid for agentic workflows |
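The table above maps directly onto a small lookup helper; a sketch (the constant and function names are illustrative, not part of the Clarvia API):

```python
# Map a 0-100 AEO score to its rating band, per the table above.
# Bands are checked from the highest floor down.
AEO_BANDS = [
    (90, "Agent Native"),
    (70, "Agent Friendly"),
    (50, "Agent Compatible"),
    (30, "Agent Partial"),
    (0, "Not Agent Ready"),
]

def aeo_rating(score: int) -> str:
    """Return the rating label for a 0-100 AEO score."""
    if not 0 <= score <= 100:
        raise ValueError(f"score must be 0-100, got {score}")
    for floor, rating in AEO_BANDS:
        if score >= floor:
            return rating
    return "Not Agent Ready"  # unreachable: the 0 floor always matches

print(aeo_rating(95))  # → Agent Native
print(aeo_rating(42))  # → Agent Partial
```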

## Best Practices

- ✅ Score tools before adding them to long-running agent workflows
- ✅ Use Clarvia's leaderboard to discover alternatives you haven't considered
- ✅ Re-check scores periodically — tools improve over time
- ❌ Don't skip scoring for "well-known" tools — even popular tools can score poorly
- ❌ Don't use tools scoring below 50 in production agent pipelines without understanding the limitations

## Common Pitfalls

- **Problem:** Clarvia returns "not found" for a tool
  **Solution:** Score the URL directly with `aeo_score`; Clarvia will evaluate it on demand

- **Problem:** Score seems low for a tool I trust
  **Solution:** Use `get_score_breakdown` to see which dimensions are weak and decide if they matter for your use case

## Related Skills

- `@mcp-builder` - Build a new MCP server that scores well on AEO
- `@agent-evaluation` - Broader agent quality evaluation framework