opencode-learn

Extracts actionable knowledge from external sources and enhances existing skills using a 4-tier novelty framework. Use PROACTIVELY when a user says "/learn <source>", provides documentation URLs, code examples, or explicitly asks to extract patterns from a repository or marketplace.

25 stars

Best use case

opencode-learn is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using opencode-learn should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/opencode-learn/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/garyblankenship/SKILL.md/opencode-learn/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/opencode-learn/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How opencode-learn Compares

| Feature / Agent | opencode-learn | Standard Approach |
|-----------------|----------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Extracts actionable knowledge from external sources and enhances existing skills using a 4-tier novelty framework. Use PROACTIVELY when a user says "/learn <source>", provides documentation URLs, code examples, or explicitly asks to extract patterns from a repository or marketplace.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# INSTRUCTIONS FOR AI ASSISTANT: The /learn Command

## System Prompt

You are executing a rigid knowledge extraction protocol. Your primary objective is to extract **only novel** technical patterns from external sources and inject them into the user's local `skills/` or `prompts/` directory.

You are acting as the Command, the Agent, and the Skill all at once. You must orchestrate the fetching, extracting, matching, and applying.

**CRITICAL DIRECTIVE:** You have a natural tendency to summarize everything you read. **DO NOT DO THIS.** The user's context window is precious. You must aggressively filter out "Tier 1" knowledge (things you already know from your pre-training data) and only retain Tier 2, 3, or 4 insights.

**Core Execution Loop**: 1. Source → 2. Extract → 3. Match → 4. Preview → 5. Approve → 6. Apply → 7. Loop

## Anti-Patterns

| Anti-Pattern | Problem | Fix |
|--------------|---------|-----|
| **Summarizing Training Data** | Bloats the context window with useless Tier 1 facts (e.g. "React uses a Virtual DOM"). | Ruthlessly apply the Novelty Test. Exclude Tier 1. |
| **Sequential File Reading** | Calling `read` 100 times in a loop will cause you to time out. | Use **Parallel Tool Calls** inside a single block. |
| **Asking before Editing** | If the user already said "Apply" in Phase 5, pausing to ask permission to edit is maddening. | Execute the edit immediately upon user approval. |
| **Missing Source Links** | Future agents won't know where the pattern came from. | Always append `<!-- Source: {url/file} -->`. |

---

## Phase 1: Source Processing (Execution Steps)

### 1a. URL Sources
**ACTION:** Fetch the URL using a web scraping or fetching tool.
*   **OPTIMIZATION:** Check for `llms.txt` first. Attempt `{base_url}/llms-full.txt` → `llms.txt` → `llms-small.txt`. If found, use it directly to avoid scraping HTML.
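The fallback chain above can be sketched as a small helper. This is a minimal illustration, not part of the skill itself; the `fetch` callable stands in for whatever scraping tool the agent has, and the candidate filenames follow the order given above.

```python
from urllib.parse import urljoin

# Ordered per the optimization note: fullest variant first.
LLMS_CANDIDATES = ["llms-full.txt", "llms.txt", "llms-small.txt"]

def pick_llms_source(base_url, fetch):
    """Return (url, text) for the first llms.txt variant found, else None.

    `fetch` is any callable that returns the body for a URL,
    or None when the URL does not exist (e.g. a 404).
    """
    for name in LLMS_CANDIDATES:
        url = urljoin(base_url.rstrip("/") + "/", name)
        body = fetch(url)
        if body:
            return url, body
    return None  # no llms.txt variant: fall back to scraping the HTML
```

If this returns `None`, the agent falls back to scraping the page itself.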

### 1b. File Sources & Batch Processing
**ACTION:** If the source is a local directory or repository, use your file search (`glob`/`grep`) and `read` tools.
*   **Strategy:** When analyzing multiple files (e.g., discovering existing skills), you MUST use **Parallel Tool Calls**. Output all your `read` tool calls in a single response.

### 1c. Discovery
**ACTION:** Use your search tools to find existing `SKILL.md` files or `AGENTS.md` manifests.
*   Look for `AGENTS.md` at the project root.
*   Look for `skills/*/SKILL.md` or `prompts/*.md`.
*   Read all discovered files in parallel.
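The discovery step can be sketched as a single pass that collects every candidate path, so the subsequent reads can be issued as one parallel batch. The helper name and the use of `pathlib` are illustrative assumptions; the search paths mirror the conventions listed above.

```python
from pathlib import Path

def discover_skill_files(root="."):
    """Collect AGENTS.md, skills/*/SKILL.md, and prompts/*.md under `root`."""
    root = Path(root)
    candidates = []
    agents = root / "AGENTS.md"
    if agents.exists():
        candidates.append(agents)
    candidates += sorted(root.glob("skills/*/SKILL.md"))
    candidates += sorted(root.glob("prompts/*.md"))
    return candidates
```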

---

## Phase 2: Knowledge Extraction

**MANDATORY:** You must apply the novelty-detection framework to filter the extracted content.

### Tier Classification
You must classify every extracted insight into one of four tiers:
| Tier | Include? | Signal |
|------|----------|--------|
| 1 | **EXCLUDE** | Could write without source (training data) |
| 2 | Include | Shows HOW (implementation-specific) |
| 3 | High value | Explains WHY (architectural trade-offs) |
| 4 | Highest | Contradicts assumptions (counter-intuitive) |

### The Novelty Test
For every insight, ask yourself: *"Could I have written this WITHOUT reading the source?"*
*   **If YES** → It is Tier 1. You MUST EXCLUDE IT.
*   **If NO** → Continue to Tier 2-4 classification.
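The Novelty Test reduces to a filter over tier-classified insight dicts (the structure shown in the next section): drop Tier 1, keep Tiers 2-4. A minimal sketch, with hypothetical function names:

```python
def passes_novelty_test(insight):
    """True only for insights the agent could NOT have written without the source."""
    return insight.get("tier", 1) >= 2  # Tier 1 = training data, always excluded

def filter_insights(insights):
    """Apply the Novelty Test across a batch of extracted insights."""
    return [i for i in insights if passes_novelty_test(i)]
```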

### Insight Structure Requirements
You must structure each extracted insight logically before scoring it:
```json
{
  "tier": 2,
  "domain": "sveltekit",
  "pattern": "Server-only load with +page.server.ts",
  "insight": "Data fetching in +page.server.ts runs only on server, +page.ts runs on both",
  "keywords": ["sveltekit", "load", "server", "ssr"],
  "source_context": "Line 45-52 of routing docs"
}
```

---

## Phase 3: Skill Matching

### Matching Algorithm
You must score each extracted insight against the user's existing skills/prompts to find the best home for it.
1. **Exact domain match**: Insight domain === skill name (score: 100)
2. **Keyword overlap**: Insight keywords ∩ skill description (score: 60-90)
3. **Technology alignment**: Same framework/library family (score: 40-60)
4. **No match**: Score <40 → Skip enhancement and propose a new skill instead.
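The scoring rules above can be sketched as a single function. The skill only specifies score bands, so mapping keyword overlap linearly into the 60-90 band is an assumption made here for illustration, and the technology-alignment heuristic (40-60) is omitted as it depends on framework knowledge not expressible in a few lines.

```python
def match_score(insight, skill_name, skill_description):
    """Score one insight against one skill. >= 40 means enhance; < 40 means new skill."""
    # Rule 1: exact domain match.
    if insight["domain"] == skill_name:
        return 100
    # Rule 2: keyword overlap, mapped into the 60-90 band (interpolation is an assumption).
    desc_words = set(skill_description.lower().split())
    keywords = {k.lower() for k in insight.get("keywords", [])}
    if keywords:
        overlap = len(keywords & desc_words) / len(keywords)
        if overlap > 0:
            return round(60 + 30 * overlap)
    # No signal: below the 40 threshold, so propose a new skill instead.
    return 0
```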

---

## Phase 4: Enhancement Proposal

### For Each Match (score >= 40)
**1. Read current skill:** Read the contents of the matched skill/prompt file.
**2. Identify target section:** Find the best section (e.g., `Patterns`, `Anti-Patterns`, `Quick Reference`).
**3. Draft the enhancement:**
- Preserve the existing structure exactly.
- Add the insight in the appropriate format for that section.
- You MUST include source attribution: `<!-- Source: {url/file} -->`
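Step 3 amounts to formatting the insight with the mandatory attribution comment. A minimal sketch; the `###` heading level and the function name are assumptions, since the real heading must match whatever structure the target skill already uses.

```python
def draft_enhancement(heading, body, source):
    """Format an insight block, with source attribution, ready for the target section."""
    return f"### {heading}\n<!-- Source: {source} -->\n{body}\n"
```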

---

## Phase 5: User Approval

### The Proposal Format
For each valid enhancement, present the proposal to the user with:

1. The target skill name
2. The insight summary and its tier
3. A diff preview of what you are going to add
4. Source attribution

**ACTION:** Ask the user: `"Apply this enhancement?"` with the options: `Apply`, `Skip`, or `Edit`.

### Response Handling
- **Apply**: Proceed to Phase 6.
- **Skip**: Skip to the next candidate.
- **Edit**: User modifies the text, then you proceed to Phase 6.

---

## Phase 6: Apply & New Skill Proposal

### 6a. Apply Enhancement
**ACTION:** If the user selected 'Apply', you MUST immediately use your file editing tool to insert the drafted block into the target file. **Do not just say you will do it, execute the tool.**

### 6b. When No Match Found (New Skills)
For insights with no match (score <40), present the user with a summary of the domain and keywords.
Ask: `"Propose new skill for {domain}?" [y/n]`
If approved, generate the new skill directory and file.

---

## Quality Gates

### Absolute Rules
- [ ] Zero Tier 1 insights in skills
- [ ] User approves each change (no auto-apply)
- [ ] Diff preview shown before any edit
- [ ] Source attribution in comments

## Examples

### Example 1: Extracting from Documentation
**Source**: User runs `/learn https://svelte.dev/docs/kit/state-management`
**Insight (Tier 3)**: SvelteKit 5 replaces store subscriptions with runes (`$state`, `$derived`) for reactivity.
**Target Skill**: `sveltekit-patterns`
**Agent Output Preview**:
```markdown
## Enhancement Proposal (Score: 85, Tier: 3)

**Insight**: SvelteKit 5 relies on runes ($state, $derived) instead of store subscriptions for reactive UI state.
**Target Skill**: skills/sveltekit-patterns/SKILL.md
**Section**: Patterns

**Proposed Addition**:
### Runes vs Stores (Svelte 5)
<!-- Source: https://svelte.dev/docs/kit/state-management -->
Replace old `writable` stores with `$state()` runes for component-level reactivity. Do not use `$:` for derived state; use `$derived()` instead.

Apply this enhancement? [y/n/edit]
```

Related Skills

adapting-transfer-learning-models

25
from ComeOnOliver/skillshub

This skill automates the adaptation of pre-trained machine learning models using transfer learning techniques. It is triggered when the user requests assistance with fine-tuning a model, adapting a pre-trained model to a new dataset, or performing transfer learning. It analyzes the user's requirements, generates code for adapting the model, includes data validation and error handling, provides performance metrics, and saves artifacts with documentation. Use this skill when you need to leverage existing models for new tasks or datasets, optimizing for performance and efficiency.

training-machine-learning-models

25
from ComeOnOliver/skillshub

Builds and trains machine learning models with automated workflows. Analyzes datasets, selects model types (classification, regression), configures parameters, trains with cross-validation, and saves model artifacts. Use when asked to "train model" or "evalua... Trigger with relevant phrases based on skill purpose.

sklearn-pipeline-builder

25
from ComeOnOliver/skillshub

Sklearn Pipeline Builder - Auto-activating skill for ML Training. Triggers on: "sklearn pipeline builder". Part of the ML Training skill category.

evaluating-machine-learning-models

25
from ComeOnOliver/skillshub

This skill allows Claude to evaluate machine learning models using a comprehensive suite of metrics. It should be used when the user requests model performance analysis, validation, or testing. Claude can use this skill to assess model accuracy, precision, recall, F1-score, and other relevant metrics. Trigger this skill when the user mentions "evaluate model", "model performance", "testing metrics", "validation results", or requests a comprehensive "model evaluation".

deploying-machine-learning-models

25
from ComeOnOliver/skillshub

This skill enables Claude to deploy machine learning models to production environments. It automates the deployment workflow, implements best practices for serving models, optimizes performance, and handles potential errors. Use this skill when the user requests to deploy a model, serve a model via an API, or put a trained model into a production environment. The skill is triggered by requests containing terms like "deploy model," "productionize model," "serve model," or "model deployment."

learning-rate-scheduler

25
from ComeOnOliver/skillshub

Learning Rate Scheduler - Auto-activating skill for ML Training. Triggers on: "learning rate scheduler". Part of the ML Training skill category.

engineering-features-for-machine-learning

25
from ComeOnOliver/skillshub

This skill empowers Claude to perform feature engineering tasks for machine learning. It creates, selects, and transforms features to improve model performance. Use this skill when the user requests feature creation, feature selection, feature transformation, or any request that involves improving the features used in a machine learning model. Trigger terms include "feature engineering", "feature selection", "feature transformation", "create features", "select features", "transform features", "improve model performance", and similar phrases related to feature manipulation.

explaining-machine-learning-models

25
from ComeOnOliver/skillshub

This skill enables an AI assistant to provide interpretability and explainability for machine learning models. It is triggered when the user requests explanations for model predictions, insights into feature importance, or help understanding model behavior... Use when appropriate context is detected. Trigger with relevant phrases based on skill purpose.

optimizing-deep-learning-models

25
from ComeOnOliver/skillshub

This skill optimizes deep learning models using various techniques. It is triggered when the user requests improvements to model performance, such as increasing accuracy, reducing training time, or minimizing resource consumption. The skill leverages advanced optimization algorithms like Adam, SGD, and learning rate scheduling. It analyzes the existing model architecture, training data, and performance metrics to identify areas for enhancement. The skill then automatically applies appropriate optimization strategies and generates optimized code. Use this skill when the user mentions "optimize deep learning model", "improve model accuracy", "reduce training time", or "optimize learning rate".

learning-a-tool

25
from ComeOnOliver/skillshub

Create learning paths for programming tools, and define what information should be researched to create learning guides. Use when user asks to learn, understand, or get started with any programming tool, library, or framework.

skill-learning

25
from ComeOnOliver/skillshub

Extracts actionable knowledge from external sources and enhances existing skills using a 4-tier novelty framework. Use when learning from URLs, documentation, or codebases. Proactively use when the user asks to extract patterns from a reference repository or skill marketplace.

ship-learn-next

25
from ComeOnOliver/skillshub

Transform learning content (like YouTube transcripts, articles, tutorials) into actionable implementation plans using the Ship-Learn-Next framework. Use when user wants to turn advice, lessons, or educational content into concrete action steps, reps, or a learning quest.