machine-learning

Machine learning development patterns, model training, evaluation, and deployment. Use when building ML pipelines, training models, feature engineering, model evaluation, or deploying ML systems to production.

242 stars

Best use case

machine-learning is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It packages machine learning development patterns across the full lifecycle: building ML pipelines, feature engineering, training and evaluating models, and deploying ML systems to production.


Users can expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "machine-learning" skill to help with this workflow task. Context: Machine learning development patterns, model training, evaluation, and deployment. Use when building ML pipelines, training models, feature engineering, model evaluation, or deploying ML systems to production.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/machine-learning/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/89jobrien/machine-learning/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/machine-learning/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How machine-learning Compares

| Feature / Agent | machine-learning | Standard Approach |
|-----------------|------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Machine learning development patterns, model training, evaluation, and deployment. Use when building ML pipelines, training models, feature engineering, model evaluation, or deploying ML systems to production.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Machine Learning

Comprehensive machine learning skill covering the full ML lifecycle from experimentation to production deployment.

## When to Use This Skill

- Building machine learning pipelines
- Feature engineering and data preprocessing
- Model training, evaluation, and selection
- Hyperparameter tuning and optimization
- Model deployment and serving
- ML experiment tracking and versioning
- Production ML monitoring and maintenance

## ML Development Lifecycle

### 1. Problem Definition

**Problem Types:**

- Binary classification (spam/not spam)
- Multi-class classification (image categories)
- Multi-label classification (document tags)
- Regression (price prediction)
- Clustering (customer segmentation)
- Ranking (search results)
- Anomaly detection (fraud detection)

**Success Metrics by Problem Type:**

| Problem Type | Primary Metrics | Secondary Metrics |
|--------------|-----------------|-------------------|
| Binary Classification | AUC-ROC, F1 | Precision, Recall, PR-AUC |
| Multi-class | Macro F1, Accuracy | Per-class metrics |
| Regression | RMSE, MAE | R², MAPE |
| Ranking | NDCG, MAP | MRR |
| Clustering | Silhouette, Calinski-Harabasz | Davies-Bouldin |
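As a quick illustration, here is how a few of the binary-classification metrics above might be computed with scikit-learn; the labels and probabilities are toy placeholders:

```python
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

y_true = [0, 0, 1, 1, 1]             # ground-truth labels (toy data)
y_prob = [0.1, 0.4, 0.35, 0.8, 0.9]  # predicted positive-class probabilities
y_pred = [int(p >= 0.5) for p in y_prob]  # hard labels, thresholded at 0.5

print("AUC-ROC  :", roc_auc_score(y_true, y_prob))   # uses probabilities
print("F1       :", f1_score(y_true, y_pred))        # uses hard labels
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
```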

### 2. Data Preparation

**Data Quality Checks:**

- Missing value analysis and imputation strategies
- Outlier detection and handling
- Data type validation
- Distribution analysis
- Target leakage detection
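A rough first pass over these checks with pandas might look like the following sketch; `df` and its column names are hypothetical:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, 30, np.nan, 200],
                   "income": [40_000, 52_000, 48_000, 50_000],
                   "target": [0, 1, 0, 1]})  # toy data

print(df.isna().mean())  # missing-value rate per column
print(df.dtypes)         # data-type validation

# IQR-based outlier candidates for a numeric column
q1, q3 = df["age"].quantile([0.25, 0.75])
iqr = q3 - q1
print(df[(df["age"] < q1 - 1.5 * iqr) | (df["age"] > q3 + 1.5 * iqr)])

# Crude target-leakage signal: features suspiciously correlated with the target
print(df.corr(numeric_only=True)["target"].abs().sort_values(ascending=False))
```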

**Feature Engineering Patterns:**

- Numerical: scaling, binning, log transforms, polynomial features
- Categorical: one-hot, target encoding, frequency encoding, embeddings
- Temporal: lag features, rolling statistics, cyclical encoding
- Text: TF-IDF, word embeddings, transformer embeddings
- Geospatial: distance features, clustering, grid encoding
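Several of these patterns compose naturally in a scikit-learn `ColumnTransformer`; a minimal sketch, assuming hypothetical numeric and categorical column names:

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_features = ["age", "income"]      # hypothetical columns
categorical_features = ["city", "plan"]   # hypothetical columns

preprocessor = ColumnTransformer([
    # Numerical: impute then scale
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_features),
    # Categorical: impute then one-hot encode, tolerating unseen categories
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]),
     categorical_features),
])
# preprocessor.fit_transform(df) then yields a model-ready feature matrix.
```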

**Train/Test Split Strategies:**

- Random split (standard)
- Stratified split (imbalanced classes)
- Time-based split (temporal data)
- Group split (prevent data leakage)
- K-fold cross-validation
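Sketches of three of these strategies with scikit-learn, on generated toy data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GroupKFold, TimeSeriesSplit, train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)

# Stratified split: preserves the class ratio in both halves
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Time-based split: each fold trains on the past, tests on the future
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    pass  # fit on X[train_idx], evaluate on X[test_idx]

# Group split: all rows of one entity (e.g. one user) stay on one side
groups = np.random.default_rng(0).integers(0, 100, size=len(X))
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=groups):
    pass
```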

### 3. Model Selection

**Algorithm Selection Guide:**

| Data Size / Type | Problem | Recommended Models |
|------------------|---------|-------------------|
| Small (<10K) | Classification | Logistic Regression, SVM, Random Forest |
| Small (<10K) | Regression | Linear Regression, Ridge, SVR |
| Medium (10K-1M) | Classification | XGBoost, LightGBM, Neural Networks |
| Medium (10K-1M) | Regression | XGBoost, LightGBM, Neural Networks |
| Large (>1M) | Any | Deep Learning, Distributed training |
| Tabular | Any | Gradient Boosting (XGBoost, LightGBM, CatBoost) |
| Images | Classification | CNN, ResNet, EfficientNet, Vision Transformers |
| Text | NLP | Transformers (BERT, RoBERTa, GPT) |
| Sequential | Time Series | LSTM, Transformer, Prophet |
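For medium-sized tabular data, a quick cross-validated bake-off of the recommended models is often the fastest way to choose. A sketch on toy data, assuming the xgboost and lightgbm packages are installed:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier     # assumes xgboost is installed
from lightgbm import LGBMClassifier   # assumes lightgbm is installed

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "xgboost": XGBClassifier(n_estimators=300, learning_rate=0.1),
    "lightgbm": LGBMClassifier(n_estimators=300, learning_rate=0.1),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```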

### 4. Model Training

**Hyperparameter Tuning:**

- Grid Search: exhaustive, good for small spaces
- Random Search: efficient, good for large spaces
- Bayesian Optimization: smart exploration (Optuna, Hyperopt)
- Early stopping: prevent overfitting
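A minimal Bayesian-optimization loop with Optuna, tuning the XGBoost parameters listed in the table below; the search ranges are illustrative:

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier  # assumes xgboost is installed

X, y = make_classification(n_samples=2000, random_state=42)

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    return cross_val_score(XGBClassifier(**params), X, y, cv=3,
                           scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)
print(study.best_params)
```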

**Common Hyperparameters:**

| Model | Key Parameters |
|-------|---------------|
| XGBoost | learning_rate, max_depth, n_estimators, subsample |
| LightGBM | num_leaves, learning_rate, n_estimators, feature_fraction |
| Random Forest | n_estimators, max_depth, min_samples_split |
| Neural Networks | learning_rate, batch_size, layers, dropout |

### 5. Model Evaluation

**Evaluation Best Practices:**

- Always use held-out test set for final evaluation
- Use cross-validation during development
- Check for overfitting (train vs validation gap)
- Evaluate on multiple metrics
- Analyze errors qualitatively
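The train-versus-validation gap check can be automated during cross-validation; a sketch with an arbitrary unfitted estimator on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=2000, random_state=42)
model = RandomForestClassifier(random_state=42)  # any unfitted estimator

cv = cross_validate(model, X, y, cv=5, scoring="roc_auc",
                    return_train_score=True)
train_auc, val_auc = cv["train_score"].mean(), cv["test_score"].mean()
# A large gap between train and validation scores signals overfitting
print(f"train AUC {train_auc:.3f} | val AUC {val_auc:.3f} | "
      f"gap {train_auc - val_auc:.3f}")
```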

**Handling Imbalanced Data:**

- Resampling: SMOTE, undersampling
- Class weights: weighted loss functions
- Threshold tuning: optimize decision threshold
- Evaluation: use PR-AUC over ROC-AUC
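A sketch combining two of these techniques, class weights and decision-threshold tuning, on a synthetic imbalanced dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

# Class weights: penalize mistakes on the rare class more heavily
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

# Threshold tuning: pick the cutoff that maximizes F1 on validation data
probs = model.predict_proba(X_val)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_val, probs)
f1 = 2 * precision * recall / (precision + recall + 1e-12)
print("best threshold:", thresholds[np.argmax(f1[:-1])])
```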

### 6. Production Deployment

**Model Serving Patterns:**

- REST API (Flask, FastAPI, TF Serving)
- Batch inference (scheduled jobs)
- Streaming (real-time predictions)
- Edge deployment (mobile, IoT)
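A minimal REST serving sketch with FastAPI; the model path and feature names are hypothetical:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("models/model_v1.0.0/model.pkl")  # hypothetical path

class Features(BaseModel):
    age: float
    income: float

@app.post("/predict")
def predict(features: Features):
    # Score a single row and return the positive-class probability
    proba = model.predict_proba([[features.age, features.income]])[0, 1]
    return {"probability": float(proba)}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```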

**Production Considerations:**

- Latency requirements (p50, p95, p99)
- Throughput (requests per second)
- Model size and memory footprint
- Fallback strategies
- A/B testing framework

### 7. Monitoring & Maintenance

**What to Monitor:**

- Prediction latency
- Input feature distributions (data drift)
- Prediction distributions (concept drift)
- Model performance metrics
- Error rates and types
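A simple data-drift check compares a feature's live distribution against its training-time distribution, for example with a two-sample Kolmogorov-Smirnov test; the arrays here are simulated stand-ins:

```python
import numpy as np
from scipy.stats import ks_2samp

train_values = np.random.default_rng(0).normal(0.0, 1, 10_000)  # training snapshot
live_values = np.random.default_rng(1).normal(0.3, 1, 2_000)    # recent traffic

stat, p_value = ks_2samp(train_values, live_values)
if p_value < 0.01:
    print(f"drift suspected: KS={stat:.3f}, p={p_value:.2e}")
```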

**Retraining Triggers:**

- Performance degradation below threshold
- Significant data drift detected
- Scheduled retraining (daily, weekly)
- New training data available

## MLOps Best Practices

### Experiment Tracking

Track for every experiment:

- Code version (git commit)
- Data version (hash or version ID)
- Hyperparameters
- Metrics (train, validation, test)
- Model artifacts
- Environment (packages, versions)
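One way to capture this checklist with MLflow; the experiment name and logged values are illustrative:

```python
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("git_commit", "abc1234")        # code version
    mlflow.log_param("data_version", "v2024-01-15")  # data version
    mlflow.log_params({"learning_rate": 0.1, "max_depth": 6})  # hyperparameters
    mlflow.log_metric("val_auc", 0.91)                         # metrics
    mlflow.log_artifact("models/model_v1.0.0/model.pkl")  # model artifact
    mlflow.log_artifact("requirements.txt")               # environment
```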

### Model Versioning

```
models/
├── model_v1.0.0/
│   ├── model.pkl
│   ├── metadata.json
│   ├── requirements.txt
│   └── metrics.json
├── model_v1.1.0/
└── model_v2.0.0/
```
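A small helper that writes the `metadata.json` from the layout above might look like this; the fields are illustrative:

```python
import json
from pathlib import Path

def save_model_metadata(version: str, git_commit: str, metrics: dict) -> None:
    """Create models/model_v<version>/ and write its metadata.json."""
    model_dir = Path("models") / f"model_v{version}"
    model_dir.mkdir(parents=True, exist_ok=True)
    (model_dir / "metadata.json").write_text(
        json.dumps({"version": version, "git_commit": git_commit,
                    "metrics": metrics}, indent=2))

save_model_metadata("1.1.0", "abc1234", {"val_auc": 0.91})
```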

### CI/CD for ML

1. **Continuous Integration:**
   - Data validation tests
   - Model training tests
   - Performance regression tests

2. **Continuous Deployment:**
   - Staging environment validation
   - Shadow mode testing
   - Gradual rollout (canary)
   - Automatic rollback
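The CI-side checks can be ordinary pytest tests; a sketch in which the file paths and the regression threshold are assumptions:

```python
import json

import pandas as pd

REQUIRED_COLUMNS = {"age", "income", "target"}
MIN_VAL_AUC = 0.85  # assumed regression threshold

def test_training_data_schema():
    df = pd.read_parquet("data/train.parquet")  # hypothetical path
    assert REQUIRED_COLUMNS <= set(df.columns)
    assert df["target"].isin([0, 1]).all()

def test_no_performance_regression():
    with open("models/model_v1.1.0/metrics.json") as f:  # hypothetical path
        metrics = json.load(f)
    assert metrics["val_auc"] >= MIN_VAL_AUC
```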

## Reference Files

For detailed patterns and code examples, load reference files as needed:

- **`references/preprocessing.md`** - Data preprocessing patterns and feature engineering techniques
- **`references/model_patterns.md`** - Model architecture patterns and implementation examples
- **`references/evaluation.md`** - Comprehensive evaluation strategies and metrics

## Integration with Other Skills

- **performance** - For optimizing inference latency
- **testing** - For ML-specific testing patterns
- **database-optimization** - For feature store queries
- **debugging** - For model debugging and error analysis

Related Skills

All related skills below are from aiskillstore/marketplace.

machine-learning-ops-ml-pipeline
Design and implement a complete ML pipeline for: $ARGUMENTS

cc-skill-continuous-learning
Development skill from everything-claude-code

when-optimizing-agent-learning-use-reasoningbank-intelligence
Implement adaptive learning with ReasoningBank for pattern recognition, strategy optimization, and continuous improvement.

reasoningbank-adaptive-learning-with-agentdb
Implement ReasoningBank adaptive learning with AgentDB for trajectory tracking, verdict judgment, memory distillation, and pattern recognition to build self-learning agents that improve decision-making through experience.

agentdb-reinforcement-learning-training
Train AI agents using AgentDB's 9 reinforcement learning algorithms, including Q-Learning, DQN, PPO, and Actor-Critic. Build self-learning agents, implement RL training loops with experience replay, and deploy optimized models to production.

agentdb-learning-plugins
Create and train AI learning plugins with AgentDB's 9 reinforcement learning algorithms, including Decision Transformer, Q-Learning, SARSA, Actor-Critic, and more. Use when building self-learning agents, implementing RL, or optimizing agent behavior through experience.

virtual-machine-management
Create, manage, and optimize virtual machines in Proxmox. Control VM lifecycle, monitor performance, adjust resources, and plan VM deployment strategies.

cross-task-learning
Pattern for aggregating insights across multiple tasks to enable data-driven evolution.

learning-objectives
Generate measurable learning outcomes aligned with Bloom's taxonomy and CEFR proficiency levels for educational content. Use this skill when educators need to define what students will achieve, create learning objectives for curriculum planning, or ensure objectives are specific and testable rather than vague. Breaks complex topics into progressively building learning goals with clear assessment methods and success criteria.

chinese-learning-assistant
For learners progressing from HSK level 4 toward fluency. Analyzes the usage contexts and naturalness of Chinese expressions and improves writing toward native-like, fluent phrasing. Supports comprehension of content such as bilibili and conversation with native speakers, with real usage examples found via web search.

azure-quotas
Check and manage Azure quotas and usage across providers. For deployment planning, capacity validation, and region selection. WHEN: "check quotas", "service limits", "current usage", "request quota increase", "quota exceeded", "validate capacity", "regional availability", "provisioning limits", "vCPU limit", "how many vCPUs available in my subscription".

raindrop-io
Manage Raindrop.io bookmarks with AI assistance. Save and organize bookmarks, search your collection, manage reading lists, and organize research materials. Use when working with bookmarks, web research, reading lists, or when user mentions Raindrop.io.