detecting-data-anomalies

This skill identifies anomalies and outliers in datasets using machine learning algorithms. Use it when analyzing data for unusual patterns, outliers, or unexpected deviations from normal behavior. Trigger it with phrases like "detect anomalies", "find outliers", or "identify unusual patterns".

25 stars

Best use case

detecting-data-anomalies is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

It identifies anomalies and outliers in datasets using machine learning algorithms, covering analysis of unusual patterns, outliers, and unexpected deviations from normal behavior.

Teams using detecting-data-anomalies should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/detecting-data-anomalies/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/detecting-data-anomalies/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/detecting-data-anomalies/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How detecting-data-anomalies Compares

| Feature / Agent | detecting-data-anomalies | Standard Approach |
|-----------------|--------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It identifies anomalies and outliers in datasets using statistical and machine learning algorithms (Isolation Forest, One-Class SVM, Local Outlier Factor, and autoencoders), handling the pipeline from data ingestion and feature scaling through threshold tuning and result interpretation. Trigger it with phrases like "detect anomalies", "find outliers", or "identify unusual patterns".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Detecting Data Anomalies

## Overview

Identify anomalies and outliers in datasets using statistical and machine learning algorithms including Isolation Forest, One-Class SVM, Local Outlier Factor, and autoencoders. This skill handles the full detection pipeline from data ingestion and feature scaling through algorithm selection, threshold tuning, and result interpretation with anomaly scoring.

## Prerequisites

- Python 3.9+ with scikit-learn >= 1.3 (`pip install scikit-learn`)
- pandas and NumPy for data manipulation (`pip install pandas numpy`)
- matplotlib or seaborn for anomaly visualizations (`pip install matplotlib seaborn`)
- Dataset in CSV, JSON, Parquet, or database-queryable format
- Minimum 500 data points for statistical significance (1000+ recommended)
- Optional: PyTorch or TensorFlow for autoencoder-based detection on complex patterns

## Instructions

1. Load the dataset using the Read tool and verify schema, column types, and row count
2. Profile feature distributions using descriptive statistics to understand baseline behavior
3. Handle missing values via imputation (median for numeric, mode for categorical) or row exclusion
4. Apply StandardScaler or MinMaxScaler to numeric features to normalize magnitude differences
5. Select the detection algorithm based on data characteristics:
   - **Isolation Forest**: high-dimensional data, no assumptions on distribution
   - **One-Class SVM**: well-defined normal class with clear decision boundary
   - **Local Outlier Factor**: density-varying data with local anomaly patterns
   - **Autoencoder**: complex temporal or image data with non-linear relationships
6. Set the contamination parameter to the expected anomaly proportion (start with 0.01-0.05)
7. Fit the model on the training partition and generate anomaly scores for each data point
8. Apply the decision threshold to classify points as anomalous (-1) or normal (+1), following the scikit-learn labeling convention
9. Analyze flagged anomalies for common characteristics, temporal clusters, or feature correlations
10. Generate a summary report with detection counts, score distributions, and visualization plots

See `${CLAUDE_SKILL_DIR}/references/implementation.md` for the detailed implementation guide.
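The scaling, fitting, and scoring steps (4 through 8) can be sketched with scikit-learn as follows; the feature names and synthetic data are illustrative stand-ins, not part of the skill itself:

```python
# Minimal sketch: scale features, fit Isolation Forest, score and label points.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "packet_count": rng.normal(100, 10, 1000),
    "byte_volume": rng.normal(5000, 400, 1000),
})
df.iloc[:5, :] = [[500, 50000]] * 5                 # inject a few obvious outliers

X = StandardScaler().fit_transform(df)              # step 4: normalize magnitudes
model = IsolationForest(contamination=0.01,         # step 6: expected anomaly fraction
                        random_state=0)
labels = model.fit_predict(X)                       # steps 7-8: -1 = anomaly, +1 = normal
scores = model.decision_function(X)                 # lower score = more anomalous

print(f"flagged {np.sum(labels == -1)} of {len(df)} points")
```

With `contamination=0.01` the model flags roughly 1% of points, which here includes the five injected outliers.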

## Output

- Anomaly detection summary: total points, anomaly count, contamination rate
- Per-record anomaly scores with classification labels
- Algorithm configuration: model type, contamination, distance metric, threshold
- Feature importance ranking showing which dimensions drive anomaly flags
- Visualization: scatter plot of anomaly scores, distribution histogram, t-SNE cluster plot
- CSV export of flagged records with anomaly scores and contributing features

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Insufficient data volume | Fewer than 100 data points for model fitting | Collect additional data or switch to simple statistical methods (z-score, IQR) |
| High false positive rate | Contamination parameter set too high or features not scaled | Lower contamination to 0.01; verify StandardScaler applied; refine feature selection |
| Algorithm OOM on large dataset | Isolation Forest or LOF exceeds available memory | Subsample data for training; use `max_samples` parameter; switch to streaming approach |
| Feature scaling mismatch | Mixed numeric and categorical features without proper encoding | One-hot encode categoricals separately; scale numeric features independently |
| No ground truth for validation | Unlabeled dataset prevents accuracy measurement | Use domain expert review on top-N anomalies; implement feedback loop to refine threshold |

See `${CLAUDE_SKILL_DIR}/references/errors.md` for the full error reference.
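The simple statistical fallback mentioned in the first error row (z-score and IQR tests for datasets too small to fit a model) can be sketched with plain NumPy; the values below are made up for illustration:

```python
# z-score and IQR outlier tests for small samples.
import numpy as np

values = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 25.0])  # 25.0 is the outlier

# z-score test: flag points more than 3 standard deviations from the mean.
# On a tiny sample the extreme value inflates the std, so its z-score
# stays below 3 (masking) -- a known weakness of this test.
z = (values - values.mean()) / values.std()
z_outliers = np.abs(z) > 3

# IQR test: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
iqr_outliers = (values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)

print(values[iqr_outliers])  # the IQR test flags 25.0
```

The masking effect shown here is one reason the IQR test is usually preferred on very small samples.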

## Examples

**Scenario 1: Network Intrusion Detection** -- Apply Isolation Forest to 50K network flow records with features: packet count, byte volume, duration, protocol type. Expected contamination: 2%. Target: flag port-scan and DDoS patterns with precision above 0.85.

**Scenario 2: Manufacturing Quality Control** -- Run LOF on sensor readings (temperature, vibration, pressure) from 10K production cycles. Detect equipment degradation anomalies. Visualize flagged cycles on a time-series plot with normal operating bands.
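The LOF step in Scenario 2 might look like the following; the sensor distributions and the degradation offset are synthetic stand-ins, not real manufacturing data:

```python
# Sketch of LOF on synthetic (temperature, vibration, pressure) readings.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
cycles = rng.normal(loc=[70.0, 0.5, 30.0],      # nominal operating point
                    scale=[2.0, 0.05, 1.0],
                    size=(10_000, 3))
cycles[:20] += [15.0, 0.4, 8.0]                 # degraded cycles drift off the band

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.005)
labels = lof.fit_predict(cycles)                # -1 marks locally sparse cycles

print(f"flagged {np.sum(labels == -1)} of {len(cycles)} cycles")
```

Note that LOF compares each point's density to its neighbors, so a tight cluster of degraded cycles can partially mask itself when `n_neighbors` is no larger than the cluster.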

**Scenario 3: Financial Transaction Monitoring** -- Train an autoencoder on 100K legitimate transactions. Reconstruct test transactions and flag those with reconstruction error above the 99th percentile. Report flagged transactions with amount, merchant category, and time-of-day features.
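The 99th-percentile thresholding in Scenario 3 reduces to a few lines once reconstruction errors are in hand; the synthetic error array below stands in for a trained autoencoder's output:

```python
# Flag transactions whose reconstruction error exceeds the 99th percentile.
import numpy as np

rng = np.random.default_rng(7)
errors = rng.exponential(scale=1.0, size=100_000)  # stand-in reconstruction errors

threshold = np.percentile(errors, 99)   # cutoff from the scenario
flagged = errors > threshold            # transactions to report

print(f"threshold={threshold:.3f}, flagged={flagged.sum()}")
```

By construction this flags roughly 1% of transactions; the flagged indices would then be joined back to amount, merchant category, and time-of-day features for the report.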

## Resources

- [scikit-learn Anomaly Detection](https://scikit-learn.org/stable/modules/outlier_detection.html) -- Isolation Forest, LOF, One-Class SVM
- [PyOD Library](https://pyod.readthedocs.io/) -- 40+ outlier detection algorithms with unified API
- Autoencoder anomaly detection: Keras/PyTorch reconstruction-error approach
- Feature scaling: StandardScaler, RobustScaler, MinMaxScaler selection guide
- Evaluation without labels: silhouette analysis, domain expert review protocols

Related Skills

College Football Data (CFB)

from ComeOnOliver/skillshub

Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.

College Basketball Data (CBB)

from ComeOnOliver/skillshub

Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.

validating-database-integrity

from ComeOnOliver/skillshub

Use when you need to ensure database integrity through comprehensive data validation. This skill validates data types, ranges, formats, referential integrity, and business rules. Trigger with phrases like "validate database data", "implement data validation rules", "enforce data integrity constraints", or "validate data formats".

forecasting-time-series-data

from ComeOnOliver/skillshub

This skill enables Claude to forecast future values based on historical time series data. It analyzes time-dependent data to identify trends, seasonality, and other patterns. Use this skill when the user asks to predict future values of a time series, analyze trends in data over time, or requires insights into time-dependent data. Trigger terms include "forecast," "predict," "time series analysis," "future values," and requests involving temporal data.

generating-test-data

from ComeOnOliver/skillshub

This skill enables Claude to generate realistic test data for software development. It uses the test-data-generator plugin to create users, products, orders, and custom schemas for comprehensive testing. Use this skill when you need to populate databases, simulate user behavior, or create fixtures for automated tests. Trigger phrases include "generate test data", "create fake users", "populate database", "generate product data", "create test orders", or "generate data based on schema". This skill is especially useful for populating testing environments or creating sample data for demonstrations.

test-data-builder

from ComeOnOliver/skillshub

Test Data Builder - Auto-activating skill for Test Automation. Triggers on: "test data builder". Part of the Test Automation skill category.

detecting-sql-injection-vulnerabilities

from ComeOnOliver/skillshub

This skill enables Claude to detect SQL injection vulnerabilities in code. It uses the sql-injection-detector plugin to analyze codebases, identify potential SQL injection flaws, and provide remediation guidance. Use this skill when the user asks to find SQL injection vulnerabilities, scan for SQL injection, or check code for SQL injection risks. The skill is triggered by phrases like "detect SQL injection", "scan for SQLi", or "check for SQL injection vulnerabilities".

splitting-datasets

from ComeOnOliver/skillshub

Splits datasets into training, validation, and testing sets for ML model development. Use when requesting "split dataset", "train-test split", or "data partitioning".

scanning-database-security

from ComeOnOliver/skillshub

Use when you need to work with security and compliance. This skill provides security scanning and vulnerability detection with comprehensive guidance and automation. Trigger with phrases like "scan for vulnerabilities", "implement security controls", or "audit security".

preprocessing-data-with-automated-pipelines

from ComeOnOliver/skillshub

Automates data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation".

detecting-performance-regressions

from ComeOnOliver/skillshub

This skill enables Claude to automatically detect performance regressions in a CI/CD pipeline. It analyzes performance metrics, such as response time and throughput, and compares them against baselines or thresholds. Use this skill when the user requests to "detect performance regressions", "analyze performance metrics for regressions", or "find performance degradation" in a CI/CD environment. The skill is also triggered when the user mentions "baseline comparison", "statistical significance analysis", or "performance budget violations". It helps identify and report performance issues early in the development cycle.

optimizing-database-connection-pooling

from ComeOnOliver/skillshub

Use when you need to work with connection management. This skill provides connection pooling and management with comprehensive guidance and automation. Trigger with phrases like "manage connections", "configure pooling", or "optimize connection usage".