datanalysis-credit-risk

Credit risk data cleaning and variable screening pipeline for pre-loan modeling. Use when working with raw credit data that needs quality assessment, missing value analysis, or variable selection before modeling. It covers data loading and formatting, abnormal period filtering, missing rate calculation, high-missing variable removal, low-IV variable filtering, high-PSI variable removal, Null Importance denoising, high-correlation variable removal, and cleaning report generation. Applicable scenarios: credit risk data cleaning, variable screening, and pre-loan modeling preprocessing.

23 stars

Best use case

datanalysis-credit-risk is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using datanalysis-credit-risk should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/datanalysis-credit-risk/SKILL.md --create-dirs "https://raw.githubusercontent.com/christophacham/agent-skills-library/main/skills/database/datanalysis-credit-risk/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/datanalysis-credit-risk/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How datanalysis-credit-risk Compares

| Feature / Agent | datanalysis-credit-risk | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It provides a credit risk data cleaning and variable screening pipeline for pre-loan modeling: data loading and formatting, abnormal period filtering, missing rate calculation, high-missing and low-IV variable removal, high-PSI variable removal, Null Importance denoising, high-correlation variable removal, and cleaning report generation.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Data Cleaning and Variable Screening

## Quick Start

```bash
# Run the complete data cleaning pipeline
python ".github/skills/datanalysis-credit-risk/scripts/example.py"
```

## Complete Process Description

The data cleaning pipeline consists of the following 11 steps, each executed independently without deleting the original data:

1. **Get Data** - Load and format raw data
2. **Organization Sample Analysis** - Statistics of sample count and bad sample rate for each organization
3. **Separate OOS Data** - Separate out-of-sample (OOS) samples from modeling samples
4. **Filter Abnormal Months** - Remove months with insufficient bad sample count or total sample count
5. **Calculate Missing Rate** - Calculate overall and organization-level missing rates for each feature
6. **Drop High Missing Rate Features** - Remove features with overall missing rate exceeding threshold
7. **Drop Low IV Features** - Remove features with overall IV too low or IV too low in too many organizations
8. **Drop High PSI Features** - Remove features with unstable PSI
9. **Null Importance Denoising** - Remove noise features using label permutation method
10. **Drop High Correlation Features** - Remove high correlation features based on original gain
11. **Export Report** - Generate Excel report containing details and statistics of all steps
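The screening steps above can be sketched as a sequence of filters over a toy feature table. The thresholds mirror the documented defaults, but the data and the three predicates are made up, and unlike the real pipeline (which keeps the original data and only records drops), this sketch removes entries in place:

```python
# Toy per-feature statistics; real values come from the data itself.
features = {
    "f1": {"missing": 0.8, "iv": 0.30, "psi": 0.05},
    "f2": {"missing": 0.1, "iv": 0.02, "psi": 0.05},
    "f3": {"missing": 0.1, "iv": 0.25, "psi": 0.40},
    "f4": {"missing": 0.1, "iv": 0.20, "psi": 0.05},
}

dropped = {}

def screen(reason, predicate):
    # Drop every feature matching the predicate and record why.
    for feat in list(features):
        if predicate(features[feat]):
            dropped[feat] = reason
            del features[feat]

screen("high_missing", lambda s: s["missing"] > 0.6)  # step 6
screen("low_iv",       lambda s: s["iv"] < 0.1)       # step 7
screen("high_psi",     lambda s: s["psi"] > 0.1)      # step 8

print(sorted(features))  # ['f4']
print(dropped)           # {'f1': 'high_missing', 'f2': 'low_iv', 'f3': 'high_psi'}
```

Each real step also supports organization-level thresholds, which this sketch omits.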

## Core Functions

| Function | Purpose | Module |
|------|------|----------|
| `get_dataset()` | Load and format data | references.func |
| `org_analysis()` | Organization sample analysis | references.func |
| `missing_check()` | Calculate missing rate | references.func |
| `drop_abnormal_ym()` | Filter abnormal months | references.analysis |
| `drop_highmiss_features()` | Drop high missing rate features | references.analysis |
| `drop_lowiv_features()` | Drop low IV features | references.analysis |
| `drop_highpsi_features()` | Drop high PSI features | references.analysis |
| `drop_highnoise_features()` | Null Importance denoising | references.analysis |
| `drop_highcorr_features()` | Drop high correlation features | references.analysis |
| `iv_distribution_by_org()` | IV distribution statistics | references.analysis |
| `psi_distribution_by_org()` | PSI distribution statistics | references.analysis |
| `value_ratio_distribution_by_org()` | Value ratio distribution statistics | references.analysis |
| `export_cleaning_report()` | Export cleaning report | references.analysis |

## Parameter Description

### Data Loading Parameters
- `DATA_PATH`: Data file path (Parquet format recommended)
- `DATE_COL`: Date column name
- `Y_COL`: Label column name
- `ORG_COL`: Organization column name
- `KEY_COLS`: Primary key column name list
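As a concrete illustration, a configuration using these names might look like the following (all values are hypothetical; substitute your own columns and path):

```python
# Hypothetical data loading configuration; adjust to your dataset.
DATA_PATH = "data/loans.parquet"        # Parquet recommended
DATE_COL = "apply_date"                 # date column
Y_COL = "is_bad"                        # binary label column
ORG_COL = "org_id"                      # organization column
KEY_COLS = ["loan_id", "customer_id"]   # primary key columns
```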

### OOS Organization Configuration
- `OOS_ORGS`: Out-of-sample organization list

### Abnormal Month Filtering Parameters
- `min_ym_bad_sample`: Minimum bad sample count per month (default 10)
- `min_ym_sample`: Minimum total sample count per month (default 500)
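The month filter can be sketched in a few lines on synthetic monthly counts (the month keys and counts are made up; threshold names follow the parameters above):

```python
# Thresholds from the documented defaults.
min_ym_bad_sample, min_ym_sample = 10, 500

monthly = {  # ym -> (bad_count, total_count), synthetic
    "2023-01": (25, 800),
    "2023-02": (4, 900),    # too few bad samples
    "2023-03": (30, 300),   # too few samples overall
    "2023-04": (40, 1200),
}

kept = [ym for ym, (bad, total) in monthly.items()
        if bad >= min_ym_bad_sample and total >= min_ym_sample]
print(kept)  # ['2023-01', '2023-04']
```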

### Missing Rate Parameters
- `missing_ratio`: Overall missing rate threshold (default 0.6)
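The overall missing rate is simply the share of null values per feature. A minimal sketch on made-up rows (with `None` as the missing marker):

```python
rows = [  # synthetic records
    {"age": 35, "income": None},
    {"age": None, "income": None},
    {"age": 42, "income": 50000},
    {"age": 29, "income": None},
]

def missing_rate(rows, col):
    return sum(r[col] is None for r in rows) / len(rows)

rates = {c: missing_rate(rows, c) for c in rows[0]}
print(rates)  # {'age': 0.25, 'income': 0.75}

# With the default missing_ratio of 0.6, 'income' would be dropped in step 6.
to_drop = [c for c, r in rates.items() if r > 0.6]
print(to_drop)  # ['income']
```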

### IV Parameters
- `overall_iv_threshold`: Overall IV threshold (default 0.1)
- `org_iv_threshold`: Single organization IV threshold (default 0.1)
- `max_org_threshold`: Maximum tolerated low IV organization count (default 2)
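IV here is assumed to be the standard Information Value, IV = sum over bins of (good% - bad%) * ln(good% / bad%). A sketch on made-up bin counts:

```python
import math

bins = [  # (good_count, bad_count) per bin, synthetic
    (400, 10),
    (300, 30),
    (300, 60),
]
total_good = sum(g for g, _ in bins)  # 1000
total_bad = sum(b for _, b in bins)   # 100

iv = 0.0
for good, bad in bins:
    pg, pb = good / total_good, bad / total_bad
    iv += (pg - pb) * math.log(pg / pb)

print(round(iv, 3))  # 0.624, well above the default 0.1 threshold
```

Zero-count bins need smoothing in practice to avoid division by zero, which the sketch omits.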

### PSI Parameters
- `psi_threshold`: PSI threshold (default 0.1)
- `max_months_ratio`: Maximum unstable month ratio (default 1/3)
- `max_orgs`: Maximum unstable organization count (default 6)
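PSI is assumed to be the standard Population Stability Index, PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%), computed per month against a baseline distribution. A sketch on made-up bin shares:

```python
import math

expected = [0.25, 0.25, 0.25, 0.25]  # baseline month, synthetic
actual = [0.30, 0.25, 0.25, 0.20]    # later month, synthetic

psi = sum((a - e) * math.log(a / e) for a, e in zip(actual, expected))
print(round(psi, 4))  # 0.0203, below the default 0.1 threshold, so stable
```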

### Null Importance Parameters
- `n_estimators`: Number of trees (default 100)
- `max_depth`: Maximum tree depth (default 5)
- `gain_threshold`: Gain difference threshold (default 50)
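The label permutation idea can be illustrated without a tree model: compute a feature's importance on the real labels, recompute it on shuffled labels, and keep the feature only if the real value clearly beats the null runs. The sketch below swaps in absolute correlation as a toy importance measure (the actual pipeline uses gradient boosting gain and a gain-difference threshold):

```python
import random

random.seed(0)

def importance(xs, ys):
    # Absolute sample correlation, a toy stand-in for model gain.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy))

y = [0, 1] * 100                                   # synthetic labels
signal = [yi + random.gauss(0, 0.3) for yi in y]   # informative feature
noise = [random.gauss(0, 1) for _ in y]            # pure noise

results = {}
for name, xs in (("signal", signal), ("noise", noise)):
    actual = importance(xs, y)
    nulls = []
    for _ in range(20):
        shuffled = y[:]
        random.shuffle(shuffled)  # break the feature-label relationship
        nulls.append(importance(xs, shuffled))
    # Keep a feature only if its real importance beats every null run.
    results[name] = actual > max(nulls)

print(results["signal"])  # the informative feature survives
```

The noise feature usually fails the test, since its real importance is drawn from the same distribution as the null importances.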

### High Correlation Parameters
- `max_corr`: Correlation threshold (default 0.9)
- `top_n_keep`: Keep top N features by original gain ranking (default 20)
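Gain-ranked correlation pruning can be sketched greedily: walk features in descending gain and drop any feature whose correlation with an already-kept feature exceeds `max_corr`. The gains and correlation matrix below are made up:

```python
max_corr = 0.9  # documented default
gain = {"f1": 120, "f2": 90, "f3": 60, "f4": 30}
corr = {  # symmetric |correlation|, synthetic
    ("f1", "f2"): 0.95, ("f1", "f3"): 0.20, ("f1", "f4"): 0.10,
    ("f2", "f3"): 0.30, ("f2", "f4"): 0.15, ("f3", "f4"): 0.92,
}

def c(a, b):
    return corr.get((a, b)) or corr.get((b, a))

kept = []
for feat in sorted(gain, key=gain.get, reverse=True):
    # Keep a feature only if it is not too correlated with anything kept.
    if all(c(feat, k) <= max_corr for k in kept):
        kept.append(feat)

print(kept)  # ['f1', 'f3']
```

f2 is dropped because it is nearly collinear with the higher-gain f1, and f4 because of its correlation with f3.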

## Output Report

The generated Excel report contains the following sheets:

1. **汇总** (Summary) - Summary information of all steps, including operation results and conditions
2. **机构样本统计** (Organization Sample Statistics) - Sample count and bad sample rate for each organization
3. **分离OOS数据** (OOS Data Separation) - OOS sample and modeling sample counts
4. **Step4-异常月份处理** (Step4 Abnormal Month Handling) - Abnormal months that were removed
5. **缺失率明细** (Missing Rate Details) - Overall and organization-level missing rates for each feature
6. **Step5-有值率分布统计** (Step5 Value Ratio Distribution Statistics) - Distribution of features across value ratio ranges
7. **Step6-高缺失率处理** (Step6 High Missing Rate Handling) - High missing rate features that were removed
8. **Step7-IV明细** (Step7 IV Details) - IV values of each feature per organization and overall
9. **Step7-IV处理** (Step7 IV Handling) - Features that fail the IV conditions and their low-IV organizations
10. **Step7-IV分布统计** (Step7 IV Distribution Statistics) - Distribution of features across IV ranges
11. **Step8-PSI明细** (Step8 PSI Details) - PSI values of each feature per organization per month
12. **Step8-PSI处理** (Step8 PSI Handling) - Features that fail the PSI conditions and their unstable organizations
13. **Step8-PSI分布统计** (Step8 PSI Distribution Statistics) - Distribution of features across PSI ranges
14. **Step9-null importance处理** (Step9 Null Importance Handling) - Noise features that were removed
15. **Step10-高相关性剔除** (Step10 High Correlation Removal) - High correlation features that were removed

## Features

- **Interactive Input**: Parameters can be input before each step execution, with default values supported
- **Independent Execution**: Each step is executed independently without deleting original data, facilitating comparative analysis
- **Complete Report**: Generate complete Excel report containing details, statistics, and distributions
- **Multi-process Support**: IV and PSI calculations support multi-process acceleration
- **Organization-level Analysis**: Support organization-level statistics and modeling/OOS distinction

Related Skills

All of the following are from christophacham/agent-skills-library (23 stars):

- 21risk-automation: Automate 21risk tasks via Rube MCP (Composio). Always search tools first for current schemas.
- supply-chain-risk-auditor: Identifies dependencies at heightened risk of exploitation or takeover. Use when assessing supply chain attack surface, evaluating dependency health, or scoping security engagements.
- renderform-automation: Automate Renderform tasks via Rube MCP (Composio). Always search tools first for current schemas.
- fpf:query: Search the FPF knowledge base and display hypothesis details with assurance information.
- quality-nonconformance: Codified expertise for quality control, non-conformance investigation, root cause analysis, corrective action, and supplier quality management in regulated manufacturing.
- python-performance-optimization: Profile and optimize Python code using cProfile, memory profilers, and performance best practices. Use when debugging slow Python code, optimizing bottlenecks, or improving application performance.
- pyopenms: Complete mass spectrometry analysis platform. Use for proteomics workflows: feature detection, peptide identification, protein quantification, and complex LC-MS/MS pipelines. Supports extensive file formats and algorithms. Best for proteomics and comprehensive MS data processing. For simple spectral comparison and metabolite ID, use matchms.
- pymatgen: Materials science toolkit. Crystal structures (CIF, POSCAR), phase diagrams, band structure, DOS, Materials Project integration, and format conversion for computational materials science.
- pubmed-database: Direct REST API access to PubMed. Advanced Boolean/MeSH queries, E-utilities API, batch processing, citation management. For Python workflows, prefer biopython (Bio.Entrez). Use this for direct HTTP/REST work or custom API implementations.
- pubchem-database: Query PubChem via PUG-REST API/PubChemPy (110M+ compounds). Search by name/CID/SMILES, retrieve properties, similarity/substructure searches, and bioactivity for cheminformatics.
- prisma-expert: Prisma ORM expert for schema design, migrations, query optimization, relations modeling, and database operations. Use PROACTIVELY for Prisma schema issues, migration problems, query performance, re...
- prisma-automation: Automate Prisma tasks via Rube MCP (Composio). Always search tools first for current schemas.