engineering-nba-data
Extracts, transforms, and analyzes NBA statistics using the nba_api Python library. Use when working with NBA player stats, team data, game logs, shot charts, league statistics, or any NBA-related data engineering tasks. Supports both stats.nba.com endpoints and static player/team lookups.
Best use case
engineering-nba-data is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It extracts, transforms, and analyzes NBA statistics with the nba_api Python library, covering player stats, team data, game logs, shot charts, and league-wide statistics through both stats.nba.com endpoints and static player/team lookups.
Users can expect more consistent workflow outputs, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "engineering-nba-data" skill to help with this workflow task. Context: Extracts, transforms, and analyzes NBA statistics using the nba_api Python library. Use when working with NBA player stats, team data, game logs, shot charts, league statistics, or any NBA-related data engineering tasks. Supports both stats.nba.com endpoints and static player/team lookups.
Example output
A structured workflow result: the resolved player ID from a static lookup, the requested dataset as a pandas DataFrame, and a clean export that is easy to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/engineering-nba-data/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How engineering-nba-data Compares
| Feature / Agent | engineering-nba-data | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
Extracts, transforms, and analyzes NBA statistics using the nba_api Python library. Use when working with NBA player stats, team data, game logs, shot charts, league statistics, or any NBA-related data engineering tasks. Supports both stats.nba.com endpoints and static player/team lookups.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
**Goal**: Extract and process NBA statistical data efficiently using the nba_api library for data analysis, reporting, and application development.
**IMPORTANT**: The nba_api library accesses stats.nba.com endpoints. All data requests return structured datasets that can be output as JSON, dictionaries, or pandas DataFrames.
## Workflow
### Phase 1: Setup and Installation
- Install nba_api: `pip install nba_api` if not yet installed
- Import required modules based on task:
- `from nba_api.stats.endpoints import [endpoint_name]` for stats.nba.com data
- `from nba_api.stats.static import players, teams` for static lookups
- `from nba_api.stats.library.parameters import [parameter_classes]` for valid parameter values
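For instance, a task that pulls a player's game log might start with imports like these (a minimal sketch; `playergamelog` and `SeasonType` are one illustrative selection, not required for every task):

```python
# Static lookups ship with the package; no HTTP request is made.
from nba_api.stats.static import players, teams

# One concrete endpoint chosen for illustration; pick the endpoint
# your task needs from the table of contents.
from nba_api.stats.endpoints import playergamelog

# Parameter classes document valid values, e.g. SeasonType.regular.
from nba_api.stats.library.parameters import SeasonType
```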
### Phase 2: Data Retrieval
**For Player/Team Lookups (No API Calls)**:
- Use `players.find_players_by_full_name('player_name')` for player searches
- Use `teams.find_teams_by_full_name('team_name')` for team searches
- Both return a list of matching dictionaries with `id`, `full_name`, and other metadata
- No HTTP requests are sent; data is embedded in the package
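A minimal lookup sketch (the query strings are illustrative; matching is case-insensitive and tolerates partial names):

```python
from nba_api.stats.static import players, teams

# Returns a list of matching player dictionaries, not a single dict.
matches = players.find_players_by_full_name('lebron james')
player_id = matches[0]['id'] if matches else None  # e.g. 2544

lakers = teams.find_teams_by_full_name('lakers')
team_id = lakers[0]['id'] if lakers else None
```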
**For Stats Endpoints (API Calls)**:
- Identify the correct endpoint from [table of contents](docs/table_of_contents.md)
- Initialize endpoint with required parameters: `endpoint_class(param1=value1, param2=value2)`
- Access datasets using dot notation: `response_object.dataset_name`
- Retrieve data in desired format:
- `.get_json()` for JSON string
- `.get_dict()` for dictionary
- `.get_data_frame()` for pandas DataFrame
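Putting those steps together with the `PlayerGameLog` endpoint (a sketch; the `player_game_log` dataset attribute follows that endpoint's docs, so verify the name for whichever endpoint you use):

```python
from nba_api.stats.endpoints import playergamelog

# player_id would normally come from the static lookup above.
gamelog = playergamelog.PlayerGameLog(player_id='2544', season='2019-20')

# Access a named dataset, then choose the output format.
df = gamelog.player_game_log.get_data_frame()
raw = gamelog.get_json()          # full response as a JSON string

# Alternatively, fetch every dataset as a list of DataFrames.
frames = gamelog.get_data_frames()
```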
**Custom Request Configuration**:
- Set custom headers: `endpoint_class(player_id=123, headers=custom_headers)`
- Set proxy: `endpoint_class(player_id=123, proxy='127.0.0.1:80')`
- Set timeout: `endpoint_class(player_id=123, timeout=100)` (in seconds)
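These keyword arguments work on any stats endpoint; a brief sketch (the header values are placeholders, not values the API requires):

```python
custom_headers = {
    'User-Agent': 'Mozilla/5.0',        # placeholder value
    'Referer': 'https://www.nba.com/',  # placeholder value
}

gamelog = playergamelog.PlayerGameLog(
    player_id='2544',
    headers=custom_headers,    # custom request headers
    timeout=100,               # seconds before the request gives up
    # proxy='127.0.0.1:80',    # uncomment to route through a proxy
)
```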
### Phase 3: Data Processing
- Extract specific datasets from endpoint responses
- Transform data using pandas for aggregations, filtering, joins
- Normalize nested data structures as needed
- Handle multiple datasets returned by single endpoint
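As an illustration of the processing step, a small pandas transform over a game-log DataFrame (column names such as `PTS` and `MATCHUP` follow stats.nba.com conventions but should be checked against the actual response):

```python
import pandas as pd

# Keep 30-point games and summarize them per opponent matchup.
big_games = df[df['PTS'] >= 30]
summary = (
    big_games.groupby('MATCHUP')['PTS']
    .agg(['count', 'mean'])
    .reset_index()
)
```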
### Phase 4: Output and Storage
- Export to CSV: `df.to_csv('output.csv', index=False)`
- Export to JSON: Use `.get_json()` or `df.to_json()`
- Store in database using pandas `.to_sql()` method
- Cache responses to minimize API calls
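A sketch of the export-and-cache step (the cache path and SQLite database name are illustrative choices):

```python
import os
import sqlite3
import pandas as pd
from nba_api.stats.endpoints import playergamelog

CACHE_PATH = 'cache/player_game_log_2544_2019-20.csv'  # illustrative path

os.makedirs('cache', exist_ok=True)
if os.path.exists(CACHE_PATH):
    # Reuse the cached response instead of calling the API again.
    df = pd.read_csv(CACHE_PATH)
else:
    gamelog = playergamelog.PlayerGameLog(player_id='2544', season='2019-20')
    df = gamelog.get_data_frames()[0]
    df.to_csv(CACHE_PATH, index=False)

# Persist for downstream SQL queries.
with sqlite3.connect('nba.db') as conn:
    df.to_sql('player_game_log', conn, if_exists='replace', index=False)
```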
## Rules
- **Required packages**: `nba_api` must be installed before use
- **Static first**: Always use static lookups (players/teams) for ID retrieval before making API calls
- **Parameter validation**: Reference [parameters.md](docs/nba_api/stats/library/parameters.md) for valid parameter values
- **Endpoint selection**: Check [table of contents](docs/table_of_contents.md) to find the correct endpoint
- **Rate limiting**: Be mindful of API rate limits; cache data when possible
- **Error handling**: Wrap API calls in try-except blocks to handle network failures
- **Data formats**: Know when to use JSON, dict, or DataFrame based on downstream requirements
- **Season format**: Seasons use format `YYYY-YY` (e.g., `2019-20`)
- **League IDs**: NBA=`00`, ABA=`01`, WNBA=`10`, G-League=`20`
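To apply the rate-limiting and error-handling rules, one possible retry wrapper (the retry count and delays are arbitrary starting points, not documented limits; nba_api uses `requests` under the hood, so catching `RequestException` is an assumption that covers its network errors):

```python
import time
from requests.exceptions import RequestException

def fetch_with_retry(endpoint_factory, retries=3, delay=2.0):
    """Call an endpoint constructor, retrying on network failures.

    nba_api sends the HTTP request inside the endpoint constructor,
    so wrapping the constructor call is enough.
    """
    for attempt in range(retries):
        try:
            return endpoint_factory()
        except RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # simple linear backoff

# Usage: pass the endpoint call as a zero-argument callable.
# gamelog = fetch_with_retry(
#     lambda: playergamelog.PlayerGameLog(player_id='2544', season='2019-20')
# )
```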
## Acceptance Criteria
- Data retrieved successfully from appropriate endpoint or static source
- Correct parameters used based on documentation
- Data formatted appropriately for intended use case
- Error handling implemented for API failures
- Code follows Python best practices
- Results validated against expected structure
- Documentation references included where relevant
## Reference Documentation
**Quick access to common resources**:
- [Table of Contents](docs/table_of_contents.md) - Full documentation index
- [Examples](docs/nba_api/stats/examples.md) - Usage examples for endpoints and static data
- [Parameters](docs/nba_api/stats/library/parameters.md) - Valid parameter values and patterns
- [Endpoints Data Structure](docs/nba_api/stats/endpoints_data_structure.md) - Response format and methods
- [Players](docs/nba_api/stats/static/players.md) - Static player lookup functions
- [Teams](docs/nba_api/stats/static/teams.md) - Static team lookup functions
- [HTTP Library](docs/nba_api/library/http.md) - HTTP request details
**Endpoint-specific documentation**:
Refer to `docs/nba_api/stats/endpoints/[endpoint_name].md` for detailed parameter and dataset information for each endpoint.
Related Skills
vector-database-engineer
Expert in vector databases, embedding strategies, and semantic search implementation. Masters Pinecone, Weaviate, Qdrant, Milvus, and pgvector for RAG applications, recommendation systems, and similar
sqlmap-database-pentesting
This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap," "dump tables and columns...
sqlmap-database-penetration-testing
This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap," "dump tables and columns from a vulnerable database," or "perform automated database penetration testing." It provides comprehensive guidance for using SQLMap to detect and exploit SQL injection vulnerabilities.
protocol-reverse-engineering
Master network protocol reverse engineering including packet analysis, protocol dissection, and custom protocol documentation. Use when analyzing network traffic, understanding proprietary protocols, or debugging network communication.
prompt-engineering-patterns
Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.
gdpr-data-handling
Implement GDPR-compliant data handling with consent management, data subject rights, and privacy by design. Use when building systems that process EU personal data, implementing privacy controls, or conducting GDPR compliance reviews.
datadog-automation
Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools first for current schemas.
database-optimizer
Expert database optimizer specializing in modern performance tuning, query optimization, and scalable architectures. Masters advanced indexing, N+1 resolution, multi-tier caching, partitioning strategies, and cloud database optimization. Handles complex query analysis, migration strategies, and performance monitoring. Use PROACTIVELY for database optimization, performance issues, or scalability challenges.
database-migrations-sql-migrations
SQL database migrations with zero-downtime strategies for PostgreSQL, MySQL, SQL Server
database-migrations-migration-observability
Migration monitoring, CDC, and observability infrastructure
database-design
Database design principles and decision-making. Schema design, indexing strategy, ORM selection, serverless databases.
database-cloud-optimization-cost-optimize
You are a cloud cost optimization expert specializing in reducing infrastructure expenses while maintaining performance and reliability. Analyze cloud spending, identify savings opportunities, and implement cost-effective architectures across AWS, Azure, and GCP.