azure-data-tables-py

Azure Tables SDK for Python (Storage and Cosmos DB). Use for NoSQL key-value storage, entity CRUD, and batch operations. Triggers: "table storage", "TableServiceClient", "TableClient", "entities", "PartitionKey", "RowKey".

Best use case

azure-data-tables-py is best used when you need a repeatable AI agent workflow rather than a one-off prompt.

Teams using azure-data-tables-py can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/azure-data-tables-py/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/sickn33/azure-data-tables-py/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/azure-data-tables-py/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How azure-data-tables-py Compares

| Feature | azure-data-tables-py | Standard Approach |
|---------|----------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It gives your AI agent a working reference for the Azure Tables SDK for Python, covering NoSQL key-value storage on Azure Storage Tables or the Cosmos DB Table API: entity CRUD, queries, and batch operations.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Azure Tables SDK for Python

NoSQL key-value store for structured data (Azure Storage Tables or Cosmos DB Table API).

## Installation

```bash
pip install azure-data-tables azure-identity
```

## Environment Variables

```bash
# Azure Storage Tables
AZURE_STORAGE_ACCOUNT_URL=https://<account>.table.core.windows.net

# Cosmos DB Table API
COSMOS_TABLE_ENDPOINT=https://<account>.table.cosmos.azure.com
```

## Authentication

```python
from azure.identity import DefaultAzureCredential
from azure.data.tables import TableServiceClient, TableClient

credential = DefaultAzureCredential()
endpoint = "https://<account>.table.core.windows.net"

# Service client (manage tables)
service_client = TableServiceClient(endpoint=endpoint, credential=credential)

# Table client (work with entities)
table_client = TableClient(endpoint=endpoint, table_name="mytable", credential=credential)
```

## Client Types

| Client | Purpose |
|--------|---------|
| `TableServiceClient` | Create/delete tables, list tables |
| `TableClient` | Entity CRUD, queries |

## Table Operations

```python
# Create table
service_client.create_table("mytable")

# Create if not exists
service_client.create_table_if_not_exists("mytable")

# Delete table
service_client.delete_table("mytable")

# List tables
for table in service_client.list_tables():
    print(table.name)

# Get table client
table_client = service_client.get_table_client("mytable")
```

## Entity Operations

**Important**: Every entity requires a `PartitionKey` and a `RowKey`; together they form the entity's unique ID within the table.
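Locally, that composite identity can be modeled as a `(PartitionKey, RowKey)` tuple. This small sketch (plain Python, no SDK calls) mirrors how the service enforces uniqueness: writing the same key pair twice replaces the entity, like an upsert.

```python
# A dict keyed by (PartitionKey, RowKey) mirrors the table's uniqueness rule.
entities = {}

def put(entity: dict) -> None:
    key = (entity["PartitionKey"], entity["RowKey"])
    entities[key] = entity  # same pair -> overwrite, like upsert_entity

put({"PartitionKey": "sales", "RowKey": "order-001", "quantity": 5})
put({"PartitionKey": "sales", "RowKey": "order-001", "quantity": 10})
# Only one entity remains; the second write replaced the first.
```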

### Create Entity

```python
entity = {
    "PartitionKey": "sales",
    "RowKey": "order-001",
    "product": "Widget",
    "quantity": 5,
    "price": 9.99,
    "shipped": False
}

# Create (fails if exists)
table_client.create_entity(entity=entity)

# Upsert (create or replace)
table_client.upsert_entity(entity=entity)
```

### Get Entity

```python
# Get by key (fastest)
entity = table_client.get_entity(
    partition_key="sales",
    row_key="order-001"
)
print(f"Product: {entity['product']}")
```

### Update Entity

```python
# Replace entire entity
entity["quantity"] = 10
table_client.update_entity(entity=entity, mode="replace")

# Merge (update specific fields only)
update = {
    "PartitionKey": "sales",
    "RowKey": "order-001",
    "shipped": True
}
table_client.update_entity(entity=update, mode="merge")
```

### Delete Entity

```python
table_client.delete_entity(
    partition_key="sales",
    row_key="order-001"
)
```

## Query Entities

### Query Within Partition

```python
# Query by partition (efficient)
entities = table_client.query_entities(
    query_filter="PartitionKey eq 'sales'"
)
for entity in entities:
    print(entity)
```

### Query with Filters

```python
# Filter by properties
entities = table_client.query_entities(
    query_filter="PartitionKey eq 'sales' and quantity gt 3"
)

# With parameters (safer)
entities = table_client.query_entities(
    query_filter="PartitionKey eq @pk and price lt @max_price",
    parameters={"pk": "sales", "max_price": 50.0}
)
```
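When you must build a filter string by hand (for example, composing it dynamically), OData escapes a single quote inside a string literal by doubling it. The helper below is hypothetical, not part of the SDK; prefer the `parameters` argument shown above whenever possible.

```python
def odata_quote(value: str) -> str:
    """Escape a string literal for an OData filter: ' becomes ''."""
    return "'" + value.replace("'", "''") + "'"

name = "O'Brien"
query_filter = "customer eq " + odata_quote(name)
# query_filter is now: customer eq 'O''Brien'
```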

### Select Specific Properties

```python
entities = table_client.query_entities(
    query_filter="PartitionKey eq 'sales'",
    select=["RowKey", "product", "price"]
)
```

### List All Entities

```python
# List all (cross-partition - use sparingly)
for entity in table_client.list_entities():
    print(entity)
```

## Batch Operations

```python
from azure.data.tables import TableTransactionError

# Batch operations (same partition only!)
operations = [
    ("create", {"PartitionKey": "batch", "RowKey": "1", "data": "first"}),
    ("create", {"PartitionKey": "batch", "RowKey": "2", "data": "second"}),
    ("upsert", {"PartitionKey": "batch", "RowKey": "3", "data": "third"}),
]

try:
    table_client.submit_transaction(operations)
except TableTransactionError as e:
    print(f"Transaction failed: {e}")
```
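A single table transaction is limited to 100 operations, all in the same partition, so larger batches must be split before calling `submit_transaction`. A minimal chunking sketch (pure Python; the commented-out `table_client` call assumes the client from earlier sections):

```python
from itertools import islice

def chunked(operations, size=100):
    """Yield successive chunks of at most `size` operations."""
    it = iter(operations)
    while chunk := list(islice(it, size)):
        yield chunk

# Example: 250 upserts split into 3 transactions (100 + 100 + 50).
ops = [
    ("upsert", {"PartitionKey": "batch", "RowKey": str(i)})
    for i in range(250)
]
chunks = list(chunked(ops))
# for chunk in chunks:
#     table_client.submit_transaction(chunk)
```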

## Async Client

```python
from azure.data.tables.aio import TableClient
from azure.identity.aio import DefaultAzureCredential

async def table_operations():
    # Async credentials and clients hold network sessions;
    # use context managers so both are closed cleanly.
    async with DefaultAzureCredential() as credential:
        async with TableClient(
            endpoint="https://<account>.table.core.windows.net",
            table_name="mytable",
            credential=credential
        ) as client:
            # Create
            await client.create_entity(entity={
                "PartitionKey": "async",
                "RowKey": "1",
                "data": "test"
            })

            # Query
            async for entity in client.query_entities(
                query_filter="PartitionKey eq 'async'"
            ):
                print(entity)

import asyncio
asyncio.run(table_operations())
```

## Data Types

| Python Type | Table Storage Type |
|-------------|-------------------|
| `str` | String |
| `int` | Int64 |
| `float` | Double |
| `bool` | Boolean |
| `datetime` | DateTime |
| `bytes` | Binary |
| `UUID` | Guid |
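An entity can mix these types freely; only the keys must be strings. A sketch of an entity touching each mapped type (the commented-out write assumes the `table_client` from earlier sections):

```python
from datetime import datetime, timezone
from uuid import uuid4

entity = {
    "PartitionKey": "types-demo",            # keys are always strings
    "RowKey": "1",
    "name": "Widget",                        # String
    "count": 42,                             # Int64
    "price": 9.99,                           # Double
    "active": True,                          # Boolean
    "created": datetime.now(timezone.utc),   # DateTime (store as UTC)
    "payload": b"\x00\x01",                  # Binary
    "correlation_id": uuid4(),               # Guid
}
# table_client.upsert_entity(entity=entity)
```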

## Best Practices

1. **Design partition keys** for query patterns and even distribution
2. **Query within partitions** whenever possible (cross-partition is expensive)
3. **Use batch operations** for multiple entities in same partition
4. **Use `upsert_entity`** for idempotent writes
5. **Use parameterized queries** to prevent injection
6. **Keep entities small** — max 1MB per entity
7. **Use async client** for high-throughput scenarios

Related Skills (from ComeOnOliver/skillshub)

  • generating-database-seed-data — generates realistic test data and database seed scripts using faker libraries, maintaining relational integrity with configurable data volumes.
  • forensics-data-collector — auto-activating forensics data collection skill from the Security Advanced category.
  • forecasting-time-series-data — forecasts future values from historical time series data, identifying trends, seasonality, and other patterns.
  • exploring-blockchain-data — queries and analyzes blockchain data including blocks, transactions, and smart contracts.
  • evernote-data-handling — best practices for storing and processing Evernote notes and attachments while preserving data integrity.
  • encrypting-and-decrypting-data — encrypts and decrypts data using the algorithms provided by the encryption-tool plugin.
  • documenso-data-handling — handles document data, signatures, and PII in Documenso integrations, including signed PDFs and retention policies.
  • detecting-database-deadlocks — deadlock detection, resolution, and prevention guidance.
  • detecting-data-anomalies — identifies anomalies and outliers in datasets using machine learning algorithms.
  • designing-database-schemas — database schema design and migration guidance.
  • deepgram-data-handling — audio data storage, retention, and GDPR/HIPAA compliance for Deepgram transcription integrations.
  • splitting-datasets — splits datasets into training, validation, and test sets for machine learning workflows.