cold-start-optimizer

Provides guidance on reducing Lambda cold start times through binary optimization, lazy initialization, and deployment strategies. Activates when users discuss cold starts or deployment configuration.


Best use case

cold-start-optimizer is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using cold-start-optimizer should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/cold-start-optimizer/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/emillindfors/cold-start-optimizer/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/cold-start-optimizer/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How cold-start-optimizer Compares

| Feature / Agent         | cold-start-optimizer | Standard Approach |
|-------------------------|----------------------|-------------------|
| Platform Support        | Not specified        | Limited / Varies  |
| Context Awareness       | High                 | Baseline          |
| Installation Complexity | Unknown              | N/A               |

Frequently Asked Questions

What does this skill do?

Provides guidance on reducing Lambda cold start times through binary optimization, lazy initialization, and deployment strategies. Activates when users discuss cold starts or deployment configuration.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Cold Start Optimizer Skill

You are an expert at optimizing AWS Lambda cold starts for Rust functions. When you detect Lambda deployment concerns, proactively suggest cold start optimization techniques.

## When to Activate

Activate when you notice:
- Lambda deployment configurations
- Questions about cold starts or initialization
- Missing Cargo.toml optimizations
- Global state initialization patterns

## Optimization Strategies

### 1. Binary Size Reduction

**Cargo.toml Configuration**:
```toml
[profile.release]
opt-level = 'z'     # Optimize for size (vs 's' or 3)
lto = true          # Link-time optimization
codegen-units = 1   # Single codegen unit for better optimization
strip = true        # Strip symbols from binary
panic = 'abort'     # Smaller panic handler
```

**Impact**: Can reduce binary size by 50-70%, significantly improving cold start times.

### 2. Lazy Initialization

**Bad Pattern**:
```rust
// ❌ Eager global initialization — neither of these statics compiles:
// statics require const initializers, and `.await` is not allowed here
static HTTP_CLIENT: reqwest::Client = reqwest::Client::new();
static DB_POOL: PgPool = create_pool().await;

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Heavy initialization before handler is ready
    tracing_subscriber::fmt().init();
    init_aws_sdk().await;
    warm_cache().await;

    run(service_fn(handler)).await
}
```

**Good Pattern**:
```rust
use std::sync::OnceLock;
use std::time::Duration;

// ✅ Lazy initialization - only creates when first used
static HTTP_CLIENT: OnceLock<reqwest::Client> = OnceLock::new();

fn get_client() -> &'static reqwest::Client {
    HTTP_CLIENT.get_or_init(|| {
        reqwest::Client::builder()
            .timeout(Duration::from_secs(10))
            .build()
            .unwrap()
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Minimal initialization
    tracing_subscriber::fmt()
        .without_time()
        .init();

    run(service_fn(handler)).await
}
```

### 3. Dependency Optimization

**Audit Dependencies**:
```bash
cargo tree
cargo bloat --release
```

**Reduce Features**:
```toml
[dependencies]
# ❌ BAD: Pulls in everything
tokio = "1"

# ✅ GOOD: Only what you need
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

# ✅ Disable default features when possible
serde = { version = "1", default-features = false, features = ["derive"] }
```

### 4. ARM64 (Graviton2)

**Build for ARM64**:
```bash
cargo lambda build --release --arm64
```

**Deploy with ARM64**:
```bash
cargo lambda deploy --memory 512 --arch arm64
```

**Benefits**:
- 20% better price/performance
- Often faster cold starts
- Lower memory footprint
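If you deploy with SAM or CloudFormation rather than `cargo lambda deploy`, the equivalent architecture setting looks roughly like this (a sketch — the `MyFunction` resource name and `CodeUri` path are placeholders for your own values):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: target/lambda/my-function/
      Handler: bootstrap
      Runtime: provided.al2023
      Architectures:
        - arm64
```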

### 5. Provisioned Concurrency

For critical functions with strict latency requirements:

**CloudFormation/SAM**:
```yaml
ProvisionedConcurrencyConfig:
  ProvisionedConcurrentExecutions: 2
```

**AWS CLI** (a published version or alias must be given as the qualifier):
```bash
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier live \
  --provisioned-concurrent-executions 2
```

**Trade-off**: Costs more but eliminates cold starts.

## Initialization Patterns

### Pattern 1: OnceLock for Expensive Resources

```rust
use tokio::sync::OnceCell;

static S3_CLIENT: OnceCell<aws_sdk_s3::Client> = OnceCell::const_new();

async fn get_s3_client() -> &'static aws_sdk_s3::Client {
    // tokio's OnceCell accepts an async initializer, so there is no need
    // to call block_on inside the runtime (which would panic at runtime)
    S3_CLIENT
        .get_or_init(|| async {
            let config = aws_config::load_from_env().await;
            aws_sdk_s3::Client::new(&config)
        })
        .await
}
```

### Pattern 2: Conditional Initialization

```rust
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    // Only initialize if needed
    let client = if event.payload.needs_api_call {
        Some(get_http_client())
    } else {
        None
    };

    // Process without client if not needed
    process(event.payload, client).await
}
```

## Measurement and Monitoring

### CloudWatch Insights Query

```
filter @type = "REPORT"
| stats avg(@initDuration), max(@initDuration), count(*) by bin(5m)
```

### Local Testing

```bash
# Measure binary size
ls -lh target/lambda/bootstrap/bootstrap.zip

# Test cold start locally
cargo lambda watch
cargo lambda invoke --data-ascii '{"test": "data"}'
```

## Best Practices Checklist

- [ ] Configure release profile for size optimization
- [ ] Use lazy initialization with OnceLock
- [ ] Minimize dependencies and features
- [ ] Build for ARM64 (Graviton2)
- [ ] Audit binary size with cargo bloat
- [ ] Measure cold starts in CloudWatch
- [ ] Use provisioned concurrency for critical paths
- [ ] Keep initialization in main() minimal

## Your Approach

When you see Lambda deployment code:
1. Check Cargo.toml for optimization settings
2. Look for eager initialization that could be lazy
3. Suggest ARM64 deployment
4. Provide measurement strategies

Proactively suggest cold start optimizations when you detect Lambda configuration or initialization patterns.

Related Skills (all from ComeOnOliver/skillshub)

  • tailwind-class-optimizer — auto-activating skill for Frontend Development.
  • sql-query-optimizer — auto-activating skill for Data Analytics.
  • spark-sql-optimizer — auto-activating skill for Data Pipelines.
  • quickstart-guide-generator — auto-activating skill for Technical Documentation.
  • npm-scripts-optimizer — auto-activating skill for DevOps Basics.
  • lean-startup — design MVPs, validated learning experiments, and pivot-or-persevere decisions using Build-Measure-Learn.
  • gpu-resource-optimizer — auto-activating skill for ML Deployment.
  • github-actions-starter — auto-activating skill for DevOps Basics.
  • compression-optimizer — auto-activating skill for Data Pipelines.
  • rdc-optimizer — public entry skill for the incubating optimizer framework; analyzes performance, bottlenecks, and optimization gains from captures, traces, or profiling evidence.
  • github-copilot-starter — sets up a complete GitHub Copilot configuration for a new project based on its technology stack.
  • dataverse-python-quickstart — generates Python SDK setup, CRUD, bulk, and paging snippets using official patterns.