cold-start-optimizer
Provides guidance on reducing Lambda cold start times through binary optimization, lazy initialization, and deployment strategies. Activates when users discuss cold starts or deployment configuration.
Best use case
cold-start-optimizer is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams deploying Rust functions to AWS Lambda who want consistent, repeatable guidance on binary optimization, lazy initialization, and deployment configuration.
Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "cold-start-optimizer" skill to help with this workflow task. Context: reducing Lambda cold start times for a Rust function through binary optimization, lazy initialization, and deployment strategies.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/cold-start-optimizer/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Provides guidance on reducing Lambda cold start times through binary optimization, lazy initialization, and deployment strategies. Activates when users discuss cold starts or deployment configuration.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Cold Start Optimizer Skill
You are an expert at optimizing AWS Lambda cold starts for Rust functions. When you detect Lambda deployment concerns, proactively suggest cold start optimization techniques.
## When to Activate
Activate when you notice:
- Lambda deployment configurations
- Questions about cold starts or initialization
- Missing Cargo.toml optimizations
- Global state initialization patterns
## Optimization Strategies
### 1. Binary Size Reduction
**Cargo.toml Configuration**:
```toml
[profile.release]
opt-level = 'z' # Optimize for size (vs 's' or 3)
lto = true # Link-time optimization
codegen-units = 1 # Single codegen unit for better optimization
strip = true # Strip symbols from binary
panic = 'abort' # Smaller panic handler
```
**Impact**: Can reduce binary size by 50-70%, significantly improving cold start times.
### 2. Lazy Initialization
**Bad Pattern**:
```rust
// ❌ Eagerly initializes everything on cold start.
// Neither static compiles: statics require const initializers,
// and `.await` is not allowed outside an async fn.
static HTTP_CLIENT: reqwest::Client = reqwest::Client::new();
static DB_POOL: PgPool = create_pool().await;

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Heavy initialization before the handler is ready
    tracing_subscriber::fmt().init();
    init_aws_sdk().await;
    warm_cache().await;

    run(service_fn(handler)).await
}
```
**Good Pattern**:
```rust
use std::sync::OnceLock;
use std::time::Duration;

use lambda_runtime::{run, service_fn, Error};

// ✅ Lazy initialization - the client is created only when first used
static HTTP_CLIENT: OnceLock<reqwest::Client> = OnceLock::new();

fn get_client() -> &'static reqwest::Client {
    HTTP_CLIENT.get_or_init(|| {
        reqwest::Client::builder()
            .timeout(Duration::from_secs(10))
            .build()
            .unwrap()
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Minimal initialization
    tracing_subscriber::fmt()
        .without_time()
        .init();

    run(service_fn(handler)).await
}
```
### 3. Dependency Optimization
**Audit Dependencies**:
```bash
cargo tree
cargo bloat --release
```
**Reduce Features**:
```toml
[dependencies]
# ❌ BAD: Pulls in everything
tokio = "1"
# ✅ GOOD: Only what you need
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
# ✅ Disable default features when possible
serde = { version = "1", default-features = false, features = ["derive"] }
```
### 4. ARM64 (Graviton2)
**Build for ARM64**:
```bash
cargo lambda build --release --arm64
```
**Deploy with ARM64**:
```bash
cargo lambda deploy --memory 512 --arch arm64
```
**Benefits**:
- 20% better price/performance
- Often faster cold starts
- Lower memory footprint
### 5. Provisioned Concurrency
For critical functions with strict latency requirements:
```bash
# SAM / CloudFormation template snippet (YAML):
#   ProvisionedConcurrencyConfig:
#     ProvisionedConcurrentExecutions: 2

# Or via the AWS CLI (--qualifier must name a published
# version or alias; $LATEST is not supported):
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier live \
  --provisioned-concurrent-executions 2
```
**Trade-off**: Costs more but eliminates cold starts.
## Initialization Patterns
### Pattern 1: OnceCell for Expensive Async Resources
```rust
use tokio::sync::OnceCell;

static S3_CLIENT: OnceCell<aws_sdk_s3::Client> = OnceCell::const_new();

async fn get_s3_client() -> &'static aws_sdk_s3::Client {
    // Async-aware lazy init: the AWS config is loaded at most once,
    // on the first invocation that actually needs the client.
    S3_CLIENT
        .get_or_init(|| async {
            let config = aws_config::load_from_env().await;
            aws_sdk_s3::Client::new(&config)
        })
        .await
}
```
`tokio::sync::OnceCell` is used here rather than `std::sync::OnceLock` because the initializer is async; calling `block_on` from inside a running Tokio runtime panics.
### Pattern 2: Conditional Initialization
```rust
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    // Only initialize if this request actually needs it
    let client = if event.payload.needs_api_call {
        Some(get_client()) // the lazy accessor defined earlier
    } else {
        None
    };

    // Process without the client when it is not needed
    process(event.payload, client).await
}
```
## Measurement and Monitoring
### CloudWatch Insights Query
```
filter @type = "REPORT"
| stats avg(@initDuration), max(@initDuration), count(*) by bin(5m)
```
### Local Testing
```bash
# Measure binary size
ls -lh target/lambda/bootstrap/bootstrap.zip
# Test cold start locally
cargo lambda watch
cargo lambda invoke --data-ascii '{"test": "data"}'
```
## Best Practices Checklist
- [ ] Configure release profile for size optimization
- [ ] Use lazy initialization with OnceLock
- [ ] Minimize dependencies and features
- [ ] Build for ARM64 (Graviton2)
- [ ] Audit binary size with cargo bloat
- [ ] Measure cold starts in CloudWatch
- [ ] Use provisioned concurrency for critical paths
- [ ] Keep initialization in main() minimal
## Your Approach
When you see Lambda deployment code:
1. Check Cargo.toml for optimization settings
2. Look for eager initialization that could be lazy
3. Suggest ARM64 deployment
4. Provide measurement strategies
Proactively suggest cold start optimizations when you detect Lambda configuration or initialization patterns.