lambda-optimization-advisor
Reviews AWS Lambda functions for performance, memory configuration, and cost optimization. Activates when users write Lambda handlers or discuss Lambda performance.
Best use case
lambda-optimization-advisor is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It reviews AWS Lambda functions for performance, memory configuration, and cost optimization, and activates when you write Lambda handlers or discuss Lambda performance.
Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "lambda-optimization-advisor" skill to help with this workflow task. Context: Reviews AWS Lambda functions for performance, memory configuration, and cost optimization. Activates when users write Lambda handlers or discuss Lambda performance.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/lambda-optimization-advisor/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How lambda-optimization-advisor Compares
| Feature / Agent | lambda-optimization-advisor | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
Reviews AWS Lambda functions for performance, memory configuration, and cost optimization. Activates when users write Lambda handlers or discuss Lambda performance.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Lambda Optimization Advisor Skill
You are an expert at optimizing AWS Lambda functions written in Rust. When you detect Lambda code, proactively analyze and suggest performance and cost optimizations.
## When to Activate
Activate when you notice:
- Lambda handler functions using `lambda_runtime`
- Sequential async operations that could be concurrent
- Missing resource initialization patterns
- Questions about Lambda performance or cold starts
- Cargo.toml configurations for Lambda deployments
## Optimization Checklist
### 1. Concurrent Operations
**What to Look For**: Sequential async operations
**Bad Pattern**:
```rust
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    // ❌ Sequential: takes 3+ seconds total
    let user = fetch_user(&event.payload.user_id).await?;
    let posts = fetch_posts(&event.payload.user_id).await?;
    let comments = fetch_comments(&event.payload.user_id).await?;
    Ok(Response { user, posts, comments })
}
```
**Good Pattern**:
```rust
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    // ✅ Concurrent: all three requests happen simultaneously
    let (user, posts, comments) = tokio::try_join!(
        fetch_user(&event.payload.user_id),
        fetch_posts(&event.payload.user_id),
        fetch_comments(&event.payload.user_id),
    )?;
    Ok(Response { user, posts, comments })
}
```
**Suggestion**: Use `tokio::join!` or `tokio::try_join!` for concurrent operations. This can reduce execution time by 3-5x for I/O-bound workloads.
### 2. Resource Initialization
**What to Look For**: Creating clients inside the handler
**Bad Pattern**:
```rust
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    // ❌ Creates a new client (and connection pool) for every invocation
    let client = reqwest::Client::new();
    let data = client.get("https://api.example.com").send().await?.text().await?;
    Ok(Response { data })
}
```
**Good Pattern**:
```rust
use std::sync::OnceLock;
use std::time::Duration;

// ✅ Initialized once per container (reused across invocations)
static HTTP_CLIENT: OnceLock<reqwest::Client> = OnceLock::new();

async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    let client = HTTP_CLIENT.get_or_init(|| {
        reqwest::Client::builder()
            .timeout(Duration::from_secs(10))
            .build()
            .unwrap()
    });
    let data = client.get("https://api.example.com").send().await?.text().await?;
    Ok(Response { data })
}
```
**Suggestion**: Use `OnceLock` for expensive resources (HTTP clients, database pools, AWS SDK clients) that should be initialized once and reused.
### 3. Binary Size Optimization
**What to Look For**: Missing release profile optimizations
**Check Cargo.toml**:
```toml
[profile.release]
opt-level = 'z' # ✅ Optimize for size
lto = true # ✅ Link-time optimization
codegen-units = 1 # ✅ Better optimization
strip = true # ✅ Strip symbols
panic = 'abort' # ✅ Smaller panic handler
```
**Suggestion**: Configure release profile for smaller binaries. Smaller binaries = faster cold starts and lower storage costs.
### 4. ARM64 (Graviton2) Usage
**What to Look For**: Building for x86_64 only
**Build Command**:
```bash
# ✅ Build for ARM64 (20% better price/performance)
cargo lambda build --release --arm64
```
**Suggestion**: Use ARM64 for 20% better price/performance and often faster cold starts.
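A rough sketch of the price/performance math. The GB-second rates below are assumed us-east-1 prices and the workload numbers are illustrative; verify against current AWS pricing:

```rust
// Illustrative only: rates are assumed us-east-1 prices per GB-second.
fn monthly_cost(invocations: u64, memory_gb: f64, duration_s: f64, price_per_gb_s: f64) -> f64 {
    invocations as f64 * memory_gb * duration_s * price_per_gb_s
}

fn main() {
    // 10M invocations/month, 512 MB, 200 ms average duration.
    let x86 = monthly_cost(10_000_000, 0.5, 0.2, 0.0000166667); // x86_64 rate (assumed)
    let arm = monthly_cost(10_000_000, 0.5, 0.2, 0.0000133334); // arm64 rate (assumed)
    println!("x86_64: ${x86:.2}  arm64: ${arm:.2}"); // prints x86_64: $16.67  arm64: $13.33
    assert!(arm < x86);
}
```

At these assumed rates the arm64 bill is about 20% lower for the identical workload, before counting any duration improvement from Graviton2.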
### 5. Memory Configuration
**What to Look For**: Default memory settings
**Guidelines**:
```bash
# Test different memory configs
cargo lambda deploy --memory 512 # For simple functions
cargo lambda deploy --memory 1024 # For standard workloads
cargo lambda deploy --memory 2048 # For CPU-intensive tasks
```
**Suggestion**: Lambda allocates CPU proportionally to memory. For CPU-bound tasks, increasing memory can reduce execution time and total cost.
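Because CPU scales with memory, a CPU-bound function that halves its duration when memory doubles costs the same per invocation but returns twice as fast. A back-of-envelope check (the price is an assumed x86_64 us-east-1 rate, not a quoted one):

```rust
// Cost per invocation in USD: GB-seconds times the per-GB-second rate.
fn invocation_cost(memory_mb: u32, duration_ms: f64, price_per_gb_s: f64) -> f64 {
    (memory_mb as f64 / 1024.0) * (duration_ms / 1000.0) * price_per_gb_s
}

fn main() {
    let price = 0.0000166667; // assumed us-east-1 x86_64 rate
    let slow = invocation_cost(512, 2000.0, price);  // 2 s at 512 MB
    let fast = invocation_cost(1024, 1000.0, price); // 1 s at 1024 MB
    // Same GB-seconds, same cost -- but half the latency.
    assert!((slow - fast).abs() < 1e-12);
    println!("512 MB: ${slow:.8}  1024 MB: ${fast:.8}");
}
```

This is why the usual advice is to benchmark memory settings rather than default to the minimum: for CPU-bound code, more memory can be free or even cheaper once shorter duration is factored in.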
## Cost Optimization Patterns
### Pattern 1: Batch Processing
```rust
async fn handler(event: LambdaEvent<Vec<Item>>) -> Result<(), Error> {
    // Process multiple items in one invocation
    let futures = event.payload.iter().map(|item| process_item(item));
    futures::future::try_join_all(futures).await?;
    Ok(())
}
```
### Pattern 2: Early Return
```rust
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    // ✅ Validate early, fail fast
    if event.payload.user_id.is_empty() {
        return Err(Error::from("user_id required"));
    }
    // Expensive operations only if validation passes
    let user = fetch_user(&event.payload.user_id).await?;
    Ok(Response { user })
}
```
## Your Approach
1. **Detect**: Identify Lambda handler code
2. **Analyze**: Check for concurrent operations, resource init, config
3. **Suggest**: Provide specific optimizations with code examples
4. **Explain**: Impact on performance and cost
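As an illustration of the Detect step, a naive textual heuristic might flag `lambda_runtime` usage and runs of consecutive `.await?` calls as `tokio::try_join!` candidates. Everything here is a sketch; a real implementation would inspect the syntax tree rather than raw lines:

```rust
// Naive heuristics for spotting optimization candidates in Rust source text.
fn looks_like_lambda_handler(source: &str) -> bool {
    source.contains("lambda_runtime") || source.contains("LambdaEvent")
}

// Longest run of consecutive lines ending in `.await?;` -- candidates
// for concurrent execution with `tokio::try_join!`.
fn count_sequential_awaits(source: &str) -> usize {
    let (mut max_run, mut run) = (0, 0);
    for line in source.lines() {
        if line.trim_end().ends_with(".await?;") {
            run += 1;
            max_run = max_run.max(run);
        } else if !line.trim().is_empty() {
            run = 0;
        }
    }
    max_run
}

fn main() {
    let src = "use lambda_runtime::LambdaEvent;\nlet user = fetch_user(&id).await?;\nlet posts = fetch_posts(&id).await?;\n";
    assert!(looks_like_lambda_handler(src));
    assert_eq!(count_sequential_awaits(src), 2);
}
```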
Proactively suggest optimizations that will reduce Lambda execution time and costs.

Related Skills
web-performance-optimization
Optimize website and web application performance including loading speed, Core Web Vitals, bundle size, caching strategies, and runtime performance
sql-optimization-patterns
Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
spark-optimization
Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or scaling data processing pipelines.
postgresql-optimization
PostgreSQL database optimization workflow for query tuning, indexing strategies, performance analysis, and production database management.
legal-advisor
Draft privacy policies, terms of service, disclaimers, and legal notices. Creates GDPR-compliant texts, cookie policies, and data processing agreements. Use PROACTIVELY for legal documentation, compliance texts, or regulatory requirements.
database-cloud-optimization-cost-optimize
You are a cloud cost optimization expert specializing in reducing infrastructure expenses while maintaining performance and reliability. Analyze cloud spending, identify savings opportunities, and implement cost-effective architectures across AWS, Azure, and GCP.
cost-optimization
Optimize cloud costs through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing infrastructure costs, or implementing cost governance policies.
bazel-build-optimization
Optimize Bazel builds for large-scale monorepos. Use when configuring Bazel, implementing remote execution, or optimizing build performance for enterprise codebases.
application-performance-performance-optimization
Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack.
azure-cost-optimization
Identify and quantify cost savings across Azure subscriptions by analyzing actual costs, utilization metrics, and generating actionable optimization recommendations. USE FOR: optimize Azure costs, reduce Azure spending, reduce Azure expenses, analyze Azure costs, find cost savings, generate cost optimization report, find orphaned resources, rightsize VMs, cost analysis, reduce waste, Azure spending analysis, find unused resources, optimize Redis costs. DO NOT USE FOR: deploying resources (use azure-deploy), general Azure diagnostics (use azure-diagnostics), security issues (use azure-security)
test-coverage-advisor
Reviews test coverage and suggests missing test cases for error paths, edge cases, and business logic. Activates when users write tests or implement new features.
parquet-optimization
Proactively analyzes Parquet file operations and suggests optimization improvements for compression, encoding, row group sizing, and statistics. Activates when users are reading or writing Parquet files or discussing Parquet performance.