configuring-auto-scaling-policies
Use this skill when you need to configure auto-scaling. It provides auto-scaling configuration with comprehensive guidance and automation. Trigger it with phrases like "configure auto-scaling", "set up elastic scaling", or "implement scaling".
Best use case
configuring-auto-scaling-policies is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using configuring-auto-scaling-policies should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/configuring-auto-scaling-policies/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How configuring-auto-scaling-policies Compares
| Feature / Agent | configuring-auto-scaling-policies | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It configures auto-scaling policies for cloud workloads across AWS Auto Scaling Groups, GCP Managed Instance Groups, Azure VMSS, and Kubernetes HPA, generating scaling configurations with comprehensive guidance and automation.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Configuring Auto-Scaling Policies

## Overview

Configure auto-scaling policies for cloud workloads across AWS Auto Scaling Groups, GCP Managed Instance Groups, Azure VMSS, and Kubernetes Horizontal Pod Autoscaler (HPA). Generate scaling configurations based on CPU, memory, request rate, or custom metrics with appropriate thresholds, cooldown periods, and scale-in protection.

## Prerequisites

- Cloud provider CLI installed and authenticated (`aws`, `gcloud`, or `az`)
- For Kubernetes HPA: `kubectl` configured with cluster access and metrics-server deployed
- Baseline performance data for the target workload (average CPU, memory, request rate)
- Understanding of traffic patterns (steady, bursty, scheduled)
- IAM permissions to create/modify scaling policies and CloudWatch/Stackdriver alarms

## Instructions

1. Identify the scaling target: EC2 Auto Scaling Group, GCP MIG, Azure VMSS, or Kubernetes Deployment
2. Analyze current workload metrics to establish baseline utilization and peak patterns
3. Define scaling boundaries: minimum instances/pods, maximum instances/pods, desired count
4. Select scaling metric(s): CPU utilization, memory, request count, queue depth, or custom metrics
5. Set target thresholds: scale-out trigger (e.g., CPU > 70%), scale-in trigger (e.g., CPU < 30%)
6. Configure cooldown periods to prevent flapping (typically 300s scale-out, 600s scale-in)
7. Add scale-in protection for stateful workloads or leader nodes if needed
8. Generate the scaling policy configuration in the appropriate format (Terraform, YAML, or CLI commands)
9. Validate by simulating load and confirming scaling events fire correctly

## Output

- Terraform HCL for AWS ASG scaling policies with CloudWatch alarms
- Kubernetes HPA manifests (YAML) with resource or custom metric targets
- GCP autoscaler configurations for Managed Instance Groups
- Scaling policy JSON/YAML for Azure VMSS
- CloudWatch or Stackdriver alarm definitions tied to scaling actions

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| `No scaling activity despite high load` | Metric not reaching threshold or cooldown active | Verify metric source in CloudWatch/Stackdriver; check cooldown timer with `describe-scaling-activities` |
| `Scaling too aggressively (flapping)` | Cooldown too short or threshold too sensitive | Increase cooldown period and widen the gap between scale-out and scale-in thresholds |
| `Max capacity reached` | Instance/pod limit hit during traffic spike | Raise `max_size` or implement request queuing as a backpressure mechanism |
| `HPA unable to compute replica count` | Metrics server not deployed or metric unavailable | Install metrics-server and verify `kubectl top pods` returns data |
| `FailedScaleUp: insufficient capacity` | Cloud provider out of capacity in selected AZ/region | Add multiple AZs to the ASG or use mixed instance types with allocation strategy |

## Examples

- "Configure an AWS ASG with target tracking at 65% CPU, min 2 / max 20 instances, and 5-minute cooldown."
- "Create a Kubernetes HPA for a deployment that scales from 3 to 50 pods based on requests-per-second using a custom Prometheus metric."
- "Set up scheduled scaling for a GCP MIG: scale to 10 instances at 8am UTC and back to 2 at 10pm."
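To illustrate the workflow above, here is a minimal Kubernetes HPA manifest sketch. The `web-api` deployment name and the specific replica bounds are assumptions chosen for the example; tune the thresholds and windows to your workload's measured baseline.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api            # hypothetical deployment name
  minReplicas: 3             # scaling boundaries (step 3)
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale-out trigger (step 5)
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 300   # analogue of the 300s scale-out cooldown (step 6)
    scaleDown:
      stabilizationWindowSeconds: 600   # analogue of the 600s scale-in cooldown (step 6)
```

Note that HPA has no separate scale-in threshold: it scales down when observed utilization falls sufficiently below the target, so the stabilization windows do the anti-flapping work that cooldown periods do on an ASG.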
## Resources

- AWS Auto Scaling: https://docs.aws.amazon.com/autoscaling/ec2/userguide/
- Kubernetes HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
- GCP Autoscaler: https://cloud.google.com/compute/docs/autoscaler
- Azure VMSS Autoscale: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview
Related Skills
managing-autonomous-development
Enables Claude to manage Sugar's autonomous development workflows. It allows Claude to create tasks, view the status of the system, review pending tasks, and start autonomous execution mode. Use this skill when the user asks to create a new development task using `/sugar-task`, check the system status with `/sugar-status`, review pending tasks via `/sugar-review`, or initiate autonomous development using `/sugar-run`. It provides a comprehensive interface for interacting with the Sugar autonomous development system.
configuring-service-meshes
This skill configures service meshes like Istio and Linkerd for microservices. It generates production-ready configurations, implements best practices, and ensures a security-first approach. Use this skill when the user asks to "configure service mesh", "setup Istio", "setup Linkerd", or requests assistance with "service mesh configuration" for their microservices architecture. The configurations will be tailored to the specified infrastructure requirements.
preprocessing-data-with-automated-pipelines
Automates data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation".
managing-network-policies
This skill enables Claude to manage Kubernetes network policies and firewall rules. It allows Claude to generate configurations and setup code based on specific requirements and infrastructure. Use this skill when the user requests to create, modify, or analyze network policies for Kubernetes, or when the user mentions "network-policy", "firewall rules", or "Kubernetes security". This skill is useful for implementing best practices and production-ready configurations for network security in a Kubernetes environment.
automating-mobile-app-testing
This skill enables automated testing of mobile applications on iOS and Android platforms using frameworks like Appium, Detox, XCUITest, and Espresso. It generates end-to-end tests, sets up page object models, and handles platform-specific elements. Use this skill when the user requests mobile app testing, test automation for iOS or Android, or needs assistance with setting up device farms and simulators. The skill is triggered by terms like "mobile testing", "appium", "detox", "xcuitest", "espresso", "android test", "ios test".
configuring-load-balancers
This skill configures load balancers, including ALB, NLB, Nginx, and HAProxy. It generates production-ready configurations based on specified requirements and infrastructure. Use this skill when the user asks to "configure load balancer", "create load balancer config", "generate nginx config", "setup HAProxy", or mentions specific load balancer types like "ALB" or "NLB". It's ideal for DevOps tasks, infrastructure automation, and generating load balancer configurations for different environments.
google-sheets-automation
Google Sheets Automation - Auto-activating skill for business automation with Google Sheets. Triggers on: "google sheets automation". Part of the Business Automation skill category.
automating-database-backups
This skill automates database backups using the database-backup-automator plugin. It creates scripts for scheduled backups, compression, encryption, and restore procedures across PostgreSQL, MySQL, MongoDB, and SQLite. Use this when the user requests database backup automation, disaster recovery planning, setting up backup schedules, or creating restore procedures. The skill is triggered by phrases like "create database backup", "automate database backups", "setup backup schedule", or "generate restore procedure".
validating-cors-policies
This skill enables Claude to validate Cross-Origin Resource Sharing (CORS) policies. It uses the cors-policy-validator plugin to analyze CORS configurations and identify potential security vulnerabilities. Use this skill when the user requests to "validate CORS policy", "check CORS configuration", "analyze CORS headers", or asks about "CORS security". It helps ensure that CORS policies are correctly implemented, preventing unauthorized cross-origin requests and protecting sensitive data.
building-automl-pipelines
Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning. Use when automating ML workflows from data preparation through model deployment. Trigger with phrases like "build automl pipeline", "automate ml workflow", or "create automated training pipeline".
automating-api-testing
This skill automates API endpoint testing, including request generation, validation, and comprehensive test coverage for REST and GraphQL APIs. It is used when the user requests API testing, contract testing, or validation against OpenAPI specifications. The skill analyzes API endpoints and generates test suites covering CRUD operations, authentication flows, and security aspects. It also validates response status codes, headers, and body structure. Use this skill when the user mentions "API testing", "REST API tests", "GraphQL API tests", "contract tests", or "OpenAPI validation".
structured-autonomy-plan
Structured Autonomy Planning Prompt