infrastructure
Infrastructure as Code patterns for deploying Guts nodes using Terraform, Docker, and Kubernetes
Best use case
The infrastructure skill is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using infrastructure should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/infrastructure/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
How infrastructure Compares
| Feature / Agent | infrastructure | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Infrastructure as Code patterns for deploying Guts nodes using Terraform, Docker, and Kubernetes
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Infrastructure Skill for Guts
You are managing infrastructure for a decentralized application with multiple node types.
## Deployment Targets
1. **Local Development**: Docker Compose
2. **Testing**: Kubernetes (k3s/kind)
3. **Production**: Cloud-agnostic Kubernetes + Terraform
## Terraform Patterns
### Module Structure
```
infra/
├── terraform/
│   ├── modules/
│   │   ├── network/
│   │   ├── compute/
│   │   └── storage/
│   ├── environments/
│   │   ├── dev/
│   │   ├── staging/
│   │   └── prod/
│   └── main.tf
```
### Example Module
```hcl
# modules/guts-node/main.tf
variable "node_count" {
  type        = number
  description = "Number of Guts nodes to deploy"
  default     = 3
}

variable "instance_type" {
  type        = string
  description = "Instance type for nodes"
  default     = "t3.medium"
}

variable "environment" {
  type        = string
  description = "Deployment environment (dev, staging, prod)"
}

# Resolve the latest Ubuntu 22.04 LTS AMI instead of hard-coding an ID
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "guts_node" {
  count         = var.node_count
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  tags = {
    Name        = "guts-node-${count.index}"
    Environment = var.environment
    Project     = "guts"
  }
}
```
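Each directory under `environments/` consumes the shared modules with environment-specific values. A minimal sketch of a dev environment (the module path and values here are assumptions, not part of the skill):

```hcl
# environments/dev/main.tf (illustrative)
module "guts_nodes" {
  source = "../../modules/guts-node"

  node_count    = 1
  instance_type = "t3.small"
  environment   = "dev"
}
```

Keeping per-environment differences in these thin wrappers, rather than in the modules themselves, is what makes the module layout reusable across dev, staging, and prod.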
## Docker Best Practices
### Multi-stage Builds
```dockerfile
# Build stage
FROM rust:1.75-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release --bin guts-node
# Runtime stage
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/guts-node /usr/local/bin/
EXPOSE 8080 9000
ENTRYPOINT ["guts-node"]
```
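The build stage copies the entire build context, so a `.dockerignore` keeps the context small and avoids cache-busting on unrelated changes. A minimal sketch (the exact entries depend on your repository layout):

```
# .dockerignore (illustrative)
target/
.git/
infra/
*.md
```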
### Docker Compose for Development
```yaml
version: '3.8'

services:
  node1:
    build: .
    ports:
      - "8081:8080"
    environment:
      - GUTS_NODE_ID=node1
      - GUTS_PEERS=node2:9000,node3:9000
    volumes:
      - node1-data:/data
  node2:
    build: .
    ports:
      - "8082:8080"
    environment:
      - GUTS_NODE_ID=node2
      - GUTS_PEERS=node1:9000,node3:9000
    volumes:
      - node2-data:/data
  node3:
    build: .
    ports:
      - "8083:8080"
    environment:
      - GUTS_NODE_ID=node3
      - GUTS_PEERS=node1:9000,node2:9000
    volumes:
      - node3-data:/data

volumes:
  node1-data:
  node2-data:
  node3-data:
```
## Kubernetes Patterns
### StatefulSet for Nodes
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: guts-node
spec:
  serviceName: guts-nodes
  replicas: 3
  selector:
    matchLabels:
      app: guts-node
  template:
    metadata:
      labels:
        app: guts-node
    spec:
      containers:
        - name: guts-node
          image: guts/node:latest
          ports:
            - containerPort: 8080
              name: api
            - containerPort: 9000
              name: p2p
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```
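`serviceName: guts-nodes` must refer to a headless Service, which gives each pod a stable DNS name (`guts-node-0.guts-nodes`, and so on) that peers can dial directly. A minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: guts-nodes
spec:
  clusterIP: None   # headless: per-pod DNS records, no load balancing
  selector:
    app: guts-node
  ports:
    - name: api
      port: 8080
    - name: p2p
      port: 9000
```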
## Monitoring Stack
- **Metrics**: Prometheus with custom Rust metrics
- **Logs**: Loki + Grafana
- **Tracing**: Jaeger with OpenTelemetry
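Assuming the nodes expose Prometheus metrics on the API port (an assumption; the skill does not specify the endpoint), a scrape job can discover pods by their `app` label. A sketch:

```yaml
scrape_configs:
  - job_name: guts-nodes
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods labeled app=guts-node
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: guts-node
        action: keep
      # Scrape only the container port named "api"
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: api
        action: keep
```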
## Security Checklist
- [ ] TLS certificates via cert-manager
- [ ] Network policies for pod isolation
- [ ] Secrets management with external-secrets
- [ ] Regular security scanning with Trivy
- [ ] RBAC for Kubernetes access

Related Skills
collecting-infrastructure-metrics
This skill enables Claude to collect comprehensive infrastructure performance metrics across compute, storage, network, containers, load balancers, and databases. It is triggered when the user requests "collect infrastructure metrics", "monitor server performance", "set up performance dashboards", or needs to analyze system resource utilization. The skill configures metrics collection, sets up aggregation, and helps create infrastructure dashboards for health monitoring and capacity tracking. It supports configuration for Prometheus, Datadog, and CloudWatch.
detecting-infrastructure-drift
This skill enables Claude to detect infrastructure drift from a desired state. It uses the `drift-detect` command to identify discrepancies between the current infrastructure configuration and the intended configuration, as defined in infrastructure-as-code tools like Terraform. Use this skill when the user asks to check for infrastructure drift, identify configuration changes, or ensure that the current infrastructure matches the desired state. It is particularly useful in DevOps workflows for maintaining infrastructure consistency and preventing configuration errors. Trigger this skill when the user mentions "drift detection," "infrastructure changes," "configuration drift," or requests a "drift report."
generating-infrastructure-as-code
This skill enables Claude to generate Infrastructure as Code (IaC) configurations. It uses the infrastructure-as-code-generator plugin to create production-ready IaC for Terraform, CloudFormation, Pulumi, ARM Templates, and CDK. Use this skill when the user requests IaC configurations for cloud infrastructure, specifying the platform (e.g., Terraform, CloudFormation) and cloud provider (e.g., AWS, Azure, GCP), or when the user needs help automating infrastructure deployment. Trigger terms include: "generate IaC", "create Terraform", "CloudFormation template", "Pulumi program", "infrastructure code".
checking-infrastructure-compliance
Execute use when you need to work with compliance checking. This skill provides compliance monitoring and validation with comprehensive guidance and automation. Trigger with phrases like "check compliance", "validate policies", or "audit compliance".
import-infrastructure-as-code
Import existing Azure resources into Terraform using Azure CLI discovery and Azure Verified Modules (AVM). Use when asked to reverse-engineer live Azure infrastructure, generate Infrastructure as Code from existing subscriptions/resource groups/resource IDs, map dependencies, derive exact import addresses from downloaded module source, prevent configuration drift, and produce AVM-based Terraform files ready for validation and planning across any Azure resource type.
terraform-infrastructure
Terraform infrastructure as code workflow for provisioning cloud resources, creating reusable modules, and managing infrastructure at scale.
infrastructure-reporting
Generate comprehensive network infrastructure reports including health status, performance analysis, security audits, and capacity planning recommendations.
infrastructure-management
Manage network hosts and devices across all UniFi sites. Monitor host status, device configuration, and infrastructure health for comprehensive inventory management.
test-infrastructure
When invoked:
cloud-infrastructure
Cloud infrastructure design and deployment patterns for AWS, Azure, and GCP. Use when designing cloud architectures, implementing IaC with Terraform, optimizing costs, or setting up multi-region deployments.
Trieve — AI Search Infrastructure
Svix — Webhook Delivery Infrastructure
You are an expert in Svix, the enterprise webhook delivery platform. You help developers send reliable webhooks to customers with automatic retries, signature verification, delivery monitoring, endpoint management, and event type filtering — replacing custom webhook infrastructure with a purpose-built service used by companies like Clerk, Resend, and Liveblocks.