castai-deploy-integration

Deploy CAST AI across multi-cloud Kubernetes clusters with Terraform modules. Use when onboarding EKS, GKE, or AKS clusters to CAST AI using infrastructure-as-code patterns. Trigger with phrases like "deploy cast ai", "cast ai eks", "cast ai gke", "cast ai aks", "cast ai terraform module".

25 stars

Best use case

castai-deploy-integration is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using castai-deploy-integration should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/castai-deploy-integration/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/castai-deploy-integration/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/castai-deploy-integration/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How castai-deploy-integration Compares

| Feature / Agent | castai-deploy-integration | Standard Approach |
|-----------------|---------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It walks an AI agent through onboarding EKS, GKE, and AKS clusters to CAST AI using the official Terraform modules, covering IAM setup, autoscaler policies, and multi-cluster patterns. It activates on phrases like "deploy cast ai", "cast ai eks", "cast ai gke", "cast ai aks", and "cast ai terraform module".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# CAST AI Deploy Integration

## Overview

Deploy CAST AI to EKS, GKE, and AKS clusters using official Terraform modules. Each cloud provider has a dedicated CAST AI module that handles IAM roles, node configuration, and autoscaler setup.

## Prerequisites

- Terraform 1.0+
- CAST AI Full Access API key
- Cloud provider credentials configured
- Existing Kubernetes cluster
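
The per-cloud module blocks below reference `var.*` values and the CAST AI provider without declaring them. A minimal sketch of that shared wiring (the variable names simply mirror the references used in the examples; pin provider and module versions as appropriate for your setup):

```hcl
# Shared wiring assumed by the per-cloud examples below.
terraform {
  required_providers {
    castai = {
      source = "castai/castai"
    }
  }
}

variable "castai_api_token" {
  description = "CAST AI Full Access API key"
  type        = string
  sensitive   = true
}

variable "cluster_name" {
  type = string
}

variable "region" {
  type = string
}

provider "castai" {
  api_token = var.castai_api_token
}
```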

## Instructions

### EKS Deployment

```hcl
# main.tf -- EKS cluster onboarding
module "castai_eks" {
  source  = "castai/eks-cluster/castai"
  version = "~> 3.0"

  api_token           = var.castai_api_token
  aws_account_id      = data.aws_caller_identity.current.account_id
  aws_cluster_region  = var.region
  aws_cluster_name    = var.cluster_name

  # IAM role for CAST AI to manage nodes
  aws_instance_profile_arn = aws_iam_instance_profile.castai.arn

  # Autoscaler configuration
  autoscaler_policies_json = jsonencode({
    enabled = true
    unschedulablePods = { enabled = true }
    nodeDownscaler = {
      enabled = true
      emptyNodes = { enabled = true, delaySeconds = 300 }
    }
    spotInstances = {
      enabled = true
      spotDiversityEnabled = true
    }
    clusterLimits = {
      enabled = true
      cpu = { minCores = 4, maxCores = 200 }
    }
  })

  # Node templates -- the "default" key must exist in the module's
  # node_configurations input (omitted here; see the module's registry examples)
  default_node_configuration = module.castai_eks.castai_node_configurations["default"]
}
```
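
The EKS block above references an `aws_iam_instance_profile.castai` resource it does not define. An illustrative sketch of its shape is below; the exact trust policy and attached permissions CAST AI requires are listed in its IAM documentation, so treat the names and policies here as placeholders:

```hcl
# Illustrative only -- attach the policies from CAST AI's IAM docs to this role.
resource "aws_iam_role" "castai" {
  name = "castai-node-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_instance_profile" "castai" {
  name = "castai-node-profile"
  role = aws_iam_role.castai.name
}
```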

### GKE Deployment

```hcl
module "castai_gke" {
  source  = "castai/gke-cluster/castai"
  version = "~> 2.0"

  api_token            = var.castai_api_token
  project_id           = var.gcp_project_id
  gke_cluster_name     = var.cluster_name
  gke_cluster_location = var.region

  # Service-account credentials from the companion castai/gke-iam/castai module
  # (the cluster CA certificate is not a credential CAST AI can act with)
  gke_credentials = module.castai_gke_iam.private_key

  autoscaler_policies_json = jsonencode({
    enabled = true
    unschedulablePods = { enabled = true }
    nodeDownscaler = {
      enabled = true
      emptyNodes = { enabled = true, delaySeconds = 300 }
    }
  })
}
```
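
The `gke_credentials` input expects GCP service-account credentials, which are typically produced by CAST AI's companion `castai/gke-iam/castai` module rather than built by hand. A hedged sketch of that wiring (input and output names follow the module's published examples but should be verified against its registry page):

```hcl
# Companion module that creates the service account CAST AI uses on GCP.
module "castai_gke_iam" {
  source = "castai/gke-iam/castai"

  project_id       = var.gcp_project_id
  gke_cluster_name = var.cluster_name
}

# Its private_key output is then passed as gke_credentials to the
# castai/gke-cluster/castai module.
```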

### AKS Deployment

```hcl
module "castai_aks" {
  source  = "castai/aks/castai"
  version = "~> 1.0"

  api_token              = var.castai_api_token
  aks_cluster_name       = var.cluster_name
  aks_cluster_region     = var.region
  node_resource_group    = azurerm_kubernetes_cluster.this.node_resource_group
  azure_subscription_id  = data.azurerm_subscription.current.subscription_id
  azure_tenant_id        = data.azurerm_client_config.current.tenant_id

  autoscaler_policies_json = jsonencode({
    enabled = true
    unschedulablePods = { enabled = true }
    spotInstances = { enabled = true }
  })
}
```

### Multi-Cluster Deployment Pattern

```hcl
# Deploy CAST AI across all clusters with a for_each
variable "clusters" {
  type = map(object({
    name     = string
    provider = string  # eks, gke, aks
    region   = string
    max_cpu  = number
  }))
}

# Then reference the appropriate module per provider
```
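
Because Terraform cannot choose a module `source` dynamically, the usual pattern is one module block per cloud, each filtered from the shared map with a `for_each` expression. A sketch for the EKS case (argument names follow the EKS example above; repeat the same filter for the GKE and AKS modules):

```hcl
# One block per provider; for_each selects only that provider's clusters.
module "castai_eks_clusters" {
  source   = "castai/eks-cluster/castai"
  for_each = { for k, c in var.clusters : k => c if c.provider == "eks" }

  api_token          = var.castai_api_token
  aws_cluster_name   = each.value.name
  aws_cluster_region = each.value.region

  autoscaler_policies_json = jsonencode({
    enabled = true
    clusterLimits = {
      enabled = true
      cpu     = { minCores = 4, maxCores = each.value.max_cpu }
    }
  })
}
```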

## Error Handling

| Issue | Cause | Solution |
|-------|-------|----------|
| IAM role error | Missing permissions | Check CAST AI IAM docs for required policies |
| Module version conflict | Terraform lock | Run `terraform init -upgrade` |
| Cluster not appearing | Wrong credentials | Verify cloud provider auth |
| Policies not applying | JSON encoding error | Validate `jsonencode()` output |

## Resources

- [EKS Module](https://registry.terraform.io/modules/castai/eks-cluster/castai/latest)
- [GKE Module](https://registry.terraform.io/modules/castai/gke-cluster/castai/latest)
- [AKS Module](https://registry.terraform.io/modules/castai/aks/castai/latest)
- [CAST AI Terraform Provider](https://registry.terraform.io/providers/castai/castai/latest/docs)

## Next Steps

For webhook-based automation, see `castai-webhooks-events`.

Related Skills

All from ComeOnOliver/skillshub:

- zapier-integration-helper: Auto-activating skill for Business Automation. Triggers on "zapier integration helper".
- vertex-ai-deployer: Auto-activating skill for ML Deployment. Triggers on "vertex ai deployer".
- sagemaker-endpoint-deployer: Auto-activating skill for ML Deployment. Triggers on "sagemaker endpoint deployer".
- orchestrating-deployment-pipelines: Deployment automation and orchestration for CI/CD. Trigger with phrases like "deploy application", "create pipeline", or "automate deployment".
- deploying-monitoring-stacks: Deploys monitoring stacks (Prometheus, Grafana, Datadog), generating production-ready configurations with multi-platform support. Use when setting up or configuring monitoring infrastructure.
- deploying-machine-learning-models: Deploys machine learning models to production, automating the serving workflow and error handling. Triggered by phrases like "deploy model", "productionize model", "serve model", or "model deployment".
- managing-deployment-rollbacks: Deployment automation and orchestration for CI/CD. Trigger with phrases like "deploy application", "create pipeline", or "automate deployment".
- kubernetes-deployment-creator: Auto-activating skill for DevOps Advanced. Triggers on "kubernetes deployment creator".
- integration-test-setup: Auto-activating skill for Test Automation. Triggers on "integration test setup".
- running-integration-tests: Runs and manages integration test suites, automating environment setup, database creation and seeding, migrations, service orchestration, and cleanup. Triggered by "/run-integration", "/rit", or requests mentioning "integration tests".
- integration-test-generator: Auto-activating skill for API Integration. Triggers on "integration test generator".
- fathom-ci-integration: Test Fathom integrations in CI/CD pipelines. Trigger with phrases like "fathom CI", "fathom github actions", "test fathom pipeline".