gcp-cloud-sql

Provision and manage Cloud SQL instances on Google Cloud for MySQL, PostgreSQL, and SQL Server. Configure high availability, read replicas, automated backups, IAM database authentication, the Cloud SQL Auth Proxy, and Terraform deployments. Use for managed relational databases on GCP.

26 stars

Best use case

gcp-cloud-sql is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using gcp-cloud-sql should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/gcp-cloud-sql/SKILL.md --create-dirs "https://raw.githubusercontent.com/TerminalSkills/skills/main/skills/gcp-cloud-sql/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/gcp-cloud-sql/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How gcp-cloud-sql Compares

Feature / Agent          | gcp-cloud-sql  | Standard Approach
Platform Support         | Not specified  | Limited / Varies
Context Awareness        | High           | Baseline
Installation Complexity  | Unknown        | N/A

Frequently Asked Questions

What does this skill do?

The skill provisions and manages Cloud SQL instances on Google Cloud for MySQL, PostgreSQL, and SQL Server. It covers high availability, read replicas, automated backups, IAM database authentication, the Cloud SQL Auth Proxy, and Terraform deployments for managed relational databases on GCP.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# GCP Cloud SQL

## Overview

Cloud SQL is Google Cloud's managed relational database service for MySQL, PostgreSQL, and SQL Server. It handles patches, upgrades, replication, automated backups, point-in-time recovery, and HA failover so applications focus on schema and queries instead of database administration.

## Instructions

### Core Concepts

- **Instance** — a managed VM running MySQL/Postgres/SQL Server with attached storage
- **High Availability (HA)** — synchronous replica in another zone with automatic failover
- **Read replica** — async read-only copy for scaling reads or cross-region DR
- **Cloud SQL Auth Proxy** — local sidecar that handles IAM auth, TLS, and connection routing
- **Private IP** — instance reachable only via VPC peering, never the public internet
- **Point-in-time recovery (PITR)** — restore to any second within the retention window using binary logs

### Prerequisites

```bash
gcloud services enable sqladmin.googleapis.com servicenetworking.googleapis.com

# One-time: reserve a private range for VPC peering (for private IP instances)
gcloud compute addresses create google-managed-services-default \
  --global --purpose=VPC_PEERING --prefix-length=16 \
  --network=default

gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-default --network=default
```

### Creating a PostgreSQL Instance with HA

```bash
gcloud sql instances create orders-db \
  --database-version=POSTGRES_15 \
  --tier=db-custom-2-7680 \
  --region=us-central1 \
  --availability-type=REGIONAL \
  --network=default \
  --no-assign-ip \
  --backup-start-time=02:00 \
  --enable-point-in-time-recovery \
  --retained-backups-count=14 \
  --database-flags=cloudsql.iam_authentication=on,log_min_duration_statement=500
```
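The `db-custom` tier above encodes the machine shape directly: `db-custom-<vCPUs>-<memoryMB>`, so `db-custom-2-7680` is 2 vCPUs and 7.5 GB of RAM. A small helper, a sketch with a hypothetical function name, can build the string from round gigabyte values:

```shell
# Build a Cloud SQL custom tier string from vCPUs and memory in GB.
# Cloud SQL expects the memory component in MB: db-custom-<vCPUs>-<MB>.
# (Memory must be a multiple of 256 MB and roughly 0.9-6.5 GB per vCPU.)
custom_tier() {
  vcpus="$1"
  mem_gb="$2"
  printf 'db-custom-%d-%d\n' "$vcpus" "$((mem_gb * 1024))"
}

custom_tier 2 8   # db-custom-2-8192
custom_tier 4 16  # db-custom-4-16384
```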

```bash
# Set the postgres password (or skip and use IAM auth exclusively)
gcloud sql users set-password postgres \
  --instance=orders-db \
  --password="$(openssl rand -base64 24)"

# Create application database and user
gcloud sql databases create orders --instance=orders-db
gcloud sql users create app_user --instance=orders-db --password="$(openssl rand -base64 24)"
```

### Creating a MySQL Instance

```bash
gcloud sql instances create analytics-db \
  --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-2 \
  --region=us-central1 \
  --storage-type=SSD \
  --storage-size=100 \
  --storage-auto-increase \
  --backup-start-time=03:00 \
  --enable-bin-log
```

### Read Replicas

```bash
# Read replica in the same region (scale reads)
gcloud sql instances create orders-db-replica \
  --master-instance-name=orders-db \
  --region=us-central1 \
  --tier=db-custom-2-7680
```

```bash
# Cross-region replica (DR + low-latency reads in another region)
gcloud sql instances create orders-db-eu \
  --master-instance-name=orders-db \
  --region=europe-west1 \
  --tier=db-custom-2-7680
```

```bash
# Promote a replica to a standalone primary (DR failover)
gcloud sql instances promote-replica orders-db-eu
```

### Connecting via Cloud SQL Auth Proxy

```bash
# Get the connection name
gcloud sql instances describe orders-db --format="value(connectionName)"
# Returns: my-project:us-central1:orders-db

# Run the proxy locally
./cloud-sql-proxy --port 5432 my-project:us-central1:orders-db

# In another terminal:
psql "host=127.0.0.1 port=5432 user=app_user dbname=orders"
```
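Because the proxy always listens on loopback, the app-side connection string only varies by user and database; the proxy already encrypts traffic to the instance, so `sslmode=disable` on the local hop does not weaken security. A small helper (hypothetical name) keeps the DSN consistent across environments:

```shell
# Build a libpq-style DSN for a connection routed through the local
# Cloud SQL Auth Proxy. Host and port are fixed because the proxy
# listens on loopback; TLS to the instance is the proxy's job, so
# sslmode=disable is safe on this hop.
build_dsn() {
  user="$1"
  db="$2"
  printf 'host=127.0.0.1 port=5432 user=%s dbname=%s sslmode=disable\n' "$user" "$db"
}

build_dsn app_user orders
# host=127.0.0.1 port=5432 user=app_user dbname=orders sslmode=disable
```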

```yaml
# Cloud Run sidecar pattern (Cloud Run handles the proxy automatically with --add-cloudsql-instances)
# For GKE, deploy the proxy as a sidecar in the same pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      serviceAccountName: api-sa  # bound to a GSA with roles/cloudsql.client
      containers:
        - name: api
          image: gcr.io/my-project/api:latest
          env:
            - name: DATABASE_URL
              value: "postgresql://app_user@127.0.0.1:5432/orders"
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
          args:
            - "--auto-iam-authn"
            - "--private-ip"
            - "my-project:us-central1:orders-db"
          securityContext:
            runAsNonRoot: true
```

### IAM Database Authentication

```bash
# Add a service account as a Postgres database user
gcloud sql users create app-sa@my-project.iam \
  --instance=orders-db \
  --type=cloud_iam_service_account
```

```sql
-- Grant database privileges (run as postgres superuser)
GRANT CONNECT ON DATABASE orders TO "app-sa@my-project.iam";
GRANT USAGE ON SCHEMA public TO "app-sa@my-project.iam";
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO "app-sa@my-project.iam";
```

The application uses no static password — the proxy fetches a short-lived OAuth token from the metadata server and passes it as the database password.

### Backups and Point-in-Time Recovery

```bash
# Manual on-demand backup before risky migration
gcloud sql backups create --instance=orders-db --description="pre-v2-migration"
```

```bash
# Restore the database to a specific point in time (requires PITR enabled)
gcloud sql instances clone orders-db orders-db-recovery \
  --point-in-time='2026-04-15T14:30:00Z'
```
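`--point-in-time` takes an RFC 3339 UTC timestamp. When recovering from a bad write, a helper like this (GNU `date`; an assumption — BSD/macOS `date` uses different flags) computes a point just before the incident:

```shell
# Print an RFC 3339 UTC timestamp N minutes in the past (GNU date),
# suitable for --point-in-time when cloning to just before a bad write.
recovery_point() {
  date -u -d "${1:-10} minutes ago" '+%Y-%m-%dT%H:%M:%SZ'
}

recovery_point 10
```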

### Terraform

```hcl
resource "google_sql_database_instance" "orders" {
  name             = "orders-db"
  database_version = "POSTGRES_15"
  region           = "us-central1"
  deletion_protection = true

  settings {
    tier              = "db-custom-2-7680"
    availability_type = "REGIONAL"

    ip_configuration {
      ipv4_enabled    = false
      private_network = data.google_compute_network.default.id
    }

    backup_configuration {
      enabled                        = true
      point_in_time_recovery_enabled = true
      start_time                     = "02:00"
      backup_retention_settings {
        retained_backups = 14
      }
    }

    database_flags {
      name  = "cloudsql.iam_authentication"
      value = "on"
    }
  }
}

resource "google_sql_user" "app_sa" {
  name     = "app-sa@${var.project_id}.iam"
  instance = google_sql_database_instance.orders.name
  type     = "CLOUD_IAM_SERVICE_ACCOUNT"
}
```
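The read replicas from earlier can live in the same Terraform module. A minimal sketch, assuming the primary resource and network data source defined above:

```hcl
resource "google_sql_database_instance" "orders_replica" {
  name                 = "orders-db-replica"
  database_version     = "POSTGRES_15"
  region               = "us-central1"
  master_instance_name = google_sql_database_instance.orders.name
  # A lost replica can be recreated from the primary, so protection is optional.
  deletion_protection  = false

  settings {
    tier              = "db-custom-2-7680"
    availability_type = "ZONAL"

    ip_configuration {
      ipv4_enabled    = false
      private_network = data.google_compute_network.default.id
    }
    # Note: backup_configuration cannot be enabled on a replica;
    # backups and PITR are configured on the primary.
  }
}
```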
## Examples

### Example 1 — Migrate a self-hosted Postgres to Cloud SQL

User wants to move a 200 GB Postgres database to Cloud SQL with minimal downtime. Create a target instance with `--availability-type=REGIONAL` and PITR enabled, use Database Migration Service to perform a continuous logical replication from the source, run validation queries, then promote the destination during a short cutover window. Hand the user the new connection string via the Auth Proxy and a Terraform module for the instance.

### Example 2 — Add cross-region read replica for EU users

User reports high read latency from European users. Create a read replica in `europe-west1` with the same tier as the primary, point the EU app instances at the replica's connection name through their Auth Proxy sidecars, and verify replication lag stays under 5 seconds via Cloud Monitoring's `database/replication/replica_lag` metric.

## Guidelines

- Use **private IP only** in production — never expose Cloud SQL on a public IP
- Always set `availability-type=REGIONAL` for production workloads
- Enable PITR (`--enable-point-in-time-recovery`) — it's cheap insurance against accidental writes
- Prefer **IAM database authentication** over passwords — works for service accounts and human users
- Run the Cloud SQL Auth Proxy as a sidecar (Cloud Run, GKE) rather than embedding TLS logic in the app
- Pin database flags via Terraform so they survive instance recreation
- For schema migrations, take an on-demand backup first
- Monitor `cloudsql.googleapis.com/database/cpu/utilization` and connection count — alert at 80%
- Read replicas are async — never write to them, and assume eventual consistency when reading from them

Related Skills

All from TerminalSkills/skills:

  • hetzner-cloud — Manage Hetzner Cloud infrastructure from the terminal: create servers, manage VPS instances, set up firewalls, configure networks, manage volumes, create snapshots, and handle SSH keys. Covers the hcloud CLI for all resource types. For deploying applications on top of Hetzner servers, see coolify.
  • gcp-cloud-storage — Manage Google Cloud Storage for scalable object storage. Create and configure buckets, upload and organize objects, generate signed URLs for secure temporary access, set lifecycle rules for cost optimization, and configure access control.
  • gcp-cloud-run — Deploy serverless containers on Google Cloud Run: services for HTTP traffic, jobs for batch and scheduled tasks, and worker pools for always-on pull-based background processing. Build and push container images, configure auto-scaling from zero, split traffic for canary deploys, and set up custom domains with managed TLS.
  • gcp-cloud-functions — Build serverless functions on Google Cloud Functions. Deploy HTTP and event-driven functions triggered by Pub/Sub, Cloud Storage, and Firestore. Configure runtime settings, manage dependencies, and connect to other GCP services.
  • gcloud — Google Cloud CLI for managing GCP resources: Compute Engine, Cloud Storage, Cloud Functions, IAM, GKE, and other Google Cloud services from the terminal.
  • cloudflare-workers — Build and deploy applications on the Cloudflare Workers edge computing platform. Covers the Workers runtime, Wrangler CLI, KV, D1, R2, Durable Objects, Queues, and Hyperdrive.
  • cloudflare-vectorize — Serverless vector database at the edge with Cloudflare Vectorize: semantic search on Cloudflare Workers, RAG pipelines at the edge, low-latency vector similarity search, and storing and querying embeddings without managing a separate vector database.
  • cloudflare-ai — Run LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with Cloudflare Workers AI, with zero cold starts, pay-per-use pricing, and integration with Workers, Pages, and Vectorize.
  • cloud-resource-analyzer — Find orphaned, idle, and underutilized cloud resources across AWS, GCP, or Azure accounts: audit cloud spending and locate unused EBS volumes, stale snapshots, unattached IPs, idle load balancers, or oversized RDS instances.
  • aws-cloudfront — Configure Amazon CloudFront for global content delivery. Set up distributions with S3 and ALB origins, define cache behaviors and TTLs, invalidate cached content, and use Lambda@Edge for request/response manipulation at the edge.
  • zustand — Manage global React state with Zustand's hook-based stores, selectors for performance, middleware (persist, devtools, immer), computed values, and async actions in a small, un-opinionated API.
  • zoho — Integrate and automate Zoho products (CRM, Books, Desk, Projects, Mail, Creator): build custom integrations via Zoho APIs, automate workflows with Deluge scripting, sync data between Zoho apps and external systems, manage leads and invoicing, and set up webhooks.