gcp-cloud-storage
Manage Google Cloud Storage for scalable object storage. Create and configure buckets, upload and organize objects, generate signed URLs for secure temporary access, set lifecycle rules for cost optimization, and configure access control.
Best use case
gcp-cloud-storage is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/gcp-cloud-storage/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How gcp-cloud-storage Compares
| Feature / Agent | gcp-cloud-storage | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
Manage Google Cloud Storage for scalable object storage. Create and configure buckets, upload and organize objects, generate signed URLs for secure temporary access, set lifecycle rules for cost optimization, and configure access control.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# GCP Cloud Storage
Google Cloud Storage is a unified object storage service with global edge caching. It offers multiple storage classes (Standard, Nearline, Coldline, Archive) for cost optimization, strong consistency, and integration with all GCP services.
## Core Concepts
- **Bucket** — globally unique container, scoped to a project and location
- **Object** — a file with metadata, identified by name (path) within a bucket
- **Storage Class** — Standard, Nearline (30d), Coldline (90d), Archive (365d)
- **Signed URL** — time-limited URL for authenticated access without credentials
- **Lifecycle Rule** — automatic actions based on age, class, or conditions
- **IAM / ACL** — access control at bucket and object level
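The minimum storage durations above determine which class an object is eligible for; this can be sketched as a small, illustrative helper that picks the cheapest class whose minimum duration has elapsed (in practice, lifecycle rules should perform these transitions server-side, as shown in the Lifecycle Rules section):

```python
# Illustrative helper: map an object's age in days to the cheapest
# storage class whose minimum storage duration has been met. Mirrors
# the tiers listed above; actual transitions belong in lifecycle rules.
MIN_DURATION_DAYS = [
    ("ARCHIVE", 365),
    ("COLDLINE", 90),
    ("NEARLINE", 30),
    ("STANDARD", 0),
]

def cheapest_class(age_days: int) -> str:
    for storage_class, min_days in MIN_DURATION_DAYS:
        if age_days >= min_days:
            return storage_class
    return "STANDARD"

print(cheapest_class(10))   # STANDARD
print(cheapest_class(45))   # NEARLINE
print(cheapest_class(400))  # ARCHIVE
```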
## Bucket Operations
```bash
# Create a bucket with location and default storage class
gcloud storage buckets create gs://my-app-assets-prod \
  --location=us-central1 \
  --default-storage-class=STANDARD \
  --uniform-bucket-level-access
```
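Bucket names are globally unique and validated at creation time, so a quick client-side check can catch typos before the API call. A sketch covering a simplified subset of the naming rules (the full rules also allow dots, with extra length constraints):

```python
import re

# Simplified check of common GCS bucket-name rules: 3-63 characters,
# lowercase letters, digits, hyphens, underscores; must start and end
# with a letter or digit; must not begin with the reserved "goog"
# prefix. Not exhaustive -- dot-containing names have extra rules.
def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False
    if name.startswith("goog"):
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9_-]*[a-z0-9]", name) is not None

print(is_valid_bucket_name("my-app-assets-prod"))  # True
print(is_valid_bucket_name("My-Bucket"))           # False (uppercase)
```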
```bash
# List buckets
gcloud storage ls
```
```bash
# Set bucket to public read (for static hosting)
gcloud storage buckets add-iam-policy-binding gs://my-app-assets-prod \
  --member=allUsers \
  --role=roles/storage.objectViewer
```
```bash
# Enable versioning
gcloud storage buckets update gs://my-app-assets-prod --versioning
```
## Object Operations
```bash
# Upload a file
gcloud storage cp ./build/app.zip gs://my-app-assets-prod/releases/v1.2.0/app.zip
```
```bash
# Sync a directory
gcloud storage rsync ./dist gs://my-app-assets-prod/static/ \
  --recursive \
  --delete-unmatched-destination-objects \
  --cache-control="public, max-age=86400"
```
```bash
# Copy between buckets
gcloud storage cp gs://source-bucket/data.csv gs://dest-bucket/data.csv
```
```bash
# List objects with prefix
gcloud storage ls gs://my-app-assets-prod/releases/ --recursive
```
```bash
# Remove objects
gcloud storage rm gs://my-app-assets-prod/old-file.txt
```
## Signed URLs
```python
# Generate signed URL for download (1 hour)
from google.cloud import storage
from datetime import timedelta
client = storage.Client()
bucket = client.bucket('my-app-assets-prod')
blob = bucket.blob('reports/q4.pdf')
url = blob.generate_signed_url(
    version='v4',
    expiration=timedelta(hours=1),
    method='GET'
)
print(f"Download URL: {url}")
```
```python
# Generate signed URL for upload
url = blob.generate_signed_url(
    version='v4',
    expiration=timedelta(minutes=15),
    method='PUT',
    content_type='application/octet-stream'
)
print(f"Upload URL: {url}")
# Client uploads with: curl -X PUT -H "Content-Type: application/octet-stream" --data-binary @file "$url"
```
```bash
# Generate a signed URL with the gcloud CLI
gcloud storage sign-url gs://my-app-assets-prod/reports/q4.pdf \
  --duration=1h \
  --private-key-file=service-account.json
```
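A V4 signed URL encodes its validity window in the query string (`X-Goog-Date` for the signing time, `X-Goog-Expires` for the lifetime in seconds), so a client can compute the expiry locally before attempting a request. A minimal sketch, assuming a standard V4 URL:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

# Compute when a V4 signed URL expires by parsing the signing
# timestamp and lifetime out of its query string.
def signed_url_expiry(url: str) -> datetime:
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(qs["X-Goog-Date"][0], "%Y%m%dT%H%M%SZ")
    signed_at = signed_at.replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Goog-Expires"][0]))

url = ("https://storage.googleapis.com/my-app-assets-prod/reports/q4.pdf"
       "?X-Goog-Algorithm=GOOG4-RSA-SHA256"
       "&X-Goog-Date=20250101T120000Z&X-Goog-Expires=3600&X-Goog-Signature=abc")
print(signed_url_expiry(url).isoformat())  # 2025-01-01T13:00:00+00:00
```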
## Lifecycle Rules
```json
// lifecycle-config.json — transition and expire objects automatically
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30, "matchesPrefix": ["logs/"]}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 90, "matchesPrefix": ["logs/"]}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365, "matchesPrefix": ["logs/"]}
    },
    {
      "action": {"type": "AbortIncompleteMultipartUpload"},
      "condition": {"age": 7}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 3, "isLive": false}
    }
  ]
}
```
```bash
# Apply lifecycle rules
gcloud storage buckets update gs://my-app-assets-prod \
  --lifecycle-file=lifecycle-config.json
```
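A misspelled key in a lifecycle config can silently do nothing, so a quick structural check before applying can catch obvious typos. A sketch validating the shape used in the config above (not a substitute for server-side validation, and the action list here covers only the types shown):

```python
# Minimal structural check for a lifecycle config of the shape shown
# above: a top-level "rule" list where each entry has an "action" with
# a known "type" and a "condition" dict.
VALID_ACTIONS = {"Delete", "SetStorageClass", "AbortIncompleteMultipartUpload"}

def check_lifecycle(config: dict) -> list:
    errors = []
    for i, rule in enumerate(config.get("rule", [])):
        action = rule.get("action", {})
        if action.get("type") not in VALID_ACTIONS:
            errors.append(f"rule {i}: unknown action type {action.get('type')!r}")
        if action.get("type") == "SetStorageClass" and "storageClass" not in action:
            errors.append(f"rule {i}: SetStorageClass needs 'storageClass'")
        if not isinstance(rule.get("condition"), dict):
            errors.append(f"rule {i}: missing condition")
    return errors

good = {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 365}}]}
bad = {"rule": [{"action": {"type": "SetStorageClass"}, "condition": {"age": 30}}]}
print(check_lifecycle(good))  # []
print(check_lifecycle(bad))   # ["rule 0: SetStorageClass needs 'storageClass'"]
```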
## Static Website Hosting
```bash
# Configure bucket for static website
gcloud storage buckets update gs://my-app-website \
  --web-main-page-suffix=index.html \
  --web-error-page=404.html
```
```bash
# Upload website files with appropriate content types
gcloud storage cp -r ./build/* gs://my-app-website/ \
  --cache-control="public, max-age=3600"
```
## Event Notifications
```bash
# Notify Pub/Sub when objects are created
gcloud storage buckets notifications create gs://my-app-assets-prod \
  --topic=storage-events \
  --event-types=OBJECT_FINALIZE \
  --object-prefix=uploads/
```
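On the receiving side, a Pub/Sub push delivery wraps the object's metadata as base64-encoded JSON in the message `data` field, with fields like `eventType` in the message attributes. A minimal decoder sketch, assuming a push-subscription envelope:

```python
import base64
import json

# Decode a Pub/Sub push envelope carrying a Cloud Storage notification:
# attributes identify the event type, and the base64 "data" field holds
# the object's metadata as JSON.
def decode_gcs_event(envelope: dict) -> dict:
    message = envelope["message"]
    metadata = json.loads(base64.b64decode(message["data"]))
    return {
        "event_type": message["attributes"]["eventType"],
        "bucket": metadata["bucket"],
        "name": metadata["name"],
        "size": int(metadata.get("size", 0)),
    }

# Example push body for an OBJECT_FINALIZE event
envelope = {
    "message": {
        "attributes": {"eventType": "OBJECT_FINALIZE"},
        "data": base64.b64encode(json.dumps({
            "bucket": "my-app-assets-prod",
            "name": "uploads/photo.jpg",
            "size": "2048",
        }).encode()).decode(),
    }
}
print(decode_gcs_event(envelope))
```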
## CORS Configuration
```json
// cors-config.json — allow browser uploads
[
  {
    "origin": ["https://myapp.com"],
    "method": ["GET", "PUT", "POST"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
```
```bash
# Apply CORS
gcloud storage buckets update gs://my-app-assets-prod --cors-file=cors-config.json
```
## Access Control
```bash
# Grant a service account read access
gcloud storage buckets add-iam-policy-binding gs://my-app-assets-prod \
  --member=serviceAccount:app@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer
```
```bash
# Remove public access
gcloud storage buckets remove-iam-policy-binding gs://my-app-assets-prod \
  --member=allUsers \
  --role=roles/storage.objectViewer
```
## Best Practices
- Enable uniform bucket-level access (disable ACLs) for simpler permissions
- Use lifecycle rules to automatically transition cold data to cheaper classes
- Generate signed URLs for temporary access instead of making objects public
- Enable versioning on buckets with critical data
- Use `gcloud storage rsync` for efficient directory synchronization
- Set appropriate Cache-Control headers for CDN and browser caching
- Use Pub/Sub notifications for event-driven processing of uploads
- Enable Object Versioning + lifecycle rules to limit stored versions
Related Skills
s3-storage
Manages S3-compatible object storage (AWS S3, MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2, Wasabi, Supabase Storage). Use when the user wants to create buckets, upload/download files, set up lifecycle policies, configure CORS, manage presigned URLs, implement multipart uploads, set up replication, handle versioning, configure access policies, or build file management features on top of S3-compatible APIs. Trigger words: s3, minio, r2, object storage, bucket, presigned url, multipart upload, lifecycle policy, s3 cors, storage backend, file storage, blob storage, spaces, backblaze, wasabi.
hetzner-cloud
Manage Hetzner Cloud infrastructure from the terminal. Use when a user asks to create a Hetzner server, manage VPS instances, set up firewalls, configure networks, manage volumes, create snapshots, handle SSH keys, or provision infrastructure on Hetzner. Covers the hcloud CLI for all resource types. For deploying applications on top of Hetzner servers, see coolify.
gcp-cloud-sql
Provision and manage Cloud SQL instances on Google Cloud for MySQL, PostgreSQL, and SQL Server. Configure high availability, read replicas, automated backups, IAM database authentication, the Cloud SQL Auth Proxy, and Terraform deployments. Use for managed relational databases on GCP.
gcp-cloud-run
Deploy serverless containers on Google Cloud Run — services for HTTP traffic, jobs for batch and scheduled tasks, and worker pools for always-on pull-based background processing. Build and push container images, configure auto-scaling from zero, split traffic for canary deploys, and set up custom domains with managed TLS.
gcp-cloud-functions
Build serverless functions on Google Cloud Functions. Deploy HTTP and event-driven functions triggered by Pub/Sub, Cloud Storage, and Firestore. Configure runtime settings, manage dependencies, and connect to other GCP services.
gcloud
Google Cloud CLI for managing GCP resources. Use when the user needs to work with Compute Engine, Cloud Storage, Cloud Functions, IAM, GKE, and other Google Cloud services from the terminal.
cloudflare-workers
Assists with building and deploying applications on Cloudflare Workers edge computing platform. Use when working with Workers runtime, Wrangler CLI, KV, D1, R2, Durable Objects, Queues, or Hyperdrive. Trigger words: cloudflare, workers, edge functions, wrangler, KV, D1, R2, durable objects, edge computing.
cloudflare-vectorize
Serverless vector database at the edge with Cloudflare Vectorize. Use when: building semantic search on Cloudflare Workers, RAG pipelines at the edge, low-latency vector similarity search, or storing and querying embeddings without managing a separate vector database.
cloudflare-ai
You are an expert in Cloudflare Workers AI, the serverless AI inference platform running on Cloudflare's global network. You help developers run LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with zero cold starts, pay-per-use pricing, and integration with Workers, Pages, and Vectorize — enabling AI features without managing GPU infrastructure.
cloud-resource-analyzer
Finds orphaned, idle, and underutilized cloud resources across AWS, GCP, or Azure accounts. Use when someone needs to audit cloud spending, find unused EBS volumes, stale snapshots, unattached IPs, idle load balancers, or oversized RDS instances. Trigger words: cloud waste, orphaned resources, unused volumes, cloud audit, infrastructure cleanup, cloud bill analysis.
azure-blob-storage
Store and manage unstructured data with Azure Blob Storage. Create containers, upload and organize blobs, configure access tiers (Hot, Cool, Archive) for cost optimization, generate SAS tokens for secure temporary access, and set lifecycle management policies.
aws-cloudfront
Configure Amazon CloudFront for global content delivery. Set up distributions with S3 and ALB origins, define cache behaviors and TTLs, invalidate cached content, and use Lambda@Edge for request/response manipulation at the edge.