s3-storage
Manages S3-compatible object storage (AWS S3, MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2, Wasabi, Supabase Storage). Use when the user wants to create buckets, upload/download files, set up lifecycle policies, configure CORS, manage presigned URLs, implement multipart uploads, set up replication, handle versioning, configure access policies, or build file management features on top of S3-compatible APIs. Trigger words: s3, minio, r2, object storage, bucket, presigned url, multipart upload, lifecycle policy, s3 cors, storage backend, file storage, blob storage, spaces, backblaze, wasabi.
Best use case
s3-storage is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using s3-storage can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/s3-storage/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It gives your AI agent a structured workflow for S3-compatible object storage: creating buckets, uploading and downloading files, presigned URLs, multipart uploads, lifecycle policies, CORS, versioning, and access control, across AWS S3, MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2, Wasabi, and Supabase Storage.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# S3 Storage
## Overview
Manages S3-compatible object storage across all major providers. Covers bucket operations, file upload/download, presigned URLs, multipart uploads, lifecycle policies, versioning, access control, CORS, and event notifications. All examples use AWS SDK v3 which works with any S3-compatible endpoint.
## Instructions
### 1. Client Setup
```javascript
import { S3Client } from '@aws-sdk/client-s3';

// AWS S3 — credentials come from env vars or an IAM role
const s3 = new S3Client({ region: 'us-east-1' });

// MinIO (self-hosted) — note forcePathStyle
const minio = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://localhost:9000',
  credentials: { accessKeyId: process.env.MINIO_ACCESS_KEY, secretAccessKey: process.env.MINIO_SECRET_KEY },
  forcePathStyle: true,
});

// Cloudflare R2
const r2 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: { accessKeyId: process.env.R2_ACCESS_KEY, secretAccessKey: process.env.R2_SECRET_KEY },
});
```

```python
# Python (boto3) — works with all providers
import os
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id=os.environ['ACCESS_KEY'],
    aws_secret_access_key=os.environ['SECRET_KEY'],
)
```
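Example 2 below switches providers via an env var; a minimal sketch of that factory (the function and env-var names here, `storageClientConfig`, `STORAGE_PROVIDER`, `MINIO_ENDPOINT`, are illustrative assumptions, not SDK APIs) might look like:

```javascript
// Build S3Client options from the environment so the same code talks to
// MinIO locally and AWS in production. Names are illustrative.
function storageClientConfig(env = process.env) {
  if (env.STORAGE_PROVIDER === 'minio') {
    return {
      region: 'us-east-1',
      endpoint: env.MINIO_ENDPOINT || 'http://localhost:9000',
      credentials: {
        accessKeyId: env.MINIO_ACCESS_KEY,
        secretAccessKey: env.MINIO_SECRET_KEY,
      },
      forcePathStyle: true, // required for path-style MinIO buckets
    };
  }
  // AWS: credentials resolve from env vars or the IAM role
  return { region: env.AWS_REGION || 'us-east-1' };
}
// usage: const s3 = new S3Client(storageClientConfig());
```

Keeping the branch inside one function means the rest of the codebase never needs to know which provider is active.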
### 2. File Operations
```javascript
import { PutObjectCommand, GetObjectCommand, DeleteObjectCommand, DeleteObjectsCommand, ListObjectsV2Command, CopyObjectCommand } from '@aws-sdk/client-s3';
// Upload
await s3.send(new PutObjectCommand({
Bucket: 'my-app-uploads', Key: 'users/123/avatar.jpg',
Body: fileBuffer, ContentType: 'image/jpeg',
Metadata: { 'uploaded-by': 'user-123' },
}));
// Download
const { Body, ContentType } = await s3.send(new GetObjectCommand({ Bucket: 'my-app-uploads', Key: 'users/123/avatar.jpg' }));
const chunks = [];
for await (const chunk of Body) chunks.push(chunk);
const buffer = Buffer.concat(chunks);
// List with pagination
let token;
const allKeys = [];
do {
const { Contents, NextContinuationToken, IsTruncated } = await s3.send(
new ListObjectsV2Command({ Bucket: 'my-app-uploads', Prefix: 'users/123/', MaxKeys: 1000, ContinuationToken: token })
);
allKeys.push(...(Contents || []).map(o => o.Key));
token = IsTruncated ? NextContinuationToken : undefined;
} while (token);
// Delete (single and bulk)
await s3.send(new DeleteObjectCommand({ Bucket: 'my-app-uploads', Key: 'old-file.txt' }));
await s3.send(new DeleteObjectsCommand({
Bucket: 'my-app-uploads', Delete: { Objects: allKeys.map(Key => ({ Key })), Quiet: true },
}));
// Copy (move = copy + delete)
await s3.send(new CopyObjectCommand({ Bucket: 'my-app-uploads', CopySource: 'my-app-uploads/old/path.jpg', Key: 'new/path.jpg' }));
```
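User-supplied filenames should never go into object keys unsanitized; a minimal sanitizer sketch (`sanitizeObjectKey` is a hypothetical helper, not an SDK function) could be:

```javascript
// Build a safe object key from an untrusted filename: strip any
// client-supplied path, allow only a conservative charset, cap the length.
function sanitizeObjectKey(prefix, filename) {
  const safe = filename
    .replace(/^.*[/\\]/, '')          // drop any path components
    .replace(/[^a-zA-Z0-9._-]/g, '_') // replace everything else with '_'
    .slice(0, 200);                   // keep keys well under the 1024-byte limit
  return `${prefix}/${safe || 'unnamed'}`;
}

sanitizeObjectKey('users/123', '../../etc/passwd'); // → 'users/123/passwd'
```

Path traversal sequences become harmless because every slash and backslash up to the last one is discarded before the charset filter runs.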
### 3. Presigned URLs
```javascript
import { PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
// Upload URL (client uploads directly to S3)
const uploadUrl = await getSignedUrl(s3, new PutObjectCommand({
Bucket: 'my-app-uploads', Key: `uploads/${userId}/${fileName}`, ContentType: contentType, // fileName: a sanitized, server-chosen name
}), { expiresIn: 3600 });
// Download URL
const downloadUrl = await getSignedUrl(s3, new GetObjectCommand({
Bucket: 'my-app-uploads', Key: 'reports/q4-2024.pdf',
}), { expiresIn: 900 });
```
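Before minting an upload URL, the server should validate what it is about to sign for. A hypothetical guard (the allow-list and size limit are illustrative, in the spirit of Example 1's images-only avatar flow):

```javascript
// Server-side check before issuing a presigned PUT URL: reject
// unexpected content types and oversized uploads up front.
const ALLOWED_TYPES = new Set(['image/jpeg', 'image/png', 'image/webp']);

function assertUploadAllowed(contentType, sizeBytes, maxBytes = 5 * 1024 * 1024) {
  if (!ALLOWED_TYPES.has(contentType)) return { ok: false, reason: 'type' };
  if (sizeBytes > maxBytes) return { ok: false, reason: 'size' };
  return { ok: true };
}
```

Note that a presigned PUT only pins the `ContentType` you signed; the size check here is advisory unless you also verify the object after upload.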
### 4. Multipart Upload (large files)
```javascript
import { Upload } from '@aws-sdk/lib-storage';
const upload = new Upload({
client: s3,
params: { Bucket: 'my-app-uploads', Key: 'large-file.zip', Body: stream },
partSize: 10 * 1024 * 1024, // 10MB
leavePartsOnError: false,
});
upload.on('httpUploadProgress', (p) => console.log(`${p.loaded}/${p.total}`));
await upload.done();
```
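Part sizing is worth sanity-checking: S3 caps multipart uploads at 10,000 parts, and every part except the last must be at least 5 MiB. A back-of-envelope planner (illustrative, not part of `@aws-sdk/lib-storage`, which handles this internally):

```javascript
const MIN_PART = 5 * 1024 * 1024; // 5 MiB minimum part size
const MAX_PARTS = 10000;          // S3's hard limit on part count

function planPartSize(fileSize, desired = 10 * 1024 * 1024) {
  let partSize = Math.max(desired, MIN_PART);
  // Grow the part size until the file fits within 10,000 parts.
  if (Math.ceil(fileSize / partSize) > MAX_PARTS) {
    partSize = Math.ceil(fileSize / MAX_PARTS);
  }
  return { partSize, partCount: Math.max(1, Math.ceil(fileSize / partSize)) };
}

planPartSize(5 * 1024 ** 4); // a 5 TiB object needs ~524 MiB parts
```

This is why a fixed 10 MiB `partSize` is fine for most files but must scale up for very large objects.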
### 5. Lifecycle Policies
```javascript
import { PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';
await s3.send(new PutBucketLifecycleConfigurationCommand({
Bucket: 'my-app-uploads',
LifecycleConfiguration: {
Rules: [
{ ID: 'delete-temp-after-1-day', Prefix: 'tmp/', Status: 'Enabled', Expiration: { Days: 1 } },
{ ID: 'archive-old-logs', Prefix: 'logs/', Status: 'Enabled',
Transitions: [{ Days: 30, StorageClass: 'STANDARD_IA' }, { Days: 90, StorageClass: 'GLACIER' }],
Expiration: { Days: 365 } },
{ ID: 'cleanup-incomplete-uploads', Prefix: '', Status: 'Enabled',
AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 } },
],
},
}));
```
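To sanity-check a rule like `archive-old-logs`, it can help to trace what state an object would be in at a given age. This is plain logic mirroring the rule above, not an S3 API:

```javascript
// Trace the 'archive-old-logs' rule: STANDARD until day 30,
// STANDARD_IA until day 90, GLACIER until day 365, then expired.
function storageClassAt(ageDays) {
  if (ageDays >= 365) return 'EXPIRED';
  if (ageDays >= 90) return 'GLACIER';
  if (ageDays >= 30) return 'STANDARD_IA';
  return 'STANDARD';
}
```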
### 6. Versioning and CORS
```javascript
import { PutBucketVersioningCommand, PutBucketCorsCommand } from '@aws-sdk/client-s3';
// Enable versioning
await s3.send(new PutBucketVersioningCommand({
Bucket: 'my-app-uploads', VersioningConfiguration: { Status: 'Enabled' },
}));
// CORS (required for browser uploads)
await s3.send(new PutBucketCorsCommand({
Bucket: 'my-app-uploads',
CORSConfiguration: {
CORSRules: [{
AllowedHeaders: ['*'],
AllowedMethods: ['GET', 'PUT', 'POST', 'HEAD'],
AllowedOrigins: ['https://myapp.com', 'http://localhost:3000'],
ExposeHeaders: ['ETag'],
MaxAgeSeconds: 3600,
}],
},
}));
```
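S3's CORS matching allows at most one `*` wildcard per allowed origin. A rough sketch of that matching logic (an approximation for reasoning about configs, not the service's actual implementation):

```javascript
// Approximate S3 CORS origin matching: exact match, or prefix/suffix
// matching around a single '*' wildcard.
function originAllowed(origin, allowedOrigins) {
  return allowedOrigins.some((pattern) => {
    if (!pattern.includes('*')) return origin === pattern;
    const [pre, post] = pattern.split('*');
    return origin.startsWith(pre) && origin.endsWith(post);
  });
}
```

Listing explicit origins, as in the config above, is safer than wildcards for anything beyond local development.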
## Examples
### Example 1: User Avatar Upload System
**Input:** "Build a secure avatar upload system. Users get a presigned URL from our API, upload directly to S3, then we resize to 3 sizes and serve through CloudFront."
**Output:** API endpoint generating presigned PUT URLs with content-type validation (images only), CORS config for browser uploads, S3 event notification triggering a Lambda for resizing (64x64, 256x256, 512x512), CloudFront distribution pointing to processed images, lifecycle rule deleting originals after processing.
### Example 2: Self-Hosted MinIO for Development
**Input:** "Set up MinIO as a local S3 replacement. Docker Compose setup, create same buckets and policies as production, write a helper module that switches between MinIO and real S3 via env var."
**Output:** Docker Compose with MinIO + health check, init script creating buckets and policies, storage module with `createStorageClient()` that configures for MinIO or AWS based on `STORAGE_PROVIDER`, wrapper functions for common operations with consistent error handling, integration tests running against local MinIO.
## Guidelines
- Always use `forcePathStyle: true` for MinIO and self-hosted S3-compatible stores
- Use presigned URLs for client-side uploads — never proxy large files through your API
- Set lifecycle rules on every bucket — at minimum, abort incomplete multipart uploads after 7 days
- Block public access by default, whitelist only when necessary
- Use object key prefixes for logical organization (`users/{id}/`, `uploads/{date}/`)
- Never put user input directly in object keys — sanitize filenames
- Set `ContentType` explicitly — S3 defaults to `application/octet-stream`
- Use `ContentDisposition: 'attachment; filename="name.pdf"'` for downloadable files
- Always handle `NoSuchKey` errors gracefully on downloads
- Use bucket versioning for critical data, not ephemeral files
- R2 has no egress fees — consider it for CDN-heavy workloads
- MinIO is fully S3-compatible — use it for local development and testing
Related Skills
gcp-cloud-storage
Manage Google Cloud Storage for scalable object storage. Create and configure buckets, upload and organize objects, generate signed URLs for secure temporary access, set lifecycle rules for cost optimization, and configure access control.
azure-blob-storage
Store and manage unstructured data with Azure Blob Storage. Create containers, upload and organize blobs, configure access tiers (Hot, Cool, Archive) for cost optimization, generate SAS tokens for secure temporary access, and set lifecycle management policies.