coreweave-core-workflow-a
Deploy KServe InferenceService on CoreWeave with autoscaling and GPU scheduling. Use when serving ML models with KServe, configuring scale-to-zero, or deploying production inference endpoints on CoreWeave. Trigger with phrases like "coreweave inference service", "coreweave kserve", "coreweave model serving", "deploy model on coreweave".
Best use case
coreweave-core-workflow-a is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using coreweave-core-workflow-a should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/coreweave-core-workflow-a/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How coreweave-core-workflow-a Compares
| Feature / Agent | coreweave-core-workflow-a | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Deploy KServe InferenceService on CoreWeave with autoscaling and GPU scheduling. Use when serving ML models with KServe, configuring scale-to-zero, or deploying production inference endpoints on CoreWeave. Trigger with phrases like "coreweave inference service", "coreweave kserve", "coreweave model serving", "deploy model on coreweave".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# CoreWeave Core Workflow: KServe Inference
## Overview
Deploy production inference services on CoreWeave using KServe InferenceService with GPU scheduling, autoscaling, and scale-to-zero. CKS natively integrates with KServe for serverless GPU inference.
## Prerequisites
- Completed `coreweave-install-auth` setup
- KServe available on your CKS cluster (a quick availability check is sketched after this list)
- Model stored in S3, GCS, or HuggingFace
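A quick way to verify the KServe prerequisite is to check for the InferenceService CRD and the controller pods. A minimal sketch, assuming KServe runs in its default `kserve` namespace (CKS installs may use a different one):

```bash
# The InferenceService CRD must be registered on the cluster
kubectl get crd inferenceservices.serving.kserve.io

# The KServe controller should be running (namespace may differ on your cluster)
kubectl get pods -n kserve
```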
## Instructions
### Step 1: Deploy an InferenceService
```yaml
# inference-service.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama-inference
  annotations:
    autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
    autoscaling.knative.dev/metric: "concurrency"
    autoscaling.knative.dev/target: "1"
    autoscaling.knative.dev/minScale: "1"
    autoscaling.knative.dev/maxScale: "5"
spec:
  predictor:
    minReplicas: 1
    maxReplicas: 5
    containers:
      - name: kserve-container
        image: vllm/vllm-openai:latest
        args:
          - "--model"
          - "meta-llama/Llama-3.1-8B-Instruct"
          - "--port"
          - "8080"
        ports:
          - containerPort: 8080
            protocol: TCP
        resources:
          limits:
            nvidia.com/gpu: "1"
            memory: 48Gi
            cpu: "8"
          requests:
            nvidia.com/gpu: "1"
            memory: 32Gi
            cpu: "4"
        env:
          - name: HUGGING_FACE_HUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: hf-token
                key: token
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: gpu.nvidia.com/class
                  operator: In
                  values: ["A100_PCIE_80GB"]
```
```bash
kubectl apply -f inference-service.yaml
kubectl get inferenceservice llama-inference -w
```
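If the service does not become ready, the first things to check are the predictor pod's scheduling events and whether nodes of the requested GPU class have free capacity. A sketch of those checks; the `serving.kserve.io/inferenceservice` label is the one KServe typically applies to predictor pods, and `gpu.nvidia.com/class` is the node label targeted by the affinity rule above -- confirm both on your cluster:

```bash
# Scheduling and image-pull problems show up as events on the predictor pod
kubectl get pods -l serving.kserve.io/inferenceservice=llama-inference
kubectl describe pods -l serving.kserve.io/inferenceservice=llama-inference | grep -A 10 Events

# Confirm nodes of the requested GPU class exist and how much is already allocated
kubectl get nodes -l gpu.nvidia.com/class=A100_PCIE_80GB
kubectl describe nodes -l gpu.nvidia.com/class=A100_PCIE_80GB | grep -A 8 "Allocated resources"
```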
### Step 2: Scale-to-Zero Configuration
```yaml
# For dev/staging -- scale down to zero when idle
metadata:
  annotations:
    autoscaling.knative.dev/minScale: "0"  # Scale to zero
    autoscaling.knative.dev/maxScale: "3"
    autoscaling.knative.dev/scaleDownDelay: "5m"
```
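These settings can also be patched onto the running InferenceService instead of editing the manifest. Note that Step 1 pins `minReplicas: 1` in the spec, which likely needs to drop to 0 for scale-to-zero to take effect; the patch below assumes the autoscaling annotations on the InferenceService metadata propagate to the underlying Knative revision, which is the usual KServe behavior but worth verifying on your cluster:

```bash
# Patch scale-to-zero settings onto the existing InferenceService
kubectl patch inferenceservice llama-inference --type merge -p '{
  "metadata": {
    "annotations": {
      "autoscaling.knative.dev/minScale": "0",
      "autoscaling.knative.dev/maxScale": "3",
      "autoscaling.knative.dev/scaleDownDelay": "5m"
    }
  },
  "spec": {
    "predictor": {
      "minReplicas": 0,
      "maxReplicas": 3
    }
  }
}'

# After the scale-down delay passes with no traffic, the predictor pods should terminate
kubectl get pods -w
```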
### Step 3: Test the Endpoint
```bash
# Get inference URL
INFERENCE_URL=$(kubectl get inferenceservice llama-inference \
  -o jsonpath='{.status.url}')

# Send a chat completion to the vLLM OpenAI-compatible endpoint
curl -X POST "${INFERENCE_URL}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "messages": [{"role": "user", "content": "Hello!"}]}'
```
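Because the vLLM image serves an OpenAI-compatible API, two quick follow-up checks are useful: listing which model the server actually loaded, and timing a request after an idle period to observe the scale-to-zero cold start. A sketch, reusing `INFERENCE_URL` from above:

```bash
# Confirm which model the server loaded
curl -s "${INFERENCE_URL}/v1/models"

# Rough cold-start measurement -- run after the service has scaled to zero
time curl -s -X POST "${INFERENCE_URL}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "messages": [{"role": "user", "content": "ping"}], "max_tokens": 8}'
```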
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| InferenceService not ready | GPU not available | Check node capacity and affinity |
| Scale-to-zero cold start | First request after idle | Set `minScale: 1` for production |
| Model loading timeout | Large model download | Pre-cache model in PVC |
| OOMKilled | Model too large | Use multi-GPU or quantized model |
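The scheduling checks from Step 1 cover the first row; for the remaining rows, container status and startup logs are the quickest signals. The label selector below is the one KServe typically applies to predictor pods; adjust it if your pods are labeled differently:

```bash
# OOMKilled shows up as the last termination reason on the container
kubectl get pods -l serving.kserve.io/inferenceservice=llama-inference \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}'

# Slow model downloads and loading progress are visible in the container logs during startup
kubectl logs -l serving.kserve.io/inferenceservice=llama-inference -c kserve-container --tail=50
```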
## Resources
- [CoreWeave Inference](https://docs.coreweave.com/docs/products/cks/tutorials/deploy-vllm-inference)
- [KServe Documentation](https://kserve.github.io/website/)
## Next Steps
For GPU training workloads, see `coreweave-core-workflow-b`.
Related Skills
step-functions-workflow
Step Functions Workflow - Auto-activating skill for AWS Skills. Triggers on: "step functions workflow". Part of the AWS Skills skill category.
sprint-workflow
This skill should be used when the user asks about "how sprints work", "sprint phases", "iteration workflow", "convergent development", "sprint lifecycle", "when to use sprints", or wants to understand the sprint execution model and its convergent diffusion approach. Use when the appropriate context is detected. Trigger with relevant phrases based on the skill's purpose.
scorecard-marketing
Build quiz and assessment funnels that generate qualified leads at 30-50% conversion. Use when the user mentions "lead magnet", "quiz funnel", "assessment tool", "lead generation", or "score-based segmentation". Covers question design, dynamic results by tier, and automated follow-up sequences. For landing page conversion, see cro-methodology. For full marketing plans, see one-page-marketing. Trigger with 'scorecard', 'marketing'.
n8n-workflow-generator
N8N Workflow Generator - Auto-activating skill for Business Automation. Triggers on: "n8n workflow generator". Part of the Business Automation skill category.
jira-workflow-creator
Jira Workflow Creator - Auto-activating skill for Enterprise Workflows. Triggers on: "jira workflow creator". Part of the Enterprise Workflows skill category.
building-gitops-workflows
This skill enables Claude to construct GitOps workflows using ArgoCD and Flux. It is designed to generate production-ready configurations, implement best practices, and ensure a security-first approach for Kubernetes deployments. Use this skill when the user explicitly requests "GitOps workflow", "ArgoCD", "Flux", or asks for help with setting up a continuous delivery pipeline using GitOps principles. The skill will generate the necessary configuration files and setup code based on the user's specific requirements and infrastructure.
git-workflow-manager
Git Workflow Manager - Auto-activating skill for DevOps Basics. Triggers on: "git workflow manager". Part of the DevOps Basics skill category.
fathom-core-workflow-b
Sync Fathom meeting data to CRM and build automated follow-up workflows. Use when integrating Fathom with Salesforce, HubSpot, or custom CRMs, or creating automated post-meeting email summaries. Trigger with phrases like "fathom crm sync", "fathom salesforce", "fathom follow-up", "fathom post-meeting workflow".
fathom-core-workflow-a
Build a meeting analytics pipeline with Fathom transcripts and summaries. Use when extracting insights from meetings, building CRM sync, or creating automated meeting follow-up workflows. Trigger with phrases like "fathom analytics", "fathom meeting pipeline", "fathom transcript analysis", "fathom action items sync".
exa-core-workflow-b
Execute Exa findSimilar, getContents, answer, and streaming answer workflows. Use when finding pages similar to a URL, retrieving content for known URLs, or getting AI-generated answers with citations. Trigger with phrases like "exa find similar", "exa get contents", "exa answer", "exa similarity search", "findSimilarAndContents".
exa-core-workflow-a
Execute Exa neural search with contents, date filters, and domain scoping. Use when building search features, implementing RAG context retrieval, or querying the web with semantic understanding. Trigger with phrases like "exa search", "exa neural search", "search with exa", "exa searchAndContents", "exa query".
evernote-core-workflow-b
Execute Evernote secondary workflow: Search and Retrieval. Use when implementing search features, finding notes, filtering content, or building search interfaces. Trigger with phrases like "search evernote", "find evernote notes", "evernote search", "query evernote".