cursor-custom-prompts
Create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates. Triggers on "cursor prompts", "prompt engineering cursor", "better cursor prompts", "cursor instructions", "cursor prompt templates".
Best use case
cursor-custom-prompts is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using cursor-custom-prompts can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/cursor-custom-prompts/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How cursor-custom-prompts Compares
| Feature / Agent | cursor-custom-prompts | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (copy a single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It helps you create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Cursor Custom Prompts
Create effective prompts for Cursor AI. Covers prompt engineering fundamentals, reusable templates stored in project rules, and advanced techniques for consistent, high-quality code generation.
## Prompt Anatomy
A well-structured Cursor prompt has four parts:
```
1. CONTEXT → @-mentions pointing to relevant code
2. TASK → What you want done (specific, actionable)
3. CONSTRAINTS → Rules, patterns, limitations
4. FORMAT → How the output should look
```
### Example: All Four Parts
```
@src/api/users/route.ts @src/types/user.ts ← CONTEXT
Create a new API endpoint for updating user profiles. ← TASK
Constraints: ← CONSTRAINTS
- Follow the same pattern as the users route
- Use Zod for input validation
- Return 400 for invalid input, 404 for missing user
- Only allow updating: name, email, avatarUrl
Return the endpoint code and the Zod schema as ← FORMAT
separate code blocks.
```
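For illustration only, the response to the prompt above might look like the sketch below. It is deliberately framework-free: a hand-rolled validator stands in for the Zod schema, a `Map` stands in for the database, and `updateUserProfile` is a hypothetical name. The point is the shape the constraints impose (field allow-list, 400 for invalid input, 404 for a missing user), not the exact API.

```typescript
// Hypothetical sketch of output for the prompt above.
// A hand-rolled validator stands in for the requested Zod schema.
type UserPatch = { name?: string; email?: string; avatarUrl?: string };
type Stored = { id: string; name: string; email: string; avatarUrl: string };

const ALLOWED_FIELDS = ["name", "email", "avatarUrl"] as const;

// Returns the validated patch, or null when the input is invalid.
function parseUserPatch(input: unknown): UserPatch | null {
  if (typeof input !== "object" || input === null) return null;
  const record = input as Record<string, unknown>;
  const patch: UserPatch = {};
  for (const key of Object.keys(record)) {
    // Reject any field outside the allow-list, per the constraints.
    if (!(ALLOWED_FIELDS as readonly string[]).includes(key)) return null;
    if (typeof record[key] !== "string") return null;
    (patch as Record<string, string>)[key] = record[key] as string;
  }
  return patch;
}

// 400 for invalid input, 404 for a missing user, per the constraints.
function updateUserProfile(
  db: Map<string, Stored>,
  id: string,
  input: unknown,
): { status: 200 | 400 | 404; body?: Stored } {
  const patch = parseUserPatch(input);
  if (patch === null) return { status: 400 };
  const user = db.get(id);
  if (!user) return { status: 404 };
  const updated = { ...user, ...patch };
  db.set(id, updated);
  return { status: 200, body: updated };
}
```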
## Prompt Templates
### Template: Feature Implementation
```
@[existing-similar-feature] @[relevant-types]
Implement [feature name] following the pattern in [reference file].
Requirements:
- [requirement 1]
- [requirement 2]
- [requirement 3]
Constraints:
- Same error handling pattern as [reference]
- Same file structure as [reference]
- Include TypeScript types for all public interfaces
```
### Template: Bug Fix
```
@[buggy-file] @Lint Errors
Bug: [describe the incorrect behavior]
Expected: [describe correct behavior]
Steps to reproduce: [1, 2, 3]
The error message is: [paste error]
Find the root cause and suggest a fix. Do not change
the public API surface.
```
### Template: Code Review
```
@[file-to-review]
Review this code for:
1. Logic errors or edge cases
2. Security vulnerabilities (injection, XSS, auth bypass)
3. Performance issues (N+1 queries, unnecessary re-renders)
4. TypeScript type safety (any casts, missing generics)
5. Naming and readability
List issues as: [severity] [line/area] [description] [suggestion]
```
### Template: Test Generation
```
@[source-file] @[existing-test-file]
Generate tests for [function/class name] covering:
- Happy path with valid inputs
- Edge cases: empty input, null, undefined, max values
- Error cases: invalid input, missing required fields
- Async behavior: success and failure scenarios
Follow the same test structure as [existing-test-file].
Use [vitest/jest/pytest] assertions.
```
### Template: Refactoring
```
@[file-to-refactor]
Refactor this code to [goal]:
- [specific change 1]
- [specific change 2]
Do NOT change:
- The public API (function signatures, return types)
- The test behavior (existing tests must still pass)
- External imports
```
## Storing Prompts as Project Rules
Convert frequently used prompts into `.cursor/rules/` for automatic injection:
```markdown
# .cursor/rules/code-generation.mdc
---
description: "Standards for AI-generated code"
globs: ""
alwaysApply: true
---
When generating code, always:
1. Add JSDoc comments on all exported functions
2. Include error handling (never let functions throw unhandled)
3. Use named exports (never default exports)
4. Add `import type` for type-only imports
5. Prefer const arrow functions for pure utilities
6. Use discriminated unions over boolean flags
When generating TypeScript:
- Strict mode: no `any`, no `as` casts without justification
- Prefer `unknown` over `any` for unknown types
- Use `satisfies` operator for type narrowing
- Infer types where TypeScript can; annotate where it cannot
```
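Applied to a small utility, those rules produce code shaped like the sketch below. The function and its behavior are illustrative, not part of the skill; each comment names the rule it demonstrates.

```typescript
/** Discriminated union instead of a boolean success flag (rule 6). */
type ParseResult =
  | { kind: "ok"; value: number }
  | { kind: "error"; message: string };

/**
 * Parses a decimal string into a number (JSDoc on exports, rule 1).
 * Never throws: errors are returned as values (rule 2).
 * Named export, const arrow function for a pure utility (rules 3, 5).
 */
export const parseDecimal = (raw: unknown): ParseResult => {
  // Prefer `unknown` over `any`, then narrow explicitly.
  if (typeof raw !== "string") {
    return { kind: "error", message: "expected a string" };
  }
  const value = Number(raw);
  return Number.isNaN(value)
    ? { kind: "error", message: `not a number: ${raw}` }
    : { kind: "ok", value };
};

// `satisfies` checks the shape without widening the literal types.
export const defaults = { retries: 3, timeoutMs: 5_000 } satisfies Record<string, number>;
```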
```markdown
# .cursor/rules/test-patterns.mdc
---
description: "Test generation standards"
globs: "**/*.test.ts,**/*.spec.ts"
alwaysApply: false
---
When generating tests:
- Use describe/it blocks with readable descriptions
- Arrange/Act/Assert pattern (AAA)
- One assertion per test (prefer multiple focused tests)
- Mock external dependencies, not internal utilities
- Use factory functions for test data (not inline objects)
- Name test files: {module}.test.ts colocated with source
```
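A test file following these rules might look like the sketch below. The `describe`/`it` stand-ins are defined locally so the snippet is self-contained; in a real project they would come from vitest or jest, and `makeUser`/`canLogIn` are hypothetical names.

```typescript
// Local stand-ins for vitest's describe/it so this sketch runs standalone;
// in a real project: import { describe, it, expect } from "vitest".
const describe = (_name: string, fn: () => void) => fn();
const it = (_name: string, fn: () => void) => fn();

type User = { id: string; email: string; active: boolean };

// Factory function for test data instead of inline objects.
const makeUser = (overrides: Partial<User> = {}): User => ({
  id: "u1",
  email: "test@example.com",
  active: true,
  ...overrides,
});

// Unit under test (illustrative): only active users may log in.
const canLogIn = (user: User): boolean => user.active;

describe("canLogIn", () => {
  it("allows active users", () => {
    // Arrange
    const user = makeUser();
    // Act
    const result = canLogIn(user);
    // Assert (one focused assertion per test)
    if (result !== true) throw new Error("expected active user to log in");
  });

  it("rejects deactivated users", () => {
    const user = makeUser({ active: false });
    if (canLogIn(user) !== false) throw new Error("expected rejection");
  });
});
```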
## Advanced Prompting Techniques
### Chain of Thought
Force the AI to reason before generating:
```
@src/services/billing.service.ts
I need to add proration logic for subscription upgrades.
Before writing code, first:
1. List the variables involved (current plan, new plan, billing cycle)
2. Show the proration formula with a concrete example
3. Identify edge cases (upgrade on last day, downgrade, free trial)
Then implement based on your analysis.
```
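For the proration example, step 2 of that prompt might yield a formula like the following sketch, assuming the simplest case (monthly cycle, immediate plan switch, no free trial):

```typescript
// Simplest-case proration: charge the price difference for the unused
// fraction of the current billing cycle. Assumes a monthly cycle with
// no free trial; real billing logic has more edge cases than this.
function prorationCharge(
  oldMonthlyPrice: number,
  newMonthlyPrice: number,
  daysRemaining: number,
  daysInCycle: number,
): number {
  const unusedFraction = daysRemaining / daysInCycle;
  // Round to cents; a downgrade yields a negative value (a credit).
  return Math.round((newMonthlyPrice - oldMonthlyPrice) * unusedFraction * 100) / 100;
}

// Example: upgrading from $10 to $30 halfway through a 30-day cycle
// charges (30 - 10) * (15 / 30) = $10.00.
```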
### Few-Shot Examples
Provide examples of what you want:
```
Convert these function signatures to the Result pattern:
Example input:
async function getUser(id: string): Promise<User>
Example output:
async function getUser(id: string): Promise<Result<User, NotFoundError>>
Now convert these:
- async function createOrder(input: CreateOrderInput): Promise<Order>
- async function deleteAccount(userId: string): Promise<void>
- async function sendEmail(to: string, body: string): Promise<boolean>
```
### Negative Constraints
Tell the AI what NOT to do:
```
Create a React form component for user registration.
DO NOT:
- Use class components
- Use any CSS-in-JS library
- Add client-side validation (server validates)
- Use controlled inputs for every field (use react-hook-form)
- Import anything not already in package.json
```
### Iterative Refinement
Build up complexity in steps:
```
Turn 1: "Create a basic Express route for GET /api/products"
Turn 2: "Add pagination with page and limit query params"
Turn 3: "Add filtering by category and price range"
Turn 4: "Add sorting by any field with asc/desc direction"
Turn 5: "Add input validation and comprehensive error responses"
```
Each turn adds one layer. The AI maintains context from previous turns.
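Framework wiring aside, the state after turns 2-4 can be sketched as a pure query-parsing helper (field names and defaults are illustrative, not something the skill prescribes):

```typescript
type ProductQuery = {
  page: number;
  limit: number;
  category?: string;
  minPrice?: number;
  maxPrice?: number;
  sortBy: string;
  direction: "asc" | "desc";
};

// Parses raw query-string values (all strings) into a typed query,
// applying the defaults each refinement turn introduced.
function parseProductQuery(raw: Record<string, string | undefined>): ProductQuery {
  const toInt = (v: string | undefined, fallback: number) => {
    const n = Number(v);
    return Number.isInteger(n) && n > 0 ? n : fallback;
  };
  return {
    page: toInt(raw.page, 1),                 // turn 2: pagination
    limit: Math.min(toInt(raw.limit, 20), 100),
    category: raw.category,                   // turn 3: filtering
    minPrice: raw.minPrice ? Number(raw.minPrice) : undefined,
    maxPrice: raw.maxPrice ? Number(raw.maxPrice) : undefined,
    sortBy: raw.sortBy ?? "createdAt",        // turn 4: sorting
    direction: raw.direction === "desc" ? "desc" : "asc",
  };
}
```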
## Common Prompt Anti-Patterns
| Anti-Pattern | Problem | Better Approach |
|-------------|---------|-----------------|
| "Make it better" | Too vague | "Add error handling for network failures" |
| "Rewrite everything" | Scope too large | "Refactor the validation logic in lines 40-80" |
| No context files | AI guesses patterns | Always add @Files references |
| Wall of text prompt | AI misses key points | Use numbered lists and headers |
| "Do what you think is best" | AI makes assumptions | Specify requirements explicitly |
## Enterprise Considerations
- **Prompt libraries**: Maintain a team-shared library of effective prompts in a wiki or docs/ directory
- **Standardization**: Use `.cursor/rules/` to encode team prompt standards so all developers get consistent behavior
- **Security**: Never include real credentials, PII, or regulated data in prompts
- **Reproducibility**: Document effective prompts alongside their output for knowledge sharing
## Resources
- [Cursor Rules Documentation](https://docs.cursor.com/context/rules)
- [@ Symbols Overview](https://docs.cursor.com/context/@-symbols/overview)
- [Prompt Engineering Guide](https://platform.openai.com/docs/guides/prompt-engineering)
Related Skills
optimizing-prompts
Optimizes prompts for large language models (LLMs) to reduce token usage, lower costs, and improve performance. It analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more concise. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
customerio-webhooks-events
Implement Customer.io webhook and reporting event handling. Use when processing email delivery events, click/open tracking, bounce handling, or streaming to a data warehouse. Trigger: "customer.io webhook", "customer.io events", "customer.io delivery status", "customer.io bounces", "customer.io open tracking".
customerio-upgrade-migration
Plan and execute Customer.io SDK upgrades and migrations. Use when upgrading customerio-node versions, migrating from legacy APIs, or updating to new SDK patterns. Trigger: "upgrade customer.io", "customer.io migration", "update customer.io sdk", "customer.io breaking changes".
customerio-security-basics
Apply Customer.io security best practices. Use when implementing secure credential storage, PII handling, webhook signature verification, or GDPR/CCPA compliance. Trigger: "customer.io security", "customer.io pii", "secure customer.io", "customer.io gdpr", "customer.io webhook verify".
customerio-sdk-patterns
Apply production-ready Customer.io SDK patterns. Use when implementing typed clients, retry logic, event batching, or singleton management for customerio-node. Trigger: "customer.io best practices", "customer.io patterns", "production customer.io", "customer.io architecture", "customer.io singleton".
customerio-reliability-patterns
Implement Customer.io reliability and fault-tolerance patterns. Use when building circuit breakers, fallback queues, idempotency, or graceful degradation for Customer.io integrations. Trigger: "customer.io reliability", "customer.io resilience", "customer.io circuit breaker", "customer.io fault tolerance".
customerio-reference-architecture
Implement Customer.io enterprise reference architecture. Use when designing integration layers, event-driven architectures, or enterprise-grade Customer.io setups. Trigger: "customer.io architecture", "customer.io design", "customer.io enterprise", "customer.io integration pattern".
customerio-rate-limits
Implement Customer.io rate limiting and backoff. Use when handling high-volume API calls, implementing retry logic, or hitting 429 errors. Trigger: "customer.io rate limit", "customer.io throttle", "customer.io 429", "customer.io backoff", "customer.io too many requests".
customerio-prod-checklist
Execute Customer.io production deployment checklist. Use when preparing for production launch, auditing integration quality, or performing pre-launch validation. Trigger: "customer.io production", "customer.io checklist", "deploy customer.io", "customer.io go-live", "customer.io launch".
customerio-primary-workflow
Implement Customer.io primary messaging workflow. Use when setting up campaign triggers, welcome sequences, onboarding flows, or event-driven email automation. Trigger: "customer.io campaign", "customer.io workflow", "customer.io email automation", "customer.io messaging", "customer.io onboarding".
customerio-performance-tuning
Optimize Customer.io API performance for high throughput. Use when improving response times, implementing connection pooling, batching, caching, or regional routing. Trigger: "customer.io performance", "optimize customer.io", "customer.io latency", "customer.io connection pooling".
customerio-observability
Set up Customer.io monitoring and observability. Use when implementing metrics, structured logging, alerting, or Grafana dashboards for Customer.io integrations. Trigger: "customer.io monitoring", "customer.io metrics", "customer.io dashboard", "customer.io alerts", "customer.io observability".