cross-task-learning

Pattern for aggregating insights across multiple tasks to enable data-driven evolution.

242 stars

Best use case

cross-task-learning is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams running many related tasks who want to aggregate insights across those tasks and evolve their workflows based on data.


Users can expect more consistent workflow output, faster repeat execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "cross-task-learning" skill to help with this workflow task. Context: Pattern for aggregating insights across multiple tasks to enable data-driven evolution.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/cross-task-learning/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/clouder0/cross-task-learning/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/cross-task-learning/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How cross-task-learning Compares

| Feature / Agent | cross-task-learning | Standard Approach |
|-----------------|---------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Pattern for aggregating insights across multiple tasks to enable data-driven evolution.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Cross-Task Learning Skill

Pattern for maintaining aggregated insights across all completed tasks.

## When to Load This Skill

- Reflector: After writing individual reflection
- Evolver: Before analyzing reflections (to get aggregated view)

## Core Concept

Individual reflections capture task-specific learnings. Cross-task learning aggregates these to identify:

- **Patterns that keep appearing** → Skill candidates
- **Strategies that consistently work** → Best practices
- **Strategies that keep failing** → Anti-patterns
- **Bottlenecks that recur** → System weaknesses
- **Proposals that keep emerging** → Priority improvements

## Aggregate File

Location: `memory/reflections/_aggregate.json`

Example structure (shown expanded for readability; stored on disk as compact JSON):

```json
{
  "last_updated": "ISO-8601",
  "tasks_analyzed": 15,
  "strategy_effectiveness": [
    {
      "strategy": "Spawn parallel explorers for context",
      "uses": 12,
      "successes": 10,
      "effectiveness_score": 0.83,
      "notes": "Works well for unfamiliar codebases"
    }
  ],
  "failure_patterns": [
    {
      "pattern": "Contract conflicts in parallel implementation",
      "occurrences": 4,
      "severity": "high",
      "status": "active"
    }
  ],
  "skill_candidates": [
    {
      "pattern": "Read → Explore → Implement → Test → Verify",
      "frequency": 8,
      "effectiveness": "high",
      "proposed_skill_name": "implementation-cycle"
    }
  ]
}
```
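The aggregate file can be read and written with a few lines of Python. This is a sketch rather than part of the skill itself; the skeleton below covers only a subset of the fields shown above.

```python
import json
from pathlib import Path

AGGREGATE_PATH = Path("memory/reflections/_aggregate.json")

def empty_aggregate() -> dict:
    """Fresh skeleton mirroring the structure above (subset of fields)."""
    return {
        "last_updated": None,
        "tasks_analyzed": 0,
        "strategy_effectiveness": [],
        "failure_patterns": [],
        "skill_candidates": [],
    }

def load_aggregate(path: Path = AGGREGATE_PATH) -> dict:
    """Read the aggregate file, falling back to an empty skeleton on first run."""
    if path.exists():
        return json.loads(path.read_text())
    return empty_aggregate()

def save_aggregate(agg: dict, path: Path = AGGREGATE_PATH) -> None:
    """Write compact JSON, as the update protocol specifies."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(agg, separators=(",", ":"), ensure_ascii=False))
```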

## Update Protocol (for Reflector)

After writing individual reflection, update aggregate:

```
1. Read current _aggregate.json
2. Read the reflection just written

3. Update task_history:
   - Add new entry with task_id, timestamp, outcome
   - Keep last 20 entries (trim oldest)

4. Update strategy_effectiveness:
   FOR each strategy in reflection.patterns.effective_strategies:
     IF strategy exists in aggregate:
       → Increment uses and successes
       → Recalculate effectiveness_score
     ELSE:
       → Add new entry with uses=1, successes=1

   FOR each strategy in reflection.patterns.ineffective_strategies:
     IF strategy exists in aggregate:
       → Increment uses and failures
       → Recalculate effectiveness_score
     ELSE:
       → Add new entry with uses=1, failures=1

5. Update failure_patterns:
   FOR each issue in reflection.process_analysis.phases[].issues:
     IF similar pattern exists (fuzzy match):
       → Increment occurrences
       → Update last_seen
     ELSE:
       → Add new pattern

6. Update bottleneck_hotspots:
   FOR each bottleneck in reflection.process_analysis.bottlenecks:
     IF location exists:
       → Increment frequency
       → Add cause if new
     ELSE:
       → Add new hotspot

7. Update skill_candidates:
   FOR each sequence in reflection.patterns.repeated_sequences:
     IF sequence.skill_candidate == true:
       IF similar pattern exists:
         → Increment frequency
       ELSE:
         → Add new candidate

8. Update recurring_discoveries:
   FOR each finding in reflection.knowledge_discovered:
     IF similar finding exists:
       → Increment discovery_count
       → Set should_be_documented = true if count >= 3
     ELSE:
       → Add new entry

9. Update recurring_proposals:
   FOR each proposal in reflection.evolution_proposals:
     IF similar proposal exists:
       → Increment occurrence_count
     ELSE:
       → Add new entry

10. Update retry_analysis:
    FOR each retry in reflection.process_analysis.retries:
      → Increment total_retries
      → Update by_strategy counts

11. Increment tasks_analyzed
12. Update last_updated
13. Write updated _aggregate.json (compact JSON)
```
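Step 4 of the protocol can be sketched in Python. Field names follow the aggregate example earlier; exact string matching is used here for brevity, where the skill calls for the fuzzy matching described in the Similarity Matching section.

```python
def update_strategy(aggregate: dict, strategy: str, succeeded: bool) -> None:
    """Step 4 of the update protocol: bump counters and recompute the score."""
    for entry in aggregate["strategy_effectiveness"]:
        if entry["strategy"] == strategy:  # exact match; see Similarity Matching
            entry["uses"] += 1
            if succeeded:
                entry["successes"] = entry.get("successes", 0) + 1
            else:
                entry["failures"] = entry.get("failures", 0) + 1
            entry["effectiveness_score"] = round(
                entry.get("successes", 0) / entry["uses"], 2
            )
            return
    # First observation of this strategy: add a new entry.
    aggregate["strategy_effectiveness"].append({
        "strategy": strategy,
        "uses": 1,
        "successes": 1 if succeeded else 0,
        "failures": 0 if succeeded else 1,
        "effectiveness_score": 1.0 if succeeded else 0.0,
    })
```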

## Similarity Matching

When checking if patterns are "similar":

```
Normalize both strings:
  - Lowercase
  - Remove punctuation
  - Remove common words (the, a, an, is, are)

Compare using:
  - Exact match after normalization
  - OR: >70% word overlap
  - OR: Same key terms present
```

## Thresholds for Action

| Metric | Threshold | Action |
|--------|-----------|--------|
| Strategy effectiveness < 0.3 | After 5 uses | Flag as anti-pattern |
| Strategy effectiveness > 0.8 | After 5 uses | Flag as best practice |
| Failure pattern occurrences | >= 3 | Flag for urgent fix |
| Skill candidate frequency | >= 5 | Propose as new skill |
| Recurring discovery count | >= 3 | Add to knowledge base |
| Recurring proposal count | >= 3 | Prioritize for evolution |

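Applied in code, the threshold table might look like the following sketch. Field names follow the aggregate example, and only a subset of the table's rows is shown.

```python
def flag_actions(aggregate: dict) -> list:
    """Apply the threshold table to an aggregate and list triggered actions."""
    actions = []
    for s in aggregate.get("strategy_effectiveness", []):
        if s["uses"] >= 5 and s["effectiveness_score"] < 0.3:
            actions.append(("anti-pattern", s["strategy"]))
        elif s["uses"] >= 5 and s["effectiveness_score"] > 0.8:
            actions.append(("best-practice", s["strategy"]))
    for p in aggregate.get("failure_patterns", []):
        if p["occurrences"] >= 3:
            actions.append(("urgent-fix", p["pattern"]))
    for c in aggregate.get("skill_candidates", []):
        if c["frequency"] >= 5:
            actions.append(("propose-skill", c["proposed_skill_name"]))
    return actions
```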
## Query Patterns (for Evolver)

**Get top issues to fix:**
```
failure_patterns
  WHERE status == "active"
  ORDER BY occurrences * severity_weight DESC
  LIMIT 5
```

**Get best practices to document:**
```
strategy_effectiveness
  WHERE effectiveness_score > 0.8
  AND uses >= 5
```

**Get skill candidates ready for implementation:**
```
skill_candidates
  WHERE frequency >= 5
  AND effectiveness == "high"
  AND status == "candidate"
```

**Get knowledge gaps:**
```
recurring_discoveries
  WHERE should_be_documented == true
  AND NOT in knowledge_base
```
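These queries translate directly to Python comprehensions. The `SEVERITY_WEIGHT` values below are an assumption; the skill references a severity weight but does not define one.

```python
SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3}  # assumed weights

def top_issues(aggregate: dict, limit: int = 5) -> list:
    """'Top issues to fix': active failure patterns, weighted by severity."""
    active = [p for p in aggregate["failure_patterns"] if p["status"] == "active"]
    active.sort(
        key=lambda p: p["occurrences"] * SEVERITY_WEIGHT.get(p["severity"], 1),
        reverse=True,
    )
    return active[:limit]

def best_practices(aggregate: dict) -> list:
    """'Best practices to document': effective, well-exercised strategies."""
    return [
        s for s in aggregate["strategy_effectiveness"]
        if s["effectiveness_score"] > 0.8 and s["uses"] >= 5
    ]
```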

## Integration with Evolver

The evolver should:

1. Read `_aggregate.json` FIRST (not individual reflections)
2. Use aggregated data for proposal prioritization:
   - High-occurrence failure patterns → High priority
   - High-frequency skill candidates → Medium priority
   - Recurring proposals → Already validated ideas
3. Reference individual reflections only for details
4. Update `recurring_proposals[].status` after evolution

## Principles

1. **Aggregate, don't duplicate** - Summary stats, not copies
2. **Track trends** - First seen, last seen, frequency
3. **Enable queries** - Structure for easy filtering
4. **Threshold-based actions** - Clear criteria for when to act
5. **Fuzzy matching** - Similar patterns should merge, not duplicate

Related Skills

task-planning

from aiskillstore/marketplace

Plan and organize software development tasks effectively. Use when breaking down features, creating user stories, or planning sprints. Handles task breakdown, user stories, acceptance criteria, and backlog management.

task-estimation

Estimate software development tasks accurately using various techniques. Use when planning sprints, roadmaps, or project timelines. Handles story points, t-shirt sizing, planning poker, and estimation best practices.

machine-learning-ops-ml-pipeline

Design and implement a complete ML pipeline for: $ARGUMENTS

cross-site-scripting-and-html-injection-testing

This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exploit client-side injection vulnerabilities", "steal cookies via XSS", or "bypass content security policies". It provides comprehensive techniques for detecting, exploiting, and understanding XSS and HTML injection attack vectors in web applications.

cc-skill-continuous-learning

Development skill from everything-claude-code

task-execution-engine

Execute implementation tasks from design documents using markdown checkboxes. Use when (1) implementing features from feature-design-assistant output, (2) resuming interrupted work, (3) batch executing tasks. Triggers on 'start implementation', 'run tasks', 'resume'.

lark-task

Lark Tasks: manage tasks and task lists. Create to-do tasks, view and update task status, break tasks into subtasks, organize task lists, and assign collaborators. Use when the user needs to create to-dos, view task lists, track task progress, manage project checklists, or assign tasks to others.

tasks-generator

Generate structured task roadmaps from project specifications. Use when the user asks to create tasks, sprint plans, roadmaps, or work breakdowns based on PRD (Product Requirements Document), Tech Specs, or UI/UX specs. Triggers include requests like "generate tasks from PRD", "create sprint plan", "break down this spec into tasks", "create a roadmap", or "plan the implementation".

when-optimizing-agent-learning-use-reasoningbank-intelligence

Implement adaptive learning with ReasoningBank for pattern recognition, strategy optimization, and continuous improvement

reasoningbank-adaptive-learning-with-agentdb

Implement ReasoningBank adaptive learning with AgentDB for trajectory tracking, verdict judgment, memory distillation, and pattern recognition to build self-learning agents that improve decision-making through experience.

agentdb-reinforcement-learning-training

Train AI agents using AgentDB's 9 reinforcement learning algorithms including Q-Learning, DQN, PPO, and Actor-Critic. Build self-learning agents, implement RL training loops with experience replay, and deploy optimized models to production.

agentdb-learning-plugins

Create and train AI learning plugins with AgentDB's 9 reinforcement learning algorithms. Includes Decision Transformer, Q-Learning, SARSA, Actor-Critic, and more. Use when building self-learning agents, implementing RL, or optimizing agent behavior through experience.