# fpf:propose-hypotheses

Execute the complete FPF cycle from hypothesis generation to decision.

## Best use case

fpf:propose-hypotheses is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill

- You want a reusable workflow that can be run more than once with consistent structure.

## When not to use this skill

- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Claude Code / Cursor / Codex

### Manual Installation

- Download SKILL.md from GitHub
- Place it at `.claude/skills/propose-hypotheses/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
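The manual steps above can be sketched as a shell snippet. The download URL is not given on this page, so the `curl` line is left as a commented placeholder to fill in with the repository's raw link:

```shell
# Create the path the agent scans for skills (run from the project root).
mkdir -p .claude/skills/propose-hypotheses

# Placeholder: substitute the repository's raw GitHub URL for SKILL.md.
# curl -fsSL "$SKILL_URL" -o .claude/skills/propose-hypotheses/SKILL.md

echo "skill path ready: .claude/skills/propose-hypotheses"
```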
## How fpf:propose-hypotheses Compares
| Feature / Agent | fpf:propose-hypotheses | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions

### What does this skill do?

It executes the complete FPF cycle from hypothesis generation to decision.

### Where can I find the source code?

The source code is on GitHub, via the link at the top of the page.
## SKILL.md Source
# Propose Hypotheses Workflow
Execute the First Principles Framework (FPF) cycle: generate competing hypotheses, verify logic, validate evidence, audit trust, and produce a decision.
## User Input
```text
Problem Statement: $ARGUMENTS
```
## Workflow Execution
### Step 1a: Create Directory Structure (Main Agent)
Create `.fpf/` directory structure if it does not exist:
```bash
mkdir -p .fpf/{evidence,decisions,sessions,knowledge/{L0,L1,L2,invalid}}
touch .fpf/{evidence,decisions,sessions,knowledge/{L0,L1,L2,invalid}}/.gitkeep
```
**Postcondition**: `.fpf/` directory scaffold exists.
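The postcondition can be checked mechanically. A minimal sketch that re-creates the scaffold (the `mkdir -p` calls are idempotent, so this is safe to re-run) and then verifies each expected directory:

```shell
# Re-create the Step 1a scaffold (idempotent), then verify the postcondition.
for d in evidence decisions sessions knowledge/L0 knowledge/L1 knowledge/L2 knowledge/invalid; do
  mkdir -p ".fpf/$d"
done
for d in evidence decisions sessions knowledge/L0 knowledge/L1 knowledge/L2 knowledge/invalid; do
  [ -d ".fpf/$d" ] || { echo "missing: .fpf/$d"; exit 1; }
done
echo "scaffold ok"
```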
---
### Step 1b: Initialize Context (FPF Agent)
Launch fpf-agent with sonnet[1m] model:
- **Description**: "Initialize FPF context"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/init-context.md and execute.
Problem Statement: $ARGUMENTS
  **Write**: Context summary to `.fpf/context.md`
```
---
### Step 2: Generate Hypotheses (FPF Agent)
Launch fpf-agent with sonnet[1m] model:
- **Description**: "Generate L0 hypotheses"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/generate-hypotheses.md and execute.
Problem Statement: $ARGUMENTS
Context: <summary from Step 1b>
  **Write**: Hypothesis files to `.fpf/knowledge/L0/`
Reply with summary table in markdown format:
| ID | Title | Kind | Scope |
|----|-------|------|-------|
| ... | ... | ... | ... |
```
---
### Step 3: Present Summary (Main Agent)
1. Read all L0 hypothesis files from `.fpf/knowledge/L0/`
2. Present the summary table from the agent's response.
3. Ask user: "Would you like to add any hypotheses of your own? (yes/no)"
---
### Step 4: Add User Hypothesis (FPF Agent, Conditional Loop)
**Condition**: User says yes to adding hypotheses.
Launch fpf-agent with sonnet[1m] model:
- **Description**: "Add user hypothesis"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/add-user-hypothesis.md and execute.
User Hypothesis Description: <get from user>
**Write**: User hypothesis to `.fpf/knowledge/L0/`
```
**Loop**: Return to Step 3 after hypothesis is added.
**Exit**: When user says no or declines to add more.
---
### Step 5: Verify Logic (Parallel Sub-Agents)
**Condition**: User finished adding hypotheses.
For EACH L0 hypothesis file in `.fpf/knowledge/L0/`, launch parallel fpf-agent with sonnet[1m] model:
- **Description**: "Verify hypothesis: <hypothesis-id>"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/verify-logic.md and execute.
Hypothesis ID: <hypothesis-id>
Hypothesis File: .fpf/knowledge/L0/<hypothesis-id>.md
**Move**: After you complete verification, move the file to `.fpf/knowledge/L1/` or `.fpf/knowledge/invalid/`.
```
**Wait for all agents**, then check that files are moved to `.fpf/knowledge/L1/` or `.fpf/knowledge/invalid/`.
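The "check that files are moved" step can be mechanized. A sketch, assuming the Step 1a scaffold (the `mkdir -p` line only guards against a missing scaffold):

```shell
# Guard against a missing scaffold, then count unmoved hypothesis files.
mkdir -p .fpf/knowledge/L0 .fpf/knowledge/L1 .fpf/knowledge/invalid
leftover=$(find .fpf/knowledge/L0 -name '*.md' | wc -l)
if [ "$leftover" -eq 0 ]; then
  echo "step 5 complete: every hypothesis moved to L1 or invalid"
else
  echo "step 5 incomplete: $leftover file(s) still in L0"
fi
```

The same pattern applies to Steps 6 and 7 with the corresponding source directories.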
---
### Step 6: Validate Evidence (Parallel Sub-Agents)
For EACH L1 hypothesis file in `.fpf/knowledge/L1/`, launch parallel fpf-agent with sonnet[1m] model:
- **Description**: "Validate hypothesis: <hypothesis-id>"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/validate-evidence.md and execute.
Hypothesis ID: <hypothesis-id>
Hypothesis File: .fpf/knowledge/L1/<hypothesis-id>.md
**Move**: After you complete validation, move the file to `.fpf/knowledge/L2/` or `.fpf/knowledge/invalid/`.
```
**Wait for all agents**, then check that files are moved to `.fpf/knowledge/L2/` or `.fpf/knowledge/invalid/`.
---
### Step 7: Audit Trust (Parallel Sub-Agents)
For EACH L2 hypothesis file in `.fpf/knowledge/L2/`, launch parallel fpf-agent with sonnet[1m] model:
- **Description**: "Audit trust: <hypothesis-id>"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/audit-trust.md and execute.
Hypothesis ID: <hypothesis-id>
Hypothesis File: .fpf/knowledge/L2/<hypothesis-id>.md
**Write**: Audit report to `.fpf/evidence/audit-{hypothesis-id}-{YYYY-MM-DD}.md`
**Reply**: with R_eff score and weakest link
```
**Wait for all agents**, then check that audit reports are created in `.fpf/evidence/`.
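The audit report path follows the `audit-{hypothesis-id}-{YYYY-MM-DD}.md` pattern from the prompt above. A sketch of constructing it (the `h-001` ID is hypothetical, for illustration only):

```shell
hyp_id="h-001"            # hypothetical hypothesis ID, for illustration
stamp=$(date +%Y-%m-%d)   # YYYY-MM-DD, matching the naming convention
report=".fpf/evidence/audit-${hyp_id}-${stamp}.md"
echo "$report"
```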
---
### Step 8: Make Decision (FPF Agent)
Launch fpf-agent with sonnet[1m] model:
- **Description**: "Create decision record"
- **Prompt**:
```
Read ${CLAUDE_PLUGIN_ROOT}/tasks/decide.md and execute.
Problem Statement: $ARGUMENTS
L2 Hypotheses Directory: .fpf/knowledge/L2/
Audit Reports: .fpf/evidence/
**Write**: Decision record to `.fpf/decisions/`
**Reply**: with decision record summary in markdown format:
| Hypothesis | R_eff | Weakest Link | Status |
|------------|-------|--------------|--------|
| ... | ... | ... | ... |
**Recommended Decision**: <hypothesis title>
**Rationale**: <brief explanation>
```
**Wait for agent**, then check that decision record is created in `.fpf/decisions/`.
---
### Step 9: Present Final Summary (Main Agent)
1. Read the Design Rationale Record (DRR) from `.fpf/decisions/`
2. Present the results from the agent's response.
3. Present next steps:
   - Implement the selected hypothesis
   - Use `/fpf:status` to check FPF state
   - Use `/fpf:actualize` if the codebase changes
4. Ask the user whether they agree with the decision; if not, relaunch the fpf-agent from Step 8 with instructions to modify the decision as the user requests.
---
## Completion
Workflow complete when:
- [ ] `.fpf/` directory structure exists
- [ ] Context recorded in `.fpf/context.md`
- [ ] Hypotheses generated, verified, validated, and audited
- [ ] DRR created in `.fpf/decisions/`
- [ ] Final summary presented to user
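The checklist above can be mechanized. A sketch: the first block is a demo stand-in for a finished run (so the check has something to pass against); the second mirrors the checklist for the key artifacts:

```shell
# Demo setup: a stand-in for a finished run, so the check below passes.
mkdir -p .fpf/decisions
touch .fpf/context.md .fpf/decisions/drr-demo.md

# Completion check, mirroring the checklist above.
ok=1
[ -f .fpf/context.md ] || { echo "missing .fpf/context.md"; ok=0; }
[ -n "$(find .fpf/decisions -name '*.md')" ] || { echo "no decision record"; ok=0; }
[ "$ok" -eq 1 ] && echo "workflow complete" || echo "workflow incomplete"
```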
**Artifacts Created**:
- `.fpf/context.md` - Problem context
- `.fpf/knowledge/L0/*.md` - Initial hypotheses
- `.fpf/knowledge/L1/*.md` - Verified hypotheses
- `.fpf/knowledge/L2/*.md` - Validated hypotheses
- `.fpf/knowledge/invalid/*.md` - Rejected hypotheses
- `.fpf/evidence/*.md` - Evidence files
- `.fpf/decisions/*.md` - Design Rationale Record (DRR)