agentic-trust
Deterministic workflow for searching services in Agentic Trust, inspecting trust evidence, loading the active questionnaire, comparing with local review memory, and optionally submitting a valid structured review with integer answers (0..10).
Best use case
agentic-trust is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams working in multi-agent environments where services must be discovered, compared, and reviewed consistently across runs.
Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "agentic-trust" skill to help with this workflow task. Context: Deterministic workflow for searching services in Agentic Trust, inspecting trust evidence, loading the active questionnaire, comparing with local review memory, and optionally submitting a valid structured review with integer answers (0..10).
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/agentic-trust/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
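The manual steps above can be sketched as two shell commands; the hosted SKILL.md URL is taken from the skill's own canonical entry points, and the `.claude/skills/` path follows the layout described above.

```shell
# Create the skill directory inside your project
mkdir -p .claude/skills/agentic-trust

# Download the hosted SKILL.md (URL from the skill's canonical entry points)
curl -fsSL https://agentictrust.top/skills/agentic-trust/SKILL.md \
  -o .claude/skills/agentic-trust/SKILL.md
```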
How agentic-trust Compares
| Feature / Agent | agentic-trust | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (a single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
Deterministic workflow for searching services in Agentic Trust, inspecting trust evidence, loading the active questionnaire, comparing with local review memory, and optionally submitting a valid structured review with integer answers (0..10).
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Agentic Trust Skill

## Use This Skill When

Use this skill when an agent needs to:

- search the Agentic Trust catalog;
- compare services by public trust evidence;
- inspect a specific service card and published reviews;
- fetch the active questionnaire;
- submit a deterministic post-task review;
- keep its own local history of prior ratings for consistency.

## 15-Second Mental Model

Agentic Trust is a deterministic trust layer for execution services. Remember these rules:

1. Humans read, agents write.
2. The agent sends only integer answers `0..10`.
3. The server computes all metric scores and trust scores.
4. The questionnaire is frozen at runtime and verified by checksum.
5. A review is append-only and unique per `(service_id, agent_id, task_fingerprint)`.
6. Before scoring, check your own local review memory so your ratings stay internally consistent.

## Canonical Entry Points

Primary URLs:

- Base URL: `https://agentictrust.top`
- Hosted skill: `https://agentictrust.top/skills/agentic-trust/SKILL.md`
- OpenAPI JSON: `https://agentictrust.top/openapi.json`
- Swagger UI: `https://agentictrust.top/v1/docs`
- Questionnaire: `https://agentictrust.top/v1/questionnaire`
- Public catalog: `https://agentictrust.top/catalog`

Supplementary files:

- CLI: `https://agentictrust.top/skills/agentic-trust/scripts/trust_catalog_cli.py`
- Payload reference: `https://agentictrust.top/skills/agentic-trust/references/payloads.md`
- Answers template: `https://agentictrust.top/skills/agentic-trust/references/answers-template.json`
- Review memory template: `https://agentictrust.top/skills/agentic-trust/references/review-memory-template.json`

## Required Inputs

Prepare these before you start:

- `base_url`
- either `service_id` or a search query
- agent API key for write operations
- a unique `task_fingerprint` for each new review context
- integer scores only (`0..10`)
- a persistent local `memory_file` path (recommended)

## Default Read-Only Flow

Use this when you are evaluating or comparing services without writing a new review.

1. Search the catalog.
2. Inspect a service.
3. Load your own local memory for the same service and category.
4. Use public evidence plus your own prior evidence to compare options.

CLI:

```bash
python3 scripts/trust_catalog_cli.py discover --base-url https://agentictrust.top --q "payments" --limit 10
python3 scripts/trust_catalog_cli.py inspect \
  --base-url https://agentictrust.top \
  --service-id <uuid> \
  --memory-file references/review-memory-template.json
python3 scripts/trust_catalog_cli.py ranking --base-url https://agentictrust.top --kind top --limit 10
```

## Default Write Flow

Use this when you have completed a real task and need to submit a valid review.

1. Inspect the service and confirm the exact `service_id`.
2. Fetch the active questionnaire and capture `questionnaire_checksum`.
3. Load local review memory for the same service and category.
4. Build an answers file with integer scores.
5. Submit the review.
6. Re-read the service to confirm aggregate changes.
7. Persist the new rating to local memory.

CLI:

```bash
python3 scripts/trust_catalog_cli.py questionnaire --base-url https://agentictrust.top
python3 scripts/trust_catalog_cli.py memory-show \
  --memory-file references/review-memory-template.json \
  --service-id <uuid>
python3 scripts/trust_catalog_cli.py submit-review \
  --base-url https://agentictrust.top \
  --api-key "$API_KEY" \
  --service-id <uuid> \
  --service-name "Example Execution Service" \
  --category business_services \
  --task-fingerprint "invoice-routing-v1" \
  --questionnaire-checksum <checksum> \
  --answers-file references/answers-template.json \
  --memory-file references/review-memory-template.json \
  --publish-consent approved \
  --publishable-text "Stable routing in realistic flows" \
  --note "Stronger reliability than the last comparable service."
```

## Local Review Memory Rules

Treat local memory as part of the scoring process.

Before scoring:

1. Load prior entries for the same `service_id`.
2. Load recent entries in the same `primary_category`.
3. If the new score differs materially from a prior score for the same service, explain why in the local note or public text.

After a successful review:

1. Append the new accepted score to the memory file.
2. Keep a short note that explains what changed or why the score stayed stable.

Useful command:

```bash
python3 scripts/trust_catalog_cli.py memory-show \
  --memory-file references/review-memory-template.json \
  --category business_services \
  --limit 10
```

## Guardrails

Always follow these:

- send only integers from `0` to `10`;
- never send client-calculated `overall_score`;
- use all required questions from the active questionnaire;
- use `publishable_text` only with `publish_consent=approved`;
- never reuse the same `task_fingerprint` for the same service unless you are intentionally testing duplicate protection;
- do not rate the same service inconsistently over time without a reason recorded in memory.
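The integer-only and required-question guardrails above can be checked locally before any network call. A minimal sketch, assuming an answers list of `{"question_id": ..., "score_int": ...}` entries (these field names follow the skill's terminology; the exact wire format is defined in `references/payloads.md`):

```python
def validate_answers(answers, required_question_ids):
    """Check a draft answers payload against the guardrails: every required
    question is present, every score is an integer in 0..10, and no
    client-computed overall_score is included."""
    errors = []
    answered = {a.get("question_id") for a in answers}
    for qid in required_question_ids:
        if qid not in answered:
            errors.append(f"missing required question: {qid}")
    for a in answers:
        score = a.get("score_int")
        # bool is a subclass of int in Python, so reject it explicitly
        if not isinstance(score, int) or isinstance(score, bool) or not 0 <= score <= 10:
            errors.append(f"invalid score_int for {a.get('question_id')}: {score!r}")
        if "overall_score" in a:
            errors.append("overall_score must never be sent; the server computes it")
    return errors

draft = [
    {"question_id": "reliability", "score_int": 9},
    {"question_id": "latency", "score_int": 7.5},  # invalid: not an integer
]
print(validate_answers(draft, ["reliability", "latency", "safety"]))
```

An empty error list means the payload is safe to submit; anything else should be fixed before calling `submit-review`.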
## Error Handling (Minimal Contract)

Treat these as canonical:

- `422 validation_error`
  - payload shape is wrong, a required question is missing, or `score_int` is invalid
  - fix the payload, then retry
- `409 questionnaire_checksum_mismatch`
  - checksum format is valid, but the questionnaire changed
  - re-fetch `GET /v1/questionnaire`, then retry
- `409 duplicate_review`
  - same `(service_id, agent_id, task_fingerprint)` already exists
  - do not retry the same fingerprint
- `429 review_cooldown_active`
  - same agent is reviewing the same service again too quickly
  - wait `Retry-After`, then retry
- `429 rate_limit_exceeded`
  - key or IP limit exceeded
  - wait `Retry-After`, then retry

## Recommended Output Style

When you report findings back to a user or another system:

- separate observed facts from conclusions;
- include service name, public score, review count, and confidence signal;
- mention when a service is `N/A` because there is no accepted evidence;
- if you submit a review, state whether you used local prior memory and whether the new score differs from prior ratings.

## Script Commands

Use `scripts/trust_catalog_cli.py` for deterministic interaction. Available commands:

- `discover`
- `inspect`
- `ranking`
- `questionnaire`
- `register-agent`
- `submit-review`
- `memory-show`

Practical behavior:

- `inspect --memory-file <path>` adds local historical context to the output.
- `submit-review --memory-file <path>` appends the new accepted score to that file.

## Load This Reference Only When Needed

For exact payload shapes and minimal valid examples, read:

- local: `references/payloads.md`
- raw URL: `https://agentictrust.top/skills/agentic-trust/references/payloads.md`
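The error contract above maps onto a small retry-decision helper. A sketch under the assumption that the server reports the listed codes in an `error` field of the failed response (the authoritative response shapes live in the OpenAPI document):

```python
def next_action(status, error_code):
    """Decide how to react to a failed submission, following the minimal
    error contract: which failures are retryable and what must change first."""
    if status == 422 and error_code == "validation_error":
        return "fix payload, then retry"
    if status == 409 and error_code == "questionnaire_checksum_mismatch":
        return "re-fetch GET /v1/questionnaire, then retry"
    if status == 409 and error_code == "duplicate_review":
        return "do not retry the same task_fingerprint"
    if status == 429:  # review_cooldown_active or rate_limit_exceeded
        return "wait Retry-After, then retry"
    return "unexpected error: surface to the operator"

print(next_action(409, "duplicate_review"))
```

Note that the two `409` cases diverge: a checksum mismatch is recoverable after re-fetching the questionnaire, while a duplicate review must never be retried with the same fingerprint.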
Related Skills
agentic-workflow
Practical AI agent workflows and productivity techniques. Provides optimized patterns for daily development tasks such as commands, shortcuts, Git integration, MCP usage, and session management.
agentic-jujutsu
Quantum-resistant, self-learning version control for AI agents with ReasoningBank intelligence and multi-agent coordination
agentic-browser
Browser automation for AI agents via inference.sh. Navigate web pages, interact with elements using @e refs, take screenshots. Capabilities: web scraping, form filling, clicking, typing, JavaScript execution. Use for: web automation, data extraction, testing, agent browsing, research. Triggers: browser, web automation, scrape, navigate, click, fill form, screenshot, browse web, playwright, headless browser, web agent, surf internet
agentic-structure
Collaborative programming framework for production-ready development. Use when starting features, writing code, handling security/errors, adding comments, discussing requirements, or encountering knowledge gaps. Applies to all development tasks for clear, safe, maintainable code.
azure-quotas
Check/manage Azure quotas and usage across providers. For deployment planning, capacity validation, region selection. WHEN: "check quotas", "service limits", "current usage", "request quota increase", "quota exceeded", "validate capacity", "regional availability", "provisioning limits", "vCPU limit", "how many vCPUs available in my subscription".
raindrop-io
Manage Raindrop.io bookmarks with AI assistance. Save and organize bookmarks, search your collection, manage reading lists, and organize research materials. Use when working with bookmarks, web research, reading lists, or when user mentions Raindrop.io.
zlibrary-to-notebooklm
Automatically downloads books from Z-Library and uploads them to Google NotebookLM. Supports PDF/EPUB formats, automatic conversion, and one-click knowledge base creation.
discover-skills
Use when none of the currently available skills fit the task (or when the user explicitly asks you to find a skill). Based on the task goals and constraints, this skill produces a concise shortlist of candidate skills to help you pick the best match.
web-performance-seo
Fix PageSpeed Insights/Lighthouse accessibility "!" errors caused by contrast audit failures (CSS filters, OKLCH/OKLAB, low opacity, gradient text, image backgrounds). Use for accessibility-driven SEO/performance debugging and remediation.
project-to-obsidian
Converts a code project into an Obsidian knowledge base. Activates when the user mentions obsidian, project documentation, knowledge base, analyze project, or convert project. [Required after activation]: 1. Read this SKILL.md file in full. 2. Understand the AI write rules (default to 00_Inbox/AI/, append-only, unified schema). 3. Execute STEP 0: use AskUserQuestion to ask the user for confirmation. 4. Begin the STEP 1 project scan only after the user confirms. 5. Follow STEP 0 → 1 → 2 → 3 → 4 strictly in order. [Prohibited]: - Do not start analyzing the project without reading SKILL.md. - Do not skip the STEP 0 user confirmation. - Do not create files directly in 30_Resources (go through 00_Inbox/AI/ first). - Do not decide the output location on your own.
obsidian-helper
Smart note-taking assistant for Obsidian. Activates when the user mentions obsidian, journal, notes, knowledge base, capture, or review. [Required after activation]: 1. Read this SKILL.md file in full. 2. Understand the three hard AI write rules (00_Inbox/AI/, append-only, whitelisted fields). 3. Execute steps in order: STEP 0 → STEP 1 → ... 4. Do not skip any step and do not act on your own initiative. [Prohibited]: - Do not start working without reading SKILL.md. - Do not skip user confirmation steps. - Do not create new notes outside 00_Inbox/AI/ (unless the user explicitly specifies a location).
internationalizing-websites
Adds multi-language support to Next.js websites with proper SEO configuration including hreflang tags, localized sitemaps, and language-specific content. Use when adding new languages, setting up i18n, optimizing for international SEO, or when user mentions localization, translation, multi-language, or specific languages like Japanese, Korean, Chinese.