json-to-llm-context

Turn JSON or PostgreSQL jsonb payloads into compact readable context for LLMs. Use when a user wants to compress JSON, reduce token usage, summarize API responses, or convert structured data into model-friendly text without dumping raw paths.

242 stars

Best use case

json-to-llm-context is best used when you need a repeatable AI agent workflow instead of a one-off prompt: it turns JSON or PostgreSQL jsonb payloads into compact, readable context for LLMs whenever you need to compress JSON, reduce token usage, summarize API responses, or convert structured data into model-friendly text without dumping raw paths.

Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "json-to-llm-context" skill to help with this task. Context: an API returned a large JSON payload; compress it into compact, readable context for an LLM, reducing token usage without dumping raw paths.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/json-to-llm-context/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/vc999999999/json-to-llm-context/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/json-to-llm-context/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How json-to-llm-context Compares

Feature                  | json-to-llm-context | Standard Approach
Platform Support         | Not specified       | Limited / Varies
Context Awareness        | High                | Baseline
Installation Complexity  | Unknown             | N/A

Frequently Asked Questions

What does this skill do?

It turns JSON or PostgreSQL jsonb payloads into compact, readable context for LLMs: compressing JSON, reducing token usage, summarizing API responses, and converting structured data into model-friendly text without dumping raw paths.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# JSON to LLM Context

## Overview

Use this skill when raw JSON is too noisy for direct prompting. It converts JSON or exported `jsonb`
content into short, readable summaries that preserve entities, status, relationships, and counts.

Prefer this skill for:
- API responses
- PostgreSQL `jsonb` exports
- nested config/state payloads
- large arrays of records that need compact summaries

Do not use this skill for PDF, DOCX, image OCR, or arbitrary prose documents.

## Workflow

1. Confirm the input is valid JSON or `jsonb`-style JSON text.
2. Run `scripts/json_to_readable_context.py` on the file or pipe JSON through stdin.
3. Return the generated readable summary as the primary artifact.
4. If the output still feels too long, rerun with tighter limits such as lower `--max-samples`,
   `--max-depth`, or `--max-string-len`.
5. If parsing fails, report the JSON error clearly instead of guessing.
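
Steps 1 and 5 can be sketched in plain Python. This is not the script's internals, only the validate-or-fail pattern the workflow assumes:

```python
import json

def load_json_or_fail(text: str):
    """Step 1: confirm the input parses as JSON (exported jsonb is
    plain JSON text once dumped). Step 5: on failure, surface the
    parse error clearly instead of guessing at the content."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise SystemExit(
            f"Invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}"
        )
```

Anything that reaches the summarizer after this gate is known-valid JSON, so later stages never have to defend against half-parsed input.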

## Quick Start

```bash
python3 scripts/json_to_readable_context.py --input payload.json
```

From stdin:

```bash
cat payload.json | python3 scripts/json_to_readable_context.py
```

Write to a file:

```bash
python3 scripts/json_to_readable_context.py --input payload.json --output summary.txt
```

Common tuning:

```bash
python3 scripts/json_to_readable_context.py \
  --input payload.json \
  --style sectioned \
  --strict \
  --preserve status,profile.email \
  --show-paths \
  --expand collections \
  --max-samples 2 \
  --max-depth 2 \
  --max-string-len 48
```

## Output Style

The script aims for layer-2 readable output, for example:

```text
User[123]: Tom

Summary
- Status: active.
- Profile: email a@b.com (verified).

Collections
- Roles: 2 total; values: admin and editor.
```

Behavior rules:
- prefer entity headers like `User[123]: Tom`
- group top-level output into `Summary`, `Details`, and `Collections` when available
- convert fields into short report-style bullets when possible
- summarize large arrays as totals, statuses, and short examples
- keep stable ordering so repeated runs are comparable
- avoid raw path dumps unless the structure is too irregular to beautify safely
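
As a rough illustration of these rules (not the script's actual implementation; the real heuristics live in `references/rules.md`), a minimal sectioned renderer for a flat, user-like payload might look like:

```python
def render_sectioned(obj: dict) -> str:
    """Simplified sketch of the sectioned style: an entity header,
    then Summary for scalar fields and Collections for lists."""
    lines = [f"User[{obj.get('id', '?')}]: {obj.get('name', 'Object')}"]
    scalars = {k: v for k, v in obj.items()
               if k not in ("id", "name") and not isinstance(v, (list, dict))}
    lists = {k: v for k, v in obj.items() if isinstance(v, list)}
    if scalars:
        lines += ["", "Summary"]
        for k in sorted(scalars):  # stable ordering keeps repeated runs comparable
            lines.append(f"- {k.capitalize()}: {scalars[k]}.")
    if lists:
        lines += ["", "Collections"]
        for k in sorted(lists):
            vals = lists[k]
            sample = " and ".join(str(v) for v in vals[:2])
            lines.append(f"- {k.capitalize()}: {len(vals)} total; values: {sample}.")
    return "\n".join(lines)
```

Feeding it `{"id": 123, "name": "Tom", "status": "active", "roles": ["admin", "editor"]}` reproduces the shape of the example above.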

## Style Options

- `--style sectioned` (default): emits `Summary`, `Details`, and `Collections`
- `--style flat`: emits a simpler header + bullet list without section headings

Example flat output:

```text
User[123]: Tom
- Status: active.
- Profile: email a@b.com (verified).
- Roles: 2 total; values: admin and editor.
```

## Safety Controls

- `--strict`: reduces aggressive compression and keeps more explicit structure
- `--preserve key1,key2,path.to.field`: always keeps those keys or dotted paths, even when empty or normally dropped
- `--expand collections|details|all`: adds local sub-bullets so important parts are less compressed
- `--show-paths`: appends source markers like `[@status]` or `[@orders[0]]` to rendered lines

Example:

```bash
python3 scripts/json_to_readable_context.py \
  --input payload.json \
  --strict \
  --preserve status,profile.email,orders \
  --expand all \
  --show-paths
```

Example with paths:

```text
User[123]: Tom [@root]

Summary
- Status: active. [@active]

Collections
- Roles: 2 total; values: admin and editor. [@roles]
```
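
The `--preserve` option implies dotted-path lookup. A sketch of how such paths could be resolved (the helper names here are hypothetical, and list indices like `orders[0]` are not handled):

```python
def resolve_path(obj, dotted: str):
    """Resolve a --preserve style dotted path such as 'profile.email'.
    Returns (found, value) so an empty value is still distinguishable
    from a missing key."""
    cur = obj
    for part in dotted.split("."):
        if isinstance(cur, dict) and part in cur:
            cur = cur[part]
        else:
            return False, None
    return True, cur

def preserved_lines(obj, preserve: str):
    """Emit one bullet per preserved path, even when the value is empty,
    with a --show-paths style source marker appended."""
    lines = []
    for path in (p.strip() for p in preserve.split(",")):
        found, value = resolve_path(obj, path)
        if found:
            lines.append(f"- {path}: {value!r} [@{path}]")
    return lines
```

Distinguishing "found but empty" from "missing" is what lets preserved keys survive even when they would normally be dropped as empty.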

## When To Read References

Read `references/rules.md` only when you need:
- the exact summarization heuristics
- examples of array and nested-object handling
- guidance for deciding whether to tighten or loosen output

## Failure Handling

- Invalid JSON: stop and show the parse error
- Very irregular objects: fall back to simplified readable key/value lines
- Extremely deep payloads: cap traversal with `--max-depth`
- Overlong text blobs: truncate safely with length hints
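
The last rule can be sketched as a truncation helper that keeps a length hint; the script's exact output format may differ, this only shows the safe-truncation idea:

```python
def truncate_blob(text: str, max_len: int = 48) -> str:
    """Overlong text blobs: cut at max_len and append a length hint,
    so the reader knows content was dropped and how much there was."""
    if len(text) <= max_len:
        return text
    return text[:max_len] + f"... ({len(text)} chars total)"
```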

## Notes

- `json` and PostgreSQL `jsonb` are treated the same once parsed
- the default output is a single readable text artifact
- this skill intentionally favors readability over perfect structural fidelity

Related Skills

All of the following are from aiskillstore/marketplace:

  • opencontext: Persistent memory and context management for AI agents using OpenContext. Keep context across sessions/repos/dates, store conclusions, and provide document search workflows.
  • ralph-tui-create-json: Convert PRDs to prd.json format for ralph-tui execution. Creates JSON task files with user stories, acceptance criteria, and dependencies. Triggers on: create prd.json, convert to json, ralph json, create json tasks.
  • hig-project-context: Create or update a shared Apple design context document that other HIG skills use to tailor guidance. Use when the user says 'set up my project context,' 'what platforms am I targeting,' 'configure HIG settings,' or when starting a new Apple platform project.
  • ddd-context-mapping: Map relationships between bounded contexts and define integration contracts using DDD context mapping patterns.
  • context7-auto-research: Automatically fetch the latest library/framework documentation for Claude Code via the Context7 API.
  • context-window-management: Strategies for managing LLM context windows, including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context.
  • context-manager: Elite AI context engineering specialist mastering dynamic context management, vector databases, knowledge graphs, and intelligent memory systems. Orchestrates context across multi-agent workflows, enterprise AI systems, and long-running projects with 2024/2025 best practices. Use PROACTIVELY for complex AI orchestration.
  • context-management-context-save: Use when working with context management context save.
  • context-management-context-restore: Use when working with context management context restore.
  • context-driven-development: Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship between product.md, tech-stack.md, and workflow.md files.
  • code-refactoring-context-restore: Use when working with code refactoring context restore.
  • c4-context: Expert C4 Context-level documentation specialist. Creates high-level system context diagrams; documents personas, user journeys, system features, and external dependencies; and synthesizes container and component documentation with system documentation to create comprehensive context-level architecture. Use when creating the highest-level C4 system context documentation.