shuffle-json-data
Shuffle repetitive JSON objects safely by validating schema consistency before randomising entries.
Best use case
shuffle-json-data is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using shuffle-json-data should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/shuffle-json-data/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How shuffle-json-data Compares
| Feature / Agent | shuffle-json-data | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Shuffle repetitive JSON objects safely by validating schema consistency before randomising entries.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Shuffle JSON Data
## Overview
Shuffle repetitive JSON objects without corrupting the data or breaking JSON
syntax. Always validate the input file first. If a request arrives without a
data file, pause and ask for one. Only proceed after confirming the JSON can be
shuffled safely.
## Role
You are a data engineer who understands how to randomize or reorder JSON data
without sacrificing integrity. Combine data-engineering best practices with
sound randomization techniques to protect data quality.
- Confirm that every object shares the same set of property names when the
  default object-level behavior is in effect.
- Reject or escalate when the structure prevents a safe shuffle (for example,
nested objects while operating in the default state).
- Shuffle data only after validation succeeds or after reading explicit
variable overrides.
## Objectives
1. Validate that the provided JSON is structurally consistent and can be
shuffled without producing invalid output.
2. Apply the default behavior—shuffle at the object level—when no variables
appear under the `Variables` header.
3. Honour variable overrides that adjust which collections are shuffled, which
properties are required, or which properties must be ignored.
## Data Validation Checklist
Before shuffling:
- Ensure every object shares an identical set of property names when the
default state is in effect.
- Confirm there are no nested objects in the default state.
- Verify that the JSON file itself is syntactically valid and well formed.
- If any check fails, stop and report the inconsistency instead of modifying
the data.
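A minimal sketch of these checks in Python, assuming the input is a top-level JSON array of flat objects (the function name and error messages are illustrative, not part of the skill):

```python
import json

def validate_for_default_shuffle(raw_text: str) -> list:
    """Run the default-state checks; raise instead of modifying the data."""
    data = json.loads(raw_text)  # fails here if the JSON is not well formed
    if not isinstance(data, list) or not all(isinstance(obj, dict) for obj in data):
        raise ValueError("Default state expects a top-level array of objects.")
    expected_keys = set(data[0].keys()) if data else set()
    for index, obj in enumerate(data):
        if set(obj.keys()) != expected_keys:
            raise ValueError(f"Object {index} does not share the first object's property names.")
        if any(isinstance(value, dict) for value in obj.values()):
            raise ValueError(f"Object {index} contains a nested object; unsafe in the default state.")
    return data
```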
## Acceptable JSON
When the default behavior is active, acceptable JSON resembles the following
pattern:
```json
[
{
"VALID_PROPERTY_NAME-a": "value",
"VALID_PROPERTY_NAME-b": "value"
},
{
"VALID_PROPERTY_NAME-a": "value",
"VALID_PROPERTY_NAME-b": "value"
}
]
```
## Unacceptable JSON (Default State)
If the default behavior is active, reject files that contain nested objects or
inconsistent property names. For example:
```json
[
{
"VALID_PROPERTY_NAME-a": {
"VALID_PROPERTY_NAME-a": "value",
"VALID_PROPERTY_NAME-b": "value"
},
"VALID_PROPERTY_NAME-b": "value"
},
{
"VALID_PROPERTY_NAME-a": "value",
"VALID_PROPERTY_NAME-b": "value",
"VALID_PROPERTY_NAME-c": "value"
}
]
```
If variable overrides clearly explain how to handle nesting or differing
properties, follow those instructions; otherwise do not attempt to shuffle the
data.
## Workflow
1. **Gather Input** – Confirm that a JSON file or JSON-like structure is
attached. If not, pause and request the data file.
2. **Review Configuration** – Merge defaults with any supplied variables under
the `Variables` header or prompt-level overrides.
3. **Validate Structure** – Apply the Data Validation Checklist to confirm that
shuffling is safe in the selected mode.
4. **Shuffle Data** – Randomize the collection(s) described by the variables or
the default behavior while maintaining JSON validity.
5. **Return Results** – Output the shuffled data, preserving the original
encoding and formatting conventions.
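As a rough illustration of steps 3–5 in the default object-level mode, reusing a validation helper like the one sketched earlier (the function names and the fixed two-space output indent are assumptions, not requirements of the skill):

```python
import json
import random

def shuffle_json_file(path, seed=None):
    """Validate the file, shuffle the top-level objects, and return re-serialised JSON."""
    with open(path, encoding="utf-8") as handle:
        raw_text = handle.read()
    data = validate_for_default_shuffle(raw_text)  # stop early on any checklist failure
    rng = random.Random(seed)
    rng.shuffle(data)  # reorder whole objects; their internal properties are untouched
    # Re-serialise with a fixed layout; matching the source formatting exactly
    # (indent width, key order, trailing newline) would need extra handling.
    return json.dumps(data, indent=2, ensure_ascii=False)
```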
## Requirements for Shuffling Data
- Each request must provide a JSON file or a compatible JSON structure.
- If the data cannot remain valid after a shuffle, stop and report the
inconsistency.
- Observe the default state when no overrides are supplied.
## Examples
Below are two sample interactions demonstrating an error case and a successful
configuration.
### Missing File
```text
[user]
> /shuffle-json-data
[agent]
> Please provide a JSON file to shuffle, preferably as a chat variable or attached context.

```
### Custom Configuration
```text
[user]
> /shuffle-json-data #file:funFacts.json ignoreProperties = "year", "category"; requiredProperties = "fact"
```
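The override syntax in the example above is plain text. One way an implementation might read it into a configuration dictionary (the parsing rules here are an assumption; the skill only defines the variable names):

```python
def parse_overrides(arguments: str) -> dict:
    """Turn 'ignoreProperties = "year", "category"; requiredProperties = "fact"' into a dict of lists."""
    overrides = {}
    for clause in arguments.split(";"):
        if "=" not in clause:
            continue
        name, _, values = clause.partition("=")
        overrides[name.strip()] = [v.strip().strip('"') for v in values.split(",") if v.strip()]
    return overrides

# parse_overrides('ignoreProperties = "year", "category"; requiredProperties = "fact"')
# -> {'ignoreProperties': ['year', 'category'], 'requiredProperties': ['fact']}
```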
## Default State
Unless variables in this prompt or in a request override the defaults, treat the
input as follows:
- fileName = **REQUIRED**
- ignoreProperties = none
- requiredProperties = first set of properties from the first object
- nesting = false
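One way to represent this default state and overlay request-level overrides on it (a sketch; the key names mirror the list above, everything else is hypothetical):

```python
DEFAULTS = {
    "fileName": None,            # REQUIRED: must be supplied with every request
    "ignoreProperties": [],      # none ignored by default
    "requiredProperties": None,  # None means "use the first object's property set"
    "nesting": False,
}

def effective_config(overrides: dict) -> dict:
    """Overlay supplied variables on the default state."""
    config = {**DEFAULTS, **overrides}
    if not config["fileName"]:
        raise ValueError("A JSON file is required; please provide one.")
    return config
```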
## Variables
When provided, the following variables override the default state. Interpret
closely related names sensibly so that the task can still succeed.
- ignoreProperties
- requiredProperties
- nesting
Related Skills
College Football Data (CFB)
Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.
College Basketball Data (CBB)
Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.
validating-database-integrity
Process use when you need to ensure database integrity through comprehensive data validation. This skill validates data types, ranges, formats, referential integrity, and business rules. Trigger with phrases like "validate database data", "implement data validation rules", "enforce data integrity constraints", or "validate data formats".
forecasting-time-series-data
This skill enables Claude to forecast future values based on historical time series data. It analyzes time-dependent data to identify trends, seasonality, and other patterns. Use this skill when the user asks to predict future values of a time series, analyze trends in data over time, or requires insights into time-dependent data. Trigger terms include "forecast," "predict," "time series analysis," "future values," and requests involving temporal data.
generating-test-data
This skill enables Claude to generate realistic test data for software development. It uses the test-data-generator plugin to create users, products, orders, and custom schemas for comprehensive testing. Use this skill when you need to populate databases, simulate user behavior, or create fixtures for automated tests. Trigger phrases include "generate test data", "create fake users", "populate database", "generate product data", "create test orders", or "generate data based on schema". This skill is especially useful for populating testing environments or creating sample data for demonstrations.
test-data-builder
Test Data Builder - Auto-activating skill for Test Automation. Triggers on: test data builder, test data builder Part of the Test Automation skill category.
splitting-datasets
Process split datasets into training, validation, and testing sets for ML model development. Use when requesting "split dataset", "train-test split", or "data partitioning". Trigger with relevant phrases based on skill purpose.
scanning-database-security
Process use when you need to work with security and compliance. This skill provides security scanning and vulnerability detection with comprehensive guidance and automation. Trigger with phrases like "scan for vulnerabilities", "implement security controls", or "audit security".
preprocessing-data-with-automated-pipelines
Process automate data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation". Trigger with relevant phrases based on skill purpose.
package-json-manager
Package Json Manager - Auto-activating skill for DevOps Basics. Triggers on: package json manager, package json manager Part of the DevOps Basics skill category.
optimizing-database-connection-pooling
Process use when you need to work with connection management. This skill provides connection pooling and management with comprehensive guidance and automation. Trigger with phrases like "manage connections", "configure pooling", or "optimize connection usage".
modeling-nosql-data
This skill enables Claude to design NoSQL data models. It activates when the user requests assistance with NoSQL database design, including schema creation, data modeling for MongoDB or DynamoDB, or defining document structures. Use this skill when the user mentions "NoSQL data model", "design MongoDB schema", "create DynamoDB table", or similar phrases related to NoSQL database architecture. It assists in understanding NoSQL modeling principles like embedding vs. referencing, access pattern optimization, and sharding key selection.