newman

Automated API testing with Postman collections via Newman CLI. Use when user requests API testing, collection execution, automated testing, CI/CD integration, or mentions "Postman", "Newman", "API tests", "run collection", or "automated testing".

3,891 stars
Complexity: easy

About this skill

Newman is the command-line collection runner for Postman: it lets users and AI agents execute the API tests defined in a Postman collection without the graphical interface. This skill covers installing Newman and using it to automate API testing. It is especially useful for integrating API tests into CI/CD pipelines, automating regression testing, running basic load tests, or executing a suite of API calls from a script. With it, an AI agent can quickly set up and run API tests from a user's request, providing immediate feedback on API health. Developers and QA engineers will find it invaluable for consistently validating the functionality and stability of their APIs. It supports environment management, data-driven testing, and multiple reporting formats (HTML, JSON, JUnit), making it a robust option for programmatic API quality assurance.

Best use case

The primary use case is automating API testing and integrating these tests into modern development and deployment workflows, particularly CI/CD pipelines. This skill benefits developers, QA engineers, and DevOps teams who need to ensure API stability, functionality, and performance through consistent, programmatic execution of their Postman collections.


The user should expect command-line output detailing the results of the collection run, optionally accompanied by generated reports (e.g., HTML, JSON) that confirm API functionality or highlight failures.

Practical example

Example input

Run my `my_api_collection.json` Postman collection using the `dev_environment.json` and generate an HTML report named `api_test_report.html`.
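A request like this maps to a single Newman invocation (file names taken from the request; the `html` reporter must be installed separately as `newman-reporter-html`):

```shell
newman run my_api_collection.json \
  -e dev_environment.json \
  --reporters cli,html \
  --reporter-html-export api_test_report.html
```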

Example output

```
INFO  newman v5.3.2
INFO  ┌─────────────────────────┐
INFO  │      newman summary     │
INFO  ├─────────────────────────┤
INFO  │ collection run results  │
INFO  ├─────────────────────────┤
INFO  │ iterations              │ 1
INFO  │ requests                │ 5
INFO  │ test-scripts            │ 5
INFO  │ assertions              │ 8
INFO  │ prerequest-scripts      │ 0
INFO  │ callbacks               │ 0
INFO  │ failures                │ 0
INFO  ├─────────────────────────┤
INFO  │ total run duration      │ 100ms
INFO  │ total data received     │ 2.5KB
INFO  │ avg response time       │ 20ms
INFO  └─────────────────────────┘
INFO  HTML reporter: api_test_report.html exported successfully.
```

When to use this skill

  • When a user requests to run API tests or validate API endpoints defined in a Postman collection.
  • To integrate Postman collection execution into a continuous integration/continuous delivery (CI/CD) pipeline.
  • When performing data-driven API testing with varying inputs from CSV or JSON files.
  • To generate detailed reports (e.g., HTML, JSON, JUnit) of API test results for analysis or auditing.

When not to use this skill

  • When interactive API exploration, manual debugging, or visual request building is required (use Postman GUI directly).
  • If the user does not have existing Postman collections or environments to run.
  • For front-end user interface (UI) testing, as this skill focuses solely on API interactions.
  • When only simple, ad-hoc API calls are needed that can be handled with basic `curl` commands.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/newman/SKILL.md --create-dirs "https://raw.githubusercontent.com/openclaw/skills/main/skills/1999azzar/newman/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/newman/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How newman Compares

| Feature                 | newman        | Standard Approach |
| ----------------------- | ------------- | ----------------- |
| Platform Support        | Not specified | Limited / Varies  |
| Context Awareness       | High          | Baseline          |
| Installation Complexity | Easy          | N/A               |

Frequently Asked Questions

What does this skill do?

It runs Postman collections from the command line with Newman, automating API testing, collection execution, report generation, and CI/CD integration.

How difficult is it to install?

The installation complexity is rated as easy. You can find the installation instructions above.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Newman - Postman CLI Runner

Newman is the command-line Collection Runner for Postman. Run and test Postman collections directly from the command line with powerful reporting, environment management, and CI/CD integration.

## Quick Start

### Installation

```bash
# Global install (recommended)
npm install -g newman

# Project-specific
npm install --save-dev newman

# Verify
newman --version
```

### Basic Execution

```bash
# Run collection
newman run collection.json

# With environment
newman run collection.json -e environment.json

# With globals
newman run collection.json -g globals.json

# Combined
newman run collection.json -e env.json -g globals.json -d data.csv
```

## Core Workflows

### 1. Export from Postman Desktop

**In Postman:**
1. Collections → Click "..." → Export
2. Choose "Collection v2.1" (recommended)
3. Save as `collection.json`

**Environment:**
1. Environments → Click "..." → Export
2. Save as `environment.json`
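An export can be sanity-checked before handing it to Newman; a minimal sketch, where the inline JSON stands in for a real export from Postman's Export dialog:

```shell
# Stand-in for a real export (created here only so the check below is runnable)
cat > collection.json <<'EOF'
{"info":{"name":"demo","schema":"https://schema.getpostman.com/json/collection/v2.1.0/collection.json"},"item":[]}
EOF

# Confirm the file declares the v2.1 schema before running it
grep -q "collection/v2.1" collection.json && echo "collection.json is v2.1"
```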

### 2. Run Tests

```bash
# Basic run
newman run collection.json

# With detailed output
newman run collection.json --verbose

# Fail on errors
newman run collection.json --bail

# Custom timeout (30s)
newman run collection.json --timeout-request 30000
```

### 3. Data-Driven Testing

**CSV format:**
```csv
username,password
user1,pass1
user2,pass2
```

**Run:**
```bash
newman run collection.json -d test_data.csv --iteration-count 2
```

### 4. Reporters

```bash
# CLI only (default)
newman run collection.json

# HTML report
newman run collection.json --reporters cli,html --reporter-html-export report.html

# JSON export
newman run collection.json --reporters cli,json --reporter-json-export results.json

# JUnit (for CI)
newman run collection.json --reporters cli,junit --reporter-junit-export junit.xml

# Multiple reporters
newman run collection.json --reporters cli,html,json,junit \
  --reporter-html-export ./reports/newman.html \
  --reporter-json-export ./reports/newman.json \
  --reporter-junit-export ./reports/newman.xml
```
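Note that only the `cli`, `json`, and `junit` reporters ship with Newman; the HTML reporters are separate npm packages:

```shell
# Required for --reporters html
npm install -g newman-reporter-html

# Popular richer alternative (optional)
npm install -g newman-reporter-htmlextra
```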

### 5. Security Best Practices

**❌ NEVER hardcode secrets in collections!**

Use environment variables:

```bash
# Export sensitive vars
export API_KEY="your-secret-key"
export DB_PASSWORD="your-db-pass"

# Run with the environment file; see the {{$processEnvironment}} note below
newman run collection.json -e environment.json

# Or pass directly
newman run collection.json --env-var "API_KEY=secret" --env-var "DB_PASSWORD=pass"
```

**In Postman collection tests:**
```javascript
// Use {{API_KEY}} in requests
pm.request.headers.add({key: 'Authorization', value: `Bearer {{API_KEY}}`});

// Access in scripts
const apiKey = pm.environment.get("API_KEY");
```

**Environment file (environment.json):**
```json
{
  "name": "Production",
  "values": [
    {"key": "BASE_URL", "value": "https://api.example.com", "enabled": true},
    {"key": "API_KEY", "value": "{{$processEnvironment.API_KEY}}", "enabled": true}
  ]
}
```

Newman resolves `{{$processEnvironment.API_KEY}}` from the OS environment at run time, so the secret never needs to be stored in the file; if your Newman version does not support this syntax, pass the value with `--env-var` instead.
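Where `{{$processEnvironment.*}}` substitution is not available, a portable fallback is to generate the environment file from OS variables just before the run; a minimal sketch (file and variable names are illustrative):

```shell
# Demo value; in CI this would come from the secret store
export API_KEY="demo-key"

# Write environment.json with the secret injected at run time
cat > environment.json <<EOF
{
  "name": "Generated",
  "values": [
    {"key": "API_KEY", "value": "${API_KEY}", "enabled": true}
  ]
}
EOF

echo "environment.json ready"
```

The generated file should be git-ignored; `--env-var "API_KEY=$API_KEY"` avoids the temporary file entirely.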

## Common Use Cases

### CI/CD Integration

See `references/ci-cd-examples.md` for GitHub Actions, GitLab CI, and Jenkins examples.

### Automated Regression Testing

```bash
#!/bin/bash
# scripts/run-api-tests.sh

set -e

echo "Running API tests..."
mkdir -p ./test-results

newman run collections/api-tests.json \
  -e environments/staging.json \
  --reporters cli,html,junit \
  --reporter-html-export ./test-results/newman.html \
  --reporter-junit-export ./test-results/newman.xml \
  --bail \
  --color on

echo "Tests completed. Report: ./test-results/newman.html"
```

### Load Testing

```bash
# Run with high iteration count
newman run collection.json \
  -n 100 \
  --delay-request 100 \
  --timeout-request 5000 \
  --reporters cli,json \
  --reporter-json-export load-test-results.json
```

### Parallel Execution

```bash
# Install parallel runner
npm install -g newman-parallel

# Run collections in parallel
newman-parallel -c collection1.json,collection2.json,collection3.json \
  -e environment.json \
  --reporters cli,html
```

## Advanced Features

### Custom Scripts

**Pre-request Script (in Postman):**
```javascript
// Generate dynamic values
pm.environment.set("timestamp", Date.now());
pm.environment.set("nonce", Math.random().toString(36).substring(7));
```

**Test Script (in Postman):**
```javascript
// Status code check
pm.test("Status is 200", function() {
    pm.response.to.have.status(200);
});

// Response body validation
pm.test("Response has user ID", function() {
    const jsonData = pm.response.json();
    pm.expect(jsonData).to.have.property('user_id');
});

// Response time check
pm.test("Response time < 500ms", function() {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

// Set variable from response
pm.environment.set("user_token", pm.response.json().token);
```

### SSL/TLS Configuration

```bash
# Disable SSL verification (dev only!)
newman run collection.json --insecure

# Per-host client certificates from a list file
newman run collection.json --ssl-client-cert-list cert-list.json

# Extra CA certificates for the trust store
newman run collection.json --ssl-extra-ca-certs ca.pem

# Client certificates
newman run collection.json \
  --ssl-client-cert client.pem \
  --ssl-client-key key.pem \
  --ssl-client-passphrase "secret"
```

### Error Handling

```bash
# Always exit 0, even when assertions fail (for wrappers that inspect reports)
newman run collection.json --suppress-exit-code

# Fail fast
newman run collection.json --bail

# Custom error handling in wrapper
#!/bin/bash
newman run collection.json -e env.json
EXIT_CODE=$?

if [ $EXIT_CODE -ne 0 ]; then
    echo "Tests failed! Exit code: $EXIT_CODE"
    # Send alert, rollback deployment, etc.
    exit 1
fi
```

## Troubleshooting

**Collection not found:**
- Use absolute paths: `newman run /full/path/to/collection.json`
- Check file permissions: `ls -la collection.json`

**Environment variables not loading:**
- Verify syntax: `{{$processEnvironment.VAR_NAME}}`
- Check export: `echo $VAR_NAME`
- Use `--env-var` flag as fallback

**Timeout errors:**
- Increase timeout: `--timeout-request 60000` (60s)
- Check network connectivity
- Verify API endpoint is reachable

**SSL errors:**
- Development: Use `--insecure` temporarily
- Production: Add CA cert with `--ssl-extra-ca-certs`

**Memory issues (large collections):**
- Reduce iteration count
- Split collection into smaller parts
- Increase Node heap: `NODE_OPTIONS=--max-old-space-size=4096 newman run ...`

## Best Practices

1. **Version Control**: Store collections and environments in Git
2. **Environment Separation**: Separate files for dev/staging/prod
3. **Secret Management**: Use environment variables, never commit secrets
4. **Meaningful Names**: Use descriptive collection and folder names
5. **Test Atomicity**: Each request should test one specific thing
6. **Assertions**: Add comprehensive test scripts to every request
7. **Documentation**: Use Postman descriptions for context
8. **CI Integration**: Run Newman in CI pipeline for every PR
9. **Reports**: Archive HTML reports for historical analysis
10. **Timeouts**: Set reasonable timeout values for production APIs

## References

- **CI/CD Examples**: See `references/ci-cd-examples.md`
- **Advanced Patterns**: See `references/advanced-patterns.md`
- **Official Docs**: https://learning.postman.com/docs/running-collections/using-newman-cli/command-line-integration-with-newman/

Related Skills

All from openclaw/skills (DevOps & Infrastructure):

  • botlearn-healthcheck — BotLearn autonomous health inspector for OpenClaw instances across 5 domains (hardware, config, security, skills, autonomy); triggers on system check, health report, diagnostics, or scheduled heartbeat inspection.
  • Incident Postmortem Generator — Generate blameless incident postmortems from raw notes, Slack threads, or bullet points.
  • Post-Mortem & Incident Review Framework — Run structured post-mortems that actually prevent repeat failures. Blameless analysis, root cause identification, and action tracking.
  • afrexai-performance-engineering — Complete performance engineering system: profiling, optimization, load testing, capacity planning, and performance culture. Covers Node.js, Python, Go, Java, databases, APIs, and frontend.
  • OpenClaw Mastery — The complete agent engineering and operations system, built by AfrexAI.
  • Legacy System Modernization Engine — Complete methodology for assessing, planning, and executing legacy system modernization, from monolith decomposition to cloud migration. Works for any tech stack, any scale.
  • Incident Response Playbook — Structured incident response for business and IT teams: detection, triage, containment, resolution, and post-mortem, with auto-generated timelines and action items.
  • Git Engineering & Repository Strategy — Branching strategies, code review workflows, monorepo management, release automation, and healthy repository practices at scale.
  • Django Production Engineering — Complete methodology for building, scaling, and operating production Django applications, from project structure to deployment, security to performance.
  • IT Disaster Recovery Plan Generator — Build production-ready disaster recovery plans that actually get followed when things break.
  • afrexai-api-architect — Design, build, test, document, and secure production-grade APIs across the full lifecycle, from schema design through deployment, monitoring, and versioning.
  • Agent Ops Runbook — Production-ready operations runbook for deploying AI agents: pre-deployment checklists, staged rollout (shadow mode → supervised → autonomous), monitoring dashboards, rollback procedures, cost management, and incident response templates.