offline-ai-toolkit

Build offline-capable AI systems with local models, embedded knowledge bases, and no internet dependency. Use when: building AI tools for offline use, creating self-contained knowledge systems, deploying AI in air-gapped environments.

26 stars

Best use case

offline-ai-toolkit is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using offline-ai-toolkit should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/offline-ai-toolkit/SKILL.md --create-dirs "https://raw.githubusercontent.com/TerminalSkills/skills/main/skills/offline-ai-toolkit/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/offline-ai-toolkit/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How offline-ai-toolkit Compares

Feature / Agent           offline-ai-toolkit   Standard Approach
Platform Support          Not specified        Limited / Varies
Context Awareness         High                 Baseline
Installation Complexity   Unknown              N/A

Frequently Asked Questions

What does this skill do?

Build offline-capable AI systems with local models, embedded knowledge bases, and no internet dependency. Use when: building AI tools for offline use, creating self-contained knowledge systems, deploying AI in air-gapped environments.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Offline AI Toolkit — Self-Contained AI Systems

## Overview

Build AI systems that work completely offline — local LLMs via Ollama, embedded vector search with SQLite, knowledge bases from pre-downloaded content, and a PWA interface that runs without connectivity. Ideal for field work, air-gapped environments, or privacy-first deployments.

## Instructions

### Step 1: Install Ollama for Local LLMs

```bash
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1:8b       # General purpose (4.7GB)
ollama pull nomic-embed-text   # Embeddings (274MB)
```

| Use Case | Model | RAM Needed |
|----------|-------|------------|
| General Q&A | llama3.1:8b | 8GB |
| Quick answers | phi3:mini | 4GB |
| Code help | codellama:7b | 8GB |
| Embeddings | nomic-embed-text | 2GB |
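
Before cutting connectivity, it is worth confirming the pulled models are actually present. A minimal stdlib-only check against Ollama's `/api/tags` endpoint (assumes the default port 11434; returns an empty list when the server is unreachable):

```python
import json
import urllib.request

def installed_models(base_url='http://localhost:11434'):
    """List model names known to a local Ollama server, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f'{base_url}/api/tags', timeout=2) as r:
            return [m['name'] for m in json.load(r).get('models', [])]
    except OSError:  # connection refused, timeout, DNS failure
        return []

missing = {'llama3.1:8b', 'nomic-embed-text'} - set(installed_models())
if missing:
    print(f"Pull before going offline: {', '.join(sorted(missing))}")
```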

### Step 2: Build the Offline Knowledge Base

```python
import requests, sqlite3, os

def init_knowledge_db(db_path='knowledge.db'):
    """Initialize SQLite database for knowledge storage."""
    conn = sqlite3.connect(db_path)
    conn.execute('''CREATE TABLE IF NOT EXISTS documents (
        id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT,
        source TEXT, content TEXT, category TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )''')
    conn.execute('''CREATE TABLE IF NOT EXISTS embeddings (
        id INTEGER PRIMARY KEY, doc_id INTEGER REFERENCES documents(id),
        chunk_text TEXT, embedding BLOB, chunk_index INTEGER
    )''')
    conn.execute('''CREATE VIRTUAL TABLE IF NOT EXISTS fts_documents
        USING fts5(title, content, category)''')
    return conn

def download_wikipedia_articles(topics, conn):
    """Download Wikipedia article summaries for offline knowledge."""
    for topic in topics:
        url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
        try:
            r = requests.get(url, timeout=10)
            r.raise_for_status()
            data = r.json()
            content = data.get('extract', '')
            if content:
                title = data.get('title', topic)
                conn.execute(
                    'INSERT INTO documents (title, source, content, category) VALUES (?, ?, ?, ?)',
                    (title, f'wikipedia:{topic}', content, 'encyclopedia'))
                # Mirror into the FTS table so keyword search works as a fallback
                conn.execute(
                    'INSERT INTO fts_documents (title, content, category) VALUES (?, ?, ?)',
                    (title, content, 'encyclopedia'))
        except Exception as e:
            print(f"Failed to download {topic}: {e}")
    conn.commit()
```

### Step 3: Generate Embeddings with Ollama

```python
import struct

def get_embedding(text, model='nomic-embed-text'):
    """Get embedding vector from Ollama."""
    response = requests.post('http://localhost:11434/api/embeddings',
                             json={'model': model, 'prompt': text})
    return response.json()['embedding']

def embedding_to_blob(embedding):
    return struct.pack(f'{len(embedding)}f', *embedding)

def blob_to_embedding(blob):
    n = len(blob) // 4
    return list(struct.unpack(f'{n}f', blob))

def embed_all_documents(conn, chunk_size=500):
    """Generate embeddings for all documents in the database."""
    cursor = conn.execute('SELECT id, content FROM documents')
    for doc_id, content in cursor.fetchall():
        words = content.split()
        for i in range(0, len(words), chunk_size):
            chunk = ' '.join(words[i:i + chunk_size])
            if len(chunk.strip()) < 20:
                continue
            emb = get_embedding(chunk)
            conn.execute(
                'INSERT INTO embeddings (doc_id, chunk_text, embedding, chunk_index) VALUES (?, ?, ?, ?)',
                (doc_id, chunk, embedding_to_blob(emb), i // chunk_size))
    conn.commit()
```
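
The struct helpers above are easy to sanity-check without a running model: values that are exactly representable in float32 survive the pack/unpack round trip unchanged.

```python
import struct

# Same packing scheme as embedding_to_blob / blob_to_embedding above
def to_blob(vec):
    return struct.pack(f'{len(vec)}f', *vec)

def from_blob(blob):
    return list(struct.unpack(f'{len(blob) // 4}f', blob))

vec = [0.5, -2.0, 3.25]  # exactly representable as float32
assert from_blob(to_blob(vec)) == vec
```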

### Step 4: Offline Vector Search

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0

def search_knowledge(query, conn, top_k=5):
    """Search the knowledge base using vector similarity."""
    query_emb = get_embedding(query)
    cursor = conn.execute('SELECT doc_id, chunk_text, embedding FROM embeddings')
    results = []
    for row in cursor.fetchall():
        sim = cosine_similarity(query_emb, blob_to_embedding(row[2]))
        results.append({'chunk_text': row[1], 'doc_id': row[0], 'similarity': sim})
    results.sort(key=lambda x: x['similarity'], reverse=True)
    return results[:top_k]
```

### Step 5: RAG with Local LLM

```python
def ask_offline(question, conn, model='llama3.1:8b'):
    """Answer questions using local RAG pipeline."""
    results = search_knowledge(question, conn, top_k=3)
    context = '\n\n'.join([r['chunk_text'] for r in results])
    response = requests.post('http://localhost:11434/api/generate', json={
        'model': model,
        'prompt': f"""Answer using ONLY the context provided.
If the context doesn't contain the answer, say "I don't have information about that."

Context:
{context}

Question: {question}
Answer:""",
        'stream': False
    })
    return {
        'answer': response.json()['response'],
        'sources': [r['chunk_text'][:100] for r in results],
        'model': model
    }
```
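
For testability it can help to split the prompt assembly out of `ask_offline` into a pure function. This is a refactoring sketch (`build_rag_prompt` is not part of the original code); it reuses the same template so the grounded-answer behavior is unchanged.

```python
def build_rag_prompt(question, chunks):
    """Assemble the grounded prompt used by ask_offline (same template)."""
    context = '\n\n'.join(chunks)
    return (
        "Answer using ONLY the context provided.\n"
        'If the context doesn\'t contain the answer, say "I don\'t have information about that."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Pure functions like this can be unit-tested with no Ollama server running, which fits the "test offline before deploying" guideline below.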

## Examples

### Example 1: Build an Offline Field Research Assistant

A wildlife researcher prepares an offline AI assistant before a 2-week trip to a remote area with no connectivity:

```python
# While online: download knowledge and build embeddings
conn = init_knowledge_db('field_research.db')

# Load species identification guides and park documentation
download_wikipedia_articles([
    'Grizzly_bear', 'Gray_wolf', 'Elk', 'Moose',
    'Yellowstone_National_Park', 'Wildlife_tracking',
    'Bear_safety', 'GPS_navigation'
], conn)

# Ingest local field manuals (markdown files on laptop)
for root, _, files in os.walk('./field-manuals'):
    for f in files:
        if f.endswith('.md'):
            with open(os.path.join(root, f)) as fh:
                conn.execute('INSERT INTO documents (title, source, content, category) VALUES (?,?,?,?)',
                             (f, os.path.join(root, f), fh.read(), 'field-manual'))
conn.commit()
embed_all_documents(conn)

# In the field (fully offline):
result = ask_offline("What are the signs of a nearby grizzly bear den?", conn)
# Answer: "Look for excavated hillside entrances, claw marks on nearby trees,
#  matted vegetation, and a strong musky odor. Dens are typically on north-facing
#  slopes at elevations above 6,000 feet..."
```

### Example 2: Air-Gapped Developer Documentation Server

A defense contractor sets up an offline coding assistant for a secure facility with no internet:

```python
conn = init_knowledge_db('dev_docs.db')

# Pre-load language and framework documentation
import os
for doc_dir in ['./docs/python-stdlib', './docs/react-docs', './docs/kubernetes']:
    for root, _, files in os.walk(doc_dir):
        for f in files:
            if f.endswith(('.md', '.txt', '.rst')):
                path = os.path.join(root, f)
                with open(path, 'r', errors='ignore') as fh:
                    conn.execute('INSERT INTO documents (title,source,content,category) VALUES (?,?,?,?)',
                                 (f, path, fh.read(), 'dev-docs'))
conn.commit()
embed_all_documents(conn)

# Developer queries the system (no internet needed):
result = ask_offline("How do I create a Kubernetes CronJob that runs every 6 hours?", conn)
# Answer: "Create a CronJob manifest with schedule '0 */6 * * *' and specify
#  your container image in the jobTemplate spec. Set restartPolicy to OnFailure..."
print(result['sources'])  # Shows which doc chunks were used as context
```

## Guidelines

- **Download everything while online** — models, knowledge content, and embeddings must be prepared beforehand
- **Test offline before deploying** — disconnect WiFi and verify the full pipeline works end-to-end
- **Choose models by hardware** — phi3:mini for 4GB RAM devices, llama3.1:8b for 8GB+, llama3.1:70b for workstations
- **Use FTS as fallback** — SQLite full-text search works when embeddings are unavailable or for exact matches
- **Package for portability** — bundle everything on a USB drive or Docker image for easy deployment
- **Keep knowledge fresh** — sync new content and re-embed when connectivity returns
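
The FTS fallback mentioned above can be sketched against the `fts_documents` table from Step 2 (it must be populated alongside `documents`). Note that the `MATCH` argument uses FTS5 query syntax, so raw user input should be sanitized or quoted first:

```python
import sqlite3

def fts_search(query, conn, top_k=5):
    """Keyword fallback when the embedding model is unavailable.
    NOTE: `query` is interpreted as FTS5 MATCH syntax; quote or
    sanitize raw user input before passing it through."""
    cur = conn.execute(
        "SELECT title, snippet(fts_documents, 1, '[', ']', '…', 10) "
        "FROM fts_documents WHERE fts_documents MATCH ? "
        "ORDER BY rank LIMIT ?",
        (query, top_k))
    return cur.fetchall()
```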

## References

- [Ollama](https://ollama.ai/) — local LLM runtime
- [SQLite FTS5](https://www.sqlite.org/fts5.html) — full-text search
- [PWA docs](https://web.dev/progressive-web-apps/) — offline-first web apps
