
earllm-build

Build, maintain, and extend the EarLLM One Android project — a Kotlin/Compose app that connects Bluetooth earbuds to an LLM via voice pipeline.

31,392 stars
Complexity: easy

About this skill

The `earllm-build` skill enables an AI agent to directly interact with and manage the development lifecycle of the EarLLM One Android project. This includes tasks such as generating new Kotlin/Compose code for features, refactoring existing components, debugging issues, and ensuring the application effectively connects Bluetooth earbuds to a Large Language Model (LLM) for voice-based interactions. It's designed for agents capable of code generation and project manipulation, facilitating rapid iteration and maintenance of this specialized mobile application and its voice pipeline.

Best use case

Automating various development tasks for the EarLLM One Android app, such as implementing new features, fixing bugs, refactoring code, or extending the voice pipeline integration between Bluetooth earbuds and an LLM.


The expected outcome is successful compilation, modification, or extension of the EarLLM One Android project code, including generated features, bug fixes, or architectural improvements. The agent may provide code snippets, revised file structures, or summaries of proposed changes ready for review and application.

Practical example

Example input

I need to add a new voice command recognition module to the EarLLM One app that can trigger specific LLM prompts. Can you draft the Kotlin/Compose code for this feature?

Example output

I've drafted the necessary Kotlin/Compose code for a new `VoiceCommandRecognitionModule` within the EarLLM One project. This module integrates with the existing voice pipeline and includes a basic intent recognition system. Here's the proposed file structure and a snippet of the core `VoiceCommandProcessor` class with a placeholder for LLM prompt mapping. Please review before I apply the changes:

```kotlin
// app/src/main/java/com/earllm/VoiceCommandRecognitionModule.kt
package com.earllm

import androidx.compose.runtime.Composable
// ... other imports

class VoiceCommandRecognitionModule(private val llmService: LLMService) {
    @Composable
    fun CommandRecognitionUI() {
        // ... Jetpack Compose UI for command recognition status
    }

    fun processVoiceInput(audioData: ByteArray): String {
        // Placeholder transcription; replace with the project's real STT call
        val recognizedText = "simulated transcription of audioData"
        return when {
            recognizedText.contains("summarize") -> llmService.generatePrompt("Summarize the last conversation.")
            recognizedText.contains("remind me") -> llmService.generatePrompt("Create a reminder.")
            else -> llmService.processNaturalQuery(recognizedText)
        }
    }
}
```

I recommend integrating this module into your existing `MainActivity` or a dedicated `VoiceController` ViewModel.

When to use this skill

  • When the user explicitly mentions 'earllm', 'earbudllm', or 'earbud app' in the context of development.
  • When the user needs to develop, modify, or debug a Kotlin/Compose Android application that integrates Bluetooth audio with an LLM.
  • When discussing specific tasks related to the EarLLM One project, such as adding voice commands, improving audio processing, or enhancing LLM interaction.
  • When the user asks for assistance with 'voice pipeline Kotlin' or similar development-centric queries for this project.

When not to use this skill

  • For projects unrelated to the EarLLM One Android application.
  • When the development task is not related to Android, Kotlin, or Jetpack Compose.
  • When working with LLM integrations that do not involve Bluetooth earbuds or voice pipelines.
  • When physical hardware debugging or direct device interaction beyond an AI agent's capabilities is required.

Installation

Claude Code / Cursor / Codex

```bash
curl -o ~/.claude/skills/earllm-build/SKILL.md --create-dirs "https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/plugins/antigravity-awesome-skills-claude/skills/earllm-build/SKILL.md"
```

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/earllm-build/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How earllm-build Compares

| Feature / Agent | earllm-build | Standard Approach |
|-----------------|--------------|-------------------|
| Platform Support | Claude, Cursor, Gemini, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | easy | N/A |

Frequently Asked Questions

What does this skill do?

Build, maintain, and extend the EarLLM One Android project — a Kotlin/Compose app that connects Bluetooth earbuds to an LLM via voice pipeline.

Which AI agents support this skill?

This skill is designed for Claude, Cursor, Gemini, Codex.

How difficult is it to install?

The installation complexity is rated as easy. You can find the installation instructions above.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# EarLLM One — Build & Maintain

## Overview

Build, maintain, and extend the EarLLM One Android project — a Kotlin/Compose app that connects Bluetooth earbuds to an LLM via voice pipeline.

## When to Use This Skill

- When the user mentions "earllm" or related topics
- When the user mentions "earbudllm" or related topics
- When the user mentions "earbud app" or related topics
- When the user mentions "voice pipeline kotlin" or related topics
- When the user mentions "bluetooth audio android" or related topics
- When the user mentions "sco microphone" or related topics

## Do Not Use This Skill When

- The task is unrelated to earllm build
- A simpler, more specific tool can handle the request
- The user needs general-purpose assistance without domain expertise

## How It Works

EarLLM One is a multi-module Android app (Kotlin + Jetpack Compose) that captures voice from Bluetooth earbuds, transcribes it, sends it to an LLM, and speaks the response back.

## Project Location

`C:\Users\renat\earbudllm`

## Module Dependency Graph

```
app ──→ voice ──→ audio ──→ core-logging
  │       │
  ├──→ bluetooth ──→ core-logging
  └──→ llm ──→ core-logging
```

## Modules And Key Files

| Module | Purpose | Key Files |
|--------|---------|-----------|
| **core-logging** | Structured logging, performance tracking | `EarLogger.kt`, `PerformanceTracker.kt` |
| **bluetooth** | BT discovery, pairing, A2DP/HFP profiles | `BluetoothController.kt`, `BluetoothState.kt`, `BluetoothPermissions.kt` |
| **audio** | Audio routing (SCO/BLE), capture, headset buttons | `AudioRouteController.kt`, `VoiceCaptureController.kt`, `HeadsetButtonController.kt` |
| **voice** | STT (SpeechRecognizer + Vosk stub), TTS, pipeline | `SpeechToTextController.kt`, `TextToSpeechController.kt`, `VoicePipeline.kt` |
| **llm** | LLM interface, stub, OpenAI-compatible client | `LlmClient.kt`, `StubLlmClient.kt`, `RealLlmClient.kt`, `SecureTokenStore.kt` |
| **app** | UI, ViewModel, Service, Settings, all screens | `MainViewModel.kt`, `EarLlmForegroundService.kt`, 6 Compose screens |

## Build Configuration

- **SDK**: minSdk 26, targetSdk 34, compileSdk 34
- **Build tools**: AGP 8.2.2, Kotlin 1.9.22, Gradle 8.5
- **Compose BOM**: 2024.02.00
- **Key deps**: OkHttp, AndroidX Security (EncryptedSharedPreferences), DataStore, Media

## Target Hardware

| Device | Model | Key Details |
|--------|-------|-------------|
| Phone | Samsung Galaxy S24 Ultra | Android 14, One UI 6.1, Snapdragon 8 Gen 3 |
| Earbuds | Xiaomi Redmi Buds 6 Pro | BT 5.3, A2DP/HFP/AVRCP, ANC, LDAC |

## Critical Technical Facts

These are verified facts from official documentation and device testing. Treat them as ground truth when making decisions:

1. **Bluetooth SCO is limited to 8kHz mono input** on most devices. Some support 16kHz mSBC. BLE Audio (Android 12+, `TYPE_BLE_HEADSET = 26`) supports up to 32kHz stereo. Always prefer BLE Audio when available.

2. **`startBluetoothSco()` is deprecated since Android 12 (API 31).** Use `AudioManager.setCommunicationDevice(AudioDeviceInfo)` and `clearCommunicationDevice()` instead. The project already implements both paths in `AudioRouteController.kt`.

3. **Samsung One UI 7/8 has a known HFP corruption bug** where A2DP playback corrupts the SCO link. The app handles this with silence detection and automatic fallback to the phone's built-in mic.

4. **Redmi Buds 6 Pro tap controls must be set to "Default" (Play/Pause)** in the Xiaomi Earbuds companion app. If set to ANC or custom functions, events are handled internally by the earbuds and never reach Android.

5. **Android 14+ requires `FOREGROUND_SERVICE_MICROPHONE` permission** and `foregroundServiceType="microphone"` in the service declaration. `RECORD_AUDIO` must be granted before `startForeground()`.

6. **`VOICE_COMMUNICATION` audio source enables AEC** (Acoustic Echo Cancellation), which is critical to prevent TTS audio output from feeding back into the STT microphone input. Never change this source without understanding the echo implications.

7. **Never play TTS (A2DP) while simultaneously recording via SCO.** The correct sequence is: stop playback → switch to HFP → record → switch to A2DP → play response.
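Fact 2's modern-vs-legacy routing split can be sketched as below. This is an illustrative branch only, not the project's actual `AudioRouteController.kt` code, and it assumes the caller already holds the required Bluetooth and audio permissions:

```kotlin
import android.media.AudioDeviceInfo
import android.media.AudioManager
import android.os.Build

// Route voice input to the Bluetooth headset mic, preferring the
// non-deprecated API 31+ path. Returns true if routing succeeded.
fun routeToHeadsetMic(audioManager: AudioManager): Boolean {
    return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
        // API 31+: select a SCO or BLE headset device explicitly
        val device = audioManager.availableCommunicationDevices.firstOrNull {
            it.type == AudioDeviceInfo.TYPE_BLUETOOTH_SCO ||
            it.type == AudioDeviceInfo.TYPE_BLE_HEADSET
        } ?: return false
        audioManager.setCommunicationDevice(device)
    } else {
        // Legacy path, deprecated since API 31
        @Suppress("DEPRECATION")
        audioManager.startBluetoothSco()
        true
    }
}
```

Per fact 1, preferring `TYPE_BLE_HEADSET` over SCO when both appear in the device list gives the higher-quality input path.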

## Data Flow

```
Headset button tap
  → MediaSession (HeadsetButtonController)
  → TapAction.RECORD_TOGGLE
  → VoicePipeline.toggleRecording()
  → VoiceCaptureController captures PCM (16kHz mono)
  → stopRecording() returns ByteArray
  → SpeechToTextController.transcribe(pcmData)
  → LlmClient.chat(messages)
  → TextToSpeechController.speak(response)
  → Audio output via A2DP to earbuds
```

## Adding A New Feature

1. Identify which module(s) are affected
2. Read existing code in those modules first
3. Follow the StateFlow pattern — expose state via `MutableStateFlow` / `StateFlow`
4. Update `MainViewModel.kt` if the feature needs UI integration
5. Add unit tests in the module's `src/test/` directory
6. Update docs if the feature changes behavior

## Modifying Audio Capture

- `VoiceCaptureController.kt` handles PCM recording at 16kHz mono
- WAV headers use hex byte values (not char literals) to avoid shell quoting issues
- VU meter: RMS calculation → dB conversion → normalized 0-1 range
- Buffer size: `getMinBufferSize().coerceAtLeast(4096)`

## Changing Bluetooth Behavior

- `BluetoothController.kt` manages discovery, pairing, profile proxies
- Earbuds detection uses name heuristics: "buds", "earbuds", "tws", "pods", "ear"
- Always handle both Bluetooth Classic and BLE Audio paths

## Modifying The LLM Integration

- `LlmClient.kt` defines the interface — keep it generic
- `StubLlmClient.kt` for offline testing (500ms simulated delay)
- `RealLlmClient.kt` uses OkHttp to call OpenAI-compatible APIs
- API keys stored in `SecureTokenStore.kt` (EncryptedSharedPreferences)

## Generating A Build Artifact

After code changes, regenerate the ZIP:
```powershell
# From the project root
powershell -Command "Remove-Item 'EarLLM_One_v1.0.zip' -Force -ErrorAction SilentlyContinue; Compress-Archive -Path (Get-ChildItem -Exclude '*.zip','_zip_verify','.git') -DestinationPath 'EarLLM_One_v1.0.zip' -Force"
```

## Running Tests

```bash
./gradlew test --stacktrace          # Unit tests
./gradlew connectedAndroidTest       # Instrumented tests (device required)
```

## Phase 2 Roadmap

- Real-time streaming voice conversation with LLM through earbuds
- Smart assistant: categorize speech into meetings, shopping lists, memos, emails
- Vosk offline STT integration (currently stubbed)
- Wake-word detection to avoid keeping SCO open continuously
- Streaming TTS (Android built-in TTS does NOT support streaming)

## STT Engine Reference

| Engine | Size | WER | Streaming | Best For |
|--------|------|-----|-----------|----------|
| Vosk small-en | 40 MB | ~10% | Yes | Real-time mobile |
| Vosk lgraph | 128 MB | ~8% | Yes | Better accuracy |
| Whisper tiny | 40 MB | ~10-12% | No (batch) | Post-utterance polish |
| Android SpeechRecognizer | 0 MB | varies | Yes | Online, no extra deps |

## Best Practices

- Provide clear, specific context about your project and requirements
- Review all suggestions before applying them to production code
- Combine with other complementary skills for comprehensive analysis

## Common Pitfalls

- Using this skill for tasks outside its domain expertise
- Applying recommendations without understanding your specific context
- Not providing enough project context for accurate analysis

Related Skills

ios-developer

31392
from sickn33/antigravity-awesome-skills

Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization.

Mobile DevelopmentClaude

ios-debugger-agent

31392
from sickn33/antigravity-awesome-skills

Debug the current iOS project on a booted simulator with XcodeBuildMCP.

Mobile DevelopmentClaude

expo-tailwind-setup

31392
from sickn33/antigravity-awesome-skills

Set up Tailwind CSS v4 in Expo with react-native-css and NativeWind v5 for universal styling

Mobile DevelopmentClaude

expo-deployment

31392
from sickn33/antigravity-awesome-skills

Deploy Expo apps to production

Mobile DevelopmentClaude

expo-api-routes

31392
from sickn33/antigravity-awesome-skills

Guidelines for creating API routes in Expo Router with EAS Hosting

Mobile DevelopmentClaude

liquid-glass-design

144923
from affaan-m/everything-claude-code

iOS 26 Liquid Glass design system — dynamic glass materials for SwiftUI, UIKit, and WidgetKit, with blur, reflection, and interactive morphing effects.

Mobile DevelopmentClaude

mcp-builder-ms

31392
from sickn33/antigravity-awesome-skills

Use this skill when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

Developer ToolsClaude

hugging-face-tool-builder

31392
from sickn33/antigravity-awesome-skills

Your purpose is now to create reusable command-line scripts and utilities for the Hugging Face API, allowing chaining, piping, and intermediate processing where helpful. You can access the API directly, as well as use the hf command line tool.

Developer ToolsClaude

build

31392
from sickn33/antigravity-awesome-skills

build

Project ManagementClaude

bazel-build-optimization

31392
from sickn33/antigravity-awesome-skills

Optimize Bazel builds for large-scale monorepos. Use when configuring Bazel, implementing remote execution, or optimizing build performance for enterprise codebases.

DevOps ToolsClaude

nft-standards

31392
from sickn33/antigravity-awesome-skills

Master ERC-721 and ERC-1155 NFT standards, metadata best practices, and advanced NFT features.

Web3 & BlockchainClaude

nextjs-app-router-patterns

31392
from sickn33/antigravity-awesome-skills

Comprehensive patterns for Next.js 14+ App Router architecture, Server Components, and modern full-stack React development.

Web FrameworksClaude