google-ai-studio
Google AI Studio and Gemini API for multimodal AI. Use when you need multimodal AI (text + image + video + audio), long context up to 1M tokens, code generation with Gemini, grounding with Google Search, or structured output with response schemas.
Best use case
google-ai-studio is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using google-ai-studio should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/google-ai-studio/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Google AI Studio and Gemini API for multimodal AI. Use when you need multimodal AI (text + image + video + audio), long context up to 1M tokens, code generation with Gemini, grounding with Google Search, or structured output with response schemas.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Google AI Studio — Gemini API
## Overview
Google AI Studio provides access to the Gemini family of models via API. Gemini 2.0 Flash is Google's fastest model for high-frequency tasks; Gemini 1.5 Pro supports context windows of up to 2 million tokens and handles images, audio, video, and PDFs natively. The API supports grounding with Google Search, structured JSON output, and streaming.
## Setup
```bash
# Python
pip install google-generativeai
# Node.js
npm install @google/generative-ai
```
```bash
export GOOGLE_API_KEY=AIza...
```
Get your API key from [Google AI Studio](https://aistudio.google.com/apikey).
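If `api_key` is omitted, the client libraries fall back to the `GOOGLE_API_KEY` environment variable. To fail fast with a clear message when the key is missing, you can check the variable yourself; a small sketch (the helper name is ours, not part of the SDK):

```python
import os

def require_api_key(var: str = "GOOGLE_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; create a key at https://aistudio.google.com/apikey"
        )
    return key
```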
## Available Models
| Model | Context | Best For |
|---|---|---|
| `gemini-2.0-flash` | 1M tokens | Fast, cost-efficient, high-volume |
| `gemini-2.0-flash-thinking-exp` | 1M tokens | Complex reasoning with thoughts |
| `gemini-1.5-pro` | 2M tokens | Longest context, complex tasks |
| `gemini-1.5-flash` | 1M tokens | Balanced speed and capability |
| `text-embedding-004` | 2,048 tokens (input) | Text embeddings |
## Instructions
### Basic Text Generation
```python
import google.generativeai as genai
genai.configure(api_key="AIza...") # or reads GOOGLE_API_KEY
model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Explain neural networks in one paragraph.")
print(response.text)
```
### Multi-Turn Chat
```python
import google.generativeai as genai
genai.configure(api_key="AIza...")
model = genai.GenerativeModel(
    model_name="gemini-2.0-flash",
    system_instruction="You are a Python expert. Always show working code examples.",
)
chat = model.start_chat()
response = chat.send_message("How do I read a CSV with pandas?")
print(response.text)
response = chat.send_message("Now show me how to filter rows where age > 30.")
print(response.text)
```
### Image Analysis
```python
import google.generativeai as genai
import PIL.Image
genai.configure(api_key="AIza...")
model = genai.GenerativeModel("gemini-2.0-flash")
# From local file
image = PIL.Image.open("screenshot.png")
response = model.generate_content(["What's in this image? List all visible text.", image])
print(response.text)
# From URL (inline data)
import httpx
import base64
img_data = httpx.get("https://example.com/chart.png").content
image_part = {"mime_type": "image/png", "data": base64.b64encode(img_data).decode()}
response = model.generate_content(["Analyze this chart:", image_part])
print(response.text)
```
### PDF Processing
```python
import google.generativeai as genai
import pathlib
genai.configure(api_key="AIza...")
model = genai.GenerativeModel("gemini-1.5-pro")
# Upload a PDF file
pdf_file = genai.upload_file(
    path="report.pdf",
    mime_type="application/pdf",
    display_name="Annual Report 2024",
)
response = model.generate_content([
    "Summarize the key financial metrics from this report.",
    pdf_file,
])
print(response.text)
# Inline PDF (smaller files)
pdf_bytes = pathlib.Path("document.pdf").read_bytes()
import base64
pdf_part = {"mime_type": "application/pdf", "data": base64.b64encode(pdf_bytes).decode()}
response = model.generate_content(["Extract all dates and deadlines:", pdf_part])
print(response.text)
```
### Streaming
```python
import google.generativeai as genai
genai.configure(api_key="AIza...")
model = genai.GenerativeModel("gemini-2.0-flash")
for chunk in model.generate_content("Write a short story about AI.", stream=True):
    print(chunk.text, end="", flush=True)
print()
```
### Structured Output with Response Schema
```python
import google.generativeai as genai
import json
genai.configure(api_key="AIza...")
model = genai.GenerativeModel(
    model_name="gemini-2.0-flash",
    generation_config={
        "response_mime_type": "application/json",
        "response_schema": {
            "type": "object",
            "properties": {
                "companies": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "founded": {"type": "integer"},
                            "country": {"type": "string"},
                        },
                        "required": ["name", "founded", "country"],
                    },
                }
            },
        },
    },
)
response = model.generate_content(
    "List 3 major AI companies with their founding year and country."
)
data = json.loads(response.text)
print(data)
```
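Even with a response schema, the decoded JSON is worth validating before downstream use. A minimal hand-rolled check for the schema above (a sketch; a validation library such as Pydantic would also work):

```python
def validate_companies(data: dict) -> dict:
    """Raise ValueError unless data matches the expected schema shape."""
    if not isinstance(data, dict) or not isinstance(data.get("companies"), list):
        raise ValueError("expected an object with a 'companies' array")
    for c in data["companies"]:
        if not isinstance(c.get("name"), str):
            raise ValueError(f"bad 'name' in {c!r}")
        if not isinstance(c.get("founded"), int):
            raise ValueError(f"bad 'founded' in {c!r}")
        if not isinstance(c.get("country"), str):
            raise ValueError(f"bad 'country' in {c!r}")
    return data
```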
### Grounding with Google Search
```python
import google.generativeai as genai
from google.generativeai import types
genai.configure(api_key="AIza...")
model = genai.GenerativeModel("gemini-2.0-flash")
# Enable Google Search grounding
response = model.generate_content(
    "What are the latest AI research papers published this week?",
    tools=[types.Tool(google_search=types.GoogleSearch())],
)
print(response.text)
# Check grounding metadata
if response.candidates[0].grounding_metadata:
    # grounding_chunks lists the web sources used to ground the response
    for chunk in response.candidates[0].grounding_metadata.grounding_chunks or []:
        print(f"Source: {chunk.web.uri}")
```
### Function Calling
```python
import google.generativeai as genai
genai.configure(api_key="AIza...")
def get_product_info(product_id: str) -> dict:
    """Simulated product lookup."""
    return {"id": product_id, "name": "Widget Pro", "price": 49.99, "in_stock": True}

model = genai.GenerativeModel(
    model_name="gemini-2.0-flash",
    tools=[get_product_info],  # pass Python functions directly
)
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("What's the price and availability of product P123?")
print(response.text)
# Gemini automatically calls get_product_info("P123") and incorporates the result
```
### Long Context — Process Entire Codebase
```python
import google.generativeai as genai
import pathlib
genai.configure(api_key="AIza...")
model = genai.GenerativeModel("gemini-1.5-pro") # 2M token context
# Read entire codebase into context
files = list(pathlib.Path("./src").rglob("*.py"))
code_content = "\n\n".join(
    f"# File: {f}\n{f.read_text()}" for f in files
)
response = model.generate_content([
    "Analyze this codebase and identify security vulnerabilities:",
    code_content,
])
print(response.text)
```
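Whole-codebase prompts can overflow even a 2M-token window, so it helps to check size before sending. The SDK's `model.count_tokens(...)` gives an exact count (it calls the API); a rough offline heuristic of ~4 characters per token (an approximation, not the real tokenizer) is enough for a quick pre-flight check:

```python
def rough_token_estimate(text: str) -> int:
    """Approximate token count assuming ~4 characters per token (heuristic only)."""
    return max(1, len(text) // 4)

# Pre-flight check before calling the API (2M-token limit for gemini-1.5-pro):
# if rough_token_estimate(code_content) > 2_000_000:
#     raise ValueError("codebase too large for one request; split into chunks")
# Exact count via the API:
# print(model.count_tokens(code_content).total_tokens)
```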
### Text Embeddings
```python
import google.generativeai as genai
genai.configure(api_key="AIza...")
# Single embedding
result = genai.embed_content(
    model="text-embedding-004",
    content="Machine learning transforms industries.",
    task_type="retrieval_document",
)
print(f"Embedding dim: {len(result['embedding'])}") # 768
# Batch embeddings
texts = ["Hello world", "Machine learning", "AI systems"]
result = genai.embed_content(
    model="text-embedding-004",
    content=texts,
    task_type="retrieval_document",
)
embeddings = result["embedding"] # List of 768-dim vectors
```
## Task Types for Embeddings
| Task Type | Use When |
|---|---|
| `retrieval_document` | Embedding documents to be retrieved |
| `retrieval_query` | Embedding search queries |
| `semantic_similarity` | Comparing text similarity |
| `classification` | Text classification tasks |
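Pairing `retrieval_query` with `retrieval_document` is the typical search setup: embed documents once, embed each query at search time, and rank by cosine similarity. A plain-Python sketch of the ranking step (the commented `embed_content` calls require a valid API key; the document strings are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# docs = ["Gemini handles PDFs natively.", "Embeddings are 768-dimensional."]
# doc_vecs = genai.embed_content(model="text-embedding-004", content=docs,
#                                task_type="retrieval_document")["embedding"]
# q_vec = genai.embed_content(model="text-embedding-004",
#                             content="What size are the embeddings?",
#                             task_type="retrieval_query")["embedding"]
# best = max(range(len(docs)), key=lambda i: cosine_similarity(q_vec, doc_vecs[i]))
```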
## Guidelines
- `gemini-2.0-flash` is the best default for most tasks — fast, cheap, and capable.
- Use `gemini-1.5-pro` only when you need >1M token context or maximum quality.
- Automatic function calling simplifies tool use — pass Python functions directly to `tools=`.
- Always specify `response_mime_type: "application/json"` with `response_schema` for structured output.
- Google Search grounding adds latency but ensures responses reflect current web information.
- The File API supports uploading files up to 2GB; uploaded files are retained for 48 hours.
- Rate limits on the free tier are low (~15 RPM) — use an API key with billing for production.
Related Skills
lm-studio-subagents
Offload tasks to local LLMs via LM Studio. Use when a user asks to run local models with LM Studio, save API costs by using local LLMs, create subagents with local models, offload summarization or classification to a local model, or use LM Studio's API for batch processing. Covers local model inference, task delegation, and cost optimization.
label-studio
Open-source data labeling and annotation platform for ML projects. Supports text, image, audio, video, and time-series data. Features configurable labeling interfaces, ML-assisted labeling, team collaboration, and API integration for automated workflows.
google-indexing
Submit URLs to Google for indexing using the Google Indexing API and bulk-submit from sitemaps. Use when a user asks to index pages on Google, submit URLs to Google Search Console, speed up Google indexing, request crawling, bulk index pages, submit a sitemap's URLs for indexing, or check indexing status. Also use when the user mentions "Google Indexing API", "request indexing", "submit to Google", or "pages not indexed".
drizzle-studio
Explore and manage databases with Drizzle Studio. Use when a user asks to browse database contents visually, inspect tables and data, run ad-hoc queries, manage database records through a GUI, debug database issues, or use a lightweight alternative to pgAdmin or DBeaver. Covers setup with Drizzle ORM, standalone usage, data browsing, filtering, and inline editing.