nano-banana-pro
Generate images with Google's Nano Banana Pro (Gemini 3 Pro Image). Use when generating AI images via Gemini API, creating professional visuals, or building image generation features. Triggers on Nano Banana Pro, Gemini 3 Pro Image, gemini-3-pro-image-preview, Google image generation.
Best use case
nano-banana-pro is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using nano-banana-pro should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/nano-banana-pro/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It provides a reusable workflow for generating images with Google's Nano Banana Pro (Gemini 3 Pro Image, `gemini-3-pro-image-preview`) via the Gemini API: creating professional visuals, editing images iteratively, and building image generation features into applications.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Nano Banana Pro (Gemini 3 Pro Image)
Generate high-quality images with Google's Gemini 3 Pro Image API.
## Overview
**Nano Banana Pro** is the marketing name for **Gemini 3 Pro Image** (`gemini-3-pro-image-preview`), Google's state-of-the-art image generation and editing model built on Gemini 3 Pro.
## Quick Start
### Get API Key
1. Go to [Google AI Studio](https://aistudio.google.com)
2. Click "Get API Key"
3. Store securely as environment variable
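The "store as environment variable" step can be sketched as a small helper that fails loudly when the key is missing (the `load_api_key` function name is illustrative, not part of any SDK):

```python
import os

def load_api_key(var: str = "GEMINI_API_KEY") -> str:
    """Read the Gemini API key from the environment; raise if it is not set."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable (key from Google AI Studio).")
    return key
```

Reading the key from the environment keeps it out of source control and matches the `$GEMINI_API_KEY` usage in the cURL example below.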
### Basic Image Generation (Python)
```python
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents="A serene Japanese garden with cherry blossoms and a koi pond",
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE']
    )
)

# Process response: each part carries either text or inline image data.
# Check the field values, not hasattr() -- Part objects define both attributes.
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(f"Description: {part.text}")
    elif part.inline_data is not None:
        # The Python SDK returns raw image bytes (the REST API returns base64)
        image_data = part.inline_data.data       # raw bytes
        mime_type = part.inline_data.mime_type   # e.g. image/png
        with open("output.png", "wb") as f:
            f.write(image_data)
```
### REST API (cURL)
```bash
curl -s -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-image-preview:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "role": "user",
      "parts": [{"text": "Create a vibrant infographic about photosynthesis"}]
    }],
    "generationConfig": {
      "responseModalities": ["TEXT", "IMAGE"]
    }
  }'
```
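In the REST response, image bytes come back base64-encoded under camelCase keys (`inlineData`, `mimeType`). A minimal stdlib sketch for pulling them out of the parsed JSON (the `extract_images` helper is an assumption for illustration, not part of any SDK):

```python
import base64

def extract_images(response_json: dict) -> list[tuple[str, bytes]]:
    """Return (mime_type, raw_bytes) pairs from a generateContent REST response.

    Note: the REST JSON wire format uses camelCase keys (inlineData, mimeType),
    unlike the Python SDK's snake_case attributes.
    """
    images = []
    for candidate in response_json.get("candidates", []):
        for part in candidate.get("content", {}).get("parts", []):
            blob = part.get("inlineData")
            if blob:
                images.append((blob.get("mimeType", "image/png"),
                               base64.b64decode(blob["data"])))
    return images
```

Feed it the dict produced by parsing the cURL output (e.g. `json.loads(...)`) and write the returned bytes straight to disk.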
### TypeScript/JavaScript
```typescript
const GEMINI_API_KEY = process.env.GEMINI_API_KEY;

async function generateImage(prompt: string) {
  const response = await fetch(
    'https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-image-preview:generateContent',
    {
      method: 'POST',
      headers: {
        'x-goog-api-key': GEMINI_API_KEY!,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        contents: [{
          role: 'user',
          parts: [{ text: prompt }]
        }],
        generationConfig: {
          responseModalities: ['TEXT', 'IMAGE'],
        },
      }),
    }
  );
  const data = await response.json();
  return data;
}
```
## Configuration Options
### Image Configuration
```python
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents="Professional product photo of a coffee mug",
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        image_config=types.ImageConfig(
            aspect_ratio="16:9",  # Options: 1:1, 3:2, 16:9, 9:16, 21:9
            image_size="2K"       # Options: 1K, 2K, 4K
        )
    )
)
```
### With Google Search Grounding
```python
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents="Create an infographic showing today's stock market trends",
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        tools=[{"google_search": {}}]  # Enable search grounding
    )
)
```
## Multi-Turn Conversations (Iterative Editing)
```python
# Create a chat session
chat = client.chats.create(
    model="gemini-3-pro-image-preview",
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        tools=[{"google_search": {}}]
    )
)

# Initial generation
response1 = chat.send_message(
    "Create a vibrant infographic explaining photosynthesis"
)

# Edit the image in a follow-up turn
response2 = chat.send_message(
    "Update this infographic to be in Spanish. Keep all other elements the same."
)
```
## Key Capabilities
### 1. Superior Text Rendering
```python
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents="""Create a professional poster with:
    - Title: "Annual Tech Summit 2025"
    - Date: March 15-17, 2025
    - Location: San Francisco Convention Center
    """,
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE']
    )
)
```
### 2. Character Consistency (Up to 5 Subjects)
```python
from google.genai import types

def load_image_part(path: str) -> types.Part:
    # Wrap raw image bytes in a Part; the SDK handles base64 encoding on the wire
    with open(path, "rb") as f:
        return types.Part.from_bytes(data=f.read(), mime_type="image/png")

character_ref = load_image_part("character.png")

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=[
        "Generate an image of this person at a tech conference",
        character_ref,
    ],
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE']
    )
)
```
## Next.js API Route
```typescript
// app/api/generate-image/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function POST(request: NextRequest) {
  const { prompt, aspectRatio = '1:1', imageSize = '2K' } = await request.json();
  try {
    const response = await fetch(
      'https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-image-preview:generateContent',
      {
        method: 'POST',
        headers: {
          'x-goog-api-key': process.env.GEMINI_API_KEY!,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          contents: [{ role: 'user', parts: [{ text: prompt }] }],
          generationConfig: {
            responseModalities: ['TEXT', 'IMAGE'],
            imageConfig: { aspectRatio, imageSize },
          },
        }),
      }
    );
    const data = await response.json();
    // REST responses use camelCase field names: inlineData, mimeType
    const parts = data.candidates?.[0]?.content?.parts || [];
    const imagePart = parts.find((p: any) => p.inlineData);
    return NextResponse.json({
      image: imagePart
        ? {
            data: imagePart.inlineData.data,
            mimeType: imagePart.inlineData.mimeType,
            url: `data:${imagePart.inlineData.mimeType};base64,${imagePart.inlineData.data}`,
          }
        : null,
    });
  } catch (error) {
    return NextResponse.json({ error: 'Generation failed' }, { status: 500 });
  }
}
```
## Model Comparison
| Feature | Nano Banana (2.5 Flash) | Nano Banana Pro (3 Pro Image) |
|---------|-------------------------|-------------------------------|
| Model ID | gemini-2.5-flash-image | gemini-3-pro-image-preview |
| Quality | Good | Best |
| Speed | Faster | Slower |
| Cost | Lower | Higher |
| Best For | Previews, high-volume | Production, professional |
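When both models are wired into the same pipeline, the trade-off in the table can be encoded as a trivial routing helper (a sketch; the `pick_model` name is illustrative):

```python
def pick_model(production_quality: bool) -> str:
    """Route final assets to Nano Banana Pro, fast previews to Nano Banana."""
    return "gemini-3-pro-image-preview" if production_quality else "gemini-2.5-flash-image"
```

This keeps the model choice in one place, so switching a request from preview to production quality is a single boolean flip.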
## Resources
- **Documentation**: https://ai.google.dev/gemini-api/docs/image-generation
- **Google AI Studio**: https://aistudio.google.com
- **Prompt Guide**: https://ai.google.dev/gemini-api/docs/prompting-intro