blender-vse-pipeline
Automate video editing in Blender's Video Sequence Editor with Python. Use when the user wants to add video, image, or audio strips, create transitions, apply effects, build edit timelines, batch assemble footage, estimate render times, or script any VSE workflow from the command line.
Best use case
blender-vse-pipeline is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/blender-vse-pipeline/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How blender-vse-pipeline Compares
| Feature | blender-vse-pipeline | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It scripts Blender's Video Sequence Editor with Python: adding video, image, and audio strips, creating transitions and effects, assembling timelines from file lists, estimating render times, and rendering the final video headlessly from the command line.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Blender VSE Pipeline
## Overview
Automate video editing with Blender's Video Sequence Editor (VSE) using Python. Add and arrange strips (video, image, audio), create transitions, apply effects, assemble edits from file lists, and render final video — all headlessly from the terminal.
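All of the snippets below assume they are running inside Blender. A minimal launcher sketch (file names here are hypothetical placeholders) shows how such a script is typically invoked headlessly:

```python
import subprocess

blend_file = "project.blend"  # hypothetical .blend file for the edit
script = "vse_edit.py"        # hypothetical script built from the snippets below

# -b (--background) runs Blender without a UI; -P (--python) executes the script
cmd = ["blender", "-b", blend_file, "-P", script]
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment to actually launch Blender
```

The `-b` flag comes before `-P` because Blender processes command-line arguments in order.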
## Instructions
### 1. Initialize the sequence editor
```python
import bpy
scene = bpy.context.scene
if not scene.sequence_editor:
    scene.sequence_editor_create()
sequences = scene.sequence_editor.sequences
scene.frame_start = 1
scene.frame_end = 250
scene.render.fps = 24
```
### 2. Add strips
```python
# Video strip
strip = sequences.new_movie(name="Clip_A", filepath="/path/to/video.mp4", channel=1, frame_start=1)
strip.frame_offset_start = 24 # trim start (skip 1 sec)
strip.frame_offset_end = 48 # trim end
# Image strip (hold for duration)
img = sequences.new_image(name="Title_Card", filepath="/path/to/title.png", channel=2, frame_start=1, fit_method='FIT')
img.frame_final_duration = 72 # 3 seconds at 24fps
# Image sequence
import glob
img_files = sorted(glob.glob("/path/to/frames/frame_*.png"))
img_seq = sequences.new_image(name="Render", filepath=img_files[0], channel=1, frame_start=1)
for f in img_files[1:]:
    img_seq.elements.append(f.split("/")[-1])
# Audio strip with fade in/out
audio = sequences.new_sound(name="Music", filepath="/path/to/music.mp3", channel=3, frame_start=1)
audio.volume = 0.0
audio.keyframe_insert(data_path="volume", frame=1)
audio.volume = 0.6
audio.keyframe_insert(data_path="volume", frame=24)
audio.volume = 0.6
audio.keyframe_insert(data_path="volume", frame=audio.frame_final_end - 48)
audio.volume = 0.0
audio.keyframe_insert(data_path="volume", frame=audio.frame_final_end)
```
### 3. Transitions and effects
```python
# Cross dissolve (strips must overlap and sit on different channels);
# clip_a and clip_b are previously created movie strips on channels 1 and 2
cross = sequences.new_effect(
    name="CrossDissolve", type='GAMMA_CROSS', channel=3,
    frame_start=clip_a.frame_final_end - 24,
    frame_end=clip_b.frame_final_start + 24,
    seq1=clip_a, seq2=clip_b,
)
# Color strip (solid background)
color = sequences.new_effect(name="BlackBG", type='COLOR', channel=1, frame_start=1, frame_end=48)
color.color = (0, 0, 0)
# Text overlay
text = sequences.new_effect(name="Title", type='TEXT', channel=4, frame_start=1, frame_end=72)
text.text = "My Video Title"
text.font_size = 80
text.color = (1, 1, 1, 1)
text.location = (0.5, 0.5)
text.align_x = 'CENTER'
text.align_y = 'CENTER'
text.use_shadow = True
# Speed control
speed = sequences.new_effect(name="SlowMo", type='SPEED', channel=5,
                             frame_start=clip_a.frame_final_start,
                             frame_end=clip_a.frame_final_end, seq1=clip_a)
speed.speed_factor = 0.5
# Transform (position, scale, rotation)
transform = sequences.new_effect(name="Transform", type='TRANSFORM', channel=5, frame_start=1, frame_end=100, seq1=clip_a)
transform.scale_start_x = 1.2
transform.scale_start_y = 1.2
```
### 4. Strip modifiers for color correction
```python
strip = sequences["Clip_A"]
bc = strip.modifiers.new(name="BrightContrast", type='BRIGHT_CONTRAST')
bc.bright = 0.1
bc.contrast = 0.15
cb = strip.modifiers.new(name="ColorBalance", type='COLOR_BALANCE')
cb.color_balance.lift = (0.95, 0.95, 1.0)
cb.color_balance.gain = (1.1, 1.05, 0.95)
```
### 5. Render the final video
```python
scene = bpy.context.scene
scene.render.filepath = "/tmp/final_edit.mp4"
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.ffmpeg.codec = 'H264'
scene.render.ffmpeg.audio_codec = 'AAC'
scene.render.ffmpeg.audio_bitrate = 192
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
# Auto-set frame range from strips
all_strips = scene.sequence_editor.sequences_all
if all_strips:
    scene.frame_start = min(s.frame_final_start for s in all_strips)
    scene.frame_end = max(s.frame_final_end for s in all_strips)
bpy.ops.render.render(animation=True)
```
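For long edits, the guideline below of rendering to an image sequence first is more crash-tolerant: a failed render can resume from the last written frame. A sketch of the assembly step; the Blender-side settings appear as comments, and all paths are placeholders:

```python
import subprocess

fps = 24
frames_dir = "/tmp/frames"        # hypothetical frame output directory
out_path = "/tmp/final_edit.mp4"

# Inside Blender, render numbered PNGs instead of a video:
#   scene.render.image_settings.file_format = 'PNG'
#   scene.render.filepath = frames_dir + "/frame_"   # writes frame_0001.png, ...
#   bpy.ops.render.render(animation=True)

# Then stitch the frames with ffmpeg:
cmd = [
    "ffmpeg", "-y",
    "-framerate", str(fps),
    "-i", f"{frames_dir}/frame_%04d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    out_path,
]
# subprocess.run(cmd, check=True)  # run once the frames exist
```

`yuv420p` keeps the output playable in common players, which default to that pixel format.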
## Examples
### Example 1: Assemble an edit from a shot list
**User request:** "Build a timeline from video clips with crossfades between them"
```python
import bpy, os
clips = ["/path/to/shot_01.mp4", "/path/to/shot_02.mp4", "/path/to/shot_03.mp4", "/path/to/shot_04.mp4"]
crossfade_frames = 12
scene = bpy.context.scene
scene.render.fps = 24
if not scene.sequence_editor:
    scene.sequence_editor_create()
sequences = scene.sequence_editor.sequences
current_frame = 1
prev_strip = None
for i, clip_path in enumerate(clips):
    if prev_strip and crossfade_frames > 0:
        current_frame -= crossfade_frames
    strip = sequences.new_movie(
        name=os.path.splitext(os.path.basename(clip_path))[0],
        filepath=clip_path, channel=1 + (i % 2), frame_start=current_frame
    )
    if prev_strip and crossfade_frames > 0:
        sequences.new_effect(name=f"Fade_{i}", type='GAMMA_CROSS', channel=3,
                             frame_start=current_frame,
                             frame_end=current_frame + crossfade_frames,
                             seq1=prev_strip, seq2=strip)
    current_frame += strip.frame_final_duration
    prev_strip = strip
scene.frame_start = 1
scene.frame_end = current_frame - 1  # frame_final_end is exclusive, so back off one frame
```
### Example 2: Batch add text overlays from data
**User request:** "Add text titles at specific timecodes"
```python
import bpy
titles = [("Introduction", 0, 3), ("Chapter 1", 15, 4), ("Chapter 2", 120, 4), ("Conclusion", 300, 5)]
scene = bpy.context.scene
if not scene.sequence_editor:
    scene.sequence_editor_create()
fps = scene.render.fps
sequences = scene.sequence_editor.sequences
for text_content, start_sec, dur_sec in titles:
    start_frame = int(start_sec * fps) + 1
    text = sequences.new_effect(name=text_content[:20], type='TEXT', channel=5,
                                frame_start=start_frame,
                                frame_end=start_frame + int(dur_sec * fps))
    text.text = text_content
    text.font_size = 60
    text.color = (1, 1, 1, 1)
    text.location = (0.5, 0.15)
    text.align_x = 'CENTER'
    text.use_shadow = True
```
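### Example 3: Estimate render time
**User request:** "Estimate how long the final render will take"

A sketch of the extrapolation step: the per-frame time must be measured by actually rendering one or two sample frames inside Blender (shown as comments), and the value used here is a hypothetical measurement.

```python
def estimate_total_minutes(per_frame_seconds, frame_start, frame_end):
    """Extrapolate a whole-timeline render estimate from a sampled per-frame time."""
    total_frames = frame_end - frame_start + 1
    return per_frame_seconds * total_frames / 60.0

# Inside Blender you would time a sample frame, e.g.:
#   import time; t0 = time.time()
#   bpy.ops.render.render(write_still=False)
#   per_frame = time.time() - t0
per_frame = 1.5  # hypothetical: 1.5 s per frame measured on this machine
print(f"{estimate_total_minutes(per_frame, 1, 720):.1f} min")  # 720 frames -> 18.0 min
```

Frame times vary with scene complexity, so timing sample frames from several points on the timeline gives a more reliable estimate than a single probe.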
## Guidelines
- Always call `scene.sequence_editor_create()` if `scene.sequence_editor` is `None`.
- Use `sequences.new_movie()`, `new_image()`, `new_sound()`, `new_effect()`. Avoid `bpy.ops.sequencer.*` in background mode.
- Cross dissolves need strips on different channels with overlapping frames. Alternate channels (1, 2, 1, 2).
- `frame_offset_start/end` trims content without moving the strip. `frame_start` moves it on the timeline.
- For audio fades, keyframe `strip.volume` — no built-in fade effect in VSE.
- Render long edits as image sequences first, then assemble with ffmpeg.
- Set resolution to match source footage to avoid scaling artifacts.
- VSE processes channels bottom-to-top — higher channels layer on top.
Related Skills
blender-scripting
Write and run Blender Python scripts for 3D automation and procedural modeling. Use when the user wants to automate Blender tasks, create 3D models from code, run headless scripts, manipulate scenes, batch process .blend files, build geometry with bmesh, apply modifiers, generate procedural shapes, or import/export 3D models using the bpy API.
blender-render-automation
Automate Blender rendering from the command line. Use when the user wants to set up renders, batch render scenes, configure Cycles or EEVEE, set up cameras and lights, render animations, create materials and shaders, or build a render pipeline with Blender Python scripting.
blender-motion-capture
Automate motion capture and tracking workflows in Blender with Python. Use when the user wants to import BVH or FBX mocap data, retarget motion to armatures, track camera or object motion from video, solve camera motion, clean up motion capture data, or script any tracking pipeline in Blender.
blender-grease-pencil
Create 2D art and animation in Blender using Grease Pencil and Python. Use when the user wants to draw strokes programmatically, create 2D animations, build Grease Pencil objects from code, manage GP layers and frames, apply GP modifiers, set up drawing guides, or script any Grease Pencil workflow in Blender.
blender-compositing
Automate Blender compositing and post-processing with Python. Use when the user wants to set up compositor nodes, add post-processing effects, color correct renders, combine render passes, apply blur or glare, key green screens, create node-based VFX pipelines, or script the Blender compositor.
blender-animation
Animate 3D objects and characters in Blender with Python. Use when the user wants to keyframe properties, create armatures and rigs, set up IK/FK chains, animate shape keys for facial animation, edit F-Curves, use the NLA editor to blend actions, add drivers for expression-based animation, or script any animation workflow in Blender.
blender-addon-dev
Build custom Blender add-ons with Python. Use when the user wants to create a Blender add-on, register operators, build UI panels, add custom properties, create menus, package an add-on for distribution, or extend Blender with custom tools and workflows.