Project 2: Video Style Reference Controller
Project Overview
Create a system that extracts style from reference videos and applies it to new generations, similar to how the depth LoRA conditions generation on depth maps, but conditioning on video style instead. The goal is to "generate a new video in the style of this reference" while keeping our existing v2v controls.
Deliverables
- Style extraction pipeline
- Trained adapter model
- Integration with v2v pipeline
- Test suite with examples
- Technical docs
Technical Implementation
Phase 1: Research & Design
- Study LoRA/adapter architectures, especially depth LoRA
- Research video style transfer approaches
- Design adapter architecture for video conditioning
- Define what "style" means for motion graphics (texture, motion, color, etc.)
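One candidate adapter design worth prototyping during this phase is FiLM-style feature modulation: a small learned projection maps the style embedding to per-channel scale/shift applied to the base model's hidden states. The sketch below is a minimal NumPy mock-up of that idea, not the final architecture; the dimensions, weight init, and `StyleAdapter` name are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class StyleAdapter:
    """FiLM-style adapter sketch: maps a style embedding to per-channel
    scale/shift that modulate the base model's hidden features.
    (Hypothetical design for prototyping; weights would be learned.)"""

    def __init__(self, style_dim, feat_dim):
        # Small random weights stand in for trained projections.
        self.w_scale = rng.normal(0, 0.02, (style_dim, feat_dim))
        self.w_shift = rng.normal(0, 0.02, (style_dim, feat_dim))

    def __call__(self, features, style_emb):
        # features: (frames, tokens, feat_dim); style_emb: (style_dim,)
        scale = 1.0 + style_emb @ self.w_scale  # near-identity at init,
        shift = style_emb @ self.w_shift        # so base behavior is preserved
        return features * scale + shift

adapter = StyleAdapter(style_dim=64, feat_dim=128)
feats = rng.normal(size=(8, 16, 128))           # 8 frames, 16 tokens
out = adapter(feats, rng.normal(size=64))
print(out.shape)  # (8, 16, 128)
```

Near-identity initialization matters here: with zero-initialized projections the adapter starts as a no-op, which mirrors how LoRA-style adapters avoid disrupting the pretrained model at the start of training.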
Phase 2: Style Extraction
- Build a video analysis pipeline for style features:
  - Texture patterns (dither, noise, grain)
  - Motion characteristics (speed, smoothness, direction)
  - Color palettes and gradients
  - Shape/geometric patterns
- Create style embeddings from these features
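As a starting point for the extraction pipeline, the listed features can be approximated with cheap classical descriptors before investing in learned ones: color via histograms, motion via frame differencing, and texture/grain via high-frequency energy. The sketch below is a hedged baseline under those assumptions; the descriptor choices, bin counts, and `style_embedding` name are illustrative, not a committed design.

```python
import numpy as np

def style_embedding(frames, bins=8):
    """Concatenate simple style descriptors from a (T, H, W, 3) uint8 clip:
    color histograms, motion magnitude stats, and texture (grain) energy.
    Baseline sketch only; learned embeddings would replace this later."""
    frames = frames.astype(np.float32) / 255.0
    # Color: per-channel intensity histograms over all pixels.
    color = np.concatenate([
        np.histogram(frames[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)
    ])
    # Motion: mean frame-difference magnitude (speed) and its spread
    # over time (a rough proxy for smoothness).
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2, 3))
    motion = np.array([diffs.mean(), diffs.std()])
    # Texture: high-frequency energy, measured as the residual against
    # a crude box-blurred copy of each frame.
    blur = (frames + np.roll(frames, 1, axis=1) + np.roll(frames, 1, axis=2)) / 3
    texture = np.array([np.abs(frames - blur).mean()])
    emb = np.concatenate([color, motion, texture])
    return emb / (np.linalg.norm(emb) + 1e-8)  # unit-norm embedding

clip = (np.random.default_rng(0).random((12, 32, 32, 3)) * 255).astype(np.uint8)
emb = style_embedding(clip)
print(emb.shape)  # (27,) = 3 channels x 8 bins + 2 motion + 1 texture
```

Unit-normalizing the embedding makes reference clips comparable by cosine similarity, which is a convenient way to sanity-check that clips with similar grain, palette, or motion actually land near each other before any adapter training starts.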