Add initial docs

This commit is contained in:
Skyler Lehmkuhl 2026-02-11 19:10:58 -05:00
parent f924b4c0cd
commit 908da99321
6 changed files with 4091 additions and 0 deletions

538
ARCHITECTURE.md Normal file

@@ -0,0 +1,538 @@
# Lightningbeam Architecture
This document provides a comprehensive overview of Lightningbeam's architecture, design decisions, and component interactions.
## Table of Contents
- [System Overview](#system-overview)
- [Technology Stack](#technology-stack)
- [Component Architecture](#component-architecture)
- [Data Flow](#data-flow)
- [Rendering Pipeline](#rendering-pipeline)
- [Audio Architecture](#audio-architecture)
- [Key Design Decisions](#key-design-decisions)
- [Directory Structure](#directory-structure)
## System Overview
Lightningbeam is a 2D multimedia editor combining vector animation, audio production, and video editing. The application is built as a pure Rust desktop application using immediate-mode GUI (egui) with GPU-accelerated vector rendering (Vello).
### High-Level Architecture
```
┌────────────────────────────────────────────────────────────┐
│ Lightningbeam Editor │
│ (egui UI) │
├────────────────────────────────────────────────────────────┤
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Stage │ │ Timeline │ │ Asset │ │ Info │ │
│ │ Pane │ │ Pane │ │ Library │ │ Panel │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Lightningbeam Core (Data Model) │ │
│ │ Document, Layers, Clips, Actions, Undo/Redo │ │
│ └──────────────────────────────────────────────────────┘ │
├────────────────────────────────────────────────────────────┤
│ Rendering & Audio │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ Vello + wgpu │ │ daw-backend │ │
│ │ (GPU Rendering) │ │ (Audio Engine) │ │
│ └──────────────────┘ └──────────────────┘ │
└────────────────────────────────────────────────────────────┘
↓ ↓
┌─────────┐ ┌─────────┐
│ GPU │ │ cpal │
│ (Vulkan │ │ (Audio │
│ /Metal) │ │ I/O) │
└─────────┘ └─────────┘
```
### Migration from Tauri/JavaScript
Lightningbeam is undergoing a rewrite from a Tauri/JavaScript prototype to pure Rust. The original architecture hit IPC bandwidth limitations when streaming decoded video frames. The new Rust UI eliminates this bottleneck by handling all rendering natively.
**Current Status**: Active development on the `rust-ui` branch. Core UI, tools, and undo system are implemented. Audio integration in progress.
## Technology Stack
### UI Framework
- **egui 0.33.3**: Immediate-mode GUI framework
- **eframe 0.33.3**: Application framework wrapping egui
- **winit 0.30**: Cross-platform windowing
### GPU Rendering
- **Vello (git main)**: GPU-accelerated 2D vector graphics using compute shaders
- **wgpu 27**: Low-level GPU API (Vulkan/Metal backend)
- **kurbo 0.12**: 2D curve and shape primitives
- **peniko 0.5**: Color and brush definitions
### Audio Engine
- **daw-backend**: Custom real-time audio engine
- **cpal 0.15**: Cross-platform audio I/O
- **symphonia 0.5**: Audio decoding (MP3, FLAC, WAV, Ogg, etc.)
- **rtrb 0.3**: Lock-free ringbuffers for audio thread communication
- **dasp**: Audio graph processing
### Video
- **FFmpeg**: Video encoding/decoding (via ffmpeg-next)
### Serialization
- **serde**: Document serialization
- **serde_json**: JSON format
## Component Architecture
### 1. Lightningbeam Core (`lightningbeam-core/`)
The core crate contains the data model and business logic, independent of UI framework.
**Key Types:**
```rust
Document {
canvas_size: (u32, u32),
layers: Vec<Layer>,
undo_stack: Vec<Box<dyn Action>>,
redo_stack: Vec<Box<dyn Action>>,
}
Layer (enum) {
VectorLayer { clips: Vec<VectorClip>, ... },
AudioLayer { clips: Vec<AudioClip>, ... },
VideoLayer { clips: Vec<VideoClip>, ... },
}
ClipInstance {
clip_id: Uuid, // Reference to clip definition
start_time: f64, // Timeline position
duration: f64,
trim_start: f64,
trim_end: f64,
}
```
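To illustrate how these fields interact, here is a hedged sketch of mapping a timeline position into clip-local time. The method name `source_time` and the exact mapping are assumptions for illustration, not lightningbeam-core API:

```rust
// Simplified ClipInstance: only the fields needed for the mapping.
struct ClipInstance {
    start_time: f64, // timeline position (seconds)
    duration: f64,   // length on the timeline
    trim_start: f64, // seconds trimmed off the start of the source clip
}

impl ClipInstance {
    /// Time inside the source clip for timeline time `t`,
    /// or None if `t` falls outside this instance.
    fn source_time(&self, t: f64) -> Option<f64> {
        if t < self.start_time || t >= self.start_time + self.duration {
            return None;
        }
        Some(self.trim_start + (t - self.start_time))
    }
}

fn main() {
    let inst = ClipInstance { start_time: 10.0, duration: 4.0, trim_start: 1.5 };
    assert_eq!(inst.source_time(11.0), Some(2.5)); // 1.5 + (11 - 10)
    assert_eq!(inst.source_time(9.0), None);       // before the instance starts
}
```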
**Responsibilities:**
- Document structure and state
- Clip and layer management
- Action system (undo/redo)
- Tool definitions
- Animation data and keyframes
### 2. Lightningbeam Editor (`lightningbeam-editor/`)
The editor application implements the UI and user interactions.
**Main Entry Point:** `src/main.rs`
- Initializes eframe application
- Sets up window, GPU context, and audio system
- Runs main event loop
**Panes** (`src/panes/`):
Each pane is a self-contained UI component:
- `stage.rs` (214KB): Main canvas for drawing, transform tools, GPU rendering
- `timeline.rs` (84KB): Multi-track timeline with clip editing
- `asset_library.rs` (70KB): Asset browser with drag-and-drop
- `infopanel.rs` (31KB): Context-sensitive property editor
- `virtual_piano.rs` (31KB): MIDI keyboard input
- `toolbar.rs` (9KB): Tool palette
**Pane System:**
```rust
pub enum PaneInstance {
Stage(Stage),
Timeline(Timeline),
AssetLibrary(AssetLibrary),
// ... other panes
}
impl PaneInstance {
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
match self {
PaneInstance::Stage(stage) => stage.render(ui, shared_state),
// ... dispatch to specific pane
}
}
}
```
**SharedPaneState:**
Facilitates communication between panes:
```rust
pub struct SharedPaneState {
pub document: Document,
pub selected_tool: Tool,
pub pending_actions: Vec<Box<dyn Action>>,
pub audio_system: AudioSystem,
// ... other shared state
}
```
### 3. DAW Backend (`daw-backend/`)
Standalone audio engine crate with real-time audio processing.
**Architecture:**
```
UI Thread Audio Thread (real-time)
│ │
│ Commands (rtrb queue) │
├──────────────────────────────>│
│ │
│ State Updates │
<──────────────────────────────┤
│ │
┌───────────────┐
│ Audio Engine │
│ process() │
└───────────────┘
┌───────────────┐
│ Track Mix │
└───────────────┘
┌───────────────┐
│ cpal Output │
└───────────────┘
```
**Key Components:**
- **Engine** (`audio/engine.rs`): Main audio callback, runs on real-time thread
- **Project** (`audio/project.rs`): Top-level audio state
- **Track** (`audio/track.rs`): Individual audio tracks with effects chains
- **Effects**: Reverb, delay, EQ, compressor, distortion, etc.
- **Synthesizers**: Oscillator, FM synth, wavetable, sampler
**Lock-Free Design:**
The audio thread never blocks. UI sends commands via lock-free ringbuffers (rtrb), audio thread processes them between buffer callbacks.
## Data Flow
### Document Editing Flow
```
User Input (mouse/keyboard)
egui Event Handlers (in pane.render())
Create Action (implements Action trait)
Add to SharedPaneState.pending_actions
After all panes render: execute actions
Action.apply(&mut document)
Push to undo_stack
UI re-renders with updated document
```
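The flow above can be sketched end-to-end. This is a simplified illustration rather than the real lightningbeam-core types: `AddShapeAction` is hypothetical, shapes are plain strings, and the trait omits `redo` for brevity:

```rust
#[derive(Default)]
struct Document {
    shapes: Vec<String>, // stand-in for real shape data
}

trait Action {
    fn apply(&mut self, doc: &mut Document);
    fn undo(&mut self, doc: &mut Document);
}

struct AddShapeAction {
    shape: String,
}

impl Action for AddShapeAction {
    fn apply(&mut self, doc: &mut Document) {
        doc.shapes.push(self.shape.clone());
    }
    fn undo(&mut self, doc: &mut Document) {
        doc.shapes.pop();
    }
}

fn main() {
    let mut doc = Document::default();
    // Actions queued by panes during rendering.
    let mut pending: Vec<Box<dyn Action>> =
        vec![Box::new(AddShapeAction { shape: "rect".into() })];
    let mut undo_stack: Vec<Box<dyn Action>> = Vec::new();

    // After all panes have rendered, execute and record the actions.
    for mut action in pending.drain(..) {
        action.apply(&mut doc);
        undo_stack.push(action);
    }
    assert_eq!(doc.shapes, vec!["rect".to_string()]);

    // Undo pops the most recent action and reverses it.
    if let Some(mut action) = undo_stack.pop() {
        action.undo(&mut doc);
    }
    assert!(doc.shapes.is_empty());
}
```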
### Audio Playback Flow
```
UI: User clicks Play
Send PlayCommand to audio engine (via rtrb queue)
Audio thread: Receive command
Audio thread: Start playback, increment playhead
Audio callback (every ~5ms): Engine::process()
Mix tracks, apply effects, output samples
Send playhead position back to UI
UI: Update timeline playhead position
```
### GPU Rendering Flow
```
egui layout phase
Stage pane requests wgpu callback
Vello renders vector shapes to GPU texture
Custom wgpu integration composites:
- Vello output (vector graphics)
- Waveform textures (GPU-rendered audio)
- egui UI overlay
Present to screen
```
## Rendering Pipeline
### Stage Rendering
The Stage pane uses a custom wgpu callback to render directly to GPU:
```rust
ui.painter().add(egui_wgpu::Callback::new_paint_callback(
rect,
StageCallback { /* render data */ }
));
```
**Vello Integration:**
1. Create Vello `Scene` from document shapes
2. Render scene to GPU texture using compute shaders
3. Composite with UI elements
**Waveform Rendering:**
- Audio waveforms rendered on GPU using custom WGSL shaders
- Mipmaps generated via compute shader for level-of-detail
- Uniform buffers store view parameters (zoom, offset, tint color)
**WGSL Alignment Requirements:**
WGSL has strict alignment rules. `vec4<f32>` requires 16-byte alignment:
```rust
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct WaveformParams {
view_matrix: [f32; 16], // 64 bytes
viewport_size: [f32; 2], // 8 bytes
zoom: f32, // 4 bytes
_pad1: f32, // 4 bytes padding
tint_color: [f32; 4], // 16 bytes (requires 16-byte alignment)
}
// Total: 96 bytes
```
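A cheap guard against layout drift is asserting that the Rust-side struct size matches what the shader expects. A sketch of that check, with the `bytemuck` derives omitted so the example stays dependency-free:

```rust
// Must mirror the WGSL uniform layout byte-for-byte.
#[repr(C)]
#[derive(Copy, Clone)]
struct WaveformParams {
    view_matrix: [f32; 16],   // 64 bytes
    viewport_size: [f32; 2],  // 8 bytes
    zoom: f32,                // 4 bytes
    _pad1: f32,               // 4 bytes, pushes tint_color to offset 80
    tint_color: [f32; 4],     // 16 bytes at a 16-byte-aligned offset
}

fn main() {
    // If a field is added or reordered without fixing the padding,
    // this assertion fails before the GPU ever sees garbage data.
    assert_eq!(std::mem::size_of::<WaveformParams>(), 96);
}
```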
## Audio Architecture
### Real-Time Constraints
Audio callbacks run on a dedicated real-time thread with strict timing requirements:
- Buffer size: 256 frames default (~5.8ms at 44.1kHz)
- ALSA may provide smaller buffers (64-75 frames, ~1.5ms)
- **No blocking operations allowed**: No locks, no allocations, no syscalls
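The deadline figures above follow from simple arithmetic (frames divided by sample rate); a quick check:

```rust
// Time budget for one audio callback: frames / sample_rate, in ms.
fn buffer_latency_ms(frames: u32, sample_rate: u32) -> f64 {
    frames as f64 / sample_rate as f64 * 1000.0
}

fn main() {
    let default_ms = buffer_latency_ms(256, 44_100); // ~5.8 ms
    assert!((default_ms - 5.805).abs() < 0.01);

    let alsa_small = buffer_latency_ms(64, 44_100); // ~1.45 ms
    assert!(alsa_small < 1.5);
}
```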
### Lock-Free Communication
UI and audio thread communicate via lock-free ringbuffers (rtrb):
```rust
// UI Thread
command_sender.push(AudioCommand::Play).ok();
// Audio Thread (in process callback)
while let Ok(command) = command_receiver.pop() {
match command {
AudioCommand::Play => self.playing = true,
// ... handle other commands
}
}
```
### Audio Processing Pipeline
```
Audio Callback Invoked (every ~5ms)
Process queued commands
For each track:
- Read audio samples at playhead position
- Apply effects chain
- Mix to master output
Write samples to output buffer
Return from callback (must complete in <5ms)
```
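A minimal sketch of this per-callback loop, assuming a simplified `Track` where a single gain stands in for the full effects chain (these are not the real daw-backend types). Note that `process` only writes into a pre-allocated buffer, mirroring the no-allocation constraint:

```rust
struct Track {
    samples: Vec<f32>, // decoded audio, loaded ahead of time on the UI thread
    gain: f32,         // stand-in for a full effects chain
}

/// Mix all tracks at `playhead` into `out`. No allocation, no locking.
fn process(tracks: &[Track], playhead: usize, out: &mut [f32]) {
    out.fill(0.0);
    for track in tracks {
        for (i, out_sample) in out.iter_mut().enumerate() {
            // Tracks shorter than the playhead simply contribute silence.
            if let Some(s) = track.samples.get(playhead + i) {
                *out_sample += s * track.gain;
            }
        }
    }
}

fn main() {
    let tracks = vec![
        Track { samples: vec![0.5; 8], gain: 1.0 },
        Track { samples: vec![0.25; 8], gain: 2.0 },
    ];
    let mut out = vec![0.0f32; 4]; // pre-allocated output buffer
    process(&tracks, 0, &mut out);
    assert!((out[0] - 1.0).abs() < 1e-6); // 0.5*1.0 + 0.25*2.0
}
```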
### Optimized Debug Builds
Audio code is optimized even in debug builds to meet real-time deadlines:
```toml
[profile.dev.package.daw-backend]
opt-level = 2
[profile.dev.package.symphonia]
opt-level = 2
# ... other audio libraries
```
## Key Design Decisions
### Layer & Clip System
**Type-Specific Layers:**
Each layer type supports only its matching clip type:
- `VectorLayer` → `VectorClip`
- `AudioLayer` → `AudioClip`
- `VideoLayer` → `VideoClip`
**Recursive Nesting:**
Vector clips can contain internal layers of any type, enabling complex nested compositions.
**Clip vs ClipInstance:**
- **Clip**: Template/definition in asset library (the "master")
- **ClipInstance**: Placed on timeline with instance-specific properties (position, duration, trim points)
- Multiple instances can reference the same clip
- "Make Unique" operation duplicates the underlying clip
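A sketch of this relationship, using plain `u64` ids in place of the `Uuid`s the real data model uses, and a hypothetical `make_unique` helper:

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Clip {
    name: String, // stand-in for the clip's real contents
}

struct ClipInstance {
    clip_id: u64, // reference into the clip library
    start_time: f64,
}

/// Clone the instance's underlying clip under a fresh id and
/// repoint the instance at the copy, leaving other instances untouched.
fn make_unique(library: &mut HashMap<u64, Clip>, next_id: &mut u64, instance: &mut ClipInstance) {
    let copy = library[&instance.clip_id].clone();
    *next_id += 1;
    library.insert(*next_id, copy);
    instance.clip_id = *next_id;
}

fn main() {
    let mut library = HashMap::from([(1, Clip { name: "walk cycle".into() })]);
    let mut next_id = 1;

    // Two instances referencing the same master clip.
    let mut a = ClipInstance { clip_id: 1, start_time: 0.0 };
    let b = ClipInstance { clip_id: 1, start_time: 5.0 };

    make_unique(&mut library, &mut next_id, &mut a);
    assert_ne!(a.clip_id, b.clip_id); // `a` now edits its own copy
    assert_eq!(library.len(), 2);
}
```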
### Undo/Redo System
**Action Trait:**
```rust
pub trait Action: Send {
fn apply(&mut self, document: &mut Document);
fn undo(&mut self, document: &mut Document);
fn redo(&mut self, document: &mut Document);
}
```
All operations (drawing, editing, clip manipulation) implement this trait.
**Continuous Operations:**
Dragging sliders or scrubbing creates only one undo action when complete, not one per frame.
### Two-Phase Dispatch Pattern
Panes cannot directly mutate shared state during rendering (borrowing rules). Instead:
1. **Phase 1 (Render)**: Panes register actions
```rust
shared_state.register_action(Box::new(MyAction { ... }));
```
2. **Phase 2 (Execute)**: After all panes rendered, execute actions
```rust
for mut action in shared_state.pending_actions.drain(..) {
    action.apply(&mut document);
undo_stack.push(action);
}
```
### Pane ID Salting
egui uses IDs to track widget state. Multiple instances of the same pane would collide without unique IDs.
**Solution**: Salt all IDs with the pane's node path:
```rust
ui.push_id(&node_path, |ui| {
    ui.horizontal(|ui| {
        ui.label("My Widget");
    });
});
```
### Selection & Clipboard
- **Selection scope**: Limited to current clip/layer
- **Type-aware paste**: Content must match target type
- **Clip instance copying**: Creates reference to same underlying clip
- **Make unique**: Duplicates underlying clip for independent editing
## Directory Structure
```
lightningbeam-2/
├── lightningbeam-ui/ # Rust UI workspace
│ ├── Cargo.toml # Workspace manifest
│ ├── lightningbeam-editor/ # Main application crate
│ │ ├── Cargo.toml
│ │ └── src/
│ │ ├── main.rs # Entry point, event loop
│ │ ├── app.rs # Application state
│ │ ├── panes/
│ │ │ ├── mod.rs # Pane system dispatch
│ │ │ ├── stage.rs # Main canvas
│ │ │ ├── timeline.rs # Timeline editor
│ │ │ ├── asset_library.rs
│ │ │ └── ...
│ │ ├── tools/ # Drawing and editing tools
│ │ ├── rendering/
│ │ │ ├── vello_integration.rs
│ │ │ ├── waveform_gpu.rs
│ │ │ └── shaders/
│ │ │ ├── waveform.wgsl
│ │ │ └── waveform_mipgen.wgsl
│ │ └── export/ # Export functionality
│ └── lightningbeam-core/ # Core data model crate
│ ├── Cargo.toml
│ └── src/
│ ├── lib.rs
│ ├── document.rs # Document structure
│ ├── layer.rs # Layer types
│ ├── clip.rs # Clip types and instances
│ ├── shape.rs # Shape definitions
│ ├── action.rs # Action trait and undo/redo
│ ├── animation.rs # Keyframe animation
│ └── tools.rs # Tool definitions
├── daw-backend/ # Audio engine (standalone)
│ ├── Cargo.toml
│ └── src/
│ ├── lib.rs # Audio system initialization
│ ├── audio/
│ │ ├── engine.rs # Main audio callback
│ │ ├── track.rs # Track management
│ │ ├── project.rs # Project state
│ │ └── buffer.rs # Audio buffer utilities
│ ├── effects/ # Audio effects
│ │ ├── reverb.rs
│ │ ├── delay.rs
│ │ └── ...
│ ├── synth/ # Synthesizers
│ └── midi/ # MIDI handling
├── src-tauri/ # Legacy Tauri backend
├── src/ # Legacy JavaScript frontend
├── CONTRIBUTING.md # Contributor guide
├── ARCHITECTURE.md # This file
├── README.md # Project overview
└── docs/ # Additional documentation
├── AUDIO_SYSTEM.md
├── UI_SYSTEM.md
└── ...
```
## Performance Considerations
### GPU Rendering
- Vello uses compute shaders for efficient 2D rendering
- Waveforms pre-rendered on GPU with mipmaps for smooth zooming
- Custom wgpu integration minimizes CPU↔GPU data transfer
### Audio Processing
- Lock-free design: No blocking in audio thread
- Optimized even in debug builds (`opt-level = 2`)
- Memory-mapped file I/O for large audio files
- Zero-copy audio buffers where possible
### Memory Management
- Audio buffers pre-allocated, no allocations in audio thread
- Vello manages GPU memory automatically
- Document structure uses `Rc`/`Arc` for shared clip references
## Future Considerations
### Video Integration
Video decoding has been ported from the legacy Tauri backend. Video soundtracks become audio tracks in daw-backend, enabling full effects processing.
### File Format
The .beam file format is not yet finalized. Considerations:
- Single JSON file vs container format (e.g., ZIP)
- Embedded media vs external references
- Forward/backward compatibility strategy
### Node Editor
Primary use: Audio effects chains and modular synthesizers. Future expansion to visual effects and procedural generation is possible.
## Related Documentation
- [CONTRIBUTING.md](CONTRIBUTING.md) - Development setup and workflow
- [docs/AUDIO_SYSTEM.md](docs/AUDIO_SYSTEM.md) - Detailed audio engine documentation
- [docs/UI_SYSTEM.md](docs/UI_SYSTEM.md) - UI pane system details
- [docs/RENDERING.md](docs/RENDERING.md) - GPU rendering pipeline
- [Claude.md](Claude.md) - Comprehensive architectural reference for AI assistants

278
CONTRIBUTING.md Normal file

@@ -0,0 +1,278 @@
# Contributing to Lightningbeam
Thank you for your interest in contributing to Lightningbeam! This document provides guidelines and instructions for setting up your development environment and contributing to the project.
## Table of Contents
- [Development Setup](#development-setup)
- [Building the Project](#building-the-project)
- [Project Structure](#project-structure)
- [Making Changes](#making-changes)
- [Code Style](#code-style)
- [Testing](#testing)
- [Submitting Changes](#submitting-changes)
- [Getting Help](#getting-help)
## Development Setup
### Prerequisites
- **Rust**: Install via [rustup](https://rustup.rs/) (stable toolchain)
- **System dependencies** (Linux):
- ALSA development files: `libasound2-dev`
- For Ubuntu/Debian: `sudo apt install libasound2-dev pkg-config`
- For Arch/Manjaro: `sudo pacman -S alsa-lib`
- **FFmpeg**: Required for video encoding/decoding
- Ubuntu/Debian: `sudo apt install ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libswresample-dev pkg-config clang`
- Arch/Manjaro: `sudo pacman -S ffmpeg`
### Clone and Build
```bash
# Clone the repository (GitHub)
git clone https://github.com/skykooler/lightningbeam.git
# Or from Gitea
git clone https://git.skyler.io/skyler/lightningbeam.git
cd lightningbeam
# Build the Rust UI editor (current focus)
cd lightningbeam-ui
cargo build
# Run the editor
cargo run
```
**Note**: The project is hosted on both GitHub and Gitea (git.skyler.io). You can use either for cloning and submitting pull requests.
## Building the Project
### Workspace Structure
The project consists of multiple Rust workspaces:
1. **lightningbeam-ui** (current focus) - Pure Rust UI application
- `lightningbeam-editor/` - Main editor application
- `lightningbeam-core/` - Core data models and business logic
2. **daw-backend** - Audio engine (standalone crate)
3. **Root workspace** (legacy) - Contains Tauri backend and benchmarks
### Build Commands
```bash
# Build the editor (from lightningbeam-ui/)
cargo build
# Build with optimizations (faster runtime)
cargo build --release
# Check just the audio backend
cargo check -p daw-backend
# Build the audio backend separately
cd ../daw-backend
cargo build
```
### Debug Builds and Audio Performance
The audio engine runs on a real-time thread with strict timing constraints (~5.8ms at 44.1kHz). To maintain performance in debug builds, the audio backend is compiled with optimizations even in debug mode:
```toml
# In lightningbeam-ui/Cargo.toml
[profile.dev.package.daw-backend]
opt-level = 2
```
This is already configured; no action needed.
### Debug Flags
Enable audio diagnostics with:
```bash
DAW_AUDIO_DEBUG=1 cargo run
```
This prints timing information, buffer sizes, and overrun warnings to help debug audio issues.
## Project Structure
```
lightningbeam-2/
├── lightningbeam-ui/ # Rust UI workspace (current)
│ ├── lightningbeam-editor/ # Main application
│ │ └── src/
│ │ ├── main.rs # Entry point
│ │ ├── panes/ # UI panes (stage, timeline, etc.)
│ │ └── tools/ # Drawing and editing tools
│ └── lightningbeam-core/ # Core data model
│ └── src/
│ ├── document.rs # Document structure
│ ├── clip.rs # Clips and instances
│ ├── action.rs # Undo/redo system
│ └── tools.rs # Tool system
├── daw-backend/ # Audio engine
│ └── src/
│ ├── lib.rs # Audio system setup
│ ├── audio/
│ │ ├── engine.rs # Audio callback
│ │ ├── track.rs # Track management
│ │ └── project.rs # Project state
│ └── effects/ # Audio effects
├── src-tauri/ # Legacy Tauri backend
└── src/ # Legacy JavaScript frontend
```
## Making Changes
### Branching Strategy
- `main` - Stable branch
- `rust-ui` - Active development branch for Rust UI rewrite
- Feature branches - Create from `rust-ui` for new features
### Before You Start
1. Check existing issues or create a new one to discuss your change
2. Make sure you're on the latest `rust-ui` branch:
```bash
git checkout rust-ui
git pull origin rust-ui
```
3. Create a feature branch:
```bash
git checkout -b feature/your-feature-name
```
## Code Style
### Rust Style
- Follow standard Rust formatting: `cargo fmt`
- Check for common issues: `cargo clippy`
- Use meaningful variable names
- Add comments for non-obvious code
- Keep functions focused and reasonably sized
### Key Patterns
#### Pane ID Salting
When implementing new panes, **always salt egui IDs** with the node path to avoid collisions when users add multiple instances of the same pane:
```rust
ui.push_id(&node_path, |ui| { // Salt the Id scope with the node path
    ui.horizontal(|ui| {
        ui.label("My Widget");
    });
});
```
#### Splitting Borrows with `std::mem::take`
When you need to split borrows from a struct, use `std::mem::take`:
```rust
let mut clips = std::mem::take(&mut self.clips);
// Now you can borrow other fields while processing clips
// ...
self.clips = clips; // Restore the field when done
```
#### Two-Phase Dispatch
Panes register handlers during render, execution happens after:
```rust
// During render
shared_state.register_action(Box::new(MyAction { ... }));
// After all panes rendered
for mut action in shared_state.pending_actions.drain(..) {
    action.apply(&mut document);
}
```
## Testing
### Running Tests
```bash
# Run all tests
cargo test
# Test specific package
cargo test -p lightningbeam-core
cargo test -p daw-backend
# Run with output
cargo test -- --nocapture
```
### Audio Testing
Test audio functionality:
```bash
# Run with audio debug output
DAW_AUDIO_DEBUG=1 cargo run
# Check for audio dropouts or timing issues in the console output
```
## Submitting Changes
### Before Submitting
1. **Format your code**: `cargo fmt --all`
2. **Run clippy**: `cargo clippy --all-targets --all-features`
3. **Run tests**: `cargo test --all`
4. **Test manually**: Build and run the application to verify your changes work
5. **Write clear commit messages**: Describe what and why, not just what
### Commit Message Format
```
Short summary (50 chars or less)
More detailed explanation if needed. Wrap at 72 characters.
Explain the problem this commit solves and why you chose
this solution.
- Bullet points are fine
- Use present tense: "Add feature" not "Added feature"
```
### Pull Request Process
1. Push your branch to GitHub or Gitea
2. Open a pull request against `rust-ui` branch
- GitHub: https://github.com/skykooler/lightningbeam
- Gitea: https://git.skyler.io/skyler/lightningbeam
3. Provide a clear description of:
- What problem does this solve?
- How does it work?
- Any testing you've done
- Screenshots/videos if applicable (especially for UI changes)
4. Address review feedback
5. Once approved, a maintainer will merge your PR
### PR Checklist
- [ ] Code follows project style (`cargo fmt`, `cargo clippy`)
- [ ] Tests pass (`cargo test`)
- [ ] New code has appropriate tests (if applicable)
- [ ] Documentation updated (if needed)
- [ ] Commit messages are clear
- [ ] PR description explains the change
## Getting Help
- **Issues**: Check issues on [GitHub](https://github.com/skykooler/lightningbeam/issues) or [Gitea](https://git.skyler.io/skyler/lightningbeam/issues) for existing discussions
- **Documentation**: See `ARCHITECTURE.md` and `docs/` folder for technical details
- **Questions**: Open a discussion or issue with the "question" label on either platform
## Additional Resources
- [ARCHITECTURE.md](ARCHITECTURE.md) - System architecture overview
- [docs/AUDIO_SYSTEM.md](docs/AUDIO_SYSTEM.md) - Audio engine details
- [docs/UI_SYSTEM.md](docs/UI_SYSTEM.md) - UI and pane system
## License
By contributing, you agree that your contributions will be licensed under the same license as the project.

1092
docs/AUDIO_SYSTEM.md Normal file

File diff suppressed because it is too large

523
docs/BUILDING.md Normal file

@@ -0,0 +1,523 @@
# Building Lightningbeam
This guide provides detailed instructions for building Lightningbeam on different platforms, including dependency installation, troubleshooting, and advanced build configurations.
## Table of Contents
- [Quick Start](#quick-start)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Dependencies](#dependencies)
- [Build Configurations](#build-configurations)
- [Troubleshooting](#troubleshooting)
- [Development Builds](#development-builds)
## Quick Start
```bash
# Clone the repository
git clone https://github.com/skykooler/lightningbeam.git
cd lightningbeam/lightningbeam-ui
# Build and run
cargo build
cargo run
```
## Platform-Specific Instructions
### Linux
#### Ubuntu/Debian
**Important**: Lightningbeam requires FFmpeg 8, which may not be in the default repositories.
```bash
# Install basic dependencies
sudo apt update
sudo apt install -y \
build-essential \
pkg-config \
libasound2-dev \
clang \
libclang-dev
# Install FFmpeg 8 from PPA (Ubuntu)
sudo add-apt-repository ppa:ubuntuhandbook1/ffmpeg7
sudo apt update
sudo apt install -y \
ffmpeg \
libavcodec-dev \
libavformat-dev \
libavutil-dev \
libswscale-dev \
libswresample-dev
# Verify FFmpeg version (should be 8.x)
ffmpeg -version
# Install Rust if needed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Build
cd lightningbeam-ui
cargo build --release
```
**Note**: If the PPA doesn't provide FFmpeg 8, you may need to compile FFmpeg from source or find an alternative PPA. See [FFmpeg Issues](#ffmpeg-issues) for details.
#### Arch Linux/Manjaro
```bash
# Install system dependencies
sudo pacman -S --needed \
base-devel \
rust \
alsa-lib \
ffmpeg \
clang
# Build
cd lightningbeam-ui
cargo build --release
```
#### Fedora/RHEL
```bash
# Install system dependencies
sudo dnf install -y \
gcc \
gcc-c++ \
make \
pkg-config \
alsa-lib-devel \
ffmpeg \
ffmpeg-devel \
clang \
clang-devel
# Install Rust if needed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Build
cd lightningbeam-ui
cargo build --release
```
### macOS
```bash
# Install Homebrew if needed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install dependencies
brew install rust ffmpeg pkg-config
# Build
cd lightningbeam-ui
cargo build --release
```
**Note**: macOS uses CoreAudio for audio I/O (via cpal), so no additional audio libraries are needed.
### Windows
#### Using Visual Studio
1. Install [Visual Studio 2022](https://visualstudio.microsoft.com/) with "Desktop development with C++" workload
2. Install [Rust](https://rustup.rs/)
3. Install [FFmpeg](https://ffmpeg.org/download.html#build-windows):
- Download a shared build from https://www.gyan.dev/ffmpeg/builds/
- Extract to `C:\ffmpeg`
- Add `C:\ffmpeg\bin` to PATH
- Set environment variables:
```cmd
set FFMPEG_DIR=C:\ffmpeg
set PKG_CONFIG_PATH=C:\ffmpeg\lib\pkgconfig
```
4. Build:
```cmd
cd lightningbeam-ui
cargo build --release
```
#### Using MSYS2/MinGW
```bash
# In MSYS2 shell
pacman -S mingw-w64-x86_64-rust \
mingw-w64-x86_64-ffmpeg \
mingw-w64-x86_64-pkg-config
cd lightningbeam-ui
cargo build --release
```
**Note**: Windows uses WASAPI for audio I/O (via cpal), which is built into Windows.
## Dependencies
### Required Dependencies
#### Rust Toolchain
- **Version**: Stable (1.70+)
- **Install**: https://rustup.rs/
- **Components**: Default installation includes everything needed
#### Audio I/O (ALSA on Linux)
- **Ubuntu/Debian**: `libasound2-dev`
- **Arch**: `alsa-lib`
- **Fedora**: `alsa-lib-devel`
- **macOS**: CoreAudio (built-in)
- **Windows**: WASAPI (built-in)
#### FFmpeg
**Version Required**: FFmpeg 8.x
Required for video encoding/decoding. Note that many distribution repositories may have older versions.
- **Ubuntu/Debian**: Use PPA for FFmpeg 8 (see [Ubuntu/Debian instructions](#ubuntudebian))
- **Arch**: `ffmpeg` (usually up-to-date)
- **Fedora**: `ffmpeg ffmpeg-devel` (check version with `ffmpeg -version`)
- **macOS**: `brew install ffmpeg` (Homebrew usually has latest)
- **Windows**: Download FFmpeg 8 from https://ffmpeg.org/download.html
#### Build Tools
- **Linux**: `build-essential` (Ubuntu), `base-devel` (Arch)
- **macOS**: Xcode Command Line Tools (`xcode-select --install`)
- **Windows**: Visual Studio with C++ tools or MinGW
#### pkg-config
Required for finding system libraries.
- **Linux**: Usually included with build tools
- **macOS**: `brew install pkg-config`
- **Windows**: Included with MSYS2/MinGW, or use vcpkg
### Optional Dependencies
#### GPU Drivers
Vello requires a GPU with Vulkan (Linux/Windows) or Metal (macOS) support:
- **Linux Vulkan**:
- NVIDIA: Install proprietary drivers
- AMD: `mesa-vulkan-drivers` (Ubuntu) or `vulkan-radeon` (Arch)
- Intel: `mesa-vulkan-drivers` (Ubuntu) or `vulkan-intel` (Arch)
- **macOS Metal**: Built-in (macOS 10.13+)
- **Windows Vulkan**:
- Usually included with GPU drivers
- Manual install: https://vulkan.lunarg.com/
## Build Configurations
### Release Build (Optimized)
```bash
cargo build --release
```
- Optimizations: Level 3
- LTO: Enabled
- Debug info: None
- Build time: Slower (~5-10 minutes)
- Runtime: Fast
Binary location: `target/release/lightningbeam-editor`
### Debug Build (Default)
```bash
cargo build
```
- Optimizations: Level 1 (Level 2 for audio code)
- LTO: Disabled
- Debug info: Full
- Build time: Faster (~2-5 minutes)
- Runtime: Slower (but audio is still optimized)
Binary location: `target/debug/lightningbeam-editor`
**Note**: Audio code is always compiled with `opt-level = 2` even in debug builds to meet real-time deadlines. This is configured in `lightningbeam-ui/Cargo.toml`:
```toml
[profile.dev.package.daw-backend]
opt-level = 2
```
### Check Without Building
Quickly check for compilation errors without producing binaries:
```bash
cargo check
```
Useful for rapid feedback during development.
### Build Specific Package
```bash
# Check only the audio backend
cargo check -p daw-backend
# Build only the core library
cargo build -p lightningbeam-core
```
## Troubleshooting
### Audio Issues
#### "ALSA lib cannot find card" or similar errors
**Solution**: Install ALSA development files:
```bash
# Ubuntu/Debian
sudo apt install libasound2-dev
# Arch
sudo pacman -S alsa-lib
```
#### Audio dropouts or crackling
**Symptoms**: Console shows "Audio overrun" or timing warnings.
**Solutions**:
1. Increase buffer size in `daw-backend/src/lib.rs` (default: 256 frames)
2. Enable audio debug logging:
```bash
DAW_AUDIO_DEBUG=1 cargo run
```
3. Make sure audio code is optimized (check `Cargo.toml` profile settings)
4. Close other audio applications
#### "PulseAudio" or "JACK" errors in container
**Note**: This is expected in containerized environments without audio support. These errors don't occur on native systems.
### FFmpeg Issues
#### "Could not find FFmpeg libraries" or linking errors
**Version Check First**:
```bash
ffmpeg -version
# Should show version 8.x
```
**Linux**:
```bash
# Ubuntu/Debian - requires FFmpeg 8 from PPA
sudo add-apt-repository ppa:ubuntuhandbook1/ffmpeg7
sudo apt update
sudo apt install libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libswresample-dev
# Arch (usually has latest)
sudo pacman -S ffmpeg
# Check installation
pkg-config --modversion libavcodec
# Should show 62.x (the libavcodec major version that ships with FFmpeg 8)
```
If the PPA doesn't work or doesn't have FFmpeg 8, you may need to compile from source:
```bash
# Download and compile FFmpeg 8
wget https://ffmpeg.org/releases/ffmpeg-8.0.tar.xz
tar xf ffmpeg-8.0.tar.xz
cd ffmpeg-8.0
./configure --enable-shared --disable-static
make -j$(nproc)
sudo make install
sudo ldconfig
```
**macOS**:
```bash
brew install ffmpeg
export PKG_CONFIG_PATH="/opt/homebrew/opt/ffmpeg/lib/pkgconfig:$PKG_CONFIG_PATH"
```
**Windows**:
Set environment variables:
```cmd
set FFMPEG_DIR=C:\path\to\ffmpeg
set PKG_CONFIG_PATH=C:\path\to\ffmpeg\lib\pkgconfig
```
#### "Unsupported codec" or video not playing
Make sure FFmpeg was compiled with the necessary codecs:
```bash
ffmpeg -codecs | grep h264 # Check for H.264
ffmpeg -codecs | grep vp9 # Check for VP9
```
### GPU/Rendering Issues
#### Black screen or no rendering
**Check GPU support**:
```bash
# Linux - check Vulkan
vulkaninfo | grep deviceName
# macOS - Metal is built-in on 10.13+
system_profiler SPDisplaysDataType
```
**Solutions**:
1. Update GPU drivers
2. Install Vulkan runtime (Linux)
3. Check console for wgpu errors
#### "No suitable GPU adapter found"
This usually means missing Vulkan/Metal support.
**Linux**: Install Vulkan drivers (see [Optional Dependencies](#optional-dependencies))
**macOS**: Requires macOS 10.13+ (Metal support)
**Windows**: Update GPU drivers
### Build Performance
#### Slow compilation times
**Solutions**:
1. Use `cargo check` instead of `cargo build` during development
2. Enable incremental compilation (enabled by default)
3. Use `mold` linker (Linux):
```bash
# Install mold
sudo apt install mold # Ubuntu 22.04+
# Use mold
mold -run cargo build
```
4. Increase parallel jobs:
```bash
cargo build -j 8 # Use 8 parallel jobs
```
#### Out of memory during compilation
**Solution**: Reduce parallel jobs:
```bash
cargo build -j 2 # Use only 2 parallel jobs
```
### Linker Errors
#### "undefined reference to..." or "cannot find -l..."
**Cause**: Missing system libraries.
**Solution**: Install all dependencies listed in [Platform-Specific Instructions](#platform-specific-instructions).
#### Windows: "LNK1181: cannot open input file"
**Cause**: FFmpeg libraries not found.
**Solution**:
1. Download FFmpeg shared build
2. Set `FFMPEG_DIR` environment variable
3. Add FFmpeg bin directory to PATH
## Development Builds
### Enable Audio Debug Logging
```bash
DAW_AUDIO_DEBUG=1 cargo run
```
Output includes:
- Buffer sizes
- Average/worst-case processing times
- Audio overruns/underruns
- Playhead position updates
### Disable Optimizations for Specific Crates
Edit the workspace root `Cargo.toml` (Cargo ignores `[profile.*]` sections in member crates):
```toml
[profile.dev.package.my-crate]
opt-level = 0 # No optimizations
```
**Warning**: Do not disable optimizations for `daw-backend` or audio-related crates, as this will cause audio dropouts.
### Build with Specific Features
```bash
# Build with all features
cargo build --all-features
# Build with no default features
cargo build --no-default-features
```
### Clean Build
Remove all build artifacts and start fresh:
```bash
cargo clean
cargo build
```
Useful when dependencies change or build cache becomes corrupted.
### Cross-Compilation
Cross-compiling is not currently documented but should be possible using `cross`:
```bash
cargo install cross
cross build --target x86_64-unknown-linux-gnu
```
See [cross documentation](https://github.com/cross-rs/cross) for details.
## Running Tests
```bash
# Run all tests
cargo test
# Run tests for specific package
cargo test -p lightningbeam-core
# Run with output
cargo test -- --nocapture
# Run specific test
cargo test test_name
```
## Building Documentation
Generate and open Rust API documentation:
```bash
cargo doc --open
```
This generates HTML documentation from code comments and opens it in your browser.
## Next Steps
After building successfully:
- See [CONTRIBUTING.md](../CONTRIBUTING.md) for development workflow
- See [ARCHITECTURE.md](../ARCHITECTURE.md) for system architecture
- See [docs/AUDIO_SYSTEM.md](AUDIO_SYSTEM.md) for audio engine details
- See [docs/UI_SYSTEM.md](UI_SYSTEM.md) for UI development

812
docs/RENDERING.md Normal file

@ -0,0 +1,812 @@
# GPU Rendering Architecture
This document describes Lightningbeam's GPU rendering pipeline, including Vello integration for vector graphics, custom WGSL shaders for waveforms, and wgpu integration patterns.
## Table of Contents
- [Overview](#overview)
- [Rendering Pipeline](#rendering-pipeline)
- [Vello Integration](#vello-integration)
- [Waveform Rendering](#waveform-rendering)
- [WGSL Shaders](#wgsl-shaders)
- [Uniform Buffer Alignment](#uniform-buffer-alignment)
- [Custom wgpu Integration](#custom-wgpu-integration)
- [Performance Optimization](#performance-optimization)
- [Debugging Rendering Issues](#debugging-rendering-issues)
## Overview
Lightningbeam uses GPU-accelerated rendering for high-performance 2D graphics:
- **Vello**: Compute shader-based 2D vector rendering
- **wgpu 27**: Cross-platform GPU API (Vulkan, Metal, D3D12)
- **egui-wgpu**: Integration layer between egui and wgpu
- **Custom WGSL shaders**: For specialized rendering (waveforms, effects)
### Supported Backends
- **Linux**: Vulkan (primary), OpenGL (fallback)
- **macOS**: Metal
- **Windows**: Vulkan, DirectX 12
## Rendering Pipeline
### High-Level Flow
```
┌─────────────────────────────────────────────────────────────┐
│ Application Frame │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. egui Layout Phase │
│ - Build UI tree │
│ - Collect paint primitives │
│ - Register wgpu callbacks │
│ │
│ 2. Custom GPU Rendering (via egui_wgpu::Callback) │
│ ┌────────────────────────────────────────────────┐ │
│ │ prepare(): │ │
│ │ - Build Vello scene from document │ │
│ │ - Update uniform buffers │ │
│ │ - Generate waveform mipmaps (if needed) │ │
│ └────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────┐ │
│ │ paint(): │ │
│ │ - Render Vello scene to texture │ │
│ │ - Render waveforms │ │
│ │ - Composite layers │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ 3. egui Paint │
│ - Render egui UI elements │
│ - Composite with custom rendering │
│ │
│ 4. Present to Screen │
│ │
└─────────────────────────────────────────────────────────────┘
```
### Render Pass Structure
```
Main Render Pass
├─> Clear screen
├─> Custom wgpu callbacks (Stage pane, etc.)
│ ├─> Vello vector rendering
│ └─> Waveform rendering
└─> egui UI rendering (text, widgets, overlays)
```
## Vello Integration
Vello is a GPU-accelerated 2D rendering engine that uses compute shaders for high-performance vector graphics.
### Vello Architecture
```
Document Shapes
Convert to kurbo paths
Build Vello Scene
Vello Renderer (compute shaders)
Render to GPU texture
Composite with UI
```
### Building a Vello Scene
```rust
use vello::Scene;
use vello::kurbo::Stroke;
use vello::peniko::{Brush, Fill};

fn build_vello_scene(document: &Document) -> Scene {
    // Recent Vello versions draw directly on `Scene`; the older
    // `SceneBuilder` type no longer exists.
    let mut scene = Scene::new();
    for layer in &document.layers {
        if let Layer::VectorLayer { clips, visible, .. } = layer {
            if !visible {
                continue;
            }
            for clip in clips {
                for shape_instance in &clip.shapes {
                    // Get transform for this shape
                    let transform = shape_instance.compute_world_transform();
                    let affine = to_vello_affine(transform);
                    // Convert shape to kurbo path
                    let path = shape_to_kurbo_path(&shape_instance.shape);
                    // Fill
                    if let Some(fill_color) = shape_instance.shape.fill {
                        let brush = Brush::Solid(to_peniko_color(fill_color));
                        scene.fill(Fill::NonZero, affine, &brush, None, &path);
                    }
                    // Stroke
                    if let Some(stroke) = &shape_instance.shape.stroke {
                        let brush = Brush::Solid(to_peniko_color(stroke.color));
                        let stroke_style = Stroke::new(stroke.width);
                        scene.stroke(&stroke_style, affine, &brush, None, &path);
                    }
                }
            }
        }
    }
    scene
}
```
### Shape to Kurbo Path Conversion
```rust
use kurbo::{BezPath, PathEl, Point};
fn shape_to_kurbo_path(shape: &Shape) -> BezPath {
let mut path = BezPath::new();
if shape.curves.is_empty() {
return path;
}
// Start at first point
path.move_to(Point::new(
shape.curves[0].start.x as f64,
shape.curves[0].start.y as f64,
));
// Add curves
for curve in &shape.curves {
match curve.curve_type {
CurveType::Linear => {
path.line_to(Point::new(
curve.end.x as f64,
curve.end.y as f64,
));
}
CurveType::Quadratic => {
path.quad_to(
Point::new(curve.control1.x as f64, curve.control1.y as f64),
Point::new(curve.end.x as f64, curve.end.y as f64),
);
}
CurveType::Cubic => {
path.curve_to(
Point::new(curve.control1.x as f64, curve.control1.y as f64),
Point::new(curve.control2.x as f64, curve.control2.y as f64),
Point::new(curve.end.x as f64, curve.end.y as f64),
);
}
}
}
// Close path if needed
if shape.closed {
path.close_path();
}
path
}
```
### Vello Renderer Setup
```rust
use vello::{Renderer, RendererOptions, RenderParams};
use wgpu;
pub struct VelloRenderer {
renderer: Renderer,
surface_format: wgpu::TextureFormat,
}
impl VelloRenderer {
pub fn new(device: &wgpu::Device, surface_format: wgpu::TextureFormat) -> Self {
let renderer = Renderer::new(
device,
RendererOptions {
surface_format: Some(surface_format),
use_cpu: false,
antialiasing_support: vello::AaSupport::all(),
num_init_threads: None,
},
).expect("Failed to create Vello renderer");
Self {
renderer,
surface_format,
}
}
pub fn render(
&mut self,
device: &wgpu::Device,
queue: &wgpu::Queue,
scene: &Scene,
texture: &wgpu::TextureView,
width: u32,
height: u32,
) {
let params = RenderParams {
base_color: peniko::Color::TRANSPARENT,
width,
height,
antialiasing_method: vello::AaConfig::Msaa16,
};
self.renderer
.render_to_texture(device, queue, scene, texture, &params)
.expect("Failed to render Vello scene");
}
}
```
## Waveform Rendering
Audio waveforms are rendered on the GPU using custom WGSL shaders with mipmapping for efficient zooming.
### Waveform GPU Resources
```rust
pub struct WaveformGPU {
// Waveform data texture (min/max per sample)
texture: wgpu::Texture,
texture_view: wgpu::TextureView,
// Mipmap chain for level-of-detail
mip_levels: Vec<wgpu::TextureView>,
// Render pipeline
pipeline: wgpu::RenderPipeline,
// Uniform buffer for view parameters
uniform_buffer: wgpu::Buffer,
bind_group: wgpu::BindGroup,
}
```
### Waveform Texture Format
Each texel stores min/max amplitude for a sample range:
```
Texture Format: Rgba16Float (4 channels, 16-bit float each)
- R channel: Left channel minimum amplitude in range [-1, 1]
- G channel: Left channel maximum amplitude in range [-1, 1]
- B channel: Right channel minimum amplitude in range [-1, 1]
- A channel: Right channel maximum amplitude in range [-1, 1]
Mip level 0: Per-sample min/max (1x)
Mip level 1: Per-4-sample min/max (1/4x)
Mip level 2: Per-16-sample min/max (1/16x)
Mip level 3: Per-64-sample min/max (1/64x)
...
Each mip level reduces by 4x, not 2x, for efficient zooming.
```
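A quick way to sanity-check this layout: level *m* covers 4^m samples per texel, so the texel count at each level is a ceiling division. A standalone sketch (helper names are illustrative, not part of the codebase):

```rust
// Each mip level covers 4x more samples per texel than the previous one.
fn samples_per_texel(mip_level: u32) -> u64 {
    4u64.pow(mip_level)
}

// Texels needed at a given mip level to cover `sample_count` samples.
fn texels_at_level(sample_count: u64, mip_level: u32) -> u64 {
    let per = samples_per_texel(mip_level);
    (sample_count + per - 1) / per // ceiling division
}

fn main() {
    assert_eq!(samples_per_texel(0), 1);  // mip 0: per-sample min/max
    assert_eq!(samples_per_texel(3), 64); // mip 3: 1/64x
    assert_eq!(texels_at_level(1_000_000, 3), 15_625);
    println!("ok");
}
```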
### Generating Waveform Texture
```rust
// Simplified mono variant: one (min, max) f32 pair per texel in a 1D
// Rg32Float texture. The full stereo path uses the 2D Rgba16Float layout
// described above; the upload pattern is the same.
fn generate_waveform_texture(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    audio_samples: &[f32],
) -> wgpu::Texture {
// Calculate mip levels
let width = audio_samples.len() as u32;
let mip_levels = (width as f32).log2().floor() as u32 + 1;
// Create texture
let texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some("Waveform Texture"),
size: wgpu::Extent3d {
width,
height: 1,
depth_or_array_layers: 1,
},
mip_level_count: mip_levels,
sample_count: 1,
dimension: wgpu::TextureDimension::D1,
format: wgpu::TextureFormat::Rg32Float,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
view_formats: &[],
});
// Upload base level (per-sample min/max)
let mut data: Vec<f32> = Vec::with_capacity(width as usize * 2);
for &sample in audio_samples {
data.push(sample); // min
data.push(sample); // max
}
    queue.write_texture(
        // wgpu 24+ renamed ImageCopyTexture / ImageDataLayout to these
        // TexelCopy* names.
        wgpu::TexelCopyTextureInfo {
            texture: &texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        bytemuck::cast_slice(&data),
        wgpu::TexelCopyBufferLayout {
            offset: 0,
            bytes_per_row: Some(width * 8), // 2 floats * 4 bytes per texel
            rows_per_image: None,
        },
wgpu::Extent3d {
width,
height: 1,
depth_or_array_layers: 1,
},
);
texture
}
```
### Mipmap Generation (Compute Shader)
```rust
// Compute shader generates mipmaps by taking min/max of 4 parent samples
// Each mip level is 4x smaller than the previous level
fn generate_mipmaps(
device: &wgpu::Device,
queue: &wgpu::Queue,
texture: &wgpu::Texture,
base_width: u32,
base_height: u32,
mip_count: u32,
base_sample_count: u32,
) -> Vec<wgpu::CommandBuffer> {
if mip_count <= 1 {
return Vec::new();
}
let mut encoder = device.create_command_encoder(&Default::default());
let mut src_width = base_width;
let mut src_height = base_height;
let mut src_sample_count = base_sample_count;
for level in 1..mip_count {
// Dimensions halve (2x2 texels -> 1 texel)
let dst_width = (src_width / 2).max(1);
let dst_height = (src_height / 2).max(1);
// But sample count reduces by 4x (4 samples -> 1)
let dst_sample_count = (src_sample_count + 3) / 4;
let src_view = texture.create_view(&wgpu::TextureViewDescriptor {
base_mip_level: level - 1,
mip_level_count: Some(1),
..Default::default()
});
let dst_view = texture.create_view(&wgpu::TextureViewDescriptor {
base_mip_level: level,
mip_level_count: Some(1),
..Default::default()
});
let params = MipgenParams {
src_width,
dst_width,
src_sample_count,
_pad: 0,
};
        // `create_buffer_init` comes from `wgpu::util::DeviceExt`.
        let params_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
            label: Some("Mipgen Params"),
            contents: bytemuck::cast_slice(&[params]),
            usage: wgpu::BufferUsages::UNIFORM,
        });
        // `mipgen_bind_group_layout` and `mipgen_pipeline` are assumed to be
        // created once at startup from waveform_mipgen.wgsl.
        let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: None,
            layout: &mipgen_bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&src_view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::TextureView(&dst_view),
},
wgpu::BindGroupEntry {
binding: 2,
resource: params_buffer.as_entire_binding(),
},
],
});
// Dispatch compute shader
let total_dst_texels = dst_width * dst_height;
let workgroup_count = (total_dst_texels + 63) / 64;
let mut pass = encoder.begin_compute_pass(&Default::default());
pass.set_pipeline(&mipgen_pipeline);
pass.set_bind_group(0, &bind_group, &[]);
pass.dispatch_workgroups(workgroup_count, 1, 1);
drop(pass);
src_width = dst_width;
src_height = dst_height;
src_sample_count = dst_sample_count;
}
vec![encoder.finish()]
}
```
## WGSL Shaders
### Waveform Render Shader
```wgsl
// waveform.wgsl
struct WaveformParams {
view_matrix: mat4x4<f32>, // 64 bytes
viewport_size: vec2<f32>, // 8 bytes
zoom: f32, // 4 bytes
_pad1: f32, // 4 bytes (padding)
tint_color: vec4<f32>, // 16 bytes (requires 16-byte alignment)
// Total: 96 bytes
}
@group(0) @binding(0) var<uniform> params: WaveformParams;
@group(0) @binding(1) var waveform_texture: texture_1d<f32>;
@group(0) @binding(2) var waveform_sampler: sampler;
struct VertexOutput {
@builtin(position) position: vec4<f32>,
@location(0) uv: vec2<f32>,
}
@vertex
fn vs_main(@builtin(vertex_index) vertex_index: u32) -> VertexOutput {
// Generate fullscreen quad
var positions = array<vec2<f32>, 6>(
vec2(-1.0, -1.0),
vec2( 1.0, -1.0),
vec2( 1.0, 1.0),
vec2(-1.0, -1.0),
vec2( 1.0, 1.0),
vec2(-1.0, 1.0),
);
var output: VertexOutput;
output.position = vec4(positions[vertex_index], 0.0, 1.0);
output.uv = (positions[vertex_index] + 1.0) * 0.5;
return output;
}
@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
// Sample waveform texture
let sample_pos = input.uv.x;
let waveform = textureSample(waveform_texture, waveform_sampler, sample_pos);
// waveform.r = min amplitude, waveform.g = max amplitude
let min_amp = waveform.r;
let max_amp = waveform.g;
    // Map amplitude to vertical position (amplitudes are in [-1, 1], so
    // min_amp lands below the center and max_amp above it)
    let center_y = 0.5;
    let min_y = center_y + min_amp * 0.5;
    let max_y = center_y + max_amp * 0.5;
// Check if pixel is within waveform range
if (input.uv.y >= min_y && input.uv.y <= max_y) {
return params.tint_color;
} else {
return vec4(0.0, 0.0, 0.0, 0.0); // Transparent
}
}
```
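The amplitude-to-vertical mapping can be spot-checked on the CPU; this sketch assumes min/max amplitudes in [-1, 1] mapped linearly about the vertical center:

```rust
// CPU mirror of the fragment shader's amplitude -> vertical span mapping.
fn amp_range(min_amp: f32, max_amp: f32) -> (f32, f32) {
    let center_y = 0.5;
    (center_y + min_amp * 0.5, center_y + max_amp * 0.5)
}

fn main() {
    // A full-scale sample spans the whole [0, 1] vertical range...
    assert_eq!(amp_range(-1.0, 1.0), (0.0, 1.0));
    // ...while silence collapses to the center line.
    assert_eq!(amp_range(0.0, 0.0), (0.5, 0.5));
    println!("ok");
}
```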
### Mipmap Generation Shader
```wgsl
// waveform_mipgen.wgsl
struct MipgenParams {
src_width: u32,
dst_width: u32,
src_sample_count: u32,
}
@group(0) @binding(0) var src_texture: texture_2d<f32>;
@group(0) @binding(1) var dst_texture: texture_storage_2d<rgba16float, write>;
@group(0) @binding(2) var<uniform> params: MipgenParams;
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
let linear_index = global_id.x;
// Convert linear index to 2D coordinates
let dst_x = linear_index % params.dst_width;
let dst_y = linear_index / params.dst_width;
// Each dst texel corresponds to 4 src samples (not 4 src texels)
// But 2D texture layout halves in each dimension
let src_x = dst_x * 2u;
let src_y = dst_y * 2u;
// Sample 4 texels from parent level (2x2 block)
let s00 = textureLoad(src_texture, vec2<i32>(i32(src_x), i32(src_y)), 0);
let s10 = textureLoad(src_texture, vec2<i32>(i32(src_x + 1u), i32(src_y)), 0);
let s01 = textureLoad(src_texture, vec2<i32>(i32(src_x), i32(src_y + 1u)), 0);
let s11 = textureLoad(src_texture, vec2<i32>(i32(src_x + 1u), i32(src_y + 1u)), 0);
// Compute min/max across all 4 samples for each channel
let left_min = min(min(s00.r, s10.r), min(s01.r, s11.r));
let left_max = max(max(s00.g, s10.g), max(s01.g, s11.g));
let right_min = min(min(s00.b, s10.b), min(s01.b, s11.b));
let right_max = max(max(s00.a, s10.a), max(s01.a, s11.a));
// Write to destination mip level
textureStore(dst_texture, vec2<i32>(i32(dst_x), i32(dst_y)),
vec4(left_min, left_max, right_min, right_max));
}
```
## Uniform Buffer Alignment
WGSL has strict alignment requirements. The most common issue is `vec4<f32>` requiring 16-byte alignment.
### Alignment Rules
```rust
// ❌ Bad: tint_color not aligned to 16 bytes
#[repr(C)]
struct WaveformParams {
view_matrix: [f32; 16], // 64 bytes (offset 0)
viewport_size: [f32; 2], // 8 bytes (offset 64)
zoom: f32, // 4 bytes (offset 72)
tint_color: [f32; 4], // 16 bytes (offset 76) ❌ Not 16-byte aligned!
}
// ✅ Good: explicit padding for alignment
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct WaveformParams {
view_matrix: [f32; 16], // 64 bytes (offset 0)
viewport_size: [f32; 2], // 8 bytes (offset 64)
zoom: f32, // 4 bytes (offset 72)
_pad1: f32, // 4 bytes (offset 76) - padding
tint_color: [f32; 4], // 16 bytes (offset 80) ✅ 16-byte aligned!
}
// Total size: 96 bytes
```
### Common Alignment Requirements
| WGSL Type | Size | Alignment |
|-----------|------|-----------|
| `f32` | 4 bytes | 4 bytes |
| `vec2<f32>` | 8 bytes | 8 bytes |
| `vec3<f32>` | 12 bytes | 16 bytes ⚠️ |
| `vec4<f32>` | 16 bytes | 16 bytes |
| `mat4x4<f32>` | 64 bytes | 16 bytes |
| Struct | Sum of members | 16 bytes (uniform buffers) |
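The offsets in the table compose mechanically: round each field's offset up to its alignment, then advance by its size. A small standalone check of the `WaveformParams` layout (plain Rust, no GPU needed):

```rust
// Round `offset` up to the next multiple of `align`.
fn align_to(offset: usize, align: usize) -> usize {
    (offset + align - 1) / align * align
}

fn main() {
    // Walk the WaveformParams layout using the table's size/alignment rules.
    let mut off = 0;
    off = align_to(off, 16); let mat = off;      off += 64; // mat4x4<f32>
    off = align_to(off, 8);  let viewport = off; off += 8;  // vec2<f32>
    off = align_to(off, 4);  let zoom = off;     off += 4;  // f32
    off = align_to(off, 16); let tint = off;     off += 16; // vec4<f32>
    let size = align_to(off, 16); // struct size rounds up to 16
    assert_eq!((mat, viewport, zoom, tint, size), (0, 64, 72, 80, 96));
    println!("ok");
}
```

Note how `tint` lands at offset 80, not 76: the rounding step is exactly the explicit `_pad1` field in the corrected struct.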
### Debug Alignment Issues
```rust
// Use static_assertions to catch size bugs at compile time
use static_assertions::const_assert_eq;
const_assert_eq!(std::mem::size_of::<WaveformParams>(), 96);
// Note: align_of for a #[repr(C)] struct of f32 arrays is 4, not 16; what
// must match WGSL is the field *offsets*. Add #[repr(C, align(16))] only if
// you also need the Rust value itself to be 16-byte aligned.
// Runtime validation
fn validate_uniform_buffer<T: bytemuck::Pod>(_data: &T) {
    let size = std::mem::size_of::<T>();
    assert!(size % 16 == 0, "Uniform buffer size must be a multiple of 16");
}
```
## Custom wgpu Integration
### egui-wgpu Callback Pattern
```rust
use egui_wgpu::CallbackTrait;
struct CustomRenderCallback {
// Data needed for rendering
scene: Scene,
params: UniformData,
}
impl CallbackTrait for CustomRenderCallback {
fn prepare(
&self,
device: &wgpu::Device,
queue: &wgpu::Queue,
_screen_descriptor: &egui_wgpu::ScreenDescriptor,
_encoder: &mut wgpu::CommandEncoder,
resources: &mut egui_wgpu::CallbackResources,
) -> Vec<wgpu::CommandBuffer> {
// Update GPU resources (buffers, textures, etc.)
// This runs before rendering
        // Get or create renderer (CallbackResources is a type map keyed by type)
        if resources.get::<MyRenderer>().is_none() {
            resources.insert(MyRenderer::new(device));
        }
        let renderer: &mut MyRenderer = resources.get_mut().unwrap();
// Update uniform buffer
queue.write_buffer(&renderer.uniform_buffer, 0, bytemuck::bytes_of(&self.params));
vec![] // Return additional command buffers if needed
}
fn paint<'a>(
&'a self,
_info: egui::PaintCallbackInfo,
render_pass: &mut wgpu::RenderPass<'a>,
resources: &'a egui_wgpu::CallbackResources,
) {
// Actual rendering
let renderer: &MyRenderer = resources.get().unwrap();
render_pass.set_pipeline(&renderer.pipeline);
render_pass.set_bind_group(0, &renderer.bind_group, &[]);
render_pass.draw(0..6, 0..1); // Draw fullscreen quad
}
}
```
### Registering Callback in egui
```rust
// In Stage pane render method
let callback = egui_wgpu::Callback::new_paint_callback(
rect,
CustomRenderCallback {
scene: self.build_scene(document),
params: self.compute_params(),
},
);
ui.painter().add(callback);
```
## Performance Optimization
### Minimize GPU↔CPU Transfer
```rust
// ❌ Bad: Update uniform buffer every frame
for frame in frames {
queue.write_buffer(&uniform_buffer, 0, &params);
render();
}
// ✅ Good: Only update when changed
if params_changed {
queue.write_buffer(&uniform_buffer, 0, &params);
}
render();
```
### Reuse GPU Resources
```rust
// ✅ Good: Reuse textures and buffers
struct WaveformCache {
textures: HashMap<Uuid, wgpu::Texture>,
}
impl WaveformCache {
    fn get_or_create(
        &mut self,
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        clip_id: Uuid,
        audio_data: &[f32],
    ) -> &wgpu::Texture {
        self.textures.entry(clip_id).or_insert_with(|| {
            generate_waveform_texture(device, queue, audio_data)
        })
    }
}
```
### Batch Draw Calls
```rust
// ❌ Bad: One draw call per shape
for shape in shapes {
render_pass.set_bind_group(0, &shape.bind_group, &[]);
render_pass.draw(0..shape.vertex_count, 0..1);
}
// ✅ Good: Batch into single draw call
let batched_vertices = batch_shapes(shapes);
render_pass.set_bind_group(0, &batched_bind_group, &[]);
render_pass.draw(0..batched_vertices.len() as u32, 0..1);
```
### Use Mipmaps for Zooming
```rust
// ✅ Good: Select appropriate mip level based on zoom
let mip_level = ((1.0 / zoom).log2().floor() as u32).min(max_mip_level);
let texture_view = texture.create_view(&wgpu::TextureViewDescriptor {
base_mip_level: mip_level,
mip_level_count: Some(1),
..Default::default()
});
```
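For intuition, the selection formula maps zoom factors to levels roughly like this (`mip_for_zoom` is a hypothetical standalone helper; the `.max(0.0)` guards zoom factors above 1:1, where the base level should always be used):

```rust
// Pick the mip level whose per-texel sample coverage best matches the zoom.
fn mip_for_zoom(zoom: f32, max_mip_level: u32) -> u32 {
    ((1.0 / zoom).log2().floor().max(0.0) as u32).min(max_mip_level)
}

fn main() {
    assert_eq!(mip_for_zoom(1.0, 10), 0);  // 1:1 zoom reads the base level
    assert_eq!(mip_for_zoom(0.25, 10), 2); // zoomed out 4x -> coarser level
    assert_eq!(mip_for_zoom(0.01, 4), 4);  // clamped to the last mip level
    println!("ok");
}
```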
## Debugging Rendering Issues
### Enable wgpu Validation
```rust
// wgpu 24+ takes the descriptor by reference; unlisted fields use defaults.
let instance = wgpu::Instance::new(&wgpu::InstanceDescriptor {
    backends: wgpu::Backends::all(),
    flags: wgpu::InstanceFlags::debugging(), // enable validation + debug labels
    ..Default::default()
});
```
### Check for Errors
```rust
// Set error handler
device.on_uncaptured_error(Box::new(|error| {
eprintln!("wgpu error: {:?}", error);
}));
```
### Capture GPU Frame
**Linux** (RenderDoc):
```bash
renderdoccmd capture ./lightningbeam-editor
```
**macOS** (Xcode):
- Run with GPU Frame Capture enabled
- Trigger capture with Cmd+Option+G
### Common Issues
#### Black Screen
- Check that vertex shader outputs correct clip-space coordinates
- Verify texture bindings are correct
- Check that render pipeline format matches surface format
#### Validation Errors
- Check uniform buffer alignment (see [Uniform Buffer Alignment](#uniform-buffer-alignment))
- Verify texture formats match shader expectations
- Ensure bind groups match pipeline layout
#### Performance Issues
- Use GPU profiler (RenderDoc, Xcode)
- Check for redundant buffer uploads
- Profile shader performance
- Reduce draw call count via batching
## Related Documentation
- [ARCHITECTURE.md](../ARCHITECTURE.md) - Overall system architecture
- [docs/UI_SYSTEM.md](UI_SYSTEM.md) - UI and pane integration
- [CONTRIBUTING.md](../CONTRIBUTING.md) - Development workflow

848
docs/UI_SYSTEM.md Normal file

@ -0,0 +1,848 @@
# UI System Architecture
This document describes Lightningbeam's UI architecture, including the pane system, tool system, GPU integration, and patterns for extending the UI with new features.
## Table of Contents
- [Overview](#overview)
- [Pane System](#pane-system)
- [Shared State](#shared-state)
- [Two-Phase Dispatch](#two-phase-dispatch)
- [ID Collision Avoidance](#id-collision-avoidance)
- [Tool System](#tool-system)
- [GPU Integration](#gpu-integration)
- [Adding New Panes](#adding-new-panes)
- [Adding New Tools](#adding-new-tools)
- [Event Handling](#event-handling)
- [Best Practices](#best-practices)
## Overview
Lightningbeam's UI is built with **egui**, an immediate-mode GUI framework. Unlike retained-mode frameworks (Qt, GTK), immediate-mode rebuilds the UI every frame by running code that describes what should be displayed.
### Key Technologies
- **egui 0.33.3**: Immediate-mode GUI framework
- **eframe**: Application framework wrapping egui
- **winit**: Cross-platform windowing
- **Vello**: GPU-accelerated 2D vector rendering
- **wgpu**: Low-level GPU API
- **egui-wgpu**: Integration layer between egui and wgpu
### Immediate Mode Overview
```rust
// Immediate mode: UI is described every frame
fn render(&mut self, ui: &mut egui::Ui) {
if ui.button("Click me").clicked() {
self.counter += 1;
}
ui.label(format!("Count: {}", self.counter));
}
```
**Benefits**:
- Simple mental model (just describe what you see)
- No manual synchronization between state and UI
- Easy to compose and reuse components
**Considerations**:
- Must avoid expensive operations in render code
- IDs needed for stateful widgets (handled automatically in most cases)
## Pane System
Lightningbeam uses a flexible pane system where the UI is composed of independent, reusable panes (Stage, Timeline, Asset Library, etc.).
### Pane Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Main Application │
│ (LightningbeamApp) │
├─────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────────────────────────────────────┐ │
│ │ Pane Tree (egui_tiles) │ │
│ │ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
│ │ │ Stage │ │ Timeline │ │ Asset │ │ │
│ │ │ Pane │ │ Pane │ │ Library │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ │ │
│ │ │ │
│ │ Each pane: │ │
│ │ - Renders its UI │ │
│ │ - Registers actions with SharedPaneState │ │
│ │ - Accesses shared document state │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────┐ │
│ │ SharedPaneState │ │
│ │ - Document │ │
│ │ - Selected tool │ │
│ │ - Pending actions │ │
│ │ - Audio system │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ After all panes render: │
│ - Execute pending actions │
│ - Update undo/redo stacks │
│ - Synchronize with audio engine │
│ │
└─────────────────────────────────────────────────────────┘
```
### PaneInstance Enum
All panes are variants of the `PaneInstance` enum:
```rust
// In lightningbeam-editor/src/panes/mod.rs
pub enum PaneInstance {
Stage(Stage),
Timeline(Timeline),
AssetLibrary(AssetLibrary),
InfoPanel(InfoPanel),
VirtualPiano(VirtualPiano),
Toolbar(Toolbar),
NodeEditor(NodeEditor),
PianoRoll(PianoRoll),
Outliner(Outliner),
PresetBrowser(PresetBrowser),
}
impl PaneInstance {
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
match self {
PaneInstance::Stage(stage) => stage.render(ui, shared_state),
PaneInstance::Timeline(timeline) => timeline.render(ui, shared_state),
PaneInstance::AssetLibrary(lib) => lib.render(ui, shared_state),
// ... dispatch to specific pane
}
}
pub fn title(&self) -> &str {
match self {
PaneInstance::Stage(_) => "Stage",
PaneInstance::Timeline(_) => "Timeline",
// ...
}
}
}
```
### Individual Pane Structure
Each pane is a struct with its own state and a `render` method:
```rust
pub struct MyPane {
// Pane-specific state
scroll_offset: f32,
selected_item: Option<usize>,
// ... other state
}
impl MyPane {
pub fn new() -> Self {
Self {
scroll_offset: 0.0,
selected_item: None,
}
}
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
// Render pane UI
ui.heading("My Pane");
// Access shared state
let document = &shared_state.document;
// Create actions
if ui.button("Do something").clicked() {
let action = Box::new(MyAction { /* ... */ });
shared_state.pending_actions.push(action);
}
}
}
```
### Key Panes
Located in `lightningbeam-editor/src/panes/`:
- **stage.rs** (214KB): Main canvas for drawing and transform tools
- **timeline.rs** (84KB): Multi-track timeline with clip editing
- **asset_library.rs** (70KB): Asset browser with drag-to-timeline
- **infopanel.rs** (31KB): Context-sensitive property editor
- **virtual_piano.rs** (31KB): On-screen MIDI keyboard
- **toolbar.rs** (9KB): Tool palette
## Shared State
`SharedPaneState` is passed to all panes during rendering to share data and coordinate actions.
### SharedPaneState Structure
```rust
pub struct SharedPaneState {
// Document state
pub document: Document,
pub undo_stack: Vec<Box<dyn Action>>,
pub redo_stack: Vec<Box<dyn Action>>,
// Tool state
pub selected_tool: Tool,
pub tool_state: ToolState,
// Actions to execute after rendering
pub pending_actions: Vec<Box<dyn Action>>,
// Audio engine
pub audio_system: AudioSystem,
pub playhead_position: f64,
pub is_playing: bool,
// Selection state
pub selected_clips: HashSet<Uuid>,
pub selected_shapes: HashSet<Uuid>,
// Clipboard
pub clipboard: Option<ClipboardData>,
// UI state
pub show_grid: bool,
pub snap_to_grid: bool,
pub grid_size: f32,
}
```
### Accessing Shared State
```rust
impl MyPane {
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
// Read from document
let layer_count = shared_state.document.layers.len();
ui.label(format!("Layers: {}", layer_count));
// Check tool state
if shared_state.selected_tool == Tool::Select {
// ... render selection-specific UI
}
// Check playback state
if shared_state.is_playing {
ui.label("▶ Playing");
}
}
}
```
## Two-Phase Dispatch
Panes receive `&mut SharedPaneState`, so direct mutation would compile, but it would bypass the undo/redo system and make mid-frame changes visible only to panes rendered later in the same frame. Instead, panes register actions to be executed after all panes have rendered.
### Why Two-Phase?
```rust
// Direct mutation works, but it skips undo/redo and changes state mid-frame:
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
    if ui.button("Add layer").clicked() {
        // ❌ Not undoable, and panes rendered after this one see new state
        shared_state.document.layers.push(Layer::new());
    }
}
```
### Solution: Pending Actions
```rust
// Phase 1: Register action during render
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
if ui.button("Add layer").clicked() {
let action = Box::new(AddLayerAction::new());
shared_state.pending_actions.push(action);
}
}
// Phase 2: Execute after all panes rendered (in main app)
for action in shared_state.pending_actions.drain(..) {
action.apply(&mut shared_state.document);
shared_state.undo_stack.push(action);
}
```
### Action Trait
All actions implement the `Action` trait:
```rust
pub trait Action: Send {
fn apply(&mut self, document: &mut Document);
fn undo(&mut self, document: &mut Document);
fn redo(&mut self, document: &mut Document);
}
```
Example action:
```rust
pub struct AddLayerAction {
layer_id: Uuid,
layer_type: LayerType,
}
impl Action for AddLayerAction {
fn apply(&mut self, document: &mut Document) {
let layer = Layer::new(self.layer_id, self.layer_type);
document.layers.push(layer);
}
fn undo(&mut self, document: &mut Document) {
document.layers.retain(|l| l.id != self.layer_id);
}
fn redo(&mut self, document: &mut Document) {
self.apply(document);
}
}
```
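To make the `apply`/`undo` contract concrete, here is a self-contained toy version of the two-phase flow (the real `Document` and `Action` types are richer; `AddLayer` and the `Vec<&str>` document here are illustrative stand-ins):

```rust
// Toy stand-ins for Document and Action, just to exercise the undo contract.
trait Action {
    fn apply(&mut self, doc: &mut Vec<&'static str>);
    fn undo(&mut self, doc: &mut Vec<&'static str>);
}

struct AddLayer(&'static str);

impl Action for AddLayer {
    fn apply(&mut self, doc: &mut Vec<&'static str>) {
        doc.push(self.0);
    }
    fn undo(&mut self, doc: &mut Vec<&'static str>) {
        doc.retain(|l| *l != self.0);
    }
}

fn main() {
    let mut doc: Vec<&'static str> = Vec::new();
    let mut undo_stack: Vec<Box<dyn Action>> = Vec::new();

    // Phase 2: drain pending actions, apply each, record it for undo.
    let pending: Vec<Box<dyn Action>> = vec![Box::new(AddLayer("background"))];
    for mut a in pending {
        a.apply(&mut doc);
        undo_stack.push(a);
    }
    assert_eq!(doc, vec!["background"]);

    // Undo restores the previous state.
    if let Some(mut a) = undo_stack.pop() {
        a.undo(&mut doc);
    }
    assert!(doc.is_empty());
    println!("ok");
}
```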
## ID Collision Avoidance
egui uses IDs to track widget state across frames (e.g., scroll position, collapse state). When multiple instances of the same pane exist, IDs can collide.
### The Problem
```rust
// If two Timeline panes exist, they'll share the same ID
ui.collapsing("Track 1", |ui| {
// ... content
}); // ID is derived from label "Track 1"
```
Both timeline instances would have the same "Track 1" ID, causing state conflicts.
### Solution: Salt IDs with Node Path
Each pane has a unique node path (e.g., `"root/0/1/2"`). Salt all IDs with this path:
```rust
pub struct Timeline {
node_path: String, // Unique path for this pane instance
}
impl Timeline {
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
// Salt IDs with node path
ui.push_id(&self.node_path, |ui| {
// Now all IDs within this closure are unique to this instance
ui.collapsing("Track 1", |ui| {
// ... content
});
});
}
}
```
### Alternative: Per-Widget Salting
For individual widgets:
```rust
egui::CollapsingHeader::new("Track 1")
    .id_salt(&self.node_path) // salt this specific widget's ID
    .show(ui, |ui| {
        // ... content
    });
```
### Best Practice
**Always salt IDs in new panes** to support multiple instances:
```rust
impl NewPane {
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
ui.push_id(&self.node_path, |ui| {
// All rendering code goes here
});
}
}
```
## Tool System
Tools handle user input on the Stage pane (drawing, selection, transforms, etc.).
### Tool Enum
```rust
pub enum Tool {
Select,
Draw,
Rectangle,
Ellipse,
Line,
PaintBucket,
Transform,
Eyedropper,
}
```
### Tool State
```rust
pub struct ToolState {
// Generic tool state
pub mouse_pos: Pos2,
pub mouse_down: bool,
pub drag_start: Option<Pos2>,
// Tool-specific state
pub draw_points: Vec<Pos2>,
pub transform_mode: TransformMode,
pub paint_bucket_tolerance: f32,
}
```
### Tool Implementation
Tools implement the `ToolBehavior` trait:
```rust
pub trait ToolBehavior {
fn on_mouse_down(&mut self, pos: Pos2, shared_state: &mut SharedPaneState);
fn on_mouse_move(&mut self, pos: Pos2, shared_state: &mut SharedPaneState);
fn on_mouse_up(&mut self, pos: Pos2, shared_state: &mut SharedPaneState);
fn on_key(&mut self, key: Key, shared_state: &mut SharedPaneState);
fn render_overlay(&self, painter: &Painter);
}
```
Example: Rectangle tool:
```rust
pub struct RectangleTool {
start_pos: Option<Pos2>,
}
impl ToolBehavior for RectangleTool {
fn on_mouse_down(&mut self, pos: Pos2, _shared_state: &mut SharedPaneState) {
self.start_pos = Some(pos);
}
fn on_mouse_move(&mut self, pos: Pos2, _shared_state: &mut SharedPaneState) {
// Visual feedback handled in render_overlay
}
fn on_mouse_up(&mut self, pos: Pos2, shared_state: &mut SharedPaneState) {
if let Some(start) = self.start_pos.take() {
// Create rectangle shape
let rect = Rect::from_two_pos(start, pos);
let action = Box::new(AddShapeAction::rectangle(rect));
shared_state.pending_actions.push(action);
}
}
    fn render_overlay(&self, painter: &Painter) {
        if let Some(start) = self.start_pos {
            // Painter has no mouse accessor; query the egui context instead
            if let Some(current) = painter.ctx().pointer_latest_pos() {
                let rect = Rect::from_two_pos(start, current);
                painter.rect_stroke(rect, 0.0, Stroke::new(2.0, Color32::WHITE));
            }
        }
    }
}
```
### Tool Selection
```rust
// In Toolbar pane
if ui.button("✏ Draw").clicked() {
shared_state.selected_tool = Tool::Draw;
}
// In Stage pane
match shared_state.selected_tool {
Tool::Draw => self.draw_tool.on_mouse_move(pos, shared_state),
Tool::Select => self.select_tool.on_mouse_move(pos, shared_state),
// ...
}
```
## GPU Integration
The Stage pane uses custom wgpu rendering for vector graphics and waveforms.
### egui-wgpu Callbacks
```rust
// In Stage::render()
ui.painter().add(egui_wgpu::Callback::new_paint_callback(
rect,
StageCallback {
document: shared_state.document.clone(),
vello_renderer: self.vello_renderer.clone(),
waveform_renderer: self.waveform_renderer.clone(),
},
));
```
### Callback Implementation
```rust
struct StageCallback {
document: Document,
vello_renderer: Arc<Mutex<VelloRenderer>>,
waveform_renderer: Arc<Mutex<WaveformRenderer>>,
}
impl egui_wgpu::CallbackTrait for StageCallback {
    fn prepare(
        &self,
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        screen_descriptor: &egui_wgpu::ScreenDescriptor,
        encoder: &mut wgpu::CommandEncoder,
        resources: &mut egui_wgpu::CallbackResources,
    ) -> Vec<wgpu::CommandBuffer> {
// Prepare GPU resources
let mut vello = self.vello_renderer.lock().unwrap();
vello.prepare_scene(&self.document);
vec![]
}
fn paint<'a>(
&'a self,
info: egui::PaintCallbackInfo,
render_pass: &mut wgpu::RenderPass<'a>,
resources: &'a egui_wgpu::CallbackResources,
) {
// Render vector graphics
let vello = self.vello_renderer.lock().unwrap();
vello.render(render_pass);
// Render waveforms
let waveforms = self.waveform_renderer.lock().unwrap();
waveforms.render(render_pass);
}
}
```
### Vello Integration
Vello renders 2D vector graphics using GPU compute shaders:
```rust
use vello::kurbo::Affine;
use vello::peniko::Fill;
use vello::{Scene, SceneBuilder};
fn build_vello_scene(document: &Document) -> Scene {
let mut scene = Scene::new();
let mut builder = SceneBuilder::for_scene(&mut scene);
for layer in &document.layers {
if let Layer::VectorLayer { clips, .. } = layer {
for clip in clips {
for shape in &clip.shapes {
// Convert shape to kurbo path
let path = shape.to_kurbo_path();
// Add to scene with fill/stroke
builder.fill(
Fill::NonZero,
Affine::IDENTITY,
&shape.fill_color,
None,
&path,
);
}
}
}
}
scene
}
```
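Rebuilding the scene from scratch every frame can get expensive for large documents. One mitigation (a sketch only — the `SceneCache` type and document revision counter below are hypothetical, not code from this repo) is to cache the built scene behind a revision number and rebuild only when the document actually changed:

```rust
/// Hypothetical scene cache keyed on a document revision counter.
/// `S` stands in for `vello::Scene`; the builder closure would call
/// `build_vello_scene` in real code.
struct SceneCache<S> {
    revision: u64,
    scene: Option<S>,
    rebuilds: u64, // instrumentation, useful for verifying cache behavior
}

impl<S> SceneCache<S> {
    fn new() -> Self {
        Self { revision: 0, scene: None, rebuilds: 0 }
    }

    /// Return the cached scene, rebuilding only when `doc_revision` changed
    /// or nothing has been built yet.
    fn get_or_build(&mut self, doc_revision: u64, build: impl FnOnce() -> S) -> &S {
        if self.scene.is_none() || self.revision != doc_revision {
            self.scene = Some(build());
            self.revision = doc_revision;
            self.rebuilds += 1;
        }
        self.scene.as_ref().unwrap()
    }
}

fn main() {
    let mut cache: SceneCache<Vec<&'static str>> = SceneCache::new();
    cache.get_or_build(1, || vec!["shape-a"]);
    cache.get_or_build(1, || vec!["shape-a"]); // same revision: cache hit
    cache.get_or_build(2, || vec!["shape-a", "shape-b"]); // document changed
    assert_eq!(cache.rebuilds, 2);
    println!("rebuilds = {}", cache.rebuilds);
}
```

This assumes the document bumps a revision counter on every mutation, which fits naturally with the action system since every edit flows through `pending_actions`.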
## Adding New Panes
### Step 1: Create Pane Struct
```rust
// In lightningbeam-editor/src/panes/my_pane.rs
pub struct MyPane {
node_path: String,
// Pane-specific state
selected_index: usize,
scroll_offset: f32,
}
impl MyPane {
pub fn new(node_path: String) -> Self {
Self {
node_path,
selected_index: 0,
scroll_offset: 0.0,
}
}
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
// IMPORTANT: Salt IDs with node path
ui.push_id(&self.node_path, |ui| {
ui.heading("My Pane");
// Render pane content
// ...
});
}
}
```
### Step 2: Add to PaneInstance Enum
```rust
// In lightningbeam-editor/src/panes/mod.rs
pub enum PaneInstance {
// ... existing variants
MyPane(MyPane),
}
impl PaneInstance {
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
match self {
// ... existing cases
PaneInstance::MyPane(pane) => pane.render(ui, shared_state),
}
}
pub fn title(&self) -> &str {
match self {
// ... existing cases
PaneInstance::MyPane(_) => "My Pane",
}
}
}
```
### Step 3: Add to Menu
```rust
// In main application
if ui.button("My Pane").clicked() {
let pane = PaneInstance::MyPane(MyPane::new(generate_node_path()));
app.add_pane(pane);
}
```
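`generate_node_path()` above must return a string that is unique per pane instance, since it doubles as the egui ID salt. One minimal scheme (a sketch; the real implementation may instead derive paths from the pane tree) is a process-wide counter:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical implementation of the `generate_node_path` helper
// referenced in Step 3; the actual codebase may differ.
static NEXT_PANE_ID: AtomicU64 = AtomicU64::new(0);

fn generate_node_path() -> String {
    // fetch_add is atomic, so paths stay unique even across threads
    let n = NEXT_PANE_ID.fetch_add(1, Ordering::Relaxed);
    format!("pane/{n}")
}

fn main() {
    let a = generate_node_path();
    let b = generate_node_path();
    assert_ne!(a, b); // each pane instance gets a distinct ID salt
    println!("{a} {b}");
}
```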
## Adding New Tools
### Step 1: Add to Tool Enum
```rust
pub enum Tool {
// ... existing tools
MyTool,
}
```
### Step 2: Implement Tool Behavior
```rust
pub struct MyToolState {
// Tool-specific state
start_pos: Option<Pos2>,
}
impl MyToolState {
pub fn handle_input(
&mut self,
response: &Response,
shared_state: &mut SharedPaneState,
) {
if response.clicked() {
self.start_pos = response.interact_pointer_pos();
}
if response.drag_released() {
if let Some(start) = self.start_pos.take() {
// Create action
let action = Box::new(MyAction { /* ... */ });
shared_state.pending_actions.push(action);
}
}
}
pub fn render_overlay(&self, painter: &Painter) {
// Draw tool-specific overlay
}
}
```
### Step 3: Add to Toolbar
```rust
// In Toolbar pane
if ui.button("🔧 My Tool").clicked() {
shared_state.selected_tool = Tool::MyTool;
}
```
### Step 4: Handle in Stage Pane
```rust
// In Stage pane
match shared_state.selected_tool {
// ... existing tools
Tool::MyTool => self.my_tool_state.handle_input(&response, shared_state),
}
// Render overlay
match shared_state.selected_tool {
// ... existing tools
Tool::MyTool => self.my_tool_state.render_overlay(&painter),
}
```
## Event Handling
### Mouse Events
```rust
let response = ui.allocate_rect(rect, Sense::click_and_drag());
if response.clicked() {
    // interact_pointer_pos() can be None (e.g. synthetic clicks),
    // so avoid unwrap here
    if let Some(pos) = response.interact_pointer_pos() {
        // Handle click at pos
    }
}
if response.dragged() {
let delta = response.drag_delta();
// Handle drag by delta
}
if response.drag_released() {
// Handle drag end
}
```
### Keyboard Events
```rust
ui.input(|i| {
if i.key_pressed(Key::Delete) {
// Delete selected items
}
if i.modifiers.ctrl && i.key_pressed(Key::Z) {
// Undo
}
if i.modifiers.ctrl && i.key_pressed(Key::Y) {
// Redo
}
});
```
### Drag and Drop
```rust
// Source (Asset Library)
let response = ui.label("Audio Clip");
if response.dragged() {
let payload = DragPayload::AudioClip(clip_id);
ui.memory_mut(|mem| {
mem.data.insert_temp(Id::new("drag_payload"), payload);
});
}
// Target (Timeline)
let response = ui.allocate_rect(rect, Sense::hover());
let pointer_released = ui.input(|i| i.pointer.any_released());
if response.hovered() && pointer_released {
    if let Some(payload) = ui.memory(|mem| mem.data.get_temp::<DragPayload>(Id::new("drag_payload"))) {
        // Handle drop (only on release, not on every hovered frame)
        let action = Box::new(AddClipAction { clip_id: payload.clip_id(), position });
        shared_state.pending_actions.push(action);
        // Remove the payload so the drop fires only once
        ui.memory_mut(|mem| mem.data.remove::<DragPayload>(Id::new("drag_payload")));
    }
}
```
## Best Practices
### 1. Always Salt IDs
```rust
// ✅ Good
ui.push_id(&self.node_path, |ui| {
// All rendering here
});
// ❌ Bad (ID collisions if multiple instances)
ui.collapsing("Settings", |ui| {
// ...
});
```
### 2. Use Pending Actions
```rust
// ✅ Good
shared_state.pending_actions.push(Box::new(action));
// ❌ Bad (borrowing conflicts)
shared_state.document.layers.push(layer);
```
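The queue is drained once per frame after all panes have rendered, at which point no pane holds a borrow of the document. A minimal self-contained sketch of the pattern (the real `Action` trait and `Document` live in lightningbeam-core; the types below are illustrative stand-ins):

```rust
// Illustrative stand-ins for the lightningbeam-core types.
trait Action {
    fn apply(&self, doc: &mut Document);
}

struct Document {
    layers: Vec<String>,
}

struct AddLayerAction {
    name: String,
}

impl Action for AddLayerAction {
    fn apply(&self, doc: &mut Document) {
        doc.layers.push(self.name.clone());
    }
}

struct SharedPaneState {
    document: Document,
    pending_actions: Vec<Box<dyn Action>>,
}

impl SharedPaneState {
    /// Called once per frame after every pane has rendered: drain the
    /// queue and mutate the document while no pane borrows are live.
    fn apply_pending(&mut self) {
        for action in self.pending_actions.drain(..) {
            action.apply(&mut self.document);
        }
    }
}

fn main() {
    let mut state = SharedPaneState {
        document: Document { layers: vec![] },
        pending_actions: vec![Box::new(AddLayerAction { name: "Layer 1".into() })],
    };
    state.apply_pending();
    assert_eq!(state.document.layers, vec!["Layer 1".to_string()]);
    println!("{:?}", state.document.layers);
}
```

Deferring mutations this way is also what makes undo/redo straightforward: every document change passes through exactly one choke point where it can be recorded.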
### 3. Split Borrows with std::mem::take
```rust
// ✅ Good
let mut clips = std::mem::take(&mut self.clips);
for clip in &mut clips {
self.render_clip(ui, clip); // Can borrow self immutably
}
self.clips = clips;
// ❌ Bad (can't borrow self while iterating clips)
for clip in &mut self.clips {
self.render_clip(ui, clip); // Error!
}
```
### 4. Avoid Expensive Operations in Render
```rust
// ❌ Bad (heavy computation every frame)
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
let thumbnail = self.generate_thumbnail(); // Expensive!
ui.image(thumbnail);
}
// ✅ Good (cache result)
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
if self.thumbnail_cache.is_none() {
self.thumbnail_cache = Some(self.generate_thumbnail());
}
ui.image(self.thumbnail_cache.as_ref().unwrap());
}
```
### 5. Handle Missing State Gracefully
```rust
// ✅ Good
if let Some(layer) = document.layers.get(layer_index) {
// Render layer
} else {
ui.label("Layer not found");
}
// ❌ Bad (panics if layer missing)
let layer = &document.layers[layer_index]; // May panic!
```
## Related Documentation
- [ARCHITECTURE.md](../ARCHITECTURE.md) - Overall system architecture
- [docs/AUDIO_SYSTEM.md](AUDIO_SYSTEM.md) - Audio engine integration
- [docs/RENDERING.md](RENDERING.md) - GPU rendering details
- [CONTRIBUTING.md](../CONTRIBUTING.md) - Development workflow