diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md new file mode 100644 index 0000000..7da90d8 --- /dev/null +++ b/ARCHITECTURE.md @@ -0,0 +1,538 @@ +# Lightningbeam Architecture + +This document provides a comprehensive overview of Lightningbeam's architecture, design decisions, and component interactions. + +## Table of Contents + +- [System Overview](#system-overview) +- [Technology Stack](#technology-stack) +- [Component Architecture](#component-architecture) +- [Data Flow](#data-flow) +- [Rendering Pipeline](#rendering-pipeline) +- [Audio Architecture](#audio-architecture) +- [Key Design Decisions](#key-design-decisions) +- [Directory Structure](#directory-structure) + +## System Overview + +Lightningbeam is a 2D multimedia editor combining vector animation, audio production, and video editing. The application is built as a pure Rust desktop application using immediate-mode GUI (egui) with GPU-accelerated vector rendering (Vello). + +### High-Level Architecture + +``` +┌────────────────────────────────────────────────────────────┐ +│ Lightningbeam Editor │ +│ (egui UI) │ +├────────────────────────────────────────────────────────────┤ +│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ +│ │ Stage │ │ Timeline │ │ Asset │ │ Info │ │ +│ │ Pane │ │ Pane │ │ Library │ │ Panel │ │ +│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ +│ │ +│ ┌──────────────────────────────────────────────────────┐ │ +│ │ Lightningbeam Core (Data Model) │ │ +│ │ Document, Layers, Clips, Actions, Undo/Redo │ │ +│ └──────────────────────────────────────────────────────┘ │ +├────────────────────────────────────────────────────────────┤ +│ Rendering & Audio │ +│ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Vello + wgpu │ │ daw-backend │ │ +│ │ (GPU Rendering) │ │ (Audio Engine) │ │ +│ └──────────────────┘ └──────────────────┘ │ +└────────────────────────────────────────────────────────────┘ + ↓ ↓ + ┌─────────┐ ┌─────────┐ + │ GPU │ │ cpal │ + │ (Vulkan │ │ (Audio │ + │ /Metal) │ 
│  I/O)  │
 └─────────┘   └─────────┘
```

### Migration from Tauri/JavaScript

Lightningbeam is undergoing a rewrite from a Tauri/JavaScript prototype to pure Rust. The original architecture hit IPC bandwidth limitations when streaming decoded video frames. The new Rust UI eliminates this bottleneck by handling all rendering natively.

**Current Status**: Active development on the `rust-ui` branch. Core UI, tools, and undo system are implemented. Audio integration in progress.

## Technology Stack

### UI Framework
- **egui 0.33.3**: Immediate-mode GUI framework
- **eframe 0.33.3**: Application framework wrapping egui
- **winit 0.30**: Cross-platform windowing

### GPU Rendering
- **Vello (git main)**: GPU-accelerated 2D vector graphics using compute shaders
- **wgpu 27**: Low-level GPU API (Vulkan/Metal backend)
- **kurbo 0.12**: 2D curve and shape primitives
- **peniko 0.5**: Color and brush definitions

### Audio Engine
- **daw-backend**: Custom real-time audio engine
- **cpal 0.15**: Cross-platform audio I/O
- **symphonia 0.5**: Audio decoding (MP3, FLAC, WAV, Ogg, etc.)
- **rtrb 0.3**: Lock-free ringbuffers for audio thread communication
- **dasp**: Audio graph processing

### Video
- **FFmpeg**: Video encoding/decoding (via ffmpeg-next)

### Serialization
- **serde**: Document serialization
- **serde_json**: JSON format

## Component Architecture

### 1. Lightningbeam Core (`lightningbeam-core/`)

The core crate contains the data model and business logic, independent of the UI framework.

**Key Types:**

```rust
Document {
    canvas_size: (u32, u32),
    layers: Vec<Layer>,
    undo_stack: Vec<Box<dyn Action>>,
    redo_stack: Vec<Box<dyn Action>>,
}

Layer (enum) {
    VectorLayer { clips: Vec<ClipInstance>, ... },
    AudioLayer { clips: Vec<ClipInstance>, ... },
    VideoLayer { clips: Vec<ClipInstance>, ... },
}

ClipInstance {
    clip_id: Uuid,    // Reference to clip definition
    start_time: f64,  // Timeline position
    duration: f64,
    trim_start: f64,
    trim_end: f64,
}
```

**Responsibilities:**
- Document structure and state
- Clip and layer management
- Action system (undo/redo)
- Tool definitions
- Animation data and keyframes

### 2. Lightningbeam Editor (`lightningbeam-editor/`)

The editor application implements the UI and user interactions.

**Main Entry Point:** `src/main.rs`
- Initializes the eframe application
- Sets up window, GPU context, and audio system
- Runs the main event loop

**Panes** (`src/panes/`):
Each pane is a self-contained UI component:

- `stage.rs` (214KB): Main canvas for drawing, transform tools, GPU rendering
- `timeline.rs` (84KB): Multi-track timeline with clip editing
- `asset_library.rs` (70KB): Asset browser with drag-and-drop
- `infopanel.rs` (31KB): Context-sensitive property editor
- `virtual_piano.rs` (31KB): MIDI keyboard input
- `toolbar.rs` (9KB): Tool palette

**Pane System:**
```rust
pub enum PaneInstance {
    Stage(Stage),
    Timeline(Timeline),
    AssetLibrary(AssetLibrary),
    // ... other panes
}

impl PaneInstance {
    pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
        match self {
            PaneInstance::Stage(stage) => stage.render(ui, shared_state),
            // ... dispatch to specific pane
        }
    }
}
```

**SharedPaneState:**
Facilitates communication between panes:
```rust
pub struct SharedPaneState {
    pub document: Document,
    pub selected_tool: Tool,
    pub pending_actions: Vec<Box<dyn Action>>,
    pub audio_system: AudioSystem,
    // ... other shared state
}
```

### 3. DAW Backend (`daw-backend/`)

Standalone audio engine crate with real-time audio processing.
+ +**Architecture:** +``` +UI Thread Audio Thread (real-time) + │ │ + │ Commands (rtrb queue) │ + ├──────────────────────────────>│ + │ │ + │ State Updates │ + │<──────────────────────────────┤ + │ │ + ↓ + ┌───────────────┐ + │ Audio Engine │ + │ process() │ + └───────────────┘ + ↓ + ┌───────────────┐ + │ Track Mix │ + └───────────────┘ + ↓ + ┌───────────────┐ + │ cpal Output │ + └───────────────┘ +``` + +**Key Components:** + +- **Engine** (`audio/engine.rs`): Main audio callback, runs on real-time thread +- **Project** (`audio/project.rs`): Top-level audio state +- **Track** (`audio/track.rs`): Individual audio tracks with effects chains +- **Effects**: Reverb, delay, EQ, compressor, distortion, etc. +- **Synthesizers**: Oscillator, FM synth, wavetable, sampler + +**Lock-Free Design:** +The audio thread never blocks. UI sends commands via lock-free ringbuffers (rtrb), audio thread processes them between buffer callbacks. + +## Data Flow + +### Document Editing Flow + +``` +User Input (mouse/keyboard) + ↓ +egui Event Handlers (in pane.render()) + ↓ +Create Action (implements Action trait) + ↓ +Add to SharedPaneState.pending_actions + ↓ +After all panes render: execute actions + ↓ +Action.apply(&mut document) + ↓ +Push to undo_stack + ↓ +UI re-renders with updated document +``` + +### Audio Playback Flow + +``` +UI: User clicks Play + ↓ +Send PlayCommand to audio engine (via rtrb queue) + ↓ +Audio thread: Receive command + ↓ +Audio thread: Start playback, increment playhead + ↓ +Audio callback (every ~5ms): Engine::process() + ↓ +Mix tracks, apply effects, output samples + ↓ +Send playhead position back to UI + ↓ +UI: Update timeline playhead position +``` + +### GPU Rendering Flow + +``` +egui layout phase + ↓ +Stage pane requests wgpu callback + ↓ +Vello renders vector shapes to GPU texture + ↓ +Custom wgpu integration composites: + - Vello output (vector graphics) + - Waveform textures (GPU-rendered audio) + - egui UI overlay + ↓ +Present to screen +``` + +## 
Rendering Pipeline + +### Stage Rendering + +The Stage pane uses a custom wgpu callback to render directly to GPU: + +```rust +ui.painter().add(egui_wgpu::Callback::new_paint_callback( + rect, + StageCallback { /* render data */ } +)); +``` + +**Vello Integration:** +1. Create Vello `Scene` from document shapes +2. Render scene to GPU texture using compute shaders +3. Composite with UI elements + +**Waveform Rendering:** +- Audio waveforms rendered on GPU using custom WGSL shaders +- Mipmaps generated via compute shader for level-of-detail +- Uniform buffers store view parameters (zoom, offset, tint color) + +**WGSL Alignment Requirements:** +WGSL has strict alignment rules. `vec4` requires 16-byte alignment: + +```rust +#[repr(C)] +#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)] +struct WaveformParams { + view_matrix: [f32; 16], // 64 bytes + viewport_size: [f32; 2], // 8 bytes + zoom: f32, // 4 bytes + _pad1: f32, // 4 bytes padding + tint_color: [f32; 4], // 16 bytes (requires 16-byte alignment) +} +// Total: 96 bytes +``` + +## Audio Architecture + +### Real-Time Constraints + +Audio callbacks run on a dedicated real-time thread with strict timing requirements: +- Buffer size: 256 frames default (~5.8ms at 44.1kHz) +- ALSA may provide smaller buffers (64-75 frames, ~1.5ms) +- **No blocking operations allowed**: No locks, no allocations, no syscalls + +### Lock-Free Communication + +UI and audio thread communicate via lock-free ringbuffers (rtrb): + +```rust +// UI Thread +command_sender.push(AudioCommand::Play).ok(); + +// Audio Thread (in process callback) +while let Ok(command) = command_receiver.pop() { + match command { + AudioCommand::Play => self.playing = true, + // ... 
handle other commands + } +} +``` + +### Audio Processing Pipeline + +``` +Audio Callback Invoked (every ~5ms) + ↓ +Process queued commands + ↓ +For each track: + - Read audio samples at playhead position + - Apply effects chain + - Mix to master output + ↓ +Write samples to output buffer + ↓ +Return from callback (must complete in <5ms) +``` + +### Optimized Debug Builds + +Audio code is optimized even in debug builds to meet real-time deadlines: + +```toml +[profile.dev.package.daw-backend] +opt-level = 2 + +[profile.dev.package.symphonia] +opt-level = 2 +# ... other audio libraries +``` + +## Key Design Decisions + +### Layer & Clip System + +**Type-Specific Layers:** +Each layer type supports only its matching clip type: +- `VectorLayer` → `VectorClip` +- `AudioLayer` → `AudioClip` +- `VideoLayer` → `VideoClip` + +**Recursive Nesting:** +Vector clips can contain internal layers of any type, enabling complex nested compositions. + +**Clip vs ClipInstance:** +- **Clip**: Template/definition in asset library (the "master") +- **ClipInstance**: Placed on timeline with instance-specific properties (position, duration, trim points) +- Multiple instances can reference the same clip +- "Make Unique" operation duplicates the underlying clip + +### Undo/Redo System + +**Action Trait:** +```rust +pub trait Action: Send { + fn apply(&mut self, document: &mut Document); + fn undo(&mut self, document: &mut Document); + fn redo(&mut self, document: &mut Document); +} +``` + +All operations (drawing, editing, clip manipulation) implement this trait. + +**Continuous Operations:** +Dragging sliders or scrubbing creates only one undo action when complete, not one per frame. + +### Two-Phase Dispatch Pattern + +Panes cannot directly mutate shared state during rendering (borrowing rules). Instead: + +1. **Phase 1 (Render)**: Panes register actions + ```rust + shared_state.register_action(Box::new(MyAction { ... })); + ``` + +2. 
**Phase 2 (Execute)**: After all panes rendered, execute actions + ```rust + for action in shared_state.pending_actions.drain(..) { + action.apply(&mut document); + undo_stack.push(action); + } + ``` + +### Pane ID Salting + +egui uses IDs to track widget state. Multiple instances of the same pane would collide without unique IDs. + +**Solution**: Salt all IDs with the pane's node path: +```rust +ui.horizontal(|ui| { + ui.label("My Widget"); +}).id.with(&node_path); +``` + +### Selection & Clipboard + +- **Selection scope**: Limited to current clip/layer +- **Type-aware paste**: Content must match target type +- **Clip instance copying**: Creates reference to same underlying clip +- **Make unique**: Duplicates underlying clip for independent editing + +## Directory Structure + +``` +lightningbeam-2/ +├── lightningbeam-ui/ # Rust UI workspace +│ ├── Cargo.toml # Workspace manifest +│ ├── lightningbeam-editor/ # Main application crate +│ │ ├── Cargo.toml +│ │ └── src/ +│ │ ├── main.rs # Entry point, event loop +│ │ ├── app.rs # Application state +│ │ ├── panes/ +│ │ │ ├── mod.rs # Pane system dispatch +│ │ │ ├── stage.rs # Main canvas +│ │ │ ├── timeline.rs # Timeline editor +│ │ │ ├── asset_library.rs +│ │ │ └── ... 
+│ │ ├── tools/ # Drawing and editing tools +│ │ ├── rendering/ +│ │ │ ├── vello_integration.rs +│ │ │ ├── waveform_gpu.rs +│ │ │ └── shaders/ +│ │ │ ├── waveform.wgsl +│ │ │ └── waveform_mipgen.wgsl +│ │ └── export/ # Export functionality +│ └── lightningbeam-core/ # Core data model crate +│ ├── Cargo.toml +│ └── src/ +│ ├── lib.rs +│ ├── document.rs # Document structure +│ ├── layer.rs # Layer types +│ ├── clip.rs # Clip types and instances +│ ├── shape.rs # Shape definitions +│ ├── action.rs # Action trait and undo/redo +│ ├── animation.rs # Keyframe animation +│ └── tools.rs # Tool definitions +│ +├── daw-backend/ # Audio engine (standalone) +│ ├── Cargo.toml +│ └── src/ +│ ├── lib.rs # Audio system initialization +│ ├── audio/ +│ │ ├── engine.rs # Main audio callback +│ │ ├── track.rs # Track management +│ │ ├── project.rs # Project state +│ │ └── buffer.rs # Audio buffer utilities +│ ├── effects/ # Audio effects +│ │ ├── reverb.rs +│ │ ├── delay.rs +│ │ └── ... +│ ├── synth/ # Synthesizers +│ └── midi/ # MIDI handling +│ +├── src-tauri/ # Legacy Tauri backend +├── src/ # Legacy JavaScript frontend +├── CONTRIBUTING.md # Contributor guide +├── ARCHITECTURE.md # This file +├── README.md # Project overview +└── docs/ # Additional documentation + ├── AUDIO_SYSTEM.md + ├── UI_SYSTEM.md + └── ... 
+``` + +## Performance Considerations + +### GPU Rendering +- Vello uses compute shaders for efficient 2D rendering +- Waveforms pre-rendered on GPU with mipmaps for smooth zooming +- Custom wgpu integration minimizes CPU↔GPU data transfer + +### Audio Processing +- Lock-free design: No blocking in audio thread +- Optimized even in debug builds (`opt-level = 2`) +- Memory-mapped file I/O for large audio files +- Zero-copy audio buffers where possible + +### Memory Management +- Audio buffers pre-allocated, no allocations in audio thread +- Vello manages GPU memory automatically +- Document structure uses `Rc`/`Arc` for shared clip references + +## Future Considerations + +### Video Integration +Video decoding has been ported from the legacy Tauri backend. Video soundtracks become audio tracks in daw-backend, enabling full effects processing. + +### File Format +The .beam file format is not yet finalized. Considerations: +- Single JSON file vs container format (e.g., ZIP) +- Embedded media vs external references +- Forward/backward compatibility strategy + +### Node Editor +Primary use: Audio effects chains and modular synthesizers. Future expansion to visual effects and procedural generation is possible. + +## Related Documentation + +- [CONTRIBUTING.md](CONTRIBUTING.md) - Development setup and workflow +- [docs/AUDIO_SYSTEM.md](docs/AUDIO_SYSTEM.md) - Detailed audio engine documentation +- [docs/UI_SYSTEM.md](docs/UI_SYSTEM.md) - UI pane system details +- [docs/RENDERING.md](docs/RENDERING.md) - GPU rendering pipeline +- [Claude.md](Claude.md) - Comprehensive architectural reference for AI assistants diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000..12a96ec --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,278 @@ +# Contributing to Lightningbeam + +Thank you for your interest in contributing to Lightningbeam! This document provides guidelines and instructions for setting up your development environment and contributing to the project. 
+ +## Table of Contents + +- [Development Setup](#development-setup) +- [Building the Project](#building-the-project) +- [Project Structure](#project-structure) +- [Making Changes](#making-changes) +- [Code Style](#code-style) +- [Testing](#testing) +- [Submitting Changes](#submitting-changes) +- [Getting Help](#getting-help) + +## Development Setup + +### Prerequisites + +- **Rust**: Install via [rustup](https://rustup.rs/) (stable toolchain) +- **System dependencies** (Linux): + - ALSA development files: `libasound2-dev` + - For Ubuntu/Debian: `sudo apt install libasound2-dev pkg-config` + - For Arch/Manjaro: `sudo pacman -S alsa-lib` +- **FFmpeg**: Required for video encoding/decoding + - Ubuntu/Debian: `sudo apt install ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libswresample-dev pkg-config clang` + - Arch/Manjaro: `sudo pacman -S ffmpeg` + +### Clone and Build + +```bash +# Clone the repository (GitHub) +git clone https://github.com/skykooler/lightningbeam.git +# Or from Gitea +git clone https://git.skyler.io/skyler/lightningbeam.git + +cd lightningbeam + +# Build the Rust UI editor (current focus) +cd lightningbeam-ui +cargo build + +# Run the editor +cargo run +``` + +**Note**: The project is hosted on both GitHub and Gitea (git.skyler.io). You can use either for cloning and submitting pull requests. + +## Building the Project + +### Workspace Structure + +The project consists of multiple Rust workspaces: + +1. **lightningbeam-ui** (current focus) - Pure Rust UI application + - `lightningbeam-editor/` - Main editor application + - `lightningbeam-core/` - Core data models and business logic + +2. **daw-backend** - Audio engine (standalone crate) + +3. 
**Root workspace** (legacy) - Contains Tauri backend and benchmarks + +### Build Commands + +```bash +# Build the editor (from lightningbeam-ui/) +cargo build + +# Build with optimizations (faster runtime) +cargo build --release + +# Check just the audio backend +cargo check -p daw-backend + +# Build the audio backend separately +cd ../daw-backend +cargo build +``` + +### Debug Builds and Audio Performance + +The audio engine runs on a real-time thread with strict timing constraints (~5.8ms at 44.1kHz). To maintain performance in debug builds, the audio backend is compiled with optimizations even in debug mode: + +```toml +# In lightningbeam-ui/Cargo.toml +[profile.dev.package.daw-backend] +opt-level = 2 +``` + +This is already configured—no action needed. + +### Debug Flags + +Enable audio diagnostics with: +```bash +DAW_AUDIO_DEBUG=1 cargo run +``` + +This prints timing information, buffer sizes, and overrun warnings to help debug audio issues. + +## Project Structure + +``` +lightningbeam-2/ +├── lightningbeam-ui/ # Rust UI workspace (current) +│ ├── lightningbeam-editor/ # Main application +│ │ └── src/ +│ │ ├── main.rs # Entry point +│ │ ├── panes/ # UI panes (stage, timeline, etc.) +│ │ └── tools/ # Drawing and editing tools +│ └── lightningbeam-core/ # Core data model +│ └── src/ +│ ├── document.rs # Document structure +│ ├── clip.rs # Clips and instances +│ ├── action.rs # Undo/redo system +│ └── tools.rs # Tool system +├── daw-backend/ # Audio engine +│ └── src/ +│ ├── lib.rs # Audio system setup +│ ├── audio/ +│ │ ├── engine.rs # Audio callback +│ │ ├── track.rs # Track management +│ │ └── project.rs # Project state +│ └── effects/ # Audio effects +├── src-tauri/ # Legacy Tauri backend +└── src/ # Legacy JavaScript frontend +``` + +## Making Changes + +### Branching Strategy + +- `main` - Stable branch +- `rust-ui` - Active development branch for Rust UI rewrite +- Feature branches - Create from `rust-ui` for new features + +### Before You Start + +1. 
Check existing issues or create a new one to discuss your change +2. Make sure you're on the latest `rust-ui` branch: + ```bash + git checkout rust-ui + git pull origin rust-ui + ``` +3. Create a feature branch: + ```bash + git checkout -b feature/your-feature-name + ``` + +## Code Style + +### Rust Style + +- Follow standard Rust formatting: `cargo fmt` +- Check for common issues: `cargo clippy` +- Use meaningful variable names +- Add comments for non-obvious code +- Keep functions focused and reasonably sized + +### Key Patterns + +#### Pane ID Salting +When implementing new panes, **always salt egui IDs** with the node path to avoid collisions when users add multiple instances of the same pane: + +```rust +ui.horizontal(|ui| { + ui.label("My Widget"); +}).id.with(&node_path); // Salt with node path +``` + +#### Splitting Borrows with `std::mem::take` +When you need to split borrows from a struct, use `std::mem::take`: + +```rust +let mut clips = std::mem::take(&mut self.clips); +// Now you can borrow other fields while processing clips +``` + +#### Two-Phase Dispatch +Panes register handlers during render, execution happens after: + +```rust +// During render +shared_state.register_action(Box::new(MyAction { ... })); + +// After all panes rendered +for action in shared_state.pending_actions.drain(..) { + action.execute(&mut document); +} +``` + +## Testing + +### Running Tests + +```bash +# Run all tests +cargo test + +# Test specific package +cargo test -p lightningbeam-core +cargo test -p daw-backend + +# Run with output +cargo test -- --nocapture +``` + +### Audio Testing + +Test audio functionality: +```bash +# Run with audio debug output +DAW_AUDIO_DEBUG=1 cargo run + +# Check for audio dropouts or timing issues in the console output +``` + +## Submitting Changes + +### Before Submitting + +1. **Format your code**: `cargo fmt --all` +2. **Run clippy**: `cargo clippy --all-targets --all-features` +3. **Run tests**: `cargo test --all` +4. 
**Test manually**: Build and run the application to verify your changes work +5. **Write clear commit messages**: Describe what and why, not just what + +### Commit Message Format + +``` +Short summary (50 chars or less) + +More detailed explanation if needed. Wrap at 72 characters. +Explain the problem this commit solves and why you chose +this solution. + +- Bullet points are fine +- Use present tense: "Add feature" not "Added feature" +``` + +### Pull Request Process + +1. Push your branch to GitHub or Gitea +2. Open a pull request against `rust-ui` branch + - GitHub: https://github.com/skykooler/lightningbeam + - Gitea: https://git.skyler.io/skyler/lightningbeam +3. Provide a clear description of: + - What problem does this solve? + - How does it work? + - Any testing you've done + - Screenshots/videos if applicable (especially for UI changes) +4. Address review feedback +5. Once approved, a maintainer will merge your PR + +### PR Checklist + +- [ ] Code follows project style (`cargo fmt`, `cargo clippy`) +- [ ] Tests pass (`cargo test`) +- [ ] New code has appropriate tests (if applicable) +- [ ] Documentation updated (if needed) +- [ ] Commit messages are clear +- [ ] PR description explains the change + +## Getting Help + +- **Issues**: Check issues on [GitHub](https://github.com/skykooler/lightningbeam/issues) or [Gitea](https://git.skyler.io/skyler/lightningbeam/issues) for existing discussions +- **Documentation**: See `ARCHITECTURE.md` and `docs/` folder for technical details +- **Questions**: Open a discussion or issue with the "question" label on either platform + +## Additional Resources + +- [ARCHITECTURE.md](ARCHITECTURE.md) - System architecture overview +- [docs/AUDIO_SYSTEM.md](docs/AUDIO_SYSTEM.md) - Audio engine details +- [docs/UI_SYSTEM.md](docs/UI_SYSTEM.md) - UI and pane system + +## License + +By contributing, you agree that your contributions will be licensed under the same license as the project. 
diff --git a/docs/AUDIO_SYSTEM.md b/docs/AUDIO_SYSTEM.md new file mode 100644 index 0000000..635c31a --- /dev/null +++ b/docs/AUDIO_SYSTEM.md @@ -0,0 +1,1092 @@ +# Audio System Architecture + +This document describes the architecture of Lightningbeam's audio engine (`daw-backend`), including real-time constraints, lock-free design patterns, and how to extend the system with new effects and features. + +## Table of Contents + +- [Overview](#overview) +- [Architecture](#architecture) +- [Real-Time Constraints](#real-time-constraints) +- [Lock-Free Communication](#lock-free-communication) +- [Audio Processing Pipeline](#audio-processing-pipeline) +- [Adding Effects](#adding-effects) +- [Adding Synthesizers](#adding-synthesizers) +- [MIDI System](#midi-system) +- [Performance Optimization](#performance-optimization) +- [Debugging Audio Issues](#debugging-audio-issues) + +## Overview + +The `daw-backend` crate is a standalone real-time audio engine designed for: + +- **Multi-track audio playback and recording** +- **Real-time audio effects processing** +- **MIDI input/output and sequencing** +- **Modular audio routing** (node graph system) +- **Audio export** (WAV, MP3, AAC) + +### Key Features + +- Lock-free design for real-time safety +- Cross-platform audio I/O via cpal +- Audio decoding via symphonia (MP3, FLAC, WAV, Ogg, AAC) +- Node-based audio graph processing +- Comprehensive effects library +- Multiple synthesizer types +- Zero-allocation audio thread + +## Architecture + +``` +┌─────────────────────────────────────────────────────────────┐ +│ UI Thread │ +│ (lightningbeam-editor or other application) │ +├─────────────────────────────────────────────────────────────┤ +│ │ +│ AudioSystem::new() ─────> Creates audio stream │ +│ │ │ +│ ├─> command_sender (rtrb::Producer) │ +│ └─> state_receiver (rtrb::Consumer) │ +│ │ +│ Commands sent: │ +│ - Play / Stop / Seek │ +│ - Add / Remove tracks │ +│ - Load audio files │ +│ - Add / Remove effects │ +│ - Update parameters 
│ +│ │ +└──────────────────────┬──────────────────────────────────────┘ + │ + │ Lock-free queues (rtrb) + │ +┌──────────────────────▼──────────────────────────────────────┐ +│ Audio Thread (Real-Time) │ +├─────────────────────────────────────────────────────────────┤ +│ │ +│ Engine::process(output_buffer) │ +│ │ │ +│ ├─> Receive commands from queue │ +│ ├─> Update playhead position │ +│ ├─> For each track: │ +│ │ ├─> Read audio samples at playhead │ +│ │ ├─> Apply effects chain │ +│ │ └─> Mix to output │ +│ ├─> Apply master effects │ +│ └─> Write samples to output_buffer │ +│ │ +│ Send state updates back to UI thread │ +│ - Playhead position │ +│ - Meter levels │ +│ - Overrun warnings │ +│ │ +└──────────────────────┬──────────────────────────────────────┘ + │ + ▼ + ┌───────────┐ + │ cpal │ + │ (Audio │ + │ I/O) │ + └───────────┘ + │ + ▼ + ┌──────────────┐ + │ Audio Output │ + │ (Speakers) │ + └──────────────┘ +``` + +### Core Components + +#### AudioSystem (`src/lib.rs`) +- Entry point for the audio engine +- Creates the audio stream +- Sets up lock-free communication channels +- Manages audio device configuration + +#### Engine (`src/audio/engine.rs`) +- The main audio callback +- Runs on the real-time audio thread +- Processes commands, mixes tracks, applies effects +- Must complete in ~5ms (at 44.1kHz, 256 frame buffer) + +#### Project (`src/audio/project.rs`) +- Top-level audio state +- Contains tracks, tempo, time signature +- Manages global settings + +#### Track (`src/audio/track.rs`) +- Individual audio track +- Contains audio clips and effects chain +- Handles track-specific state (volume, pan, mute, solo) + +## Real-Time Constraints + +### The Golden Rule + +**The audio thread must NEVER block.** + +Audio callbacks run with strict timing deadlines: +- **Buffer size**: 256 frames (default) = ~5.8ms at 44.1kHz +- **ALSA on Linux**: May provide smaller buffers (64-75 frames = ~1.5ms) +- **Deadline**: Audio callback must complete before next buffer is needed 
+ +If the audio callback takes too long: +- **Audio dropout**: Audible glitch/pop in output +- **Buffer underrun**: Missing samples +- **System instability**: Priority inversion, thread starvation + +### Forbidden Operations in Audio Thread + +❌ **Never do these in the audio callback:** + +- **Locking**: `Mutex`, `RwLock`, or any blocking synchronization +- **Allocation**: `Vec::push()`, `Box::new()`, `String` operations +- **I/O**: File operations, network, print statements +- **System calls**: Most OS operations +- **Unbounded loops**: Must have guaranteed completion time + +✅ **Safe operations:** + +- Reading/writing lock-free queues (rtrb) +- Fixed-size array operations +- Arithmetic and DSP calculations +- Pre-allocated buffer operations + +### Optimized Debug Builds + +To meet real-time deadlines, audio code is compiled with optimizations even in debug builds: + +```toml +# In lightningbeam-ui/Cargo.toml +[profile.dev.package.daw-backend] +opt-level = 2 + +[profile.dev.package.symphonia] +opt-level = 2 +# ... other audio libraries +``` + +This allows fast iteration while maintaining audio performance. + +## Lock-Free Communication + +### Command Queue (UI → Audio) + +The UI thread sends commands to the audio thread via a lock-free ringbuffer: + +```rust +// UI Thread +let command = AudioCommand::Play; +command_sender.push(command).ok(); + +// Audio Thread (in Engine::process) +while let Ok(command) = command_receiver.pop() { + match command { + AudioCommand::Play => self.playing = true, + AudioCommand::Stop => self.playing = false, + AudioCommand::Seek(time) => self.playhead = time, + // ... 
handle other commands + } +} +``` + +### State Updates (Audio → UI) + +The audio thread sends state updates back to the UI: + +```rust +// Audio Thread +let state = AudioState { + playhead: self.playhead, + is_playing: self.playing, + meter_levels: self.compute_meters(), +}; +state_sender.push(state).ok(); + +// UI Thread +if let Ok(state) = state_receiver.pop() { + // Update UI with new state +} +``` + +### Design Pattern: Command-Response + +1. **UI initiates action**: Send command to audio thread +2. **Audio thread executes**: In `Engine::process()`, between buffer fills +3. **Audio thread confirms**: Send state update back to UI +4. **UI updates**: Reflect new state in user interface + +This pattern ensures: +- No blocking on either side +- UI remains responsive +- Audio thread never waits + +## Audio Processing Pipeline + +### Per-Buffer Processing + +Every audio buffer (typically 256 frames), the `Engine::process()` callback: + +```rust +pub fn process(&mut self, output: &mut [f32]) -> Result<(), AudioError> { + // 1. Process commands from UI thread + self.process_commands(); + + // 2. Update playhead + if self.playing { + self.playhead += buffer_duration; + } + + // 3. Clear output buffer + output.fill(0.0); + + // 4. Process each track + for track in &mut self.tracks { + if track.muted { + continue; + } + + // Read audio samples at playhead position + let samples = track.read_samples(self.playhead, output.len()); + + // Apply track effects chain + let mut processed = samples; + for effect in &mut track.effects { + processed = effect.process(processed); + } + + // Mix to output with volume/pan + mix_to_output(output, &processed, track.volume, track.pan); + } + + // 5. Apply master effects + for effect in &mut self.master_effects { + effect.process_in_place(output); + } + + // 6. 
Send state updates to UI + self.send_state_update(); + + Ok(()) +} +``` + +### Sample Rate and Buffer Size + +- **Sample rate**: 44.1kHz (default) or 48kHz +- **Buffer size**: 256 frames (configurable) +- **Channels**: Stereo (2 channels) + +Buffer is interleaved: `[L, R, L, R, L, R, ...]` + +### Time Representation + +- **Playhead position**: Stored as `f64` seconds +- **Sample index**: `(playhead * sample_rate) as usize` +- **Frame index**: `sample_index / channels` + +## Node Graph System + +### Overview + +Tracks use a node graph architecture powered by `dasp_graph` for flexible audio routing. Unlike simple serial effect chains, the node graph allows: + +- **Parallel processing**: Multiple effects processing the same input +- **Complex routing**: Effects feeding into each other in arbitrary configurations +- **Modular synthesis**: Build synthesizers from oscillators, filters, and modulators +- **Send/return chains**: Shared effects (reverb, delay) fed by multiple tracks +- **Sidechain processing**: One signal controlling another (compression, vocoding) + +### Node Graph Architecture + +``` +┌─────────────────────────────────────────────────────────┐ +│ Track Node Graph │ +├─────────────────────────────────────────────────────────┤ +│ │ +│ ┌─────────┐ │ +│ │ Input │ (Audio clip or synthesizer) │ +│ └────┬────┘ │ +│ │ │ +│ ├──────┬──────────────┬─────────────┐ │ +│ │ │ │ │ │ +│ ▼ ▼ ▼ ▼ │ +│ ┌────────┐ ┌────────┐ ┌────────┐ ┌─────────┐ │ +│ │Filter │ │Distort │ │ EQ │ │ Reverb │ │ +│ │(Node 1)│ │(Node 2)│ │(Node 3)│ │(Node 4) │ │ +│ └───┬────┘ └───┬────┘ └───┬────┘ └────┬────┘ │ +│ │ │ │ │ │ +│ └────┬─────┴──────┬───┘ │ │ +│ │ │ │ │ +│ ▼ ▼ │ │ +│ ┌─────────┐ ┌─────────┐ │ │ +│ │ Mixer │ │Compress │ │ │ +│ │(Node 5) │ │(Node 6) │◄──────────┘ │ +│ └────┬────┘ └────┬────┘ (sidechain) │ +│ │ │ │ +│ └─────┬──────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────┐ │ +│ │ Output │ │ +│ └──────────┘ │ +│ │ +└────────────────────────────────────────────────────────┘ +``` + +### Node Types + 
+
+#### Input Nodes
+- **Audio Clip Reader**: Reads samples from audio file
+- **Oscillator**: Generates waveforms (sine, saw, square, triangle)
+- **Noise Generator**: White/pink noise
+- **External Input**: Microphone or line-in
+
+#### Processing Nodes
+- **Effects**: Any audio effect (see [Adding Effects](#adding-effects))
+- **Filters**: Low-pass, high-pass, band-pass, notch
+- **Mixers**: Combine multiple inputs with gain control
+- **Splitters**: Duplicate signal to multiple outputs
+
+#### Output Nodes
+- **Track Output**: Sends to mixer or master bus
+- **Send Output**: Feeds auxiliary effects
+
+### Building a Node Graph
+
+```rust
+use dasp_graph::{Node, NodeData, BoxedNode};
+use petgraph::graph::NodeIndex;
+
+pub struct TrackGraph {
+    graph: dasp_graph::Graph,
+    pub input_node: NodeIndex,
+    pub output_node: NodeIndex,
+}
+
+impl TrackGraph {
+    pub fn new() -> Self {
+        let mut graph = dasp_graph::Graph::new();
+
+        // Create input and output nodes (each NodeData carries one buffer)
+        let input_node = graph.add_node(NodeData::new1(
+            BoxedNode::new(PassThrough), // Simple input node
+        ));
+
+        let output_node = graph.add_node(NodeData::new1(
+            BoxedNode::new(PassThrough), // Simple output node
+        ));
+
+        Self {
+            graph,
+            input_node,
+            output_node,
+        }
+    }
+
+    pub fn add_effect(&mut self, effect: BoxedNode) -> NodeIndex {
+        // Add the effect node; the caller wires up its connections
+        // explicitly with `connect`, so arbitrary routing is possible
+        self.graph.add_node(NodeData::new1(effect))
+    }
+
+    pub fn connect(&mut self, from: NodeIndex, to: NodeIndex) {
+        self.graph.add_edge(from, to, ());
+    }
+
+    pub fn process(&mut self, input: &[f32], output: &mut [f32]) {
+        // Set input samples
+        self.graph.set_input(self.input_node, input);
+
+        // Process entire graph
+        self.graph.process();
+
+        // Read output samples
+        
self.graph.get_output(self.output_node, output); + } +} +``` + +### Example: Serial Effect Chain + +Simple effects chain (the most common case): + +```rust +// Input -> Distortion -> EQ -> Reverb -> Output + +let mut graph = TrackGraph::new(); + +let distortion = graph.add_effect(Box::new(Distortion::new(0.5))); +let eq = graph.add_effect(Box::new(EQ::new())); +let reverb = graph.add_effect(Box::new(Reverb::new())); + +// Connect in series +graph.connect(graph.input_node, distortion); +graph.connect(distortion, eq); +graph.connect(eq, reverb); +graph.connect(reverb, graph.output_node); +``` + +### Example: Parallel Processing + +Split signal into parallel paths: + +```rust +// Input -> Split -> [Distortion + Clean] -> Mix -> Output + +let mut graph = TrackGraph::new(); + +// Create parallel paths +let distortion = graph.add_effect(Box::new(Distortion::new(0.7))); +let clean = graph.add_effect(Box::new(Gain::new(1.0))); +let mixer = graph.add_effect(Box::new(Mixer::new(2))); // 2 inputs + +// Connect parallel paths +graph.connect(graph.input_node, distortion); +graph.connect(graph.input_node, clean); +graph.connect(distortion, mixer); +graph.connect(clean, mixer); +graph.connect(mixer, graph.output_node); +``` + +### Example: Modular Synthesizer + +Build a synthesizer from basic components: + +```rust +// ┌─ LFO ────┐ (modulation) +// │ ▼ +// Oscillator -> Filter -> Envelope -> Output + +let mut graph = TrackGraph::new(); + +// Sound source +let oscillator = graph.add_effect(Box::new(Oscillator::new(440.0))); + +// Modulation source +let lfo = graph.add_effect(Box::new(LFO::new(5.0))); // 5 Hz + +// Filter with LFO modulation +let filter = graph.add_effect(Box::new(Filter::new_modulated())); + +// Envelope +let envelope = graph.add_effect(Box::new(ADSREnvelope::new())); + +// Connect sound path +graph.connect(oscillator, filter); +graph.connect(filter, envelope); +graph.connect(envelope, graph.output_node); + +// Connect modulation path +graph.connect(lfo, filter); 
// LFO modulates filter cutoff
+```
+
+### Example: Sidechain Compression
+
+One signal controls another:
+
+```rust
+// Input (bass) ──────────────────┐
+//                                ▼
+// Kick drum ────> Compressor (sidechain) -> Output
+
+let mut graph = TrackGraph::new();
+
+// Main signal input (bass)
+let bass_input = graph.add_effect(Box::new(PassThrough));
+
+// Sidechain signal input (kick drum)
+let kick_input = graph.add_effect(Box::new(PassThrough));
+
+// Compressor with sidechain
+let compressor = graph.add_effect(Box::new(SidechainCompressor::new()));
+
+// Connect main signal
+graph.connect(bass_input, compressor);
+
+// Connect sidechain signal (port 0 = main, port 1 = sidechain)
+graph.connect_to_port(kick_input, compressor, 1);
+
+graph.connect(compressor, graph.output_node);
+```
+
+### Node Interface
+
+All nodes implement a common `Node` trait (a simplified form of `dasp_graph::Node` is sketched here):
+
+```rust
+pub trait Node {
+    /// Process audio for this node
+    fn process(&mut self, inputs: &[Input], output: &mut [f32]);
+
+    /// Number of input ports
+    fn num_inputs(&self) -> usize;
+
+    /// Number of output ports
+    fn num_outputs(&self) -> usize;
+
+    /// Reset internal state
+    fn reset(&mut self);
+}
+```
+
+### Multi-Channel Processing
+
+Nodes can have multiple input/output channels:
+
+```rust
+pub struct StereoEffect {
+    left_processor: Processor,
+    right_processor: Processor,
+}
+
+impl Node for StereoEffect {
+    fn process(&mut self, inputs: &[Input], output: &mut [f32]) {
+        // Split stereo input
+        let (left_in, right_in) = inputs[0].as_stereo();
+
+        // Process each channel
+        let left_out = self.left_processor.process(left_in);
+        let right_out = self.right_processor.process(right_in);
+
+        // Interleave output
+        for i in 0..left_out.len() {
+            output[i * 2] = left_out[i];
+            output[i * 2 + 1] = right_out[i];
+        }
+    }
+
+    fn num_inputs(&self) -> usize { 1 }  // One stereo input
+    fn num_outputs(&self) -> usize { 1 } // One stereo output
+
+    fn reset(&mut self) {
+        self.left_processor.reset();
+        
self.right_processor.reset(); + } +} +``` + +### Parameter Modulation + +Nodes can expose parameters for automation or modulation: + +```rust +pub struct ModulatableFilter { + filter: Filter, + cutoff: f32, + resonance: f32, +} + +impl Node for ModulatableFilter { + fn process(&mut self, inputs: &[Input], output: &mut [f32]) { + let audio_in = &inputs[0]; // Port 0: audio input + + // Port 1 (optional): cutoff modulation + if inputs.len() > 1 { + let mod_signal = &inputs[1]; + // Modulate cutoff: base + modulation + self.filter.set_cutoff(self.cutoff + mod_signal[0] * 1000.0); + } + + // Process audio + self.filter.process(audio_in, output); + } + + fn num_inputs(&self) -> usize { 2 } // Audio + modulation + fn num_outputs(&self) -> usize { 1 } + + fn reset(&mut self) { + self.filter.reset(); + } +} +``` + +### Graph Execution Order + +`dasp_graph` automatically determines execution order using topological sort: + +1. Nodes with no dependencies execute first (inputs, oscillators) +2. Nodes execute when all inputs are ready +3. Cycles are detected and prevented +4. Output nodes execute last + +This ensures: +- No node processes before its inputs are ready +- Efficient CPU cache usage +- Deterministic execution + +### Performance Considerations + +#### Graph Overhead + +Node graphs have small overhead: +- **Topological sort**: Done once when graph changes, not per-buffer +- **Buffer copying**: Minimized by reusing buffers +- **Indirection**: Virtual function calls (unavoidable with trait objects) + +For simple serial chains, the overhead is negligible (<1% CPU). 
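
To make the execution-order guarantees above concrete, here is a self-contained sketch of Kahn's algorithm, one standard way to compute a topological order. It uses plain Rust with no `dasp_graph` types — the node indices are hypothetical stand-ins for the nodes in the track-graph diagram earlier in this section, not Lightningbeam code:

```rust
use std::collections::VecDeque;

/// Kahn's algorithm over an adjacency list. Returns None if the
/// graph contains a cycle (fewer than n nodes can be scheduled).
fn topological_order(n: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut adj = vec![Vec::new(); n];
    let mut indegree = vec![0usize; n];
    for &(from, to) in edges {
        adj[from].push(to);
        indegree[to] += 1;
    }
    // Nodes with no dependencies (inputs, oscillators) are ready first.
    let mut ready: VecDeque<usize> = (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(node) = ready.pop_front() {
        order.push(node);
        for &next in &adj[node] {
            indegree[next] -= 1;
            if indegree[next] == 0 {
                ready.push_back(next);
            }
        }
    }
    if order.len() == n { Some(order) } else { None }
}

fn main() {
    // 0 = input, 1 = filter, 2 = distortion, 3 = mixer, 4 = output
    let edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)];
    let order = topological_order(5, &edges).expect("graph has no cycles");
    assert_eq!(order[0], 0);               // input runs first
    assert_eq!(*order.last().unwrap(), 4); // output runs last
    println!("{:?}", order); // [0, 1, 2, 3, 4]
}
```

The same mechanism is what lets a cycle be rejected up front: if the queue empties before every node is scheduled, some node's inputs can never become ready.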
+
+#### When to Use Node Graphs vs Simple Chains
+
+**Use node graphs when:**
+- Complex routing (parallel, feedback, modulation)
+- Building synthesizers from components
+- User-configurable effect routing
+- Sidechain processing
+
+**Use simple chains when:**
+- Just a few effects in series
+- Performance is critical
+- Graph structure never changes
+
+**Note**: In Lightningbeam, audio layers always use node graphs to provide maximum flexibility for users. This allows any track to have complex routing, modular synthesis, or effect configurations without requiring different track types.
+
+```rust
+// Simple chain (no graph overhead)
+pub struct SimpleChain {
+    effects: Vec<Box<dyn AudioEffect>>,
+}
+
+impl SimpleChain {
+    fn process(&mut self, buffer: &mut [f32]) {
+        for effect in &mut self.effects {
+            effect.process_in_place(buffer);
+        }
+    }
+}
+```
+
+### Debugging Node Graphs
+
+Enable graph visualization:
+
+```rust
+// Print graph structure
+println!("{:?}", graph);
+
+// Export to DOT format for visualization
+let dot = graph.to_dot();
+std::fs::write("graph.dot", dot)?;
+// Then: dot -Tpng graph.dot -o graph.png
+```
+
+Trace signal flow:
+
+```rust
+// Add probe nodes to inspect signals
+let probe = graph.add_effect(Box::new(SignalProbe::new("After Filter")));
+graph.connect(filter, probe);
+graph.connect(probe, output);
+
+// Probe prints min/max/RMS of signal
+```
+
+## Adding Effects
+
+### Effect Trait
+
+All effects implement the `AudioEffect` trait:
+
+```rust
+pub trait AudioEffect: Send {
+    fn process(&mut self, input: &[f32], output: &mut [f32]);
+    fn process_in_place(&mut self, buffer: &mut [f32]);
+    fn reset(&mut self);
+}
+```
+
+### Example: Simple Gain Effect
+
+```rust
+pub struct Gain {
+    gain: f32,
+}
+
+impl Gain {
+    pub fn new(gain: f32) -> Self {
+        Self { gain }
+    }
+}
+
+impl AudioEffect for Gain {
+    fn process(&mut self, input: &[f32], output: &mut [f32]) {
+        for (i, &sample) in input.iter().enumerate() {
+            output[i] = sample * self.gain;
+        }
+    }
+
+    fn process_in_place(&mut self, buffer: &mut [f32]) {
+        for sample in buffer.iter_mut() {
+            *sample *= self.gain;
+        }
+    }
+
+    fn reset(&mut self) {
+        // No state to reset for gain
+    }
+}
+```
+
+### Example: Delay Effect (with state)
+
+```rust
+pub struct Delay {
+    buffer: Vec<f32>,
+    write_pos: usize,
+    delay_samples: usize,
+    feedback: f32,
+    mix: f32,
+}
+
+impl Delay {
+    pub fn new(sample_rate: f32, delay_time: f32, feedback: f32, mix: f32) -> Self {
+        let delay_samples = (delay_time * sample_rate) as usize;
+        let buffer_size = delay_samples.next_power_of_two();
+
+        Self {
+            buffer: vec![0.0; buffer_size],
+            write_pos: 0,
+            delay_samples,
+            feedback,
+            mix,
+        }
+    }
+}
+
+impl AudioEffect for Delay {
+    fn process(&mut self, input: &[f32], output: &mut [f32]) {
+        // Delegate to the in-place path
+        output.copy_from_slice(input);
+        self.process_in_place(output);
+    }
+
+    fn process_in_place(&mut self, buffer: &mut [f32]) {
+        for sample in buffer.iter_mut() {
+            // Read delayed sample
+            let read_pos = (self.write_pos + self.buffer.len() - self.delay_samples)
+                % self.buffer.len();
+            let delayed = self.buffer[read_pos];
+
+            // Write new sample with feedback
+            self.buffer[self.write_pos] = *sample + delayed * self.feedback;
+            self.write_pos = (self.write_pos + 1) % self.buffer.len();
+
+            // Mix dry and wet signals
+            *sample = *sample * (1.0 - self.mix) + delayed * self.mix;
+        }
+    }
+
+    fn reset(&mut self) {
+        self.buffer.fill(0.0);
+        self.write_pos = 0;
+    }
+}
+```
+
+### Adding Effects to Tracks
+
+```rust
+// UI Thread
+let command = AudioCommand::AddEffect {
+    track_id,
+    effect: Box::new(Delay::new(44100.0, 0.5, 0.3, 0.5)),
+};
+command_sender.push(command).ok();
+```
+
+### Built-In Effects
+
+Located in `daw-backend/src/effects/`:
+
+- **reverb.rs**: Reverb
+- **delay.rs**: Delay
+- **eq.rs**: Equalizer
+- **compressor.rs**: Dynamic range compressor
+- **distortion.rs**: Distortion/overdrive
+- **chorus.rs**: Chorus
+- **flanger.rs**: Flanger
+- **phaser.rs**: Phaser
+- **limiter.rs**: Brick-wall limiter
+
+## Adding Synthesizers
+
+### Synthesizer Trait
+
+```rust
+pub trait Synthesizer: Send {
+    fn process(&mut self, output:
&mut [f32], sample_rate: f32);
+    fn note_on(&mut self, note: u8, velocity: u8);
+    fn note_off(&mut self, note: u8);
+    fn reset(&mut self);
+}
+```
+
+### Example: Simple Oscillator
+
+```rust
+pub struct Oscillator {
+    phase: f32,
+    frequency: f32,
+    amplitude: f32,
+    sample_rate: f32,
+}
+
+impl Oscillator {
+    pub fn new(sample_rate: f32) -> Self {
+        Self {
+            phase: 0.0,
+            frequency: 440.0,
+            amplitude: 0.0,
+            sample_rate,
+        }
+    }
+}
+
+impl Synthesizer for Oscillator {
+    fn process(&mut self, output: &mut [f32], _sample_rate: f32) {
+        for sample in output.iter_mut() {
+            // Generate sine wave
+            *sample = (self.phase * 2.0 * std::f32::consts::PI).sin() * self.amplitude;
+
+            // Advance phase
+            self.phase += self.frequency / self.sample_rate;
+            if self.phase >= 1.0 {
+                self.phase -= 1.0;
+            }
+        }
+    }
+
+    fn note_on(&mut self, note: u8, velocity: u8) {
+        // Convert MIDI note to frequency
+        self.frequency = 440.0 * 2.0_f32.powf((note as f32 - 69.0) / 12.0);
+        self.amplitude = velocity as f32 / 127.0;
+    }
+
+    fn note_off(&mut self, _note: u8) {
+        self.amplitude = 0.0;
+    }
+
+    fn reset(&mut self) {
+        self.phase = 0.0;
+        self.amplitude = 0.0;
+    }
+}
+```
+
+### Built-In Synthesizers
+
+Located in `daw-backend/src/synth/`:
+
+- **oscillator.rs**: Basic waveform generator (sine, saw, square, triangle)
+- **fm_synth.rs**: FM synthesis
+- **wavetable.rs**: Wavetable synthesis
+- **sampler.rs**: Sample-based synthesis
+
+## MIDI System
+
+### MIDI Input
+
+```rust
+// Setup MIDI input (UI thread)
+let midi_input = midir::MidiInput::new("Lightningbeam")?;
+// ports() returns owned port descriptors; clone the one we want
+let port = midi_input.ports()[0].clone();
+
+midi_input.connect(&port, "input", move |_timestamp, message, _| {
+    // Parse MIDI message
+    match message[0] & 0xF0 {
+        0x90 => {
+            // Note On
+            let note = message[1];
+            let velocity = message[2];
+            command_sender.push(AudioCommand::NoteOn { note, velocity }).ok();
+        }
+        0x80 => {
+            // Note Off
+            let note = message[1];
+            command_sender.push(AudioCommand::NoteOff { note }).ok();
+        }
+        _ =>
{} + } +}, ())?; +``` + +### MIDI File Parsing + +```rust +use midly::{Smf, TrackEventKind}; + +let smf = Smf::parse(&midi_data)?; +for track in smf.tracks { + for event in track { + match event.kind { + TrackEventKind::Midi { channel, message } => { + // Process MIDI message + } + _ => {} + } + } +} +``` + +## Performance Optimization + +### Pre-Allocation + +Allocate all buffers before audio thread starts: + +```rust +// Good: Pre-allocated +pub struct Track { + buffer: Vec, // Allocated once in constructor + // ... +} + +// Bad: Allocates in audio thread +fn process(&mut self) { + let mut temp = Vec::new(); // ❌ Allocates! + // ... +} +``` + +### Memory-Mapped Audio Files + +Large audio files use memory-mapped I/O for zero-copy access: + +```rust +use memmap2::Mmap; + +let file = File::open(path)?; +let mmap = unsafe { Mmap::map(&file)? }; +// Audio samples can be read directly from mmap +``` + +### SIMD Optimization + +For portable SIMD operations, use the `fearless_simd` crate: + +```rust +use fearless_simd::*; + +fn process_simd(samples: &mut [f32], gain: f32) { + // Automatically uses best available SIMD instructions + // (SSE, AVX, NEON, etc.) 
without unsafe code + for chunk in samples.chunks_exact_mut(f32x8::LEN) { + let simd_samples = f32x8::from_slice(chunk); + let simd_gain = f32x8::splat(gain); + let result = simd_samples * simd_gain; + result.write_to_slice(chunk); + } + + // Handle remainder + let remainder = samples.chunks_exact_mut(f32x8::LEN).into_remainder(); + for sample in remainder { + *sample *= gain; + } +} +``` + +This approach is: +- **Portable**: Works across x86, ARM, and other architectures +- **Safe**: No unsafe code required +- **Automatic**: Uses best available SIMD instructions for the target +- **Fallback**: Gracefully degrades on platforms without SIMD + +### Avoid Branching in Inner Loops + +```rust +// Bad: Branch in inner loop +for sample in samples.iter_mut() { + if self.gain > 0.5 { + *sample *= 2.0; + } +} + +// Good: Branch outside loop +let multiplier = if self.gain > 0.5 { 2.0 } else { 1.0 }; +for sample in samples.iter_mut() { + *sample *= multiplier; +} +``` + +## Debugging Audio Issues + +### Enable Debug Logging + +```bash +DAW_AUDIO_DEBUG=1 cargo run +``` + +Output includes: +``` +[AUDIO] Buffer size: 256 frames (5.8ms at 44100 Hz) +[AUDIO] Processing time: avg=0.8ms, worst=2.1ms +[AUDIO] Playhead: 1.234s +[AUDIO] WARNING: Audio overrun detected! 
+``` + +### Common Issues + +#### Audio Dropouts + +**Symptoms**: Clicks, pops, glitches in audio output + +**Causes**: +- Audio callback taking too long +- Blocking operation in audio thread +- Insufficient CPU resources + +**Solutions**: +- Increase buffer size (reduces CPU pressure, increases latency) +- Optimize audio processing code +- Remove debug prints from audio thread +- Check `DAW_AUDIO_DEBUG=1` output for timing info + +#### Crackling/Distortion + +**Symptoms**: Harsh, noisy audio + +**Causes**: +- Samples exceeding [-1.0, 1.0] range (clipping) +- Incorrect sample rate conversion +- Denormal numbers in filters + +**Solutions**: +- Add limiter to master output +- Use hard clipping: `sample.clamp(-1.0, 1.0)` +- Enable flush-to-zero for denormals + +#### No Audio Output + +**Symptoms**: Silence, but no errors + +**Causes**: +- Audio device not found +- Wrong device selected +- All tracks muted +- Volume set to zero + +**Solutions**: +- Check `cpal` device enumeration +- Verify track volumes and mute states +- Check master volume +- Test with simple sine wave + +### Profiling Audio Performance + +```bash +# Use perf on Linux +perf record --call-graph dwarf cargo run --release +perf report + +# Look for hot spots in Engine::process() +``` + +## Related Documentation + +- [ARCHITECTURE.md](../ARCHITECTURE.md) - Overall system architecture +- [docs/UI_SYSTEM.md](UI_SYSTEM.md) - UI integration with audio system +- [docs/BUILDING.md](BUILDING.md) - Build troubleshooting diff --git a/docs/BUILDING.md b/docs/BUILDING.md new file mode 100644 index 0000000..12cb42f --- /dev/null +++ b/docs/BUILDING.md @@ -0,0 +1,523 @@ +# Building Lightningbeam + +This guide provides detailed instructions for building Lightningbeam on different platforms, including dependency installation, troubleshooting, and advanced build configurations. 
+ +## Table of Contents + +- [Quick Start](#quick-start) +- [Platform-Specific Instructions](#platform-specific-instructions) +- [Dependencies](#dependencies) +- [Build Configurations](#build-configurations) +- [Troubleshooting](#troubleshooting) +- [Development Builds](#development-builds) + +## Quick Start + +```bash +# Clone the repository +git clone https://github.com/skykooler/lightningbeam.git +cd lightningbeam/lightningbeam-ui + +# Build and run +cargo build +cargo run +``` + +## Platform-Specific Instructions + +### Linux + +#### Ubuntu/Debian + +**Important**: Lightningbeam requires FFmpeg 8, which may not be in the default repositories. + +```bash +# Install basic dependencies +sudo apt update +sudo apt install -y \ + build-essential \ + pkg-config \ + libasound2-dev \ + clang \ + libclang-dev + +# Install FFmpeg 8 from PPA (Ubuntu) +sudo add-apt-repository ppa:ubuntuhandbook1/ffmpeg7 +sudo apt update +sudo apt install -y \ + ffmpeg \ + libavcodec-dev \ + libavformat-dev \ + libavutil-dev \ + libswscale-dev \ + libswresample-dev + +# Verify FFmpeg version (should be 8.x) +ffmpeg -version + +# Install Rust if needed +curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh + +# Build +cd lightningbeam-ui +cargo build --release +``` + +**Note**: If the PPA doesn't provide FFmpeg 8, you may need to compile FFmpeg from source or find an alternative PPA. See [FFmpeg Issues](#ffmpeg-issues) for details. 
+ +#### Arch Linux/Manjaro + +```bash +# Install system dependencies +sudo pacman -S --needed \ + base-devel \ + rust \ + alsa-lib \ + ffmpeg \ + clang + +# Build +cd lightningbeam-ui +cargo build --release +``` + +#### Fedora/RHEL + +```bash +# Install system dependencies +sudo dnf install -y \ + gcc \ + gcc-c++ \ + make \ + pkg-config \ + alsa-lib-devel \ + ffmpeg \ + ffmpeg-devel \ + clang \ + clang-devel + +# Install Rust if needed +curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh + +# Build +cd lightningbeam-ui +cargo build --release +``` + +### macOS + +```bash +# Install Homebrew if needed +/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" + +# Install dependencies +brew install rust ffmpeg pkg-config + +# Build +cd lightningbeam-ui +cargo build --release +``` + +**Note**: macOS uses CoreAudio for audio I/O (via cpal), so no additional audio libraries are needed. + +### Windows + +#### Using Visual Studio + +1. Install [Visual Studio 2022](https://visualstudio.microsoft.com/) with "Desktop development with C++" workload +2. Install [Rust](https://rustup.rs/) +3. Install [FFmpeg](https://ffmpeg.org/download.html#build-windows): + - Download a shared build from https://www.gyan.dev/ffmpeg/builds/ + - Extract to `C:\ffmpeg` + - Add `C:\ffmpeg\bin` to PATH + - Set environment variables: + ```cmd + set FFMPEG_DIR=C:\ffmpeg + set PKG_CONFIG_PATH=C:\ffmpeg\lib\pkgconfig + ``` + +4. Build: + ```cmd + cd lightningbeam-ui + cargo build --release + ``` + +#### Using MSYS2/MinGW + +```bash +# In MSYS2 shell +pacman -S mingw-w64-x86_64-rust \ + mingw-w64-x86_64-ffmpeg \ + mingw-w64-x86_64-pkg-config + +cd lightningbeam-ui +cargo build --release +``` + +**Note**: Windows uses WASAPI for audio I/O (via cpal), which is built into Windows. 
+ +## Dependencies + +### Required Dependencies + +#### Rust Toolchain +- **Version**: Stable (1.70+) +- **Install**: https://rustup.rs/ +- **Components**: Default installation includes everything needed + +#### Audio I/O (ALSA on Linux) +- **Ubuntu/Debian**: `libasound2-dev` +- **Arch**: `alsa-lib` +- **Fedora**: `alsa-lib-devel` +- **macOS**: CoreAudio (built-in) +- **Windows**: WASAPI (built-in) + +#### FFmpeg +**Version Required**: FFmpeg 8.x + +Required for video encoding/decoding. Note that many distribution repositories may have older versions. + +- **Ubuntu/Debian**: Use PPA for FFmpeg 8 (see [Ubuntu/Debian instructions](#ubuntudebian)) +- **Arch**: `ffmpeg` (usually up-to-date) +- **Fedora**: `ffmpeg ffmpeg-devel` (check version with `ffmpeg -version`) +- **macOS**: `brew install ffmpeg` (Homebrew usually has latest) +- **Windows**: Download FFmpeg 8 from https://ffmpeg.org/download.html + +#### Build Tools +- **Linux**: `build-essential` (Ubuntu), `base-devel` (Arch) +- **macOS**: Xcode Command Line Tools (`xcode-select --install`) +- **Windows**: Visual Studio with C++ tools or MinGW + +#### pkg-config +Required for finding system libraries. 
+ +- **Linux**: Usually included with build tools +- **macOS**: `brew install pkg-config` +- **Windows**: Included with MSYS2/MinGW, or use vcpkg + +### Optional Dependencies + +#### GPU Drivers +Vello requires a GPU with Vulkan (Linux/Windows) or Metal (macOS) support: + +- **Linux Vulkan**: + - NVIDIA: Install proprietary drivers + - AMD: `mesa-vulkan-drivers` (Ubuntu) or `vulkan-radeon` (Arch) + - Intel: `mesa-vulkan-drivers` (Ubuntu) or `vulkan-intel` (Arch) + +- **macOS Metal**: Built-in (macOS 10.13+) + +- **Windows Vulkan**: + - Usually included with GPU drivers + - Manual install: https://vulkan.lunarg.com/ + +## Build Configurations + +### Release Build (Optimized) + +```bash +cargo build --release +``` + +- Optimizations: Level 3 +- LTO: Enabled +- Debug info: None +- Build time: Slower (~5-10 minutes) +- Runtime: Fast + +Binary location: `target/release/lightningbeam-editor` + +### Debug Build (Default) + +```bash +cargo build +``` + +- Optimizations: Level 1 (Level 2 for audio code) +- LTO: Disabled +- Debug info: Full +- Build time: Faster (~2-5 minutes) +- Runtime: Slower (but audio is still optimized) + +Binary location: `target/debug/lightningbeam-editor` + +**Note**: Audio code is always compiled with `opt-level = 2` even in debug builds to meet real-time deadlines. This is configured in `lightningbeam-ui/Cargo.toml`: + +```toml +[profile.dev.package.daw-backend] +opt-level = 2 +``` + +### Check Without Building + +Quickly check for compilation errors without producing binaries: + +```bash +cargo check +``` + +Useful for rapid feedback during development. 
+
+### Build Specific Package
+
+```bash
+# Check only the audio backend
+cargo check -p daw-backend
+
+# Build only the core library
+cargo build -p lightningbeam-core
+```
+
+## Troubleshooting
+
+### Audio Issues
+
+#### "ALSA lib cannot find card" or similar errors
+
+**Solution**: Install ALSA development files:
+```bash
+# Ubuntu/Debian
+sudo apt install libasound2-dev
+
+# Arch
+sudo pacman -S alsa-lib
+```
+
+#### Audio dropouts or crackling
+
+**Symptoms**: Console shows "Audio overrun" or timing warnings.
+
+**Solutions**:
+1. Increase buffer size in `daw-backend/src/lib.rs` (default: 256 frames)
+2. Enable audio debug logging:
+   ```bash
+   DAW_AUDIO_DEBUG=1 cargo run
+   ```
+3. Make sure audio code is optimized (check `Cargo.toml` profile settings)
+4. Close other audio applications
+
+#### "PulseAudio" or "JACK" errors in container
+
+**Note**: This is expected in containerized environments without audio support. These errors don't occur on native systems.
+
+### FFmpeg Issues
+
+#### "Could not find FFmpeg libraries" or linking errors
+
+**Version Check First**:
+```bash
+ffmpeg -version
+# Should show version 8.x
+```
+
+**Linux**:
+```bash
+# Ubuntu/Debian - requires FFmpeg 8 from PPA
+sudo add-apt-repository ppa:ubuntuhandbook1/ffmpeg7
+sudo apt update
+sudo apt install libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libswresample-dev
+
+# Arch (usually has latest)
+sudo pacman -S ffmpeg
+
+# Check installation
+pkg-config --modversion libavcodec
+# Should show 62.x (FFmpeg 8); 61.x indicates FFmpeg 7
+```
+
+If the PPA doesn't work or doesn't have FFmpeg 8, you may need to compile from source:
+```bash
+# Download and compile FFmpeg 8
+wget https://ffmpeg.org/releases/ffmpeg-8.0.tar.xz
+tar xf ffmpeg-8.0.tar.xz
+cd ffmpeg-8.0
+./configure --enable-shared --disable-static
+make -j$(nproc)
+sudo make install
+sudo ldconfig
+```
+
+**macOS**:
+```bash
+brew install ffmpeg
+export PKG_CONFIG_PATH="/opt/homebrew/opt/ffmpeg/lib/pkgconfig:$PKG_CONFIG_PATH"
+``` + +**Windows**: +Set environment variables: +```cmd +set FFMPEG_DIR=C:\path\to\ffmpeg +set PKG_CONFIG_PATH=C:\path\to\ffmpeg\lib\pkgconfig +``` + +#### "Unsupported codec" or video not playing + +Make sure FFmpeg was compiled with the necessary codecs: +```bash +ffmpeg -codecs | grep h264 # Check for H.264 +ffmpeg -codecs | grep vp9 # Check for VP9 +``` + +### GPU/Rendering Issues + +#### Black screen or no rendering + +**Check GPU support**: +```bash +# Linux - check Vulkan +vulkaninfo | grep deviceName + +# macOS - Metal is built-in on 10.13+ +system_profiler SPDisplaysDataType +``` + +**Solutions**: +1. Update GPU drivers +2. Install Vulkan runtime (Linux) +3. Check console for wgpu errors + +#### "No suitable GPU adapter found" + +This usually means missing Vulkan/Metal support. + +**Linux**: Install Vulkan drivers (see [Optional Dependencies](#optional-dependencies)) + +**macOS**: Requires macOS 10.13+ (Metal support) + +**Windows**: Update GPU drivers + +### Build Performance + +#### Slow compilation times + +**Solutions**: +1. Use `cargo check` instead of `cargo build` during development +2. Enable incremental compilation (enabled by default) +3. Use `mold` linker (Linux): + ```bash + # Install mold + sudo apt install mold # Ubuntu 22.04+ + + # Use mold + mold -run cargo build + ``` +4. Increase parallel jobs: + ```bash + cargo build -j 8 # Use 8 parallel jobs + ``` + +#### Out of memory during compilation + +**Solution**: Reduce parallel jobs: +```bash +cargo build -j 2 # Use only 2 parallel jobs +``` + +### Linker Errors + +#### "undefined reference to..." or "cannot find -l..." + +**Cause**: Missing system libraries. + +**Solution**: Install all dependencies listed in [Platform-Specific Instructions](#platform-specific-instructions). + +#### Windows: "LNK1181: cannot open input file" + +**Cause**: FFmpeg libraries not found. + +**Solution**: +1. Download FFmpeg shared build +2. Set `FFMPEG_DIR` environment variable +3. 
Add FFmpeg bin directory to PATH + +## Development Builds + +### Enable Audio Debug Logging + +```bash +DAW_AUDIO_DEBUG=1 cargo run +``` + +Output includes: +- Buffer sizes +- Average/worst-case processing times +- Audio overruns/underruns +- Playhead position updates + +### Disable Optimizations for Specific Crates + +Edit `lightningbeam-ui/Cargo.toml`: + +```toml +[profile.dev.package.my-crate] +opt-level = 0 # No optimizations +``` + +**Warning**: Do not disable optimizations for `daw-backend` or audio-related crates, as this will cause audio dropouts. + +### Build with Specific Features + +```bash +# Build with all features +cargo build --all-features + +# Build with no default features +cargo build --no-default-features +``` + +### Clean Build + +Remove all build artifacts and start fresh: + +```bash +cargo clean +cargo build +``` + +Useful when dependencies change or build cache becomes corrupted. + +### Cross-Compilation + +Cross-compiling is not currently documented but should be possible using `cross`: + +```bash +cargo install cross +cross build --target x86_64-unknown-linux-gnu +``` + +See [cross documentation](https://github.com/cross-rs/cross) for details. + +## Running Tests + +```bash +# Run all tests +cargo test + +# Run tests for specific package +cargo test -p lightningbeam-core + +# Run with output +cargo test -- --nocapture + +# Run specific test +cargo test test_name +``` + +## Building Documentation + +Generate and open Rust API documentation: + +```bash +cargo doc --open +``` + +This generates HTML documentation from code comments and opens it in your browser. 
+ +## Next Steps + +After building successfully: + +- See [CONTRIBUTING.md](../CONTRIBUTING.md) for development workflow +- See [ARCHITECTURE.md](../ARCHITECTURE.md) for system architecture +- See [docs/AUDIO_SYSTEM.md](AUDIO_SYSTEM.md) for audio engine details +- See [docs/UI_SYSTEM.md](UI_SYSTEM.md) for UI development diff --git a/docs/RENDERING.md b/docs/RENDERING.md new file mode 100644 index 0000000..060f7c4 --- /dev/null +++ b/docs/RENDERING.md @@ -0,0 +1,812 @@ +# GPU Rendering Architecture + +This document describes Lightningbeam's GPU rendering pipeline, including Vello integration for vector graphics, custom WGSL shaders for waveforms, and wgpu integration patterns. + +## Table of Contents + +- [Overview](#overview) +- [Rendering Pipeline](#rendering-pipeline) +- [Vello Integration](#vello-integration) +- [Waveform Rendering](#waveform-rendering) +- [WGSL Shaders](#wgsl-shaders) +- [Uniform Buffer Alignment](#uniform-buffer-alignment) +- [Custom wgpu Integration](#custom-wgpu-integration) +- [Performance Optimization](#performance-optimization) +- [Debugging Rendering Issues](#debugging-rendering-issues) + +## Overview + +Lightningbeam uses GPU-accelerated rendering for high-performance 2D graphics: + +- **Vello**: Compute shader-based 2D vector rendering +- **wgpu 27**: Cross-platform GPU API (Vulkan, Metal, D3D12) +- **egui-wgpu**: Integration layer between egui and wgpu +- **Custom WGSL shaders**: For specialized rendering (waveforms, effects) + +### Supported Backends + +- **Linux**: Vulkan (primary), OpenGL (fallback) +- **macOS**: Metal +- **Windows**: Vulkan, DirectX 12 + +## Rendering Pipeline + +### High-Level Flow + +``` +┌─────────────────────────────────────────────────────────────┐ +│ Application Frame │ +├─────────────────────────────────────────────────────────────┤ +│ │ +│ 1. egui Layout Phase │ +│ - Build UI tree │ +│ - Collect paint primitives │ +│ - Register wgpu callbacks │ +│ │ +│ 2. 
Custom GPU Rendering (via egui_wgpu::Callback) │ +│ ┌────────────────────────────────────────────────┐ │ +│ │ prepare(): │ │ +│ │ - Build Vello scene from document │ │ +│ │ - Update uniform buffers │ │ +│ │ - Generate waveform mipmaps (if needed) │ │ +│ └────────────────────────────────────────────────┘ │ +│ ┌────────────────────────────────────────────────┐ │ +│ │ paint(): │ │ +│ │ - Render Vello scene to texture │ │ +│ │ - Render waveforms │ │ +│ │ - Composite layers │ │ +│ └────────────────────────────────────────────────┘ │ +│ │ +│ 3. egui Paint │ +│ - Render egui UI elements │ +│ - Composite with custom rendering │ +│ │ +│ 4. Present to Screen │ +│ │ +└─────────────────────────────────────────────────────────────┘ +``` + +### Render Pass Structure + +``` +Main Render Pass +├─> Clear screen +├─> Custom wgpu callbacks (Stage pane, etc.) +│ ├─> Vello vector rendering +│ └─> Waveform rendering +└─> egui UI rendering (text, widgets, overlays) +``` + +## Vello Integration + +Vello is a GPU-accelerated 2D rendering engine that uses compute shaders for high-performance vector graphics. + +### Vello Architecture + +``` +Document Shapes + ↓ +Convert to kurbo paths + ↓ +Build Vello Scene + ↓ +Vello Renderer (compute shaders) + ↓ +Render to GPU texture + ↓ +Composite with UI +``` + +### Building a Vello Scene + +```rust +use vello::{Scene, SceneBuilder, kurbo::{Affine, BezPath}}; +use peniko::{Color, Fill, Brush}; + +fn build_vello_scene(document: &Document) -> Scene { + let mut scene = Scene::new(); + let mut builder = SceneBuilder::for_scene(&mut scene); + + for layer in &document.layers { + if let Layer::VectorLayer { clips, visible, .. 
} = layer { + if !visible { + continue; + } + + for clip in clips { + for shape_instance in &clip.shapes { + // Get transform for this shape + let transform = shape_instance.compute_world_transform(); + let affine = to_vello_affine(transform); + + // Convert shape to kurbo path + let path = shape_to_kurbo_path(&shape_instance.shape); + + // Fill + if let Some(fill_color) = shape_instance.shape.fill { + let brush = Brush::Solid(to_peniko_color(fill_color)); + builder.fill( + Fill::NonZero, + affine, + &brush, + None, + &path, + ); + } + + // Stroke + if let Some(stroke) = &shape_instance.shape.stroke { + let brush = Brush::Solid(to_peniko_color(stroke.color)); + let stroke_style = vello::kurbo::Stroke::new(stroke.width); + builder.stroke( + &stroke_style, + affine, + &brush, + None, + &path, + ); + } + } + } + } + } + + scene +} +``` + +### Shape to Kurbo Path Conversion + +```rust +use kurbo::{BezPath, PathEl, Point}; + +fn shape_to_kurbo_path(shape: &Shape) -> BezPath { + let mut path = BezPath::new(); + + if shape.curves.is_empty() { + return path; + } + + // Start at first point + path.move_to(Point::new( + shape.curves[0].start.x as f64, + shape.curves[0].start.y as f64, + )); + + // Add curves + for curve in &shape.curves { + match curve.curve_type { + CurveType::Linear => { + path.line_to(Point::new( + curve.end.x as f64, + curve.end.y as f64, + )); + } + CurveType::Quadratic => { + path.quad_to( + Point::new(curve.control1.x as f64, curve.control1.y as f64), + Point::new(curve.end.x as f64, curve.end.y as f64), + ); + } + CurveType::Cubic => { + path.curve_to( + Point::new(curve.control1.x as f64, curve.control1.y as f64), + Point::new(curve.control2.x as f64, curve.control2.y as f64), + Point::new(curve.end.x as f64, curve.end.y as f64), + ); + } + } + } + + // Close path if needed + if shape.closed { + path.close_path(); + } + + path +} +``` + +### Vello Renderer Setup + +```rust +use vello::{Renderer, RendererOptions, RenderParams}; +use wgpu; + +pub 
struct VelloRenderer {
+    renderer: Renderer,
+    surface_format: wgpu::TextureFormat,
+}
+
+impl VelloRenderer {
+    pub fn new(device: &wgpu::Device, surface_format: wgpu::TextureFormat) -> Self {
+        let renderer = Renderer::new(
+            device,
+            RendererOptions {
+                surface_format: Some(surface_format),
+                use_cpu: false,
+                antialiasing_support: vello::AaSupport::all(),
+                num_init_threads: None,
+            },
+        ).expect("Failed to create Vello renderer");
+
+        Self {
+            renderer,
+            surface_format,
+        }
+    }
+
+    pub fn render(
+        &mut self,
+        device: &wgpu::Device,
+        queue: &wgpu::Queue,
+        scene: &Scene,
+        texture: &wgpu::TextureView,
+        width: u32,
+        height: u32,
+    ) {
+        let params = RenderParams {
+            base_color: peniko::Color::TRANSPARENT,
+            width,
+            height,
+            antialiasing_method: vello::AaConfig::Msaa16,
+        };
+
+        self.renderer
+            .render_to_texture(device, queue, scene, texture, &params)
+            .expect("Failed to render Vello scene");
+    }
+}
+```
+
+## Waveform Rendering
+
+Audio waveforms are rendered on the GPU using custom WGSL shaders with mipmapping for efficient zooming.
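Before looking at the GPU resources, the min/max reduction itself can be prototyped on the CPU. The sketch below (illustrative only, not Lightningbeam's actual code) builds the same 4x-per-level pyramid the mip chain encodes: level 0 holds one `(min, max)` pair per sample, and each following level collapses 4 pairs into 1.

```rust
/// One (min, max) amplitude pair per bucket of samples.
type MinMax = (f32, f32);

/// Build a 4x-per-level min/max pyramid, mirroring the GPU mip chain.
fn build_minmax_pyramid(samples: &[f32]) -> Vec<Vec<MinMax>> {
    let mut levels: Vec<Vec<MinMax>> = Vec::new();
    // Level 0: every sample is its own min and max
    levels.push(samples.iter().map(|&s| (s, s)).collect());

    // Each subsequent level merges 4 pairs into 1 until one pair remains
    while levels.last().map_or(false, |l| l.len() > 1) {
        let prev = levels.last().unwrap();
        let next: Vec<MinMax> = prev
            .chunks(4)
            .map(|c| {
                let min = c.iter().map(|p| p.0).fold(f32::INFINITY, f32::min);
                let max = c.iter().map(|p| p.1).fold(f32::NEG_INFINITY, f32::max);
                (min, max)
            })
            .collect();
        levels.push(next);
    }
    levels
}
```

Rendering at any zoom level then only needs to read the level whose bucket size best matches one screen pixel.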
+
+### Waveform GPU Resources
+
+```rust
+pub struct WaveformGPU {
+    // Waveform data texture (min/max per sample)
+    texture: wgpu::Texture,
+    texture_view: wgpu::TextureView,
+
+    // Mipmap chain for level-of-detail (one view per mip level)
+    mip_levels: Vec<wgpu::TextureView>,
+
+    // Render pipeline
+    pipeline: wgpu::RenderPipeline,
+
+    // Uniform buffer for view parameters
+    uniform_buffer: wgpu::Buffer,
+    bind_group: wgpu::BindGroup,
+}
+```
+
+### Waveform Texture Format
+
+Each texel stores min/max amplitude for a sample range:
+
+```
+Texture Format: Rgba16Float (4 channels, 16-bit float each)
+- R channel: Left channel minimum amplitude in range [-1, 1]
+- G channel: Left channel maximum amplitude in range [-1, 1]
+- B channel: Right channel minimum amplitude in range [-1, 1]
+- A channel: Right channel maximum amplitude in range [-1, 1]
+
+Mip level 0: Per-sample min/max (1x)
+Mip level 1: Per-4-sample min/max (1/4x)
+Mip level 2: Per-16-sample min/max (1/16x)
+Mip level 3: Per-64-sample min/max (1/64x)
+...
+
+Each mip level reduces by 4x, not 2x, for efficient zooming.
+```
+
+### Generating Waveform Texture
+
+```rust
+fn generate_waveform_texture(
+    device: &wgpu::Device,
+    queue: &wgpu::Queue,
+    audio_samples: &[f32],
+) -> wgpu::Texture {
+    // Calculate mip levels
+    let width = audio_samples.len() as u32;
+    let mip_levels = (width as f32).log2().floor() as u32 + 1;
+
+    // Create texture
+    // (This simplified example stores a mono min/max pair per texel in
+    // Rg32Float; the stereo path described above uses Rgba16Float.)
+    let texture = device.create_texture(&wgpu::TextureDescriptor {
+        label: Some("Waveform Texture"),
+        size: wgpu::Extent3d {
+            width,
+            height: 1,
+            depth_or_array_layers: 1,
+        },
+        mip_level_count: mip_levels,
+        sample_count: 1,
+        dimension: wgpu::TextureDimension::D1,
+        format: wgpu::TextureFormat::Rg32Float,
+        usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
+        view_formats: &[],
+    });
+
+    // Upload base level (per-sample min/max)
+    let mut data: Vec<f32> = Vec::with_capacity(width as usize * 2);
+    for &sample in audio_samples {
+        data.push(sample); // min
+        data.push(sample); // max
+    }
+
+    queue.write_texture(
+        wgpu::ImageCopyTexture {
+            texture: &texture,
+            mip_level: 0,
+            origin: wgpu::Origin3d::ZERO,
+            aspect: wgpu::TextureAspect::All,
+        },
+        bytemuck::cast_slice(&data),
+        wgpu::ImageDataLayout {
+            offset: 0,
+            bytes_per_row: Some(width * 8), // 2 floats * 4 bytes
+            rows_per_image: None,
+        },
+        wgpu::Extent3d {
+            width,
+            height: 1,
+            depth_or_array_layers: 1,
+        },
+    );
+
+    texture
+}
+```
+
+### Mipmap Generation (Compute Shader)
+
+```rust
+// Compute shader generates mipmaps by taking min/max of 4 parent samples
+// Each mip level is 4x smaller than the previous level
+fn generate_mipmaps(
+    device: &wgpu::Device,
+    queue: &wgpu::Queue,
+    texture: &wgpu::Texture,
+    base_width: u32,
+    base_height: u32,
+    mip_count: u32,
+    base_sample_count: u32,
+) -> Vec<wgpu::CommandBuffer> {
+    if mip_count <= 1 {
+        return Vec::new();
+    }
+
+    let mut encoder = device.create_command_encoder(&Default::default());
+
+    let mut src_width = base_width;
+    let mut src_height = base_height;
+    let mut src_sample_count = base_sample_count;
+
+    for level in 
1..mip_count {
+        // Dimensions halve (2x2 texels -> 1 texel)
+        let dst_width = (src_width / 2).max(1);
+        let dst_height = (src_height / 2).max(1);
+        // But sample count reduces by 4x (4 samples -> 1)
+        let dst_sample_count = (src_sample_count + 3) / 4;
+
+        let src_view = texture.create_view(&wgpu::TextureViewDescriptor {
+            base_mip_level: level - 1,
+            mip_level_count: Some(1),
+            ..Default::default()
+        });
+
+        let dst_view = texture.create_view(&wgpu::TextureViewDescriptor {
+            base_mip_level: level,
+            mip_level_count: Some(1),
+            ..Default::default()
+        });
+
+        let params = MipgenParams {
+            src_width,
+            dst_width,
+            src_sample_count,
+            _pad: 0,
+        };
+        // Requires `use wgpu::util::DeviceExt;` for create_buffer_init
+        let params_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
+            label: Some("Mipgen Params"),
+            contents: bytemuck::cast_slice(&[params]),
+            usage: wgpu::BufferUsages::UNIFORM,
+        });
+
+        let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
+            label: None,
+            layout: &mipgen_bind_group_layout,
+            entries: &[
+                wgpu::BindGroupEntry {
+                    binding: 0,
+                    resource: wgpu::BindingResource::TextureView(&src_view),
+                },
+                wgpu::BindGroupEntry {
+                    binding: 1,
+                    resource: wgpu::BindingResource::TextureView(&dst_view),
+                },
+                wgpu::BindGroupEntry {
+                    binding: 2,
+                    resource: params_buffer.as_entire_binding(),
+                },
+            ],
+        });
+
+        // Dispatch compute shader
+        let total_dst_texels = dst_width * dst_height;
+        let workgroup_count = (total_dst_texels + 63) / 64;
+
+        let mut pass = encoder.begin_compute_pass(&Default::default());
+        pass.set_pipeline(&mipgen_pipeline);
+        pass.set_bind_group(0, &bind_group, &[]);
+        pass.dispatch_workgroups(workgroup_count, 1, 1);
+        drop(pass);
+
+        src_width = dst_width;
+        src_height = dst_height;
+        src_sample_count = dst_sample_count;
+    }
+
+    vec![encoder.finish()]
+}
+```
+
+## WGSL Shaders
+
+### Waveform Render Shader
+
+```wgsl
+// waveform.wgsl
+
+struct WaveformParams {
+    view_matrix: mat4x4<f32>,   // 64 bytes
+    viewport_size: vec2<f32>,   // 8 bytes
+    zoom: f32,                  // 4 bytes
+    _pad1: f32,                 // 4 bytes (padding)
+    tint_color: vec4<f32>,      // 
16 bytes (requires 16-byte alignment)
+    // Total: 96 bytes
+}
+
+@group(0) @binding(0) var<uniform> params: WaveformParams;
+@group(0) @binding(1) var waveform_texture: texture_1d<f32>;
+@group(0) @binding(2) var waveform_sampler: sampler;
+
+struct VertexOutput {
+    @builtin(position) position: vec4<f32>,
+    @location(0) uv: vec2<f32>,
+}
+
+@vertex
+fn vs_main(@builtin(vertex_index) vertex_index: u32) -> VertexOutput {
+    // Generate fullscreen quad
+    var positions = array<vec2<f32>, 6>(
+        vec2<f32>(-1.0, -1.0),
+        vec2<f32>( 1.0, -1.0),
+        vec2<f32>( 1.0,  1.0),
+        vec2<f32>(-1.0, -1.0),
+        vec2<f32>( 1.0,  1.0),
+        vec2<f32>(-1.0,  1.0),
+    );
+
+    var output: VertexOutput;
+    output.position = vec4<f32>(positions[vertex_index], 0.0, 1.0);
+    output.uv = (positions[vertex_index] + 1.0) * 0.5;
+    return output;
+}
+
+@fragment
+fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
+    // Sample waveform texture
+    let sample_pos = input.uv.x;
+    let waveform = textureSample(waveform_texture, waveform_sampler, sample_pos);
+
+    // waveform.r = min amplitude, waveform.g = max amplitude
+    let min_amp = waveform.r;
+    let max_amp = waveform.g;
+
+    // Map amplitude to vertical position
+    let center_y = 0.5;
+    let min_y = center_y - min_amp * 0.5;
+    let max_y = center_y + max_amp * 0.5;
+
+    // Check if pixel is within waveform range
+    if (input.uv.y >= min_y && input.uv.y <= max_y) {
+        return params.tint_color;
+    } else {
+        return vec4<f32>(0.0, 0.0, 0.0, 0.0); // Transparent
+    }
+}
+```
+
+### Mipmap Generation Shader
+
+```wgsl
+// waveform_mipgen.wgsl
+
+struct MipgenParams {
+    src_width: u32,
+    dst_width: u32,
+    src_sample_count: u32,
+}
+
+@group(0) @binding(0) var src_texture: texture_2d<f32>;
+@group(0) @binding(1) var dst_texture: texture_storage_2d<rgba16float, write>;
+@group(0) @binding(2) var<uniform> params: MipgenParams;
+
+@compute @workgroup_size(64)
+fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
+    let linear_index = global_id.x;
+
+    // Convert linear index to 2D coordinates
+    let dst_x = linear_index % params.dst_width;
+    let dst_y = linear_index / params.dst_width;
+
+    // Each dst texel corresponds to 4 src samples (not 4 src texels)
+    // But 2D texture layout halves in each dimension
+    let src_x = dst_x * 2u;
+    let src_y = dst_y * 2u;
+
+    // Sample 4 texels from parent level (2x2 block)
+    let s00 = textureLoad(src_texture, vec2<i32>(i32(src_x), i32(src_y)), 0);
+    let s10 = textureLoad(src_texture, vec2<i32>(i32(src_x + 1u), i32(src_y)), 0);
+    let s01 = textureLoad(src_texture, vec2<i32>(i32(src_x), i32(src_y + 1u)), 0);
+    let s11 = textureLoad(src_texture, vec2<i32>(i32(src_x + 1u), i32(src_y + 1u)), 0);
+
+    // Compute min/max across all 4 samples for each channel
+    let left_min = min(min(s00.r, s10.r), min(s01.r, s11.r));
+    let left_max = max(max(s00.g, s10.g), max(s01.g, s11.g));
+    let right_min = min(min(s00.b, s10.b), min(s01.b, s11.b));
+    let right_max = max(max(s00.a, s10.a), max(s01.a, s11.a));
+
+    // Write to destination mip level
+    textureStore(dst_texture, vec2<i32>(i32(dst_x), i32(dst_y)),
+                 vec4<f32>(left_min, left_max, right_min, right_max));
+}
+```
+
+## Uniform Buffer Alignment
+
+WGSL has strict alignment requirements. The most common issue is `vec4<f32>` requiring 16-byte alignment.
+
+### Alignment Rules
+
+```rust
+// ❌ Bad: tint_color not aligned to 16 bytes
+#[repr(C)]
+struct WaveformParams {
+    view_matrix: [f32; 16],    // 64 bytes (offset 0)
+    viewport_size: [f32; 2],   // 8 bytes (offset 64)
+    zoom: f32,                 // 4 bytes (offset 72)
+    tint_color: [f32; 4],      // 16 bytes (offset 76) ❌ Not 16-byte aligned!
+}
+
+// ✅ Good: explicit padding for alignment
+#[repr(C)]
+#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
+struct WaveformParams {
+    view_matrix: [f32; 16],    // 64 bytes (offset 0)
+    viewport_size: [f32; 2],   // 8 bytes (offset 64)
+    zoom: f32,                 // 4 bytes (offset 72)
+    _pad1: f32,                // 4 bytes (offset 76) - padding
+    tint_color: [f32; 4],      // 16 bytes (offset 80) ✅ 16-byte aligned!
+}
+// Total size: 96 bytes
+```
+
+### Common Alignment Requirements
+
+| WGSL Type | Size | Alignment |
+|-----------|------|-----------|
+| `f32` | 4 bytes | 4 bytes |
+| `vec2<f32>` | 8 bytes | 8 bytes |
+| `vec3<f32>` | 12 bytes | 16 bytes ⚠️ |
+| `vec4<f32>` | 16 bytes | 16 bytes |
+| `mat4x4<f32>` | 64 bytes | 16 bytes |
+| Struct | Sum of members | 16 bytes (uniform buffers) |
+
+### Debug Alignment Issues
+
+```rust
+// Use static_assertions to catch layout bugs at compile time
+use static_assertions::const_assert_eq;
+
+// The Rust-side struct must be exactly as large as the WGSL struct
+const_assert_eq!(std::mem::size_of::<WaveformParams>(), 96);
+// Note: align_of reports the CPU-side alignment (4 here); what must match
+// is the WGSL layout, so check field offsets rather than align_of.
+
+// Runtime validation
+fn validate_uniform_buffer<T>(_data: &T) {
+    let size = std::mem::size_of::<T>();
+
+    // WGSL rounds uniform structs up to a multiple of 16 bytes
+    assert!(size % 16 == 0, "Uniform buffer size must be multiple of 16");
+}
+```
+
+## Custom wgpu Integration
+
+### egui-wgpu Callback Pattern
+
+```rust
+use egui_wgpu::CallbackTrait;
+
+struct CustomRenderCallback {
+    // Data needed for rendering
+    scene: Scene,
+    params: UniformData,
+}
+
+impl CallbackTrait for CustomRenderCallback {
+    fn prepare(
+        &self,
+        device: &wgpu::Device,
+        queue: &wgpu::Queue,
+        _screen_descriptor: &egui_wgpu::ScreenDescriptor,
+        _encoder: &mut wgpu::CommandEncoder,
+        resources: &mut egui_wgpu::CallbackResources,
+    ) -> Vec<wgpu::CommandBuffer> {
+        // Update GPU resources (buffers, textures, etc.)
+        // This runs before rendering
+
+        // Get or create renderer
+        let renderer: &mut MyRenderer = resources.get_or_insert_with(|| {
+            MyRenderer::new(device)
+        });
+
+        // Update uniform buffer
+        queue.write_buffer(&renderer.uniform_buffer, 0, bytemuck::bytes_of(&self.params));
+
+        vec![] // Return additional command buffers if needed
+    }
+
+    fn paint<'a>(
+        &'a self,
+        _info: egui::PaintCallbackInfo,
+        render_pass: &mut wgpu::RenderPass<'a>,
+        resources: &'a egui_wgpu::CallbackResources,
+    ) {
+        // Actual rendering
+        let renderer: &MyRenderer = resources.get().unwrap();
+
+        render_pass.set_pipeline(&renderer.pipeline);
+        render_pass.set_bind_group(0, &renderer.bind_group, &[]);
+        render_pass.draw(0..6, 0..1); // Draw fullscreen quad
+    }
+}
+```
+
+### Registering Callback in egui
+
+```rust
+// In Stage pane render method
+let callback = egui_wgpu::Callback::new_paint_callback(
+    rect,
+    CustomRenderCallback {
+        scene: self.build_scene(document),
+        params: self.compute_params(),
+    },
+);
+
+ui.painter().add(callback);
+```
+
+## Performance Optimization
+
+### Minimize GPU↔CPU Transfer
+
+```rust
+// ❌ Bad: Update uniform buffer every frame
+for frame in frames {
+    queue.write_buffer(&uniform_buffer, 0, &params);
+    render();
+}
+
+// ✅ Good: Only update when changed
+if params_changed {
+    queue.write_buffer(&uniform_buffer, 0, &params);
+}
+render();
+```
+
+### Reuse GPU Resources
+
+```rust
+// ✅ Good: Reuse textures and buffers
+struct WaveformCache {
+    textures: HashMap<Uuid, wgpu::Texture>,
+}
+
+impl WaveformCache {
+    fn get_or_create(&mut self, clip_id: Uuid, audio_data: &[f32]) -> &wgpu::Texture {
+        self.textures.entry(clip_id).or_insert_with(|| {
+            generate_waveform_texture(device, queue, audio_data)
+        })
+    }
+}
+```
+
+### Batch Draw Calls
+
+```rust
+// ❌ Bad: One draw call per shape
+for shape in shapes {
+    render_pass.set_bind_group(0, &shape.bind_group, &[]);
+    render_pass.draw(0..shape.vertex_count, 0..1);
+}
+
+// ✅ Good: Batch into single draw call
+let batched_vertices = 
batch_shapes(shapes); +render_pass.set_bind_group(0, &batched_bind_group, &[]); +render_pass.draw(0..batched_vertices.len(), 0..1); +``` + +### Use Mipmaps for Zooming + +```rust +// ✅ Good: Select appropriate mip level based on zoom +let mip_level = ((1.0 / zoom).log2().floor() as u32).min(max_mip_level); +let texture_view = texture.create_view(&wgpu::TextureViewDescriptor { + base_mip_level: mip_level, + mip_level_count: Some(1), + ..Default::default() +}); +``` + +## Debugging Rendering Issues + +### Enable wgpu Validation + +```rust +let instance = wgpu::Instance::new(wgpu::InstanceDescriptor { + backends: wgpu::Backends::all(), + dx12_shader_compiler: Default::default(), + flags: wgpu::InstanceFlags::validation(), // Enable validation + gles_minor_version: wgpu::Gles3MinorVersion::Automatic, +}); +``` + +### Check for Errors + +```rust +// Set error handler +device.on_uncaptured_error(Box::new(|error| { + eprintln!("wgpu error: {:?}", error); +})); +``` + +### Capture GPU Frame + +**Linux** (RenderDoc): +```bash +renderdoccmd capture ./lightningbeam-editor +``` + +**macOS** (Xcode): +- Run with GPU Frame Capture enabled +- Trigger capture with Cmd+Option+G + +### Common Issues + +#### Black Screen +- Check that vertex shader outputs correct clip-space coordinates +- Verify texture bindings are correct +- Check that render pipeline format matches surface format + +#### Validation Errors +- Check uniform buffer alignment (see [Uniform Buffer Alignment](#uniform-buffer-alignment)) +- Verify texture formats match shader expectations +- Ensure bind groups match pipeline layout + +#### Performance Issues +- Use GPU profiler (RenderDoc, Xcode) +- Check for redundant buffer uploads +- Profile shader performance +- Reduce draw call count via batching + +## Related Documentation + +- [ARCHITECTURE.md](../ARCHITECTURE.md) - Overall system architecture +- [docs/UI_SYSTEM.md](UI_SYSTEM.md) - UI and pane integration +- [CONTRIBUTING.md](../CONTRIBUTING.md) - Development 
workflow diff --git a/docs/UI_SYSTEM.md b/docs/UI_SYSTEM.md new file mode 100644 index 0000000..3d1ecac --- /dev/null +++ b/docs/UI_SYSTEM.md @@ -0,0 +1,848 @@ +# UI System Architecture + +This document describes Lightningbeam's UI architecture, including the pane system, tool system, GPU integration, and patterns for extending the UI with new features. + +## Table of Contents + +- [Overview](#overview) +- [Pane System](#pane-system) +- [Shared State](#shared-state) +- [Two-Phase Dispatch](#two-phase-dispatch) +- [ID Collision Avoidance](#id-collision-avoidance) +- [Tool System](#tool-system) +- [GPU Integration](#gpu-integration) +- [Adding New Panes](#adding-new-panes) +- [Adding New Tools](#adding-new-tools) +- [Event Handling](#event-handling) +- [Best Practices](#best-practices) + +## Overview + +Lightningbeam's UI is built with **egui**, an immediate-mode GUI framework. Unlike retained-mode frameworks (Qt, GTK), immediate-mode rebuilds the UI every frame by running code that describes what should be displayed. 
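The contrast with retained-mode frameworks can be made concrete. In retained mode, the widget tree is built once and callbacks mutate long-lived state; the sketch below (illustrative only, not a real GUI framework) shows the shape of that pattern that immediate mode avoids:

```rust
// Retained-mode sketch: the widget is constructed once, holds a callback,
// and the framework later invokes that callback to mutate application state.
struct Button {
    label: String,
    on_click: Box<dyn FnMut(&mut i32)>,
}

impl Button {
    fn new(label: &str, on_click: impl FnMut(&mut i32) + 'static) -> Self {
        Self { label: label.to_string(), on_click: Box::new(on_click) }
    }

    // Called by the (hypothetical) framework when the user clicks
    fn click(&mut self, state: &mut i32) {
        (self.on_click)(state);
    }
}
```

With egui there is no stored callback or widget object to keep in sync: the `if ui.button(...).clicked()` check in the example below replaces all of this machinery.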
+ +### Key Technologies + +- **egui 0.33.3**: Immediate-mode GUI framework +- **eframe**: Application framework wrapping egui +- **winit**: Cross-platform windowing +- **Vello**: GPU-accelerated 2D vector rendering +- **wgpu**: Low-level GPU API +- **egui-wgpu**: Integration layer between egui and wgpu + +### Immediate Mode Overview + +```rust +// Immediate mode: UI is described every frame +fn render(&mut self, ui: &mut egui::Ui) { + if ui.button("Click me").clicked() { + self.counter += 1; + } + ui.label(format!("Count: {}", self.counter)); +} +``` + +**Benefits**: +- Simple mental model (just describe what you see) +- No manual synchronization between state and UI +- Easy to compose and reuse components + +**Considerations**: +- Must avoid expensive operations in render code +- IDs needed for stateful widgets (handled automatically in most cases) + +## Pane System + +Lightningbeam uses a flexible pane system where the UI is composed of independent, reusable panes (Stage, Timeline, Asset Library, etc.). 
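One practical consequence of the immediate-mode considerations above: since `render` runs every frame, expensive derived data should be cached in the pane struct and invalidated when its inputs change, not recomputed on every call. A minimal sketch (types and fields hypothetical, not Lightningbeam's actual code):

```rust
// Cache a derived value across frames instead of recomputing it in render().
struct LayerSummaryCache {
    cached: Option<String>,
    last_layer_count: usize,
}

impl LayerSummaryCache {
    fn new() -> Self {
        Self { cached: None, last_layer_count: 0 }
    }

    /// Recompute only when the input changed; otherwise reuse last frame's value.
    fn summary(&mut self, layer_names: &[String]) -> &str {
        if self.cached.is_none() || self.last_layer_count != layer_names.len() {
            self.cached = Some(format!(
                "{} layers: {}",
                layer_names.len(),
                layer_names.join(", ")
            ));
            self.last_layer_count = layer_names.len();
        }
        self.cached.as_deref().unwrap()
    }
}
```

The same pattern applies to waveform mipmaps, tessellated shapes, and anything else too slow to rebuild at 60 FPS.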
+ +### Pane Architecture + +``` +┌─────────────────────────────────────────────────────────┐ +│ Main Application │ +│ (LightningbeamApp) │ +├─────────────────────────────────────────────────────────┤ +│ │ +│ ┌────────────────────────────────────────────────┐ │ +│ │ Pane Tree (egui_tiles) │ │ +│ │ │ │ +│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ +│ │ │ Stage │ │ Timeline │ │ Asset │ │ │ +│ │ │ Pane │ │ Pane │ │ Library │ │ │ +│ │ └──────────┘ └──────────┘ └──────────┘ │ │ +│ │ │ │ +│ │ Each pane: │ │ +│ │ - Renders its UI │ │ +│ │ - Registers actions with SharedPaneState │ │ +│ │ - Accesses shared document state │ │ +│ └────────────────────────────────────────────────┘ │ +│ │ +│ ┌────────────────────────────────────────────────┐ │ +│ │ SharedPaneState │ │ +│ │ - Document │ │ +│ │ - Selected tool │ │ +│ │ - Pending actions │ │ +│ │ - Audio system │ │ +│ └────────────────────────────────────────────────┘ │ +│ │ +│ After all panes render: │ +│ - Execute pending actions │ +│ - Update undo/redo stacks │ +│ - Synchronize with audio engine │ +│ │ +└─────────────────────────────────────────────────────────┘ +``` + +### PaneInstance Enum + +All panes are variants of the `PaneInstance` enum: + +```rust +// In lightningbeam-editor/src/panes/mod.rs +pub enum PaneInstance { + Stage(Stage), + Timeline(Timeline), + AssetLibrary(AssetLibrary), + InfoPanel(InfoPanel), + VirtualPiano(VirtualPiano), + Toolbar(Toolbar), + NodeEditor(NodeEditor), + PianoRoll(PianoRoll), + Outliner(Outliner), + PresetBrowser(PresetBrowser), +} + +impl PaneInstance { + pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) { + match self { + PaneInstance::Stage(stage) => stage.render(ui, shared_state), + PaneInstance::Timeline(timeline) => timeline.render(ui, shared_state), + PaneInstance::AssetLibrary(lib) => lib.render(ui, shared_state), + // ... 
dispatch to specific pane
+        }
+    }
+
+    pub fn title(&self) -> &str {
+        match self {
+            PaneInstance::Stage(_) => "Stage",
+            PaneInstance::Timeline(_) => "Timeline",
+            // ...
+        }
+    }
+}
+```
+
+### Individual Pane Structure
+
+Each pane is a struct with its own state and a `render` method:
+
+```rust
+pub struct MyPane {
+    // Pane-specific state
+    scroll_offset: f32,
+    selected_item: Option<usize>,
+    // ... other state
+}
+
+impl MyPane {
+    pub fn new() -> Self {
+        Self {
+            scroll_offset: 0.0,
+            selected_item: None,
+        }
+    }
+
+    pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
+        // Render pane UI
+        ui.heading("My Pane");
+
+        // Access shared state
+        let document = &shared_state.document;
+
+        // Create actions
+        if ui.button("Do something").clicked() {
+            let action = Box::new(MyAction { /* ... */ });
+            shared_state.pending_actions.push(action);
+        }
+    }
+}
+```
+
+### Key Panes
+
+Located in `lightningbeam-editor/src/panes/`:
+
+- **stage.rs** (214KB): Main canvas for drawing and transform tools
+- **timeline.rs** (84KB): Multi-track timeline with clip editing
+- **asset_library.rs** (70KB): Asset browser with drag-to-timeline
+- **infopanel.rs** (31KB): Context-sensitive property editor
+- **virtual_piano.rs** (31KB): On-screen MIDI keyboard
+- **toolbar.rs** (9KB): Tool palette
+
+## Shared State
+
+`SharedPaneState` is passed to all panes during rendering to share data and coordinate actions.
+
+### SharedPaneState Structure
+
+```rust
+pub struct SharedPaneState {
+    // Document state
+    pub document: Document,
+    pub undo_stack: Vec<Box<dyn Action>>,
+    pub redo_stack: Vec<Box<dyn Action>>,
+
+    // Tool state
+    pub selected_tool: Tool,
+    pub tool_state: ToolState,
+
+    // Actions to execute after rendering
+    pub pending_actions: Vec<Box<dyn Action>>,
+
+    // Audio engine
+    pub audio_system: AudioSystem,
+    pub playhead_position: f64,
+    pub is_playing: bool,
+
+    // Selection state
+    pub selected_clips: HashSet<Uuid>,
+    pub selected_shapes: HashSet<Uuid>,
+
+    // Clipboard (type name illustrative)
+    pub clipboard: Option<ClipboardContents>,
+
+    // UI state
+    pub show_grid: bool,
+    pub snap_to_grid: bool,
+    pub grid_size: f32,
+}
+```
+
+### Accessing Shared State
+
+```rust
+impl MyPane {
+    pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
+        // Read from document
+        let layer_count = shared_state.document.layers.len();
+        ui.label(format!("Layers: {}", layer_count));
+
+        // Check tool state
+        if shared_state.selected_tool == Tool::Select {
+            // ... render selection-specific UI
+        }
+
+        // Check playback state
+        if shared_state.is_playing {
+            ui.label("▶ Playing");
+        }
+    }
+}
+```
+
+## Two-Phase Dispatch
+
+Panes cannot directly mutate shared state during rendering due to Rust's borrowing rules. Instead, they register actions to be executed after all panes have rendered.
+
+### Why Two-Phase?
+
+```rust
+// This doesn't work: can't borrow shared_state as mutable twice
+pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
+    if ui.button("Add layer").clicked() {
+        // ❌ Can't mutate document while borrowed by render
+        shared_state.document.layers.push(Layer::new());
+    }
+}
+```
+
+### Solution: Pending Actions
+
+```rust
+// Phase 1: Register action during render
+pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
+    if ui.button("Add layer").clicked() {
+        let action = Box::new(AddLayerAction::new());
+        shared_state.pending_actions.push(action);
+    }
+}
+
+// Phase 2: Execute after all panes rendered (in main app)
+for mut action in shared_state.pending_actions.drain(..) {
+    action.apply(&mut shared_state.document);
+    shared_state.undo_stack.push(action);
+}
+```
+
+### Action Trait
+
+All actions implement the `Action` trait:
+
+```rust
+pub trait Action: Send {
+    fn apply(&mut self, document: &mut Document);
+    fn undo(&mut self, document: &mut Document);
+    fn redo(&mut self, document: &mut Document);
+}
+```
+
+Example action:
+
+```rust
+pub struct AddLayerAction {
+    layer_id: Uuid,
+    layer_type: LayerType,
+}
+
+impl Action for AddLayerAction {
+    fn apply(&mut self, document: &mut Document) {
+        let layer = Layer::new(self.layer_id, self.layer_type);
+        document.layers.push(layer);
+    }
+
+    fn undo(&mut self, document: &mut Document) {
+        document.layers.retain(|l| l.id != self.layer_id);
+    }
+
+    fn redo(&mut self, document: &mut Document) {
+        self.apply(document);
+    }
+}
+```
+
+## ID Collision Avoidance
+
+egui uses IDs to track widget state across frames (e.g., scroll position, collapse state). When multiple instances of the same pane exist, IDs can collide.
+
+### The Problem
+
+```rust
+// If two Timeline panes exist, they'll share the same ID
+ui.collapsing("Track 1", |ui| {
+    // ... 
content
+}); // ID is derived from label "Track 1"
+```
+
+Both timeline instances would have the same "Track 1" ID, causing state conflicts.
+
+### Solution: Salt IDs with Node Path
+
+Each pane has a unique node path (e.g., `"root/0/1/2"`). Salt all IDs with this path:
+
+```rust
+pub struct Timeline {
+    node_path: String, // Unique path for this pane instance
+}
+
+impl Timeline {
+    pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
+        // Salt IDs with node path
+        ui.push_id(&self.node_path, |ui| {
+            // Now all IDs within this closure are unique to this instance
+            ui.collapsing("Track 1", |ui| {
+                // ... content
+            });
+        });
+    }
+}
+```
+
+### Alternative: Per-Widget Salting
+
+For an individual widget, give it an explicit ID salt instead of relying on the label-derived ID:
+
+```rust
+egui::CollapsingHeader::new("Track 1")
+    .id_salt((&self.node_path, "track_1"))
+    .show(ui, |ui| {
+        // ... content
+    });
+```
+
+### Best Practice
+
+**Always salt IDs in new panes** to support multiple instances:
+
+```rust
+impl NewPane {
+    pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
+        ui.push_id(&self.node_path, |ui| {
+            // All rendering code goes here
+        });
+    }
+}
+```
+
+## Tool System
+
+Tools handle user input on the Stage pane (drawing, selection, transforms, etc.).
+
+### Tool Enum
+
+```rust
+pub enum Tool {
+    Select,
+    Draw,
+    Rectangle,
+    Ellipse,
+    Line,
+    PaintBucket,
+    Transform,
+    Eyedropper,
+}
+```
+
+### Tool State
+
+```rust
+pub struct ToolState {
+    // Generic tool state
+    pub mouse_pos: Pos2,
+    pub mouse_down: bool,
+    pub drag_start: Option<Pos2>,
+
+    // Tool-specific state
+    pub draw_points: Vec<Pos2>,
+    pub transform_mode: TransformMode,
+    pub paint_bucket_tolerance: f32,
+}
+```
+
+### Tool Implementation
+
+Tools implement the `ToolBehavior` trait:
+
+```rust
+pub trait ToolBehavior {
+    fn on_mouse_down(&mut self, pos: Pos2, shared_state: &mut SharedPaneState);
+    fn on_mouse_move(&mut self, pos: Pos2, shared_state: &mut SharedPaneState);
+    fn on_mouse_up(&mut self, pos: Pos2, shared_state: &mut SharedPaneState);
+    fn on_key(&mut self, key: Key, shared_state: &mut SharedPaneState);
+    fn render_overlay(&self, painter: &Painter);
+}
+```
+
+Example: Rectangle tool:
+
+```rust
+pub struct RectangleTool {
+    start_pos: Option<Pos2>,
+}
+
+impl ToolBehavior for RectangleTool {
+    fn on_mouse_down(&mut self, pos: Pos2, _shared_state: &mut SharedPaneState) {
+        self.start_pos = Some(pos);
+    }
+
+    fn on_mouse_move(&mut self, pos: Pos2, _shared_state: &mut SharedPaneState) {
+        // Visual feedback handled in render_overlay
+    }
+
+    fn on_mouse_up(&mut self, pos: Pos2, shared_state: &mut SharedPaneState) {
+        if let Some(start) = self.start_pos.take() {
+            // Create rectangle shape
+            let rect = Rect::from_two_pos(start, pos);
+            let action = Box::new(AddShapeAction::rectangle(rect));
+            shared_state.pending_actions.push(action);
+        }
+    }
+
+    fn render_overlay(&self, painter: &Painter) {
+        if let Some(start) = self.start_pos {
+            let current = painter.mouse_pos();
+            let rect = Rect::from_two_pos(start, current);
+            painter.rect_stroke(rect, 0.0, Stroke::new(2.0, Color32::WHITE));
+        }
+    }
+}
+```
+
+### Tool Selection
+
+```rust
+// In Toolbar pane
+if ui.button("✏ Draw").clicked() {
+    shared_state.selected_tool = Tool::Draw;
+}
+
+// In Stage pane
+match shared_state.selected_tool {
+    Tool::Draw => self.draw_tool.on_mouse_move(pos, shared_state),
+    Tool::Select => self.select_tool.on_mouse_move(pos, shared_state),
+    // ...
+}
+```
+
+## GPU Integration
+
+The Stage pane uses custom wgpu rendering for vector graphics and waveforms.
+
+### egui-wgpu Callbacks
+
+```rust
+// In Stage::render()
+ui.painter().add(egui_wgpu::Callback::new_paint_callback(
+    rect,
+    StageCallback {
+        document: shared_state.document.clone(),
+        vello_renderer: self.vello_renderer.clone(),
+        waveform_renderer: self.waveform_renderer.clone(),
+    },
+));
+```
+
+### Callback Implementation
+
+```rust
+struct StageCallback {
+    document: Document,
+    vello_renderer: Arc<Mutex<VelloRenderer>>,
+    waveform_renderer: Arc<Mutex<WaveformRenderer>>,
+}
+
+impl egui_wgpu::CallbackTrait for StageCallback {
+    fn prepare(
+        &self,
+        device: &wgpu::Device,
+        queue: &wgpu::Queue,
+        _screen_descriptor: &egui_wgpu::ScreenDescriptor,
+        _encoder: &mut wgpu::CommandEncoder,
+        _resources: &mut egui_wgpu::CallbackResources,
+    ) -> Vec<wgpu::CommandBuffer> {
+        // Prepare GPU resources
+        let mut vello = self.vello_renderer.lock().unwrap();
+        vello.prepare_scene(&self.document);
+
+        vec![]
+    }
+
+    fn paint<'a>(
+        &'a self,
+        _info: egui::PaintCallbackInfo,
+        render_pass: &mut wgpu::RenderPass<'a>,
+        _resources: &'a egui_wgpu::CallbackResources,
+    ) {
+        // Render vector graphics
+        let vello = self.vello_renderer.lock().unwrap();
+        vello.render(render_pass);
+
+        // Render waveforms
+        let waveforms = self.waveform_renderer.lock().unwrap();
+        waveforms.render(render_pass);
+    }
+}
+```
+
+### Vello Integration
+
+Vello renders 2D vector graphics using GPU compute shaders:
+
+```rust
+use vello::{Scene, SceneBuilder, kurbo};
+
+fn build_vello_scene(document: &Document) -> Scene {
+    let mut scene = Scene::new();
+    let mut builder = SceneBuilder::for_scene(&mut scene);
+
+    for layer in &document.layers {
+        if let Layer::VectorLayer { clips, .. 
} = layer { + for clip in clips { + for shape in &clip.shapes { + // Convert shape to kurbo path + let path = shape.to_kurbo_path(); + + // Add to scene with fill/stroke + builder.fill( + Fill::NonZero, + Affine::IDENTITY, + &shape.fill_color, + None, + &path, + ); + } + } + } + } + + scene +} +``` + +## Adding New Panes + +### Step 1: Create Pane Struct + +```rust +// In lightningbeam-editor/src/panes/my_pane.rs +pub struct MyPane { + node_path: String, + // Pane-specific state + selected_index: usize, + scroll_offset: f32, +} + +impl MyPane { + pub fn new(node_path: String) -> Self { + Self { + node_path, + selected_index: 0, + scroll_offset: 0.0, + } + } + + pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) { + // IMPORTANT: Salt IDs with node path + ui.push_id(&self.node_path, |ui| { + ui.heading("My Pane"); + + // Render pane content + // ... + }); + } +} +``` + +### Step 2: Add to PaneInstance Enum + +```rust +// In lightningbeam-editor/src/panes/mod.rs +pub enum PaneInstance { + // ... existing variants + MyPane(MyPane), +} + +impl PaneInstance { + pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) { + match self { + // ... existing cases + PaneInstance::MyPane(pane) => pane.render(ui, shared_state), + } + } + + pub fn title(&self) -> &str { + match self { + // ... existing cases + PaneInstance::MyPane(_) => "My Pane", + } + } +} +``` + +### Step 3: Add to Menu + +```rust +// In main application +if ui.button("My Pane").clicked() { + let pane = PaneInstance::MyPane(MyPane::new(generate_node_path())); + app.add_pane(pane); +} +``` + +## Adding New Tools + +### Step 1: Add to Tool Enum + +```rust +pub enum Tool { + // ... 
existing tools
    MyTool,
}
```

### Step 2: Implement Tool Behavior

```rust
pub struct MyToolState {
    // Tool-specific state
    start_pos: Option<Pos2>,
}

impl MyToolState {
    pub fn handle_input(
        &mut self,
        response: &Response,
        shared_state: &mut SharedPaneState,
    ) {
        // Note: egui suppresses `clicked()` during a drag, so record the
        // start of the gesture with `drag_started()`.
        if response.drag_started() {
            self.start_pos = response.interact_pointer_pos();
        }

        if response.drag_released() {
            if let Some(start) = self.start_pos.take() {
                // Create an action from the start and release positions
                let action = Box::new(MyAction { /* ... */ });
                shared_state.pending_actions.push(action);
            }
        }
    }

    pub fn render_overlay(&self, painter: &Painter) {
        // Draw tool-specific overlay
    }
}
```

### Step 3: Add to Toolbar

```rust
// In Toolbar pane
if ui.button("🔧 My Tool").clicked() {
    shared_state.selected_tool = Tool::MyTool;
}
```

### Step 4: Handle in Stage Pane

```rust
// In Stage pane
match shared_state.selected_tool {
    // ... existing tools
    Tool::MyTool => self.my_tool_state.handle_input(&response, shared_state),
}

// Render overlay
match shared_state.selected_tool {
    // ... 
existing tools
    Tool::MyTool => self.my_tool_state.render_overlay(&painter),
}
```

## Event Handling

### Mouse Events

```rust
let response = ui.allocate_rect(rect, Sense::click_and_drag());

if response.clicked() {
    let pos = response.interact_pointer_pos().unwrap();
    // Handle click at pos
}

if response.dragged() {
    let delta = response.drag_delta();
    // Handle drag by delta
}

if response.drag_released() {
    // Handle drag end
}
```

### Keyboard Events

```rust
ui.input(|i| {
    if i.key_pressed(Key::Delete) {
        // Delete selected items
    }

    // `command` is Ctrl on Windows/Linux and ⌘ on macOS
    if i.modifiers.command && i.key_pressed(Key::Z) {
        // Undo
    }

    if i.modifiers.command && i.key_pressed(Key::Y) {
        // Redo
    }
});
```

### Drag and Drop

```rust
// Source (Asset Library)
// Labels don't sense drags by default, so opt in explicitly
let response = ui.add(Label::new("Audio Clip").sense(Sense::drag()));
if response.dragged() {
    let payload = DragPayload::AudioClip(clip_id);
    ui.memory_mut(|mem| {
        mem.data.insert_temp(Id::new("drag_payload"), payload);
    });
}

// Target (Timeline)
let response = ui.allocate_rect(rect, Sense::hover());
// Only drop when the pointer is released over the target;
// dropping on hover alone would fire the action every frame
if response.hovered() && ui.input(|i| i.pointer.any_released()) {
    if let Some(payload) = ui.memory(|mem| mem.data.get_temp::<DragPayload>(Id::new("drag_payload"))) {
        // Handle drop, then clear the payload
        let action = Box::new(AddClipAction { clip_id: payload.clip_id(), position });
        shared_state.pending_actions.push(action);
        ui.memory_mut(|mem| mem.data.remove::<DragPayload>(Id::new("drag_payload")));
    }
}
```

## Best Practices

### 1. Always Salt IDs

```rust
// ✅ Good
ui.push_id(&self.node_path, |ui| {
    // All rendering here
});

// ❌ Bad (ID collisions if multiple instances)
ui.collapsing("Settings", |ui| {
    // ...
});
```

### 2. Use Pending Actions

```rust
// ✅ Good (queued, applied to the document after rendering)
shared_state.pending_actions.push(Box::new(action));

// ❌ Bad (borrowing conflicts)
shared_state.document.layers.push(layer);
```

### 3. 
Split Borrows with `std::mem::take`

```rust
// ✅ Good
let mut clips = std::mem::take(&mut self.clips);
for clip in &mut clips {
    self.render_clip(ui, clip); // OK: `self.clips` is moved out, so `self` can be borrowed again
}
self.clips = clips;

// ❌ Bad (can't borrow self while iterating clips)
for clip in &mut self.clips {
    self.render_clip(ui, clip); // Error!
}
```

### 4. Avoid Expensive Operations in Render

```rust
// ❌ Bad (heavy computation every frame)
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
    let thumbnail = self.generate_thumbnail(); // Expensive!
    ui.image(thumbnail);
}

// ✅ Good (cache result)
pub fn render(&mut self, ui: &mut Ui, shared_state: &mut SharedPaneState) {
    if self.thumbnail_cache.is_none() {
        self.thumbnail_cache = Some(self.generate_thumbnail());
    }
    ui.image(self.thumbnail_cache.as_ref().unwrap());
}
```

### 5. Handle Missing State Gracefully

```rust
// ✅ Good
if let Some(layer) = document.layers.get(layer_index) {
    // Render layer
} else {
    ui.label("Layer not found");
}

// ❌ Bad (panics if layer missing)
let layer = &document.layers[layer_index]; // May panic!
```

## Related Documentation

- [ARCHITECTURE.md](../ARCHITECTURE.md) - Overall system architecture
- [docs/AUDIO_SYSTEM.md](AUDIO_SYSTEM.md) - Audio engine integration
- [docs/RENDERING.md](RENDERING.md) - GPU rendering details
- [CONTRIBUTING.md](../CONTRIBUTING.md) - Development workflow