Compare commits


No commits in common. "12d927ed3d5dcac4c005849fdb7dbc517bb10679" and "908da99321b1775b37b97922f194468a5968384c" have entirely different histories.

58 changed files with 830 additions and 3709 deletions

README.md

@ -10,118 +10,42 @@ A free and open-source 2D multimedia editor combining vector animation, audio pr
![Video Editing View](screenshots/video.png)
## Features
## Current Features
**Vector Animation**
- GPU-accelerated vector rendering with Vello
- Draw and animate vector shapes with keyframe-based timeline
- Non-destructive editing workflow
- Paint bucket tool for automatic fill detection
**Audio Production**
- Real-time multi-track audio recording and playback
- Node graph-based effects processing
- MIDI sequencing with synthesizers and samplers
- Comprehensive effects library (reverb, delay, EQ, compression, distortion, etc.)
- Custom audio engine with lock-free design for glitch-free playback
- Multi-track audio recording
- MIDI sequencing with synthesized and sampled instruments
- Integrated DAW functionality
**Video Editing**
- Video timeline and editing with FFmpeg-based decoding
- GPU-accelerated waveform rendering with mipmaps
- Audio integration from video soundtracks
- Basic video timeline and editing (early stage)
- FFmpeg-based video decoding
## Technical Stack
**Current Implementation (Rust UI)**
- **UI Framework:** egui (immediate-mode GUI)
- **GPU Rendering:** Vello + wgpu (Vulkan/Metal/DirectX 12)
- **Audio Engine:** Custom real-time engine (`daw-backend`)
- cpal for cross-platform audio I/O
- symphonia for audio decoding
- dasp for node graph processing
- **Video:** FFmpeg 8 for encode/decode
- **Platform:** Cross-platform (Linux, macOS, Windows)
**Legacy Implementation (Deprecated)**
- Frontend: Vanilla JavaScript
- Backend: Rust (Tauri framework)
- **Frontend:** Vanilla JavaScript
- **Backend:** Rust (Tauri framework)
- **Audio:** cpal + dasp for audio processing
- **Video:** FFmpeg for encode/decode
## Project Status
Lightningbeam is under active development on the `rust-ui` branch. The project has been rewritten from a Tauri/JavaScript prototype to a pure Rust application to eliminate IPC bottlenecks and achieve better performance for real-time video and audio processing.
Lightningbeam is under active development. Current focus is on core functionality and architecture. Full project export is not yet fully implemented.
**Current Status:**
- ✅ Core UI panes (Stage, Timeline, Asset Library, Info Panel, Toolbar)
- ✅ Drawing tools (Select, Draw, Rectangle, Ellipse, Paint Bucket, Transform)
- ✅ Undo/redo system
- ✅ GPU-accelerated vector rendering
- ✅ Audio engine with node graph processing
- ✅ GPU waveform rendering with mipmaps
- ✅ Video decoding integration
- 🚧 Export system (in progress)
- 🚧 Node editor UI (planned)
- 🚧 Piano roll editor (planned)
### Known Architectural Challenge
## Getting Started
The current Tauri implementation hits IPC bandwidth limitations when streaming decoded video frames from Rust to JavaScript. Tauri's IPC layer has significant serialization overhead (throughput on the order of a few MB/s), which is insufficient for real-time high-resolution video rendering.
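A back-of-the-envelope calculation makes the gap concrete (the 1080p/30 fps figures below are illustrative, not from the project):

```rust
// Raw 1080p RGBA frames at 30 fps need roughly 237 MiB/s — two orders of
// magnitude beyond an IPC channel that serializes a few MB/s.
fn main() {
    let (w, h, bytes_per_px, fps) = (1920u64, 1080u64, 4u64, 30u64);
    let bytes_per_sec = w * h * bytes_per_px * fps;
    println!("{:.1} MiB/s required", bytes_per_sec as f64 / (1024.0 * 1024.0));
}
```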
### Prerequisites
- Rust (stable toolchain via [rustup](https://rustup.rs/))
- System dependencies:
- **Linux:** ALSA development files, FFmpeg 8
- **macOS:** FFmpeg (via Homebrew)
- **Windows:** FFmpeg 8, Visual Studio with C++ tools
See [docs/BUILDING.md](docs/BUILDING.md) for detailed setup instructions.
### Building and Running
```bash
# Clone the repository
git clone https://github.com/skykooler/lightningbeam.git
# Or from Gitea
git clone https://git.skyler.io/skyler/lightningbeam.git
cd lightningbeam/lightningbeam-ui
# Build and run
cargo run
# Or build optimized release version
cargo build --release
```
### Documentation
- **[CONTRIBUTING.md](CONTRIBUTING.md)** - Development setup and contribution guidelines
- **[ARCHITECTURE.md](ARCHITECTURE.md)** - System architecture overview
- **[docs/BUILDING.md](docs/BUILDING.md)** - Detailed build instructions and troubleshooting
- **[docs/AUDIO_SYSTEM.md](docs/AUDIO_SYSTEM.md)** - Audio engine architecture and development
- **[docs/UI_SYSTEM.md](docs/UI_SYSTEM.md)** - UI pane system and tool development
- **[docs/RENDERING.md](docs/RENDERING.md)** - GPU rendering pipeline and shaders
I'm currently exploring a full Rust rewrite using wgpu/egui to eliminate the IPC bottleneck and handle rendering entirely in native code.
## Project History
Lightningbeam evolved from earlier multimedia editing projects I've worked on since 2010, including the FreeJam DAW. The JavaScript/Tauri prototype began in November 2023, and the Rust UI rewrite started in late 2024 to eliminate performance bottlenecks and provide a more integrated native experience.
Lightningbeam evolved from earlier multimedia editing projects I've worked on since 2010, including the FreeJam DAW. The current JavaScript/Tauri iteration began in November 2023.
## Goals
Create a comprehensive FOSS alternative for 2D-focused multimedia work, integrating animation, audio, and video editing in a unified workflow. Lightningbeam aims to be:
- **Fast:** GPU-accelerated rendering and real-time audio processing
- **Flexible:** Node graph-based audio routing and modular synthesis
- **Integrated:** Seamless workflow across animation, audio, and video
- **Open:** Free and open-source, built on open standards
## Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
[License information to be added]
## Links
- **GitHub:** https://github.com/skykooler/lightningbeam
- **Gitea:** https://git.skyler.io/skyler/lightningbeam
Create a comprehensive FOSS alternative for 2D-focused multimedia work, integrating animation, audio, and video editing in a unified workflow.

BIN
daw-backend/Fade.wav Normal file

Binary file not shown.

BIN
daw-backend/audio.flac Normal file

Binary file not shown.


@ -69,11 +69,6 @@ pub struct ReadAheadBuffer {
channels: u32,
/// Source file sample rate.
sample_rate: u32,
/// Last file-local frame requested by the audio callback.
/// Written by the consumer (render_from_file), read by the disk reader.
/// The disk reader uses this instead of the global playhead to know
/// where in the file to buffer around.
target_frame: AtomicU64,
}
// SAFETY: See the doc comment on ReadAheadBuffer for the full safety argument.
@ -107,7 +102,6 @@ impl ReadAheadBuffer {
capacity_frames,
channels,
sample_rate,
target_frame: AtomicU64::new(0),
}
}
@ -164,20 +158,6 @@ impl ReadAheadBuffer {
self.valid_frames.load(Ordering::Acquire)
}
/// Update the target frame — the file-local frame the audio callback
/// is currently reading from. Called by `render_from_file` (consumer).
#[inline]
pub fn set_target_frame(&self, frame: u64) {
self.target_frame.store(frame, Ordering::Relaxed);
}
/// Get the target frame set by the audio callback.
/// Called by the disk reader thread (producer).
#[inline]
pub fn target_frame(&self) -> u64 {
self.target_frame.load(Ordering::Relaxed)
}
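These removed methods implement a single-writer atomic handoff: the consumer (audio callback) publishes its file-local read position with a Relaxed store, and the producer (disk reader) polls it with a Relaxed load. A minimal self-contained sketch of that pattern, with illustrative names and values:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

fn main() {
    let target = Arc::new(AtomicU64::new(0));
    let t = target.clone();

    // Consumer side (stand-in for the audio callback): publish each frame read.
    let consumer = std::thread::spawn(move || {
        for frame in (0..=48_000u64).step_by(512) {
            t.store(frame, Ordering::Relaxed);
        }
    });
    consumer.join().unwrap();

    // Producer side (stand-in for the disk reader): poll the last position.
    println!("buffer around frame {}", target.load(Ordering::Relaxed));
}
```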
/// Reset the buffer to start at `new_start` with zero valid frames.
/// Called by the **disk reader thread** (producer) after a seek.
pub fn reset(&self, new_start: u64) {
@ -451,16 +431,20 @@ pub struct DiskReader {
impl DiskReader {
/// Create a new disk reader with a background thread.
///
/// `playhead_frame` should be the same `Arc<AtomicU64>` used by the engine
/// so the disk reader knows where to fill ahead.
pub fn new(playhead_frame: Arc<AtomicU64>, _sample_rate: u32) -> Self {
let (command_tx, command_rx) = rtrb::RingBuffer::new(64);
let running = Arc::new(AtomicBool::new(true));
let thread_running = running.clone();
let thread_playhead = playhead_frame.clone();
let thread_handle = std::thread::Builder::new()
.name("disk-reader".into())
.spawn(move || {
Self::reader_thread(command_rx, thread_running);
Self::reader_thread(command_rx, thread_playhead, thread_running);
})
.expect("Failed to spawn disk reader thread");
@ -489,6 +473,7 @@ impl DiskReader {
/// The disk reader background thread.
fn reader_thread(
mut command_rx: rtrb::Consumer<DiskReaderCommand>,
playhead_frame: Arc<AtomicU64>,
running: Arc<AtomicBool>,
) {
let mut active_files: HashMap<usize, (CompressedReader, Arc<ReadAheadBuffer>)> =
@ -521,7 +506,6 @@ impl DiskReader {
}
DiskReaderCommand::Seek { frame } => {
for (_, (reader, buffer)) in active_files.iter_mut() {
buffer.set_target_frame(frame);
buffer.reset(frame);
if let Err(e) = reader.seek(frame) {
eprintln!("[DiskReader] Seek error: {}", e);
@ -534,28 +518,26 @@ impl DiskReader {
}
}
// Fill each active file's buffer ahead of its target frame.
// Each file's target_frame is set by the audio callback in
// render_from_file, giving the file-local frame being read.
// This is independent of the global engine playhead.
let playhead = playhead_frame.load(Ordering::Relaxed);
// Fill each active file's buffer ahead of the playhead.
for (_pool_index, (reader, buffer)) in active_files.iter_mut() {
let target = buffer.target_frame();
let buf_start = buffer.start_frame();
let buf_valid = buffer.valid_frames_count();
let buf_end = buf_start + buf_valid;
// If the target has jumped behind or far ahead of the buffer,
// If the playhead has jumped behind or far ahead of the buffer,
// seek the decoder and reset.
if target < buf_start || target > buf_end + reader.sample_rate as u64 {
buffer.reset(target);
let _ = reader.seek(target);
if playhead < buf_start || playhead > buf_end + reader.sample_rate as u64 {
buffer.reset(playhead);
let _ = reader.seek(playhead);
continue;
}
// Advance the buffer start to reclaim space behind the target.
// Advance the buffer start to reclaim space behind the playhead.
// Keep a small lookback for sinc interpolation (~32 frames).
let lookback = 64u64;
let advance_to = target.saturating_sub(lookback);
let advance_to = playhead.saturating_sub(lookback);
if advance_to > buf_start {
buffer.advance_start(advance_to);
}
@ -565,7 +547,7 @@ impl DiskReader {
let buf_valid = buffer.valid_frames_count();
let buf_end = buf_start + buf_valid;
let prefetch_target =
target + (PREFETCH_SECONDS * reader.sample_rate as f64) as u64;
playhead + (PREFETCH_SECONDS * reader.sample_rate as f64) as u64;
if buf_end >= prefetch_target {
continue; // Already filled far enough ahead.
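Pulled out of the loop above, the per-file prefetch decision reduces to a small pure function. This is a sketch of the diff's apparent rules (reset on a jump, ~1 s of slack past the buffer end, a `PREFETCH_SECONDS` horizon); the `PrefetchAction` enum and the `PREFETCH_SECONDS` value are assumptions for illustration:

```rust
const PREFETCH_SECONDS: f64 = 2.0; // assumed; the diff does not show the value

#[derive(Debug, PartialEq)]
enum PrefetchAction {
    /// Playhead left the buffered window: seek the decoder and reset the buffer.
    SeekAndReset { to: u64 },
    /// The buffer already covers the prefetch horizon.
    AlreadyFilled,
    /// Decode more frames up to the prefetch target.
    FillTo { target: u64 },
}

fn plan_prefetch(playhead: u64, buf_start: u64, buf_valid: u64, sample_rate: u32) -> PrefetchAction {
    let buf_end = buf_start + buf_valid;
    // Jumped behind the buffer, or more than ~1 s past its end: start over.
    if playhead < buf_start || playhead > buf_end + sample_rate as u64 {
        return PrefetchAction::SeekAndReset { to: playhead };
    }
    // (The real loop also advances buf_start to playhead - 64 frames here,
    // keeping a small lookback for sinc interpolation.)
    let prefetch_target = playhead + (PREFETCH_SECONDS * sample_rate as f64) as u64;
    if buf_end >= prefetch_target {
        PrefetchAction::AlreadyFilled
    } else {
        PrefetchAction::FillTo { target: prefetch_target }
    }
}
```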


@ -272,16 +272,24 @@ impl Engine {
// Forward chunk generation events from background threads
while let Ok(event) = self.chunk_generation_rx.try_recv() {
match event {
AudioEvent::WaveformDecodeComplete { pool_index, samples, decoded_frames: _df, total_frames: _tf } => {
// Forward samples directly to UI — no clone, just move
if let Some(file) = self.audio_pool.get_file(pool_index) {
let sr = file.sample_rate;
let ch = file.channels;
AudioEvent::WaveformDecodeComplete { pool_index, samples } => {
// Update pool entry with decoded waveform samples
if let Some(file) = self.audio_pool.get_file_mut(pool_index) {
let total = file.frames;
if let crate::audio::pool::AudioStorage::Compressed {
ref mut decoded_for_waveform,
ref mut decoded_frames,
..
} = file.storage {
eprintln!("[ENGINE] Waveform decode complete for pool {}: {} samples", pool_index, samples.len());
*decoded_for_waveform = samples;
*decoded_frames = total;
}
// Notify frontend that waveform data is ready
let _ = self.event_tx.push(AudioEvent::AudioDecodeProgress {
pool_index,
samples,
sample_rate: sr,
channels: ch,
decoded_frames: total,
total_frames: total,
});
}
}
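The surrounding engine code drains a standard `mpsc` channel fed by background decode threads and re-publishes events to the UI over a bounded `rtrb` ring buffer. A minimal sketch of that forwarding pattern, with a hypothetical `UiEvent` type standing in for `AudioEvent` (requires the `rtrb` crate, as used in the diff):

```rust
use std::sync::mpsc;

#[derive(Debug)]
enum UiEvent {
    DecodeProgress { pool_index: usize, frames: u64 },
}

fn main() {
    let (bg_tx, bg_rx) = mpsc::channel::<UiEvent>();
    let (mut ui_tx, mut ui_rx) = rtrb::RingBuffer::<UiEvent>::new(64);

    // Background decode thread reports progress over the mpsc channel.
    let worker = std::thread::spawn(move || {
        let _ = bg_tx.send(UiEvent::DecodeProgress { pool_index: 0, frames: 44_100 });
    });
    worker.join().unwrap();

    // Engine loop body: drain background events without blocking, then
    // re-publish them to the UI over the lock-free ring buffer.
    while let Ok(event) = bg_rx.try_recv() {
        let _ = ui_tx.push(event); // fails (and drops) only if the UI side is full
    }
    while let Ok(event) = ui_rx.pop() {
        println!("UI received: {:?}", event);
    }
}
```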
@ -481,10 +489,6 @@ impl Engine {
self.playhead_atomic.store(0, Ordering::Relaxed);
// Stop all MIDI notes when stopping playback
self.project.stop_all_notes();
// Reset disk reader buffers to the new playhead position
if let Some(ref mut dr) = self.disk_reader {
dr.send(crate::audio::disk_reader::DiskReaderCommand::Seek { frame: 0 });
}
}
Command::Pause => {
self.playing = false;
@ -1682,157 +1686,163 @@ impl Engine {
}
Command::ImportAudio(path) => {
if let Err(e) = self.do_import_audio(&path) {
eprintln!("[ENGINE] ImportAudio failed for {:?}: {}", path, e);
}
}
}
}
let path_str = path.to_string_lossy().to_string();
/// Import an audio file into the pool: mmap for PCM, streaming for compressed.
/// Returns the pool index on success. Emits AudioFileReady event.
fn do_import_audio(&mut self, path: &std::path::Path) -> Result<usize, String> {
let path_str = path.to_string_lossy().to_string();
let metadata = crate::io::read_metadata(path)
.map_err(|e| format!("Failed to read metadata for {:?}: {}", path, e))?;
eprintln!("[ENGINE] ImportAudio: format={:?}, ch={}, sr={}, n_frames={:?}, duration={:.2}s, path={}",
metadata.format, metadata.channels, metadata.sample_rate, metadata.n_frames, metadata.duration, path_str);
let pool_index = match metadata.format {
crate::io::AudioFormat::Pcm => {
let file = std::fs::File::open(path)
.map_err(|e| format!("Failed to open {:?}: {}", path, e))?;
// SAFETY: The file is opened read-only. The mmap is shared
// immutably. We never write to it.
let mmap = unsafe { memmap2::Mmap::map(&file) }
.map_err(|e| format!("mmap failed for {:?}: {}", path, e))?;
let header = crate::io::parse_wav_header(&mmap)
.map_err(|e| format!("WAV parse failed for {:?}: {}", path, e))?;
let audio_file = crate::audio::pool::AudioFile::from_mmap(
path.to_path_buf(),
mmap,
header.data_offset,
header.sample_format,
header.channels,
header.sample_rate,
header.total_frames,
);
self.audio_pool.add_file(audio_file)
}
crate::io::AudioFormat::Compressed => {
let sync_decode = std::env::var("DAW_SYNC_DECODE").is_ok();
if sync_decode {
eprintln!("[ENGINE] DAW_SYNC_DECODE: doing full decode of {:?}", path);
let loaded = crate::io::AudioFile::load(path)
.map_err(|e| format!("DAW_SYNC_DECODE failed: {}", e))?;
let ext = path.extension()
.and_then(|e| e.to_str())
.map(|s| s.to_lowercase());
let audio_file = crate::audio::pool::AudioFile::with_format(
path.to_path_buf(),
loaded.data,
loaded.channels,
loaded.sample_rate,
ext,
);
let idx = self.audio_pool.add_file(audio_file);
eprintln!("[ENGINE] DAW_SYNC_DECODE: pool_index={}, frames={}", idx, loaded.frames);
idx
} else {
let ext = path.extension()
.and_then(|e| e.to_str())
.map(|s| s.to_lowercase());
let total_frames = metadata.n_frames.unwrap_or_else(|| {
(metadata.duration * metadata.sample_rate as f64).ceil() as u64
});
let mut audio_file = crate::audio::pool::AudioFile::from_compressed(
path.to_path_buf(),
metadata.channels,
metadata.sample_rate,
total_frames,
ext,
);
let buffer = crate::audio::disk_reader::DiskReader::create_buffer(
metadata.sample_rate,
metadata.channels,
);
audio_file.read_ahead = Some(buffer.clone());
let idx = self.audio_pool.add_file(audio_file);
eprintln!("[ENGINE] Compressed: total_frames={}, pool_index={}, has_disk_reader={}",
total_frames, idx, self.disk_reader.is_some());
if let Some(ref mut dr) = self.disk_reader {
dr.send(crate::audio::disk_reader::DiskReaderCommand::ActivateFile {
pool_index: idx,
path: path.to_path_buf(),
buffer,
});
// Step 1: Read metadata (fast — no decoding)
let metadata = match crate::io::read_metadata(&path) {
Ok(m) => m,
Err(e) => {
eprintln!("[ENGINE] ImportAudio failed to read metadata for {:?}: {}", path, e);
return;
}
};
// Spawn background thread to decode file progressively for waveform display
let bg_tx = self.chunk_generation_tx.clone();
let bg_path = path.to_path_buf();
let bg_total_frames = total_frames;
let _ = std::thread::Builder::new()
.name(format!("waveform-decode-{}", idx))
.spawn(move || {
crate::io::AudioFile::decode_progressive(
&bg_path,
bg_total_frames,
|audio_data, decoded_frames, total| {
let _ = bg_tx.send(AudioEvent::WaveformDecodeComplete {
pool_index: idx,
samples: audio_data.to_vec(),
decoded_frames,
total_frames: total,
});
},
let pool_index;
eprintln!("[ENGINE] ImportAudio: format={:?}, ch={}, sr={}, n_frames={:?}, duration={:.2}s, path={}",
metadata.format, metadata.channels, metadata.sample_rate, metadata.n_frames, metadata.duration, path_str);
match metadata.format {
crate::io::AudioFormat::Pcm => {
// WAV/AIFF: memory-map the file for instant availability
let file = match std::fs::File::open(&path) {
Ok(f) => f,
Err(e) => {
eprintln!("[ENGINE] ImportAudio failed to open {:?}: {}", path, e);
return;
}
};
// SAFETY: The file is opened read-only. The mmap is shared
// immutably. We never write to it.
let mmap = match unsafe { memmap2::Mmap::map(&file) } {
Ok(m) => m,
Err(e) => {
eprintln!("[ENGINE] ImportAudio mmap failed for {:?}: {}", path, e);
return;
}
};
// Parse WAV header to find PCM data offset and format
let header = match crate::io::parse_wav_header(&mmap) {
Ok(h) => h,
Err(e) => {
eprintln!("[ENGINE] ImportAudio WAV parse failed for {:?}: {}", path, e);
return;
}
};
let audio_file = crate::audio::pool::AudioFile::from_mmap(
path.clone(),
mmap,
header.data_offset,
header.sample_format,
header.channels,
header.sample_rate,
header.total_frames,
);
pool_index = self.audio_pool.add_file(audio_file);
}
crate::io::AudioFormat::Compressed => {
let sync_decode = std::env::var("DAW_SYNC_DECODE").is_ok();
if sync_decode {
// Diagnostic: full synchronous decode to InMemory (bypasses ring buffer)
eprintln!("[ENGINE] DAW_SYNC_DECODE: doing full decode of {:?}", path);
match crate::io::AudioFile::load(&path) {
Ok(loaded) => {
let ext = path.extension()
.and_then(|e| e.to_str())
.map(|s| s.to_lowercase());
let audio_file = crate::audio::pool::AudioFile::with_format(
path.clone(),
loaded.data,
loaded.channels,
loaded.sample_rate,
ext,
);
pool_index = self.audio_pool.add_file(audio_file);
eprintln!("[ENGINE] DAW_SYNC_DECODE: pool_index={}, frames={}", pool_index, loaded.frames);
}
Err(e) => {
eprintln!("[ENGINE] DAW_SYNC_DECODE failed: {}", e);
return;
}
}
} else {
// Normal path: stream decode via disk reader
let ext = path.extension()
.and_then(|e| e.to_str())
.map(|s| s.to_lowercase());
let total_frames = metadata.n_frames.unwrap_or_else(|| {
(metadata.duration * metadata.sample_rate as f64).ceil() as u64
});
let mut audio_file = crate::audio::pool::AudioFile::from_compressed(
path.clone(),
metadata.channels,
metadata.sample_rate,
total_frames,
ext,
);
});
idx
}
}
};
// Emit AudioFileReady event
let _ = self.event_tx.push(AudioEvent::AudioFileReady {
pool_index,
path: path_str,
channels: metadata.channels,
sample_rate: metadata.sample_rate,
duration: metadata.duration,
format: metadata.format,
});
let buffer = crate::audio::disk_reader::DiskReader::create_buffer(
metadata.sample_rate,
metadata.channels,
);
audio_file.read_ahead = Some(buffer.clone());
// For PCM files, send samples inline so the UI doesn't need to
// do a blocking get_pool_audio_samples() query.
if metadata.format == crate::io::AudioFormat::Pcm {
if let Some(file) = self.audio_pool.get_file(pool_index) {
let samples = file.data().to_vec();
if !samples.is_empty() {
let _ = self.event_tx.push(AudioEvent::AudioDecodeProgress {
pool_index,
samples,
sample_rate: metadata.sample_rate,
channels: metadata.channels,
});
pool_index = self.audio_pool.add_file(audio_file);
eprintln!("[ENGINE] Compressed: total_frames={}, pool_index={}, has_disk_reader={}",
total_frames, pool_index, self.disk_reader.is_some());
if let Some(ref mut dr) = self.disk_reader {
dr.send(crate::audio::disk_reader::DiskReaderCommand::ActivateFile {
pool_index,
path: path.clone(),
buffer,
});
}
// Spawn background thread to decode full file for waveform display
let bg_tx = self.chunk_generation_tx.clone();
let bg_path = path.clone();
let _ = std::thread::Builder::new()
.name(format!("waveform-decode-{}", pool_index))
.spawn(move || {
eprintln!("[WAVEFORM DECODE] Starting full decode of {:?}", bg_path);
match crate::io::AudioFile::load(&bg_path) {
Ok(loaded) => {
eprintln!("[WAVEFORM DECODE] Complete: {} frames, {} channels",
loaded.frames, loaded.channels);
let _ = bg_tx.send(AudioEvent::WaveformDecodeComplete {
pool_index,
samples: loaded.data,
});
}
Err(e) => {
eprintln!("[WAVEFORM DECODE] Failed to decode {:?}: {}", bg_path, e);
}
}
});
}
}
}
// Emit AudioFileReady event
let _ = self.event_tx.push(AudioEvent::AudioFileReady {
pool_index,
path: path_str,
channels: metadata.channels,
sample_rate: metadata.sample_rate,
duration: metadata.duration,
format: metadata.format,
});
}
}
Ok(pool_index)
}
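`parse_wav_header` itself isn't shown in the diff; roughly, such a parser walks RIFF chunks in the mmapped bytes until it finds `fmt ` (channel count, sample rate) and `data` (the PCM offset that `AudioFile::from_mmap` needs). A rough sketch under that assumption, with minimal error handling and no support for extensible formats:

```rust
fn parse_wav_header_sketch(bytes: &[u8]) -> Result<(u16, u32, usize), String> {
    if bytes.len() < 12 || &bytes[0..4] != b"RIFF" || &bytes[8..12] != b"WAVE" {
        return Err("not a RIFF/WAVE file".into());
    }
    let (mut channels, mut sample_rate) = (0u16, 0u32);
    let mut pos = 12;
    while pos + 8 <= bytes.len() {
        let id = &bytes[pos..pos + 4];
        let size = u32::from_le_bytes(bytes[pos + 4..pos + 8].try_into().unwrap()) as usize;
        let body = pos + 8;
        if body + size > bytes.len() {
            return Err("truncated chunk".into());
        }
        if id == b"fmt " && size >= 16 {
            // fmt body: u16 format tag, u16 channels, u32 sample rate, ...
            channels = u16::from_le_bytes(bytes[body + 2..body + 4].try_into().unwrap());
            sample_rate = u32::from_le_bytes(bytes[body + 4..body + 8].try_into().unwrap());
        } else if id == b"data" {
            return Ok((channels, sample_rate, body)); // data offset into the mmap
        } // other chunks (LIST, fact, ...) are skipped
        pos = body + size + (size & 1); // chunks are word-aligned
    }
    Err("no data chunk found".into())
}
```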
/// Handle synchronous queries from the UI thread
@ -2221,9 +2231,6 @@ impl Engine {
QueryResponse::AudioFileAddedSync(Ok(pool_index))
}
Query::ImportAudioSync(path) => {
QueryResponse::AudioImportedSync(self.do_import_audio(&path))
}
Query::GetProject => {
// Clone the entire project for serialization
QueryResponse::ProjectRetrieved(Ok(Box::new(self.project.clone())))
@ -2424,12 +2431,6 @@ impl Engine {
fn handle_stop_midi_recording(&mut self) {
eprintln!("[MIDI_RECORDING] handle_stop_midi_recording called");
if let Some(mut recording) = self.midi_recording_state.take() {
// Send note-off to the synth for any notes still held, so they don't get stuck
let track_id_for_noteoff = recording.track_id;
for note_num in recording.active_note_numbers() {
self.project.send_midi_note_off(track_id_for_noteoff, note_num);
}
// Close out any active notes at the current playhead position
let end_time = self.playhead as f64 / self.sample_rate as f64;
eprintln!("[MIDI_RECORDING] Closing active notes at time {}", end_time);
@ -2667,21 +2668,6 @@ impl EngineController {
let _ = self.command_tx.push(Command::ImportAudio(path));
}
/// Import an audio file synchronously and get the pool index.
/// Does the same work as `import_audio` (mmap for PCM, streaming for
/// compressed) but returns the real pool index directly.
/// NOTE: briefly blocks the UI thread during file setup (sub-ms for PCM
/// mmap; a few ms for compressed streaming init). If this becomes a
/// problem for very large files, switch to async import with event-based
/// pool index reconciliation.
pub fn import_audio_sync(&mut self, path: std::path::PathBuf) -> Result<usize, String> {
let query = Query::ImportAudioSync(path);
match self.send_query(query)? {
QueryResponse::AudioImportedSync(result) => result,
_ => Err("Unexpected query response".to_string()),
}
}
/// Add a clip to an audio track
pub fn add_audio_clip(&mut self, track_id: TrackId, pool_index: usize, start_time: f64, duration: f64, offset: f64) {
let _ = self.command_tx.push(Command::AddAudioClip(track_id, pool_index, start_time, duration, offset));


@ -72,7 +72,7 @@ pub fn export_audio<P: AsRef<Path>>(
midi_pool: &MidiClipPool,
settings: &ExportSettings,
output_path: P,
event_tx: Option<&mut rtrb::Producer<AudioEvent>>,
mut event_tx: Option<&mut rtrb::Producer<AudioEvent>>,
) -> Result<(), String>
{
// Route to appropriate export implementation based on format
@ -435,6 +435,8 @@ fn export_mp3<P: AsRef<Path>>(
channel_layout,
pts,
)?;
frames_rendered += final_frame_size;
}
// Flush encoder
@ -600,6 +602,8 @@ fn export_aac<P: AsRef<Path>>(
channel_layout,
pts,
)?;
frames_rendered += final_frame_size;
}
// Flush encoder
@ -613,6 +617,35 @@ fn export_aac<P: AsRef<Path>>(
Ok(())
}
/// Convert interleaved f32 samples to planar i16 format
fn convert_to_planar_i16(interleaved: &[f32], channels: u32) -> Vec<Vec<i16>> {
let num_frames = interleaved.len() / channels as usize;
let mut planar = vec![vec![0i16; num_frames]; channels as usize];
for (i, chunk) in interleaved.chunks(channels as usize).enumerate() {
for (ch, &sample) in chunk.iter().enumerate() {
let clamped = sample.max(-1.0).min(1.0);
planar[ch][i] = (clamped * 32767.0) as i16;
}
}
planar
}
/// Convert interleaved f32 samples to planar f32 format
fn convert_to_planar_f32(interleaved: &[f32], channels: u32) -> Vec<Vec<f32>> {
let num_frames = interleaved.len() / channels as usize;
let mut planar = vec![vec![0.0f32; num_frames]; channels as usize];
for (i, chunk) in interleaved.chunks(channels as usize).enumerate() {
for (ch, &sample) in chunk.iter().enumerate() {
planar[ch][i] = sample;
}
}
planar
}
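For concreteness, here is how the interleaved-to-planar conversion behaves on a two-frame stereo buffer (this assumes `convert_to_planar_i16` from the diff above is in scope):

```rust
// Stereo frames [L0, R0, L1, R1] become one plane per channel,
// with samples clamped to [-1, 1] and scaled to i16 range.
fn main() {
    let interleaved: Vec<f32> = vec![0.5, -0.5, 1.0, -1.0]; // 2 frames, 2 channels
    let planar = convert_to_planar_i16(&interleaved, 2);
    assert_eq!(planar[0], vec![16383, 32767]);   // left channel
    assert_eq!(planar[1], vec![-16383, -32767]); // right channel
}
```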
/// Convert a chunk of interleaved f32 samples to planar i16 format
fn convert_chunk_to_planar_i16(interleaved: &[f32], channels: u32) -> Vec<Vec<i16>> {
let num_frames = interleaved.len() / channels as usize;


@ -256,8 +256,7 @@ impl MidiClipInstance {
// Get events from the clip that fall within the internal range
for event in &clip.events {
// Skip events outside the trimmed region
// Use > (not >=) for internal_end so note-offs at the clip boundary are included
if event.timestamp < self.internal_start || event.timestamp > self.internal_end {
if event.timestamp < self.internal_start || event.timestamp >= self.internal_end {
continue;
}
@ -266,10 +265,9 @@ impl MidiClipInstance {
let timeline_time = self.external_start + loop_offset + relative_content_time;
// Check if within current buffer range and instance bounds
// Use <= for external_end so note-offs at the clip boundary are included
if timeline_time >= range_start_seconds
&& timeline_time < range_end_seconds
&& timeline_time <= external_end
&& timeline_time < external_end
{
let mut adjusted_event = *event;
adjusted_event.timestamp = timeline_time;
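A tiny worked example of why the boundary comparison above matters: with the exclusive test (`timestamp >= internal_end` skips the event), a note-off landing exactly on the clip boundary is dropped and the note hangs; the inclusive variant keeps it. Values below are illustrative:

```rust
fn main() {
    let (internal_start, internal_end) = (0.0_f64, 4.0_f64);
    let note_off_ts = 4.0_f64; // note-off exactly at the clip boundary

    let kept_exclusive = !(note_off_ts < internal_start || note_off_ts >= internal_end);
    let kept_inclusive = !(note_off_ts < internal_start || note_off_ts > internal_end);

    assert!(!kept_exclusive); // dropped → potential stuck note
    assert!(kept_inclusive);  // included → note released cleanly
}
```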


@ -511,11 +511,6 @@ impl AudioClipPool {
let src_start_position = start_time_seconds * audio_file.sample_rate as f64;
// Tell the disk reader where we're reading so it buffers the right region.
if use_read_ahead {
read_ahead.unwrap().set_target_frame(src_start_position as u64);
}
let mut rendered_frames = 0;
if audio_file.sample_rate == engine_sample_rate {


@ -253,11 +253,6 @@ impl MidiRecordingState {
self.completed_notes.len()
}
/// Get the note numbers of all currently held (active) notes
pub fn active_note_numbers(&self) -> Vec<u8> {
self.active_notes.keys().copied().collect()
}
/// Close out all active notes at the given time
/// This should be called when stopping recording to end any held notes
pub fn close_active_notes(&mut self, end_time: f64) {


@ -7,7 +7,7 @@ use super::node_graph::nodes::{AudioInputNode, AudioOutputNode};
use super::node_graph::preset::GraphPreset;
use super::pool::AudioClipPool;
use serde::{Serialize, Deserialize};
use std::collections::{HashMap, HashSet};
use std::collections::HashMap;
/// Track ID type
pub type TrackId = u32;
@ -334,10 +334,6 @@ pub struct MidiTrack {
/// Queue for live MIDI input (virtual keyboard, MIDI controllers)
#[serde(skip)]
live_midi_queue: Vec<MidiEvent>,
/// Clip instances that were active (overlapping playhead) in the previous render buffer.
/// Used to detect when the playhead exits a clip, so we can send all-notes-off.
#[serde(skip)]
prev_active_instances: HashSet<MidiClipInstanceId>,
}
impl Clone for MidiTrack {
@ -354,7 +350,6 @@ impl Clone for MidiTrack {
automation_lanes: self.automation_lanes.clone(),
next_automation_id: self.next_automation_id,
live_midi_queue: Vec::new(), // Don't clone live MIDI queue
prev_active_instances: HashSet::new(),
}
}
}
@ -377,7 +372,6 @@ impl MidiTrack {
automation_lanes: HashMap::new(),
next_automation_id: 0,
live_midi_queue: Vec::new(),
prev_active_instances: HashSet::new(),
}
}
@ -511,11 +505,7 @@ impl MidiTrack {
// Collect MIDI events from all clip instances that overlap with current time range
let mut midi_events = Vec::new();
let mut currently_active = HashSet::new();
for instance in &self.clip_instances {
if instance.overlaps_range(playhead_seconds, buffer_end_seconds) {
currently_active.insert(instance.id);
}
// Get the clip content from the pool
if let Some(clip) = midi_pool.get_clip(instance.clip_id) {
let events = instance.get_events_in_range(
@ -527,18 +517,6 @@ impl MidiTrack {
}
}
// Send all-notes-off for clip instances that just became inactive
// (playhead exited the clip). This prevents stuck notes from malformed clips.
for prev_id in &self.prev_active_instances {
if !currently_active.contains(prev_id) {
for note in 0..128u8 {
midi_events.push(MidiEvent::note_off(playhead_seconds, 0, note, 0));
}
break; // One round of all-notes-off is enough
}
}
self.prev_active_instances = currently_active;
// Add live MIDI events (from virtual keyboard or MIDI controllers)
// This allows real-time input to be heard during playback/recording
midi_events.extend(self.live_midi_queue.drain(..));


@ -64,7 +64,7 @@ pub struct WaveformCache {
chunks: HashMap<WaveformChunkKey, Vec<WaveformPeak>>,
/// Maximum memory usage in MB (for future LRU eviction)
_max_memory_mb: usize,
max_memory_mb: usize,
/// Current memory usage estimate in bytes
current_memory_bytes: usize,
@ -75,7 +75,7 @@ impl WaveformCache {
pub fn new(max_memory_mb: usize) -> Self {
Self {
chunks: HashMap::new(),
_max_memory_mb: max_memory_mb,
max_memory_mb,
current_memory_bytes: 0,
}
}
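The rename from `_max_memory_mb` to `max_memory_mb` suggests the budget is (or will be) enforced. A sketch of what memory-capped insertion could look like; the eviction order and value types here are assumptions, since the diff only shows the field:

```rust
use std::collections::HashMap;

struct Cache {
    chunks: HashMap<u64, Vec<f32>>,
    max_memory_bytes: usize,
    current_memory_bytes: usize,
}

impl Cache {
    fn insert(&mut self, key: u64, peaks: Vec<f32>) {
        let size = peaks.len() * std::mem::size_of::<f32>();
        // Evict arbitrary entries until the new chunk fits. A real LRU would
        // track access order instead of draining whatever the HashMap yields.
        while self.current_memory_bytes + size > self.max_memory_bytes {
            let Some(&victim) = self.chunks.keys().next() else { break };
            if let Some(evicted) = self.chunks.remove(&victim) {
                self.current_memory_bytes -= evicted.len() * std::mem::size_of::<f32>();
            }
        }
        self.current_memory_bytes += size;
        self.chunks.insert(key, peaks);
    }
}
```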


@ -274,22 +274,18 @@ pub enum AudioEvent {
},
/// Progressive decode progress for a compressed audio file's waveform data.
/// Carries the samples inline so the UI doesn't need to query back.
/// The UI can use this to update waveform display incrementally.
AudioDecodeProgress {
pool_index: usize,
samples: Vec<f32>,
sample_rate: u32,
channels: u32,
decoded_frames: u64,
total_frames: u64,
},
/// Background waveform decode progress/completion for a compressed audio file.
/// Background waveform decode completed for a compressed audio file.
/// Internal event — consumed by the engine to update the pool, not forwarded to UI.
/// `decoded_frames` < `total_frames` means partial; equal means complete.
WaveformDecodeComplete {
pool_index: usize,
samples: Vec<f32>,
decoded_frames: u64,
total_frames: u64,
},
}
@ -337,14 +333,6 @@ pub enum Query {
AddAudioClipSync(TrackId, usize, f64, f64, f64),
/// Add an audio file to the pool synchronously (path, data, channels, sample_rate) - returns pool index
AddAudioFileSync(String, Vec<f32>, u32, u32),
/// Import an audio file synchronously (path) - returns pool index.
/// Does the same work as Command::ImportAudio (mmap for PCM, streaming
/// setup for compressed) but returns the real pool index in the response.
/// NOTE: briefly blocks the UI thread during file setup (sub-ms for PCM
/// mmap; a few ms for compressed streaming init). If this becomes a
/// problem for very large files, switch to async import with event-based
/// pool index reconciliation.
ImportAudioSync(std::path::PathBuf),
/// Get raw audio samples from pool (pool_index) - returns (samples, sample_rate, channels)
GetPoolAudioSamples(usize),
/// Get a clone of the current project for serialization
@ -416,8 +404,6 @@ pub enum QueryResponse {
AudioClipInstanceAdded(Result<AudioClipInstanceId, String>),
/// Audio file added to pool (returns pool index)
AudioFileAddedSync(Result<usize, String>),
/// Audio file imported to pool (returns pool index)
AudioImportedSync(Result<usize, String>),
/// Raw audio samples from pool (samples, sample_rate, channels)
PoolAudioSamples(Result<(Vec<f32>, u32, u32), String>),
/// Project retrieved


@ -338,123 +338,6 @@ impl AudioFile {
})
}
/// Decode a compressed audio file progressively, calling `on_progress` with
/// partial data snapshots so the UI can display waveforms as they decode.
/// Sends updates roughly every 2 seconds of decoded audio.
pub fn decode_progressive<P: AsRef<Path>, F>(path: P, total_frames: u64, on_progress: F)
where
F: Fn(&[f32], u64, u64),
{
let path = path.as_ref();
let file = match std::fs::File::open(path) {
Ok(f) => f,
Err(e) => {
eprintln!("[WAVEFORM DECODE] Failed to open {:?}: {}", path, e);
return;
}
};
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(extension) = path.extension() {
if let Some(ext_str) = extension.to_str() {
hint.with_extension(ext_str);
}
}
let probed = match symphonia::default::get_probe()
.format(&hint, mss, &FormatOptions::default(), &MetadataOptions::default())
{
Ok(p) => p,
Err(e) => {
eprintln!("[WAVEFORM DECODE] Failed to probe {:?}: {}", path, e);
return;
}
};
let mut format = probed.format;
let track = match format.tracks().iter()
.find(|t| t.codec_params.codec != symphonia::core::codecs::CODEC_TYPE_NULL)
{
Some(t) => t,
None => {
eprintln!("[WAVEFORM DECODE] No audio tracks in {:?}", path);
return;
}
};
let track_id = track.id;
let channels = track.codec_params.channels
.map(|c| c.count() as u32)
.unwrap_or(2);
let sample_rate = track.codec_params.sample_rate.unwrap_or(44100);
let mut decoder = match symphonia::default::get_codecs()
.make(&track.codec_params, &DecoderOptions::default())
{
Ok(d) => d,
Err(e) => {
eprintln!("[WAVEFORM DECODE] Failed to create decoder for {:?}: {}", path, e);
return;
}
};
let mut audio_data = Vec::new();
let mut sample_buf = None;
// Send the first progress update quickly (~0.25s of audio), then one every ~2s of decoded audio
let initial_interval = (sample_rate as usize * channels as usize) / 4;
let steady_interval = (sample_rate as usize * channels as usize) * 2;
let mut sent_first = false;
let mut last_update_len = 0usize;
loop {
let packet = match format.next_packet() {
Ok(packet) => packet,
Err(Error::IoError(e)) if e.kind() == std::io::ErrorKind::UnexpectedEof => break,
Err(Error::ResetRequired) => break,
Err(_) => break,
};
if packet.track_id() != track_id {
continue;
}
match decoder.decode(&packet) {
Ok(decoded) => {
if sample_buf.is_none() {
let spec = *decoded.spec();
let duration = decoded.capacity() as u64;
sample_buf = Some(SampleBuffer::<f32>::new(duration, spec));
}
if let Some(ref mut buf) = sample_buf {
buf.copy_interleaved_ref(decoded);
audio_data.extend_from_slice(buf.samples());
}
// Send progressive update (fast initial, then periodic)
// Only send NEW samples since last update (delta) to avoid large copies
let interval = if sent_first { steady_interval } else { initial_interval };
if audio_data.len() - last_update_len >= interval {
let decoded_frames = audio_data.len() as u64 / channels as u64;
on_progress(&audio_data[last_update_len..], decoded_frames, total_frames);
last_update_len = audio_data.len();
sent_first = true;
}
}
Err(Error::DecodeError(_)) => continue,
Err(_) => break,
}
}
// Final update with remaining data (delta since last update)
let decoded_frames = audio_data.len() as u64 / channels as u64;
on_progress(&audio_data[last_update_len..], decoded_frames, decoded_frames.max(total_frames));
}
/// Calculate the duration of the audio file in seconds
pub fn duration(&self) -> f64 {
self.frames as f64 / self.sample_rate as f64


@ -42,30 +42,3 @@ pollster = "0.3"
# Desktop notifications
notify-rust = "4.11"
# Optimize the audio backend even in debug builds — the audio callback
# runs on a real-time thread with ~1.5ms deadlines at small buffer sizes,
# so it cannot tolerate unoptimized code.
[profile.dev.package.daw-backend]
opt-level = 2
# Also optimize symphonia (audio decoder) and cpal (audio I/O) — these
# run in the audio callback path and are heavily numeric.
[profile.dev.package.symphonia]
opt-level = 2
[profile.dev.package.symphonia-core]
opt-level = 2
[profile.dev.package.symphonia-bundle-mp3]
opt-level = 2
[profile.dev.package.symphonia-bundle-flac]
opt-level = 2
[profile.dev.package.symphonia-format-ogg]
opt-level = 2
[profile.dev.package.symphonia-codec-vorbis]
opt-level = 2
[profile.dev.package.symphonia-codec-aac]
opt-level = 2
[profile.dev.package.symphonia-format-isomp4]
opt-level = 2
[profile.dev.package.cpal]
opt-level = 2
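The ~1.5 ms figure in the comment above is just the callback deadline, `buffer_frames / sample_rate` — the callback must fill a buffer before the hardware drains it. For example, with a (hypothetical) 64-frame buffer at 44.1 kHz:

```rust
fn main() {
    let buffer_frames = 64.0_f64;
    let sample_rate = 44_100.0_f64;
    let deadline_ms = buffer_frames / sample_rate * 1000.0;
    println!("callback deadline: {:.2} ms", deadline_ms); // ≈ 1.45 ms
}
```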


@ -96,18 +96,6 @@ pub trait Action: Send {
fn rollback_backend(&mut self, _backend: &mut BackendContext, _document: &Document) -> Result<(), String> {
Ok(())
}
/// Return MIDI cache data reflecting the state after execute/redo.
/// Format: (clip_id, notes) where notes are (start_time, note, velocity, duration).
/// Used to keep the frontend MIDI event cache in sync after undo/redo.
fn midi_notes_after_execute(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
None
}
/// Return MIDI cache data reflecting the state after rollback/undo.
fn midi_notes_after_rollback(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
None
}
}
/// Action executor that wraps the document and manages undo/redo
@ -257,18 +245,6 @@ impl ActionExecutor {
self.undo_stack.last().map(|a| a.description())
}
/// Get MIDI cache data from the last action on the undo stack (after redo).
/// Returns the notes reflecting execute state.
pub fn last_undo_midi_notes(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
self.undo_stack.last().and_then(|a| a.midi_notes_after_execute())
}
/// Get MIDI cache data from the last action on the redo stack (after undo).
/// Returns the notes reflecting rollback state.
pub fn last_redo_midi_notes(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
self.redo_stack.last().and_then(|a| a.midi_notes_after_rollback())
}
/// Get the description of the next action to redo
pub fn redo_description(&self) -> Option<String> {
self.redo_stack.last().map(|a| a.description())


@ -24,7 +24,6 @@ pub mod create_folder;
pub mod rename_folder;
pub mod delete_folder;
pub mod move_asset_to_folder;
pub mod update_midi_notes;
pub use add_clip_instance::AddClipInstanceAction;
pub use add_effect::AddEffectAction;
@ -47,4 +46,3 @@ pub use create_folder::CreateFolderAction;
pub use rename_folder::RenameFolderAction;
pub use delete_folder::{DeleteFolderAction, DeleteStrategy};
pub use move_asset_to_folder::MoveAssetToFolderAction;
pub use update_midi_notes::UpdateMidiNotesAction;


@ -32,7 +32,7 @@ impl Action for MoveClipInstancesAction {
let mut expanded_moves = self.layer_moves.clone();
let mut already_processed = std::collections::HashSet::new();
for (_layer_id, moves) in &self.layer_moves {
for (layer_id, moves) in &self.layer_moves {
for (instance_id, old_start, new_start) in moves {
// Skip if already processed
if already_processed.contains(instance_id) {


@ -26,10 +26,10 @@ pub struct PaintBucketAction {
fill_color: ShapeColor,
/// Tolerance for gap bridging (in pixels)
_tolerance: f64,
tolerance: f64,
/// Gap handling mode
_gap_mode: GapHandlingMode,
gap_mode: GapHandlingMode,
/// ID of the created shape (set after execution)
created_shape_id: Option<Uuid>,
@ -59,8 +59,8 @@ impl PaintBucketAction {
layer_id,
click_point,
fill_color,
_tolerance: tolerance,
_gap_mode: gap_mode,
tolerance,
gap_mode,
created_shape_id: None,
created_shape_instance_id: None,
}


@ -68,6 +68,26 @@ impl SetInstancePropertiesAction {
}
}
fn get_instance_value(&self, document: &Document, instance_id: &Uuid) -> Option<f64> {
if let Some(layer) = document.get_layer(&self.layer_id) {
if let AnyLayer::Vector(vector_layer) = layer {
if let Some(instance) = vector_layer.get_object(instance_id) {
return Some(match &self.property {
InstancePropertyChange::X(_) => instance.transform.x,
InstancePropertyChange::Y(_) => instance.transform.y,
InstancePropertyChange::Rotation(_) => instance.transform.rotation,
InstancePropertyChange::ScaleX(_) => instance.transform.scale_x,
InstancePropertyChange::ScaleY(_) => instance.transform.scale_y,
InstancePropertyChange::SkewX(_) => instance.transform.skew_x,
InstancePropertyChange::SkewY(_) => instance.transform.skew_y,
InstancePropertyChange::Opacity(_) => instance.opacity,
});
}
}
}
None
}
fn apply_to_instance(&self, document: &mut Document, instance_id: &Uuid, value: f64) {
if let Some(layer) = document.get_layer_mut(&self.layer_id) {
if let AnyLayer::Vector(vector_layer) = layer {


@ -68,7 +68,7 @@ impl Action for TrimClipInstancesAction {
let mut expanded_trims = self.layer_trims.clone();
let mut already_processed = std::collections::HashSet::new();
for (_layer_id, trims) in &self.layer_trims {
for (layer_id, trims) in &self.layer_trims {
for (instance_id, trim_type, old, new) in trims {
// Skip if already processed
if already_processed.contains(instance_id) {
@ -189,7 +189,7 @@ impl Action for TrimClipInstancesAction {
match trim_type {
TrimType::TrimLeft => {
if let (Some(old_trim), Some(new_trim), Some(old_timeline), Some(_new_timeline)) =
if let (Some(old_trim), Some(new_trim), Some(old_timeline), Some(new_timeline)) =
(old.trim_value, new.trim_value, old.timeline_start, new.timeline_start)
{
// If extending to the left (new_trim < old_trim)
@ -365,7 +365,7 @@ impl Action for TrimClipInstancesAction {
.ok_or_else(|| format!("Layer {} not mapped to backend track", layer_id))?;
// Process each clip instance trim
for (instance_id, _trim_type, _old, _new) in trims {
for (instance_id, trim_type, _old, new) in trims {
// Get clip instances from the layer
let clip_instances = match layer {
AnyLayer::Audio(al) => &al.clip_instances,


@ -1,82 +0,0 @@
use crate::action::Action;
use crate::document::Document;
use uuid::Uuid;
/// Action to update MIDI notes in a clip (supports undo/redo)
///
/// Stores the before and after note states. MIDI note data lives in the backend,
/// so execute/rollback are no-ops on the document — all changes go through
/// execute_backend/rollback_backend.
pub struct UpdateMidiNotesAction {
/// Layer containing the MIDI clip
pub layer_id: Uuid,
/// Backend MIDI clip ID
pub midi_clip_id: u32,
/// Notes before the edit: (start_time, note, velocity, duration)
pub old_notes: Vec<(f64, u8, u8, f64)>,
/// Notes after the edit: (start_time, note, velocity, duration)
pub new_notes: Vec<(f64, u8, u8, f64)>,
/// Human-readable description
pub description_text: String,
}
impl Action for UpdateMidiNotesAction {
fn execute(&mut self, _document: &mut Document) -> Result<(), String> {
// MIDI note data lives in the backend, not the document
Ok(())
}
fn rollback(&mut self, _document: &mut Document) -> Result<(), String> {
Ok(())
}
fn description(&self) -> String {
self.description_text.clone()
}
fn execute_backend(
&mut self,
backend: &mut crate::action::BackendContext,
_document: &Document,
) -> Result<(), String> {
let controller = match backend.audio_controller.as_mut() {
Some(c) => c,
None => return Ok(()),
};
let track_id = backend
.layer_to_track_map
.get(&self.layer_id)
.ok_or_else(|| format!("Layer {} not mapped to backend track", self.layer_id))?;
controller.update_midi_clip_notes(*track_id, self.midi_clip_id, self.new_notes.clone());
Ok(())
}
fn rollback_backend(
&mut self,
backend: &mut crate::action::BackendContext,
_document: &Document,
) -> Result<(), String> {
let controller = match backend.audio_controller.as_mut() {
Some(c) => c,
None => return Ok(()),
};
let track_id = backend
.layer_to_track_map
.get(&self.layer_id)
.ok_or_else(|| format!("Layer {} not mapped to backend track", self.layer_id))?;
controller.update_midi_clip_notes(*track_id, self.midi_clip_id, self.old_notes.clone());
Ok(())
}
fn midi_notes_after_execute(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
Some((self.midi_clip_id, &self.new_notes))
}
fn midi_notes_after_rollback(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
Some((self.midi_clip_id, &self.old_notes))
}
}


@ -229,6 +229,27 @@ pub fn find_closest_approach(
}
}
/// Refine intersection parameters by alternating nearest-point projection
fn refine_intersection(
curve1: &CubicBez,
curve2: &CubicBez,
mut t1: f64,
mut t2: f64,
) -> (f64, f64) {
// Simple refinement: just find nearest points iteratively
for _ in 0..5 {
let p1 = curve1.eval(t1);
let nearest2 = curve2.nearest(p1, 1e-6);
t2 = nearest2.t;
let p2 = curve2.eval(t2);
let nearest1 = curve1.nearest(p2, 1e-6);
t1 = nearest1.t;
}
(t1.clamp(0.0, 1.0), t2.clamp(0.0, 1.0))
}
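A usage sketch for the refinement above, assuming the `kurbo` crate as in the surrounding code and `refine_intersection` in scope; the curve coordinates are illustrative:

```rust
use kurbo::{CubicBez, ParamCurve};

fn main() {
    // Two arcs crossing between their endpoints and midpoints.
    let c1 = CubicBez::new((0., 0.), (1., 2.), (2., 2.), (3., 0.));
    let c2 = CubicBez::new((0., 2.), (1., 0.), (2., 0.), (3., 2.));
    // Seed with coarse t-values, let the projection tighten them.
    let (t1, t2) = refine_intersection(&c1, &c2, 0.5, 0.5);
    println!("refined: t1={t1:.4}, t2={t2:.4}, p1={:?}", c1.eval(t1));
}
```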
/// Refine self-intersection parameters
fn refine_self_intersection(curve: &CubicBez, mut t1: f64, mut t2: f64) -> (f64, f64) {
// Refine by moving parameters closer to where curves actually meet


@ -189,6 +189,22 @@ impl EffectLayer {
self.clip_instances = new_order;
}
// === MUTATION METHODS (pub(crate) - only accessible to action module) ===
/// Add a clip instance (internal, for actions only)
pub(crate) fn add_clip_instance_internal(&mut self, instance: ClipInstance) -> Uuid {
self.add_clip_instance(instance)
}
/// Remove a clip instance (internal, for actions only)
pub(crate) fn remove_clip_instance_internal(&mut self, id: &Uuid) -> Option<ClipInstance> {
self.remove_clip_instance(id)
}
/// Insert a clip instance at a specific index (internal, for actions only)
pub(crate) fn insert_clip_instance_internal(&mut self, index: usize, instance: ClipInstance) -> Uuid {
self.insert_clip_instance(index, instance)
}
}
#[cfg(test)]


@ -455,23 +455,23 @@ struct CurveIntersection {
t_on_current: f64,
/// Parameter on other curve
_t_on_other: f64,
t_on_other: f64,
/// ID of the other curve
_other_curve_id: usize,
other_curve_id: usize,
/// Intersection point
point: Point,
/// Whether this is a gap (within tolerance but not exact intersection)
_is_gap: bool,
is_gap: bool,
}
/// Find all intersections on a given curve
fn find_intersections_on_curve(
curve_id: usize,
curves: &[CubicBez],
_processed_curves: &HashSet<usize>,
processed_curves: &HashSet<usize>,
quadtree: &ToleranceQuadtree,
tolerance: f64,
debug_info: &mut WalkDebugInfo,
@ -489,10 +489,10 @@ fn find_intersections_on_curve(
for int in self_ints {
intersections.push(CurveIntersection {
t_on_current: int.t1,
_t_on_other: int.t2.unwrap_or(int.t1),
_other_curve_id: curve_id,
t_on_other: int.t2.unwrap_or(int.t1),
other_curve_id: curve_id,
point: int.point,
_is_gap: false,
is_gap: false,
});
debug_info.intersections_found += 1;
}
@ -504,10 +504,10 @@ fn find_intersections_on_curve(
for int in exact_ints {
intersections.push(CurveIntersection {
t_on_current: int.t1,
_t_on_other: int.t2.unwrap_or(0.0),
_other_curve_id: other_id,
t_on_other: int.t2.unwrap_or(0.0),
other_curve_id: other_id,
point: int.point,
_is_gap: false,
is_gap: false,
});
debug_info.intersections_found += 1;
}
@ -516,10 +516,10 @@ fn find_intersections_on_curve(
if let Some(approach) = find_closest_approach(current_curve, other_curve, tolerance) {
intersections.push(CurveIntersection {
t_on_current: approach.t1,
_t_on_other: approach.t2,
_other_curve_id: other_id,
t_on_other: approach.t2,
other_curve_id: other_id,
point: approach.p1,
_is_gap: true,
is_gap: true,
});
}
}


@ -478,7 +478,7 @@ fn map_t_to_relative_distances(bez: &[Point; 4], b_parts: usize) -> Vec<f64> {
}
/// Find t value for a given parameter distance
fn find_t(_bez: &[Point; 4], param: f64, t_dist_map: &[f64], b_parts: usize) -> f64 {
fn find_t(bez: &[Point; 4], param: f64, t_dist_map: &[f64], b_parts: usize) -> f64 {
if param < 0.0 {
return 0.0;
}


@ -122,7 +122,7 @@ impl PlanarGraph {
// Initialize with endpoints for all curves
for (i, curve) in curves.iter().enumerate() {
let curve_intersections = vec![
let mut curve_intersections = vec![
(0.0, curve.p0),
(1.0, curve.p3),
];
@ -202,7 +202,7 @@ impl PlanarGraph {
/// Build nodes and edges from curves and their intersections
fn build_nodes_and_edges(
_curves: &[CubicBez],
curves: &[CubicBez],
intersections: HashMap<usize, Vec<(f64, Point)>>,
) -> (Vec<GraphNode>, Vec<GraphEdge>) {
let mut nodes = Vec::new();
@ -459,6 +459,11 @@ impl PlanarGraph {
// Get the end node of this half-edge
let edge = &self.edges[current_edge];
let start_node_this_edge = if current_forward {
edge.start_node
} else {
edge.end_node
};
let end_node = if current_forward {
edge.end_node
} else {


@ -32,9 +32,9 @@ struct ExtractedSegment {
/// Original curve index
curve_index: usize,
/// Minimum parameter value from boundary points
_t_min: f64,
t_min: f64,
/// Maximum parameter value from boundary points
_t_max: f64,
t_max: f64,
/// The curve segment (trimmed to [t_min, t_max])
segment: CurveSegment,
}
@ -148,8 +148,8 @@ fn split_segments_at_intersections(segments: Vec<ExtractedSegment>) -> Vec<Extra
result.push(ExtractedSegment {
curve_index: seg.curve_index,
_t_min: t_start,
_t_max: t_end,
t_min: t_start,
t_max: t_end,
segment: subseg,
});
}
@ -260,8 +260,8 @@ fn extract_segments(
segments.push(ExtractedSegment {
curve_index: curve_idx,
_t_min: t_min,
_t_max: t_max,
t_min,
t_max,
segment,
});
}
@ -540,7 +540,7 @@ enum ConnectedSegment {
Curve {
segment: CurveSegment,
start: Point,
_end: Point,
end: Point,
},
/// A line segment bridging a gap
Line { start: Point, end: Point },
@ -550,7 +550,7 @@ enum ConnectedSegment {
fn connect_segments(
extracted: &[ExtractedSegment],
config: &SegmentBuilderConfig,
_click_point: Point,
click_point: Point,
) -> Option<Vec<ConnectedSegment>> {
if extracted.is_empty() {
println!("connect_segments: No segments to connect");
@ -575,7 +575,7 @@ fn connect_segments(
connected.push(ConnectedSegment::Curve {
segment: current.segment.clone(),
start: current.segment.eval_at(0.0),
_end: current_end,
end: current_end,
});
// Check if we need to connect to the next segment
@ -794,7 +794,7 @@ mod tests {
// If it found segments, verify they're valid
assert!(!segments.is_empty());
for seg in &segments {
assert!(seg._t_min <= seg._t_max);
assert!(seg.t_min <= seg.t_max);
}
}
// If None, the algorithm couldn't form a cycle - that's okay for this test


@ -23,12 +23,12 @@ pub struct VideoMetadata {
/// Video decoder with LRU frame caching
pub struct VideoDecoder {
path: String,
_width: u32, // Original video width
_height: u32, // Original video height
width: u32, // Original video width
height: u32, // Original video height
output_width: u32, // Scaled output width
output_height: u32, // Scaled output height
fps: f64,
_duration: f64,
duration: f64,
time_base: f64,
stream_index: usize,
frame_cache: LruCache<i64, Vec<u8>>, // timestamp -> RGBA data
@ -107,12 +107,12 @@ impl VideoDecoder {
Ok(Self {
path,
_width: width,
_height: height,
width,
height,
output_width,
output_height,
fps,
_duration: duration,
duration,
time_base,
stream_index,
frame_cache: LruCache::new(


@ -1,717 +0,0 @@
/// GPU-based Constant-Q Transform (CQT) spectrogram with streaming ring-buffer cache.
///
/// Replaces the old FFT spectrogram with a CQT that has logarithmic frequency spacing
/// (bins map directly to MIDI notes). Only the visible viewport is computed, with results
/// cached in a ring-buffer texture so scrolling only computes new columns.
///
/// Architecture:
/// - CqtGpuResources stored in CallbackResources (long-lived, holds pipelines)
/// - CqtCacheEntry per pool_index (cache texture, bin params, ring buffer state)
/// - CqtCallback implements CallbackTrait (per-frame compute + render)
/// - Compute shader reads audio from waveform mip-0 textures (already on GPU)
/// - Render shader reads from cache texture with colormap
use std::collections::HashMap;
use wgpu::util::DeviceExt;
use crate::waveform_gpu::WaveformGpuResources;
/// CQT parameters
const BINS_PER_OCTAVE: u32 = 24;
const FREQ_BINS: u32 = 174; // ceil(log2(4186.0 / 27.5) * 24) = ceil(173.95)
const HOP_SIZE: u32 = 512;
const CACHE_CAPACITY: u32 = 4096;
const MAX_COLS_PER_FRAME: u32 = 128;
const F_MIN: f64 = 27.5; // A0 = MIDI 21
const WAVEFORM_TEX_WIDTH: u32 = 2048;
/// Per-bin CQT kernel parameters, uploaded as a storage buffer.
/// Must match BinInfo in cqt_compute.wgsl.
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct CqtBinParams {
window_length: u32,
phase_step: f32, // 2*pi*Q / N_k
_pad0: u32,
_pad1: u32,
}
/// Compute shader uniform params. Must match CqtParams in cqt_compute.wgsl.
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct CqtComputeParams {
hop_size: u32,
freq_bins: u32,
cache_capacity: u32,
cache_write_offset: u32,
num_columns: u32,
column_start: u32,
tex_width: u32,
total_frames: u32,
sample_rate: f32,
column_stride: u32,
_pad1: u32,
_pad2: u32,
}
/// Render shader uniform params. Must match Params in cqt_render.wgsl exactly.
/// Layout: clip_rect (16) + 19 × f32 (76) + 1 × f32 pad (4) = 96 bytes
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
pub struct CqtRenderParams {
pub clip_rect: [f32; 4], // 16 bytes @ offset 0
pub viewport_start_time: f32, // 4 @ 16
pub pixels_per_second: f32, // 4 @ 20
pub audio_duration: f32, // 4 @ 24
pub sample_rate: f32, // 4 @ 28
pub clip_start_time: f32, // 4 @ 32
pub trim_start: f32, // 4 @ 36
pub freq_bins: f32, // 4 @ 40
pub bins_per_octave: f32, // 4 @ 44
pub hop_size: f32, // 4 @ 48
pub scroll_y: f32, // 4 @ 52
pub note_height: f32, // 4 @ 56
pub min_note: f32, // 4 @ 60
pub max_note: f32, // 4 @ 64
pub gamma: f32, // 4 @ 68
pub cache_capacity: f32, // 4 @ 72
pub cache_start_column: f32, // 4 @ 76
pub cache_valid_start: f32, // 4 @ 80
pub cache_valid_end: f32, // 4 @ 84
pub column_stride: f32, // 4 @ 88
pub _pad: f32, // 4 @ 92, total 96
}
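The 96-byte claim in the layout comment is cheap to pin down (assuming `CqtRenderParams` is in scope); with `#[repr(C)]` and only `f32`/`[f32; 4]` fields there is no hidden padding:

```rust
fn main() {
    assert_eq!(std::mem::size_of::<CqtRenderParams>(), 96);
    assert_eq!(std::mem::align_of::<CqtRenderParams>(), 4);
}
```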
/// Per-pool-index cache entry with ring buffer and GPU resources.
#[allow(dead_code)]
struct CqtCacheEntry {
// Cache texture (Rgba16Float for universal filterable + storage support)
cache_texture: wgpu::Texture,
cache_texture_view: wgpu::TextureView,
cache_storage_view: wgpu::TextureView,
cache_capacity: u32,
freq_bins: u32,
// Ring buffer state
cache_start_column: i64,
cache_valid_start: i64,
cache_valid_end: i64,
// CQT kernel data
bin_params_buffer: wgpu::Buffer,
// Waveform texture reference (cloned from WaveformGpuEntry)
waveform_texture_view: wgpu::TextureView,
waveform_total_frames: u64,
// Bind groups
compute_bind_group: wgpu::BindGroup,
compute_uniform_buffer: wgpu::Buffer,
render_bind_group: wgpu::BindGroup,
render_uniform_buffer: wgpu::Buffer,
// Metadata
sample_rate: u32,
current_stride: u32,
}
/// Global GPU resources for CQT (stored in egui_wgpu::CallbackResources).
pub struct CqtGpuResources {
entries: HashMap<usize, CqtCacheEntry>,
compute_pipeline: wgpu::ComputePipeline,
compute_bind_group_layout: wgpu::BindGroupLayout,
render_pipeline: wgpu::RenderPipeline,
render_bind_group_layout: wgpu::BindGroupLayout,
sampler: wgpu::Sampler,
}
/// Per-frame callback for computing and rendering a CQT spectrogram.
pub struct CqtCallback {
pub pool_index: usize,
pub params: CqtRenderParams,
pub target_format: wgpu::TextureFormat,
pub sample_rate: u32,
/// Visible column range (global CQT column indices)
pub visible_col_start: i64,
pub visible_col_end: i64,
/// Column stride: 1 = full resolution, N = compute every Nth column
pub stride: u32,
}
/// Precompute CQT bin parameters for a given sample rate.
fn precompute_bin_params(sample_rate: u32) -> Vec<CqtBinParams> {
let b = BINS_PER_OCTAVE as f64;
let q = 1.0 / (2.0_f64.powf(1.0 / b) - 1.0);
(0..FREQ_BINS)
.map(|k| {
let f_k = F_MIN * 2.0_f64.powf(k as f64 / b);
let n_k = (q * sample_rate as f64 / f_k).ceil() as u32;
let phase_step = (2.0 * std::f64::consts::PI * q / n_k as f64) as f32;
CqtBinParams {
window_length: n_k,
phase_step,
_pad0: 0,
_pad1: 0,
}
})
.collect()
}
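A quick sanity check of the kernel math above: with 24 bins per octave, Q = 1/(2^(1/24) − 1) ≈ 34.1, so the lowest bin (A0 = 27.5 Hz at an assumed 44.1 kHz sample rate) needs a window of about 54.7k samples (~1.24 s), while the top bins need only a few hundred:

```rust
fn main() {
    let b = 24.0_f64;
    let q = 1.0 / (2.0_f64.powf(1.0 / b) - 1.0);
    let (sr, f_min) = (44_100.0_f64, 27.5_f64);
    let n_lowest = (q * sr / f_min).ceil();
    println!("Q = {:.2}, window for A0 = {} samples", q, n_lowest);
}
```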
impl CqtGpuResources {
pub fn new(device: &wgpu::Device, target_format: wgpu::TextureFormat) -> Self {
// Compute shader
let compute_shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("cqt_compute_shader"),
source: wgpu::ShaderSource::Wgsl(
include_str!("panes/shaders/cqt_compute.wgsl").into(),
),
});
// Render shader
let render_shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("cqt_render_shader"),
source: wgpu::ShaderSource::Wgsl(
include_str!("panes/shaders/cqt_render.wgsl").into(),
),
});
// Compute bind group layout:
// 0: audio_tex (texture_2d<f32>, read)
// 1: cqt_out (texture_storage_2d<rgba16float, write>)
// 2: params (uniform)
// 3: bins (storage, read)
let compute_bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("cqt_compute_bgl"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: false },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::StorageTexture {
access: wgpu::StorageTextureAccess::WriteOnly,
format: wgpu::TextureFormat::Rgba16Float,
view_dimension: wgpu::TextureViewDimension::D2,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 3,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Storage { read_only: true },
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
});
// Render bind group layout: cache_tex + sampler + uniforms
let render_bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("cqt_render_bgl"),
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Texture {
sample_type: wgpu::TextureSampleType::Float { filterable: true },
view_dimension: wgpu::TextureViewDimension::D2,
multisampled: false,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
},
],
});
// Compute pipeline
let compute_pipeline_layout =
device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("cqt_compute_pipeline_layout"),
bind_group_layouts: &[&compute_bind_group_layout],
push_constant_ranges: &[],
});
let compute_pipeline =
device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
label: Some("cqt_compute_pipeline"),
layout: Some(&compute_pipeline_layout),
module: &compute_shader,
entry_point: Some("main"),
compilation_options: Default::default(),
cache: None,
});
// Render pipeline
let render_pipeline_layout =
device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("cqt_render_pipeline_layout"),
bind_group_layouts: &[&render_bind_group_layout],
push_constant_ranges: &[],
});
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("cqt_render_pipeline"),
layout: Some(&render_pipeline_layout),
vertex: wgpu::VertexState {
module: &render_shader,
entry_point: Some("vs_main"),
buffers: &[],
compilation_options: Default::default(),
},
fragment: Some(wgpu::FragmentState {
module: &render_shader,
entry_point: Some("fs_main"),
targets: &[Some(wgpu::ColorTargetState {
format: target_format,
blend: Some(wgpu::BlendState::ALPHA_BLENDING),
write_mask: wgpu::ColorWrites::ALL,
})],
compilation_options: Default::default(),
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
..Default::default()
},
depth_stencil: None,
multisample: wgpu::MultisampleState::default(),
multiview: None,
cache: None,
});
// Bilinear sampler for smooth interpolation in render shader
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
label: Some("cqt_sampler"),
mag_filter: wgpu::FilterMode::Linear,
min_filter: wgpu::FilterMode::Linear,
mipmap_filter: wgpu::FilterMode::Nearest,
..Default::default()
});
Self {
entries: HashMap::new(),
compute_pipeline,
compute_bind_group_layout,
render_pipeline,
render_bind_group_layout,
sampler,
}
}
/// Ensure a cache entry exists for a pool index, referencing the waveform
/// texture. If the entry already exists, only its frame count is refreshed
/// (to pick up progressively decoded data).
fn ensure_cache_entry(
&mut self,
device: &wgpu::Device,
pool_index: usize,
waveform_texture_view: wgpu::TextureView,
total_frames: u64,
sample_rate: u32,
) {
// If entry exists, check if waveform data has grown (progressive decode)
if let Some(entry) = self.entries.get_mut(&pool_index) {
if entry.waveform_total_frames != total_frames {
// Waveform texture updated in-place with more data.
// The texture view is still valid (no destroy/recreate),
// so just update total_frames to allow computing new columns.
entry.waveform_total_frames = total_frames;
}
return;
}
// Create cache texture (ring buffer)
let cache_texture = device.create_texture(&wgpu::TextureDescriptor {
label: Some(&format!("cqt_cache_{}", pool_index)),
size: wgpu::Extent3d {
width: CACHE_CAPACITY,
height: FREQ_BINS,
depth_or_array_layers: 1,
},
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::Rgba16Float,
usage: wgpu::TextureUsages::STORAGE_BINDING | wgpu::TextureUsages::TEXTURE_BINDING,
view_formats: &[],
});
let cache_texture_view = cache_texture.create_view(&wgpu::TextureViewDescriptor {
label: Some(&format!("cqt_cache_{}_view", pool_index)),
..Default::default()
});
let cache_storage_view = cache_texture.create_view(&wgpu::TextureViewDescriptor {
label: Some(&format!("cqt_cache_{}_storage", pool_index)),
..Default::default()
});
// Precompute bin params
let bin_params = precompute_bin_params(sample_rate);
let bin_params_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some(&format!("cqt_bins_{}", pool_index)),
contents: bytemuck::cast_slice(&bin_params),
usage: wgpu::BufferUsages::STORAGE,
});
// Compute uniform buffer
let compute_uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: Some(&format!("cqt_compute_uniforms_{}", pool_index)),
size: std::mem::size_of::<CqtComputeParams>() as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
// Render uniform buffer
let render_uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: Some(&format!("cqt_render_uniforms_{}", pool_index)),
size: std::mem::size_of::<CqtRenderParams>() as u64,
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
mapped_at_creation: false,
});
// Compute bind group
let compute_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some(&format!("cqt_compute_bg_{}", pool_index)),
layout: &self.compute_bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&waveform_texture_view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::TextureView(&cache_storage_view),
},
wgpu::BindGroupEntry {
binding: 2,
resource: compute_uniform_buffer.as_entire_binding(),
},
wgpu::BindGroupEntry {
binding: 3,
resource: bin_params_buffer.as_entire_binding(),
},
],
});
// Render bind group
let render_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some(&format!("cqt_render_bg_{}", pool_index)),
layout: &self.render_bind_group_layout,
entries: &[
wgpu::BindGroupEntry {
binding: 0,
resource: wgpu::BindingResource::TextureView(&cache_texture_view),
},
wgpu::BindGroupEntry {
binding: 1,
resource: wgpu::BindingResource::Sampler(&self.sampler),
},
wgpu::BindGroupEntry {
binding: 2,
resource: render_uniform_buffer.as_entire_binding(),
},
],
});
self.entries.insert(
pool_index,
CqtCacheEntry {
cache_texture,
cache_texture_view,
cache_storage_view,
cache_capacity: CACHE_CAPACITY,
freq_bins: FREQ_BINS,
cache_start_column: 0,
cache_valid_start: 0,
cache_valid_end: 0,
bin_params_buffer,
waveform_texture_view,
waveform_total_frames: total_frames,
compute_bind_group,
compute_uniform_buffer,
render_bind_group,
render_uniform_buffer,
sample_rate,
current_stride: 1,
},
);
}
}
/// Dispatch compute shader to fill CQT columns in the cache.
/// Free function to avoid borrow conflicts with CqtGpuResources.entries.
fn dispatch_cqt_compute(
device: &wgpu::Device,
queue: &wgpu::Queue,
pipeline: &wgpu::ComputePipeline,
entry: &CqtCacheEntry,
start_col: i64,
end_col: i64,
stride: u32,
) -> Vec<wgpu::CommandBuffer> {
if end_col <= start_col {
return Vec::new();
}
// Number of cache slots needed (each slot covers `stride` global columns)
let num_cols = ((end_col - start_col) as u32 / stride).max(1);
// Clamp to max per frame
let num_cols = num_cols.min(MAX_COLS_PER_FRAME);
// Calculate ring buffer write offset (in cache slots, not global columns)
let cache_write_offset =
(((start_col - entry.cache_start_column) / stride as i64) as u32) % entry.cache_capacity;
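// Ring-buffer example (hypothetical numbers): with cache_start_column = 1000,
// stride = 2 and cache_capacity = 4096, a dispatch starting at global column
// 1512 writes at slot ((1512 - 1000) / 2) % 4096 = 256. Once the slot index
// would exceed 4095 the modulo wraps it around, overwriting the oldest
// cached columns.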
let params = CqtComputeParams {
hop_size: HOP_SIZE,
freq_bins: FREQ_BINS,
cache_capacity: entry.cache_capacity,
cache_write_offset,
num_columns: num_cols,
column_start: start_col.max(0) as u32,
tex_width: WAVEFORM_TEX_WIDTH,
total_frames: entry.waveform_total_frames as u32,
sample_rate: entry.sample_rate as f32,
column_stride: stride,
_pad1: 0,
_pad2: 0,
};
queue.write_buffer(
&entry.compute_uniform_buffer,
0,
bytemuck::cast_slice(&[params]),
);
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
label: Some("cqt_compute_encoder"),
});
{
let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
label: Some("cqt_compute_pass"),
timestamp_writes: None,
});
pass.set_pipeline(pipeline);
pass.set_bind_group(0, &entry.compute_bind_group, &[]);
// Dispatch: X = ceil(freq_bins / 64), Y = num_columns
let workgroups_x = (FREQ_BINS + 63) / 64;
pass.dispatch_workgroups(workgroups_x, num_cols, 1);
}
vec![encoder.finish()]
}
impl egui_wgpu::CallbackTrait for CqtCallback {
fn prepare(
&self,
device: &wgpu::Device,
queue: &wgpu::Queue,
_screen_descriptor: &egui_wgpu::ScreenDescriptor,
_egui_encoder: &mut wgpu::CommandEncoder,
resources: &mut egui_wgpu::CallbackResources,
) -> Vec<wgpu::CommandBuffer> {
// Initialize CQT resources if needed
if !resources.contains::<CqtGpuResources>() {
resources.insert(CqtGpuResources::new(device, self.target_format));
}
// First, check if waveform data is available and extract what we need
let waveform_info: Option<(wgpu::TextureView, u64)> = {
let waveform_gpu: Option<&WaveformGpuResources> = resources.get();
waveform_gpu.and_then(|wgpu_res| {
wgpu_res.entries.get(&self.pool_index).map(|entry| {
// Clone the texture view (Arc internally, cheap)
(entry.texture_views[0].clone(), entry.total_frames)
})
})
};
let (waveform_view, total_frames) = match waveform_info {
Some(info) => info,
None => return Vec::new(), // Waveform not uploaded yet
};
let cqt_gpu: &mut CqtGpuResources = resources.get_mut().unwrap();
// Ensure cache entry exists
cqt_gpu.ensure_cache_entry(
device,
self.pool_index,
waveform_view,
total_frames,
self.sample_rate,
);
// Determine which columns need computing
let stride = self.stride.max(1) as i64;
let vis_start = self.visible_col_start.max(0);
let max_col = (total_frames as i64) / HOP_SIZE as i64;
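// For scale: 10 s of audio at 48 kHz with a hypothetical HOP_SIZE of 256
// yields 480_000 / 256 = 1875 total columns.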
let vis_end_raw = self.visible_col_end.min(max_col);
// Clamp visible range to cache capacity (in global columns, accounting for stride)
let vis_end = vis_end_raw.min(vis_start + CACHE_CAPACITY as i64 * stride);
// If stride changed, invalidate cache
{
let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
if entry.current_stride != self.stride {
entry.current_stride = self.stride;
entry.cache_start_column = vis_start;
entry.cache_valid_start = vis_start;
entry.cache_valid_end = vis_start;
}
}
// Stride-aware max columns per frame (in global column units)
let max_cols_global = MAX_COLS_PER_FRAME as i64 * stride;
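// Budget example (hypothetical MAX_COLS_PER_FRAME = 256): at stride = 2 a
// single frame may extend the cache by up to 512 global columns, so a large
// scroll is filled incrementally over a few frames instead of stalling one.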
// Read current cache state, compute what's needed, then update state.
// We split borrows carefully: read entry state, compute, then write back.
let cmds;
{
let entry = cqt_gpu.entries.get(&self.pool_index).unwrap();
let cache_valid_start = entry.cache_valid_start;
let cache_valid_end = entry.cache_valid_end;
if vis_start >= vis_end {
cmds = Vec::new();
} else if vis_start >= cache_valid_start && vis_end <= cache_valid_end {
// Fully cached
cmds = Vec::new();
} else if vis_start >= cache_valid_start
&& vis_start < cache_valid_end
&& vis_end > cache_valid_end
{
// Scrolling right: extend the cached range forward from its current end
let actual_end =
cache_valid_end + (vis_end - cache_valid_end).min(max_cols_global);
cmds = dispatch_cqt_compute(
device, queue, &cqt_gpu.compute_pipeline, entry,
cache_valid_end, actual_end, self.stride,
);
let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
entry.cache_valid_end = actual_end;
let cache_cap_global = entry.cache_capacity as i64 * stride;
if entry.cache_valid_end - entry.cache_valid_start > cache_cap_global {
entry.cache_valid_start = entry.cache_valid_end - cache_cap_global;
entry.cache_start_column = entry.cache_valid_start;
}
} else if vis_end <= cache_valid_end
&& vis_end > cache_valid_start
&& vis_start < cache_valid_start
{
// Scrolling left
let actual_start =
cache_valid_start - (cache_valid_start - vis_start).min(max_cols_global);
cmds = dispatch_cqt_compute(
device, queue, &cqt_gpu.compute_pipeline, entry,
actual_start, cache_valid_start, self.stride,
);
let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
entry.cache_valid_start = actual_start;
entry.cache_start_column = actual_start;
let cache_cap_global = entry.cache_capacity as i64 * stride;
if entry.cache_valid_end - entry.cache_valid_start > cache_cap_global {
entry.cache_valid_end = entry.cache_valid_start + cache_cap_global;
}
} else {
// No overlap or first compute — reset cache
let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
entry.cache_start_column = vis_start;
entry.cache_valid_start = vis_start;
entry.cache_valid_end = vis_start;
let compute_end = vis_start + (vis_end - vis_start).min(max_cols_global);
let entry = cqt_gpu.entries.get(&self.pool_index).unwrap();
cmds = dispatch_cqt_compute(
device, queue, &cqt_gpu.compute_pipeline, entry,
vis_start, compute_end, self.stride,
);
let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
entry.cache_valid_end = compute_end;
}
}
// Update render uniform buffer
let entry = cqt_gpu.entries.get(&self.pool_index).unwrap();
let mut params = self.params;
params.cache_start_column = entry.cache_start_column as f32;
params.cache_valid_start = entry.cache_valid_start as f32;
params.cache_valid_end = entry.cache_valid_end as f32;
params.cache_capacity = entry.cache_capacity as f32;
params.column_stride = self.stride as f32;
queue.write_buffer(
&entry.render_uniform_buffer,
0,
bytemuck::cast_slice(&[params]),
);
cmds
}
fn paint(
&self,
_info: eframe::egui::PaintCallbackInfo,
render_pass: &mut wgpu::RenderPass<'static>,
resources: &egui_wgpu::CallbackResources,
) {
let cqt_gpu: &CqtGpuResources = match resources.get() {
Some(r) => r,
None => return,
};
let entry = match cqt_gpu.entries.get(&self.pool_index) {
Some(e) => e,
None => return,
};
// Don't render if nothing is cached yet
if entry.cache_valid_start >= entry.cache_valid_end {
return;
}
render_pass.set_pipeline(&cqt_gpu.render_pipeline);
render_pass.set_bind_group(0, &entry.render_bind_group, &[]);
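// 3 vertices: vs_main is assumed to synthesize a fullscreen triangle from
// the vertex index, as is conventional for post-process passes.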
render_pass.draw(0..3, 0..1);
}
}
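// Usage sketch (illustrative, not the actual call site): a pane showing the
// spectrogram would submit this callback through egui_wgpu each frame. All
// field values below are placeholders; `cqt_render_params` stands in for a
// CqtRenderParams built by the caller.
//
// fn paint_cqt(ui: &mut egui::Ui, rect: egui::Rect, cqt_render_params: CqtRenderParams) {
//     let callback = egui_wgpu::Callback::new_paint_callback(
//         rect,
//         CqtCallback {
//             pool_index: 0,
//             params: cqt_render_params,
//             target_format: wgpu::TextureFormat::Bgra8Unorm,
//             sample_rate: 48_000,
//             visible_col_start: 0,
//             visible_col_end: 1024,
//             stride: 1,
//         },
//     );
//     ui.painter().add(callback);
// }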

View File

@ -4,7 +4,7 @@
//! using the actual WGSL shaders.
use lightningbeam_core::effect::{EffectDefinition, EffectInstance};
use lightningbeam_core::gpu::effect_processor::EffectProcessor;
use lightningbeam_core::gpu::effect_processor::{EffectProcessor, EffectUniforms};
use std::collections::HashMap;
use uuid::Uuid;
@ -19,7 +19,6 @@ pub struct EffectThumbnailGenerator {
/// Effect processor for compiling and applying shaders
effect_processor: EffectProcessor,
/// Source texture (still-life image scaled to thumbnail size)
#[allow(dead_code)] // Must stay alive — source_view is a view into this texture
source_texture: wgpu::Texture,
/// View of the source texture
source_view: wgpu::TextureView,
@ -102,7 +101,7 @@ impl EffectThumbnailGenerator {
let dest_view = dest_texture.create_view(&wgpu::TextureViewDescriptor::default());
// Create readback buffer
let _buffer_size = (EFFECT_THUMBNAIL_SIZE * EFFECT_THUMBNAIL_SIZE * 4) as u64;
let buffer_size = (EFFECT_THUMBNAIL_SIZE * EFFECT_THUMBNAIL_SIZE * 4) as u64;
// Align to 256 bytes for wgpu requirements
let aligned_bytes_per_row = ((EFFECT_THUMBNAIL_SIZE * 4 + 255) / 256) * 256;
let readback_buffer = device.create_buffer(&wgpu::BufferDescriptor {
@ -161,13 +160,11 @@ impl EffectThumbnailGenerator {
}
/// Get a cached thumbnail, or None if not yet generated
#[allow(dead_code)]
pub fn get_thumbnail(&self, effect_id: &Uuid) -> Option<&Vec<u8>> {
self.thumbnail_cache.get(effect_id)
}
/// Check if a thumbnail is cached
#[allow(dead_code)]
pub fn has_thumbnail(&self, effect_id: &Uuid) -> bool {
self.thumbnail_cache.contains_key(effect_id)
}

View File

@ -1,4 +1,3 @@
#![allow(dead_code)]
//! Audio export functionality
//!
//! Exports audio from the timeline to various formats:
@ -169,7 +168,7 @@ fn export_audio_ffmpeg_mp3<P: AsRef<Path>>(
// Step 3: Encode frames and write to output
// Convert interleaved f32 samples to planar i16 format
let num_frames = pcm_samples.len() / settings.channels as usize;
let planar_samples = convert_to_planar_i16(&pcm_samples, settings.channels);
let mut planar_samples = convert_to_planar_i16(&pcm_samples, settings.channels);
// Get encoder frame size
let frame_size = encoder.frame_size();

View File

@ -182,7 +182,7 @@ impl ExportDialog {
("Podcast AAC", AudioExportSettings::podcast_aac()),
];
egui::ComboBox::from_id_salt("export_preset")
egui::ComboBox::from_id_source("export_preset")
.selected_text(presets[self.selected_audio_preset].0)
.show_ui(ui, |ui| {
for (i, (name, _)) in presets.iter().enumerate() {
@ -207,7 +207,7 @@ impl ExportDialog {
ui.heading("Format");
ui.horizontal(|ui| {
ui.label("Format:");
egui::ComboBox::from_id_salt("audio_format")
egui::ComboBox::from_id_source("audio_format")
.selected_text(self.audio_settings.format.name())
.show_ui(ui, |ui| {
ui.selectable_value(&mut self.audio_settings.format, AudioFormat::Wav, "WAV (Uncompressed)");
@ -222,7 +222,7 @@ impl ExportDialog {
// Audio settings
ui.horizontal(|ui| {
ui.label("Sample Rate:");
egui::ComboBox::from_id_salt("sample_rate")
egui::ComboBox::from_id_source("sample_rate")
.selected_text(format!("{} Hz", self.audio_settings.sample_rate))
.show_ui(ui, |ui| {
ui.selectable_value(&mut self.audio_settings.sample_rate, 44100, "44100 Hz");
@ -251,7 +251,7 @@ impl ExportDialog {
if self.audio_settings.format.uses_bitrate() {
ui.horizontal(|ui| {
ui.label("Bitrate:");
egui::ComboBox::from_id_salt("bitrate")
egui::ComboBox::from_id_source("bitrate")
.selected_text(format!("{} kbps", self.audio_settings.bitrate_kbps))
.show_ui(ui, |ui| {
ui.selectable_value(&mut self.audio_settings.bitrate_kbps, 128, "128 kbps");
@ -269,7 +269,7 @@ impl ExportDialog {
ui.heading("Codec");
ui.horizontal(|ui| {
ui.label("Codec:");
egui::ComboBox::from_id_salt("video_codec")
egui::ComboBox::from_id_source("video_codec")
.selected_text(format!("{:?}", self.video_settings.codec))
.show_ui(ui, |ui| {
ui.selectable_value(&mut self.video_settings.codec, VideoCodec::H264, "H.264 (Most Compatible)");
@ -287,13 +287,13 @@ impl ExportDialog {
ui.horizontal(|ui| {
ui.label("Width:");
let mut custom_width = self.video_settings.width.unwrap_or(1920);
if ui.add(egui::DragValue::new(&mut custom_width).range(1..=7680)).changed() {
if ui.add(egui::DragValue::new(&mut custom_width).clamp_range(1..=7680)).changed() {
self.video_settings.width = Some(custom_width);
}
ui.label("Height:");
let mut custom_height = self.video_settings.height.unwrap_or(1080);
if ui.add(egui::DragValue::new(&mut custom_height).range(1..=4320)).changed() {
if ui.add(egui::DragValue::new(&mut custom_height).clamp_range(1..=4320)).changed() {
self.video_settings.height = Some(custom_height);
}
});
@ -320,7 +320,7 @@ impl ExportDialog {
ui.heading("Framerate");
ui.horizontal(|ui| {
ui.label("FPS:");
egui::ComboBox::from_id_salt("framerate")
egui::ComboBox::from_id_source("framerate")
.selected_text(format!("{}", self.video_settings.framerate as u32))
.show_ui(ui, |ui| {
ui.selectable_value(&mut self.video_settings.framerate, 24.0, "24");
@ -335,7 +335,7 @@ impl ExportDialog {
ui.heading("Quality");
ui.horizontal(|ui| {
ui.label("Quality:");
egui::ComboBox::from_id_salt("video_quality")
egui::ComboBox::from_id_source("video_quality")
.selected_text(self.video_settings.quality.name())
.show_ui(ui, |ui| {
ui.selectable_value(&mut self.video_settings.quality, VideoQuality::Low, VideoQuality::Low.name());
@ -363,13 +363,13 @@ impl ExportDialog {
ui.label("Start:");
ui.add(egui::DragValue::new(start_time)
.speed(0.1)
.range(0.0..=*end_time)
.clamp_range(0.0..=*end_time)
.suffix(" s"));
ui.label("End:");
ui.add(egui::DragValue::new(end_time)
.speed(0.1)
.range(*start_time..=f64::MAX)
.clamp_range(*start_time..=f64::MAX)
.suffix(" s"));
});

View File

@ -42,7 +42,6 @@ pub struct VideoExportState {
/// Start time in seconds
start_time: f64,
/// End time in seconds
#[allow(dead_code)]
end_time: f64,
/// Frames per second
framerate: f64,
@ -164,7 +163,7 @@ impl ExportOrchestrator {
/// For parallel video+audio exports, returns combined progress.
pub fn poll_progress(&mut self) -> Option<ExportProgress> {
// Handle parallel video+audio export
if let Some(ref mut _parallel) = self.parallel_export {
if let Some(ref mut parallel) = self.parallel_export {
return self.poll_parallel_progress();
}
@ -462,7 +461,6 @@ impl ExportOrchestrator {
/// Wait for the export to complete
///
/// This blocks until the export thread finishes.
#[allow(dead_code)]
pub fn wait_for_completion(&mut self) {
if let Some(handle) = self.thread_handle.take() {
handle.join().ok();
@ -917,7 +915,7 @@ impl ExportOrchestrator {
}
// Render to GPU (timed)
let _render_start = Instant::now();
let render_start = Instant::now();
let encoder = video_exporter::render_frame_to_gpu_rgba(
document, timestamp, width, height,
device, queue, renderer, image_cache, video_manager,
@ -1051,7 +1049,7 @@ impl ExportOrchestrator {
// Determine dimensions from first frame
let (width, height) = if let Some((_, _, ref y_plane, _, _)) = first_frame {
// Calculate dimensions from Y plane size (full resolution, 1 byte per pixel)
let _pixel_count = y_plane.len();
let pixel_count = y_plane.len();
// Use settings dimensions if provided, otherwise infer from buffer
let w = settings.width.unwrap_or(1920); // Default to 1920 if not specified
let h = settings.height.unwrap_or(1080); // Default to 1080 if not specified
@ -1090,7 +1088,7 @@ impl ExportOrchestrator {
println!("🧵 [ENCODER] Encoder initialized, ready to encode frames");
// Process first frame
if let Some((_frame_num, timestamp, y_plane, u_plane, v_plane)) = first_frame {
if let Some((frame_num, timestamp, y_plane, u_plane, v_plane)) = first_frame {
Self::encode_frame(
&mut encoder,
&mut output,
@ -1117,7 +1115,7 @@ impl ExportOrchestrator {
}
match frame_rx.recv() {
Ok(VideoFrameMessage::Frame { frame_num: _, timestamp, y_plane, u_plane, v_plane }) => {
Ok(VideoFrameMessage::Frame { frame_num, timestamp, y_plane, u_plane, v_plane }) => {
Self::encode_frame(
&mut encoder,
&mut output,

View File

@ -216,7 +216,7 @@ impl ReadbackPipeline {
/// Call this frequently to process completed transfers.
pub fn poll_nonblocking(&mut self) -> Vec<ReadbackResult> {
// Poll GPU without blocking
let _ = self.device.poll(wgpu::PollType::Poll);
self.device.poll(wgpu::PollType::Poll);
// Collect all completed readbacks
let mut results = Vec::new();
@ -269,14 +269,13 @@ impl ReadbackPipeline {
/// Flush pipeline and wait for all pending operations
///
/// Call this at the end of export to ensure all frames are processed
#[allow(dead_code)]
pub fn flush(&mut self) -> Vec<ReadbackResult> {
let mut all_results = Vec::new();
// Keep polling until all buffers are Free
loop {
// Poll for new completions
let _ = self.device.poll(wgpu::PollType::Poll);
self.device.poll(wgpu::PollType::Poll);
while let Ok(result) = self.readback_rx.try_recv() {
self.buffers[result.buffer_id].state = BufferState::Mapped;
@ -311,4 +310,8 @@ impl ReadbackPipeline {
all_results
}
/// Get buffer count currently in flight (for monitoring)
pub fn buffers_in_flight(&self) -> usize {
self.buffers.iter().filter(|b| b.state != BufferState::Free).count()
}
}

View File

@ -1,4 +1,3 @@
#![allow(dead_code)]
//! Video export functionality
//!
//! Exports video from the timeline using FFmpeg encoding:

View File

@ -20,7 +20,6 @@ mod theme;
use theme::{Theme, ThemeMode};
mod waveform_gpu;
mod cqt_gpu;
mod config;
use config::AppConfig;
@ -402,7 +401,6 @@ enum FileCommand {
}
/// Progress updates from file operations worker
#[allow(dead_code)] // EncodingAudio/DecodingAudio planned for granular progress reporting
enum FileProgress {
SerializingAudioPool,
EncodingAudio { current: usize, total: usize },
@ -428,7 +426,6 @@ enum FileOperation {
/// Information about an imported asset (for auto-placement)
#[derive(Debug, Clone)]
#[allow(dead_code)] // name/duration populated for future import UX features
struct ImportedAssetInfo {
clip_id: uuid::Uuid,
clip_type: panes::DragClipType,
@ -620,7 +617,6 @@ enum RecordingArmMode {
#[default]
Auto,
/// User explicitly arms tracks (multi-track recording workflow)
#[allow(dead_code)]
Manual,
}
@ -652,15 +648,12 @@ struct EditorApp {
rdp_tolerance: f64, // RDP simplification tolerance (default: 10.0)
schneider_max_error: f64, // Schneider curve fitting max error (default: 30.0)
// Audio engine integration
#[allow(dead_code)] // Must be kept alive to maintain audio output
audio_stream: Option<cpal::Stream>,
audio_controller: Option<std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>>,
audio_event_rx: Option<rtrb::Consumer<daw_backend::AudioEvent>>,
audio_events_pending: std::sync::Arc<std::sync::atomic::AtomicBool>,
#[allow(dead_code)] // Stored for future export/recording configuration
audio_sample_rate: u32,
#[allow(dead_code)]
audio_channels: u32,
audio_stream: Option<cpal::Stream>, // Audio stream (must be kept alive)
audio_controller: Option<std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>>, // Shared audio controller
audio_event_rx: Option<rtrb::Consumer<daw_backend::AudioEvent>>, // Audio event receiver
audio_events_pending: std::sync::Arc<std::sync::atomic::AtomicBool>, // Flag set when audio events arrive
audio_sample_rate: u32, // Audio sample rate
audio_channels: u32, // Audio channel count
// Video decoding and management
video_manager: std::sync::Arc<std::sync::Mutex<lightningbeam_core::video::VideoManager>>, // Shared video manager
// Track ID mapping (Document layer UUIDs <-> daw-backend TrackIds)
@ -672,10 +665,8 @@ struct EditorApp {
playback_time: f64, // Current playback position in seconds (persistent - save with document)
is_playing: bool, // Whether playback is currently active (transient - don't save)
// Recording state
#[allow(dead_code)] // Infrastructure for Manual recording mode
recording_arm_mode: RecordingArmMode,
#[allow(dead_code)]
armed_layers: HashSet<Uuid>,
recording_arm_mode: RecordingArmMode, // How tracks are armed for recording
armed_layers: HashSet<Uuid>, // Explicitly armed layers (used in Manual mode)
is_recording: bool, // Whether recording is currently active
recording_clips: HashMap<Uuid, u32>, // layer_id -> backend clip_id during recording
recording_start_time: f64, // Playback time when recording started
@ -696,8 +687,8 @@ struct EditorApp {
/// Cache for MIDI event data (keyed by backend midi_clip_id)
/// Prevents repeated backend queries for the same MIDI clip
/// Format: (timestamp, note_number, velocity, is_note_on)
midi_event_cache: HashMap<u32, Vec<(f64, u8, u8, bool)>>,
/// Format: (timestamp, note_number, is_note_on)
midi_event_cache: HashMap<u32, Vec<(f64, u8, bool)>>,
/// Cache for audio file durations to avoid repeated queries
/// Format: pool_index -> duration in seconds
audio_duration_cache: HashMap<usize, f64>,
@ -760,10 +751,6 @@ impl EditorApp {
fn new(cc: &eframe::CreationContext, layouts: Vec<LayoutDefinition>, theme: Theme) -> Self {
let current_layout = layouts[0].layout.clone();
// Disable egui's "Unaligned" debug overlay (on by default in debug builds)
#[cfg(debug_assertions)]
cc.egui_ctx.style_mut(|style| style.debug.show_unaligned = false);
// Load application config
let config = AppConfig::load();
@ -950,7 +937,7 @@ impl EditorApp {
egui::vec2(content_width, content_height),
);
ui.scope_builder(egui::UiBuilder::new().max_rect(content_rect), |ui| {
ui.allocate_ui_at_rect(content_rect, |ui| {
ui.vertical_centered(|ui| {
// Title
ui.heading(egui::RichText::new("Welcome to Lightningbeam!")
@ -1480,6 +1467,10 @@ impl EditorApp {
self.pane_instances.clear();
}
fn current_layout_def(&self) -> &LayoutDefinition {
&self.layouts[self.current_layout_index]
}
fn apply_layout_action(&mut self, action: LayoutAction) {
match action {
LayoutAction::SplitHorizontal(path, percent) => {
@ -1671,7 +1662,6 @@ impl EditorApp {
let file = dialog.pick_file();
if let Some(path) = file {
let _import_timer = std::time::Instant::now();
// Get extension and detect file type
let extension = path.extension()
.and_then(|e| e.to_str())
@ -1700,16 +1690,12 @@ impl EditorApp {
}
};
eprintln!("[TIMING] import took {:.1}ms", _import_timer.elapsed().as_secs_f64() * 1000.0);
// Auto-place if this is "Import" (not "Import to Library")
if auto_place {
if let Some(asset_info) = imported_asset {
let _place_timer = std::time::Instant::now();
self.auto_place_asset(asset_info);
eprintln!("[TIMING] auto_place took {:.1}ms", _place_timer.elapsed().as_secs_f64() * 1000.0);
}
}
eprintln!("[TIMING] total import+place took {:.1}ms", _import_timer.elapsed().as_secs_f64() * 1000.0);
}
}
MenuAction::Export => {
@ -1725,72 +1711,46 @@ impl EditorApp {
// Edit menu
MenuAction::Undo => {
let undo_succeeded = if let Some(ref controller_arc) = self.audio_controller {
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
let mut backend_context = lightningbeam_core::action::BackendContext {
audio_controller: Some(&mut *controller),
layer_to_track_map: &self.layer_to_track_map,
clip_instance_to_backend_map: &mut self.clip_instance_to_backend_map,
};
match self.action_executor.undo_with_backend(&mut backend_context) {
Ok(true) => {
println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default());
true
}
Ok(false) => { println!("Nothing to undo"); false }
Err(e) => { eprintln!("Undo failed: {}", e); false }
Ok(true) => println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default()),
Ok(false) => println!("Nothing to undo"),
Err(e) => eprintln!("Undo failed: {}", e),
}
} else {
match self.action_executor.undo() {
Ok(true) => {
println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default());
true
}
Ok(false) => { println!("Nothing to undo"); false }
Err(e) => { eprintln!("Undo failed: {}", e); false }
}
};
// Rebuild MIDI cache after undo (backend_context dropped, borrows released)
if undo_succeeded {
let midi_update = self.action_executor.last_redo_midi_notes()
.map(|(id, notes)| (id, notes.to_vec()));
if let Some((clip_id, notes)) = midi_update {
self.rebuild_midi_cache_entry(clip_id, &notes);
Ok(true) => println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default()),
Ok(false) => println!("Nothing to undo"),
Err(e) => eprintln!("Undo failed: {}", e),
}
}
}
MenuAction::Redo => {
let redo_succeeded = if let Some(ref controller_arc) = self.audio_controller {
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
let mut backend_context = lightningbeam_core::action::BackendContext {
audio_controller: Some(&mut *controller),
layer_to_track_map: &self.layer_to_track_map,
clip_instance_to_backend_map: &mut self.clip_instance_to_backend_map,
};
match self.action_executor.redo_with_backend(&mut backend_context) {
Ok(true) => {
println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default());
true
}
Ok(false) => { println!("Nothing to redo"); false }
Err(e) => { eprintln!("Redo failed: {}", e); false }
Ok(true) => println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default()),
Ok(false) => println!("Nothing to redo"),
Err(e) => eprintln!("Redo failed: {}", e),
}
} else {
match self.action_executor.redo() {
Ok(true) => {
println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default());
true
}
Ok(false) => { println!("Nothing to redo"); false }
Err(e) => { eprintln!("Redo failed: {}", e); false }
}
};
// Rebuild MIDI cache after redo (backend_context dropped, borrows released)
if redo_succeeded {
let midi_update = self.action_executor.last_undo_midi_notes()
.map(|(id, notes)| (id, notes.to_vec()));
if let Some((clip_id, notes)) = midi_update {
self.rebuild_midi_cache_entry(clip_id, &notes);
Ok(true) => println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default()),
Ok(false) => println!("Nothing to redo"),
Err(e) => eprintln!("Redo failed: {}", e),
}
}
}
@ -2359,6 +2319,8 @@ impl EditorApp {
}
/// Import an audio file via daw-backend (async — non-blocking)
///
/// Reads only metadata from the file (sub-millisecond), then sends the path
/// to the engine for async import. The engine memory-maps WAV files or sets
/// up stream decoding for compressed formats. An `AudioFileReady` event is
/// emitted when the file is playback-ready; the event handler populates the
@ -2385,20 +2347,16 @@ impl EditorApp {
let sample_rate = metadata.sample_rate;
if let Some(ref controller_arc) = self.audio_controller {
// Import synchronously to get the real pool index from the engine.
// NOTE: briefly blocks the UI thread (sub-ms for PCM mmap; a few ms
// for compressed streaming init).
let pool_index = {
let mut controller = controller_arc.lock().unwrap();
match controller.import_audio_sync(path.to_path_buf()) {
Ok(idx) => idx,
Err(e) => {
eprintln!("Failed to import audio '{}': {}", path.display(), e);
return None;
}
}
};
// Predict the pool index (engine assigns sequentially)
let pool_index = self.action_executor.document().audio_clips.len();
// Send async import command (non-blocking)
{
let mut controller = controller_arc.lock().unwrap();
controller.import_audio(path.to_path_buf());
}
// Create audio clip in document immediately (metadata is enough)
let clip = AudioClip::new_sampled(&name, pool_index, duration);
let clip_id = self.action_executor.document_mut().add_audio_clip(clip);
@ -2419,18 +2377,6 @@ impl EditorApp {
}
}
/// Rebuild a MIDI event cache entry from backend note format.
/// Called after undo/redo to keep the cache consistent with the backend.
fn rebuild_midi_cache_entry(&mut self, clip_id: u32, notes: &[(f64, u8, u8, f64)]) {
let mut events: Vec<(f64, u8, u8, bool)> = Vec::with_capacity(notes.len() * 2);
for &(start_time, note, velocity, duration) in notes {
events.push((start_time, note, velocity, true));
events.push((start_time + duration, note, velocity, false));
}
events.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
self.midi_event_cache.insert(clip_id, events);
}
/// Import a MIDI file via daw-backend
fn import_midi(&mut self, path: &std::path::Path) -> Option<ImportedAssetInfo> {
use lightningbeam_core::clip::AudioClip;
@ -2446,15 +2392,15 @@ impl EditorApp {
let duration = midi_clip.duration;
let event_count = midi_clip.events.len();
// Process MIDI events to cache format: (timestamp, note_number, velocity, is_note_on)
// Process MIDI events to cache format: (timestamp, note_number, is_note_on)
// Filter to note events only (status 0x90 = note-on, 0x80 = note-off)
let processed_events: Vec<(f64, u8, u8, bool)> = midi_clip.events.iter()
let processed_events: Vec<(f64, u8, bool)> = midi_clip.events.iter()
.filter_map(|event| {
let status_type = event.status & 0xF0;
if status_type == 0x90 || status_type == 0x80 {
// Note-on is 0x90 with velocity > 0, Note-off is 0x80 or velocity = 0
let is_note_on = status_type == 0x90 && event.data2 > 0;
Some((event.timestamp, event.data1, event.data2, is_note_on))
Some((event.timestamp, event.data1, is_note_on))
} else {
None // Ignore non-note events (CC, pitch bend, etc.)
}
@ -2522,7 +2468,7 @@ impl EditorApp {
};
// Create video clip with real metadata
let clip = VideoClip::new(
let mut clip = VideoClip::new(
&name,
path_str.clone(),
metadata.width as f64,
@ -2753,37 +2699,10 @@ impl EditorApp {
// Get the newly created layer ID (it's the last child in the document)
let doc = self.action_executor.document();
if let Some(last_layer) = doc.root.children.last() {
let layer_id = last_layer.id();
target_layer_id = Some(layer_id);
target_layer_id = Some(last_layer.id());
// Update active layer to the new layer
self.active_layer_id = target_layer_id;
// Create a backend audio/MIDI track and add the mapping
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
match asset_info.clip_type {
panes::DragClipType::AudioSampled => {
match controller.create_audio_track_sync(layer_name.clone()) {
Ok(track_id) => {
self.layer_to_track_map.insert(layer_id, track_id);
self.track_to_layer_map.insert(track_id, layer_id);
}
Err(e) => eprintln!("Failed to create audio track for auto-place: {}", e),
}
}
panes::DragClipType::AudioMidi => {
match controller.create_midi_track_sync(layer_name.clone()) {
Ok(track_id) => {
self.layer_to_track_map.insert(layer_id, track_id);
self.track_to_layer_map.insert(track_id, layer_id);
}
Err(e) => eprintln!("Failed to create MIDI track for auto-place: {}", e),
}
}
_ => {} // Other types don't need backend tracks
}
}
}
}
@ -3086,8 +3005,6 @@ impl EditorApp {
impl eframe::App for EditorApp {
fn update(&mut self, ctx: &egui::Context, frame: &mut eframe::Frame) {
let _frame_start = std::time::Instant::now();
// Disable egui's built-in Ctrl+Plus/Minus zoom behavior
// We handle zoom ourselves for the Stage pane
ctx.options_mut(|o| {
@ -3119,10 +3036,37 @@ impl eframe::App for EditorApp {
// Will switch to editor mode when file finishes loading
}
// NOTE: Missing raw audio samples for newly imported files will arrive
// via AudioDecodeProgress events (compressed) or inline with AudioFileReady
// (PCM). No blocking query needed here.
// For project loading, audio files are re-imported which also sends events.
// Fetch missing raw audio on-demand (for lazy loading after project load)
// Collect pool indices that need raw audio data
let missing_raw_audio: Vec<usize> = self.action_executor.document()
.audio_clips.values()
.filter_map(|clip| {
if let lightningbeam_core::clip::AudioClipType::Sampled { audio_pool_index } = &clip.clip_type {
if !self.raw_audio_cache.contains_key(audio_pool_index) {
Some(*audio_pool_index)
} else {
None
}
} else {
None
}
})
.collect();
// Fetch missing raw audio samples
for pool_index in missing_raw_audio {
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
match controller.get_pool_audio_samples(pool_index) {
Ok((samples, sr, ch)) => {
self.raw_audio_cache.insert(pool_index, (samples, sr, ch));
self.waveform_gpu_dirty.insert(pool_index);
self.audio_pools_with_new_waveforms.insert(pool_index);
}
Err(e) => eprintln!("Failed to fetch raw audio for pool {}: {}", pool_index, e),
}
}
}
// Initialize and update effect thumbnail generator (GPU-based effect previews)
if let Some(render_state) = frame.wgpu_render_state() {
@ -3277,7 +3221,6 @@ impl eframe::App for EditorApp {
ctx.request_repaint();
}
let _pre_events_ms = _frame_start.elapsed().as_secs_f64() * 1000.0;
// Check if audio events are pending and request repaint if needed
if self.audio_events_pending.load(std::sync::atomic::Ordering::Relaxed) {
ctx.request_repaint();
@ -3496,103 +3439,13 @@ impl eframe::App for EditorApp {
self.recording_layer_id = None;
ctx.request_repaint();
}
AudioEvent::MidiRecordingProgress(_track_id, clip_id, duration, notes) => {
// Update clip duration in document (so timeline bar grows)
if let Some(layer_id) = self.recording_layer_id {
let doc_clip_id = {
let document = self.action_executor.document();
document.root.children.iter()
.find(|l| l.id() == layer_id)
.and_then(|layer| {
if let lightningbeam_core::layer::AnyLayer::Audio(audio_layer) = layer {
audio_layer.clip_instances.last().map(|i| i.clip_id)
} else {
None
}
})
};
if let Some(doc_clip_id) = doc_clip_id {
if let Some(clip) = self.action_executor.document_mut().audio_clips.get_mut(&doc_clip_id) {
clip.duration = duration;
}
}
}
// Update midi_event_cache with notes captured so far
// (inlined instead of calling rebuild_midi_cache_entry to avoid
// conflicting &mut self borrow with event_rx loop)
{
let mut events: Vec<(f64, u8, u8, bool)> = Vec::with_capacity(notes.len() * 2);
for &(start_time, note, velocity, dur) in &notes {
events.push((start_time, note, velocity, true));
events.push((start_time + dur, note, velocity, false));
}
events.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
self.midi_event_cache.insert(clip_id, events);
}
AudioEvent::MidiRecordingProgress(_track_id, _clip_id, duration, _notes) => {
println!("🎹 MIDI recording progress: {:.2}s", duration);
ctx.request_repaint();
}
AudioEvent::MidiRecordingStopped(track_id, clip_id, note_count) => {
println!("🎹 MIDI recording stopped: track={:?}, clip_id={}, {} notes",
track_id, clip_id, note_count);
// Query backend for the definitive final note data
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
match controller.query_midi_clip(track_id, clip_id) {
Ok(midi_clip_data) => {
// Convert backend MidiEvent format to cache format
let cache_events: Vec<(f64, u8, u8, bool)> = midi_clip_data.events.iter()
.filter_map(|event| {
let status_type = event.status & 0xF0;
if status_type == 0x90 || status_type == 0x80 {
let is_note_on = status_type == 0x90 && event.data2 > 0;
Some((event.timestamp, event.data1, event.data2, is_note_on))
} else {
None
}
})
.collect();
drop(controller);
self.midi_event_cache.insert(clip_id, cache_events);
// Update document clip with final duration and name
if let Some(layer_id) = self.recording_layer_id {
let doc_clip_id = {
let document = self.action_executor.document();
document.root.children.iter()
.find(|l| l.id() == layer_id)
.and_then(|layer| {
if let lightningbeam_core::layer::AnyLayer::Audio(audio_layer) = layer {
audio_layer.clip_instances.last().map(|i| i.clip_id)
} else {
None
}
})
};
if let Some(doc_clip_id) = doc_clip_id {
if let Some(clip) = self.action_executor.document_mut().audio_clips.get_mut(&doc_clip_id) {
clip.duration = midi_clip_data.duration;
clip.name = format!("MIDI Recording {}", clip_id);
}
}
}
println!("✅ Finalized MIDI recording: {} notes, {:.2}s",
note_count, midi_clip_data.duration);
}
Err(e) => {
eprintln!("Failed to query MIDI clip data after recording: {}", e);
// Cache was already populated by last MidiRecordingProgress event
}
}
}
// TODO: Store clip_instance_to_backend_map entry for this MIDI clip.
// The backend created the instance in create_midi_clip(), but doesn't
// report the instance_id back. Needed for move/trim operations later.
// Clear recording state
self.is_recording = false;
self.recording_clips.clear();
@ -3620,15 +3473,22 @@ impl eframe::App for EditorApp {
// via AudioDecodeProgress events.
ctx.request_repaint();
}
AudioEvent::AudioDecodeProgress { pool_index, samples, sample_rate, channels } => {
// Samples arrive as deltas — append to existing cache
if let Some(entry) = self.raw_audio_cache.get_mut(&pool_index) {
entry.0.extend_from_slice(&samples);
} else {
self.raw_audio_cache.insert(pool_index, (samples, sample_rate, channels));
AudioEvent::AudioDecodeProgress { pool_index, decoded_frames, total_frames } => {
// Waveform decode complete — fetch samples for GPU waveform
if decoded_frames == total_frames {
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
match controller.get_pool_audio_samples(pool_index) {
Ok((samples, sr, ch)) => {
println!("Waveform decode complete for pool {}: {} samples", pool_index, samples.len());
self.raw_audio_cache.insert(pool_index, (samples, sr, ch));
self.waveform_gpu_dirty.insert(pool_index);
}
Err(e) => eprintln!("Failed to fetch decoded audio for pool {}: {}", pool_index, e),
}
}
ctx.request_repaint();
}
self.waveform_gpu_dirty.insert(pool_index);
ctx.request_repaint();
}
_ => {} // Ignore other events for now
}
@ -3644,8 +3504,6 @@ impl eframe::App for EditorApp {
}
}
let _post_events_ms = _frame_start.elapsed().as_secs_f64() * 1000.0;
// Request continuous repaints when playing to update time display
if self.is_playing {
ctx.request_repaint();
@ -3785,11 +3643,12 @@ impl eframe::App for EditorApp {
// Poll export orchestrator for progress
if let Some(orchestrator) = &mut self.export_orchestrator {
// Only log occasionally to avoid spam
use std::sync::atomic::{AtomicU32, Ordering as AtomicOrdering};
static POLL_COUNT: AtomicU32 = AtomicU32::new(0);
let count = POLL_COUNT.fetch_add(1, AtomicOrdering::Relaxed) + 1;
if count % 60 == 0 {
println!("🔍 [MAIN] Polling orchestrator (poll #{})...", count);
static mut POLL_COUNT: u32 = 0;
unsafe {
POLL_COUNT += 1;
if POLL_COUNT % 60 == 0 {
println!("🔍 [MAIN] Polling orchestrator (poll #{})...", POLL_COUNT);
}
}
if let Some(progress) = orchestrator.poll_progress() {
match progress {
@ -3934,7 +3793,7 @@ impl eframe::App for EditorApp {
paint_bucket_gap_tolerance: &mut self.paint_bucket_gap_tolerance,
polygon_sides: &mut self.polygon_sides,
layer_to_track_map: &self.layer_to_track_map,
midi_event_cache: &mut self.midi_event_cache,
midi_event_cache: &self.midi_event_cache,
audio_pools_with_new_waveforms: &self.audio_pools_with_new_waveforms,
raw_audio_cache: &self.raw_audio_cache,
waveform_gpu_dirty: &mut self.waveform_gpu_dirty,
@ -4059,19 +3918,6 @@ impl eframe::App for EditorApp {
self.split_clips_at_playhead();
}
// Space bar toggles play/pause (only when no text input is focused)
if !wants_keyboard && ctx.input(|i| i.key_pressed(egui::Key::Space)) {
self.is_playing = !self.is_playing;
if let Some(ref controller_arc) = self.audio_controller {
let mut controller = controller_arc.lock().unwrap();
if self.is_playing {
controller.play();
} else {
controller.pause();
}
}
}
ctx.input(|i| {
// Check menu shortcuts that use modifiers (Cmd+S, etc.) - allow even when typing
// But skip shortcuts without modifiers when keyboard input is claimed (e.g., virtual piano)
@ -4133,12 +3979,6 @@ impl eframe::App for EditorApp {
);
debug_overlay::render_debug_overlay(ctx, &stats);
}
let frame_ms = _frame_start.elapsed().as_secs_f64() * 1000.0;
if frame_ms > 50.0 {
eprintln!("[TIMING] SLOW FRAME: {:.1}ms (pre-events={:.1}, events={:.1}, post-events={:.1})",
frame_ms, _pre_events_ms, _post_events_ms - _pre_events_ms, frame_ms - _post_events_ms);
}
}
}
@ -4183,7 +4023,7 @@ struct RenderContext<'a> {
/// Mapping from Document layer UUIDs to daw-backend TrackIds
layer_to_track_map: &'a std::collections::HashMap<Uuid, daw_backend::TrackId>,
/// Cache of MIDI events for rendering (keyed by backend midi_clip_id)
midi_event_cache: &'a mut HashMap<u32, Vec<(f64, u8, u8, bool)>>,
midi_event_cache: &'a HashMap<u32, Vec<(f64, u8, bool)>>,
/// Audio pool indices with new raw audio data this frame (for thumbnail invalidation)
audio_pools_with_new_waveforms: &'a HashSet<usize>,
/// Raw audio samples for GPU waveform rendering (pool_index -> (samples, sample_rate, channels))
@ -4302,12 +4142,12 @@ fn render_layout_node(
if ui.button("Split Horizontal ->").clicked() {
*layout_action = Some(LayoutAction::EnterSplitPreviewHorizontal);
ui.close();
ui.close_menu();
}
if ui.button("Split Vertical |").clicked() {
*layout_action = Some(LayoutAction::EnterSplitPreviewVertical);
ui.close();
ui.close_menu();
}
ui.separator();
@ -4316,14 +4156,14 @@ fn render_layout_node(
let mut path_keep_right = path.clone();
path_keep_right.push(1); // Remove left, keep right child
*layout_action = Some(LayoutAction::RemoveSplit(path_keep_right));
ui.close();
ui.close_menu();
}
if ui.button("Join Right >").clicked() {
let mut path_keep_left = path.clone();
path_keep_left.push(0); // Remove right, keep left child
*layout_action = Some(LayoutAction::RemoveSplit(path_keep_left));
ui.close();
ui.close_menu();
}
});
@ -4424,12 +4264,12 @@ fn render_layout_node(
if ui.button("Split Horizontal ->").clicked() {
*layout_action = Some(LayoutAction::EnterSplitPreviewHorizontal);
ui.close();
ui.close_menu();
}
if ui.button("Split Vertical |").clicked() {
*layout_action = Some(LayoutAction::EnterSplitPreviewVertical);
ui.close();
ui.close_menu();
}
ui.separator();
@ -4438,14 +4278,14 @@ fn render_layout_node(
let mut path_keep_bottom = path.clone();
path_keep_bottom.push(1); // Remove top, keep bottom child
*layout_action = Some(LayoutAction::RemoveSplit(path_keep_bottom));
ui.close();
ui.close_menu();
}
if ui.button("Join Down v").clicked() {
let mut path_keep_top = path.clone();
path_keep_top.push(0); // Remove bottom, keep top child
*layout_action = Some(LayoutAction::RemoveSplit(path_keep_top));
ui.close();
ui.close_menu();
}
});
@ -4855,6 +4695,100 @@ fn render_pane(
}
}
/// Render toolbar with tool buttons
fn render_toolbar(
ui: &mut egui::Ui,
rect: egui::Rect,
tool_icon_cache: &mut ToolIconCache,
selected_tool: &mut Tool,
path: &NodePath,
) {
let button_size = 60.0; // 50% bigger (was 40.0)
let button_padding = 8.0;
let button_spacing = 4.0;
// Calculate how many columns we can fit
let available_width = rect.width() - (button_padding * 2.0);
let columns = ((available_width + button_spacing) / (button_size + button_spacing)).floor() as usize;
let columns = columns.max(1); // At least 1 column
let mut x = rect.left() + button_padding;
let mut y = rect.top() + button_padding;
let mut col = 0;
for tool in Tool::all() {
let button_rect = egui::Rect::from_min_size(
egui::pos2(x, y),
egui::vec2(button_size, button_size),
);
// Check if this is the selected tool
let is_selected = *selected_tool == *tool;
// Button background
let bg_color = if is_selected {
egui::Color32::from_rgb(70, 100, 150) // Highlighted blue
} else {
egui::Color32::from_rgb(50, 50, 50)
};
ui.painter().rect_filled(button_rect, 4.0, bg_color);
// Load and render tool icon
if let Some(icon) = tool_icon_cache.get_or_load(*tool, ui.ctx()) {
let icon_rect = button_rect.shrink(8.0); // Padding inside button
ui.painter().image(
icon.id(),
icon_rect,
egui::Rect::from_min_max(egui::pos2(0.0, 0.0), egui::pos2(1.0, 1.0)),
egui::Color32::WHITE,
);
}
// Make button interactive (include path to ensure unique IDs across panes)
let button_id = ui.id().with(("tool_button", path, *tool as usize));
let response = ui.interact(button_rect, button_id, egui::Sense::click());
// Check for click first
if response.clicked() {
*selected_tool = *tool;
}
if response.hovered() {
ui.painter().rect_stroke(
button_rect,
4.0,
egui::Stroke::new(2.0, egui::Color32::from_gray(180)),
egui::StrokeKind::Middle,
);
}
// Show tooltip with tool name and shortcut (consumes response)
response.on_hover_text(format!("{} ({})", tool.display_name(), tool.shortcut_hint()));
// Draw selection border
if is_selected {
ui.painter().rect_stroke(
button_rect,
4.0,
egui::Stroke::new(2.0, egui::Color32::from_rgb(100, 150, 255)),
egui::StrokeKind::Middle,
);
}
// Move to next position in grid
col += 1;
if col >= columns {
// Move to next row
col = 0;
x = rect.left() + button_padding;
y += button_size + button_spacing;
} else {
// Move to next column
x += button_size + button_spacing;
}
}
}
/// Get a color for each pane type for visualization
fn pane_color(pane_type: PaneType) -> egui::Color32 {
match pane_type {

View File

@ -29,9 +29,7 @@ pub enum ShortcutKey {
// Numbers
Num0,
// Symbols
Comma, Minus, Equals,
#[allow(dead_code)] // Completes keyboard mapping set
Plus,
Comma, Minus, Equals, Plus,
BracketLeft, BracketRight,
// Special
Delete,
@ -191,7 +189,6 @@ pub enum MenuAction {
RecenterView,
NextLayout,
PreviousLayout,
#[allow(dead_code)] // Handler exists in main.rs, menu item not yet wired
SwitchLayout(usize),
// Help menu
@ -222,7 +219,6 @@ pub enum MenuDef {
// Shortcut constants for clarity
const CTRL: bool = true;
const SHIFT: bool = true;
#[allow(dead_code)]
const ALT: bool = true;
const NO_CTRL: bool = false;
const NO_SHIFT: bool = false;
@ -292,9 +288,7 @@ impl MenuItemDef {
// macOS app menu items
const SETTINGS: Self = Self { label: "Settings", action: MenuAction::Settings, shortcut: Some(Shortcut::new(ShortcutKey::Comma, CTRL, NO_SHIFT, NO_ALT)) };
const CLOSE_WINDOW: Self = Self { label: "Close Window", action: MenuAction::CloseWindow, shortcut: Some(Shortcut::new(ShortcutKey::W, CTRL, NO_SHIFT, NO_ALT)) };
#[allow(dead_code)] // Used in #[cfg(target_os = "macos")] block
const QUIT_MACOS: Self = Self { label: "Quit Lightningbeam", action: MenuAction::Quit, shortcut: Some(Shortcut::new(ShortcutKey::Q, CTRL, NO_SHIFT, NO_ALT)) };
#[allow(dead_code)]
const ABOUT_MACOS: Self = Self { label: "About Lightningbeam", action: MenuAction::About, shortcut: None };
/// Get all menu items with shortcuts (for keyboard handling)
@ -599,7 +593,7 @@ impl MenuSystem {
pub fn render_egui_menu_bar(&self, ui: &mut egui::Ui, recent_files: &[std::path::PathBuf]) -> Option<MenuAction> {
let mut action = None;
egui::MenuBar::new().ui(ui, |ui| {
egui::menu::bar(ui, |ui| {
for menu_def in MenuItemDef::menu_structure() {
if let Some(a) = self.render_menu_def(ui, menu_def, recent_files) {
action = Some(a);
@ -638,7 +632,7 @@ impl MenuSystem {
if ui.button(display_name).clicked() {
action = Some(MenuAction::OpenRecent(index));
ui.close();
ui.close_menu();
}
}
@ -649,14 +643,14 @@ impl MenuSystem {
if ui.button("Clear Recent Files").clicked() {
action = Some(MenuAction::ClearRecentFiles);
ui.close();
ui.close_menu();
}
} else {
// Normal submenu rendering
for child in *children {
if let Some(a) = self.render_menu_def(ui, child, recent_files) {
action = Some(a);
ui.close();
ui.close_menu();
}
}
}

View File

@ -62,7 +62,6 @@ const SEARCH_BAR_HEIGHT: f32 = 30.0;
const CATEGORY_TAB_HEIGHT: f32 = 28.0;
const BREADCRUMB_HEIGHT: f32 = 24.0;
const ITEM_HEIGHT: f32 = 40.0;
#[allow(dead_code)]
const ITEM_PADDING: f32 = 4.0;
const LIST_THUMBNAIL_SIZE: f32 = 32.0;
const GRID_ITEM_SIZE: f32 = 80.0;
@ -138,7 +137,6 @@ impl ThumbnailCache {
}
/// Check if a thumbnail is already cached (and not dirty)
#[allow(dead_code)]
pub fn has(&self, asset_id: &Uuid) -> bool {
self.textures.contains_key(asset_id) && !self.dirty.contains(asset_id)
}
@ -148,6 +146,11 @@ impl ThumbnailCache {
self.dirty.insert(*asset_id);
}
/// Clear all cached thumbnails
pub fn clear(&mut self) {
self.textures.clear();
self.dirty.clear();
}
}
// ============================================================================
@ -282,7 +285,7 @@ fn generate_waveform_thumbnail(
// Draw waveform
let center_y = size / 2;
let _num_peaks = waveform_peaks.len().min(size);
let num_peaks = waveform_peaks.len().min(size);
for (x, &(min_val, max_val)) in waveform_peaks.iter().take(size).enumerate() {
// Scale peaks to pixel range (center ± half height)
@ -373,7 +376,7 @@ fn generate_video_thumbnail(
/// Generate a piano roll thumbnail for MIDI clips
/// Shows notes as horizontal bars with Y position = note % 12 (one octave)
fn generate_midi_thumbnail(
events: &[(f64, u8, u8, bool)], // (timestamp, note_number, velocity, is_note_on)
events: &[(f64, u8, bool)], // (timestamp, note_number, is_note_on)
duration: f64,
bg_color: egui::Color32,
note_color: egui::Color32,
@ -391,7 +394,7 @@ fn generate_midi_thumbnail(
}
// Draw note events
for &(timestamp, note_number, _velocity, is_note_on) in events {
for &(timestamp, note_number, is_note_on) in events {
if !is_note_on || timestamp > preview_duration {
continue;
}
@ -549,7 +552,6 @@ fn shape_color_to_tiny_skia(color: &ShapeColor) -> tiny_skia::Color {
}
/// Generate a simple effect thumbnail with a pink gradient
#[allow(dead_code)]
fn generate_effect_thumbnail() -> Vec<u8> {
let size = THUMBNAIL_SIZE as usize;
let mut rgba = vec![0u8; size * size * 4];
@ -626,7 +628,6 @@ fn generate_effect_thumbnail() -> Vec<u8> {
}
/// Ellipsize a string to fit within a maximum character count
#[allow(dead_code)]
fn ellipsize(s: &str, max_chars: usize) -> String {
if s.chars().count() <= max_chars {
s.to_string()
@ -705,7 +706,6 @@ pub struct AssetEntry {
pub struct FolderEntry {
pub id: Uuid,
pub name: String,
#[allow(dead_code)]
pub category: AssetCategory,
pub item_count: usize,
}
@ -718,7 +718,6 @@ pub enum LibraryItem {
}
impl LibraryItem {
#[allow(dead_code)]
pub fn id(&self) -> Uuid {
match self {
LibraryItem::Folder(f) => f.id,
@ -811,7 +810,6 @@ pub struct AssetLibraryPane {
current_folders: HashMap<u8, Option<Uuid>>,
/// Set of expanded folder IDs (for tree view - future enhancement)
#[allow(dead_code)]
expanded_folders: HashSet<Uuid>,
/// Cached folder icon texture
@ -1285,7 +1283,6 @@ impl AssetLibraryPane {
}
/// Filter assets based on current category and search text
#[allow(dead_code)]
fn filter_assets<'a>(&self, assets: &'a [AssetEntry]) -> Vec<&'a AssetEntry> {
let search_lower = self.search_filter.to_lowercase();
@ -1730,7 +1727,6 @@ impl AssetLibraryPane {
}
/// Render a section header for effect categories
#[allow(dead_code)] // Part of List/Grid view rendering subsystem, not yet wired
fn render_section_header(ui: &mut egui::Ui, label: &str, color: egui::Color32) {
ui.add_space(4.0);
let (header_rect, _) = ui.allocate_exact_size(
@ -1748,7 +1744,7 @@ impl AssetLibraryPane {
}
/// Render a grid of asset items
#[allow(clippy::too_many_arguments, dead_code)]
#[allow(clippy::too_many_arguments)]
fn render_grid_items(
&mut self,
ui: &mut egui::Ui,
@ -1759,7 +1755,7 @@ impl AssetLibraryPane {
shared: &mut SharedPaneState,
document: &Document,
text_color: egui::Color32,
_secondary_text_color: egui::Color32,
secondary_text_color: egui::Color32,
) {
if assets.is_empty() {
return;
@ -2007,7 +2003,7 @@ impl AssetLibraryPane {
&mut self,
ui: &mut egui::Ui,
rect: egui::Rect,
_path: &NodePath,
path: &NodePath,
shared: &mut SharedPaneState,
items: &[&LibraryItem],
document: &Document,
@ -2016,7 +2012,7 @@ impl AssetLibraryPane {
let folder_icon = self.get_folder_icon(ui.ctx()).cloned();
let _scroll_area = egui::ScrollArea::vertical()
.id_salt("asset_library_scroll")
.id_source("asset_library_scroll")
.show_viewport(ui, |ui, viewport| {
ui.set_min_width(rect.width());
@ -2175,7 +2171,7 @@ impl AssetLibraryPane {
// Load folder icon if needed
let folder_icon = self.get_folder_icon(ui.ctx()).cloned();
ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
ui.allocate_new_ui(egui::UiBuilder::new().max_rect(rect), |ui| {
egui::ScrollArea::vertical()
.id_salt(("asset_library_grid_scroll", path))
.auto_shrink([false, false])
@ -2665,7 +2661,6 @@ impl AssetLibraryPane {
}
/// Render assets based on current view mode
#[allow(dead_code)]
fn render_assets(
&mut self,
ui: &mut egui::Ui,
@ -2686,7 +2681,6 @@ impl AssetLibraryPane {
}
/// Render the asset list view
#[allow(dead_code)]
fn render_asset_list_view(
&mut self,
ui: &mut egui::Ui,
@ -2730,7 +2724,7 @@ impl AssetLibraryPane {
// Use egui's built-in ScrollArea for scrolling
let scroll_area_rect = rect;
ui.scope_builder(egui::UiBuilder::new().max_rect(scroll_area_rect), |ui| {
ui.allocate_new_ui(egui::UiBuilder::new().max_rect(scroll_area_rect), |ui| {
egui::ScrollArea::vertical()
.id_salt(("asset_list_scroll", path))
.auto_shrink([false, false])
@ -2763,7 +2757,7 @@ impl AssetLibraryPane {
};
let mut rendered_builtin_header = false;
let mut rendered_custom_header = false;
let mut _builtin_rendered = 0;
let mut builtin_rendered = 0;
for asset in assets_to_render {
// Render section headers for Effects tab
@ -2787,7 +2781,7 @@ impl AssetLibraryPane {
rendered_custom_header = true;
}
if asset.is_builtin {
_builtin_rendered += 1;
builtin_rendered += 1;
}
}
@ -3099,7 +3093,6 @@ impl AssetLibraryPane {
}
/// Render the asset grid view
#[allow(dead_code)]
fn render_asset_grid_view(
&mut self,
ui: &mut egui::Ui,
@ -3172,7 +3165,7 @@ impl AssetLibraryPane {
0
};
ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
ui.allocate_new_ui(egui::UiBuilder::new().max_rect(rect), |ui| {
egui::ScrollArea::vertical()
.id_salt(("asset_grid_scroll", path))
.auto_shrink([false, false])

View File

@ -47,7 +47,6 @@ pub struct DraggingAsset {
/// Display name
pub name: String,
/// Duration in seconds
#[allow(dead_code)] // Populated during drag, consumed when drag-and-drop features expand
pub duration: f64,
/// Dimensions (width, height) for vector/video clips, None for audio
pub dimensions: Option<(f64, f64)>,
@ -133,7 +132,6 @@ pub fn find_sampled_audio_track(document: &lightningbeam_core::document::Documen
/// Shared state that all panes can access
pub struct SharedPaneState<'a> {
pub tool_icon_cache: &'a mut crate::ToolIconCache,
#[allow(dead_code)] // Used by pane chrome rendering in main.rs
pub icon_cache: &'a mut crate::IconCache,
pub selected_tool: &'a mut Tool,
pub fill_color: &'a mut egui::Color32,
@ -189,12 +187,8 @@ pub struct SharedPaneState<'a> {
pub paint_bucket_gap_tolerance: &'a mut f64,
/// Number of sides for polygon tool
pub polygon_sides: &'a mut u32,
/// Cache of MIDI events for rendering (keyed by backend midi_clip_id).
/// Mutable so panes can update the cache immediately on edits (avoiding 1-frame snap-back).
/// NOTE: If an action later fails during execution, the cache may be out of sync with the
/// backend. This is acceptable because MIDI note edits are simple and unlikely to fail.
/// Undo/redo rebuilds affected entries from the backend to restore consistency.
pub midi_event_cache: &'a mut std::collections::HashMap<u32, Vec<(f64, u8, u8, bool)>>,
/// Cache of MIDI events for rendering (keyed by backend midi_clip_id)
pub midi_event_cache: &'a std::collections::HashMap<u32, Vec<(f64, u8, bool)>>,
/// Audio pool indices that got new raw audio data this frame (for thumbnail invalidation)
pub audio_pools_with_new_waveforms: &'a std::collections::HashSet<usize>,
/// Raw audio samples for GPU waveform rendering (pool_index -> (samples, sample_rate, channels))
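The `midi_event_cache` signature change above (a velocity field plus mutable access) is what enables the optimistic-update pattern its comment describes. A rough sketch of that pattern with hypothetical names (`move_note` is not the project's API):

```rust
use std::collections::HashMap;

/// (timestamp, note_number, velocity, is_note_on)
type MidiEvents = Vec<(f64, u8, u8, bool)>;

/// Optimistically retime a note in the render cache before the backend
/// confirms the edit, so the UI doesn't snap back for one frame.
fn move_note(cache: &mut HashMap<u32, MidiEvents>, clip: u32, from: f64, to: f64, note: u8) {
    if let Some(events) = cache.get_mut(&clip) {
        for ev in events.iter_mut() {
            if ev.1 == note && (ev.0 - from).abs() < 1e-9 {
                ev.0 = to;
            }
        }
        // Keep events time-ordered for the renderer's pairing pass.
        events.sort_by(|a, b| a.0.total_cmp(&b.0));
    }
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert(7u32, vec![(0.0, 60, 100, true), (0.5, 60, 0, false)]);
    move_note(&mut cache, 7, 0.0, 0.25, 60);
    assert_eq!(cache[&7][0].0, 0.25);
}
```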
@ -222,7 +216,7 @@ pub trait PaneRenderer {
/// Render the optional header section with controls
///
/// Returns true if a header was rendered, false if no header
fn render_header(&mut self, _ui: &mut egui::Ui, _shared: &mut SharedPaneState) -> bool {
fn render_header(&mut self, ui: &mut egui::Ui, shared: &mut SharedPaneState) -> bool {
false // Default: no header
}
@ -236,7 +230,6 @@ pub trait PaneRenderer {
);
/// Get the display name of this pane
#[allow(dead_code)] // Implemented by all panes, dispatch infrastructure complete
fn name(&self) -> &str;
}

View File

@ -15,7 +15,6 @@ use uuid::Uuid;
pub enum NodeGraphAction {
AddNode(AddNodeAction),
RemoveNode(RemoveNodeAction),
#[allow(dead_code)]
MoveNode(MoveNodeAction),
Connect(ConnectAction),
Disconnect(DisconnectAction),
@ -241,7 +240,6 @@ impl RemoveNodeAction {
// MoveNodeAction
// ============================================================================
#[allow(dead_code)]
pub struct MoveNodeAction {
layer_id: Uuid,
backend_node_id: BackendNodeId,
@ -250,7 +248,6 @@ pub struct MoveNodeAction {
}
impl MoveNodeAction {
#[allow(dead_code)]
pub fn new(layer_id: Uuid, backend_node_id: BackendNodeId, new_position: (f32, f32)) -> Self {
Self {
layer_id,

View File

@ -17,8 +17,8 @@ pub struct AudioGraphBackend {
audio_controller: Arc<Mutex<EngineController>>,
/// Maps backend NodeIndex to stable IDs for round-trip serialization
_node_index_to_stable: HashMap<NodeIndex, u32>,
_next_stable_id: u32,
node_index_to_stable: HashMap<NodeIndex, u32>,
next_stable_id: u32,
}
impl AudioGraphBackend {
@ -26,8 +26,8 @@ impl AudioGraphBackend {
Self {
track_id,
audio_controller,
_node_index_to_stable: HashMap::new(),
_next_stable_id: 0,
node_index_to_stable: HashMap::new(),
next_stable_id: 0,
}
}
}
@ -41,23 +41,25 @@ impl GraphBackend for AudioGraphBackend {
// Generate placeholder node ID
// This will be replaced with actual backend NodeIndex from sync query
let stable_id = self._next_stable_id;
self._next_stable_id += 1;
let stable_id = self.next_stable_id;
self.next_stable_id += 1;
// Placeholder: use stable_id as backend index (will be wrong, but compiles)
let node_idx = NodeIndex::new(stable_id as usize);
self._node_index_to_stable.insert(node_idx, stable_id);
self.node_index_to_stable.insert(node_idx, stable_id);
Ok(BackendNodeId::Audio(node_idx))
}
fn remove_node(&mut self, backend_id: BackendNodeId) -> Result<(), String> {
let BackendNodeId::Audio(node_idx) = backend_id;
let BackendNodeId::Audio(node_idx) = backend_id else {
return Err("Invalid backend node type".to_string());
};
let mut controller = self.audio_controller.lock().unwrap();
controller.graph_remove_node(self.track_id, node_idx.index() as u32);
self._node_index_to_stable.remove(&node_idx);
self.node_index_to_stable.remove(&node_idx);
Ok(())
}
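The `let ... else` rewrites throughout this file follow the standard pattern for a multi-variant enum: with only one variant, the plain `let` is irrefutable and an `else` arm is rejected, but once `BackendNodeId` can hold another backend (the `GraphBackend` docs mention a future `VfxGraphBackend`), each accessor must bail out with an `Err`. A self-contained sketch, with the second variant assumed:

```rust
#[derive(Clone, Copy)]
enum BackendNodeId {
    Audio(usize),
    // A second variant (e.g. a future VFX backend) is what makes the
    // plain `let` pattern refutable and forces `let ... else`.
    Vfx(usize),
}

fn remove_node(backend_id: BackendNodeId) -> Result<(), String> {
    let BackendNodeId::Audio(node_idx) = backend_id else {
        return Err("Invalid backend node type".to_string());
    };
    println!("removing audio node {node_idx}");
    Ok(())
}

fn main() {
    assert!(remove_node(BackendNodeId::Audio(3)).is_ok());
    assert!(remove_node(BackendNodeId::Vfx(3)).is_err());
}
```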
@ -69,8 +71,12 @@ impl GraphBackend for AudioGraphBackend {
input_node: BackendNodeId,
input_port: usize,
) -> Result<(), String> {
let BackendNodeId::Audio(from_idx) = output_node;
let BackendNodeId::Audio(to_idx) = input_node;
let BackendNodeId::Audio(from_idx) = output_node else {
return Err("Invalid output node type".to_string());
};
let BackendNodeId::Audio(to_idx) = input_node else {
return Err("Invalid input node type".to_string());
};
let mut controller = self.audio_controller.lock().unwrap();
controller.graph_connect(
@ -91,8 +97,12 @@ impl GraphBackend for AudioGraphBackend {
input_node: BackendNodeId,
input_port: usize,
) -> Result<(), String> {
let BackendNodeId::Audio(from_idx) = output_node;
let BackendNodeId::Audio(to_idx) = input_node;
let BackendNodeId::Audio(from_idx) = output_node else {
return Err("Invalid output node type".to_string());
};
let BackendNodeId::Audio(to_idx) = input_node else {
return Err("Invalid input node type".to_string());
};
let mut controller = self.audio_controller.lock().unwrap();
controller.graph_disconnect(
@ -112,7 +122,9 @@ impl GraphBackend for AudioGraphBackend {
param_id: u32,
value: f64,
) -> Result<(), String> {
let BackendNodeId::Audio(node_idx) = backend_id;
let BackendNodeId::Audio(node_idx) = backend_id else {
return Err("Invalid backend node type".to_string());
};
let mut controller = self.audio_controller.lock().unwrap();
controller.graph_set_parameter(
@ -168,7 +180,9 @@ impl GraphBackend for AudioGraphBackend {
x: f32,
y: f32,
) -> Result<BackendNodeId, String> {
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id;
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id else {
return Err("Invalid voice allocator node type".to_string());
};
let mut controller = self.audio_controller.lock().unwrap();
controller.graph_add_node_to_template(
@ -180,8 +194,8 @@ impl GraphBackend for AudioGraphBackend {
);
// Placeholder return
let stable_id = self._next_stable_id;
self._next_stable_id += 1;
let stable_id = self.next_stable_id;
self.next_stable_id += 1;
let node_idx = NodeIndex::new(stable_id as usize);
Ok(BackendNodeId::Audio(node_idx))
@ -195,9 +209,15 @@ impl GraphBackend for AudioGraphBackend {
input_node: BackendNodeId,
input_port: usize,
) -> Result<(), String> {
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id;
let BackendNodeId::Audio(from_idx) = output_node;
let BackendNodeId::Audio(to_idx) = input_node;
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id else {
return Err("Invalid voice allocator node type".to_string());
};
let BackendNodeId::Audio(from_idx) = output_node else {
return Err("Invalid output node type".to_string());
};
let BackendNodeId::Audio(to_idx) = input_node else {
return Err("Invalid input node type".to_string());
};
let mut controller = self.audio_controller.lock().unwrap();
controller.graph_connect_in_template(

View File

@ -18,7 +18,6 @@ pub enum BackendNodeId {
/// Implementations:
/// - AudioGraphBackend: Wraps daw_backend::AudioGraph via EngineController
/// - VfxGraphBackend (future): GPU-based shader graph
#[allow(dead_code)]
pub trait GraphBackend: Send {
/// Add a node to the backend graph
fn add_node(&mut self, node_type: &str, x: f32, y: f32) -> Result<BackendNodeId, String>;

View File

@ -67,7 +67,6 @@ impl NodeGraphPane {
}
}
#[allow(dead_code)]
pub fn with_track_id(
track_id: Uuid,
audio_controller: std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>,
@ -208,7 +207,7 @@ impl NodeGraphPane {
// Set parameter values
for (&param_id, &value) in &node.parameters {
// Find the input param in the graph and set its value
if let Some(_node_data) = self.state.graph.nodes.get_mut(frontend_id) {
if let Some(node_data) = self.state.graph.nodes.get_mut(frontend_id) {
// TODO: Set parameter values on the node's input params
// This requires matching param_id to the input param by index
let _ = (param_id, value); // Silence unused warning for now
@ -429,25 +428,25 @@ impl NodeGraphPane {
fn check_parameter_changes(&mut self) {
// Check all input parameters for value changes
let mut _checked_count = 0;
let mut _connection_only_count = 0;
let mut _non_float_count = 0;
let mut checked_count = 0;
let mut connection_only_count = 0;
let mut non_float_count = 0;
for (input_id, input_param) in &self.state.graph.inputs {
// Only check parameters that can have constant values (not ConnectionOnly)
if matches!(input_param.kind, InputParamKind::ConnectionOnly) {
_connection_only_count += 1;
connection_only_count += 1;
continue;
}
// Get current value
let current_value = match &input_param.value {
ValueType::Float { value } => {
_checked_count += 1;
checked_count += 1;
*value
},
other => {
_non_float_count += 1;
non_float_count += 1;
eprintln!("[DEBUG] Non-float parameter type: {:?}", std::mem::discriminant(other));
continue;
}
@ -573,7 +572,7 @@ impl crate::panes::PaneRenderer for NodeGraphPane {
// Check if track is MIDI or Audio
if let Some(audio_controller) = &shared.audio_controller {
let is_valid_track = {
let _controller = audio_controller.lock().unwrap();
let controller = audio_controller.lock().unwrap();
// TODO: Query track type from backend
// For now, assume it's valid if we have a track ID mapping
true
@ -625,17 +624,13 @@ impl crate::panes::PaneRenderer for NodeGraphPane {
let grid_color = grid_style.background_color.unwrap_or(egui::Color32::from_gray(55));
// Allocate the rect and render the graph editor within it
ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
ui.allocate_ui_at_rect(rect, |ui| {
// Check for scroll input to override library's default zoom behavior
// Only handle scroll when mouse is over the node graph area
let pointer_over_graph = ui.rect_contains_pointer(rect);
let modifiers = ui.input(|i| i.modifiers);
let has_ctrl = modifiers.ctrl || modifiers.command;
// When ctrl is held, check for raw scroll events in the events list
let scroll_delta = if !pointer_over_graph {
egui::Vec2::ZERO
} else if has_ctrl {
let scroll_delta = if has_ctrl {
// Sum up scroll events from the raw event list
ui.input(|i| {
let mut total_scroll = egui::Vec2::ZERO;
@ -706,8 +701,8 @@ impl crate::panes::PaneRenderer for NodeGraphPane {
// Draw menu button in top-left corner
let button_pos = rect.min + egui::vec2(8.0, 8.0);
ui.scope_builder(
egui::UiBuilder::new().max_rect(egui::Rect::from_min_size(button_pos, egui::vec2(100.0, 24.0))),
ui.allocate_ui_at_rect(
egui::Rect::from_min_size(button_pos, egui::vec2(100.0, 24.0)),
|ui| {
if ui.button(" Add Node").clicked() {
// Open node finder at button's top-left position

View File

@ -1,4 +1,3 @@
#![allow(dead_code)]
//! Node Type Registry
//!
//! Defines metadata for all available node types

View File

@ -38,7 +38,7 @@ impl NodePalette {
.rect_filled(rect, 0.0, egui::Color32::from_rgb(30, 30, 30));
// Create UI within the palette rect
ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
ui.allocate_ui_at_rect(rect, |ui| {
ui.vertical(|ui| {
ui.add_space(8.0);

File diff suppressed because it is too large

View File

@ -219,7 +219,6 @@ pub struct ShaderEditorPane {
/// The shader source code being edited
shader_code: String,
/// Whether to show the template selector
#[allow(dead_code)]
show_templates: bool,
/// Error message from last compilation attempt (if any)
compile_error: Option<String>,

View File

@ -1,101 +0,0 @@
// GPU Constant-Q Transform (CQT) compute shader.
//
// Reads raw audio samples from a waveform mip-0 texture (Rgba16Float, packed
// row-major at TEX_WIDTH=2048) and computes CQT magnitude for each
// (freq_bin, time_column) pair, writing normalized dB values into a ring-buffer
// cache texture (Rgba16Float, width=cache_capacity, height=freq_bins;
// only the red channel carries data, matching the render shader).
//
// Dispatch: (ceil(freq_bins / 64), num_columns, 1)
// Each thread handles one frequency bin for one time column.
struct CqtParams {
hop_size: u32,
freq_bins: u32,
cache_capacity: u32,
cache_write_offset: u32, // ring buffer position to start writing
num_columns: u32, // how many columns in this dispatch
column_start: u32, // global CQT column index of first column
tex_width: u32, // waveform texture width (2048)
total_frames: u32, // total audio frames in waveform texture
sample_rate: f32,
column_stride: u32,
_pad1: u32,
_pad2: u32,
}
struct BinInfo {
window_length: u32,
phase_step: f32, // 2*pi*Q / N_k
_pad0: u32,
_pad1: u32,
}
@group(0) @binding(0) var audio_tex: texture_2d<f32>;
@group(0) @binding(1) var cqt_out: texture_storage_2d<rgba16float, write>;
@group(0) @binding(2) var<uniform> params: CqtParams;
@group(0) @binding(3) var<storage, read> bins: array<BinInfo>;
const PI2: f32 = 6.283185307;
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
let bin_k = gid.x;
let col_rel = gid.y; // relative to this dispatch batch
if bin_k >= params.freq_bins || col_rel >= params.num_columns {
return;
}
let global_col = params.column_start + col_rel * params.column_stride;
let sample_start = global_col * params.hop_size;
let info = bins[bin_k];
let n_k = info.window_length;
// Center the analysis window: offset by half the window length so the
// column timestamp refers to the center of the window, not the start.
// This gives better time alignment, especially for low-frequency bins
// that have very long windows.
let half_win = n_k / 2u;
// Accumulate complex inner product: sum of x[n] * w[n] * exp(-i * phase_step * n)
var sum_re: f32 = 0.0;
var sum_im: f32 = 0.0;
for (var n = 0u; n < n_k; n++) {
// Center the window around the hop position
let raw_idx = i32(sample_start) + i32(n) - i32(half_win);
if raw_idx < 0 || u32(raw_idx) >= params.total_frames {
continue;
}
let sample_idx = u32(raw_idx);
// Read audio sample from 2D waveform texture (mip 0)
// At mip 0: R=G=left, B=A=right; average to mono
let tx = sample_idx % params.tex_width;
let ty = sample_idx / params.tex_width;
let texel = textureLoad(audio_tex, vec2<i32>(i32(tx), i32(ty)), 0);
let sample_val = (texel.r + texel.b) * 0.5;
// Hann window computed analytically
let window = 0.5 * (1.0 - cos(PI2 * f32(n) / f32(n_k)));
// Complex exponential: exp(-i * phase_step * n)
let angle = info.phase_step * f32(n);
let windowed = sample_val * window;
sum_re += windowed * cos(angle);
sum_im -= windowed * sin(angle);
}
// Magnitude, normalized by window length
let mag = sqrt(sum_re * sum_re + sum_im * sum_im) / f32(n_k);
// Convert to dB, map -80dB..0dB -> 0.0..1.0
// WGSL log() is natural log, so log10(x) = log(x) / log(10)
let db = 20.0 * log(mag + 1e-10) / 2.302585093;
let normalized = clamp((db + 80.0) / 80.0, 0.0, 1.0);
// Write to ring buffer cache texture
let cache_x = (params.cache_write_offset + col_rel) % params.cache_capacity;
textureStore(cqt_out, vec2<i32>(i32(cache_x), i32(bin_k)), vec4(normalized, 0.0, 0.0, 1.0));
}
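The `BinInfo` table this shader consumes follows from the usual constant-Q definitions: f_k = f_min * 2^(k/B), Q = 1/(2^(1/B) - 1), N_k = ceil(Q * f_s / f_k), and phase_step = 2*pi*Q / N_k. A CPU-side sketch under those assumptions (not necessarily the project's exact setup code):

```rust
fn midi_note_to_hz(note: f64) -> f64 {
    440.0 * 2f64.powf((note - 69.0) / 12.0)
}

struct BinInfo {
    window_length: u32,
    phase_step: f32, // 2*pi*Q / N_k, matching the WGSL struct above
}

fn build_bins(min_note: f64, freq_bins: u32, bins_per_octave: f64, sample_rate: f64) -> Vec<BinInfo> {
    // Q is fixed by the bin spacing: adjacent bins are 2^(1/B) apart.
    let q = 1.0 / (2f64.powf(1.0 / bins_per_octave) - 1.0);
    (0..freq_bins)
        .map(|k| {
            let f_k = midi_note_to_hz(min_note) * 2f64.powf(k as f64 / bins_per_octave);
            let n_k = (q * sample_rate / f_k).ceil().max(1.0) as u32;
            BinInfo {
                window_length: n_k,
                phase_step: (std::f64::consts::TAU * q / n_k as f64) as f32,
            }
        })
        .collect()
}

fn main() {
    let bins = build_bins(21.0, 12, 12.0, 48_000.0); // A0 upward, 1 bin per semitone
    println!("bin 0: N = {}, phase_step = {}", bins[0].window_length, bins[0].phase_step);
}
```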

View File

@ -1,155 +0,0 @@
// CQT spectrogram render shader.
//
// Reads from a ring-buffer cache texture (Rgba16Float) where:
// X = time column (ring buffer index), Y = CQT frequency bin
// CQT bins map directly to MIDI notes via: bin = (note - min_note) * bins_per_octave / 12
//
// Applies the same colormap as the old FFT spectrogram.
// Must match CqtRenderParams in cqt_gpu.rs exactly (96 bytes).
struct Params {
clip_rect: vec4<f32>, // 16 @ 0
viewport_start_time: f32, // 4 @ 16
pixels_per_second: f32, // 4 @ 20
audio_duration: f32, // 4 @ 24
sample_rate: f32, // 4 @ 28
clip_start_time: f32, // 4 @ 32
trim_start: f32, // 4 @ 36
freq_bins: f32, // 4 @ 40
bins_per_octave: f32, // 4 @ 44
hop_size: f32, // 4 @ 48
scroll_y: f32, // 4 @ 52
note_height: f32, // 4 @ 56
min_note: f32, // 4 @ 60
max_note: f32, // 4 @ 64
gamma: f32, // 4 @ 68
cache_capacity: f32, // 4 @ 72
cache_start_column: f32, // 4 @ 76
cache_valid_start: f32, // 4 @ 80
cache_valid_end: f32, // 4 @ 84
column_stride: f32, // 4 @ 88
_pad: f32, // 4 @ 92, total 96
}
@group(0) @binding(0) var cache_tex: texture_2d<f32>;
@group(0) @binding(1) var cache_sampler: sampler;
@group(0) @binding(2) var<uniform> params: Params;
struct VertexOutput {
@builtin(position) position: vec4<f32>,
@location(0) uv: vec2<f32>,
}
@vertex
fn vs_main(@builtin(vertex_index) vi: u32) -> VertexOutput {
var out: VertexOutput;
let x = f32(i32(vi) / 2) * 4.0 - 1.0;
let y = f32(i32(vi) % 2) * 4.0 - 1.0;
out.position = vec4(x, y, 0.0, 1.0);
out.uv = vec2((x + 1.0) * 0.5, (1.0 - y) * 0.5);
return out;
}
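// (Fullscreen-triangle note: for vi = 0, 1, 2 the formulas above emit
// clip-space vertices (-1,-1), (-1,3), (3,-1): one oversized triangle that
// covers the viewport without the diagonal seam of a two-triangle quad.)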
fn rounded_rect_sdf(pos: vec2<f32>, rect_min: vec2<f32>, rect_max: vec2<f32>, r: f32) -> f32 {
let center = (rect_min + rect_max) * 0.5;
let half_size = (rect_max - rect_min) * 0.5;
let q = abs(pos - center) - half_size + vec2(r);
return length(max(q, vec2(0.0))) - r;
}
// Colormap: black -> blue -> purple -> red -> orange -> yellow -> white
fn colormap(v: f32, gamma: f32) -> vec4<f32> {
let t = pow(clamp(v, 0.0, 1.0), gamma);
if t < 1.0 / 6.0 {
let s = t * 6.0;
return vec4(0.0, 0.0, s, 1.0);
} else if t < 2.0 / 6.0 {
let s = (t - 1.0 / 6.0) * 6.0;
return vec4(s * 0.6, 0.0, 1.0 - s * 0.2, 1.0);
} else if t < 3.0 / 6.0 {
let s = (t - 2.0 / 6.0) * 6.0;
return vec4(0.6 + s * 0.4, 0.0, 0.8 - s * 0.8, 1.0);
} else if t < 4.0 / 6.0 {
let s = (t - 3.0 / 6.0) * 6.0;
return vec4(1.0, s * 0.5, 0.0, 1.0);
} else if t < 5.0 / 6.0 {
let s = (t - 4.0 / 6.0) * 6.0;
return vec4(1.0, 0.5 + s * 0.5, 0.0, 1.0);
} else {
let s = (t - 5.0 / 6.0) * 6.0;
return vec4(1.0, 1.0, s, 1.0);
}
}
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
let frag_x = in.position.x;
let frag_y = in.position.y;
// Clip to view rectangle
if frag_x < params.clip_rect.x || frag_x > params.clip_rect.z ||
frag_y < params.clip_rect.y || frag_y > params.clip_rect.w {
discard;
}
// Compute the content rect in screen space
let content_left = params.clip_rect.x + (params.clip_start_time - params.trim_start - params.viewport_start_time) * params.pixels_per_second;
let content_right = content_left + params.audio_duration * params.pixels_per_second;
let content_top = params.clip_rect.y - params.scroll_y;
let content_bottom = params.clip_rect.y + (params.max_note - params.min_note + 1.0) * params.note_height - params.scroll_y;
// Rounded corners
let vis_top = max(content_top, params.clip_rect.y);
let vis_bottom = min(content_bottom, params.clip_rect.w);
let corner_radius = 6.0;
let dist = rounded_rect_sdf(
vec2(frag_x, frag_y),
vec2(content_left, vis_top),
vec2(content_right, vis_bottom),
corner_radius
);
if dist > 0.0 {
discard;
}
// Fragment X -> audio time -> global CQT column
let timeline_time = params.viewport_start_time + (frag_x - params.clip_rect.x) / params.pixels_per_second;
let audio_time = timeline_time - params.clip_start_time + params.trim_start;
if audio_time < 0.0 || audio_time > params.audio_duration {
discard;
}
let global_col = audio_time * params.sample_rate / params.hop_size;
// Check if this column is in the cached range
if global_col < params.cache_valid_start || global_col >= params.cache_valid_end {
discard;
}
// Fragment Y -> MIDI note -> CQT bin (direct mapping!)
let note = params.max_note - ((frag_y - params.clip_rect.y + params.scroll_y) / params.note_height);
if note < params.min_note || note > params.max_note {
discard;
}
// CQT bin: each octave has bins_per_octave bins, starting from min_note
let bin = (note - params.min_note) * params.bins_per_octave / 12.0;
if bin < 0.0 || bin >= params.freq_bins {
discard;
}
// Map global column to ring buffer position (accounting for stride)
let ring_pos = (global_col - params.cache_start_column) / params.column_stride;
let cache_x = ring_pos % params.cache_capacity;
// Sample cache texture with bilinear filtering
let u = (cache_x + 0.5) / params.cache_capacity;
let v = (bin + 0.5) / params.freq_bins;
let magnitude = textureSampleLevel(cache_tex, cache_sampler, vec2(u, v), 0.0).r;
return colormap(magnitude, params.gamma);
}
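Collecting the fragment-to-texel mapping in fs_main into one place (same quantities as the shader parameters: note m, bins per octave B, sample rate f_s, hop h, column stride s, ring capacity C):

```latex
\mathrm{bin}(m) = (m - m_{\min})\,\tfrac{B}{12}, \qquad
\mathrm{col}(t_{\mathrm{audio}}) = \frac{t_{\mathrm{audio}}\, f_s}{h}, \qquad
x_{\mathrm{cache}} = \left(\frac{\mathrm{col} - \mathrm{col}_{\mathrm{start}}}{s}\right) \bmod C
```

A fragment is discarded unless its column lands inside [cache_valid_start, cache_valid_end).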

View File

@ -63,9 +63,8 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
}
// Fragment X position -> audio time
// clip_start_time is the screen X of the (unclamped) clip left edge.
// (frag_x - clip_start_time) / pps gives the time offset from the clip's start.
let audio_time = (frag_x - params.clip_start_time) / params.pixels_per_second + params.trim_start;
let timeline_time = params.viewport_start_time + (frag_x - params.clip_rect.x) / params.pixels_per_second;
let audio_time = timeline_time - params.clip_start_time + params.trim_start;
// Audio time -> frame index
let frame_f = audio_time * params.sample_rate - params.segment_start_frame;
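The two formulations above are algebraically the same; the only difference is whether clip_start_time is a screen X in pixels or a timeline time in seconds. With viewport start t_0 rendered at screen x_rect and pps pixels per second:

```latex
% X_clip: screen X of the clip's left edge; t_clip: timeline start in seconds
X_{\text{clip}} = x_{\text{rect}} + (t_{\text{clip}} - t_0)\,\text{pps}
\;\Longrightarrow\;
\frac{x - X_{\text{clip}}}{\text{pps}} + t_{\text{trim}}
= \underbrace{t_0 + \frac{x - x_{\text{rect}}}{\text{pps}}}_{t_{\text{timeline}}} -\; t_{\text{clip}} + t_{\text{trim}}
```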

View File

@ -6,8 +6,8 @@
use eframe::egui;
use lightningbeam_core::action::Action;
use lightningbeam_core::clip::ClipInstance;
use lightningbeam_core::gpu::{BufferPool, BufferFormat, BufferSpec, Compositor, EffectProcessor, SrgbToLinearConverter};
use lightningbeam_core::layer::{AnyLayer, AudioLayer};
use lightningbeam_core::gpu::{BufferPool, BufferFormat, BufferSpec, Compositor, EffectProcessor, HDR_FORMAT, SrgbToLinearConverter};
use lightningbeam_core::layer::{AnyLayer, AudioLayer, AudioLayerType, VideoLayer, VectorLayer};
use lightningbeam_core::renderer::RenderedLayerType;
use super::{DragClipType, NodePath, PaneRenderer, SharedPaneState};
use std::sync::{Arc, Mutex, OnceLock};
@ -872,7 +872,7 @@ impl egui_wgpu::CallbackTrait for VelloCallback {
}
// Also draw selection outlines for clip instances
let _clip_instance_count = self.selection.clip_instances().len();
let clip_instance_count = self.selection.clip_instances().len();
for &clip_id in self.selection.clip_instances() {
if let Some(clip_instance) = vector_layer.clip_instances.iter().find(|ci| ci.id == clip_id) {
// Calculate clip-local time
@ -1865,7 +1865,7 @@ impl egui_wgpu::CallbackTrait for VelloCallback {
// Clamp to texture bounds
if tex_x < width && tex_y < height {
// Create a staging buffer to read back the pixel
let _bytes_per_pixel = 4; // RGBA8
let bytes_per_pixel = 4; // RGBA8
// Align bytes_per_row to 256 (wgpu::COPY_BYTES_PER_ROW_ALIGNMENT)
let bytes_per_row_alignment = 256u32;
let bytes_per_row = bytes_per_row_alignment; // Single pixel, use minimum alignment
@ -2128,6 +2128,7 @@ impl StagePane {
use lightningbeam_core::tool::ToolState;
use lightningbeam_core::layer::AnyLayer;
use lightningbeam_core::hit_test::{self, hit_test_vector_editing, EditingHitTolerance, VectorEditHit};
use lightningbeam_core::bezpath_editing::{extract_editable_curves, mold_curve};
use vello::kurbo::{Point, Rect as KurboRect, Affine};
// Check if we have an active vector layer
@ -2617,8 +2618,9 @@ impl StagePane {
mouse_pos: vello::kurbo::Point,
shared: &mut SharedPaneState,
) {
use lightningbeam_core::bezpath_editing::mold_curve;
use lightningbeam_core::bezpath_editing::{mold_curve, rebuild_bezpath};
use lightningbeam_core::tool::ToolState;
use vello::kurbo::Point;
// Clone tool state to get owned values
let tool_state = shared.tool_state.clone();
@ -2797,12 +2799,12 @@ impl StagePane {
ui: &mut egui::Ui,
response: &egui::Response,
world_pos: egui::Vec2,
_shift_held: bool,
shift_held: bool,
shared: &mut SharedPaneState,
) {
use lightningbeam_core::tool::ToolState;
use lightningbeam_core::layer::AnyLayer;
use lightningbeam_core::hit_test::{hit_test_vector_editing, EditingHitTolerance, VectorEditHit};
use lightningbeam_core::hit_test::{self, hit_test_vector_editing, EditingHitTolerance, VectorEditHit};
use vello::kurbo::{Point, Affine};
// Check if we have an active vector layer
@ -2895,7 +2897,7 @@ impl StagePane {
shape_instance_id: uuid::Uuid,
curve_index: usize,
point_index: u8,
_mouse_pos: vello::kurbo::Point,
mouse_pos: vello::kurbo::Point,
active_layer_id: uuid::Uuid,
shared: &mut SharedPaneState,
) {
@ -3604,7 +3606,7 @@ impl StagePane {
// Mouse drag: add points to path
if response.dragged() {
if let ToolState::DrawingPath { points, simplify_mode: _ } = &mut *shared.tool_state {
if let ToolState::DrawingPath { points, simplify_mode } = &mut *shared.tool_state {
// Only add point if it's far enough from the last point (reduce noise)
const MIN_POINT_DISTANCE: f64 = 2.0;
@ -3758,12 +3760,63 @@ impl StagePane {
}
}
/// Decompose an affine matrix into transform components
/// Returns (translation_x, translation_y, rotation_deg, scale_x, scale_y, skew_x_deg, skew_y_deg)
fn decompose_affine(affine: kurbo::Affine) -> (f64, f64, f64, f64, f64, f64, f64) {
let coeffs = affine.as_coeffs();
let a = coeffs[0];
let b = coeffs[1];
let c = coeffs[2];
let d = coeffs[3];
let e = coeffs[4]; // translation_x
let f = coeffs[5]; // translation_y
// Extract translation
let tx = e;
let ty = f;
// Decompose linear part [[a, c], [b, d]] into rotate * scale * skew
// Using QR-like decomposition
// Extract rotation
let rotation_rad = b.atan2(a);
let cos_r = rotation_rad.cos();
let sin_r = rotation_rad.sin();
// Remove rotation to get scale * skew
// R^(-1) * M where M = [[a, c], [b, d]]
let m11 = a * cos_r + b * sin_r;
let m12 = c * cos_r + d * sin_r;
let m21 = -a * sin_r + b * cos_r;
let m22 = -c * sin_r + d * cos_r;
// Now [[m11, m12], [m21, m22]] = scale * skew
// scale * skew = [[sx, 0], [0, sy]] * [[1, tan(skew_y)], [tan(skew_x), 1]]
// = [[sx, sx*tan(skew_y)], [sy*tan(skew_x), sy]]
let scale_x = m11;
let scale_y = m22;
let skew_x_rad = if scale_y.abs() > 0.001 { (m21 / scale_y).atan() } else { 0.0 };
let skew_y_rad = if scale_x.abs() > 0.001 { (m12 / scale_x).atan() } else { 0.0 };
(
tx,
ty,
rotation_rad.to_degrees(),
scale_x,
scale_y,
skew_x_rad.to_degrees(),
skew_y_rad.to_degrees(),
)
}
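A standalone sanity check for decompose_affine, using plain arrays instead of kurbo::Affine. The coefficient order [a, b, c, d] matches as_coeffs(): (x, y) maps to (a*x + c*y, b*x + d*y):

```rust
/// Rebuild the linear part from decomposed components.
fn recompose(rot_deg: f64, sx: f64, sy: f64, skx_deg: f64, sky_deg: f64) -> [f64; 4] {
    let (s, c) = rot_deg.to_radians().sin_cos();
    let (tkx, tky) = (skx_deg.to_radians().tan(), sky_deg.to_radians().tan());
    // scale * skew = [[sx, sx*tan(skew_y)], [sy*tan(skew_x), sy]]
    let (m11, m12, m21, m22) = (sx, sx * tky, sy * tkx, sy);
    // Left-multiply by the rotation matrix [[c, -s], [s, c]]
    [
        c * m11 - s * m21, // a
        s * m11 + c * m21, // b
        c * m12 - s * m22, // c
        s * m12 + c * m22, // d
    ]
}

fn main() {
    // Pure 90-degree rotation: a = cos, b = sin, c = -sin, d = cos
    let m = recompose(90.0, 1.0, 1.0, 0.0, 0.0);
    assert!(m[0].abs() < 1e-12 && (m[1] - 1.0).abs() < 1e-12);
    // Feeding these coefficients (plus any translation) back through
    // decompose_affine should reproduce the inputs, up to sign conventions.
    println!("{m:?}");
}
```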
/// Apply transform preview to objects based on current mouse position
fn apply_transform_preview(
vector_layer: &mut lightningbeam_core::layer::VectorLayer,
mode: &lightningbeam_core::tool::TransformMode,
original_transforms: &std::collections::HashMap<uuid::Uuid, lightningbeam_core::object::Transform>,
_pivot: vello::kurbo::Point,
pivot: vello::kurbo::Point,
start_mouse: vello::kurbo::Point,
current_mouse: vello::kurbo::Point,
original_bbox: vello::kurbo::Rect,
@ -4731,6 +4784,7 @@ impl StagePane {
_ => egui::CursorIcon::Default,
};
ui.ctx().set_cursor_icon(cursor);
hovering_handle = true;
break;
}
}
@ -5165,6 +5219,7 @@ impl StagePane {
}
}
}
_ => {}
}
}
} else if let AnyLayer::Video(video_layer) = layer {

View File

@ -125,7 +125,6 @@ impl TimelinePane {
/// Execute a view action with the given parameters
/// Called from main.rs after determining this is the best handler
#[allow(dead_code)] // Mirrors StagePane; wiring in main.rs pending (see TODO at view action dispatch)
pub fn execute_view_action(&mut self, action: &crate::menu::MenuAction, zoom_center: egui::Vec2) {
use crate::menu::MenuAction;
match action {
@ -151,25 +150,21 @@ impl TimelinePane {
/// Start recording on the active audio layer
fn start_recording(&mut self, shared: &mut SharedPaneState) {
use lightningbeam_core::clip::{AudioClip, ClipInstance};
let Some(active_layer_id) = *shared.active_layer_id else {
println!("⚠️ No active layer selected for recording");
return;
};
// Get layer type (copy it so we can drop the document borrow before mutating)
let layer_type = {
let document = shared.action_executor.document();
let Some(layer) = document.root.children.iter().find(|l| l.id() == active_layer_id) else {
println!("⚠️ Active layer not found in document");
return;
};
let AnyLayer::Audio(audio_layer) = layer else {
println!("⚠️ Active layer is not an audio layer - cannot record");
return;
};
audio_layer.audio_layer_type
// Get the active layer and check if it's an audio layer
let document = shared.action_executor.document();
let Some(layer) = document.root.children.iter().find(|l| l.id() == active_layer_id) else {
println!("⚠️ Active layer not found in document");
return;
};
let AnyLayer::Audio(audio_layer) = layer else {
println!("⚠️ Active layer is not an audio layer - cannot record");
return;
};
// Get the backend track ID for this layer
@ -184,53 +179,31 @@ impl TimelinePane {
if let Some(controller_arc) = shared.audio_controller {
let mut controller = controller_arc.lock().unwrap();
match layer_type {
match audio_layer.audio_layer_type {
AudioLayerType::Midi => {
// Create backend MIDI clip and start recording
// For MIDI recording, we need to create a clip first
// The backend will emit MidiRecordingStarted with the clip_id
let clip_id = controller.create_midi_clip(track_id, start_time, 4.0);
controller.start_midi_recording(track_id, clip_id, start_time);
shared.recording_clips.insert(active_layer_id, clip_id);
println!("🎹 Started MIDI recording on track {:?} at {:.2}s, clip_id={}",
track_id, start_time, clip_id);
// Drop controller lock before document mutation
drop(controller);
// Create document clip + clip instance immediately (clip_id is known synchronously)
let doc_clip = AudioClip::new_midi("Recording...", clip_id, 4.0);
let doc_clip_id = shared.action_executor.document_mut().add_audio_clip(doc_clip);
let clip_instance = ClipInstance::new(doc_clip_id)
.with_timeline_start(start_time);
if let Some(layer) = shared.action_executor.document_mut().root.children.iter_mut()
.find(|l| l.id() == active_layer_id)
{
if let lightningbeam_core::layer::AnyLayer::Audio(audio_layer) = layer {
audio_layer.clip_instances.push(clip_instance);
}
}
// Initialize empty cache entry for this clip
shared.midi_event_cache.insert(clip_id, Vec::new());
}
AudioLayerType::Sampled => {
// For audio recording, backend creates the clip
controller.start_recording(track_id, start_time);
println!("🎤 Started audio recording on track {:?} at {:.2}s", track_id, start_time);
drop(controller);
}
}
// Re-acquire lock for playback start
// Auto-start playback if not already playing
if !*shared.is_playing {
let mut controller = controller_arc.lock().unwrap();
controller.play();
*shared.is_playing = true;
println!("▶ Auto-started playback for recording");
}
// Store recording state
// Store recording state for clip creation when RecordingStarted event arrives
*shared.is_recording = true;
*shared.recording_start_time = start_time;
*shared.recording_layer_id = Some(active_layer_id);
@ -532,7 +505,7 @@ impl TimelinePane {
painter: &egui::Painter,
clip_rect: egui::Rect,
rect_min_x: f32, // Timeline panel left edge (for proper viewport-relative positioning)
events: &[(f64, u8, u8, bool)], // (timestamp, note_number, velocity, is_note_on)
events: &[(f64, u8, bool)], // (timestamp, note_number, is_note_on)
trim_start: f64,
visible_duration: f64,
timeline_start: f64,
@ -554,7 +527,7 @@ impl TimelinePane {
let mut note_rectangles: Vec<(egui::Rect, u8)> = Vec::new();
// First pass: pair note-ons with note-offs to calculate durations
for &(timestamp, note_number, _velocity, is_note_on) in events {
for &(timestamp, note_number, is_note_on) in events {
if is_note_on {
// Store note-on timestamp
active_notes.insert(note_number, timestamp);
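A minimal version of the pairing pass this loop implements (hypothetical helper, written to run standalone):

```rust
use std::collections::HashMap;

/// A note-on opens an interval; the matching note-off closes it. (The real
/// code presumably also handles notes left hanging at the clip boundary.)
fn pair_notes(events: &[(f64, u8, u8, bool)]) -> Vec<(u8, f64, f64)> {
    let mut active: HashMap<u8, f64> = HashMap::new();
    let mut out = Vec::new();
    for &(ts, note, _vel, on) in events {
        if on {
            active.insert(note, ts);
        } else if let Some(start) = active.remove(&note) {
            out.push((note, start, ts));
        }
    }
    out
}

fn main() {
    let events = [(0.0, 60, 100, true), (0.5, 60, 0, false)];
    assert_eq!(pair_notes(&events), vec![(60u8, 0.0, 0.5)]);
}
```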
@ -782,7 +755,7 @@ impl TimelinePane {
// Mute button
// TODO: Replace with SVG icon (volume-up-fill.svg / volume-mute.svg)
let mute_response = ui.scope_builder(egui::UiBuilder::new().max_rect(mute_button_rect), |ui| {
let mute_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(mute_button_rect), |ui| {
let mute_text = if is_muted { "🔇" } else { "🔊" };
let button = egui::Button::new(mute_text)
.fill(if is_muted {
@ -806,7 +779,7 @@ impl TimelinePane {
// Solo button
// TODO: Replace with SVG headphones icon
let solo_response = ui.scope_builder(egui::UiBuilder::new().max_rect(solo_button_rect), |ui| {
let solo_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(solo_button_rect), |ui| {
let button = egui::Button::new("🎧")
.fill(if is_soloed {
egui::Color32::from_rgba_unmultiplied(100, 200, 100, 100)
@ -829,7 +802,7 @@ impl TimelinePane {
// Lock button
// TODO: Replace with SVG lock/lock-open icons
let lock_response = ui.scope_builder(egui::UiBuilder::new().max_rect(lock_button_rect), |ui| {
let lock_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(lock_button_rect), |ui| {
let lock_text = if is_locked { "🔒" } else { "🔓" };
let button = egui::Button::new(lock_text)
.fill(if is_locked {
@ -852,7 +825,7 @@ impl TimelinePane {
}
// Volume slider (nonlinear: 0-70% slider = 0-100% volume, 70-100% slider = 100-200% volume)
let volume_response = ui.scope_builder(egui::UiBuilder::new().max_rect(volume_slider_rect), |ui| {
let volume_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(volume_slider_rect), |ui| {
// Map volume (0.0-2.0) to slider position (0.0-1.0)
let slider_value = if current_volume <= 1.0 {
// 0.0-1.0 volume maps to 0.0-0.7 slider (70%)
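The piecewise curve described in that comment, written out with its inverse (illustrative sketch; the function names are hypothetical):

```rust
// First 70% of the slider covers 0..100% volume; the last 30% covers 100..200%.
fn slider_to_volume(s: f64) -> f64 {
    if s <= 0.7 { s / 0.7 } else { 1.0 + (s - 0.7) / 0.3 }
}

fn volume_to_slider(v: f64) -> f64 {
    if v <= 1.0 { v * 0.7 } else { 0.7 + (v - 1.0) * 0.3 }
}

fn main() {
    assert!((slider_to_volume(0.7) - 1.0).abs() < 1e-12); // 70% slider = unity gain
    assert!((volume_to_slider(2.0) - 1.0).abs() < 1e-12); // 200% volume = full slider
    let round_trip = slider_to_volume(volume_to_slider(1.5));
    assert!((round_trip - 1.5).abs() < 1e-12);
}
```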
@ -919,7 +892,7 @@ impl TimelinePane {
document: &lightningbeam_core::document::Document,
active_layer_id: &Option<uuid::Uuid>,
selection: &lightningbeam_core::selection::Selection,
midi_event_cache: &std::collections::HashMap<u32, Vec<(f64, u8, u8, bool)>>,
midi_event_cache: &std::collections::HashMap<u32, Vec<(f64, u8, bool)>>,
raw_audio_cache: &std::collections::HashMap<usize, (Vec<f32>, u32, u32)>,
waveform_gpu_dirty: &mut std::collections::HashSet<usize>,
target_format: wgpu::TextureFormat,
@ -1218,7 +1191,7 @@ impl TimelinePane {
if let Some((samples, sr, ch)) = raw_audio_cache.get(audio_pool_index) {
let total_frames = samples.len() / (*ch).max(1) as usize;
let audio_file_duration = total_frames as f64 / *sr as f64;
let screen_size = ui.ctx().content_rect().size();
let screen_size = ui.ctx().screen_rect().size();
let pending_upload = if waveform_gpu_dirty.contains(audio_pool_index) {
waveform_gpu_dirty.remove(audio_pool_index);
@ -1255,7 +1228,7 @@ impl TimelinePane {
pixels_per_second: self.pixels_per_second as f32,
audio_duration: audio_file_duration as f32,
sample_rate: *sr as f32,
clip_start_time: clip_screen_start,
clip_start_time: instance_start as f32,
trim_start: preview_trim_start as f32,
tex_width: crate::waveform_gpu::tex_width() as f32,
total_frames: total_frames as f32,
@ -1348,7 +1321,7 @@ impl TimelinePane {
fn handle_input(
&mut self,
ui: &mut egui::Ui,
_full_timeline_rect: egui::Rect,
full_timeline_rect: egui::Rect,
ruler_rect: egui::Rect,
content_rect: egui::Rect,
header_rect: egui::Rect,
@ -1358,7 +1331,7 @@ impl TimelinePane {
selection: &mut lightningbeam_core::selection::Selection,
pending_actions: &mut Vec<Box<dyn lightningbeam_core::action::Action>>,
playback_time: &mut f64,
_is_playing: &mut bool,
is_playing: &mut bool,
audio_controller: Option<&std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>>,
) {
// Don't allocate the header area for input - let widgets handle it directly
@ -1786,10 +1759,8 @@ impl TimelinePane {
}
// Distinguish between mouse wheel (discrete) and trackpad (smooth)
// Only handle scroll when mouse is over the timeline area
let mut handled = false;
let pointer_over_timeline = response.hovered() || ui.rect_contains_pointer(header_rect);
if pointer_over_timeline { ui.input(|i| {
ui.input(|i| {
for event in &i.raw.events {
if let egui::Event::MouseWheel { unit, delta, modifiers, .. } = event {
match unit {
@ -1816,10 +1787,10 @@ impl TimelinePane {
}
}
}
}); }
});
// Handle scroll_delta for trackpad panning (when Ctrl not held)
if pointer_over_timeline && !handled {
if !handled {
let scroll_delta = ui.input(|i| i.smooth_scroll_delta);
if scroll_delta.x.abs() > 0.0 || scroll_delta.y.abs() > 0.0 {
// Horizontal scroll: pan timeline (inverted: positive delta scrolls left/earlier in time)
@ -2297,7 +2268,7 @@ impl PaneRenderer for TimelinePane {
*shared.dragging_asset = None;
} else {
// Get document dimensions for centering and create clip instance
let (_center_x, _center_y, clip_instance) = {
let (center_x, center_y, mut clip_instance) = {
let doc = shared.action_executor.document();
let center_x = doc.width / 2.0;
let center_y = doc.height / 2.0;

View File

@ -215,7 +215,7 @@ impl VirtualPianoPane {
// Handle interaction (skip if a black key is being interacted with)
let key_id = ui.id().with(("white_key", note));
let _response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());
let response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());
// Visual feedback for pressed keys (check both pressed_notes and current pointer state)
let pointer_over_key = ui.input(|i| {
@ -298,7 +298,7 @@ impl VirtualPianoPane {
// Handle interaction (same as white keys)
let key_id = ui.id().with(("black_key", note));
let _response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());
let response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());
// Visual feedback for pressed keys (check both pressed_notes and current pointer state)
let pointer_over_key = ui.input(|i| {

View File

@ -192,7 +192,7 @@ impl PreferencesDialog {
ui.label("Default BPM:");
ui.add(
egui::DragValue::new(&mut self.working_prefs.bpm)
.range(20..=300)
.clamp_range(20..=300)
.speed(1.0),
);
});
@ -201,7 +201,7 @@ impl PreferencesDialog {
ui.label("Default Framerate:");
ui.add(
egui::DragValue::new(&mut self.working_prefs.framerate)
.range(1..=120)
.clamp_range(1..=120)
.speed(1.0)
.suffix(" fps"),
);
@ -211,7 +211,7 @@ impl PreferencesDialog {
ui.label("Default File Width:");
ui.add(
egui::DragValue::new(&mut self.working_prefs.file_width)
.range(100..=10000)
.clamp_range(100..=10000)
.speed(10.0)
.suffix(" px"),
);
@ -221,7 +221,7 @@ impl PreferencesDialog {
ui.label("Default File Height:");
ui.add(
egui::DragValue::new(&mut self.working_prefs.file_height)
.range(100..=10000)
.clamp_range(100..=10000)
.speed(10.0)
.suffix(" px"),
);
@ -231,7 +231,7 @@ impl PreferencesDialog {
ui.label("Scroll Speed:");
ui.add(
egui::DragValue::new(&mut self.working_prefs.scroll_speed)
.range(0.1..=10.0)
.clamp_range(0.1..=10.0)
.speed(0.1),
);
});
@ -245,7 +245,7 @@ impl PreferencesDialog {
ui.horizontal(|ui| {
ui.label("Audio Buffer Size:");
egui::ComboBox::from_id_salt("audio_buffer_size")
egui::ComboBox::from_id_source("audio_buffer_size")
.selected_text(format!("{} samples", self.working_prefs.audio_buffer_size))
.show_ui(ui, |ui| {
ui.selectable_value(
@ -292,7 +292,7 @@ impl PreferencesDialog {
ui.horizontal(|ui| {
ui.label("Theme:");
egui::ComboBox::from_id_salt("theme_mode")
egui::ComboBox::from_id_source("theme_mode")
.selected_text(format!("{:?}", self.working_prefs.theme_mode))
.show_ui(ui, |ui| {
ui.selectable_value(

View File

@ -4,3 +4,4 @@
pub mod dialog;
pub use dialog::{PreferencesDialog, PreferencesSaveResult};

View File

@ -46,6 +46,27 @@ pub struct Style {
// Add more properties as needed
}
impl Style {
/// Merge another style into this one (other's properties override if present)
pub fn merge(&mut self, other: &Style) {
if other.background_color.is_some() {
self.background_color = other.background_color;
}
if other.border_color.is_some() {
self.border_color = other.border_color;
}
if other.text_color.is_some() {
self.text_color = other.text_color;
}
if other.width.is_some() {
self.width = other.width;
}
if other.height.is_some() {
self.height = other.height;
}
}
}
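A usage sketch for Style::merge, with a minimal stand-in struct (the real Style has more fields and uses egui color types):

```rust
// Standalone illustration of the merge semantics: a selector-specific style
// overrides only the fields it actually sets.
#[derive(Default, Debug, PartialEq, Clone, Copy)]
struct Style {
    background_color: Option<u32>, // stand-in for egui::Color32
    text_color: Option<u32>,
}

impl Style {
    fn merge(&mut self, other: &Style) {
        if other.background_color.is_some() {
            self.background_color = other.background_color;
        }
        if other.text_color.is_some() {
            self.text_color = other.text_color;
        }
    }
}

fn main() {
    let mut base = Style { background_color: Some(0x222222), text_color: Some(0xffffff) };
    let overlay = Style { text_color: Some(0xff0000), ..Default::default() };
    base.merge(&overlay);
    assert_eq!(base.background_color, Some(0x222222)); // untouched
    assert_eq!(base.text_color, Some(0xff0000));       // overridden
}
```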
#[derive(Debug, Clone)]
pub struct Theme {
light_variables: HashMap<String, String>,
@ -208,13 +229,21 @@ impl Theme {
}
}
/// Get a CSS variable value and parse as color (backward compatibility helper)
/// This allows old code using theme.color("variable-name") to work
pub fn color(&self, var_name: &str) -> Option<egui::Color32> {
// Try light variables first, then dark variables
let value = self.light_variables.get(var_name)
.or_else(|| self.dark_variables.get(var_name))?;
parse_hex_color(value)
}
/// Get the number of loaded selectors
pub fn len(&self) -> usize {
self.light_styles.len()
}
/// Check if theme has no styles
#[allow(dead_code)] // Used in tests
pub fn is_empty(&self) -> bool {
self.light_styles.is_empty()
}

View File

@ -14,6 +14,9 @@ use wgpu::util::DeviceExt;
/// Fixed texture width (power of 2) for all waveform textures
const TEX_WIDTH: u32 = 2048;
/// Maximum number of texture segments per audio clip
const MAX_SEGMENTS: u32 = 16;
/// GPU resources for all waveform textures, stored in CallbackResources
pub struct WaveformGpuResources {
/// Per-audio-pool-index GPU data
@ -31,7 +34,6 @@ pub struct WaveformGpuResources {
}
/// GPU data for a single audio file
#[allow(dead_code)] // textures/texture_views must stay alive to back bind groups; metadata for future use
pub struct WaveformGpuEntry {
/// One texture per segment (for long audio split across multiple textures)
pub textures: Vec<wgpu::Texture>,
@ -43,10 +45,8 @@ pub struct WaveformGpuEntry {
pub uniform_buffers: Vec<wgpu::Buffer>,
/// Frames covered by each texture segment
pub frames_per_segment: u32,
/// Total frame count of data currently in the texture
/// Total frame count
pub total_frames: u64,
/// Allocated texture height (may be larger than needed for current total_frames)
pub tex_height: u32,
/// Sample rate
pub sample_rate: u32,
/// Number of channels in source audio
@ -273,99 +273,13 @@ impl WaveformGpuResources {
sample_rate: u32,
channels: u32,
) -> Vec<wgpu::CommandBuffer> {
let new_total_frames = samples.len() / channels.max(1) as usize;
if new_total_frames == 0 {
return Vec::new();
}
// If entry exists and texture is large enough, do an incremental update
let incremental = if let Some(entry) = self.entries.get(&pool_index) {
let new_tex_height = (new_total_frames as u32 + TEX_WIDTH - 1) / TEX_WIDTH;
if new_tex_height <= entry.tex_height && new_total_frames > entry.total_frames as usize {
Some((entry.total_frames as usize, entry.tex_height))
} else if new_total_frames <= entry.total_frames as usize {
return Vec::new(); // No new data
} else {
None // Texture too small, need full recreate
}
} else {
None // No entry yet
};
if let Some((old_frames, tex_height)) = incremental {
// Write only the NEW rows into the existing texture
let start_row = old_frames as u32 / TEX_WIDTH;
let end_row = (new_total_frames as u32 + TEX_WIDTH - 1) / TEX_WIDTH;
let rows_to_write = end_row - start_row;
let row_texel_count = (TEX_WIDTH * rows_to_write) as usize;
let mut row_data: Vec<half::f16> = vec![half::f16::ZERO; row_texel_count * 4];
let row_start_frame = start_row as usize * TEX_WIDTH as usize;
for frame in 0..(rows_to_write as usize * TEX_WIDTH as usize) {
let global_frame = row_start_frame + frame;
if global_frame >= new_total_frames {
break;
}
let sample_offset = global_frame * channels as usize;
let left = if sample_offset < samples.len() {
samples[sample_offset]
} else {
0.0
};
let right = if channels >= 2 && sample_offset + 1 < samples.len() {
samples[sample_offset + 1]
} else {
left
};
let texel_offset = frame * 4;
row_data[texel_offset] = half::f16::from_f32(left);
row_data[texel_offset + 1] = half::f16::from_f32(left);
row_data[texel_offset + 2] = half::f16::from_f32(right);
row_data[texel_offset + 3] = half::f16::from_f32(right);
}
let entry = self.entries.get(&pool_index).unwrap();
queue.write_texture(
wgpu::TexelCopyTextureInfo {
texture: &entry.textures[0],
mip_level: 0,
origin: wgpu::Origin3d { x: 0, y: start_row, z: 0 },
aspect: wgpu::TextureAspect::All,
},
bytemuck::cast_slice(&row_data),
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(TEX_WIDTH * 8),
rows_per_image: Some(rows_to_write),
},
wgpu::Extent3d {
width: TEX_WIDTH,
height: rows_to_write,
depth_or_array_layers: 1,
},
);
// Regenerate mipmaps
let mip_count = compute_mip_count(TEX_WIDTH, tex_height);
let cmds = self.generate_mipmaps(
device,
&entry.textures[0],
TEX_WIDTH,
tex_height,
mip_count,
new_total_frames as u32,
);
// Update total_frames after borrow of entry is done
self.entries.get_mut(&pool_index).unwrap().total_frames = new_total_frames as u64;
return cmds;
}
// Full create (first upload or texture needs to grow)
// Remove old entry if exists
self.entries.remove(&pool_index);
let total_frames = new_total_frames;
let total_frames = samples.len() / channels.max(1) as usize;
if total_frames == 0 {
return Vec::new();
}
let max_frames_per_segment = (TEX_WIDTH as u64)
* (device.limits().max_texture_dimension_2d as u64);
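The incremental branch above rewrites only the texture rows that gained frames. The row arithmetic in isolation, with TEX_WIDTH matching the constant at the top of this file:

```rust
const TEX_WIDTH: u32 = 2048;

/// Returns (start_row, rows_to_write) for appending frames
/// old_frames..new_frames to a row-major TEX_WIDTH texture.
fn rows_to_update(old_frames: u32, new_frames: u32) -> (u32, u32) {
    let start_row = old_frames / TEX_WIDTH;
    let end_row = (new_frames + TEX_WIDTH - 1) / TEX_WIDTH; // ceiling division
    (start_row, end_row - start_row)
}

fn main() {
    // Appending 100 frames to 2048 already-uploaded frames touches row 1 only.
    assert_eq!(rows_to_update(2048, 2148), (1, 1));
    // A partially filled last row is rewritten, since it gains new samples.
    assert_eq!(rows_to_update(2000, 2100), (0, 2));
}
```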
@ -411,6 +325,7 @@ impl WaveformGpuResources {
});
// Pack raw samples into Rgba16Float data for mip 0
// R=left_min=left_sample, G=left_max=left_sample, B=right_min, A=right_max
let texel_count = (TEX_WIDTH * tex_height) as usize;
let mut mip0_data: Vec<half::f16> = vec![half::f16::ZERO; texel_count * 4];
@ -426,14 +341,14 @@ impl WaveformGpuResources {
let right = if channels >= 2 && sample_offset + 1 < samples.len() {
samples[sample_offset + 1]
} else {
left
left // Mono: duplicate left to right
};
let texel_offset = frame * 4;
mip0_data[texel_offset] = half::f16::from_f32(left);
mip0_data[texel_offset + 1] = half::f16::from_f32(left);
mip0_data[texel_offset + 2] = half::f16::from_f32(right);
mip0_data[texel_offset + 3] = half::f16::from_f32(right);
mip0_data[texel_offset] = half::f16::from_f32(left); // R = left_min
mip0_data[texel_offset + 1] = half::f16::from_f32(left); // G = left_max
mip0_data[texel_offset + 2] = half::f16::from_f32(right); // B = right_min
mip0_data[texel_offset + 3] = half::f16::from_f32(right); // A = right_max
}
// Upload mip 0
@ -447,7 +362,7 @@ impl WaveformGpuResources {
bytemuck::cast_slice(&mip0_data),
wgpu::TexelCopyBufferLayout {
offset: 0,
bytes_per_row: Some(TEX_WIDTH * 8),
bytes_per_row: Some(TEX_WIDTH * 8), // 4 channels × 2 bytes (f16)
rows_per_image: Some(tex_height),
},
wgpu::Extent3d {
@ -474,7 +389,7 @@ impl WaveformGpuResources {
..Default::default()
});
// Create uniform buffer placeholder
// Create uniform buffer placeholder (will be filled per-draw in paint)
let uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: Some(&format!("waveform_{}_seg{}_uniforms", pool_index, seg)),
size: std::mem::size_of::<WaveformParams>() as u64,
@ -517,7 +432,6 @@ impl WaveformGpuResources {
uniform_buffers,
frames_per_segment,
total_frames: total_frames as u64,
tex_height: (total_frames as u32 + TEX_WIDTH - 1) / TEX_WIDTH,
sample_rate,
channels,
},
@ -698,6 +612,12 @@ fn compute_mip_count(width: u32, height: u32) -> u32 {
(max_dim as f32).log2().floor() as u32 + 1
}
/// Calculate how many texture segments are needed for a given frame count
pub fn segment_count_for_frames(total_frames: u64, max_texture_height: u32) -> u32 {
let max_frames_per_segment = TEX_WIDTH as u64 * max_texture_height as u64;
((total_frames + max_frames_per_segment - 1) / max_frames_per_segment) as u32
}
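A worked example for segment_count_for_frames (the 8192 maximum texture height is an assumption here; the real value comes from device.limits().max_texture_dimension_2d):

```rust
fn main() {
    // One segment holds TEX_WIDTH * max_height frames:
    // 2048 * 8192 = 16_777_216 frames, about 6.3 minutes at 44.1 kHz.
    let max_frames_per_segment = 2048u64 * 8192;
    let ten_minutes = 44_100u64 * 600;
    let segments = (ten_minutes + max_frames_per_segment - 1) / max_frames_per_segment;
    assert_eq!(segments, 2); // a 10-minute file needs two texture segments
}
```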
/// Get the fixed texture width used for all waveform textures
pub fn tex_width() -> u32 {
TEX_WIDTH

View File

@ -72,7 +72,6 @@ fn key_to_char(key: egui::Key, shift: bool) -> Option<char> {
}
/// Response from the IME text field widget
#[allow(dead_code)] // Standard widget response fields; callers will use as features expand
pub struct ImeTextFieldResponse {
/// The egui response for the text field area
pub response: egui::Response,