Compare commits

10 Commits: 908da99321 ... 12d927ed3d

| Author | SHA1 | Date |
|---|---|---|
| | 12d927ed3d | |
| | 408343094a | |
| | 04a7f35b84 | |
| | 068715c0fa | |
| | 777d3ef6be | |
| | 82b58ae0dc | |
| | b86af7bbf5 | |
| | c11dab928c | |
| | ad81cce0c6 | |
| | 93a2252a58 | |

README.md (108 lines changed)

@@ -10,42 +10,118 @@ A free and open-source 2D multimedia editor combining vector animation, audio pr
 
-## Current Features
+## Features
 
 **Vector Animation**
+- GPU-accelerated vector rendering with Vello
 - Draw and animate vector shapes with keyframe-based timeline
 - Non-destructive editing workflow
+- Paint bucket tool for automatic fill detection
 
 **Audio Production**
-- Multi-track audio recording
-- MIDI sequencing with synthesized and sampled instruments
-- Integrated DAW functionality
+- Real-time multi-track audio recording and playback
+- Node graph-based effects processing
+- MIDI sequencing with synthesizers and samplers
+- Comprehensive effects library (reverb, delay, EQ, compression, distortion, etc.)
+- Custom audio engine with lock-free design for glitch-free playback
 
 **Video Editing**
-- Basic video timeline and editing (early stage)
-- FFmpeg-based video decoding
+- Video timeline and editing with FFmpeg-based decoding
+- GPU-accelerated waveform rendering with mipmaps
+- Audio integration from video soundtracks
 
 ## Technical Stack
 
-- **Frontend:** Vanilla JavaScript
-- **Backend:** Rust (Tauri framework)
-- **Audio:** cpal + dasp for audio processing
-- **Video:** FFmpeg for encode/decode
+**Current Implementation (Rust UI)**
+- **UI Framework:** egui (immediate-mode GUI)
+- **GPU Rendering:** Vello + wgpu (Vulkan/Metal/DirectX 12)
+- **Audio Engine:** Custom real-time engine (`daw-backend`)
+  - cpal for cross-platform audio I/O
+  - symphonia for audio decoding
+  - dasp for node graph processing
+- **Video:** FFmpeg 8 for encode/decode
+- **Platform:** Cross-platform (Linux, macOS, Windows)
+
+**Legacy Implementation (Deprecated)**
+- Frontend: Vanilla JavaScript
+- Backend: Rust (Tauri framework)
 
 ## Project Status
 
-Lightningbeam is under active development. Current focus is on core functionality and architecture. Full project export is not yet fully implemented.
+Lightningbeam is under active development on the `rust-ui` branch. The project has been rewritten from a Tauri/JavaScript prototype to a pure Rust application to eliminate IPC bottlenecks and achieve better performance for real-time video and audio processing.
 
-### Known Architectural Challenge
-
-The current Tauri implementation hits IPC bandwidth limitations when streaming decoded video frames from Rust to JavaScript. Tauri's IPC layer has significant serialization overhead (~few MB/s), which is insufficient for real-time high-resolution video rendering.
-
-I'm currently exploring a full Rust rewrite using wgpu/egui to eliminate the IPC bottleneck and handle rendering entirely in native code.
+**Current Status:**
+- ✅ Core UI panes (Stage, Timeline, Asset Library, Info Panel, Toolbar)
+- ✅ Drawing tools (Select, Draw, Rectangle, Ellipse, Paint Bucket, Transform)
+- ✅ Undo/redo system
+- ✅ GPU-accelerated vector rendering
+- ✅ Audio engine with node graph processing
+- ✅ GPU waveform rendering with mipmaps
+- ✅ Video decoding integration
+- 🚧 Export system (in progress)
+- 🚧 Node editor UI (planned)
+- 🚧 Piano roll editor (planned)
+
+## Getting Started
+
+### Prerequisites
+
+- Rust (stable toolchain via [rustup](https://rustup.rs/))
+- System dependencies:
+  - **Linux:** ALSA development files, FFmpeg 8
+  - **macOS:** FFmpeg (via Homebrew)
+  - **Windows:** FFmpeg 8, Visual Studio with C++ tools
+
+See [docs/BUILDING.md](docs/BUILDING.md) for detailed setup instructions.
+
+### Building and Running
+
+```bash
+# Clone the repository
+git clone https://github.com/skykooler/lightningbeam.git
+# Or from Gitea
+git clone https://git.skyler.io/skyler/lightningbeam.git
+
+cd lightningbeam/lightningbeam-ui
+
+# Build and run
+cargo run
+
+# Or build an optimized release version
+cargo build --release
+```
+
+### Documentation
+
+- **[CONTRIBUTING.md](CONTRIBUTING.md)** - Development setup and contribution guidelines
+- **[ARCHITECTURE.md](ARCHITECTURE.md)** - System architecture overview
+- **[docs/BUILDING.md](docs/BUILDING.md)** - Detailed build instructions and troubleshooting
+- **[docs/AUDIO_SYSTEM.md](docs/AUDIO_SYSTEM.md)** - Audio engine architecture and development
+- **[docs/UI_SYSTEM.md](docs/UI_SYSTEM.md)** - UI pane system and tool development
+- **[docs/RENDERING.md](docs/RENDERING.md)** - GPU rendering pipeline and shaders
 
 ## Project History
 
-Lightningbeam evolved from earlier multimedia editing projects I've worked on since 2010, including the FreeJam DAW. The current JavaScript/Tauri iteration began in November 2023.
+Lightningbeam evolved from earlier multimedia editing projects I've worked on since 2010, including the FreeJam DAW. The JavaScript/Tauri prototype began in November 2023, and the Rust UI rewrite started in late 2024 to eliminate performance bottlenecks and provide a more integrated native experience.
 
 ## Goals
 
-Create a comprehensive FOSS alternative for 2D-focused multimedia work, integrating animation, audio, and video editing in a unified workflow.
+Create a comprehensive FOSS alternative for 2D-focused multimedia work, integrating animation, audio, and video editing in a unified workflow. Lightningbeam aims to be:
+
+- **Fast:** GPU-accelerated rendering and real-time audio processing
+- **Flexible:** Node graph-based audio routing and modular synthesis
+- **Integrated:** Seamless workflow across animation, audio, and video
+- **Open:** Free and open-source, built on open standards
+
+## Contributing
+
+Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+
+## License
+
+[License information to be added]
+
+## Links
+
+- **GitHub:** https://github.com/skykooler/lightningbeam
+- **Gitea:** https://git.skyler.io/skyler/lightningbeam
Binary file not shown.
Binary file not shown.
@@ -69,6 +69,11 @@ pub struct ReadAheadBuffer {
     channels: u32,
     /// Source file sample rate.
     sample_rate: u32,
+    /// Last file-local frame requested by the audio callback.
+    /// Written by the consumer (render_from_file), read by the disk reader.
+    /// The disk reader uses this instead of the global playhead to know
+    /// where in the file to buffer around.
+    target_frame: AtomicU64,
 }
 
 // SAFETY: See the doc comment on ReadAheadBuffer for the full safety argument.
 
@@ -102,6 +107,7 @@ impl ReadAheadBuffer {
             capacity_frames,
             channels,
             sample_rate,
+            target_frame: AtomicU64::new(0),
         }
     }
 
@@ -158,6 +164,20 @@ impl ReadAheadBuffer {
         self.valid_frames.load(Ordering::Acquire)
     }
 
+    /// Update the target frame — the file-local frame the audio callback
+    /// is currently reading from. Called by `render_from_file` (consumer).
+    #[inline]
+    pub fn set_target_frame(&self, frame: u64) {
+        self.target_frame.store(frame, Ordering::Relaxed);
+    }
+
+    /// Get the target frame set by the audio callback.
+    /// Called by the disk reader thread (producer).
+    #[inline]
+    pub fn target_frame(&self) -> u64 {
+        self.target_frame.load(Ordering::Relaxed)
+    }
+
     /// Reset the buffer to start at `new_start` with zero valid frames.
     /// Called by the **disk reader thread** (producer) after a seek.
     pub fn reset(&self, new_start: u64) {
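The new `target_frame` field is a single-word handoff: the audio callback (consumer) publishes the file-local frame it is reading, and the disk reader (producer) polls it to decide where to buffer. A minimal std-only sketch of that pattern follows; the type and method names are illustrative stand-ins, not the crate's actual API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Illustrative stand-in for ReadAheadBuffer's target_frame handoff.
struct TargetFrame(AtomicU64);

impl TargetFrame {
    // Consumer (audio callback) publishes the file-local frame it is reading.
    fn set(&self, frame: u64) {
        self.0.store(frame, Ordering::Relaxed);
    }
    // Producer (disk reader) polls it to decide where to buffer around.
    fn get(&self) -> u64 {
        self.0.load(Ordering::Relaxed)
    }
}

fn main() {
    let target = Arc::new(TargetFrame(AtomicU64::new(0)));

    let consumer = {
        let t = Arc::clone(&target);
        thread::spawn(move || {
            // Simulate the audio callback advancing through the file
            // one 512-frame block at a time.
            for frame in (0..=48_000).step_by(512) {
                t.set(frame as u64);
            }
        })
    };
    consumer.join().unwrap();

    // The disk reader would now fill ahead of the last published frame.
    assert_eq!(target.get(), 47_616); // last multiple of 512 <= 48_000
    println!("last target frame: {}", target.get());
}
```

Relaxed ordering suffices here because the value is a self-contained hint: a slightly stale read only means the reader buffers around a marginally older position, matching the orderings used in the diff.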
@@ -431,20 +451,16 @@ pub struct DiskReader {
 
 impl DiskReader {
     /// Create a new disk reader with a background thread.
-    ///
-    /// `playhead_frame` should be the same `Arc<AtomicU64>` used by the engine
-    /// so the disk reader knows where to fill ahead.
     pub fn new(playhead_frame: Arc<AtomicU64>, _sample_rate: u32) -> Self {
         let (command_tx, command_rx) = rtrb::RingBuffer::new(64);
         let running = Arc::new(AtomicBool::new(true));
 
         let thread_running = running.clone();
-        let thread_playhead = playhead_frame.clone();
 
         let thread_handle = std::thread::Builder::new()
             .name("disk-reader".into())
             .spawn(move || {
-                Self::reader_thread(command_rx, thread_playhead, thread_running);
+                Self::reader_thread(command_rx, thread_running);
             })
             .expect("Failed to spawn disk reader thread");
 
@@ -473,7 +489,6 @@ impl DiskReader {
     /// The disk reader background thread.
     fn reader_thread(
         mut command_rx: rtrb::Consumer<DiskReaderCommand>,
-        playhead_frame: Arc<AtomicU64>,
         running: Arc<AtomicBool>,
     ) {
         let mut active_files: HashMap<usize, (CompressedReader, Arc<ReadAheadBuffer>)> =
 
@@ -506,6 +521,7 @@ impl DiskReader {
                 }
                 DiskReaderCommand::Seek { frame } => {
                     for (_, (reader, buffer)) in active_files.iter_mut() {
+                        buffer.set_target_frame(frame);
                         buffer.reset(frame);
                         if let Err(e) = reader.seek(frame) {
                             eprintln!("[DiskReader] Seek error: {}", e);
 
@@ -518,26 +534,28 @@ impl DiskReader {
                 }
             }
 
-            let playhead = playhead_frame.load(Ordering::Relaxed);
-
-            // Fill each active file's buffer ahead of the playhead.
+            // Fill each active file's buffer ahead of its target frame.
+            // Each file's target_frame is set by the audio callback in
+            // render_from_file, giving the file-local frame being read.
+            // This is independent of the global engine playhead.
            for (_pool_index, (reader, buffer)) in active_files.iter_mut() {
+                let target = buffer.target_frame();
                 let buf_start = buffer.start_frame();
                 let buf_valid = buffer.valid_frames_count();
                 let buf_end = buf_start + buf_valid;
 
-                // If the playhead has jumped behind or far ahead of the buffer,
+                // If the target has jumped behind or far ahead of the buffer,
                 // seek the decoder and reset.
-                if playhead < buf_start || playhead > buf_end + reader.sample_rate as u64 {
-                    buffer.reset(playhead);
-                    let _ = reader.seek(playhead);
+                if target < buf_start || target > buf_end + reader.sample_rate as u64 {
+                    buffer.reset(target);
+                    let _ = reader.seek(target);
                     continue;
                 }
 
-                // Advance the buffer start to reclaim space behind the playhead.
+                // Advance the buffer start to reclaim space behind the target.
                 // Keep a small lookback for sinc interpolation (~32 frames).
                 let lookback = 64u64;
-                let advance_to = playhead.saturating_sub(lookback);
+                let advance_to = target.saturating_sub(lookback);
                 if advance_to > buf_start {
                     buffer.advance_start(advance_to);
                 }
 
@@ -547,7 +565,7 @@ impl DiskReader {
                 let buf_valid = buffer.valid_frames_count();
                 let buf_end = buf_start + buf_valid;
                 let prefetch_target =
-                    playhead + (PREFETCH_SECONDS * reader.sample_rate as f64) as u64;
+                    target + (PREFETCH_SECONDS * reader.sample_rate as f64) as u64;
 
                 if buf_end >= prefetch_target {
                     continue; // Already filled far enough ahead.
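The fill loop above makes three decisions per file: reseek when the target leaves the buffered range, reclaim space behind the target, and skip files already buffered past the prefetch horizon. Those decisions can be isolated as a pure function, sketched below under assumed constants (~1 s jump tolerance, 64-frame lookback, as in the diff); this is an illustration of the logic, not the actual `daw-backend` code, and it reports the reclaim decision alongside the fill rather than applying it first:

```rust
/// What the disk-reader fill loop should do for one file, given the
/// consumer's target frame and the buffered range [buf_start, buf_end).
#[derive(Debug, PartialEq)]
enum FillAction {
    /// Target left the buffered range: seek the decoder and reset.
    SeekTo(u64),
    /// Buffer already extends past the prefetch horizon: nothing to do.
    AlreadyFilled,
    /// Decode more frames, optionally reclaiming space behind the target.
    Fill { advance_start_to: Option<u64> },
}

fn plan_fill(
    target: u64,
    buf_start: u64,
    buf_end: u64,
    sample_rate: u64,
    prefetch_seconds: f64,
) -> FillAction {
    // Target jumped behind the buffer or more than ~1 s past its end.
    if target < buf_start || target > buf_end + sample_rate {
        return FillAction::SeekTo(target);
    }
    // Reclaim space behind the target, keeping a small lookback window.
    let lookback = 64;
    let advance_to = target.saturating_sub(lookback);
    let advance = (advance_to > buf_start).then_some(advance_to);
    // Skip files already buffered far enough ahead.
    let prefetch_target = target + (prefetch_seconds * sample_rate as f64) as u64;
    if buf_end >= prefetch_target {
        return FillAction::AlreadyFilled;
    }
    FillAction::Fill { advance_start_to: advance }
}

fn main() {
    // Playback inside the buffer, buffer not yet filled far enough ahead.
    assert_eq!(
        plan_fill(10_000, 0, 20_000, 48_000, 2.0),
        FillAction::Fill { advance_start_to: Some(9_936) }
    );
    // A seek far ahead forces a decoder reseek.
    assert_eq!(plan_fill(500_000, 0, 20_000, 48_000, 2.0), FillAction::SeekTo(500_000));
    // Buffer already covers the prefetch horizon.
    assert_eq!(plan_fill(10_000, 9_000, 200_000, 48_000, 2.0), FillAction::AlreadyFilled);
    println!("fill planning ok");
}
```

Factoring the policy out like this makes the thread loop trivial to unit-test without a decoder or ring buffer.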
@@ -272,24 +272,16 @@ impl Engine {
             // Forward chunk generation events from background threads
             while let Ok(event) = self.chunk_generation_rx.try_recv() {
                 match event {
-                    AudioEvent::WaveformDecodeComplete { pool_index, samples } => {
-                        // Update pool entry with decoded waveform samples
-                        if let Some(file) = self.audio_pool.get_file_mut(pool_index) {
-                            let total = file.frames;
-                            if let crate::audio::pool::AudioStorage::Compressed {
-                                ref mut decoded_for_waveform,
-                                ref mut decoded_frames,
-                                ..
-                            } = file.storage {
-                                eprintln!("[ENGINE] Waveform decode complete for pool {}: {} samples", pool_index, samples.len());
-                                *decoded_for_waveform = samples;
-                                *decoded_frames = total;
-                            }
-                            // Notify frontend that waveform data is ready
+                    AudioEvent::WaveformDecodeComplete { pool_index, samples, decoded_frames: _df, total_frames: _tf } => {
+                        // Forward samples directly to UI — no clone, just move
+                        if let Some(file) = self.audio_pool.get_file(pool_index) {
+                            let sr = file.sample_rate;
+                            let ch = file.channels;
                             let _ = self.event_tx.push(AudioEvent::AudioDecodeProgress {
                                 pool_index,
-                                decoded_frames: total,
-                                total_frames: total,
+                                samples,
+                                sample_rate: sr,
+                                channels: ch,
                             });
                         }
                     }
 
@@ -489,6 +481,10 @@ impl Engine {
                 self.playhead_atomic.store(0, Ordering::Relaxed);
                 // Stop all MIDI notes when stopping playback
                 self.project.stop_all_notes();
+                // Reset disk reader buffers to the new playhead position
+                if let Some(ref mut dr) = self.disk_reader {
+                    dr.send(crate::audio::disk_reader::DiskReaderCommand::Seek { frame: 0 });
+                }
             }
             Command::Pause => {
                 self.playing = false;
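The "no clone, just move" comment in the hunk above is the whole point of the change: destructuring the incoming event moves the decoded sample vector into the outgoing event, so the (potentially very large) buffer is never copied. A self-contained sketch of the pattern, with illustrative event names rather than the engine's real types:

```rust
use std::sync::mpsc;

enum Incoming {
    DecodeComplete { pool_index: usize, samples: Vec<f32> },
}

enum Outgoing {
    DecodeProgress { pool_index: usize, samples: Vec<f32> },
}

fn forward(event: Incoming, tx: &mpsc::Sender<Outgoing>) {
    match event {
        // Destructuring moves `samples` out of the event; only the Vec's
        // (pointer, length, capacity) triple is copied, never the data.
        Incoming::DecodeComplete { pool_index, samples } => {
            let _ = tx.send(Outgoing::DecodeProgress { pool_index, samples });
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let samples = vec![0.0f32; 1_000_000];
    let ptr_before = samples.as_ptr();

    forward(Incoming::DecodeComplete { pool_index: 3, samples }, &tx);

    let Outgoing::DecodeProgress { pool_index, samples } = rx.recv().unwrap();
    // Same heap allocation arrived on the other side: moved, not cloned.
    assert_eq!(pool_index, 3);
    assert_eq!(samples.as_ptr(), ptr_before);
    println!("moved {} samples without copying", samples.len());
}
```

This is why the diff also switches from `get_file_mut` to `get_file`: the pool no longer stores a second copy of the waveform data, it only supplies the metadata (`sample_rate`, `channels`) that travels with the moved samples.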
@@ -1686,165 +1682,159 @@ impl Engine {
             }
 
             Command::ImportAudio(path) => {
-                let path_str = path.to_string_lossy().to_string();
-
-                // Step 1: Read metadata (fast — no decoding)
-                let metadata = match crate::io::read_metadata(&path) {
-                    Ok(m) => m,
-                    Err(e) => {
-                        eprintln!("[ENGINE] ImportAudio failed to read metadata for {:?}: {}", path, e);
-                        return;
-                    }
-                };
-
-                let pool_index;
-
-                eprintln!("[ENGINE] ImportAudio: format={:?}, ch={}, sr={}, n_frames={:?}, duration={:.2}s, path={}",
-                    metadata.format, metadata.channels, metadata.sample_rate, metadata.n_frames, metadata.duration, path_str);
-
-                match metadata.format {
-                    crate::io::AudioFormat::Pcm => {
-                        // WAV/AIFF: memory-map the file for instant availability
-                        let file = match std::fs::File::open(&path) {
-                            Ok(f) => f,
-                            Err(e) => {
-                                eprintln!("[ENGINE] ImportAudio failed to open {:?}: {}", path, e);
-                                return;
-                            }
-                        };
-
-                        // SAFETY: The file is opened read-only. The mmap is shared
-                        // immutably. We never write to it.
-                        let mmap = match unsafe { memmap2::Mmap::map(&file) } {
-                            Ok(m) => m,
-                            Err(e) => {
-                                eprintln!("[ENGINE] ImportAudio mmap failed for {:?}: {}", path, e);
-                                return;
-                            }
-                        };
-
-                        // Parse WAV header to find PCM data offset and format
-                        let header = match crate::io::parse_wav_header(&mmap) {
-                            Ok(h) => h,
-                            Err(e) => {
-                                eprintln!("[ENGINE] ImportAudio WAV parse failed for {:?}: {}", path, e);
-                                return;
-                            }
-                        };
-
-                        let audio_file = crate::audio::pool::AudioFile::from_mmap(
-                            path.clone(),
-                            mmap,
-                            header.data_offset,
-                            header.sample_format,
-                            header.channels,
-                            header.sample_rate,
-                            header.total_frames,
-                        );
-
-                        pool_index = self.audio_pool.add_file(audio_file);
-                    }
-                    crate::io::AudioFormat::Compressed => {
-                        let sync_decode = std::env::var("DAW_SYNC_DECODE").is_ok();
-
-                        if sync_decode {
-                            // Diagnostic: full synchronous decode to InMemory (bypasses ring buffer)
-                            eprintln!("[ENGINE] DAW_SYNC_DECODE: doing full decode of {:?}", path);
-                            match crate::io::AudioFile::load(&path) {
-                                Ok(loaded) => {
-                                    let ext = path.extension()
-                                        .and_then(|e| e.to_str())
-                                        .map(|s| s.to_lowercase());
-                                    let audio_file = crate::audio::pool::AudioFile::with_format(
-                                        path.clone(),
-                                        loaded.data,
-                                        loaded.channels,
-                                        loaded.sample_rate,
-                                        ext,
-                                    );
-                                    pool_index = self.audio_pool.add_file(audio_file);
-                                    eprintln!("[ENGINE] DAW_SYNC_DECODE: pool_index={}, frames={}", pool_index, loaded.frames);
-                                }
-                                Err(e) => {
-                                    eprintln!("[ENGINE] DAW_SYNC_DECODE failed: {}", e);
-                                    return;
-                                }
-                            }
-                        } else {
-                            // Normal path: stream decode via disk reader
-                            let ext = path.extension()
-                                .and_then(|e| e.to_str())
-                                .map(|s| s.to_lowercase());
-
-                            let total_frames = metadata.n_frames.unwrap_or_else(|| {
-                                (metadata.duration * metadata.sample_rate as f64).ceil() as u64
-                            });
-
-                            let mut audio_file = crate::audio::pool::AudioFile::from_compressed(
-                                path.clone(),
-                                metadata.channels,
-                                metadata.sample_rate,
-                                total_frames,
-                                ext,
-                            );
-
-                            let buffer = crate::audio::disk_reader::DiskReader::create_buffer(
-                                metadata.sample_rate,
-                                metadata.channels,
-                            );
-                            audio_file.read_ahead = Some(buffer.clone());
-
-                            pool_index = self.audio_pool.add_file(audio_file);
-
-                            eprintln!("[ENGINE] Compressed: total_frames={}, pool_index={}, has_disk_reader={}",
-                                total_frames, pool_index, self.disk_reader.is_some());
-
-                            if let Some(ref mut dr) = self.disk_reader {
-                                dr.send(crate::audio::disk_reader::DiskReaderCommand::ActivateFile {
-                                    pool_index,
-                                    path: path.clone(),
-                                    buffer,
-                                });
-                            }
-
-                            // Spawn background thread to decode full file for waveform display
-                            let bg_tx = self.chunk_generation_tx.clone();
-                            let bg_path = path.clone();
-                            let _ = std::thread::Builder::new()
-                                .name(format!("waveform-decode-{}", pool_index))
-                                .spawn(move || {
-                                    eprintln!("[WAVEFORM DECODE] Starting full decode of {:?}", bg_path);
-                                    match crate::io::AudioFile::load(&bg_path) {
-                                        Ok(loaded) => {
-                                            eprintln!("[WAVEFORM DECODE] Complete: {} frames, {} channels",
-                                                loaded.frames, loaded.channels);
-                                            let _ = bg_tx.send(AudioEvent::WaveformDecodeComplete {
-                                                pool_index,
-                                                samples: loaded.data,
-                                            });
-                                        }
-                                        Err(e) => {
-                                            eprintln!("[WAVEFORM DECODE] Failed to decode {:?}: {}", bg_path, e);
-                                        }
-                                    }
-                                });
-                        }
-                    }
-                }
-
-                // Emit AudioFileReady event
-                let _ = self.event_tx.push(AudioEvent::AudioFileReady {
-                    pool_index,
-                    path: path_str,
-                    channels: metadata.channels,
-                    sample_rate: metadata.sample_rate,
-                    duration: metadata.duration,
-                    format: metadata.format,
-                });
+                if let Err(e) = self.do_import_audio(&path) {
+                    eprintln!("[ENGINE] ImportAudio failed for {:?}: {}", path, e);
+                }
             }
         }
     }
+
+    /// Import an audio file into the pool: mmap for PCM, streaming for compressed.
+    /// Returns the pool index on success. Emits AudioFileReady event.
+    fn do_import_audio(&mut self, path: &std::path::Path) -> Result<usize, String> {
+        let path_str = path.to_string_lossy().to_string();
+
+        let metadata = crate::io::read_metadata(path)
+            .map_err(|e| format!("Failed to read metadata for {:?}: {}", path, e))?;
+
+        eprintln!("[ENGINE] ImportAudio: format={:?}, ch={}, sr={}, n_frames={:?}, duration={:.2}s, path={}",
+            metadata.format, metadata.channels, metadata.sample_rate, metadata.n_frames, metadata.duration, path_str);
+
+        let pool_index = match metadata.format {
+            crate::io::AudioFormat::Pcm => {
+                let file = std::fs::File::open(path)
+                    .map_err(|e| format!("Failed to open {:?}: {}", path, e))?;
+
+                // SAFETY: The file is opened read-only. The mmap is shared
+                // immutably. We never write to it.
+                let mmap = unsafe { memmap2::Mmap::map(&file) }
+                    .map_err(|e| format!("mmap failed for {:?}: {}", path, e))?;
+
+                let header = crate::io::parse_wav_header(&mmap)
+                    .map_err(|e| format!("WAV parse failed for {:?}: {}", path, e))?;
+
+                let audio_file = crate::audio::pool::AudioFile::from_mmap(
+                    path.to_path_buf(),
+                    mmap,
+                    header.data_offset,
+                    header.sample_format,
+                    header.channels,
+                    header.sample_rate,
+                    header.total_frames,
+                );
+
+                self.audio_pool.add_file(audio_file)
+            }
+            crate::io::AudioFormat::Compressed => {
+                let sync_decode = std::env::var("DAW_SYNC_DECODE").is_ok();
+
+                if sync_decode {
+                    eprintln!("[ENGINE] DAW_SYNC_DECODE: doing full decode of {:?}", path);
+                    let loaded = crate::io::AudioFile::load(path)
+                        .map_err(|e| format!("DAW_SYNC_DECODE failed: {}", e))?;
+                    let ext = path.extension()
+                        .and_then(|e| e.to_str())
+                        .map(|s| s.to_lowercase());
+                    let audio_file = crate::audio::pool::AudioFile::with_format(
+                        path.to_path_buf(),
+                        loaded.data,
+                        loaded.channels,
+                        loaded.sample_rate,
+                        ext,
+                    );
+                    let idx = self.audio_pool.add_file(audio_file);
+                    eprintln!("[ENGINE] DAW_SYNC_DECODE: pool_index={}, frames={}", idx, loaded.frames);
+                    idx
+                } else {
+                    let ext = path.extension()
+                        .and_then(|e| e.to_str())
+                        .map(|s| s.to_lowercase());
+
+                    let total_frames = metadata.n_frames.unwrap_or_else(|| {
+                        (metadata.duration * metadata.sample_rate as f64).ceil() as u64
+                    });
+
+                    let mut audio_file = crate::audio::pool::AudioFile::from_compressed(
+                        path.to_path_buf(),
+                        metadata.channels,
+                        metadata.sample_rate,
+                        total_frames,
+                        ext,
+                    );
+
+                    let buffer = crate::audio::disk_reader::DiskReader::create_buffer(
+                        metadata.sample_rate,
+                        metadata.channels,
+                    );
+                    audio_file.read_ahead = Some(buffer.clone());
+
+                    let idx = self.audio_pool.add_file(audio_file);
+
+                    eprintln!("[ENGINE] Compressed: total_frames={}, pool_index={}, has_disk_reader={}",
+                        total_frames, idx, self.disk_reader.is_some());
+
+                    if let Some(ref mut dr) = self.disk_reader {
+                        dr.send(crate::audio::disk_reader::DiskReaderCommand::ActivateFile {
+                            pool_index: idx,
+                            path: path.to_path_buf(),
+                            buffer,
+                        });
+                    }
+
+                    // Spawn background thread to decode file progressively for waveform display
+                    let bg_tx = self.chunk_generation_tx.clone();
+                    let bg_path = path.to_path_buf();
+                    let bg_total_frames = total_frames;
+                    let _ = std::thread::Builder::new()
+                        .name(format!("waveform-decode-{}", idx))
+                        .spawn(move || {
+                            crate::io::AudioFile::decode_progressive(
+                                &bg_path,
+                                bg_total_frames,
+                                |audio_data, decoded_frames, total| {
+                                    let _ = bg_tx.send(AudioEvent::WaveformDecodeComplete {
+                                        pool_index: idx,
+                                        samples: audio_data.to_vec(),
+                                        decoded_frames,
+                                        total_frames: total,
+                                    });
+                                },
+                            );
+                        });
+                    idx
+                }
+            }
+        };
+
+        // Emit AudioFileReady event
+        let _ = self.event_tx.push(AudioEvent::AudioFileReady {
+            pool_index,
+            path: path_str,
+            channels: metadata.channels,
+            sample_rate: metadata.sample_rate,
+            duration: metadata.duration,
+            format: metadata.format,
+        });
+
+        // For PCM files, send samples inline so the UI doesn't need to
+        // do a blocking get_pool_audio_samples() query.
+        if metadata.format == crate::io::AudioFormat::Pcm {
+            if let Some(file) = self.audio_pool.get_file(pool_index) {
+                let samples = file.data().to_vec();
+                if !samples.is_empty() {
+                    let _ = self.event_tx.push(AudioEvent::AudioDecodeProgress {
+                        pool_index,
+                        samples,
+                        sample_rate: metadata.sample_rate,
+                        channels: metadata.channels,
+                    });
+                }
+            }
+        }
+
+        Ok(pool_index)
+    }
 
     /// Handle synchronous queries from the UI thread
     fn handle_query(&mut self, query: Query) {
         let response = match query {
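Beyond extracting a helper, the hunk above converts every `match … { Ok => …, Err => { eprintln! + return } }` into a one-line `map_err` + `?`, so failures propagate as `Result` and the caller decides how to report them. A generic sketch of that refactor pattern, using hypothetical helper names rather than the engine's real functions:

```rust
use std::fmt::Display;

// Hypothetical fallible step standing in for read_metadata / mmap / parse.
fn read_metadata(path: &str) -> Result<u32, String> {
    if path.ends_with(".wav") { Ok(44_100) } else { Err("unknown format".into()) }
}

// Before: each step was a match that logged and early-returned, and the
// caller could not tell success from failure. After: each step is one
// line, and the function returns Result so the caller owns the reporting.
fn import(path: &str) -> Result<u32, String> {
    let sample_rate = read_metadata(path)
        .map_err(|e| format!("Failed to read metadata for {:?}: {}", path, e))?;
    Ok(sample_rate)
}

fn report<E: Display>(e: E) {
    eprintln!("[ENGINE] ImportAudio failed: {}", e);
}

fn main() {
    assert_eq!(import("clip.wav"), Ok(44_100));
    // Error path: the message carries both the path and the cause.
    match import("clip.xyz") {
        Err(e) => {
            assert!(e.contains("clip.xyz") && e.contains("unknown format"));
            report(e);
        }
        Ok(_) => unreachable!(),
    }
    println!("import refactor ok");
}
```

The same shape is what lets the new `Query::ImportAudioSync` arm reuse `do_import_audio` and hand the `Result` straight back to the UI.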
@@ -2231,6 +2221,9 @@ impl Engine {
 
                 QueryResponse::AudioFileAddedSync(Ok(pool_index))
             }
+            Query::ImportAudioSync(path) => {
+                QueryResponse::AudioImportedSync(self.do_import_audio(&path))
+            }
             Query::GetProject => {
                 // Clone the entire project for serialization
                 QueryResponse::ProjectRetrieved(Ok(Box::new(self.project.clone())))
 
@@ -2431,6 +2424,12 @@ impl Engine {
     fn handle_stop_midi_recording(&mut self) {
         eprintln!("[MIDI_RECORDING] handle_stop_midi_recording called");
         if let Some(mut recording) = self.midi_recording_state.take() {
+            // Send note-off to the synth for any notes still held, so they don't get stuck
+            let track_id_for_noteoff = recording.track_id;
+            for note_num in recording.active_note_numbers() {
+                self.project.send_midi_note_off(track_id_for_noteoff, note_num);
+            }
+
             // Close out any active notes at the current playhead position
             let end_time = self.playhead as f64 / self.sample_rate as f64;
             eprintln!("[MIDI_RECORDING] Closing active notes at time {}", end_time);
 
@@ -2668,6 +2667,21 @@ impl EngineController {
         let _ = self.command_tx.push(Command::ImportAudio(path));
     }
 
+    /// Import an audio file synchronously and get the pool index.
+    /// Does the same work as `import_audio` (mmap for PCM, streaming for
+    /// compressed) but returns the real pool index directly.
+    /// NOTE: briefly blocks the UI thread during file setup (sub-ms for PCM
+    /// mmap; a few ms for compressed streaming init). If this becomes a
+    /// problem for very large files, switch to async import with event-based
+    /// pool index reconciliation.
+    pub fn import_audio_sync(&mut self, path: std::path::PathBuf) -> Result<usize, String> {
+        let query = Query::ImportAudioSync(path);
+        match self.send_query(query)? {
+            QueryResponse::AudioImportedSync(result) => result,
+            _ => Err("Unexpected query response".to_string()),
+        }
+    }
+
     /// Add a clip to an audio track
     pub fn add_audio_clip(&mut self, track_id: TrackId, pool_index: usize, start_time: f64, duration: f64, offset: f64) {
         let _ = self.command_tx.push(Command::AddAudioClip(track_id, pool_index, start_time, duration, offset));
|
|
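The new `import_audio_sync` turns the fire-and-forget `Command::ImportAudio` push into a blocking query/response round trip, so the caller gets the real pool index back. A minimal sketch of that pattern, using `std::sync::mpsc` channels and a per-query reply sender in place of the engine's actual transport (only the `Query`/`QueryResponse` variant names come from the diff; the rest is illustrative):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-ins mirroring the enums in the diff; the channel plumbing is illustrative.
enum Query {
    ImportAudioSync(std::path::PathBuf),
}
enum QueryResponse {
    AudioImportedSync(Result<usize, String>),
}

// UI-side helper: send the query, then block until the engine replies
// with the real pool index.
fn import_audio_sync_roundtrip(
    query_tx: &mpsc::Sender<(Query, mpsc::Sender<QueryResponse>)>,
    path: std::path::PathBuf,
) -> Result<usize, String> {
    let (reply_tx, reply_rx) = mpsc::channel();
    query_tx
        .send((Query::ImportAudioSync(path), reply_tx))
        .map_err(|e| e.to_string())?;
    match reply_rx.recv().map_err(|e| e.to_string())? {
        QueryResponse::AudioImportedSync(result) => result,
    }
}

fn main() {
    let (query_tx, query_rx) = mpsc::channel::<(Query, mpsc::Sender<QueryResponse>)>();

    // Engine thread: registers the file and answers with its pool index.
    let engine = thread::spawn(move || {
        while let Ok((query, reply_tx)) = query_rx.recv() {
            match query {
                Query::ImportAudioSync(_path) => {
                    let _ = reply_tx.send(QueryResponse::AudioImportedSync(Ok(0)));
                }
            }
        }
    });

    let index = import_audio_sync_roundtrip(&query_tx, "clip.wav".into()).unwrap();
    assert_eq!(index, 0);

    drop(query_tx); // closes the channel so the engine thread exits
    engine.join().unwrap();
}
```

The blocking wait is what the doc comment's "briefly blocks the UI thread" warning refers to; the async alternative it mentions would replace `recv()` with an event the UI consumes later.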
@@ -72,7 +72,7 @@ pub fn export_audio<P: AsRef<Path>>(
     midi_pool: &MidiClipPool,
     settings: &ExportSettings,
     output_path: P,
-    mut event_tx: Option<&mut rtrb::Producer<AudioEvent>>,
+    event_tx: Option<&mut rtrb::Producer<AudioEvent>>,
 ) -> Result<(), String>
 {
     // Route to appropriate export implementation based on format

@@ -435,8 +435,6 @@ fn export_mp3<P: AsRef<Path>>(
             channel_layout,
             pts,
         )?;
-
-        frames_rendered += final_frame_size;
     }

     // Flush encoder

@@ -602,8 +600,6 @@ fn export_aac<P: AsRef<Path>>(
             channel_layout,
             pts,
         )?;
-
-        frames_rendered += final_frame_size;
     }

     // Flush encoder
@@ -617,35 +613,6 @@ fn export_aac<P: AsRef<Path>>(
     Ok(())
 }

-/// Convert interleaved f32 samples to planar i16 format
-fn convert_to_planar_i16(interleaved: &[f32], channels: u32) -> Vec<Vec<i16>> {
-    let num_frames = interleaved.len() / channels as usize;
-    let mut planar = vec![vec![0i16; num_frames]; channels as usize];
-
-    for (i, chunk) in interleaved.chunks(channels as usize).enumerate() {
-        for (ch, &sample) in chunk.iter().enumerate() {
-            let clamped = sample.max(-1.0).min(1.0);
-            planar[ch][i] = (clamped * 32767.0) as i16;
-        }
-    }
-
-    planar
-}
-
-/// Convert interleaved f32 samples to planar f32 format
-fn convert_to_planar_f32(interleaved: &[f32], channels: u32) -> Vec<Vec<f32>> {
-    let num_frames = interleaved.len() / channels as usize;
-    let mut planar = vec![vec![0.0f32; num_frames]; channels as usize];
-
-    for (i, chunk) in interleaved.chunks(channels as usize).enumerate() {
-        for (ch, &sample) in chunk.iter().enumerate() {
-            planar[ch][i] = sample;
-        }
-    }
-
-    planar
-}
-
 /// Convert a chunk of interleaved f32 samples to planar i16 format
 fn convert_chunk_to_planar_i16(interleaved: &[f32], channels: u32) -> Vec<Vec<i16>> {
     let num_frames = interleaved.len() / channels as usize;
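The surviving `convert_chunk_to_planar_i16` follows the same shape as the removed whole-buffer helpers: de-interleave frame by frame, clamping each sample to [-1, 1] before scaling to the i16 range. A self-contained version of that conversion (reconstructed from the removed code above, with `clamp` in place of the `max`/`min` chain):

```rust
// Interleaved stereo is [L0, R0, L1, R1, ...]; planar output is one Vec per channel.
fn convert_chunk_to_planar_i16(interleaved: &[f32], channels: u32) -> Vec<Vec<i16>> {
    let num_frames = interleaved.len() / channels as usize;
    let mut planar = vec![vec![0i16; num_frames]; channels as usize];
    for (i, frame) in interleaved.chunks(channels as usize).enumerate() {
        for (ch, &sample) in frame.iter().enumerate() {
            // Clamp to [-1, 1] before scaling so out-of-range floats can't wrap.
            planar[ch][i] = (sample.clamp(-1.0, 1.0) * 32767.0) as i16;
        }
    }
    planar
}

fn main() {
    // Stereo input: frame 0 = (1.0, -1.0), frame 1 = (0.0, 0.5).
    let planar = convert_chunk_to_planar_i16(&[1.0, -1.0, 0.0, 0.5], 2);
    assert_eq!(planar[0], vec![32767, 0]);      // left channel
    assert_eq!(planar[1], vec![-32767, 16383]); // right channel
}
```

Encoders like the MP3/AAC paths above typically require this planar layout, which is why the helpers exist at all.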
@@ -256,7 +256,8 @@ impl MidiClipInstance {
         // Get events from the clip that fall within the internal range
         for event in &clip.events {
             // Skip events outside the trimmed region
-            if event.timestamp < self.internal_start || event.timestamp >= self.internal_end {
+            // Use > (not >=) for internal_end so note-offs at the clip boundary are included
+            if event.timestamp < self.internal_start || event.timestamp > self.internal_end {
                 continue;
             }

@@ -265,9 +266,10 @@ impl MidiClipInstance {
             let timeline_time = self.external_start + loop_offset + relative_content_time;

             // Check if within current buffer range and instance bounds
+            // Use <= for external_end so note-offs at the clip boundary are included
             if timeline_time >= range_start_seconds
                 && timeline_time < range_end_seconds
-                && timeline_time < external_end
+                && timeline_time <= external_end
             {
                 let mut adjusted_event = *event;
                 adjusted_event.timestamp = timeline_time;
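The two boundary fixes above are easiest to see with concrete numbers: a note-off that lands exactly on a clip's end timestamp is dropped by an exclusive comparison but kept by an inclusive one, which is what un-sticks notes at clip boundaries. A tiny standalone check of both predicates (hypothetical helper names; the comparisons match the diff):

```rust
// Old behavior: exclusive end — an event exactly at `end` is skipped.
fn kept_with_exclusive_end(timestamp: f64, start: f64, end: f64) -> bool {
    !(timestamp < start || timestamp >= end)
}

// New behavior: inclusive end — an event exactly at `end` is kept.
fn kept_with_inclusive_end(timestamp: f64, start: f64, end: f64) -> bool {
    !(timestamp < start || timestamp > end)
}

fn main() {
    // A note-off exactly at the clip boundary (end = 4.0 seconds).
    assert!(!kept_with_exclusive_end(4.0, 0.0, 4.0)); // dropped -> stuck note
    assert!(kept_with_inclusive_end(4.0, 0.0, 4.0));  // included -> note released
    // Events strictly inside or outside behave the same under both.
    assert!(kept_with_inclusive_end(2.0, 0.0, 4.0));
    assert!(!kept_with_inclusive_end(4.1, 0.0, 4.0));
}
```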
@@ -511,6 +511,11 @@ impl AudioClipPool {

         let src_start_position = start_time_seconds * audio_file.sample_rate as f64;

+        // Tell the disk reader where we're reading so it buffers the right region.
+        if use_read_ahead {
+            read_ahead.unwrap().set_target_frame(src_start_position as u64);
+        }
+
         let mut rendered_frames = 0;

         if audio_file.sample_rate == engine_sample_rate {
@@ -253,6 +253,11 @@ impl MidiRecordingState {
         self.completed_notes.len()
     }

+    /// Get the note numbers of all currently held (active) notes
+    pub fn active_note_numbers(&self) -> Vec<u8> {
+        self.active_notes.keys().copied().collect()
+    }
+
     /// Close out all active notes at the given time
     /// This should be called when stopping recording to end any held notes
     pub fn close_active_notes(&mut self, end_time: f64) {
@@ -7,7 +7,7 @@ use super::node_graph::nodes::{AudioInputNode, AudioOutputNode};
 use super::node_graph::preset::GraphPreset;
 use super::pool::AudioClipPool;
 use serde::{Serialize, Deserialize};
-use std::collections::HashMap;
+use std::collections::{HashMap, HashSet};

 /// Track ID type
 pub type TrackId = u32;
@@ -334,6 +334,10 @@ pub struct MidiTrack {
     /// Queue for live MIDI input (virtual keyboard, MIDI controllers)
     #[serde(skip)]
     live_midi_queue: Vec<MidiEvent>,
+    /// Clip instances that were active (overlapping playhead) in the previous render buffer.
+    /// Used to detect when the playhead exits a clip, so we can send all-notes-off.
+    #[serde(skip)]
+    prev_active_instances: HashSet<MidiClipInstanceId>,
 }

 impl Clone for MidiTrack {

@@ -350,6 +354,7 @@ impl Clone for MidiTrack {
             automation_lanes: self.automation_lanes.clone(),
             next_automation_id: self.next_automation_id,
             live_midi_queue: Vec::new(), // Don't clone live MIDI queue
+            prev_active_instances: HashSet::new(),
         }
     }
 }

@@ -372,6 +377,7 @@ impl MidiTrack {
             automation_lanes: HashMap::new(),
             next_automation_id: 0,
             live_midi_queue: Vec::new(),
+            prev_active_instances: HashSet::new(),
         }
     }

@@ -505,7 +511,11 @@ impl MidiTrack {

         // Collect MIDI events from all clip instances that overlap with current time range
         let mut midi_events = Vec::new();
+        let mut currently_active = HashSet::new();
         for instance in &self.clip_instances {
+            if instance.overlaps_range(playhead_seconds, buffer_end_seconds) {
+                currently_active.insert(instance.id);
+            }
             // Get the clip content from the pool
             if let Some(clip) = midi_pool.get_clip(instance.clip_id) {
                 let events = instance.get_events_in_range(

@@ -517,6 +527,18 @@ impl MidiTrack {
             }
         }

+        // Send all-notes-off for clip instances that just became inactive
+        // (playhead exited the clip). This prevents stuck notes from malformed clips.
+        for prev_id in &self.prev_active_instances {
+            if !currently_active.contains(prev_id) {
+                for note in 0..128u8 {
+                    midi_events.push(MidiEvent::note_off(playhead_seconds, 0, note, 0));
+                }
+                break; // One round of all-notes-off is enough
+            }
+        }
+        self.prev_active_instances = currently_active;
+
         // Add live MIDI events (from virtual keyboard or MIDI controllers)
         // This allows real-time input to be heard during playback/recording
         midi_events.extend(self.live_midi_queue.drain(..));
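The stuck-note guard above boils down to a set difference between the previous buffer's active clip instances and the current one: any id present before but absent now means the playhead just left that clip. A minimal sketch with plain `u32` ids standing in for `MidiClipInstanceId`:

```rust
use std::collections::HashSet;

// Clips active in the previous render buffer but not in the current one
// are the ones that just ended and may need all-notes-off.
fn exited_clips(prev: &HashSet<u32>, current: &HashSet<u32>) -> Vec<u32> {
    prev.difference(current).copied().collect()
}

fn main() {
    let prev: HashSet<u32> = [1, 2].into_iter().collect();
    let current: HashSet<u32> = [2, 3].into_iter().collect();
    // Clip 1 ended (send all-notes-off); clip 3 just started (nothing to do).
    assert_eq!(exited_clips(&prev, &current), vec![1]);
}
```

The diff's version short-circuits after one exited clip because a single round of note-offs for all 128 note numbers already silences the channel.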
@@ -64,7 +64,7 @@ pub struct WaveformCache {
     chunks: HashMap<WaveformChunkKey, Vec<WaveformPeak>>,

     /// Maximum memory usage in MB (for future LRU eviction)
-    max_memory_mb: usize,
+    _max_memory_mb: usize,

     /// Current memory usage estimate in bytes
     current_memory_bytes: usize,

@@ -75,7 +75,7 @@ impl WaveformCache {
     pub fn new(max_memory_mb: usize) -> Self {
         Self {
             chunks: HashMap::new(),
-            max_memory_mb,
+            _max_memory_mb: max_memory_mb,
             current_memory_bytes: 0,
         }
     }
@@ -274,18 +274,22 @@ pub enum AudioEvent {
     },

     /// Progressive decode progress for a compressed audio file's waveform data.
-    /// The UI can use this to update waveform display incrementally.
+    /// Carries the samples inline so the UI doesn't need to query back.
     AudioDecodeProgress {
         pool_index: usize,
-        decoded_frames: u64,
-        total_frames: u64,
+        samples: Vec<f32>,
+        sample_rate: u32,
+        channels: u32,
     },

-    /// Background waveform decode completed for a compressed audio file.
+    /// Background waveform decode progress/completion for a compressed audio file.
     /// Internal event — consumed by the engine to update the pool, not forwarded to UI.
+    /// `decoded_frames` < `total_frames` means partial; equal means complete.
     WaveformDecodeComplete {
         pool_index: usize,
         samples: Vec<f32>,
+        decoded_frames: u64,
+        total_frames: u64,
     },
 }

@@ -333,6 +337,14 @@ pub enum Query {
     AddAudioClipSync(TrackId, usize, f64, f64, f64),
     /// Add an audio file to the pool synchronously (path, data, channels, sample_rate) - returns pool index
     AddAudioFileSync(String, Vec<f32>, u32, u32),
+    /// Import an audio file synchronously (path) - returns pool index.
+    /// Does the same work as Command::ImportAudio (mmap for PCM, streaming
+    /// setup for compressed) but returns the real pool index in the response.
+    /// NOTE: briefly blocks the UI thread during file setup (sub-ms for PCM
+    /// mmap; a few ms for compressed streaming init). If this becomes a
+    /// problem for very large files, switch to async import with event-based
+    /// pool index reconciliation.
+    ImportAudioSync(std::path::PathBuf),
     /// Get raw audio samples from pool (pool_index) - returns (samples, sample_rate, channels)
     GetPoolAudioSamples(usize),
     /// Get a clone of the current project for serialization
@@ -404,6 +416,8 @@ pub enum QueryResponse {
     AudioClipInstanceAdded(Result<AudioClipInstanceId, String>),
     /// Audio file added to pool (returns pool index)
     AudioFileAddedSync(Result<usize, String>),
+    /// Audio file imported to pool (returns pool index)
+    AudioImportedSync(Result<usize, String>),
     /// Raw audio samples from pool (samples, sample_rate, channels)
     PoolAudioSamples(Result<(Vec<f32>, u32, u32), String>),
     /// Project retrieved
@@ -338,6 +338,123 @@ impl AudioFile {
         })
     }

+    /// Decode a compressed audio file progressively, calling `on_progress` with
+    /// partial data snapshots so the UI can display waveforms as they decode.
+    /// Sends updates roughly every 2 seconds of decoded audio.
+    pub fn decode_progressive<P: AsRef<Path>, F>(path: P, total_frames: u64, on_progress: F)
+    where
+        F: Fn(&[f32], u64, u64),
+    {
+        let path = path.as_ref();
+
+        let file = match std::fs::File::open(path) {
+            Ok(f) => f,
+            Err(e) => {
+                eprintln!("[WAVEFORM DECODE] Failed to open {:?}: {}", path, e);
+                return;
+            }
+        };
+
+        let mss = MediaSourceStream::new(Box::new(file), Default::default());
+
+        let mut hint = Hint::new();
+        if let Some(extension) = path.extension() {
+            if let Some(ext_str) = extension.to_str() {
+                hint.with_extension(ext_str);
+            }
+        }
+
+        let probed = match symphonia::default::get_probe()
+            .format(&hint, mss, &FormatOptions::default(), &MetadataOptions::default())
+        {
+            Ok(p) => p,
+            Err(e) => {
+                eprintln!("[WAVEFORM DECODE] Failed to probe {:?}: {}", path, e);
+                return;
+            }
+        };
+
+        let mut format = probed.format;
+
+        let track = match format.tracks().iter()
+            .find(|t| t.codec_params.codec != symphonia::core::codecs::CODEC_TYPE_NULL)
+        {
+            Some(t) => t,
+            None => {
+                eprintln!("[WAVEFORM DECODE] No audio tracks in {:?}", path);
+                return;
+            }
+        };
+
+        let track_id = track.id;
+        let channels = track.codec_params.channels
+            .map(|c| c.count() as u32)
+            .unwrap_or(2);
+        let sample_rate = track.codec_params.sample_rate.unwrap_or(44100);
+
+        let mut decoder = match symphonia::default::get_codecs()
+            .make(&track.codec_params, &DecoderOptions::default())
+        {
+            Ok(d) => d,
+            Err(e) => {
+                eprintln!("[WAVEFORM DECODE] Failed to create decoder for {:?}: {}", path, e);
+                return;
+            }
+        };
+
+        let mut audio_data = Vec::new();
+        let mut sample_buf = None;
+        // Send a progress update roughly every 2 seconds of audio
+        // Send first update quickly (0.25s), then every 2s of audio
+        let initial_interval = (sample_rate as usize * channels as usize) / 4;
+        let steady_interval = (sample_rate as usize * channels as usize) * 2;
+        let mut sent_first = false;
+        let mut last_update_len = 0usize;
+
+        loop {
+            let packet = match format.next_packet() {
+                Ok(packet) => packet,
+                Err(Error::IoError(e)) if e.kind() == std::io::ErrorKind::UnexpectedEof => break,
+                Err(Error::ResetRequired) => break,
+                Err(_) => break,
+            };
+
+            if packet.track_id() != track_id {
+                continue;
+            }
+
+            match decoder.decode(&packet) {
+                Ok(decoded) => {
+                    if sample_buf.is_none() {
+                        let spec = *decoded.spec();
+                        let duration = decoded.capacity() as u64;
+                        sample_buf = Some(SampleBuffer::<f32>::new(duration, spec));
+                    }
+                    if let Some(ref mut buf) = sample_buf {
+                        buf.copy_interleaved_ref(decoded);
+                        audio_data.extend_from_slice(buf.samples());
+                    }
+
+                    // Send progressive update (fast initial, then periodic)
+                    // Only send NEW samples since last update (delta) to avoid large copies
+                    let interval = if sent_first { steady_interval } else { initial_interval };
+                    if audio_data.len() - last_update_len >= interval {
+                        let decoded_frames = audio_data.len() as u64 / channels as u64;
+                        on_progress(&audio_data[last_update_len..], decoded_frames, total_frames);
+                        last_update_len = audio_data.len();
+                        sent_first = true;
+                    }
+                }
+                Err(Error::DecodeError(_)) => continue,
+                Err(_) => break,
+            }
+        }
+
+        // Final update with remaining data (delta since last update)
+        let decoded_frames = audio_data.len() as u64 / channels as u64;
+        on_progress(&audio_data[last_update_len..], decoded_frames, decoded_frames.max(total_frames));
+    }
+
     /// Calculate the duration of the audio file in seconds
     pub fn duration(&self) -> f64 {
         self.frames as f64 / self.sample_rate as f64
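The update cadence in `decode_progressive` (one quick update after about 0.25s of audio, then one per roughly 2s, each carrying only the delta since the last) can be simulated without any actual decoding. A sketch with made-up sample counts, assuming the same interval logic as the diff:

```rust
// Simulate the delta-based progress scheme: returns (delta_start, delta_len)
// for every callback invocation, including the final remainder.
fn simulate_updates(total_samples: usize, chunk: usize,
                    initial: usize, steady: usize) -> Vec<(usize, usize)> {
    let mut decoded = 0usize;
    let mut last_update = 0usize;
    let mut sent_first = false;
    let mut updates = Vec::new();
    while decoded < total_samples {
        decoded = (decoded + chunk).min(total_samples);
        // First update fires on the short interval, later ones on the long one.
        let interval = if sent_first { steady } else { initial };
        if decoded - last_update >= interval {
            updates.push((last_update, decoded - last_update));
            last_update = decoded;
            sent_first = true;
        }
    }
    // Final update with whatever remains after the decode loop ends.
    if decoded > last_update {
        updates.push((last_update, decoded - last_update));
    }
    updates
}

fn main() {
    // 40 samples decoded 4 at a time; first update after 4, then every 32.
    let updates = simulate_updates(40, 4, 4, 32);
    assert_eq!(updates[0], (0, 4));  // fast initial update
    assert_eq!(updates[1], (4, 32)); // steady-interval delta
    assert_eq!(updates[2], (36, 4)); // final remainder
}
```

Sending only the delta is what keeps `AudioDecodeProgress` events cheap even for long files, since earlier samples are never re-copied.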
@@ -42,3 +42,30 @@ pollster = "0.3"

 # Desktop notifications
 notify-rust = "4.11"
+
+# Optimize the audio backend even in debug builds — the audio callback
+# runs on a real-time thread with ~1.5ms deadlines at small buffer sizes,
+# so it cannot tolerate unoptimized code.
+[profile.dev.package.daw-backend]
+opt-level = 2
+
+# Also optimize symphonia (audio decoder) and cpal (audio I/O) — these
+# run in the audio callback path and are heavily numeric.
+[profile.dev.package.symphonia]
+opt-level = 2
+[profile.dev.package.symphonia-core]
+opt-level = 2
+[profile.dev.package.symphonia-bundle-mp3]
+opt-level = 2
+[profile.dev.package.symphonia-bundle-flac]
+opt-level = 2
+[profile.dev.package.symphonia-format-ogg]
+opt-level = 2
+[profile.dev.package.symphonia-codec-vorbis]
+opt-level = 2
+[profile.dev.package.symphonia-codec-aac]
+opt-level = 2
+[profile.dev.package.symphonia-format-isomp4]
+opt-level = 2
+[profile.dev.package.cpal]
+opt-level = 2
@@ -96,6 +96,18 @@ pub trait Action: Send {
     fn rollback_backend(&mut self, _backend: &mut BackendContext, _document: &Document) -> Result<(), String> {
         Ok(())
     }
+
+    /// Return MIDI cache data reflecting the state after execute/redo.
+    /// Format: (clip_id, notes) where notes are (start_time, note, velocity, duration).
+    /// Used to keep the frontend MIDI event cache in sync after undo/redo.
+    fn midi_notes_after_execute(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
+        None
+    }
+
+    /// Return MIDI cache data reflecting the state after rollback/undo.
+    fn midi_notes_after_rollback(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
+        None
+    }
 }

 /// Action executor that wraps the document and manages undo/redo
@@ -245,6 +257,18 @@ impl ActionExecutor {
         self.undo_stack.last().map(|a| a.description())
     }

+    /// Get MIDI cache data from the last action on the undo stack (after redo).
+    /// Returns the notes reflecting execute state.
+    pub fn last_undo_midi_notes(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
+        self.undo_stack.last().and_then(|a| a.midi_notes_after_execute())
+    }
+
+    /// Get MIDI cache data from the last action on the redo stack (after undo).
+    /// Returns the notes reflecting rollback state.
+    pub fn last_redo_midi_notes(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
+        self.redo_stack.last().and_then(|a| a.midi_notes_after_rollback())
+    }
+
     /// Get the description of the next action to redo
     pub fn redo_description(&self) -> Option<String> {
         self.redo_stack.last().map(|a| a.description())
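The `midi_notes_after_execute`/`midi_notes_after_rollback` additions are defaulted trait methods, so only actions that touch MIDI notes opt in, and the executor can blindly ask the top of either stack. A reduced sketch of that pattern (type and field names simplified from the diff):

```rust
// (start_time, note, velocity, duration), matching the tuple format in the diff.
type Note = (f64, u8, u8, f64);

trait Action {
    // Default: most actions have no MIDI cache data to report.
    fn midi_notes_after_execute(&self) -> Option<(u32, &[Note])> {
        None
    }
}

struct UpdateMidiNotes {
    clip_id: u32,
    new_notes: Vec<Note>,
}
impl Action for UpdateMidiNotes {
    fn midi_notes_after_execute(&self) -> Option<(u32, &[Note])> {
        Some((self.clip_id, &self.new_notes))
    }
}

struct Unrelated;
impl Action for Unrelated {} // keeps the default: no cache update needed

fn main() {
    let undo_stack: Vec<Box<dyn Action>> = vec![
        Box::new(Unrelated),
        Box::new(UpdateMidiNotes { clip_id: 7, new_notes: vec![(0.0, 60, 100, 0.5)] }),
    ];
    // The executor only inspects the most recent action.
    let top = undo_stack.last().and_then(|a| a.midi_notes_after_execute());
    assert_eq!(top.map(|(id, notes)| (id, notes.len())), Some((7, 1)));
}
```

Defaulting to `None` means existing actions need no changes, which is why the diff can add the hooks without touching every `impl Action`.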
@@ -24,6 +24,7 @@ pub mod create_folder;
 pub mod rename_folder;
 pub mod delete_folder;
 pub mod move_asset_to_folder;
+pub mod update_midi_notes;

 pub use add_clip_instance::AddClipInstanceAction;
 pub use add_effect::AddEffectAction;

@@ -46,3 +47,4 @@ pub use create_folder::CreateFolderAction;
 pub use rename_folder::RenameFolderAction;
 pub use delete_folder::{DeleteFolderAction, DeleteStrategy};
 pub use move_asset_to_folder::MoveAssetToFolderAction;
+pub use update_midi_notes::UpdateMidiNotesAction;
@@ -32,7 +32,7 @@ impl Action for MoveClipInstancesAction {
         let mut expanded_moves = self.layer_moves.clone();
         let mut already_processed = std::collections::HashSet::new();

-        for (layer_id, moves) in &self.layer_moves {
+        for (_layer_id, moves) in &self.layer_moves {
             for (instance_id, old_start, new_start) in moves {
                 // Skip if already processed
                 if already_processed.contains(instance_id) {

@@ -26,10 +26,10 @@ pub struct PaintBucketAction {
     fill_color: ShapeColor,

     /// Tolerance for gap bridging (in pixels)
-    tolerance: f64,
+    _tolerance: f64,

     /// Gap handling mode
-    gap_mode: GapHandlingMode,
+    _gap_mode: GapHandlingMode,

     /// ID of the created shape (set after execution)
     created_shape_id: Option<Uuid>,

@@ -59,8 +59,8 @@ impl PaintBucketAction {
             layer_id,
             click_point,
             fill_color,
-            tolerance,
-            gap_mode,
+            _tolerance: tolerance,
+            _gap_mode: gap_mode,
             created_shape_id: None,
             created_shape_instance_id: None,
         }
@@ -68,26 +68,6 @@ impl SetInstancePropertiesAction {
         }
     }

-    fn get_instance_value(&self, document: &Document, instance_id: &Uuid) -> Option<f64> {
-        if let Some(layer) = document.get_layer(&self.layer_id) {
-            if let AnyLayer::Vector(vector_layer) = layer {
-                if let Some(instance) = vector_layer.get_object(instance_id) {
-                    return Some(match &self.property {
-                        InstancePropertyChange::X(_) => instance.transform.x,
-                        InstancePropertyChange::Y(_) => instance.transform.y,
-                        InstancePropertyChange::Rotation(_) => instance.transform.rotation,
-                        InstancePropertyChange::ScaleX(_) => instance.transform.scale_x,
-                        InstancePropertyChange::ScaleY(_) => instance.transform.scale_y,
-                        InstancePropertyChange::SkewX(_) => instance.transform.skew_x,
-                        InstancePropertyChange::SkewY(_) => instance.transform.skew_y,
-                        InstancePropertyChange::Opacity(_) => instance.opacity,
-                    });
-                }
-            }
-        }
-        None
-    }
-
     fn apply_to_instance(&self, document: &mut Document, instance_id: &Uuid, value: f64) {
         if let Some(layer) = document.get_layer_mut(&self.layer_id) {
             if let AnyLayer::Vector(vector_layer) = layer {
@@ -68,7 +68,7 @@ impl Action for TrimClipInstancesAction {
         let mut expanded_trims = self.layer_trims.clone();
         let mut already_processed = std::collections::HashSet::new();

-        for (layer_id, trims) in &self.layer_trims {
+        for (_layer_id, trims) in &self.layer_trims {
             for (instance_id, trim_type, old, new) in trims {
                 // Skip if already processed
                 if already_processed.contains(instance_id) {

@@ -189,7 +189,7 @@ impl Action for TrimClipInstancesAction {

         match trim_type {
             TrimType::TrimLeft => {
-                if let (Some(old_trim), Some(new_trim), Some(old_timeline), Some(new_timeline)) =
+                if let (Some(old_trim), Some(new_trim), Some(old_timeline), Some(_new_timeline)) =
                     (old.trim_value, new.trim_value, old.timeline_start, new.timeline_start)
                 {
                     // If extending to the left (new_trim < old_trim)

@@ -365,7 +365,7 @@ impl Action for TrimClipInstancesAction {
             .ok_or_else(|| format!("Layer {} not mapped to backend track", layer_id))?;

         // Process each clip instance trim
-        for (instance_id, trim_type, _old, new) in trims {
+        for (instance_id, _trim_type, _old, _new) in trims {
             // Get clip instances from the layer
             let clip_instances = match layer {
                 AnyLayer::Audio(al) => &al.clip_instances,
@@ -0,0 +1,82 @@
+use crate::action::Action;
+use crate::document::Document;
+use uuid::Uuid;
+
+/// Action to update MIDI notes in a clip (supports undo/redo)
+///
+/// Stores the before and after note states. MIDI note data lives in the backend,
+/// so execute/rollback are no-ops on the document — all changes go through
+/// execute_backend/rollback_backend.
+pub struct UpdateMidiNotesAction {
+    /// Layer containing the MIDI clip
+    pub layer_id: Uuid,
+    /// Backend MIDI clip ID
+    pub midi_clip_id: u32,
+    /// Notes before the edit: (start_time, note, velocity, duration)
+    pub old_notes: Vec<(f64, u8, u8, f64)>,
+    /// Notes after the edit: (start_time, note, velocity, duration)
+    pub new_notes: Vec<(f64, u8, u8, f64)>,
+    /// Human-readable description
+    pub description_text: String,
+}
+
+impl Action for UpdateMidiNotesAction {
+    fn execute(&mut self, _document: &mut Document) -> Result<(), String> {
+        // MIDI note data lives in the backend, not the document
+        Ok(())
+    }
+
+    fn rollback(&mut self, _document: &mut Document) -> Result<(), String> {
+        Ok(())
+    }
+
+    fn description(&self) -> String {
+        self.description_text.clone()
+    }
+
+    fn execute_backend(
+        &mut self,
+        backend: &mut crate::action::BackendContext,
+        _document: &Document,
+    ) -> Result<(), String> {
+        let controller = match backend.audio_controller.as_mut() {
+            Some(c) => c,
+            None => return Ok(()),
+        };
+
+        let track_id = backend
+            .layer_to_track_map
+            .get(&self.layer_id)
+            .ok_or_else(|| format!("Layer {} not mapped to backend track", self.layer_id))?;
+
+        controller.update_midi_clip_notes(*track_id, self.midi_clip_id, self.new_notes.clone());
+        Ok(())
+    }
+
+    fn rollback_backend(
+        &mut self,
+        backend: &mut crate::action::BackendContext,
+        _document: &Document,
+    ) -> Result<(), String> {
+        let controller = match backend.audio_controller.as_mut() {
+            Some(c) => c,
+            None => return Ok(()),
+        };
+
+        let track_id = backend
+            .layer_to_track_map
+            .get(&self.layer_id)
+            .ok_or_else(|| format!("Layer {} not mapped to backend track", self.layer_id))?;
+
+        controller.update_midi_clip_notes(*track_id, self.midi_clip_id, self.old_notes.clone());
+        Ok(())
+    }
+
+    fn midi_notes_after_execute(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
+        Some((self.midi_clip_id, &self.new_notes))
+    }
+
+    fn midi_notes_after_rollback(&self) -> Option<(u32, &[(f64, u8, u8, f64)])> {
+        Some((self.midi_clip_id, &self.old_notes))
+    }
+}
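The new action follows the project's execute/rollback split: the document-side methods are no-ops, and the note swap happens entirely against the backend. A minimal standalone sketch of that round-trip pattern, using simplified stand-in types (not the crate's real `Action` trait or `BackendContext`):

```rust
// Sketch of the undo/redo pattern used by UpdateMidiNotesAction:
// the action holds before/after snapshots, and execute/rollback
// simply swap them into the backend. Types here are hypothetical.

type Note = (f64, u8, u8, f64); // (start_time, note, velocity, duration)

struct FakeBackend {
    notes: Vec<Note>,
}

struct UpdateNotes {
    old_notes: Vec<Note>,
    new_notes: Vec<Note>,
}

impl UpdateNotes {
    // Redo: install the "after" snapshot.
    fn execute_backend(&self, backend: &mut FakeBackend) {
        backend.notes = self.new_notes.clone();
    }
    // Undo: restore the "before" snapshot.
    fn rollback_backend(&self, backend: &mut FakeBackend) {
        backend.notes = self.old_notes.clone();
    }
}

fn main() {
    let mut backend = FakeBackend { notes: vec![(0.0, 60, 100, 1.0)] };
    let action = UpdateNotes {
        old_notes: backend.notes.clone(),
        new_notes: vec![(0.0, 60, 100, 1.0), (1.0, 64, 100, 1.0)],
    };

    action.execute_backend(&mut backend);
    assert_eq!(backend.notes.len(), 2); // redo state applied

    action.rollback_backend(&mut backend);
    assert_eq!(backend.notes, action.old_notes); // undo restores the snapshot
}
```

Because both snapshots are owned by the action, execute and rollback are symmetric and can be replayed any number of times from the undo stack.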
@@ -229,27 +229,6 @@ pub fn find_closest_approach(
     }
 }
 
-/// Refine intersection parameters using Newton's method
-fn refine_intersection(
-    curve1: &CubicBez,
-    curve2: &CubicBez,
-    mut t1: f64,
-    mut t2: f64,
-) -> (f64, f64) {
-    // Simple refinement: just find nearest points iteratively
-    for _ in 0..5 {
-        let p1 = curve1.eval(t1);
-        let nearest2 = curve2.nearest(p1, 1e-6);
-        t2 = nearest2.t;
-
-        let p2 = curve2.eval(t2);
-        let nearest1 = curve1.nearest(p2, 1e-6);
-        t1 = nearest1.t;
-    }
-
-    (t1.clamp(0.0, 1.0), t2.clamp(0.0, 1.0))
-}
-
 /// Refine self-intersection parameters
 fn refine_self_intersection(curve: &CubicBez, mut t1: f64, mut t2: f64) -> (f64, f64) {
     // Refine by moving parameters closer to where curves actually meet
@@ -189,22 +189,6 @@ impl EffectLayer
         self.clip_instances = new_order;
     }
-
-    // === MUTATION METHODS (pub(crate) - only accessible to action module) ===
-
-    /// Add a clip instance (internal, for actions only)
-    pub(crate) fn add_clip_instance_internal(&mut self, instance: ClipInstance) -> Uuid {
-        self.add_clip_instance(instance)
-    }
-
-    /// Remove a clip instance (internal, for actions only)
-    pub(crate) fn remove_clip_instance_internal(&mut self, id: &Uuid) -> Option<ClipInstance> {
-        self.remove_clip_instance(id)
-    }
-
-    /// Insert a clip instance at a specific index (internal, for actions only)
-    pub(crate) fn insert_clip_instance_internal(&mut self, index: usize, instance: ClipInstance) -> Uuid {
-        self.insert_clip_instance(index, instance)
-    }
 }
 
 #[cfg(test)]
@@ -455,23 +455,23 @@ struct CurveIntersection
     t_on_current: f64,
     /// Parameter on other curve
-    t_on_other: f64,
+    _t_on_other: f64,
     /// ID of the other curve
-    other_curve_id: usize,
+    _other_curve_id: usize,
     /// Intersection point
     point: Point,
     /// Whether this is a gap (within tolerance but not exact intersection)
-    is_gap: bool,
+    _is_gap: bool,
 }
 
 /// Find all intersections on a given curve
 fn find_intersections_on_curve(
     curve_id: usize,
     curves: &[CubicBez],
-    processed_curves: &HashSet<usize>,
+    _processed_curves: &HashSet<usize>,
     quadtree: &ToleranceQuadtree,
     tolerance: f64,
     debug_info: &mut WalkDebugInfo,
@@ -489,10 +489,10 @@ fn find_intersections_on_curve(
     for int in self_ints {
         intersections.push(CurveIntersection {
             t_on_current: int.t1,
-            t_on_other: int.t2.unwrap_or(int.t1),
-            other_curve_id: curve_id,
+            _t_on_other: int.t2.unwrap_or(int.t1),
+            _other_curve_id: curve_id,
             point: int.point,
-            is_gap: false,
+            _is_gap: false,
         });
         debug_info.intersections_found += 1;
     }
@@ -504,10 +504,10 @@ fn find_intersections_on_curve(
     for int in exact_ints {
         intersections.push(CurveIntersection {
             t_on_current: int.t1,
-            t_on_other: int.t2.unwrap_or(0.0),
-            other_curve_id: other_id,
+            _t_on_other: int.t2.unwrap_or(0.0),
+            _other_curve_id: other_id,
             point: int.point,
-            is_gap: false,
+            _is_gap: false,
         });
         debug_info.intersections_found += 1;
     }
@@ -516,10 +516,10 @@ fn find_intersections_on_curve(
     if let Some(approach) = find_closest_approach(current_curve, other_curve, tolerance) {
         intersections.push(CurveIntersection {
             t_on_current: approach.t1,
-            t_on_other: approach.t2,
-            other_curve_id: other_id,
+            _t_on_other: approach.t2,
+            _other_curve_id: other_id,
             point: approach.p1,
-            is_gap: true,
+            _is_gap: true,
         });
     }
 }
@@ -478,7 +478,7 @@ fn map_t_to_relative_distances(bez: &[Point; 4], b_parts: usize) -> Vec<f64> {
 }
 
 /// Find t value for a given parameter distance
-fn find_t(bez: &[Point; 4], param: f64, t_dist_map: &[f64], b_parts: usize) -> f64 {
+fn find_t(_bez: &[Point; 4], param: f64, t_dist_map: &[f64], b_parts: usize) -> f64 {
     if param < 0.0 {
         return 0.0;
     }
@@ -122,7 +122,7 @@ impl PlanarGraph {
 
         // Initialize with endpoints for all curves
         for (i, curve) in curves.iter().enumerate() {
-            let mut curve_intersections = vec![
+            let curve_intersections = vec![
                 (0.0, curve.p0),
                 (1.0, curve.p3),
             ];
@@ -202,7 +202,7 @@ impl PlanarGraph {
 
     /// Build nodes and edges from curves and their intersections
     fn build_nodes_and_edges(
-        curves: &[CubicBez],
+        _curves: &[CubicBez],
         intersections: HashMap<usize, Vec<(f64, Point)>>,
     ) -> (Vec<GraphNode>, Vec<GraphEdge>) {
         let mut nodes = Vec::new();
@@ -459,11 +459,6 @@ impl PlanarGraph {
 
             // Get the end node of this half-edge
             let edge = &self.edges[current_edge];
-            let start_node_this_edge = if current_forward {
-                edge.start_node
-            } else {
-                edge.end_node
-            };
             let end_node = if current_forward {
                 edge.end_node
             } else {
@@ -32,9 +32,9 @@ struct ExtractedSegment {
     /// Original curve index
     curve_index: usize,
     /// Minimum parameter value from boundary points
-    t_min: f64,
+    _t_min: f64,
     /// Maximum parameter value from boundary points
-    t_max: f64,
+    _t_max: f64,
     /// The curve segment (trimmed to [t_min, t_max])
     segment: CurveSegment,
 }
@@ -148,8 +148,8 @@ fn split_segments_at_intersections(segments: Vec<ExtractedSegment>) -> Vec<Extra
 
             result.push(ExtractedSegment {
                 curve_index: seg.curve_index,
-                t_min: t_start,
-                t_max: t_end,
+                _t_min: t_start,
+                _t_max: t_end,
                 segment: subseg,
             });
         }
@@ -260,8 +260,8 @@ fn extract_segments(
 
         segments.push(ExtractedSegment {
             curve_index: curve_idx,
-            t_min,
-            t_max,
+            _t_min: t_min,
+            _t_max: t_max,
             segment,
         });
     }
@@ -540,7 +540,7 @@ enum ConnectedSegment {
     Curve {
         segment: CurveSegment,
        start: Point,
-        end: Point,
+        _end: Point,
     },
     /// A line segment bridging a gap
     Line { start: Point, end: Point },
@@ -550,7 +550,7 @@ enum ConnectedSegment {
 fn connect_segments(
     extracted: &[ExtractedSegment],
     config: &SegmentBuilderConfig,
-    click_point: Point,
+    _click_point: Point,
 ) -> Option<Vec<ConnectedSegment>> {
     if extracted.is_empty() {
         println!("connect_segments: No segments to connect");
@@ -575,7 +575,7 @@ fn connect_segments(
         connected.push(ConnectedSegment::Curve {
             segment: current.segment.clone(),
             start: current.segment.eval_at(0.0),
-            end: current_end,
+            _end: current_end,
         });
 
         // Check if we need to connect to the next segment
@@ -794,7 +794,7 @@ mod tests {
             // If it found segments, verify they're valid
             assert!(!segments.is_empty());
             for seg in &segments {
-                assert!(seg.t_min <= seg.t_max);
+                assert!(seg._t_min <= seg._t_max);
             }
         }
         // If None, the algorithm couldn't form a cycle - that's okay for this test
@@ -23,12 +23,12 @@ pub struct VideoMetadata {
 /// Video decoder with LRU frame caching
 pub struct VideoDecoder {
     path: String,
-    width: u32,          // Original video width
-    height: u32,         // Original video height
+    _width: u32,         // Original video width
+    _height: u32,        // Original video height
     output_width: u32,   // Scaled output width
     output_height: u32,  // Scaled output height
     fps: f64,
-    duration: f64,
+    _duration: f64,
     time_base: f64,
     stream_index: usize,
     frame_cache: LruCache<i64, Vec<u8>>, // timestamp -> RGBA data
@@ -107,12 +107,12 @@ impl VideoDecoder {
 
         Ok(Self {
             path,
-            width,
-            height,
+            _width: width,
+            _height: height,
             output_width,
             output_height,
             fps,
-            duration,
+            _duration: duration,
             time_base,
             stream_index,
             frame_cache: LruCache::new(
@ -0,0 +1,717 @@
|
||||||
|
/// GPU-based Constant-Q Transform (CQT) spectrogram with streaming ring-buffer cache.
|
||||||
|
///
|
||||||
|
/// Replaces the old FFT spectrogram with a CQT that has logarithmic frequency spacing
|
||||||
|
/// (bins map directly to MIDI notes). Only the visible viewport is computed, with results
|
||||||
|
/// cached in a ring-buffer texture so scrolling only computes new columns.
|
||||||
|
///
|
||||||
|
/// Architecture:
|
||||||
|
/// - CqtGpuResources stored in CallbackResources (long-lived, holds pipelines)
|
||||||
|
/// - CqtCacheEntry per pool_index (cache texture, bin params, ring buffer state)
|
||||||
|
/// - CqtCallback implements CallbackTrait (per-frame compute + render)
|
||||||
|
/// - Compute shader reads audio from waveform mip-0 textures (already on GPU)
|
||||||
|
/// - Render shader reads from cache texture with colormap
|
||||||
|
|
||||||
|
use std::collections::HashMap;
|
||||||
|
use wgpu::util::DeviceExt;
|
||||||
|
|
||||||
|
use crate::waveform_gpu::WaveformGpuResources;
|
||||||
|
|
||||||
|
/// CQT parameters
|
||||||
|
const BINS_PER_OCTAVE: u32 = 24;
|
||||||
|
const FREQ_BINS: u32 = 174; // ceil(log2(4186.0 / 27.5) * 24) = ceil(173.95)
|
||||||
|
const HOP_SIZE: u32 = 512;
|
||||||
|
const CACHE_CAPACITY: u32 = 4096;
|
||||||
|
const MAX_COLS_PER_FRAME: u32 = 128;
|
||||||
|
const F_MIN: f64 = 27.5; // A0 = MIDI 21
|
||||||
|
const WAVEFORM_TEX_WIDTH: u32 = 2048;
|
||||||
|
|
||||||
|
/// Per-bin CQT kernel parameters, uploaded as a storage buffer.
|
||||||
|
/// Must match BinInfo in cqt_compute.wgsl.
|
||||||
|
#[repr(C)]
|
||||||
|
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
|
||||||
|
struct CqtBinParams {
|
||||||
|
window_length: u32,
|
||||||
|
phase_step: f32, // 2*pi*Q / N_k
|
||||||
|
_pad0: u32,
|
||||||
|
_pad1: u32,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Compute shader uniform params. Must match CqtParams in cqt_compute.wgsl.
|
||||||
|
#[repr(C)]
|
||||||
|
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
|
||||||
|
struct CqtComputeParams {
|
||||||
|
hop_size: u32,
|
||||||
|
freq_bins: u32,
|
||||||
|
cache_capacity: u32,
|
||||||
|
cache_write_offset: u32,
|
||||||
|
num_columns: u32,
|
||||||
|
column_start: u32,
|
||||||
|
tex_width: u32,
|
||||||
|
total_frames: u32,
|
||||||
|
sample_rate: f32,
|
||||||
|
column_stride: u32,
|
||||||
|
_pad1: u32,
|
||||||
|
_pad2: u32,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Render shader uniform params. Must match Params in cqt_render.wgsl exactly.
|
||||||
|
/// Layout: clip_rect(16) + 18 × f32(72) + pad vec2(8) = 96 bytes
|
||||||
|
#[repr(C)]
|
||||||
|
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
|
||||||
|
pub struct CqtRenderParams {
|
||||||
|
pub clip_rect: [f32; 4], // 16 bytes @ offset 0
|
||||||
|
pub viewport_start_time: f32, // 4 @ 16
|
||||||
|
pub pixels_per_second: f32, // 4 @ 20
|
||||||
|
pub audio_duration: f32, // 4 @ 24
|
||||||
|
pub sample_rate: f32, // 4 @ 28
|
||||||
|
pub clip_start_time: f32, // 4 @ 32
|
||||||
|
pub trim_start: f32, // 4 @ 36
|
||||||
|
pub freq_bins: f32, // 4 @ 40
|
||||||
|
pub bins_per_octave: f32, // 4 @ 44
|
||||||
|
pub hop_size: f32, // 4 @ 48
|
||||||
|
pub scroll_y: f32, // 4 @ 52
|
||||||
|
pub note_height: f32, // 4 @ 56
|
||||||
|
pub min_note: f32, // 4 @ 60
|
||||||
|
pub max_note: f32, // 4 @ 64
|
||||||
|
pub gamma: f32, // 4 @ 68
|
||||||
|
pub cache_capacity: f32, // 4 @ 72
|
||||||
|
pub cache_start_column: f32, // 4 @ 76
|
||||||
|
pub cache_valid_start: f32, // 4 @ 80
|
||||||
|
pub cache_valid_end: f32, // 4 @ 84
|
||||||
|
pub column_stride: f32, // 4 @ 88
|
||||||
|
pub _pad: f32, // 4 @ 92, total 96
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Per-pool-index cache entry with ring buffer and GPU resources.
|
||||||
|
#[allow(dead_code)]
|
||||||
|
struct CqtCacheEntry {
|
||||||
|
// Cache texture (Rgba16Float for universal filterable + storage support)
|
||||||
|
cache_texture: wgpu::Texture,
|
||||||
|
cache_texture_view: wgpu::TextureView,
|
||||||
|
cache_storage_view: wgpu::TextureView,
|
||||||
|
cache_capacity: u32,
|
||||||
|
freq_bins: u32,
|
||||||
|
|
||||||
|
// Ring buffer state
|
||||||
|
cache_start_column: i64,
|
||||||
|
cache_valid_start: i64,
|
||||||
|
cache_valid_end: i64,
|
||||||
|
|
||||||
|
// CQT kernel data
|
||||||
|
bin_params_buffer: wgpu::Buffer,
|
||||||
|
|
||||||
|
// Waveform texture reference (cloned from WaveformGpuEntry)
|
||||||
|
waveform_texture_view: wgpu::TextureView,
|
||||||
|
waveform_total_frames: u64,
|
||||||
|
|
||||||
|
// Bind groups
|
||||||
|
compute_bind_group: wgpu::BindGroup,
|
||||||
|
compute_uniform_buffer: wgpu::Buffer,
|
||||||
|
render_bind_group: wgpu::BindGroup,
|
||||||
|
render_uniform_buffer: wgpu::Buffer,
|
||||||
|
|
||||||
|
// Metadata
|
||||||
|
sample_rate: u32,
|
||||||
|
current_stride: u32,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Global GPU resources for CQT (stored in egui_wgpu::CallbackResources).
|
||||||
|
pub struct CqtGpuResources {
|
||||||
|
entries: HashMap<usize, CqtCacheEntry>,
|
||||||
|
compute_pipeline: wgpu::ComputePipeline,
|
||||||
|
compute_bind_group_layout: wgpu::BindGroupLayout,
|
||||||
|
render_pipeline: wgpu::RenderPipeline,
|
||||||
|
render_bind_group_layout: wgpu::BindGroupLayout,
|
||||||
|
sampler: wgpu::Sampler,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Per-frame callback for computing and rendering a CQT spectrogram.
|
||||||
|
pub struct CqtCallback {
|
||||||
|
pub pool_index: usize,
|
||||||
|
pub params: CqtRenderParams,
|
||||||
|
pub target_format: wgpu::TextureFormat,
|
||||||
|
pub sample_rate: u32,
|
||||||
|
/// Visible column range (global CQT column indices)
|
||||||
|
pub visible_col_start: i64,
|
||||||
|
pub visible_col_end: i64,
|
||||||
|
/// Column stride: 1 = full resolution, N = compute every Nth column
|
||||||
|
pub stride: u32,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Precompute CQT bin parameters for a given sample rate.
|
||||||
|
fn precompute_bin_params(sample_rate: u32) -> Vec<CqtBinParams> {
|
||||||
|
let b = BINS_PER_OCTAVE as f64;
|
||||||
|
let q = 1.0 / (2.0_f64.powf(1.0 / b) - 1.0);
|
||||||
|
|
||||||
|
(0..FREQ_BINS)
|
||||||
|
.map(|k| {
|
||||||
|
let f_k = F_MIN * 2.0_f64.powf(k as f64 / b);
|
||||||
|
let n_k = (q * sample_rate as f64 / f_k).ceil() as u32;
|
||||||
|
let phase_step = (2.0 * std::f64::consts::PI * q / n_k as f64) as f32;
|
||||||
|
CqtBinParams {
|
||||||
|
window_length: n_k,
|
||||||
|
phase_step,
|
||||||
|
_pad0: 0,
|
||||||
|
_pad1: 0,
|
||||||
|
}
|
||||||
|
})
|
||||||
|
.collect()
|
||||||
|
}
|
||||||
|
|
||||||
|
impl CqtGpuResources {
|
||||||
|
pub fn new(device: &wgpu::Device, target_format: wgpu::TextureFormat) -> Self {
|
||||||
|
// Compute shader
|
||||||
|
let compute_shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
|
||||||
|
label: Some("cqt_compute_shader"),
|
||||||
|
source: wgpu::ShaderSource::Wgsl(
|
||||||
|
include_str!("panes/shaders/cqt_compute.wgsl").into(),
|
||||||
|
),
|
||||||
|
});
|
||||||
|
|
||||||
|
// Render shader
|
||||||
|
let render_shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
|
||||||
|
label: Some("cqt_render_shader"),
|
||||||
|
source: wgpu::ShaderSource::Wgsl(
|
||||||
|
include_str!("panes/shaders/cqt_render.wgsl").into(),
|
||||||
|
),
|
||||||
|
});
|
||||||
|
|
||||||
|
// Compute bind group layout:
|
||||||
|
// 0: audio_tex (texture_2d<f32>, read)
|
||||||
|
// 1: cqt_out (texture_storage_2d<rgba16float, write>)
|
||||||
|
// 2: params (uniform)
|
||||||
|
// 3: bins (storage, read)
|
||||||
|
let compute_bind_group_layout =
|
||||||
|
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
|
||||||
|
label: Some("cqt_compute_bgl"),
|
||||||
|
entries: &[
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 0,
|
||||||
|
visibility: wgpu::ShaderStages::COMPUTE,
|
||||||
|
ty: wgpu::BindingType::Texture {
|
||||||
|
sample_type: wgpu::TextureSampleType::Float { filterable: false },
|
||||||
|
view_dimension: wgpu::TextureViewDimension::D2,
|
||||||
|
multisampled: false,
|
||||||
|
},
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 1,
|
||||||
|
visibility: wgpu::ShaderStages::COMPUTE,
|
||||||
|
ty: wgpu::BindingType::StorageTexture {
|
||||||
|
access: wgpu::StorageTextureAccess::WriteOnly,
|
||||||
|
format: wgpu::TextureFormat::Rgba16Float,
|
||||||
|
view_dimension: wgpu::TextureViewDimension::D2,
|
||||||
|
},
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 2,
|
||||||
|
visibility: wgpu::ShaderStages::COMPUTE,
|
||||||
|
ty: wgpu::BindingType::Buffer {
|
||||||
|
ty: wgpu::BufferBindingType::Uniform,
|
||||||
|
has_dynamic_offset: false,
|
||||||
|
min_binding_size: None,
|
||||||
|
},
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 3,
|
||||||
|
visibility: wgpu::ShaderStages::COMPUTE,
|
||||||
|
ty: wgpu::BindingType::Buffer {
|
||||||
|
ty: wgpu::BufferBindingType::Storage { read_only: true },
|
||||||
|
has_dynamic_offset: false,
|
||||||
|
min_binding_size: None,
|
||||||
|
},
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
// Render bind group layout: cache_tex + sampler + uniforms
|
||||||
|
let render_bind_group_layout =
|
||||||
|
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
|
||||||
|
label: Some("cqt_render_bgl"),
|
||||||
|
entries: &[
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 0,
|
||||||
|
visibility: wgpu::ShaderStages::FRAGMENT,
|
||||||
|
ty: wgpu::BindingType::Texture {
|
||||||
|
sample_type: wgpu::TextureSampleType::Float { filterable: true },
|
||||||
|
view_dimension: wgpu::TextureViewDimension::D2,
|
||||||
|
multisampled: false,
|
||||||
|
},
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 1,
|
||||||
|
visibility: wgpu::ShaderStages::FRAGMENT,
|
||||||
|
ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
wgpu::BindGroupLayoutEntry {
|
||||||
|
binding: 2,
|
||||||
|
visibility: wgpu::ShaderStages::FRAGMENT,
|
||||||
|
ty: wgpu::BindingType::Buffer {
|
||||||
|
ty: wgpu::BufferBindingType::Uniform,
|
||||||
|
has_dynamic_offset: false,
|
||||||
|
min_binding_size: None,
|
||||||
|
},
|
||||||
|
count: None,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
// Compute pipeline
|
||||||
|
let compute_pipeline_layout =
|
||||||
|
device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
|
||||||
|
label: Some("cqt_compute_pipeline_layout"),
|
||||||
|
bind_group_layouts: &[&compute_bind_group_layout],
|
||||||
|
push_constant_ranges: &[],
|
||||||
|
});
|
||||||
|
|
||||||
|
let compute_pipeline =
|
||||||
|
device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
|
||||||
|
label: Some("cqt_compute_pipeline"),
|
||||||
|
layout: Some(&compute_pipeline_layout),
|
||||||
|
module: &compute_shader,
|
||||||
|
entry_point: Some("main"),
|
||||||
|
compilation_options: Default::default(),
|
||||||
|
cache: None,
|
||||||
|
});
|
||||||
|
|
||||||
|
// Render pipeline
|
||||||
|
let render_pipeline_layout =
|
||||||
|
device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
|
||||||
|
label: Some("cqt_render_pipeline_layout"),
|
||||||
|
bind_group_layouts: &[&render_bind_group_layout],
|
||||||
|
push_constant_ranges: &[],
|
||||||
|
});
|
||||||
|
|
||||||
|
let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
|
||||||
|
label: Some("cqt_render_pipeline"),
|
||||||
|
layout: Some(&render_pipeline_layout),
|
||||||
|
vertex: wgpu::VertexState {
|
||||||
|
module: &render_shader,
|
||||||
|
entry_point: Some("vs_main"),
|
||||||
|
buffers: &[],
|
||||||
|
compilation_options: Default::default(),
|
||||||
|
},
|
||||||
|
fragment: Some(wgpu::FragmentState {
|
||||||
|
module: &render_shader,
|
||||||
|
entry_point: Some("fs_main"),
|
||||||
|
targets: &[Some(wgpu::ColorTargetState {
|
||||||
|
format: target_format,
|
||||||
|
blend: Some(wgpu::BlendState::ALPHA_BLENDING),
|
||||||
|
write_mask: wgpu::ColorWrites::ALL,
|
||||||
|
})],
|
||||||
|
compilation_options: Default::default(),
|
||||||
|
}),
|
||||||
|
primitive: wgpu::PrimitiveState {
|
||||||
|
topology: wgpu::PrimitiveTopology::TriangleList,
|
||||||
|
..Default::default()
|
||||||
|
},
|
||||||
|
depth_stencil: None,
|
||||||
|
multisample: wgpu::MultisampleState::default(),
|
||||||
|
multiview: None,
|
||||||
|
cache: None,
|
||||||
|
});
|
||||||
|
|
||||||
|
// Bilinear sampler for smooth interpolation in render shader
|
||||||
|
let sampler = device.create_sampler(&wgpu::SamplerDescriptor {
|
||||||
|
label: Some("cqt_sampler"),
|
||||||
|
mag_filter: wgpu::FilterMode::Linear,
|
||||||
|
min_filter: wgpu::FilterMode::Linear,
|
||||||
|
mipmap_filter: wgpu::FilterMode::Nearest,
|
||||||
|
..Default::default()
|
||||||
|
});
|
||||||
|
|
||||||
|
Self {
|
||||||
|
entries: HashMap::new(),
|
||||||
|
compute_pipeline,
|
||||||
|
compute_bind_group_layout,
|
||||||
|
render_pipeline,
|
||||||
|
render_bind_group_layout,
|
||||||
|
sampler,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Create a cache entry for a pool index, referencing the waveform texture.
|
||||||
|
fn ensure_cache_entry(
|
||||||
|
&mut self,
|
||||||
|
device: &wgpu::Device,
|
||||||
|
pool_index: usize,
|
||||||
|
waveform_texture_view: wgpu::TextureView,
|
||||||
|
total_frames: u64,
|
||||||
|
sample_rate: u32,
|
||||||
|
) {
|
||||||
|
// If entry exists, check if waveform data has grown (progressive decode)
|
||||||
|
if let Some(entry) = self.entries.get_mut(&pool_index) {
|
||||||
|
if entry.waveform_total_frames != total_frames {
|
||||||
|
// Waveform texture updated in-place with more data.
|
||||||
|
// The texture view is still valid (no destroy/recreate),
|
||||||
|
// so just update total_frames to allow computing new columns.
|
||||||
|
entry.waveform_total_frames = total_frames;
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create cache texture (ring buffer)
|
||||||
|
let cache_texture = device.create_texture(&wgpu::TextureDescriptor {
|
||||||
|
label: Some(&format!("cqt_cache_{}", pool_index)),
|
||||||
|
size: wgpu::Extent3d {
|
||||||
|
width: CACHE_CAPACITY,
|
||||||
|
height: FREQ_BINS,
|
||||||
|
depth_or_array_layers: 1,
|
||||||
|
},
|
||||||
|
mip_level_count: 1,
|
||||||
|
sample_count: 1,
|
||||||
|
dimension: wgpu::TextureDimension::D2,
|
||||||
|
format: wgpu::TextureFormat::Rgba16Float,
|
||||||
|
usage: wgpu::TextureUsages::STORAGE_BINDING | wgpu::TextureUsages::TEXTURE_BINDING,
|
||||||
|
view_formats: &[],
|
||||||
|
});
|
||||||
|
|
||||||
|
let cache_texture_view = cache_texture.create_view(&wgpu::TextureViewDescriptor {
|
||||||
|
label: Some(&format!("cqt_cache_{}_view", pool_index)),
|
||||||
|
..Default::default()
|
||||||
|
});
|
||||||
|
|
||||||
|
let cache_storage_view = cache_texture.create_view(&wgpu::TextureViewDescriptor {
|
||||||
|
label: Some(&format!("cqt_cache_{}_storage", pool_index)),
|
||||||
|
..Default::default()
|
||||||
|
});
|
||||||
|
|
||||||
|
// Precompute bin params
|
||||||
|
let bin_params = precompute_bin_params(sample_rate);
|
||||||
|
let bin_params_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
|
||||||
|
label: Some(&format!("cqt_bins_{}", pool_index)),
|
||||||
|
contents: bytemuck::cast_slice(&bin_params),
|
||||||
|
usage: wgpu::BufferUsages::STORAGE,
|
||||||
|
});
|
||||||
|
|
||||||
|
// Compute uniform buffer
|
||||||
|
let compute_uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
|
||||||
|
label: Some(&format!("cqt_compute_uniforms_{}", pool_index)),
|
||||||
|
size: std::mem::size_of::<CqtComputeParams>() as u64,
|
||||||
|
usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
|
||||||
|
mapped_at_creation: false,
|
||||||
|
});
|
||||||
|
|
||||||
|
// Render uniform buffer
|
||||||
|
        let render_uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
            label: Some(&format!("cqt_render_uniforms_{}", pool_index)),
            size: std::mem::size_of::<CqtRenderParams>() as u64,
            usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
            mapped_at_creation: false,
        });

        // Compute bind group
        let compute_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: Some(&format!("cqt_compute_bg_{}", pool_index)),
            layout: &self.compute_bind_group_layout,
            entries: &[
                wgpu::BindGroupEntry {
                    binding: 0,
                    resource: wgpu::BindingResource::TextureView(&waveform_texture_view),
                },
                wgpu::BindGroupEntry {
                    binding: 1,
                    resource: wgpu::BindingResource::TextureView(&cache_storage_view),
                },
                wgpu::BindGroupEntry {
                    binding: 2,
                    resource: compute_uniform_buffer.as_entire_binding(),
                },
                wgpu::BindGroupEntry {
                    binding: 3,
                    resource: bin_params_buffer.as_entire_binding(),
                },
            ],
        });

        // Render bind group
        let render_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            label: Some(&format!("cqt_render_bg_{}", pool_index)),
            layout: &self.render_bind_group_layout,
            entries: &[
                wgpu::BindGroupEntry {
                    binding: 0,
                    resource: wgpu::BindingResource::TextureView(&cache_texture_view),
                },
                wgpu::BindGroupEntry {
                    binding: 1,
                    resource: wgpu::BindingResource::Sampler(&self.sampler),
                },
                wgpu::BindGroupEntry {
                    binding: 2,
                    resource: render_uniform_buffer.as_entire_binding(),
                },
            ],
        });

        self.entries.insert(
            pool_index,
            CqtCacheEntry {
                cache_texture,
                cache_texture_view,
                cache_storage_view,
                cache_capacity: CACHE_CAPACITY,
                freq_bins: FREQ_BINS,
                cache_start_column: 0,
                cache_valid_start: 0,
                cache_valid_end: 0,
                bin_params_buffer,
                waveform_texture_view,
                waveform_total_frames: total_frames,
                compute_bind_group,
                compute_uniform_buffer,
                render_bind_group,
                render_uniform_buffer,
                sample_rate,
                current_stride: 1,
            },
        );
    }
}

/// Dispatch compute shader to fill CQT columns in the cache.
/// Free function to avoid borrow conflicts with CqtGpuResources.entries.
fn dispatch_cqt_compute(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    pipeline: &wgpu::ComputePipeline,
    entry: &CqtCacheEntry,
    start_col: i64,
    end_col: i64,
    stride: u32,
) -> Vec<wgpu::CommandBuffer> {
    // Guard before computing: a negative (end_col - start_col) would wrap when cast to u32
    if end_col <= start_col {
        return Vec::new();
    }

    // Number of cache slots needed (each slot covers `stride` global columns)
    let num_cols = ((end_col - start_col) as u32 / stride).max(1);

    // Clamp to max per frame
    let num_cols = num_cols.min(MAX_COLS_PER_FRAME);

    // Calculate ring buffer write offset (in cache slots, not global columns)
    let cache_write_offset =
        (((start_col - entry.cache_start_column) / stride as i64) as u32) % entry.cache_capacity;

    let params = CqtComputeParams {
        hop_size: HOP_SIZE,
        freq_bins: FREQ_BINS,
        cache_capacity: entry.cache_capacity,
        cache_write_offset,
        num_columns: num_cols,
        column_start: start_col.max(0) as u32,
        tex_width: WAVEFORM_TEX_WIDTH,
        total_frames: entry.waveform_total_frames as u32,
        sample_rate: entry.sample_rate as f32,
        column_stride: stride,
        _pad1: 0,
        _pad2: 0,
    };

    queue.write_buffer(
        &entry.compute_uniform_buffer,
        0,
        bytemuck::cast_slice(&[params]),
    );

    let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
        label: Some("cqt_compute_encoder"),
    });

    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor {
            label: Some("cqt_compute_pass"),
            timestamp_writes: None,
        });
        pass.set_pipeline(pipeline);
        pass.set_bind_group(0, &entry.compute_bind_group, &[]);

        // Dispatch: X = ceil(freq_bins / 64), Y = num_columns
        let workgroups_x = (FREQ_BINS + 63) / 64;
        pass.dispatch_workgroups(workgroups_x, num_cols, 1);
    }

    vec![encoder.finish()]
}

impl egui_wgpu::CallbackTrait for CqtCallback {
    fn prepare(
        &self,
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        _screen_descriptor: &egui_wgpu::ScreenDescriptor,
        _egui_encoder: &mut wgpu::CommandEncoder,
        resources: &mut egui_wgpu::CallbackResources,
    ) -> Vec<wgpu::CommandBuffer> {
        // Initialize CQT resources if needed
        if !resources.contains::<CqtGpuResources>() {
            resources.insert(CqtGpuResources::new(device, self.target_format));
        }

        // First, check if waveform data is available and extract what we need
        let waveform_info: Option<(wgpu::TextureView, u64)> = {
            let waveform_gpu: Option<&WaveformGpuResources> = resources.get();
            waveform_gpu.and_then(|wgpu_res| {
                wgpu_res.entries.get(&self.pool_index).map(|entry| {
                    // Clone the texture view (Arc internally, cheap)
                    (entry.texture_views[0].clone(), entry.total_frames)
                })
            })
        };

        let (waveform_view, total_frames) = match waveform_info {
            Some(info) => info,
            None => return Vec::new(), // Waveform not uploaded yet
        };

        let cqt_gpu: &mut CqtGpuResources = resources.get_mut().unwrap();

        // Ensure cache entry exists
        cqt_gpu.ensure_cache_entry(
            device,
            self.pool_index,
            waveform_view,
            total_frames,
            self.sample_rate,
        );

        // Determine which columns need computing
        let stride = self.stride.max(1) as i64;
        let vis_start = self.visible_col_start.max(0);
        let max_col = (total_frames as i64) / HOP_SIZE as i64;
        let vis_end_raw = self.visible_col_end.min(max_col);
        // Clamp visible range to cache capacity (in global columns, accounting for stride)
        let vis_end = vis_end_raw.min(vis_start + CACHE_CAPACITY as i64 * stride);

        // If stride changed, invalidate cache
        {
            let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
            if entry.current_stride != self.stride {
                entry.current_stride = self.stride;
                entry.cache_start_column = vis_start;
                entry.cache_valid_start = vis_start;
                entry.cache_valid_end = vis_start;
            }
        }

        // Stride-aware max columns per frame (in global column units)
        let max_cols_global = MAX_COLS_PER_FRAME as i64 * stride;

        // Read current cache state, compute what's needed, then update state.
        // We split borrows carefully: read entry state, compute, then write back.
        let cmds;
        {
            let entry = cqt_gpu.entries.get(&self.pool_index).unwrap();
            let cache_valid_start = entry.cache_valid_start;
            let cache_valid_end = entry.cache_valid_end;

            if vis_start >= vis_end {
                cmds = Vec::new();
            } else if vis_start >= cache_valid_start && vis_end <= cache_valid_end {
                // Fully cached
                cmds = Vec::new();
            } else if vis_start >= cache_valid_start
                && vis_start < cache_valid_end
                && vis_end > cache_valid_end
            {
                // Scrolling right — align to stride boundary
                let actual_end =
                    cache_valid_end + (vis_end - cache_valid_end).min(max_cols_global);
                cmds = dispatch_cqt_compute(
                    device, queue, &cqt_gpu.compute_pipeline, entry,
                    cache_valid_end, actual_end, self.stride,
                );
                let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
                entry.cache_valid_end = actual_end;
                let cache_cap_global = entry.cache_capacity as i64 * stride;
                if entry.cache_valid_end - entry.cache_valid_start > cache_cap_global {
                    entry.cache_valid_start = entry.cache_valid_end - cache_cap_global;
                    entry.cache_start_column = entry.cache_valid_start;
                }
            } else if vis_end <= cache_valid_end
                && vis_end > cache_valid_start
                && vis_start < cache_valid_start
            {
                // Scrolling left
                let actual_start =
                    cache_valid_start - (cache_valid_start - vis_start).min(max_cols_global);
                cmds = dispatch_cqt_compute(
                    device, queue, &cqt_gpu.compute_pipeline, entry,
                    actual_start, cache_valid_start, self.stride,
                );
                let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
                entry.cache_valid_start = actual_start;
                entry.cache_start_column = actual_start;
                let cache_cap_global = entry.cache_capacity as i64 * stride;
                if entry.cache_valid_end - entry.cache_valid_start > cache_cap_global {
                    entry.cache_valid_end = entry.cache_valid_start + cache_cap_global;
                }
            } else {
                // No overlap or first compute — reset cache
                let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
                entry.cache_start_column = vis_start;
                entry.cache_valid_start = vis_start;
                entry.cache_valid_end = vis_start;

                let compute_end = vis_start + (vis_end - vis_start).min(max_cols_global);
                let entry = cqt_gpu.entries.get(&self.pool_index).unwrap();
                cmds = dispatch_cqt_compute(
                    device, queue, &cqt_gpu.compute_pipeline, entry,
                    vis_start, compute_end, self.stride,
                );
                let entry = cqt_gpu.entries.get_mut(&self.pool_index).unwrap();
                entry.cache_valid_end = compute_end;
            }
        }

        // Update render uniform buffer
        let entry = cqt_gpu.entries.get(&self.pool_index).unwrap();
        let mut params = self.params;
        params.cache_start_column = entry.cache_start_column as f32;
        params.cache_valid_start = entry.cache_valid_start as f32;
        params.cache_valid_end = entry.cache_valid_end as f32;
        params.cache_capacity = entry.cache_capacity as f32;
        params.column_stride = self.stride as f32;

        queue.write_buffer(
            &entry.render_uniform_buffer,
            0,
            bytemuck::cast_slice(&[params]),
        );

        cmds
    }

    fn paint(
        &self,
        _info: eframe::egui::PaintCallbackInfo,
        render_pass: &mut wgpu::RenderPass<'static>,
        resources: &egui_wgpu::CallbackResources,
    ) {
        let cqt_gpu: &CqtGpuResources = match resources.get() {
            Some(r) => r,
            None => return,
        };

        let entry = match cqt_gpu.entries.get(&self.pool_index) {
            Some(e) => e,
            None => return,
        };

        // Don't render if nothing is cached yet
        if entry.cache_valid_start >= entry.cache_valid_end {
            return;
        }

        render_pass.set_pipeline(&cqt_gpu.render_pipeline);
        render_pass.set_bind_group(0, &entry.render_bind_group, &[]);
        render_pass.draw(0..3, 0..1);
    }
}
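The cache logic above stores CQT columns in a ring buffer: each cache slot covers `stride` global columns, and the slot index wraps modulo the cache capacity. A minimal sketch of that addressing, with illustrative names not taken from the codebase (the original does the division on `u32`, which behaves the same for non-negative offsets):

```rust
// Map a global spectrogram column to a ring-buffer cache slot.
// Each slot covers `stride` global columns; slots wrap modulo `capacity`.
fn cache_slot(global_col: i64, cache_start_column: i64, stride: i64, capacity: i64) -> i64 {
    ((global_col - cache_start_column) / stride).rem_euclid(capacity)
}

fn main() {
    // capacity 4 slots, stride 2 columns per slot, cache starts at global column 10
    assert_eq!(cache_slot(10, 10, 2, 4), 0);
    assert_eq!(cache_slot(12, 10, 2, 4), 1);
    // 4 slots past the start wraps back to slot 0
    assert_eq!(cache_slot(18, 10, 2, 4), 0);
}
```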
@@ -4,7 +4,7 @@
 //! using the actual WGSL shaders.
 
 use lightningbeam_core::effect::{EffectDefinition, EffectInstance};
-use lightningbeam_core::gpu::effect_processor::{EffectProcessor, EffectUniforms};
+use lightningbeam_core::gpu::effect_processor::EffectProcessor;
 use std::collections::HashMap;
 use uuid::Uuid;
 
@@ -19,6 +19,7 @@ pub struct EffectThumbnailGenerator {
     /// Effect processor for compiling and applying shaders
     effect_processor: EffectProcessor,
    /// Source texture (still-life image scaled to thumbnail size)
+    #[allow(dead_code)] // Must stay alive — source_view is a view into this texture
     source_texture: wgpu::Texture,
     /// View of the source texture
     source_view: wgpu::TextureView,
@@ -101,7 +102,7 @@ impl EffectThumbnailGenerator {
         let dest_view = dest_texture.create_view(&wgpu::TextureViewDescriptor::default());
 
         // Create readback buffer
-        let buffer_size = (EFFECT_THUMBNAIL_SIZE * EFFECT_THUMBNAIL_SIZE * 4) as u64;
+        let _buffer_size = (EFFECT_THUMBNAIL_SIZE * EFFECT_THUMBNAIL_SIZE * 4) as u64;
         // Align to 256 bytes for wgpu requirements
         let aligned_bytes_per_row = ((EFFECT_THUMBNAIL_SIZE * 4 + 255) / 256) * 256;
         let readback_buffer = device.create_buffer(&wgpu::BufferDescriptor {
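The alignment expression in the hunk above rounds a row's byte size up to the next multiple of 256, which wgpu requires for `bytes_per_row` in texture-to-buffer copies (`wgpu::COPY_BYTES_PER_ROW_ALIGNMENT` is 256). A small sketch of the same round-up trick, with a hypothetical helper name:

```rust
// Round an RGBA8 row (4 bytes per pixel) up to wgpu's 256-byte
// bytes_per_row alignment: add (align - 1), then truncate to a multiple.
fn aligned_bytes_per_row(width_px: u32) -> u32 {
    let unpadded = width_px * 4;
    ((unpadded + 255) / 256) * 256
}

fn main() {
    // 100 px -> 400 bytes, padded up to 512
    assert_eq!(aligned_bytes_per_row(100), 512);
    // 64 px -> 256 bytes, already aligned
    assert_eq!(aligned_bytes_per_row(64), 256);
}
```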
@@ -160,11 +161,13 @@ impl EffectThumbnailGenerator {
     }
 
     /// Get a cached thumbnail, or None if not yet generated
+    #[allow(dead_code)]
     pub fn get_thumbnail(&self, effect_id: &Uuid) -> Option<&Vec<u8>> {
         self.thumbnail_cache.get(effect_id)
     }
 
     /// Check if a thumbnail is cached
+    #[allow(dead_code)]
     pub fn has_thumbnail(&self, effect_id: &Uuid) -> bool {
         self.thumbnail_cache.contains_key(effect_id)
     }
@@ -1,3 +1,4 @@
+#![allow(dead_code)]
 //! Audio export functionality
 //!
 //! Exports audio from the timeline to various formats:
@@ -168,7 +169,7 @@ fn export_audio_ffmpeg_mp3<P: AsRef<Path>>(
     // Step 3: Encode frames and write to output
     // Convert interleaved f32 samples to planar i16 format
     let num_frames = pcm_samples.len() / settings.channels as usize;
-    let mut planar_samples = convert_to_planar_i16(&pcm_samples, settings.channels);
+    let planar_samples = convert_to_planar_i16(&pcm_samples, settings.channels);
 
     // Get encoder frame size
     let frame_size = encoder.frame_size();
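The `convert_to_planar_i16` call above turns interleaved f32 samples into per-channel i16 planes for the encoder; its implementation is not shown in this diff. A sketch of what such a conversion typically looks like, under that assumption (the helper name and scaling choice here are illustrative):

```rust
// Convert interleaved f32 audio ([L0, R0, L1, R1, ...], range -1.0..=1.0)
// into one Vec<i16> per channel, as planar encoders expect.
fn to_planar_i16(interleaved: &[f32], channels: usize) -> Vec<Vec<i16>> {
    let frames = interleaved.len() / channels;
    let mut planar = vec![Vec::with_capacity(frames); channels];
    for frame in 0..frames {
        for ch in 0..channels {
            // Clamp, then scale to the symmetric i16 range.
            let s = interleaved[frame * channels + ch].clamp(-1.0, 1.0);
            planar[ch].push((s * i16::MAX as f32) as i16);
        }
    }
    planar
}

fn main() {
    // Two stereo frames: (0.0, 1.0) and (-1.0, 0.0)
    let planar = to_planar_i16(&[0.0, 1.0, -1.0, 0.0], 2);
    assert_eq!(planar[0], vec![0, -32767]); // left channel
    assert_eq!(planar[1], vec![32767, 0]); // right channel
}
```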
@@ -182,7 +182,7 @@ impl ExportDialog {
             ("Podcast AAC", AudioExportSettings::podcast_aac()),
         ];
 
-        egui::ComboBox::from_id_source("export_preset")
+        egui::ComboBox::from_id_salt("export_preset")
             .selected_text(presets[self.selected_audio_preset].0)
             .show_ui(ui, |ui| {
                 for (i, (name, _)) in presets.iter().enumerate() {
@@ -207,7 +207,7 @@ impl ExportDialog {
         ui.heading("Format");
         ui.horizontal(|ui| {
             ui.label("Format:");
-            egui::ComboBox::from_id_source("audio_format")
+            egui::ComboBox::from_id_salt("audio_format")
                 .selected_text(self.audio_settings.format.name())
                 .show_ui(ui, |ui| {
                     ui.selectable_value(&mut self.audio_settings.format, AudioFormat::Wav, "WAV (Uncompressed)");
@@ -222,7 +222,7 @@ impl ExportDialog {
         // Audio settings
         ui.horizontal(|ui| {
             ui.label("Sample Rate:");
-            egui::ComboBox::from_id_source("sample_rate")
+            egui::ComboBox::from_id_salt("sample_rate")
                 .selected_text(format!("{} Hz", self.audio_settings.sample_rate))
                 .show_ui(ui, |ui| {
                     ui.selectable_value(&mut self.audio_settings.sample_rate, 44100, "44100 Hz");
@@ -251,7 +251,7 @@ impl ExportDialog {
         if self.audio_settings.format.uses_bitrate() {
             ui.horizontal(|ui| {
                 ui.label("Bitrate:");
-                egui::ComboBox::from_id_source("bitrate")
+                egui::ComboBox::from_id_salt("bitrate")
                     .selected_text(format!("{} kbps", self.audio_settings.bitrate_kbps))
                     .show_ui(ui, |ui| {
                         ui.selectable_value(&mut self.audio_settings.bitrate_kbps, 128, "128 kbps");
@@ -269,7 +269,7 @@ impl ExportDialog {
         ui.heading("Codec");
         ui.horizontal(|ui| {
             ui.label("Codec:");
-            egui::ComboBox::from_id_source("video_codec")
+            egui::ComboBox::from_id_salt("video_codec")
                 .selected_text(format!("{:?}", self.video_settings.codec))
                 .show_ui(ui, |ui| {
                     ui.selectable_value(&mut self.video_settings.codec, VideoCodec::H264, "H.264 (Most Compatible)");
@@ -287,13 +287,13 @@ impl ExportDialog {
         ui.horizontal(|ui| {
             ui.label("Width:");
             let mut custom_width = self.video_settings.width.unwrap_or(1920);
-            if ui.add(egui::DragValue::new(&mut custom_width).clamp_range(1..=7680)).changed() {
+            if ui.add(egui::DragValue::new(&mut custom_width).range(1..=7680)).changed() {
                 self.video_settings.width = Some(custom_width);
             }
 
             ui.label("Height:");
             let mut custom_height = self.video_settings.height.unwrap_or(1080);
-            if ui.add(egui::DragValue::new(&mut custom_height).clamp_range(1..=4320)).changed() {
+            if ui.add(egui::DragValue::new(&mut custom_height).range(1..=4320)).changed() {
                 self.video_settings.height = Some(custom_height);
             }
         });
@@ -320,7 +320,7 @@ impl ExportDialog {
         ui.heading("Framerate");
         ui.horizontal(|ui| {
             ui.label("FPS:");
-            egui::ComboBox::from_id_source("framerate")
+            egui::ComboBox::from_id_salt("framerate")
                 .selected_text(format!("{}", self.video_settings.framerate as u32))
                 .show_ui(ui, |ui| {
                     ui.selectable_value(&mut self.video_settings.framerate, 24.0, "24");
@@ -335,7 +335,7 @@ impl ExportDialog {
         ui.heading("Quality");
         ui.horizontal(|ui| {
             ui.label("Quality:");
-            egui::ComboBox::from_id_source("video_quality")
+            egui::ComboBox::from_id_salt("video_quality")
                 .selected_text(self.video_settings.quality.name())
                 .show_ui(ui, |ui| {
                     ui.selectable_value(&mut self.video_settings.quality, VideoQuality::Low, VideoQuality::Low.name());
@@ -363,13 +363,13 @@ impl ExportDialog {
             ui.label("Start:");
             ui.add(egui::DragValue::new(start_time)
                 .speed(0.1)
-                .clamp_range(0.0..=*end_time)
+                .range(0.0..=*end_time)
                 .suffix(" s"));
 
             ui.label("End:");
             ui.add(egui::DragValue::new(end_time)
                 .speed(0.1)
-                .clamp_range(*start_time..=f64::MAX)
+                .range(*start_time..=f64::MAX)
                 .suffix(" s"));
         });
@@ -42,6 +42,7 @@ pub struct VideoExportState {
     /// Start time in seconds
     start_time: f64,
     /// End time in seconds
+    #[allow(dead_code)]
     end_time: f64,
     /// Frames per second
     framerate: f64,
@@ -163,7 +164,7 @@ impl ExportOrchestrator {
     /// For parallel video+audio exports, returns combined progress.
     pub fn poll_progress(&mut self) -> Option<ExportProgress> {
         // Handle parallel video+audio export
-        if let Some(ref mut parallel) = self.parallel_export {
+        if let Some(ref mut _parallel) = self.parallel_export {
             return self.poll_parallel_progress();
         }
 
@@ -461,6 +462,7 @@ impl ExportOrchestrator {
     /// Wait for the export to complete
     ///
     /// This blocks until the export thread finishes.
+    #[allow(dead_code)]
     pub fn wait_for_completion(&mut self) {
         if let Some(handle) = self.thread_handle.take() {
             handle.join().ok();
@@ -915,7 +917,7 @@ impl ExportOrchestrator {
         }
 
         // Render to GPU (timed)
-        let render_start = Instant::now();
+        let _render_start = Instant::now();
         let encoder = video_exporter::render_frame_to_gpu_rgba(
             document, timestamp, width, height,
             device, queue, renderer, image_cache, video_manager,
@@ -1049,7 +1051,7 @@ impl ExportOrchestrator {
         // Determine dimensions from first frame
         let (width, height) = if let Some((_, _, ref y_plane, _, _)) = first_frame {
             // Calculate dimensions from Y plane size (full resolution, 1 byte per pixel)
-            let pixel_count = y_plane.len();
+            let _pixel_count = y_plane.len();
             // Use settings dimensions if provided, otherwise infer from buffer
             let w = settings.width.unwrap_or(1920); // Default to 1920 if not specified
             let h = settings.height.unwrap_or(1080); // Default to 1080 if not specified
@@ -1088,7 +1090,7 @@ impl ExportOrchestrator {
         println!("🧵 [ENCODER] Encoder initialized, ready to encode frames");
 
         // Process first frame
-        if let Some((frame_num, timestamp, y_plane, u_plane, v_plane)) = first_frame {
+        if let Some((_frame_num, timestamp, y_plane, u_plane, v_plane)) = first_frame {
             Self::encode_frame(
                 &mut encoder,
                 &mut output,
@@ -1115,7 +1117,7 @@ impl ExportOrchestrator {
             }
 
             match frame_rx.recv() {
-                Ok(VideoFrameMessage::Frame { frame_num, timestamp, y_plane, u_plane, v_plane }) => {
+                Ok(VideoFrameMessage::Frame { frame_num: _, timestamp, y_plane, u_plane, v_plane }) => {
                     Self::encode_frame(
                         &mut encoder,
                         &mut output,
@@ -216,7 +216,7 @@ impl ReadbackPipeline {
     /// Call this frequently to process completed transfers.
     pub fn poll_nonblocking(&mut self) -> Vec<ReadbackResult> {
         // Poll GPU without blocking
-        self.device.poll(wgpu::PollType::Poll);
+        let _ = self.device.poll(wgpu::PollType::Poll);
 
         // Collect all completed readbacks
         let mut results = Vec::new();
@@ -269,13 +269,14 @@ impl ReadbackPipeline {
     /// Flush pipeline and wait for all pending operations
     ///
     /// Call this at the end of export to ensure all frames are processed
+    #[allow(dead_code)]
     pub fn flush(&mut self) -> Vec<ReadbackResult> {
         let mut all_results = Vec::new();
 
         // Keep polling until all buffers are Free
         loop {
             // Poll for new completions
-            self.device.poll(wgpu::PollType::Poll);
+            let _ = self.device.poll(wgpu::PollType::Poll);
 
             while let Ok(result) = self.readback_rx.try_recv() {
                 self.buffers[result.buffer_id].state = BufferState::Mapped;
@@ -310,8 +311,4 @@ impl ReadbackPipeline {
         all_results
     }
 
-    /// Get buffer count currently in flight (for monitoring)
-    pub fn buffers_in_flight(&self) -> usize {
-        self.buffers.iter().filter(|b| b.state != BufferState::Free).count()
-    }
 }
@@ -1,3 +1,4 @@
+#![allow(dead_code)]
 //! Video export functionality
 //!
 //! Exports video from the timeline using FFmpeg encoding:
@@ -20,6 +20,7 @@ mod theme;
 use theme::{Theme, ThemeMode};
 
 mod waveform_gpu;
+mod cqt_gpu;
 
 mod config;
 use config::AppConfig;
@@ -401,6 +402,7 @@ enum FileCommand {
 }
 
 /// Progress updates from file operations worker
+#[allow(dead_code)] // EncodingAudio/DecodingAudio planned for granular progress reporting
 enum FileProgress {
     SerializingAudioPool,
     EncodingAudio { current: usize, total: usize },
@@ -426,6 +428,7 @@ enum FileOperation {
 
 /// Information about an imported asset (for auto-placement)
 #[derive(Debug, Clone)]
+#[allow(dead_code)] // name/duration populated for future import UX features
 struct ImportedAssetInfo {
     clip_id: uuid::Uuid,
     clip_type: panes::DragClipType,
@@ -617,6 +620,7 @@ enum RecordingArmMode {
     #[default]
     Auto,
     /// User explicitly arms tracks (multi-track recording workflow)
+    #[allow(dead_code)]
     Manual,
 }
 
@@ -648,12 +652,15 @@ struct EditorApp {
     rdp_tolerance: f64, // RDP simplification tolerance (default: 10.0)
     schneider_max_error: f64, // Schneider curve fitting max error (default: 30.0)
     // Audio engine integration
-    audio_stream: Option<cpal::Stream>, // Audio stream (must be kept alive)
-    audio_controller: Option<std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>>, // Shared audio controller
-    audio_event_rx: Option<rtrb::Consumer<daw_backend::AudioEvent>>, // Audio event receiver
-    audio_events_pending: std::sync::Arc<std::sync::atomic::AtomicBool>, // Flag set when audio events arrive
-    audio_sample_rate: u32, // Audio sample rate
-    audio_channels: u32, // Audio channel count
+    #[allow(dead_code)] // Must be kept alive to maintain audio output
+    audio_stream: Option<cpal::Stream>,
+    audio_controller: Option<std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>>,
+    audio_event_rx: Option<rtrb::Consumer<daw_backend::AudioEvent>>,
+    audio_events_pending: std::sync::Arc<std::sync::atomic::AtomicBool>,
+    #[allow(dead_code)] // Stored for future export/recording configuration
+    audio_sample_rate: u32,
+    #[allow(dead_code)]
+    audio_channels: u32,
     // Video decoding and management
     video_manager: std::sync::Arc<std::sync::Mutex<lightningbeam_core::video::VideoManager>>, // Shared video manager
     // Track ID mapping (Document layer UUIDs <-> daw-backend TrackIds)
@@ -665,8 +672,10 @@ struct EditorApp {
     playback_time: f64, // Current playback position in seconds (persistent - save with document)
     is_playing: bool, // Whether playback is currently active (transient - don't save)
     // Recording state
-    recording_arm_mode: RecordingArmMode, // How tracks are armed for recording
-    armed_layers: HashSet<Uuid>, // Explicitly armed layers (used in Manual mode)
+    #[allow(dead_code)] // Infrastructure for Manual recording mode
+    recording_arm_mode: RecordingArmMode,
+    #[allow(dead_code)]
+    armed_layers: HashSet<Uuid>,
     is_recording: bool, // Whether recording is currently active
     recording_clips: HashMap<Uuid, u32>, // layer_id -> backend clip_id during recording
     recording_start_time: f64, // Playback time when recording started
@@ -687,8 +696,8 @@ struct EditorApp {
 /// Cache for MIDI event data (keyed by backend midi_clip_id)
 /// Prevents repeated backend queries for the same MIDI clip
-/// Format: (timestamp, note_number, is_note_on)
-midi_event_cache: HashMap<u32, Vec<(f64, u8, bool)>>,
+/// Format: (timestamp, note_number, velocity, is_note_on)
+midi_event_cache: HashMap<u32, Vec<(f64, u8, u8, bool)>>,
 /// Cache for audio file durations to avoid repeated queries
 /// Format: pool_index -> duration in seconds
 audio_duration_cache: HashMap<usize, f64>,
@@ -751,6 +760,10 @@ impl EditorApp {
 fn new(cc: &eframe::CreationContext, layouts: Vec<LayoutDefinition>, theme: Theme) -> Self {
 let current_layout = layouts[0].layout.clone();
 
+// Disable egui's "Unaligned" debug overlay (on by default in debug builds)
+#[cfg(debug_assertions)]
+cc.egui_ctx.style_mut(|style| style.debug.show_unaligned = false);
+
 // Load application config
 let config = AppConfig::load();
@@ -937,7 +950,7 @@ impl EditorApp {
 egui::vec2(content_width, content_height),
 );
 
-ui.allocate_ui_at_rect(content_rect, |ui| {
+ui.scope_builder(egui::UiBuilder::new().max_rect(content_rect), |ui| {
 ui.vertical_centered(|ui| {
 // Title
 ui.heading(egui::RichText::new("Welcome to Lightningbeam!")
@@ -1467,10 +1480,6 @@ impl EditorApp {
 self.pane_instances.clear();
 }
 
-fn current_layout_def(&self) -> &LayoutDefinition {
-&self.layouts[self.current_layout_index]
-}
-
 fn apply_layout_action(&mut self, action: LayoutAction) {
 match action {
 LayoutAction::SplitHorizontal(path, percent) => {
@@ -1662,6 +1671,7 @@ impl EditorApp {
 let file = dialog.pick_file();
 
 if let Some(path) = file {
+let _import_timer = std::time::Instant::now();
 // Get extension and detect file type
 let extension = path.extension()
 .and_then(|e| e.to_str())
@@ -1690,12 +1700,16 @@ impl EditorApp {
 }
 };
 
+eprintln!("[TIMING] import took {:.1}ms", _import_timer.elapsed().as_secs_f64() * 1000.0);
 // Auto-place if this is "Import" (not "Import to Library")
 if auto_place {
 if let Some(asset_info) = imported_asset {
+let _place_timer = std::time::Instant::now();
 self.auto_place_asset(asset_info);
+eprintln!("[TIMING] auto_place took {:.1}ms", _place_timer.elapsed().as_secs_f64() * 1000.0);
 }
 }
+eprintln!("[TIMING] total import+place took {:.1}ms", _import_timer.elapsed().as_secs_f64() * 1000.0);
 }
 }
 MenuAction::Export => {
@@ -1711,46 +1725,72 @@ impl EditorApp {
 
 // Edit menu
 MenuAction::Undo => {
-if let Some(ref controller_arc) = self.audio_controller {
+let undo_succeeded = if let Some(ref controller_arc) = self.audio_controller {
 let mut controller = controller_arc.lock().unwrap();
 let mut backend_context = lightningbeam_core::action::BackendContext {
 audio_controller: Some(&mut *controller),
 layer_to_track_map: &self.layer_to_track_map,
 clip_instance_to_backend_map: &mut self.clip_instance_to_backend_map,
 };
 
 match self.action_executor.undo_with_backend(&mut backend_context) {
-Ok(true) => println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default()),
-Ok(false) => println!("Nothing to undo"),
-Err(e) => eprintln!("Undo failed: {}", e),
+Ok(true) => {
+println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default());
+true
+}
+Ok(false) => { println!("Nothing to undo"); false }
+Err(e) => { eprintln!("Undo failed: {}", e); false }
 }
 } else {
 match self.action_executor.undo() {
-Ok(true) => println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default()),
-Ok(false) => println!("Nothing to undo"),
-Err(e) => eprintln!("Undo failed: {}", e),
+Ok(true) => {
+println!("Undid: {}", self.action_executor.redo_description().unwrap_or_default());
+true
+}
+Ok(false) => { println!("Nothing to undo"); false }
+Err(e) => { eprintln!("Undo failed: {}", e); false }
 }
+}
+};
+// Rebuild MIDI cache after undo (backend_context dropped, borrows released)
+if undo_succeeded {
+let midi_update = self.action_executor.last_redo_midi_notes()
+.map(|(id, notes)| (id, notes.to_vec()));
+if let Some((clip_id, notes)) = midi_update {
+self.rebuild_midi_cache_entry(clip_id, &notes);
 }
 }
 }
 MenuAction::Redo => {
-if let Some(ref controller_arc) = self.audio_controller {
+let redo_succeeded = if let Some(ref controller_arc) = self.audio_controller {
 let mut controller = controller_arc.lock().unwrap();
 let mut backend_context = lightningbeam_core::action::BackendContext {
 audio_controller: Some(&mut *controller),
 layer_to_track_map: &self.layer_to_track_map,
 clip_instance_to_backend_map: &mut self.clip_instance_to_backend_map,
 };
 
 match self.action_executor.redo_with_backend(&mut backend_context) {
-Ok(true) => println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default()),
-Ok(false) => println!("Nothing to redo"),
-Err(e) => eprintln!("Redo failed: {}", e),
+Ok(true) => {
+println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default());
+true
+}
+Ok(false) => { println!("Nothing to redo"); false }
+Err(e) => { eprintln!("Redo failed: {}", e); false }
 }
 } else {
 match self.action_executor.redo() {
-Ok(true) => println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default()),
-Ok(false) => println!("Nothing to redo"),
-Err(e) => eprintln!("Redo failed: {}", e),
+Ok(true) => {
+println!("Redid: {}", self.action_executor.undo_description().unwrap_or_default());
+true
+}
+Ok(false) => { println!("Nothing to redo"); false }
+Err(e) => { eprintln!("Redo failed: {}", e); false }
 }
+}
+};
+// Rebuild MIDI cache after redo (backend_context dropped, borrows released)
+if redo_succeeded {
+let midi_update = self.action_executor.last_undo_midi_notes()
+.map(|(id, notes)| (id, notes.to_vec()));
+if let Some((clip_id, notes)) = midi_update {
+self.rebuild_midi_cache_entry(clip_id, &notes);
 }
 }
 }
@@ -2319,8 +2359,6 @@ impl EditorApp {
 }
 
 /// Import an audio file via daw-backend (async — non-blocking)
-///
-/// Reads only metadata from the file (sub-millisecond), then sends the path
 /// to the engine for async import. The engine memory-maps WAV files or sets
 /// up stream decoding for compressed formats. An `AudioFileReady` event is
 /// emitted when the file is playback-ready; the event handler populates the
@@ -2347,16 +2385,20 @@ impl EditorApp {
 let sample_rate = metadata.sample_rate;
 
 if let Some(ref controller_arc) = self.audio_controller {
-// Predict the pool index (engine assigns sequentially)
-let pool_index = self.action_executor.document().audio_clips.len();
-// Send async import command (non-blocking)
-{
+// Import synchronously to get the real pool index from the engine.
+// NOTE: briefly blocks the UI thread (sub-ms for PCM mmap; a few ms
+// for compressed streaming init).
+let pool_index = {
 let mut controller = controller_arc.lock().unwrap();
-controller.import_audio(path.to_path_buf());
-}
+match controller.import_audio_sync(path.to_path_buf()) {
+Ok(idx) => idx,
+Err(e) => {
+eprintln!("Failed to import audio '{}': {}", path.display(), e);
+return None;
+}
+}
+};
 
-// Create audio clip in document immediately (metadata is enough)
 let clip = AudioClip::new_sampled(&name, pool_index, duration);
 let clip_id = self.action_executor.document_mut().add_audio_clip(clip);
@@ -2377,6 +2419,18 @@ impl EditorApp {
 }
 }
 
+/// Rebuild a MIDI event cache entry from backend note format.
+/// Called after undo/redo to keep the cache consistent with the backend.
+fn rebuild_midi_cache_entry(&mut self, clip_id: u32, notes: &[(f64, u8, u8, f64)]) {
+let mut events: Vec<(f64, u8, u8, bool)> = Vec::with_capacity(notes.len() * 2);
+for &(start_time, note, velocity, duration) in notes {
+events.push((start_time, note, velocity, true));
+events.push((start_time + duration, note, velocity, false));
+}
+events.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
+self.midi_event_cache.insert(clip_id, events);
+}
+
 /// Import a MIDI file via daw-backend
 fn import_midi(&mut self, path: &std::path::Path) -> Option<ImportedAssetInfo> {
 use lightningbeam_core::clip::AudioClip;
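The `rebuild_midi_cache_entry` helper added in this hunk expands backend note tuples `(start, note, velocity, duration)` into a time-sorted stream of `(time, note, velocity, is_note_on)` events. A minimal standalone sketch of that conversion (the free-function form and the test values are illustrative, not the project's API):

```rust
/// Expand note tuples into a sorted on/off event stream, mirroring the
/// cache-rebuild logic: one note-on at `start`, one note-off at
/// `start + duration`, then a sort by timestamp for linear scanning.
fn notes_to_events(notes: &[(f64, u8, u8, f64)]) -> Vec<(f64, u8, u8, bool)> {
    let mut events = Vec::with_capacity(notes.len() * 2);
    for &(start, note, velocity, duration) in notes {
        events.push((start, note, velocity, true));             // note-on
        events.push((start + duration, note, velocity, false)); // note-off
    }
    // f64 timestamps are never NaN here, so partial_cmp is safe to unwrap.
    events.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
    events
}

fn main() {
    // Two overlapping notes: C4 for 1.0s, then E4 for 0.25s.
    let events = notes_to_events(&[(0.0, 60, 100, 1.0), (0.5, 64, 90, 0.25)]);
    assert_eq!(events.len(), 4);
    // The E4 note-off (t=0.75) sorts before the C4 note-off (t=1.0).
    assert_eq!(events[2], (0.75, 64, 90, false));
}
```

Sorting matters because the renderer pairs each note-off with its open note-on while scanning forward in time.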
@@ -2392,15 +2446,15 @@ impl EditorApp {
 let duration = midi_clip.duration;
 let event_count = midi_clip.events.len();
 
-// Process MIDI events to cache format: (timestamp, note_number, is_note_on)
+// Process MIDI events to cache format: (timestamp, note_number, velocity, is_note_on)
 // Filter to note events only (status 0x90 = note-on, 0x80 = note-off)
-let processed_events: Vec<(f64, u8, bool)> = midi_clip.events.iter()
+let processed_events: Vec<(f64, u8, u8, bool)> = midi_clip.events.iter()
 .filter_map(|event| {
 let status_type = event.status & 0xF0;
 if status_type == 0x90 || status_type == 0x80 {
 // Note-on is 0x90 with velocity > 0, Note-off is 0x80 or velocity = 0
 let is_note_on = status_type == 0x90 && event.data2 > 0;
-Some((event.timestamp, event.data1, is_note_on))
+Some((event.timestamp, event.data1, event.data2, is_note_on))
 } else {
 None // Ignore non-note events (CC, pitch bend, etc.)
 }
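The note filtering used by the import path can be sketched in isolation. The function name and flat inputs below are hypothetical; the status-nibble masking and the velocity-0 note-off rule follow standard MIDI channel-voice semantics as used in this hunk:

```rust
/// Classify a raw MIDI channel-voice message as a note event.
/// 0x9n with velocity > 0 is note-on; 0x8n, or 0x9n with velocity 0
/// (the "running status" idiom), is note-off. Everything else is skipped.
fn classify_note_event(
    status: u8,
    data1: u8,    // note number
    data2: u8,    // velocity
    timestamp: f64,
) -> Option<(f64, u8, u8, bool)> {
    let status_type = status & 0xF0; // strip the channel nibble
    if status_type == 0x90 || status_type == 0x80 {
        let is_note_on = status_type == 0x90 && data2 > 0;
        Some((timestamp, data1, data2, is_note_on))
    } else {
        None // CC, pitch bend, program change, ...
    }
}

fn main() {
    // Note-on, channel 3 (0x92), middle C, velocity 100.
    assert_eq!(classify_note_event(0x92, 60, 100, 0.0), Some((0.0, 60, 100, true)));
    // Note-on status with velocity 0 counts as note-off.
    assert_eq!(classify_note_event(0x90, 60, 0, 1.0), Some((1.0, 60, 0, false)));
    // Control change (0xB0) is ignored.
    assert_eq!(classify_note_event(0xB0, 7, 127, 2.0), None);
}
```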
@@ -2468,7 +2522,7 @@ impl EditorApp {
 };
 
 // Create video clip with real metadata
-let mut clip = VideoClip::new(
+let clip = VideoClip::new(
 &name,
 path_str.clone(),
 metadata.width as f64,
@@ -2699,10 +2753,37 @@ impl EditorApp {
 // Get the newly created layer ID (it's the last child in the document)
 let doc = self.action_executor.document();
 if let Some(last_layer) = doc.root.children.last() {
-target_layer_id = Some(last_layer.id());
+let layer_id = last_layer.id();
+target_layer_id = Some(layer_id);
 
 // Update active layer to the new layer
 self.active_layer_id = target_layer_id;
+
+// Create a backend audio/MIDI track and add the mapping
+if let Some(ref controller_arc) = self.audio_controller {
+let mut controller = controller_arc.lock().unwrap();
+match asset_info.clip_type {
+panes::DragClipType::AudioSampled => {
+match controller.create_audio_track_sync(layer_name.clone()) {
+Ok(track_id) => {
+self.layer_to_track_map.insert(layer_id, track_id);
+self.track_to_layer_map.insert(track_id, layer_id);
+}
+Err(e) => eprintln!("Failed to create audio track for auto-place: {}", e),
+}
+}
+panes::DragClipType::AudioMidi => {
+match controller.create_midi_track_sync(layer_name.clone()) {
+Ok(track_id) => {
+self.layer_to_track_map.insert(layer_id, track_id);
+self.track_to_layer_map.insert(track_id, layer_id);
+}
+Err(e) => eprintln!("Failed to create MIDI track for auto-place: {}", e),
+}
+}
+_ => {} // Other types don't need backend tracks
+}
+}
 }
 }
@@ -3005,6 +3086,8 @@ impl EditorApp {
 
 impl eframe::App for EditorApp {
 fn update(&mut self, ctx: &egui::Context, frame: &mut eframe::Frame) {
+let _frame_start = std::time::Instant::now();
+
 // Disable egui's built-in Ctrl+Plus/Minus zoom behavior
 // We handle zoom ourselves for the Stage pane
 ctx.options_mut(|o| {
@@ -3036,37 +3119,10 @@ impl eframe::App for EditorApp {
 // Will switch to editor mode when file finishes loading
 }
 
-// Fetch missing raw audio on-demand (for lazy loading after project load)
-// Collect pool indices that need raw audio data
-let missing_raw_audio: Vec<usize> = self.action_executor.document()
-.audio_clips.values()
-.filter_map(|clip| {
-if let lightningbeam_core::clip::AudioClipType::Sampled { audio_pool_index } = &clip.clip_type {
-if !self.raw_audio_cache.contains_key(audio_pool_index) {
-Some(*audio_pool_index)
-} else {
-None
-}
-} else {
-None
-}
-})
-.collect();
-
-// Fetch missing raw audio samples
-for pool_index in missing_raw_audio {
-if let Some(ref controller_arc) = self.audio_controller {
-let mut controller = controller_arc.lock().unwrap();
-match controller.get_pool_audio_samples(pool_index) {
-Ok((samples, sr, ch)) => {
-self.raw_audio_cache.insert(pool_index, (samples, sr, ch));
-self.waveform_gpu_dirty.insert(pool_index);
-self.audio_pools_with_new_waveforms.insert(pool_index);
-}
-Err(e) => eprintln!("Failed to fetch raw audio for pool {}: {}", pool_index, e),
-}
-}
-}
+// NOTE: Missing raw audio samples for newly imported files will arrive
+// via AudioDecodeProgress events (compressed) or inline with AudioFileReady
+// (PCM). No blocking query needed here.
+// For project loading, audio files are re-imported which also sends events.
 
 // Initialize and update effect thumbnail generator (GPU-based effect previews)
 if let Some(render_state) = frame.wgpu_render_state() {
@@ -3221,6 +3277,7 @@ impl eframe::App for EditorApp {
 ctx.request_repaint();
 }
 
+let _pre_events_ms = _frame_start.elapsed().as_secs_f64() * 1000.0;
 // Check if audio events are pending and request repaint if needed
 if self.audio_events_pending.load(std::sync::atomic::Ordering::Relaxed) {
 ctx.request_repaint();
@@ -3439,13 +3496,103 @@ impl eframe::App for EditorApp {
 self.recording_layer_id = None;
 ctx.request_repaint();
 }
-AudioEvent::MidiRecordingProgress(_track_id, _clip_id, duration, _notes) => {
-println!("🎹 MIDI recording progress: {:.2}s", duration);
+AudioEvent::MidiRecordingProgress(_track_id, clip_id, duration, notes) => {
+// Update clip duration in document (so timeline bar grows)
+if let Some(layer_id) = self.recording_layer_id {
+let doc_clip_id = {
+let document = self.action_executor.document();
+document.root.children.iter()
+.find(|l| l.id() == layer_id)
+.and_then(|layer| {
+if let lightningbeam_core::layer::AnyLayer::Audio(audio_layer) = layer {
+audio_layer.clip_instances.last().map(|i| i.clip_id)
+} else {
+None
+}
+})
+};
+
+if let Some(doc_clip_id) = doc_clip_id {
+if let Some(clip) = self.action_executor.document_mut().audio_clips.get_mut(&doc_clip_id) {
+clip.duration = duration;
+}
+}
+}
+
+// Update midi_event_cache with notes captured so far
+// (inlined instead of calling rebuild_midi_cache_entry to avoid
+// conflicting &mut self borrow with event_rx loop)
+{
+let mut events: Vec<(f64, u8, u8, bool)> = Vec::with_capacity(notes.len() * 2);
+for &(start_time, note, velocity, dur) in &notes {
+events.push((start_time, note, velocity, true));
+events.push((start_time + dur, note, velocity, false));
+}
+events.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
+self.midi_event_cache.insert(clip_id, events);
+}
 ctx.request_repaint();
 }
 AudioEvent::MidiRecordingStopped(track_id, clip_id, note_count) => {
 println!("🎹 MIDI recording stopped: track={:?}, clip_id={}, {} notes",
 track_id, clip_id, note_count);
 
+// Query backend for the definitive final note data
+if let Some(ref controller_arc) = self.audio_controller {
+let mut controller = controller_arc.lock().unwrap();
+match controller.query_midi_clip(track_id, clip_id) {
+Ok(midi_clip_data) => {
+// Convert backend MidiEvent format to cache format
+let cache_events: Vec<(f64, u8, u8, bool)> = midi_clip_data.events.iter()
+.filter_map(|event| {
+let status_type = event.status & 0xF0;
+if status_type == 0x90 || status_type == 0x80 {
+let is_note_on = status_type == 0x90 && event.data2 > 0;
+Some((event.timestamp, event.data1, event.data2, is_note_on))
+} else {
+None
+}
+})
+.collect();
+drop(controller);
+self.midi_event_cache.insert(clip_id, cache_events);
+
+// Update document clip with final duration and name
+if let Some(layer_id) = self.recording_layer_id {
+let doc_clip_id = {
+let document = self.action_executor.document();
+document.root.children.iter()
+.find(|l| l.id() == layer_id)
+.and_then(|layer| {
+if let lightningbeam_core::layer::AnyLayer::Audio(audio_layer) = layer {
+audio_layer.clip_instances.last().map(|i| i.clip_id)
+} else {
+None
+}
+})
+};
+if let Some(doc_clip_id) = doc_clip_id {
+if let Some(clip) = self.action_executor.document_mut().audio_clips.get_mut(&doc_clip_id) {
+clip.duration = midi_clip_data.duration;
+clip.name = format!("MIDI Recording {}", clip_id);
+}
+}
+}
+
+println!("✅ Finalized MIDI recording: {} notes, {:.2}s",
+note_count, midi_clip_data.duration);
+}
+Err(e) => {
+eprintln!("Failed to query MIDI clip data after recording: {}", e);
+// Cache was already populated by last MidiRecordingProgress event
+}
+}
+}
+
+// TODO: Store clip_instance_to_backend_map entry for this MIDI clip.
+// The backend created the instance in create_midi_clip(), but doesn't
+// report the instance_id back. Needed for move/trim operations later.
+
 // Clear recording state
 self.is_recording = false;
 self.recording_clips.clear();
@@ -3473,22 +3620,15 @@ impl eframe::App for EditorApp {
 // via AudioDecodeProgress events.
 ctx.request_repaint();
 }
-AudioEvent::AudioDecodeProgress { pool_index, decoded_frames, total_frames } => {
-// Waveform decode complete — fetch samples for GPU waveform
-if decoded_frames == total_frames {
-if let Some(ref controller_arc) = self.audio_controller {
-let mut controller = controller_arc.lock().unwrap();
-match controller.get_pool_audio_samples(pool_index) {
-Ok((samples, sr, ch)) => {
-println!("Waveform decode complete for pool {}: {} samples", pool_index, samples.len());
-self.raw_audio_cache.insert(pool_index, (samples, sr, ch));
-self.waveform_gpu_dirty.insert(pool_index);
-}
-Err(e) => eprintln!("Failed to fetch decoded audio for pool {}: {}", pool_index, e),
-}
-}
-ctx.request_repaint();
+AudioEvent::AudioDecodeProgress { pool_index, samples, sample_rate, channels } => {
+// Samples arrive as deltas — append to existing cache
+if let Some(entry) = self.raw_audio_cache.get_mut(&pool_index) {
+entry.0.extend_from_slice(&samples);
+} else {
+self.raw_audio_cache.insert(pool_index, (samples, sample_rate, channels));
 }
+self.waveform_gpu_dirty.insert(pool_index);
+ctx.request_repaint();
 }
 _ => {} // Ignore other events for now
 }
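The reworked `AudioDecodeProgress` handling switches from a blocking "fetch everything when done" query to incremental delta appends. A standalone sketch of that cache update (the free function and the `(samples, sample_rate, channels)` tuple layout here are illustrative):

```rust
use std::collections::HashMap;

/// Append a decoded sample delta to the per-pool cache, creating the
/// entry (with its sample rate and channel count) on the first chunk.
fn apply_decode_progress(
    cache: &mut HashMap<usize, (Vec<f32>, u32, u32)>,
    pool_index: usize,
    samples: Vec<f32>,
    sample_rate: u32,
    channels: u32,
) {
    if let Some(entry) = cache.get_mut(&pool_index) {
        entry.0.extend_from_slice(&samples); // later chunks: append only
    } else {
        cache.insert(pool_index, (samples, sample_rate, channels));
    }
}

fn main() {
    let mut cache = HashMap::new();
    apply_decode_progress(&mut cache, 0, vec![0.1, 0.2], 48_000, 2);
    apply_decode_progress(&mut cache, 0, vec![0.3], 48_000, 2);
    assert_eq!(cache[&0].0.len(), 3);
    assert_eq!(cache[&0].1, 48_000);
}
```

The appeal of deltas is that the UI thread never has to re-copy the whole buffer or block on the engine; it only marks the pool dirty so the GPU waveform is regenerated.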
@@ -3504,6 +3644,8 @@ impl eframe::App for EditorApp {
 }
 }
 
+let _post_events_ms = _frame_start.elapsed().as_secs_f64() * 1000.0;
+
 // Request continuous repaints when playing to update time display
 if self.is_playing {
 ctx.request_repaint();
@@ -3643,12 +3785,11 @@ impl eframe::App for EditorApp {
 // Poll export orchestrator for progress
 if let Some(orchestrator) = &mut self.export_orchestrator {
 // Only log occasionally to avoid spam
-static mut POLL_COUNT: u32 = 0;
-unsafe {
-POLL_COUNT += 1;
-if POLL_COUNT % 60 == 0 {
-println!("🔍 [MAIN] Polling orchestrator (poll #{})...", POLL_COUNT);
-}
+use std::sync::atomic::{AtomicU32, Ordering as AtomicOrdering};
+static POLL_COUNT: AtomicU32 = AtomicU32::new(0);
+let count = POLL_COUNT.fetch_add(1, AtomicOrdering::Relaxed) + 1;
+if count % 60 == 0 {
+println!("🔍 [MAIN] Polling orchestrator (poll #{})...", count);
 }
 if let Some(progress) = orchestrator.poll_progress() {
 match progress {
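The poll-counter change replaces a `static mut` plus `unsafe` block with an `AtomicU32`, which is sound without `unsafe` and stays correct even if the counter were ever touched from another thread. A small sketch (the `bump` helper is illustrative; note that `fetch_add` returns the previous value, hence the `+ 1`):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Lock-free counter increment returning the new count.
fn bump(counter: &AtomicU32) -> u32 {
    counter.fetch_add(1, Ordering::Relaxed) + 1
}

fn main() {
    // In the real code the counter is a `static POLL_COUNT: AtomicU32`.
    static POLL_COUNT: AtomicU32 = AtomicU32::new(0);
    for _ in 0..119 {
        let count = bump(&POLL_COUNT);
        if count % 60 == 0 {
            println!("poll #{}", count); // throttled: fires once, at 60
        }
    }
    assert_eq!(POLL_COUNT.load(Ordering::Relaxed), 119);
}
```

`Ordering::Relaxed` is enough here because only the counter value itself matters; no other memory is synchronized through it.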
@@ -3793,7 +3934,7 @@ impl eframe::App for EditorApp {
 paint_bucket_gap_tolerance: &mut self.paint_bucket_gap_tolerance,
 polygon_sides: &mut self.polygon_sides,
 layer_to_track_map: &self.layer_to_track_map,
-midi_event_cache: &self.midi_event_cache,
+midi_event_cache: &mut self.midi_event_cache,
 audio_pools_with_new_waveforms: &self.audio_pools_with_new_waveforms,
 raw_audio_cache: &self.raw_audio_cache,
 waveform_gpu_dirty: &mut self.waveform_gpu_dirty,
@@ -3918,6 +4059,19 @@ impl eframe::App for EditorApp {
 self.split_clips_at_playhead();
 }
 
+// Space bar toggles play/pause (only when no text input is focused)
+if !wants_keyboard && ctx.input(|i| i.key_pressed(egui::Key::Space)) {
+self.is_playing = !self.is_playing;
+if let Some(ref controller_arc) = self.audio_controller {
+let mut controller = controller_arc.lock().unwrap();
+if self.is_playing {
+controller.play();
+} else {
+controller.pause();
+}
+}
+}
+
 ctx.input(|i| {
 // Check menu shortcuts that use modifiers (Cmd+S, etc.) - allow even when typing
 // But skip shortcuts without modifiers when keyboard input is claimed (e.g., virtual piano)
@@ -3979,6 +4133,12 @@ impl eframe::App for EditorApp {
 );
 debug_overlay::render_debug_overlay(ctx, &stats);
 }
+
+let frame_ms = _frame_start.elapsed().as_secs_f64() * 1000.0;
+if frame_ms > 50.0 {
+eprintln!("[TIMING] SLOW FRAME: {:.1}ms (pre-events={:.1}, events={:.1}, post-events={:.1})",
+frame_ms, _pre_events_ms, _post_events_ms - _pre_events_ms, frame_ms - _post_events_ms);
+}
 }
 
 }
```diff
@@ -4023,7 +4183,7 @@ struct RenderContext<'a> {
     /// Mapping from Document layer UUIDs to daw-backend TrackIds
     layer_to_track_map: &'a std::collections::HashMap<Uuid, daw_backend::TrackId>,
     /// Cache of MIDI events for rendering (keyed by backend midi_clip_id)
-    midi_event_cache: &'a HashMap<u32, Vec<(f64, u8, bool)>>,
+    midi_event_cache: &'a mut HashMap<u32, Vec<(f64, u8, u8, bool)>>,
     /// Audio pool indices with new raw audio data this frame (for thumbnail invalidation)
     audio_pools_with_new_waveforms: &'a HashSet<usize>,
     /// Raw audio samples for GPU waveform rendering (pool_index -> (samples, sample_rate, channels))
```
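The cached event tuple is widened from `(f64, u8, bool)` to `(f64, u8, u8, bool)` to carry velocity. A minimal stand-alone sketch of the new cache shape (the `notes_active_at` helper is illustrative, not from the codebase):

```rust
use std::collections::HashMap;

// (timestamp_seconds, note_number, velocity, is_note_on) — the widened tuple
// from the diff, keyed by backend midi_clip_id.
type MidiEvent = (f64, u8, u8, bool);

/// Count notes sounding at time `t` by walking note-on/off events in order.
/// Hypothetical helper, just to exercise the tuple layout.
fn notes_active_at(events: &[MidiEvent], t: f64) -> usize {
    let mut active = 0usize;
    for &(ts, _note, _velocity, on) in events {
        if ts <= t {
            if on {
                active += 1;
            } else {
                active = active.saturating_sub(1);
            }
        }
    }
    active
}

fn main() {
    let mut cache: HashMap<u32, Vec<MidiEvent>> = HashMap::new();
    cache.insert(1, vec![(0.0, 60, 100, true), (0.5, 64, 90, true), (1.0, 60, 0, false)]);
    let clip = &cache[&1];
    assert_eq!(notes_active_at(clip, 0.75), 2); // both notes on
    assert_eq!(notes_active_at(clip, 1.5), 1);  // note 60 released
}
```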
```diff
@@ -4142,12 +4302,12 @@ fn render_layout_node(
     if ui.button("Split Horizontal ->").clicked() {
         *layout_action = Some(LayoutAction::EnterSplitPreviewHorizontal);
-        ui.close_menu();
+        ui.close();
     }

     if ui.button("Split Vertical |").clicked() {
         *layout_action = Some(LayoutAction::EnterSplitPreviewVertical);
-        ui.close_menu();
+        ui.close();
     }

     ui.separator();
@@ -4156,14 +4316,14 @@ fn render_layout_node(
         let mut path_keep_right = path.clone();
         path_keep_right.push(1); // Remove left, keep right child
         *layout_action = Some(LayoutAction::RemoveSplit(path_keep_right));
-        ui.close_menu();
+        ui.close();
     }

     if ui.button("Join Right >").clicked() {
         let mut path_keep_left = path.clone();
         path_keep_left.push(0); // Remove right, keep left child
         *layout_action = Some(LayoutAction::RemoveSplit(path_keep_left));
-        ui.close_menu();
+        ui.close();
     }
 });
@@ -4264,12 +4424,12 @@ fn render_layout_node(
     if ui.button("Split Horizontal ->").clicked() {
         *layout_action = Some(LayoutAction::EnterSplitPreviewHorizontal);
-        ui.close_menu();
+        ui.close();
     }

     if ui.button("Split Vertical |").clicked() {
         *layout_action = Some(LayoutAction::EnterSplitPreviewVertical);
-        ui.close_menu();
+        ui.close();
     }

     ui.separator();
@@ -4278,14 +4438,14 @@ fn render_layout_node(
         let mut path_keep_bottom = path.clone();
         path_keep_bottom.push(1); // Remove top, keep bottom child
         *layout_action = Some(LayoutAction::RemoveSplit(path_keep_bottom));
-        ui.close_menu();
+        ui.close();
     }

     if ui.button("Join Down v").clicked() {
         let mut path_keep_top = path.clone();
         path_keep_top.push(0); // Remove bottom, keep top child
         *layout_action = Some(LayoutAction::RemoveSplit(path_keep_top));
-        ui.close_menu();
+        ui.close();
     }
 });
```
```diff
@@ -4695,100 +4855,6 @@ fn render_pane(
     }
 }

-/// Render toolbar with tool buttons
-fn render_toolbar(
-    ui: &mut egui::Ui,
-    rect: egui::Rect,
-    tool_icon_cache: &mut ToolIconCache,
-    selected_tool: &mut Tool,
-    path: &NodePath,
-) {
-    let button_size = 60.0; // 50% bigger (was 40.0)
-    let button_padding = 8.0;
-    let button_spacing = 4.0;
-
-    // Calculate how many columns we can fit
-    let available_width = rect.width() - (button_padding * 2.0);
-    let columns = ((available_width + button_spacing) / (button_size + button_spacing)).floor() as usize;
-    let columns = columns.max(1); // At least 1 column
-
-    let mut x = rect.left() + button_padding;
-    let mut y = rect.top() + button_padding;
-    let mut col = 0;
-
-    for tool in Tool::all() {
-        let button_rect = egui::Rect::from_min_size(
-            egui::pos2(x, y),
-            egui::vec2(button_size, button_size),
-        );
-
-        // Check if this is the selected tool
-        let is_selected = *selected_tool == *tool;
-
-        // Button background
-        let bg_color = if is_selected {
-            egui::Color32::from_rgb(70, 100, 150) // Highlighted blue
-        } else {
-            egui::Color32::from_rgb(50, 50, 50)
-        };
-        ui.painter().rect_filled(button_rect, 4.0, bg_color);
-
-        // Load and render tool icon
-        if let Some(icon) = tool_icon_cache.get_or_load(*tool, ui.ctx()) {
-            let icon_rect = button_rect.shrink(8.0); // Padding inside button
-            ui.painter().image(
-                icon.id(),
-                icon_rect,
-                egui::Rect::from_min_max(egui::pos2(0.0, 0.0), egui::pos2(1.0, 1.0)),
-                egui::Color32::WHITE,
-            );
-        }
-
-        // Make button interactive (include path to ensure unique IDs across panes)
-        let button_id = ui.id().with(("tool_button", path, *tool as usize));
-        let response = ui.interact(button_rect, button_id, egui::Sense::click());
-
-        // Check for click first
-        if response.clicked() {
-            *selected_tool = *tool;
-        }
-
-        if response.hovered() {
-            ui.painter().rect_stroke(
-                button_rect,
-                4.0,
-                egui::Stroke::new(2.0, egui::Color32::from_gray(180)),
-                egui::StrokeKind::Middle,
-            );
-        }
-
-        // Show tooltip with tool name and shortcut (consumes response)
-        response.on_hover_text(format!("{} ({})", tool.display_name(), tool.shortcut_hint()));
-
-        // Draw selection border
-        if is_selected {
-            ui.painter().rect_stroke(
-                button_rect,
-                4.0,
-                egui::Stroke::new(2.0, egui::Color32::from_rgb(100, 150, 255)),
-                egui::StrokeKind::Middle,
-            );
-        }
-
-        // Move to next position in grid
-        col += 1;
-        if col >= columns {
-            // Move to next row
-            col = 0;
-            x = rect.left() + button_padding;
-            y += button_size + button_spacing;
-        } else {
-            // Move to next column
-            x += button_size + button_spacing;
-        }
-    }
-}
-
 /// Get a color for each pane type for visualization
 fn pane_color(pane_type: PaneType) -> egui::Color32 {
     match pane_type {
```
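The column calculation used by the removed `render_toolbar` is worth keeping as a reference: floor-divide the padded width by button-plus-spacing, clamped to at least one column. A self-contained sketch (function name hypothetical, constants from the deleted code):

```rust
/// Number of icon columns that fit in a pane, as computed by the removed
/// render_toolbar: floor((available + spacing) / (button + spacing)), min 1.
fn toolbar_columns(rect_width: f32, button_size: f32, padding: f32, spacing: f32) -> usize {
    let available = rect_width - padding * 2.0;
    let columns = ((available + spacing) / (button_size + spacing)).floor() as usize;
    columns.max(1) // At least 1 column
}

fn main() {
    // 200 px pane, 60 px buttons, 8 px padding, 4 px spacing:
    // available = 184; (184 + 4) / 64 = 2.9375 → 2 columns.
    assert_eq!(toolbar_columns(200.0, 60.0, 8.0, 4.0), 2);
    // A pane too narrow for even one button still reports a single column.
    assert_eq!(toolbar_columns(30.0, 60.0, 8.0, 4.0), 1);
}
```

Adding `spacing` to the numerator accounts for there being one fewer gap than buttons in a row.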
```diff
@@ -29,7 +29,9 @@ pub enum ShortcutKey {
     // Numbers
     Num0,
     // Symbols
-    Comma, Minus, Equals, Plus,
+    Comma, Minus, Equals,
+    #[allow(dead_code)] // Completes keyboard mapping set
+    Plus,
     BracketLeft, BracketRight,
     // Special
     Delete,
@@ -189,6 +191,7 @@ pub enum MenuAction {
     RecenterView,
     NextLayout,
     PreviousLayout,
+    #[allow(dead_code)] // Handler exists in main.rs, menu item not yet wired
     SwitchLayout(usize),

     // Help menu
@@ -219,6 +222,7 @@ pub enum MenuDef {
 // Shortcut constants for clarity
 const CTRL: bool = true;
 const SHIFT: bool = true;
+#[allow(dead_code)]
 const ALT: bool = true;
 const NO_CTRL: bool = false;
 const NO_SHIFT: bool = false;
@@ -288,7 +292,9 @@ impl MenuItemDef {
     // macOS app menu items
     const SETTINGS: Self = Self { label: "Settings", action: MenuAction::Settings, shortcut: Some(Shortcut::new(ShortcutKey::Comma, CTRL, NO_SHIFT, NO_ALT)) };
     const CLOSE_WINDOW: Self = Self { label: "Close Window", action: MenuAction::CloseWindow, shortcut: Some(Shortcut::new(ShortcutKey::W, CTRL, NO_SHIFT, NO_ALT)) };
+    #[allow(dead_code)] // Used in #[cfg(target_os = "macos")] block
     const QUIT_MACOS: Self = Self { label: "Quit Lightningbeam", action: MenuAction::Quit, shortcut: Some(Shortcut::new(ShortcutKey::Q, CTRL, NO_SHIFT, NO_ALT)) };
+    #[allow(dead_code)]
     const ABOUT_MACOS: Self = Self { label: "About Lightningbeam", action: MenuAction::About, shortcut: None };

     /// Get all menu items with shortcuts (for keyboard handling)
@@ -593,7 +599,7 @@ impl MenuSystem {
     pub fn render_egui_menu_bar(&self, ui: &mut egui::Ui, recent_files: &[std::path::PathBuf]) -> Option<MenuAction> {
         let mut action = None;

-        egui::menu::bar(ui, |ui| {
+        egui::MenuBar::new().ui(ui, |ui| {
             for menu_def in MenuItemDef::menu_structure() {
                 if let Some(a) = self.render_menu_def(ui, menu_def, recent_files) {
                     action = Some(a);
@@ -632,7 +638,7 @@ impl MenuSystem {
                 if ui.button(display_name).clicked() {
                     action = Some(MenuAction::OpenRecent(index));
-                    ui.close_menu();
+                    ui.close();
                 }
             }

@@ -643,14 +649,14 @@ impl MenuSystem {
                 if ui.button("Clear Recent Files").clicked() {
                     action = Some(MenuAction::ClearRecentFiles);
-                    ui.close_menu();
+                    ui.close();
                 }
             } else {
                 // Normal submenu rendering
                 for child in *children {
                     if let Some(a) = self.render_menu_def(ui, child, recent_files) {
                         action = Some(a);
-                        ui.close_menu();
+                        ui.close();
                     }
                 }
             }
```
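Several hunks above apply `#[allow(dead_code)]` per item rather than deleting not-yet-wired variants and constants, so the full mapping set survives the lint pass. A tiny stand-in enum (not the real `ShortcutKey`) showing the per-variant form:

```rust
/// Illustrative stand-in for the pattern applied to Plus and SwitchLayout:
/// keep the variant for completeness, silence only its unused warning.
#[derive(Debug, PartialEq)]
enum ShortcutKeyDemo {
    Comma,
    Minus,
    Equals,
    #[allow(dead_code)] // completes the mapping set; not yet produced by any handler
    Plus,
}

fn label(key: &ShortcutKeyDemo) -> &'static str {
    match key {
        ShortcutKeyDemo::Comma => ",",
        ShortcutKeyDemo::Minus => "-",
        ShortcutKeyDemo::Equals => "=",
        ShortcutKeyDemo::Plus => "+",
    }
}

fn main() {
    assert_eq!(label(&ShortcutKeyDemo::Minus), "-");
    assert_eq!(label(&ShortcutKeyDemo::Plus), "+");
}
```

Scoping the allow to one variant keeps the lint active for the rest of the enum, unlike a type-level `#[allow(dead_code)]`.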
```diff
@@ -62,6 +62,7 @@ const SEARCH_BAR_HEIGHT: f32 = 30.0;
 const CATEGORY_TAB_HEIGHT: f32 = 28.0;
 const BREADCRUMB_HEIGHT: f32 = 24.0;
 const ITEM_HEIGHT: f32 = 40.0;
+#[allow(dead_code)]
 const ITEM_PADDING: f32 = 4.0;
 const LIST_THUMBNAIL_SIZE: f32 = 32.0;
 const GRID_ITEM_SIZE: f32 = 80.0;
@@ -137,6 +138,7 @@ impl ThumbnailCache {
     }

     /// Check if a thumbnail is already cached (and not dirty)
+    #[allow(dead_code)]
     pub fn has(&self, asset_id: &Uuid) -> bool {
         self.textures.contains_key(asset_id) && !self.dirty.contains(asset_id)
     }
@@ -146,11 +148,6 @@ impl ThumbnailCache {
         self.dirty.insert(*asset_id);
     }
-
-    /// Clear all cached thumbnails
-    pub fn clear(&mut self) {
-        self.textures.clear();
-        self.dirty.clear();
-    }
 }

 // ============================================================================
@@ -285,7 +282,7 @@ fn generate_waveform_thumbnail(
     // Draw waveform
     let center_y = size / 2;
-    let num_peaks = waveform_peaks.len().min(size);
+    let _num_peaks = waveform_peaks.len().min(size);

     for (x, &(min_val, max_val)) in waveform_peaks.iter().take(size).enumerate() {
         // Scale peaks to pixel range (center ± half height)
@@ -376,7 +373,7 @@ fn generate_video_thumbnail(
 /// Generate a piano roll thumbnail for MIDI clips
 /// Shows notes as horizontal bars with Y position = note % 12 (one octave)
 fn generate_midi_thumbnail(
-    events: &[(f64, u8, bool)], // (timestamp, note_number, is_note_on)
+    events: &[(f64, u8, u8, bool)], // (timestamp, note_number, velocity, is_note_on)
     duration: f64,
     bg_color: egui::Color32,
     note_color: egui::Color32,
@@ -394,7 +391,7 @@ fn generate_midi_thumbnail(
     }

     // Draw note events
-    for &(timestamp, note_number, is_note_on) in events {
+    for &(timestamp, note_number, _velocity, is_note_on) in events {
         if !is_note_on || timestamp > preview_duration {
             continue;
         }
@@ -552,6 +549,7 @@ fn shape_color_to_tiny_skia(color: &ShapeColor) -> tiny_skia::Color {
 }

 /// Generate a simple effect thumbnail with a pink gradient
+#[allow(dead_code)]
 fn generate_effect_thumbnail() -> Vec<u8> {
     let size = THUMBNAIL_SIZE as usize;
     let mut rgba = vec![0u8; size * size * 4];
@@ -628,6 +626,7 @@ fn generate_effect_thumbnail() -> Vec<u8> {
 }

 /// Ellipsize a string to fit within a maximum character count
+#[allow(dead_code)]
 fn ellipsize(s: &str, max_chars: usize) -> String {
     if s.chars().count() <= max_chars {
         s.to_string()
@@ -706,6 +705,7 @@ pub struct AssetEntry {
 pub struct FolderEntry {
     pub id: Uuid,
     pub name: String,
+    #[allow(dead_code)]
     pub category: AssetCategory,
     pub item_count: usize,
 }
```
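The `ellipsize` helper's body is cut off in the diff above; a plausible complete sketch under the declared signature (the truncation strategy past the first branch is an assumption, not the original code):

```rust
/// Ellipsize a string to fit within a maximum character count.
/// The first branch matches the diff; the truncation branch is a
/// hypothetical completion using char counts, not byte lengths.
fn ellipsize(s: &str, max_chars: usize) -> String {
    if s.chars().count() <= max_chars {
        s.to_string()
    } else {
        // Keep max_chars - 1 characters, then append a single ellipsis char.
        let keep = max_chars.saturating_sub(1);
        let mut out: String = s.chars().take(keep).collect();
        out.push('…');
        out
    }
}

fn main() {
    assert_eq!(ellipsize("clip", 10), "clip");
    assert_eq!(ellipsize("very_long_asset_name.wav", 8), "very_lo…");
}
```

Counting `chars()` rather than bytes avoids splitting multi-byte UTF-8 sequences in asset names.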
```diff
@@ -718,6 +718,7 @@ pub enum LibraryItem {
 }

 impl LibraryItem {
+    #[allow(dead_code)]
     pub fn id(&self) -> Uuid {
         match self {
             LibraryItem::Folder(f) => f.id,
@@ -810,6 +811,7 @@ pub struct AssetLibraryPane {
     current_folders: HashMap<u8, Option<Uuid>>,

     /// Set of expanded folder IDs (for tree view - future enhancement)
+    #[allow(dead_code)]
     expanded_folders: HashSet<Uuid>,

     /// Cached folder icon texture
@@ -1283,6 +1285,7 @@ impl AssetLibraryPane {
     }

     /// Filter assets based on current category and search text
+    #[allow(dead_code)]
     fn filter_assets<'a>(&self, assets: &'a [AssetEntry]) -> Vec<&'a AssetEntry> {
         let search_lower = self.search_filter.to_lowercase();

@@ -1727,6 +1730,7 @@ impl AssetLibraryPane {
     }

     /// Render a section header for effect categories
+    #[allow(dead_code)] // Part of List/Grid view rendering subsystem, not yet wired
     fn render_section_header(ui: &mut egui::Ui, label: &str, color: egui::Color32) {
         ui.add_space(4.0);
         let (header_rect, _) = ui.allocate_exact_size(
@@ -1744,7 +1748,7 @@ impl AssetLibraryPane {
     }

     /// Render a grid of asset items
-    #[allow(clippy::too_many_arguments)]
+    #[allow(clippy::too_many_arguments, dead_code)]
     fn render_grid_items(
         &mut self,
         ui: &mut egui::Ui,
@@ -1755,7 +1759,7 @@ impl AssetLibraryPane {
         shared: &mut SharedPaneState,
         document: &Document,
         text_color: egui::Color32,
-        secondary_text_color: egui::Color32,
+        _secondary_text_color: egui::Color32,
     ) {
         if assets.is_empty() {
             return;
@@ -2003,7 +2007,7 @@ impl AssetLibraryPane {
         &mut self,
         ui: &mut egui::Ui,
         rect: egui::Rect,
-        path: &NodePath,
+        _path: &NodePath,
         shared: &mut SharedPaneState,
         items: &[&LibraryItem],
         document: &Document,
@@ -2012,7 +2016,7 @@ impl AssetLibraryPane {
         let folder_icon = self.get_folder_icon(ui.ctx()).cloned();

         let _scroll_area = egui::ScrollArea::vertical()
-            .id_source("asset_library_scroll")
+            .id_salt("asset_library_scroll")
             .show_viewport(ui, |ui, viewport| {
                 ui.set_min_width(rect.width());

@@ -2171,7 +2175,7 @@ impl AssetLibraryPane {
         // Load folder icon if needed
         let folder_icon = self.get_folder_icon(ui.ctx()).cloned();

-        ui.allocate_new_ui(egui::UiBuilder::new().max_rect(rect), |ui| {
+        ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
             egui::ScrollArea::vertical()
                 .id_salt(("asset_library_grid_scroll", path))
                 .auto_shrink([false, false])
@@ -2661,6 +2665,7 @@ impl AssetLibraryPane {
     }

     /// Render assets based on current view mode
+    #[allow(dead_code)]
     fn render_assets(
         &mut self,
         ui: &mut egui::Ui,
@@ -2681,6 +2686,7 @@ impl AssetLibraryPane {
     }

     /// Render the asset list view
+    #[allow(dead_code)]
     fn render_asset_list_view(
         &mut self,
         ui: &mut egui::Ui,
@@ -2724,7 +2730,7 @@ impl AssetLibraryPane {

         // Use egui's built-in ScrollArea for scrolling
         let scroll_area_rect = rect;
-        ui.allocate_new_ui(egui::UiBuilder::new().max_rect(scroll_area_rect), |ui| {
+        ui.scope_builder(egui::UiBuilder::new().max_rect(scroll_area_rect), |ui| {
             egui::ScrollArea::vertical()
                 .id_salt(("asset_list_scroll", path))
                 .auto_shrink([false, false])
@@ -2757,7 +2763,7 @@ impl AssetLibraryPane {
         };
         let mut rendered_builtin_header = false;
         let mut rendered_custom_header = false;
-        let mut builtin_rendered = 0;
+        let mut _builtin_rendered = 0;

         for asset in assets_to_render {
             // Render section headers for Effects tab
@@ -2781,7 +2787,7 @@ impl AssetLibraryPane {
                 rendered_custom_header = true;
             }
             if asset.is_builtin {
-                builtin_rendered += 1;
+                _builtin_rendered += 1;
             }
         }

@@ -3093,6 +3099,7 @@ impl AssetLibraryPane {
     }

     /// Render the asset grid view
+    #[allow(dead_code)]
     fn render_asset_grid_view(
         &mut self,
         ui: &mut egui::Ui,
@@ -3165,7 +3172,7 @@ impl AssetLibraryPane {
             0
         };

-        ui.allocate_new_ui(egui::UiBuilder::new().max_rect(rect), |ui| {
+        ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
             egui::ScrollArea::vertical()
                 .id_salt(("asset_grid_scroll", path))
                 .auto_shrink([false, false])
```
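The scroll areas above are salted with tuples like `("asset_list_scroll", path)` so that two panes of the same type keep separate scroll state. The idea behind that kind of ID salting can be sketched with the standard library's hasher (this is an illustration of the principle, not egui's actual ID scheme):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a widget ID from an arbitrary hashable salt. Hashing
/// (name, pane_path) gives each pane instance a distinct, stable ID.
fn widget_id<T: Hash>(salt: T) -> u64 {
    let mut hasher = DefaultHasher::new();
    salt.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let pane_a = widget_id(("asset_list_scroll", vec![0usize]));
    let pane_b = widget_id(("asset_list_scroll", vec![1usize]));
    // Same widget name, different pane paths → different IDs.
    assert_ne!(pane_a, pane_b);
    // Deterministic for the same salt within a run.
    assert_eq!(pane_a, widget_id(("asset_list_scroll", vec![0usize])));
}
```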
```diff
@@ -47,6 +47,7 @@ pub struct DraggingAsset {
     /// Display name
     pub name: String,
     /// Duration in seconds
+    #[allow(dead_code)] // Populated during drag, consumed when drag-and-drop features expand
     pub duration: f64,
     /// Dimensions (width, height) for vector/video clips, None for audio
     pub dimensions: Option<(f64, f64)>,
@@ -132,6 +133,7 @@ pub fn find_sampled_audio_track(document: &lightningbeam_core::document::Document
 /// Shared state that all panes can access
 pub struct SharedPaneState<'a> {
     pub tool_icon_cache: &'a mut crate::ToolIconCache,
+    #[allow(dead_code)] // Used by pane chrome rendering in main.rs
     pub icon_cache: &'a mut crate::IconCache,
     pub selected_tool: &'a mut Tool,
     pub fill_color: &'a mut egui::Color32,
@@ -187,8 +189,12 @@ pub struct SharedPaneState<'a> {
     pub paint_bucket_gap_tolerance: &'a mut f64,
     /// Number of sides for polygon tool
     pub polygon_sides: &'a mut u32,
-    /// Cache of MIDI events for rendering (keyed by backend midi_clip_id)
-    pub midi_event_cache: &'a std::collections::HashMap<u32, Vec<(f64, u8, bool)>>,
+    /// Cache of MIDI events for rendering (keyed by backend midi_clip_id).
+    /// Mutable so panes can update the cache immediately on edits (avoiding 1-frame snap-back).
+    /// NOTE: If an action later fails during execution, the cache may be out of sync with the
+    /// backend. This is acceptable because MIDI note edits are simple and unlikely to fail.
+    /// Undo/redo rebuilds affected entries from the backend to restore consistency.
+    pub midi_event_cache: &'a mut std::collections::HashMap<u32, Vec<(f64, u8, u8, bool)>>,
     /// Audio pool indices that got new raw audio data this frame (for thumbnail invalidation)
     pub audio_pools_with_new_waveforms: &'a std::collections::HashSet<usize>,
     /// Raw audio samples for GPU waveform rendering (pool_index -> (samples, sample_rate, channels))
@@ -216,7 +222,7 @@ pub trait PaneRenderer {
     /// Render the optional header section with controls
     ///
     /// Returns true if a header was rendered, false if no header
-    fn render_header(&mut self, ui: &mut egui::Ui, shared: &mut SharedPaneState) -> bool {
+    fn render_header(&mut self, _ui: &mut egui::Ui, _shared: &mut SharedPaneState) -> bool {
         false // Default: no header
     }

@@ -230,6 +236,7 @@ pub trait PaneRenderer {
     );

     /// Get the display name of this pane
+    #[allow(dead_code)] // Implemented by all panes, dispatch infrastructure complete
     fn name(&self) -> &str;
 }
```
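The new doc comment on `midi_event_cache` describes a write-through policy: panes mutate the UI-side cache immediately so edits render the same frame, and undo/redo restores consistency by rebuilding from the backend. A simplified sketch of that policy (types and names are illustrative, with notes reduced to `u32`):

```rust
use std::collections::HashMap;

/// Illustrative write-through MIDI cache: edits land in the cache
/// immediately; the backend is authoritative on undo/redo.
struct MidiCacheDemo {
    cache: HashMap<u32, Vec<u32>>, // clip_id -> notes
}

impl MidiCacheDemo {
    /// Immediate cache update alongside the backend command,
    /// so the UI never shows a 1-frame snap-back.
    fn edit(&mut self, clip: u32, note: u32) {
        self.cache.entry(clip).or_default().push(note);
    }

    /// On undo/redo, replace the cached entry with backend truth.
    fn rebuild_from_backend(&mut self, clip: u32, backend_notes: Vec<u32>) {
        self.cache.insert(clip, backend_notes);
    }
}

fn main() {
    let mut demo = MidiCacheDemo { cache: HashMap::new() };
    demo.edit(7, 60);
    demo.edit(7, 64);
    assert_eq!(demo.cache[&7], vec![60, 64]);
    demo.rebuild_from_backend(7, vec![60]); // undo removed the second note
    assert_eq!(demo.cache[&7], vec![60]);
}
```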
```diff
@@ -15,6 +15,7 @@ use uuid::Uuid;
 pub enum NodeGraphAction {
     AddNode(AddNodeAction),
     RemoveNode(RemoveNodeAction),
+    #[allow(dead_code)]
     MoveNode(MoveNodeAction),
     Connect(ConnectAction),
     Disconnect(DisconnectAction),
@@ -240,6 +241,7 @@ impl RemoveNodeAction {
 // MoveNodeAction
 // ============================================================================

+#[allow(dead_code)]
 pub struct MoveNodeAction {
     layer_id: Uuid,
     backend_node_id: BackendNodeId,
@@ -248,6 +250,7 @@ pub struct MoveNodeAction {
 }

 impl MoveNodeAction {
+    #[allow(dead_code)]
     pub fn new(layer_id: Uuid, backend_node_id: BackendNodeId, new_position: (f32, f32)) -> Self {
         Self {
             layer_id,
```

```diff
@@ -17,8 +17,8 @@ pub struct AudioGraphBackend {
     audio_controller: Arc<Mutex<EngineController>>,

     /// Maps backend NodeIndex to stable IDs for round-trip serialization
-    node_index_to_stable: HashMap<NodeIndex, u32>,
-    next_stable_id: u32,
+    _node_index_to_stable: HashMap<NodeIndex, u32>,
+    _next_stable_id: u32,
 }

 impl AudioGraphBackend {
@@ -26,8 +26,8 @@ impl AudioGraphBackend {
         Self {
             track_id,
             audio_controller,
-            node_index_to_stable: HashMap::new(),
-            next_stable_id: 0,
+            _node_index_to_stable: HashMap::new(),
+            _next_stable_id: 0,
         }
     }
 }
@@ -41,25 +41,23 @@ impl GraphBackend for AudioGraphBackend {

         // Generate placeholder node ID
         // This will be replaced with actual backend NodeIndex from sync query
-        let stable_id = self.next_stable_id;
-        self.next_stable_id += 1;
+        let stable_id = self._next_stable_id;
+        self._next_stable_id += 1;

         // Placeholder: use stable_id as backend index (will be wrong, but compiles)
         let node_idx = NodeIndex::new(stable_id as usize);
-        self.node_index_to_stable.insert(node_idx, stable_id);
+        self._node_index_to_stable.insert(node_idx, stable_id);

         Ok(BackendNodeId::Audio(node_idx))
     }

     fn remove_node(&mut self, backend_id: BackendNodeId) -> Result<(), String> {
-        let BackendNodeId::Audio(node_idx) = backend_id else {
-            return Err("Invalid backend node type".to_string());
-        };
+        let BackendNodeId::Audio(node_idx) = backend_id;

         let mut controller = self.audio_controller.lock().unwrap();
         controller.graph_remove_node(self.track_id, node_idx.index() as u32);

-        self.node_index_to_stable.remove(&node_idx);
+        self._node_index_to_stable.remove(&node_idx);

         Ok(())
     }
@@ -71,12 +69,8 @@ impl GraphBackend for AudioGraphBackend {
         input_node: BackendNodeId,
         input_port: usize,
     ) -> Result<(), String> {
-        let BackendNodeId::Audio(from_idx) = output_node else {
-            return Err("Invalid output node type".to_string());
-        };
-        let BackendNodeId::Audio(to_idx) = input_node else {
-            return Err("Invalid input node type".to_string());
-        };
+        let BackendNodeId::Audio(from_idx) = output_node;
+        let BackendNodeId::Audio(to_idx) = input_node;

         let mut controller = self.audio_controller.lock().unwrap();
         controller.graph_connect(
@@ -97,12 +91,8 @@ impl GraphBackend for AudioGraphBackend {
         input_node: BackendNodeId,
         input_port: usize,
     ) -> Result<(), String> {
-        let BackendNodeId::Audio(from_idx) = output_node else {
-            return Err("Invalid output node type".to_string());
-        };
-        let BackendNodeId::Audio(to_idx) = input_node else {
-            return Err("Invalid input node type".to_string());
-        };
+        let BackendNodeId::Audio(from_idx) = output_node;
+        let BackendNodeId::Audio(to_idx) = input_node;
```
|
||||||
|
|
||||||
let mut controller = self.audio_controller.lock().unwrap();
|
let mut controller = self.audio_controller.lock().unwrap();
|
||||||
controller.graph_disconnect(
|
controller.graph_disconnect(
|
||||||
|
|
@ -122,9 +112,7 @@ impl GraphBackend for AudioGraphBackend {
|
||||||
param_id: u32,
|
param_id: u32,
|
||||||
value: f64,
|
value: f64,
|
||||||
) -> Result<(), String> {
|
) -> Result<(), String> {
|
||||||
let BackendNodeId::Audio(node_idx) = backend_id else {
|
let BackendNodeId::Audio(node_idx) = backend_id;
|
||||||
return Err("Invalid backend node type".to_string());
|
|
||||||
};
|
|
||||||
|
|
||||||
let mut controller = self.audio_controller.lock().unwrap();
|
let mut controller = self.audio_controller.lock().unwrap();
|
||||||
controller.graph_set_parameter(
|
controller.graph_set_parameter(
|
||||||
|
|
@ -180,9 +168,7 @@ impl GraphBackend for AudioGraphBackend {
|
||||||
x: f32,
|
x: f32,
|
||||||
y: f32,
|
y: f32,
|
||||||
) -> Result<BackendNodeId, String> {
|
) -> Result<BackendNodeId, String> {
|
||||||
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id else {
|
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id;
|
||||||
return Err("Invalid voice allocator node type".to_string());
|
|
||||||
};
|
|
||||||
|
|
||||||
let mut controller = self.audio_controller.lock().unwrap();
|
let mut controller = self.audio_controller.lock().unwrap();
|
||||||
controller.graph_add_node_to_template(
|
controller.graph_add_node_to_template(
|
||||||
|
|
@ -194,8 +180,8 @@ impl GraphBackend for AudioGraphBackend {
|
||||||
);
|
);
|
||||||
|
|
||||||
// Placeholder return
|
// Placeholder return
|
||||||
let stable_id = self.next_stable_id;
|
let stable_id = self._next_stable_id;
|
||||||
self.next_stable_id += 1;
|
self._next_stable_id += 1;
|
||||||
let node_idx = NodeIndex::new(stable_id as usize);
|
let node_idx = NodeIndex::new(stable_id as usize);
|
||||||
|
|
||||||
Ok(BackendNodeId::Audio(node_idx))
|
Ok(BackendNodeId::Audio(node_idx))
|
||||||
|
|
@ -209,15 +195,9 @@ impl GraphBackend for AudioGraphBackend {
|
||||||
input_node: BackendNodeId,
|
input_node: BackendNodeId,
|
||||||
input_port: usize,
|
input_port: usize,
|
||||||
) -> Result<(), String> {
|
) -> Result<(), String> {
|
||||||
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id else {
|
let BackendNodeId::Audio(allocator_idx) = voice_allocator_id;
|
||||||
return Err("Invalid voice allocator node type".to_string());
|
let BackendNodeId::Audio(from_idx) = output_node;
|
||||||
};
|
let BackendNodeId::Audio(to_idx) = input_node;
|
||||||
let BackendNodeId::Audio(from_idx) = output_node else {
|
|
||||||
return Err("Invalid output node type".to_string());
|
|
||||||
};
|
|
||||||
let BackendNodeId::Audio(to_idx) = input_node else {
|
|
||||||
return Err("Invalid input node type".to_string());
|
|
||||||
};
|
|
||||||
|
|
||||||
let mut controller = self.audio_controller.lock().unwrap();
|
let mut controller = self.audio_controller.lock().unwrap();
|
||||||
controller.graph_connect_in_template(
|
controller.graph_connect_in_template(
|
||||||
|
|
|
||||||
|
|
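These commits collapse the `let … else { return Err(…) }` guards into plain `let` destructuring, which only compiles while `BackendNodeId` has a single variant (an irrefutable pattern). A minimal sketch of that assumption — the enum below is a hypothetical stand-in, not the project's real definition:

```rust
// Hypothetical stand-in for the diff's BackendNodeId, with one variant.
enum BackendNodeId {
    Audio(u32),
}

fn unwrap_audio(id: BackendNodeId) -> u32 {
    // Irrefutable destructuring: legal only while Audio is the sole variant.
    // Adding a second variant (e.g. a future Vfx backend) turns this into a
    // compile error, forcing the `let ... else` form back in.
    let BackendNodeId::Audio(idx) = id;
    idx
}

fn main() {
    println!("{}", unwrap_audio(BackendNodeId::Audio(7)));
}
```

The upside of this pattern is that if a second backend type is ever added, the compiler flags every one of these sites rather than letting a node silently misroute.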
```diff
@@ -18,6 +18,7 @@ pub enum BackendNodeId {
 /// Implementations:
 /// - AudioGraphBackend: Wraps daw_backend::AudioGraph via EngineController
 /// - VfxGraphBackend (future): GPU-based shader graph
+#[allow(dead_code)]
 pub trait GraphBackend: Send {
     /// Add a node to the backend graph
     fn add_node(&mut self, node_type: &str, x: f32, y: f32) -> Result<BackendNodeId, String>;
```
```diff
@@ -67,6 +67,7 @@ impl NodeGraphPane {
         }
     }
 
+    #[allow(dead_code)]
     pub fn with_track_id(
         track_id: Uuid,
        audio_controller: std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>,
@@ -207,7 +208,7 @@ impl NodeGraphPane {
         // Set parameter values
         for (&param_id, &value) in &node.parameters {
             // Find the input param in the graph and set its value
-            if let Some(node_data) = self.state.graph.nodes.get_mut(frontend_id) {
+            if let Some(_node_data) = self.state.graph.nodes.get_mut(frontend_id) {
                 // TODO: Set parameter values on the node's input params
                 // This requires matching param_id to the input param by index
                 let _ = (param_id, value); // Silence unused warning for now
@@ -428,25 +429,25 @@ impl NodeGraphPane {
 
     fn check_parameter_changes(&mut self) {
         // Check all input parameters for value changes
-        let mut checked_count = 0;
-        let mut connection_only_count = 0;
-        let mut non_float_count = 0;
+        let mut _checked_count = 0;
+        let mut _connection_only_count = 0;
+        let mut _non_float_count = 0;
 
         for (input_id, input_param) in &self.state.graph.inputs {
             // Only check parameters that can have constant values (not ConnectionOnly)
             if matches!(input_param.kind, InputParamKind::ConnectionOnly) {
-                connection_only_count += 1;
+                _connection_only_count += 1;
                 continue;
             }
 
             // Get current value
             let current_value = match &input_param.value {
                 ValueType::Float { value } => {
-                    checked_count += 1;
+                    _checked_count += 1;
                     *value
                 },
                 other => {
-                    non_float_count += 1;
+                    _non_float_count += 1;
                     eprintln!("[DEBUG] Non-float parameter type: {:?}", std::mem::discriminant(other));
                     continue;
                 }
@@ -572,7 +573,7 @@ impl crate::panes::PaneRenderer for NodeGraphPane {
         // Check if track is MIDI or Audio
         if let Some(audio_controller) = &shared.audio_controller {
             let is_valid_track = {
-                let controller = audio_controller.lock().unwrap();
+                let _controller = audio_controller.lock().unwrap();
                 // TODO: Query track type from backend
                 // For now, assume it's valid if we have a track ID mapping
                 true
@@ -624,13 +625,17 @@ impl crate::panes::PaneRenderer for NodeGraphPane {
         let grid_color = grid_style.background_color.unwrap_or(egui::Color32::from_gray(55));
 
         // Allocate the rect and render the graph editor within it
-        ui.allocate_ui_at_rect(rect, |ui| {
+        ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
             // Check for scroll input to override library's default zoom behavior
+            // Only handle scroll when mouse is over the node graph area
+            let pointer_over_graph = ui.rect_contains_pointer(rect);
             let modifiers = ui.input(|i| i.modifiers);
             let has_ctrl = modifiers.ctrl || modifiers.command;
 
             // When ctrl is held, check for raw scroll events in the events list
-            let scroll_delta = if has_ctrl {
+            let scroll_delta = if !pointer_over_graph {
+                egui::Vec2::ZERO
+            } else if has_ctrl {
                 // Sum up scroll events from the raw event list
                 ui.input(|i| {
                     let mut total_scroll = egui::Vec2::ZERO;
@@ -701,8 +706,8 @@ impl crate::panes::PaneRenderer for NodeGraphPane {
 
         // Draw menu button in top-left corner
         let button_pos = rect.min + egui::vec2(8.0, 8.0);
-        ui.allocate_ui_at_rect(
-            egui::Rect::from_min_size(button_pos, egui::vec2(100.0, 24.0)),
+        ui.scope_builder(
+            egui::UiBuilder::new().max_rect(egui::Rect::from_min_size(button_pos, egui::vec2(100.0, 24.0))),
             |ui| {
                 if ui.button("➕ Add Node").clicked() {
                     // Open node finder at button's top-left position
```
```diff
@@ -1,3 +1,4 @@
+#![allow(dead_code)]
 //! Node Type Registry
 //!
 //! Defines metadata for all available node types
```
```diff
@@ -38,7 +38,7 @@ impl NodePalette {
             .rect_filled(rect, 0.0, egui::Color32::from_rgb(30, 30, 30));
 
         // Create UI within the palette rect
-        ui.allocate_ui_at_rect(rect, |ui| {
+        ui.scope_builder(egui::UiBuilder::new().max_rect(rect), |ui| {
             ui.vertical(|ui| {
                 ui.add_space(8.0);
```
*File diff suppressed because it is too large.*
```diff
@@ -219,6 +219,7 @@ pub struct ShaderEditorPane {
     /// The shader source code being edited
     shader_code: String,
     /// Whether to show the template selector
+    #[allow(dead_code)]
    show_templates: bool,
     /// Error message from last compilation attempt (if any)
     compile_error: Option<String>,
```
```diff
@@ -0,0 +1,101 @@
+// GPU Constant-Q Transform (CQT) compute shader.
+//
+// Reads raw audio samples from a waveform mip-0 texture (Rgba16Float, packed
+// row-major at TEX_WIDTH=2048) and computes CQT magnitude for each
+// (freq_bin, time_column) pair, writing normalized dB values into a ring-buffer
+// cache texture (R32Float, width=cache_capacity, height=freq_bins).
+//
+// Dispatch: (ceil(freq_bins / 64), num_columns, 1)
+// Each thread handles one frequency bin for one time column.
+
+struct CqtParams {
+    hop_size: u32,
+    freq_bins: u32,
+    cache_capacity: u32,
+    cache_write_offset: u32, // ring buffer position to start writing
+    num_columns: u32,        // how many columns in this dispatch
+    column_start: u32,       // global CQT column index of first column
+    tex_width: u32,          // waveform texture width (2048)
+    total_frames: u32,       // total audio frames in waveform texture
+    sample_rate: f32,
+    column_stride: u32,
+    _pad1: u32,
+    _pad2: u32,
+}
+
+struct BinInfo {
+    window_length: u32,
+    phase_step: f32, // 2*pi*Q / N_k
+    _pad0: u32,
+    _pad1: u32,
+}
+
+@group(0) @binding(0) var audio_tex: texture_2d<f32>;
+@group(0) @binding(1) var cqt_out: texture_storage_2d<rgba16float, write>;
+@group(0) @binding(2) var<uniform> params: CqtParams;
+@group(0) @binding(3) var<storage, read> bins: array<BinInfo>;
+
+const PI2: f32 = 6.283185307;
+
+@compute @workgroup_size(64)
+fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
+    let bin_k = gid.x;
+    let col_rel = gid.y; // relative to this dispatch batch
+
+    if bin_k >= params.freq_bins || col_rel >= params.num_columns {
+        return;
+    }
+
+    let global_col = params.column_start + col_rel * params.column_stride;
+    let sample_start = global_col * params.hop_size;
+
+    let info = bins[bin_k];
+    let n_k = info.window_length;
+
+    // Center the analysis window: offset by half the window length so the
+    // column timestamp refers to the center of the window, not the start.
+    // This gives better time alignment, especially for low-frequency bins
+    // that have very long windows.
+    let half_win = n_k / 2u;
+
+    // Accumulate complex inner product: sum of x[n] * w[n] * exp(-i * phase_step * n)
+    var sum_re: f32 = 0.0;
+    var sum_im: f32 = 0.0;
+
+    for (var n = 0u; n < n_k; n++) {
+        // Center the window around the hop position
+        let raw_idx = i32(sample_start) + i32(n) - i32(half_win);
+        if raw_idx < 0 || u32(raw_idx) >= params.total_frames {
+            continue;
+        }
+        let sample_idx = u32(raw_idx);
+
+        // Read audio sample from 2D waveform texture (mip 0)
+        // At mip 0: R=G=left, B=A=right; average to mono
+        let tx = sample_idx % params.tex_width;
+        let ty = sample_idx / params.tex_width;
+        let texel = textureLoad(audio_tex, vec2<i32>(i32(tx), i32(ty)), 0);
+        let sample_val = (texel.r + texel.b) * 0.5;
+
+        // Hann window computed analytically
+        let window = 0.5 * (1.0 - cos(PI2 * f32(n) / f32(n_k)));
+
+        // Complex exponential: exp(-i * phase_step * n)
+        let angle = info.phase_step * f32(n);
+        let windowed = sample_val * window;
+        sum_re += windowed * cos(angle);
+        sum_im -= windowed * sin(angle);
+    }
+
+    // Magnitude, normalized by window length
+    let mag = sqrt(sum_re * sum_re + sum_im * sum_im) / f32(n_k);
+
+    // Convert to dB, map -80dB..0dB -> 0.0..1.0
+    // WGSL log() is natural log, so log10(x) = log(x) / log(10)
+    let db = 20.0 * log(mag + 1e-10) / 2.302585093;
+    let normalized = clamp((db + 80.0) / 80.0, 0.0, 1.0);
+
+    // Write to ring buffer cache texture
+    let cache_x = (params.cache_write_offset + col_rel) % params.cache_capacity;
+    textureStore(cqt_out, vec2<i32>(i32(cache_x), i32(bin_k)), vec4(normalized, 0.0, 0.0, 1.0));
+}
```
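The compute shader's final step converts a linear CQT magnitude into the 0..1 value stored in the cache texture. A CPU-side restatement of that mapping, useful for testing cache contents (the function name is ours; the constants are copied from the shader):

```rust
/// Match the shader's normalization: db = 20 * log10(mag + 1e-10),
/// then map -80 dB .. 0 dB onto 0.0 .. 1.0 and clamp.
fn normalize_db(mag: f32) -> f32 {
    let db = 20.0 * (mag + 1e-10_f32).log10();
    ((db + 80.0) / 80.0).clamp(0.0, 1.0)
}

fn main() {
    // -40 dB input lands at the middle of the normalized range.
    println!("{}", normalize_db(0.01));
}
```

The `1e-10` floor keeps `log10` finite for silent bins; anything at or below -80 dB clamps to 0, so the colormap's black level corresponds to the noise floor.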
```diff
@@ -0,0 +1,155 @@
+// CQT spectrogram render shader.
+//
+// Reads from a ring-buffer cache texture (Rgba16Float) where:
+//   X = time column (ring buffer index), Y = CQT frequency bin
+// CQT bins map directly to MIDI notes via: bin = (note - min_note) * bins_per_octave / 12
+//
+// Applies the same colormap as the old FFT spectrogram.
+
+// Must match CqtRenderParams in cqt_gpu.rs exactly (96 bytes).
+struct Params {
+    clip_rect: vec4<f32>,     // 16 @ 0
+    viewport_start_time: f32, // 4 @ 16
+    pixels_per_second: f32,   // 4 @ 20
+    audio_duration: f32,      // 4 @ 24
+    sample_rate: f32,         // 4 @ 28
+    clip_start_time: f32,     // 4 @ 32
+    trim_start: f32,          // 4 @ 36
+    freq_bins: f32,           // 4 @ 40
+    bins_per_octave: f32,     // 4 @ 44
+    hop_size: f32,            // 4 @ 48
+    scroll_y: f32,            // 4 @ 52
+    note_height: f32,         // 4 @ 56
+    min_note: f32,            // 4 @ 60
+    max_note: f32,            // 4 @ 64
+    gamma: f32,               // 4 @ 68
+    cache_capacity: f32,      // 4 @ 72
+    cache_start_column: f32,  // 4 @ 76
+    cache_valid_start: f32,   // 4 @ 80
+    cache_valid_end: f32,     // 4 @ 84
+    column_stride: f32,       // 4 @ 88
+    _pad: f32,                // 4 @ 92, total 96
+}
+
+@group(0) @binding(0) var cache_tex: texture_2d<f32>;
+@group(0) @binding(1) var cache_sampler: sampler;
+@group(0) @binding(2) var<uniform> params: Params;
+
+struct VertexOutput {
+    @builtin(position) position: vec4<f32>,
+    @location(0) uv: vec2<f32>,
+}
+
+@vertex
+fn vs_main(@builtin(vertex_index) vi: u32) -> VertexOutput {
+    var out: VertexOutput;
+    let x = f32(i32(vi) / 2) * 4.0 - 1.0;
+    let y = f32(i32(vi) % 2) * 4.0 - 1.0;
+    out.position = vec4(x, y, 0.0, 1.0);
+    out.uv = vec2((x + 1.0) * 0.5, (1.0 - y) * 0.5);
+    return out;
+}
+
+fn rounded_rect_sdf(pos: vec2<f32>, rect_min: vec2<f32>, rect_max: vec2<f32>, r: f32) -> f32 {
+    let center = (rect_min + rect_max) * 0.5;
+    let half_size = (rect_max - rect_min) * 0.5;
+    let q = abs(pos - center) - half_size + vec2(r);
+    return length(max(q, vec2(0.0))) - r;
+}
+
+// Colormap: black -> blue -> purple -> red -> orange -> yellow -> white
+fn colormap(v: f32, gamma: f32) -> vec4<f32> {
+    let t = pow(clamp(v, 0.0, 1.0), gamma);
+
+    if t < 1.0 / 6.0 {
+        let s = t * 6.0;
+        return vec4(0.0, 0.0, s, 1.0);
+    } else if t < 2.0 / 6.0 {
+        let s = (t - 1.0 / 6.0) * 6.0;
+        return vec4(s * 0.6, 0.0, 1.0 - s * 0.2, 1.0);
+    } else if t < 3.0 / 6.0 {
+        let s = (t - 2.0 / 6.0) * 6.0;
+        return vec4(0.6 + s * 0.4, 0.0, 0.8 - s * 0.8, 1.0);
+    } else if t < 4.0 / 6.0 {
+        let s = (t - 3.0 / 6.0) * 6.0;
+        return vec4(1.0, s * 0.5, 0.0, 1.0);
+    } else if t < 5.0 / 6.0 {
+        let s = (t - 4.0 / 6.0) * 6.0;
+        return vec4(1.0, 0.5 + s * 0.5, 0.0, 1.0);
+    } else {
+        let s = (t - 5.0 / 6.0) * 6.0;
+        return vec4(1.0, 1.0, s, 1.0);
+    }
+}
+
+@fragment
+fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
+    let frag_x = in.position.x;
+    let frag_y = in.position.y;
+
+    // Clip to view rectangle
+    if frag_x < params.clip_rect.x || frag_x > params.clip_rect.z ||
+       frag_y < params.clip_rect.y || frag_y > params.clip_rect.w {
+        discard;
+    }
+
+    // Compute the content rect in screen space
+    let content_left = params.clip_rect.x + (params.clip_start_time - params.trim_start - params.viewport_start_time) * params.pixels_per_second;
+    let content_right = content_left + params.audio_duration * params.pixels_per_second;
+    let content_top = params.clip_rect.y - params.scroll_y;
+    let content_bottom = params.clip_rect.y + (params.max_note - params.min_note + 1.0) * params.note_height - params.scroll_y;
+
+    // Rounded corners
+    let vis_top = max(content_top, params.clip_rect.y);
+    let vis_bottom = min(content_bottom, params.clip_rect.w);
+    let corner_radius = 6.0;
+    let dist = rounded_rect_sdf(
+        vec2(frag_x, frag_y),
+        vec2(content_left, vis_top),
+        vec2(content_right, vis_bottom),
+        corner_radius
+    );
+    if dist > 0.0 {
+        discard;
+    }
+
+    // Fragment X -> audio time -> global CQT column
+    let timeline_time = params.viewport_start_time + (frag_x - params.clip_rect.x) / params.pixels_per_second;
+    let audio_time = timeline_time - params.clip_start_time + params.trim_start;
+
+    if audio_time < 0.0 || audio_time > params.audio_duration {
+        discard;
+    }
+
+    let global_col = audio_time * params.sample_rate / params.hop_size;
+
+    // Check if this column is in the cached range
+    if global_col < params.cache_valid_start || global_col >= params.cache_valid_end {
+        discard;
+    }
+
+    // Fragment Y -> MIDI note -> CQT bin (direct mapping!)
+    let note = params.max_note - ((frag_y - params.clip_rect.y + params.scroll_y) / params.note_height);
+
+    if note < params.min_note || note > params.max_note {
+        discard;
+    }
+
+    // CQT bin: each octave has bins_per_octave bins, starting from min_note
+    let bin = (note - params.min_note) * params.bins_per_octave / 12.0;
+
+    if bin < 0.0 || bin >= params.freq_bins {
+        discard;
+    }
+
+    // Map global column to ring buffer position (accounting for stride)
+    let ring_pos = (global_col - params.cache_start_column) / params.column_stride;
+    let cache_x = ring_pos % params.cache_capacity;
+
+    // Sample cache texture with bilinear filtering
+    let u = (cache_x + 0.5) / params.cache_capacity;
+    let v = (bin + 0.5) / params.freq_bins;
+    let magnitude = textureSampleLevel(cache_tex, cache_sampler, vec2(u, v), 0.0).r;
+
+    return colormap(magnitude, params.gamma);
+}
```
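Two index mappings in the fragment shader above carry the whole scheme: MIDI note to CQT bin, and global column to ring-buffer X. Restated on the CPU for clarity (function names are ours; the arithmetic is the shader's):

```rust
/// note -> CQT bin, as in the shader: (note - min_note) * bins_per_octave / 12.
fn note_to_bin(note: f32, min_note: f32, bins_per_octave: f32) -> f32 {
    (note - min_note) * bins_per_octave / 12.0
}

/// global column -> ring-buffer X, accounting for column stride and wraparound.
fn ring_x(global_col: f32, cache_start_column: f32, column_stride: f32, cache_capacity: f32) -> f32 {
    ((global_col - cache_start_column) / column_stride) % cache_capacity
}

fn main() {
    println!("{}", note_to_bin(60.0, 24.0, 12.0));
    println!("{}", ring_x(100.0, 40.0, 2.0, 25.0));
}
```

With `bins_per_octave = 12` the mapping is exactly one bin per semitone, so each piano-roll row samples one cache row and no frequency interpolation is needed.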
```diff
@@ -63,8 +63,9 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
     }
 
     // Fragment X position → audio time
-    let timeline_time = params.viewport_start_time + (frag_x - params.clip_rect.x) / params.pixels_per_second;
-    let audio_time = timeline_time - params.clip_start_time + params.trim_start;
+    // clip_start_time is the screen X of the (unclamped) clip left edge.
+    // (frag_x - clip_start_time) / pps gives the time offset from the clip's start.
+    let audio_time = (frag_x - params.clip_start_time) / params.pixels_per_second + params.trim_start;
 
     // Audio time → frame index
     let frame_f = audio_time * params.sample_rate - params.segment_start_frame;
```
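This fix reinterprets `clip_start_time` as a screen X coordinate rather than a timeline time, so the fragment-to-audio-time mapping no longer involves `viewport_start_time`. A CPU restatement of the new formula, with names mirroring the shader params:

```rust
/// New mapping from the patch: clip_start_x is the screen X of the
/// (unclamped) clip left edge, so (frag_x - clip_start_x) / pps is the
/// offset into the clip, shifted by any trimmed-off leading audio.
fn audio_time(frag_x: f32, clip_start_x: f32, pixels_per_second: f32, trim_start: f32) -> f32 {
    (frag_x - clip_start_x) / pixels_per_second + trim_start
}

fn main() {
    // A fragment 200 px right of the clip edge at 100 px/s, trimmed by 0.5 s.
    println!("{}", audio_time(300.0, 100.0, 100.0, 0.5));
}
```

A fragment exactly at the clip's left edge now yields `trim_start`, which is the first audible sample of the trimmed clip.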
```diff
@@ -6,8 +6,8 @@
 use eframe::egui;
 use lightningbeam_core::action::Action;
 use lightningbeam_core::clip::ClipInstance;
-use lightningbeam_core::gpu::{BufferPool, BufferFormat, BufferSpec, Compositor, EffectProcessor, HDR_FORMAT, SrgbToLinearConverter};
-use lightningbeam_core::layer::{AnyLayer, AudioLayer, AudioLayerType, VideoLayer, VectorLayer};
+use lightningbeam_core::gpu::{BufferPool, BufferFormat, BufferSpec, Compositor, EffectProcessor, SrgbToLinearConverter};
+use lightningbeam_core::layer::{AnyLayer, AudioLayer};
 use lightningbeam_core::renderer::RenderedLayerType;
 use super::{DragClipType, NodePath, PaneRenderer, SharedPaneState};
 use std::sync::{Arc, Mutex, OnceLock};
@@ -872,7 +872,7 @@ impl egui_wgpu::CallbackTrait for VelloCallback {
     }
 
     // Also draw selection outlines for clip instances
-    let clip_instance_count = self.selection.clip_instances().len();
+    let _clip_instance_count = self.selection.clip_instances().len();
     for &clip_id in self.selection.clip_instances() {
         if let Some(clip_instance) = vector_layer.clip_instances.iter().find(|ci| ci.id == clip_id) {
             // Calculate clip-local time
@@ -1865,7 +1865,7 @@ impl egui_wgpu::CallbackTrait for VelloCallback {
     // Clamp to texture bounds
     if tex_x < width && tex_y < height {
         // Create a staging buffer to read back the pixel
-        let bytes_per_pixel = 4; // RGBA8
+        let _bytes_per_pixel = 4; // RGBA8
         // Align bytes_per_row to 256 (wgpu::COPY_BYTES_PER_ROW_ALIGNMENT)
         let bytes_per_row_alignment = 256u32;
         let bytes_per_row = bytes_per_row_alignment; // Single pixel, use minimum alignment
@@ -2128,7 +2128,6 @@ impl StagePane {
     use lightningbeam_core::tool::ToolState;
     use lightningbeam_core::layer::AnyLayer;
     use lightningbeam_core::hit_test::{self, hit_test_vector_editing, EditingHitTolerance, VectorEditHit};
-    use lightningbeam_core::bezpath_editing::{extract_editable_curves, mold_curve};
     use vello::kurbo::{Point, Rect as KurboRect, Affine};
 
     // Check if we have an active vector layer
@@ -2618,9 +2617,8 @@ impl StagePane {
     mouse_pos: vello::kurbo::Point,
     shared: &mut SharedPaneState,
 ) {
-    use lightningbeam_core::bezpath_editing::{mold_curve, rebuild_bezpath};
+    use lightningbeam_core::bezpath_editing::mold_curve;
     use lightningbeam_core::tool::ToolState;
-    use vello::kurbo::Point;
 
     // Clone tool state to get owned values
     let tool_state = shared.tool_state.clone();
@@ -2799,12 +2797,12 @@ impl StagePane {
     ui: &mut egui::Ui,
     response: &egui::Response,
    world_pos: egui::Vec2,
-    shift_held: bool,
+    _shift_held: bool,
     shared: &mut SharedPaneState,
 ) {
     use lightningbeam_core::tool::ToolState;
     use lightningbeam_core::layer::AnyLayer;
-    use lightningbeam_core::hit_test::{self, hit_test_vector_editing, EditingHitTolerance, VectorEditHit};
+    use lightningbeam_core::hit_test::{hit_test_vector_editing, EditingHitTolerance, VectorEditHit};
     use vello::kurbo::{Point, Affine};
 
     // Check if we have an active vector layer
@@ -2897,7 +2895,7 @@ impl StagePane {
     shape_instance_id: uuid::Uuid,
     curve_index: usize,
     point_index: u8,
-    mouse_pos: vello::kurbo::Point,
+    _mouse_pos: vello::kurbo::Point,
     active_layer_id: uuid::Uuid,
     shared: &mut SharedPaneState,
 ) {
@@ -3606,7 +3604,7 @@ impl StagePane {
 
     // Mouse drag: add points to path
     if response.dragged() {
-        if let ToolState::DrawingPath { points, simplify_mode } = &mut *shared.tool_state {
+        if let ToolState::DrawingPath { points, simplify_mode: _ } = &mut *shared.tool_state {
            // Only add point if it's far enough from the last point (reduce noise)
            const MIN_POINT_DISTANCE: f64 = 2.0;
@@ -3760,63 +3758,12 @@ impl StagePane {
     }
 }
 
-/// Decompose an affine matrix into transform components
-/// Returns (translation_x, translation_y, rotation_deg, scale_x, scale_y, skew_x_deg, skew_y_deg)
-fn decompose_affine(affine: kurbo::Affine) -> (f64, f64, f64, f64, f64, f64, f64) {
-    let coeffs = affine.as_coeffs();
-    let a = coeffs[0];
-    let b = coeffs[1];
-    let c = coeffs[2];
-    let d = coeffs[3];
-    let e = coeffs[4]; // translation_x
-    let f = coeffs[5]; // translation_y
-
-    // Extract translation
-    let tx = e;
-    let ty = f;
-
-    // Decompose linear part [[a, c], [b, d]] into rotate * scale * skew
-    // Using QR-like decomposition
-
-    // Extract rotation
-    let rotation_rad = b.atan2(a);
-    let cos_r = rotation_rad.cos();
-    let sin_r = rotation_rad.sin();
-
-    // Remove rotation to get scale * skew
-    // R^(-1) * M where M = [[a, c], [b, d]]
-    let m11 = a * cos_r + b * sin_r;
-    let m12 = c * cos_r + d * sin_r;
-    let m21 = -a * sin_r + b * cos_r;
-    let m22 = -c * sin_r + d * cos_r;
-
-    // Now [[m11, m12], [m21, m22]] = scale * skew
-    // scale * skew = [[sx, 0], [0, sy]] * [[1, tan(skew_y)], [tan(skew_x), 1]]
-    //              = [[sx, sx*tan(skew_y)], [sy*tan(skew_x), sy]]
-
-    let scale_x = m11;
-    let scale_y = m22;
-
-    let skew_x_rad = if scale_y.abs() > 0.001 { (m21 / scale_y).atan() } else { 0.0 };
-    let skew_y_rad = if scale_x.abs() > 0.001 { (m12 / scale_x).atan() } else { 0.0 };
-
-    (
-        tx,
-        ty,
-        rotation_rad.to_degrees(),
-        scale_x,
-        scale_y,
-        skew_x_rad.to_degrees(),
-        skew_y_rad.to_degrees(),
-    )
-}
-
 /// Apply transform preview to objects based on current mouse position
 fn apply_transform_preview(
     vector_layer: &mut lightningbeam_core::layer::VectorLayer,
     mode: &lightningbeam_core::tool::TransformMode,
```
|
mode: &lightningbeam_core::tool::TransformMode,
|
||||||
original_transforms: &std::collections::HashMap<uuid::Uuid, lightningbeam_core::object::Transform>,
|
original_transforms: &std::collections::HashMap<uuid::Uuid, lightningbeam_core::object::Transform>,
|
||||||
pivot: vello::kurbo::Point,
|
_pivot: vello::kurbo::Point,
|
||||||
start_mouse: vello::kurbo::Point,
|
start_mouse: vello::kurbo::Point,
|
||||||
current_mouse: vello::kurbo::Point,
|
current_mouse: vello::kurbo::Point,
|
||||||
original_bbox: vello::kurbo::Rect,
|
original_bbox: vello::kurbo::Rect,
|
||||||
|
|
@ -4784,7 +4731,6 @@ impl StagePane {
|
||||||
_ => egui::CursorIcon::Default,
|
_ => egui::CursorIcon::Default,
|
||||||
};
|
};
|
||||||
ui.ctx().set_cursor_icon(cursor);
|
ui.ctx().set_cursor_icon(cursor);
|
||||||
hovering_handle = true;
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
@ -5219,7 +5165,6 @@ impl StagePane {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
_ => {}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
} else if let AnyLayer::Video(video_layer) = layer {
|
} else if let AnyLayer::Video(video_layer) = layer {
|
||||||
|
|
|
||||||
|
|
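The diff above removes StagePane's unused `decompose_affine` helper. For reference, the decomposition it performed (rotation via `atan2`, then scale and skew read off the de-rotated matrix) can be sketched standalone. This is a sketch, not the project's code: it takes the raw coefficient array directly, in the same `[a, b, c, d, e, f]` layout that `kurbo::Affine::as_coeffs()` uses, so it needs no dependencies.

```rust
// Splits an affine's linear part [[a, c], [b, d]] into rotation * scale * skew,
// plus the translation (e, f). Mirrors the deleted decompose_affine helper.
fn decompose_affine(coeffs: [f64; 6]) -> (f64, f64, f64, f64, f64, f64, f64) {
    let [a, b, c, d, tx, ty] = coeffs;

    // Rotation is the angle of the transformed x-axis.
    let rotation_rad = b.atan2(a);
    let (sin_r, cos_r) = rotation_rad.sin_cos();

    // Multiply by R^-1 to strip rotation, leaving scale * skew.
    let m11 = a * cos_r + b * sin_r;
    let m12 = c * cos_r + d * sin_r;
    let m21 = -a * sin_r + b * cos_r;
    let m22 = -c * sin_r + d * cos_r;

    // scale * skew = [[sx, sx*tan(skew_y)], [sy*tan(skew_x), sy]]
    let (scale_x, scale_y) = (m11, m22);
    let skew_x = if scale_y.abs() > 0.001 { (m21 / scale_y).atan() } else { 0.0 };
    let skew_y = if scale_x.abs() > 0.001 { (m12 / scale_x).atan() } else { 0.0 };

    (tx, ty, rotation_rad.to_degrees(), scale_x, scale_y,
     skew_x.to_degrees(), skew_y.to_degrees())
}

fn main() {
    // Pure 90° rotation: coeffs are [cos, sin, -sin, cos, 0, 0] = [0, 1, -1, 0, 0, 0].
    let (_, _, rot, sx, sy, kx, ky) = decompose_affine([0.0, 1.0, -1.0, 0.0, 0.0, 0.0]);
    assert!((rot - 90.0).abs() < 1e-9);
    assert!((sx - 1.0).abs() < 1e-9 && (sy - 1.0).abs() < 1e-9);
    assert!(kx.abs() < 1e-9 && ky.abs() < 1e-9);
    println!("decomposition ok");
}
```

Round-tripping (rebuilding the affine from the recovered components) is a cheap sanity check for this kind of decomposition.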
```diff
@@ -125,6 +125,7 @@ impl TimelinePane
     /// Execute a view action with the given parameters
     /// Called from main.rs after determining this is the best handler
+    #[allow(dead_code)] // Mirrors StagePane; wiring in main.rs pending (see TODO at view action dispatch)
     pub fn execute_view_action(&mut self, action: &crate::menu::MenuAction, zoom_center: egui::Vec2) {
         use crate::menu::MenuAction;
         match action {
@@ -150,21 +151,25 @@ impl TimelinePane
     /// Start recording on the active audio layer
     fn start_recording(&mut self, shared: &mut SharedPaneState) {
+        use lightningbeam_core::clip::{AudioClip, ClipInstance};

         let Some(active_layer_id) = *shared.active_layer_id else {
             println!("⚠️ No active layer selected for recording");
             return;
         };

-        // Get the active layer and check if it's an audio layer
-        let document = shared.action_executor.document();
-        let Some(layer) = document.root.children.iter().find(|l| l.id() == active_layer_id) else {
-            println!("⚠️ Active layer not found in document");
-            return;
-        };
-        let AnyLayer::Audio(audio_layer) = layer else {
-            println!("⚠️ Active layer is not an audio layer - cannot record");
-            return;
-        };
+        // Get layer type (copy it so we can drop the document borrow before mutating)
+        let layer_type = {
+            let document = shared.action_executor.document();
+            let Some(layer) = document.root.children.iter().find(|l| l.id() == active_layer_id) else {
+                println!("⚠️ Active layer not found in document");
+                return;
+            };
+            let AnyLayer::Audio(audio_layer) = layer else {
+                println!("⚠️ Active layer is not an audio layer - cannot record");
+                return;
+            };
+            audio_layer.audio_layer_type
+        };

         // Get the backend track ID for this layer
@@ -179,31 +184,53 @@ impl TimelinePane
         if let Some(controller_arc) = shared.audio_controller {
             let mut controller = controller_arc.lock().unwrap();

-            match audio_layer.audio_layer_type {
+            match layer_type {
                 AudioLayerType::Midi => {
-                    // For MIDI recording, we need to create a clip first
-                    // The backend will emit MidiRecordingStarted with the clip_id
+                    // Create backend MIDI clip and start recording
                     let clip_id = controller.create_midi_clip(track_id, start_time, 4.0);
                     controller.start_midi_recording(track_id, clip_id, start_time);
                     shared.recording_clips.insert(active_layer_id, clip_id);
                     println!("🎹 Started MIDI recording on track {:?} at {:.2}s, clip_id={}",
                         track_id, start_time, clip_id);

+                    // Drop controller lock before document mutation
+                    drop(controller);
+
+                    // Create document clip + clip instance immediately (clip_id is known synchronously)
+                    let doc_clip = AudioClip::new_midi("Recording...", clip_id, 4.0);
+                    let doc_clip_id = shared.action_executor.document_mut().add_audio_clip(doc_clip);
+
+                    let clip_instance = ClipInstance::new(doc_clip_id)
+                        .with_timeline_start(start_time);
+
+                    if let Some(layer) = shared.action_executor.document_mut().root.children.iter_mut()
+                        .find(|l| l.id() == active_layer_id)
+                    {
+                        if let lightningbeam_core::layer::AnyLayer::Audio(audio_layer) = layer {
+                            audio_layer.clip_instances.push(clip_instance);
+                        }
+                    }
+
+                    // Initialize empty cache entry for this clip
+                    shared.midi_event_cache.insert(clip_id, Vec::new());
                 }
                 AudioLayerType::Sampled => {
                     // For audio recording, backend creates the clip
                     controller.start_recording(track_id, start_time);
                     println!("🎤 Started audio recording on track {:?} at {:.2}s", track_id, start_time);
+                    drop(controller);
                 }
             }

-            // Auto-start playback if not already playing
+            // Re-acquire lock for playback start
             if !*shared.is_playing {
+                let mut controller = controller_arc.lock().unwrap();
                 controller.play();
                 *shared.is_playing = true;
                 println!("▶ Auto-started playback for recording");
             }

-            // Store recording state for clip creation when RecordingStarted event arrives
+            // Store recording state
             *shared.is_recording = true;
             *shared.recording_start_time = start_time;
             *shared.recording_layer_id = Some(active_layer_id);
```
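The `start_recording` rewrite above scopes the shared document borrow inside a block and copies out the (Copy-able) layer type, so `document_mut()` can be called afterwards without a borrow conflict. A minimal sketch of that pattern, with hypothetical stand-in types rather than Lightningbeam's real API:

```rust
// Stand-ins for the document model; names are illustrative only.
#[derive(Clone, Copy, PartialEq, Debug)]
enum AudioLayerType { Midi, Sampled }

struct Layer { id: u32, layer_type: AudioLayerType }
struct Document { layers: Vec<Layer> }

fn record(doc: &mut Document, active_id: u32) -> Option<AudioLayerType> {
    // The immutable borrow of `doc` is confined to this block; only the
    // Copy value `layer_type` escapes it.
    let layer_type = {
        let layer = doc.layers.iter().find(|l| l.id == active_id)?;
        layer.layer_type
    };
    // Borrow has ended, so mutation is now allowed.
    if layer_type == AudioLayerType::Midi {
        doc.layers.push(Layer { id: 99, layer_type: AudioLayerType::Midi });
    }
    Some(layer_type)
}

fn main() {
    let mut doc = Document {
        layers: vec![Layer { id: 1, layer_type: AudioLayerType::Midi }],
    };
    assert_eq!(record(&mut doc, 1), Some(AudioLayerType::Midi));
    assert_eq!(doc.layers.len(), 2); // mutation succeeded after the scoped borrow
    println!("borrow scoping ok");
}
```

The same idea motivates the explicit `drop(controller)` calls in the diff: releasing the mutex guard before touching the document, then re-locking for playback.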
```diff
@@ -505,7 +532,7 @@ impl TimelinePane
         painter: &egui::Painter,
         clip_rect: egui::Rect,
         rect_min_x: f32, // Timeline panel left edge (for proper viewport-relative positioning)
-        events: &[(f64, u8, bool)], // (timestamp, note_number, is_note_on)
+        events: &[(f64, u8, u8, bool)], // (timestamp, note_number, velocity, is_note_on)
         trim_start: f64,
         visible_duration: f64,
         timeline_start: f64,
@@ -527,7 +554,7 @@ impl TimelinePane
         let mut note_rectangles: Vec<(egui::Rect, u8)> = Vec::new();

         // First pass: pair note-ons with note-offs to calculate durations
-        for &(timestamp, note_number, is_note_on) in events {
+        for &(timestamp, note_number, _velocity, is_note_on) in events {
             if is_note_on {
                 // Store note-on timestamp
                 active_notes.insert(note_number, timestamp);
```
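The "first pass" in the hunk above pairs note-on events with their matching note-offs to compute note durations. A standalone sketch of that pairing, using the new four-field event tuple (velocity is carried but ignored, as in the diff); the function name and output shape are illustrative, not the project's API:

```rust
use std::collections::HashMap;

// Turns raw (timestamp, note, velocity, is_note_on) events into
// (start, duration, note) tuples by pairing each note-off with the
// most recent note-on for the same note number.
fn pair_notes(events: &[(f64, u8, u8, bool)]) -> Vec<(f64, f64, u8)> {
    let mut active: HashMap<u8, f64> = HashMap::new();
    let mut notes = Vec::new();
    for &(t, note, _vel, on) in events {
        if on {
            active.insert(note, t); // remember note-on timestamp
        } else if let Some(start) = active.remove(&note) {
            notes.push((start, t - start, note));
        }
    }
    // Note-ons still in `active` have no matching note-off (e.g. a note
    // held while recording stops); they are dropped in this sketch.
    notes
}

fn main() {
    let events = [
        (0.0, 60, 100, true),  // C4 down
        (0.5, 60, 0, false),   // C4 up -> duration 0.5
        (1.0, 64, 90, true),   // E4 down, never released
    ];
    assert_eq!(pair_notes(&events), vec![(0.0, 0.5, 60)]);
    println!("note pairing ok");
}
```

In the real renderer the paired notes become on-screen rectangles; an open note-on would typically be drawn extending to the current playhead instead of being dropped.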
```diff
@@ -755,7 +782,7 @@ impl TimelinePane
         // Mute button
         // TODO: Replace with SVG icon (volume-up-fill.svg / volume-mute.svg)
-        let mute_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(mute_button_rect), |ui| {
+        let mute_response = ui.scope_builder(egui::UiBuilder::new().max_rect(mute_button_rect), |ui| {
             let mute_text = if is_muted { "🔇" } else { "🔊" };
             let button = egui::Button::new(mute_text)
                 .fill(if is_muted {
@@ -779,7 +806,7 @@ impl TimelinePane
         // Solo button
         // TODO: Replace with SVG headphones icon
-        let solo_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(solo_button_rect), |ui| {
+        let solo_response = ui.scope_builder(egui::UiBuilder::new().max_rect(solo_button_rect), |ui| {
             let button = egui::Button::new("🎧")
                 .fill(if is_soloed {
                     egui::Color32::from_rgba_unmultiplied(100, 200, 100, 100)
@@ -802,7 +829,7 @@ impl TimelinePane
         // Lock button
         // TODO: Replace with SVG lock/lock-open icons
-        let lock_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(lock_button_rect), |ui| {
+        let lock_response = ui.scope_builder(egui::UiBuilder::new().max_rect(lock_button_rect), |ui| {
             let lock_text = if is_locked { "🔒" } else { "🔓" };
             let button = egui::Button::new(lock_text)
                 .fill(if is_locked {
@@ -825,7 +852,7 @@ impl TimelinePane
         }

         // Volume slider (nonlinear: 0-70% slider = 0-100% volume, 70-100% slider = 100-200% volume)
-        let volume_response = ui.allocate_new_ui(egui::UiBuilder::new().max_rect(volume_slider_rect), |ui| {
+        let volume_response = ui.scope_builder(egui::UiBuilder::new().max_rect(volume_slider_rect), |ui| {
             // Map volume (0.0-2.0) to slider position (0.0-1.0)
             let slider_value = if current_volume <= 1.0 {
                 // 0.0-1.0 volume maps to 0.0-0.7 slider (70%)
```
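The volume slider comment above describes a piecewise-linear mapping: the lower 70% of slider travel covers 0–100% volume, the upper 30% covers 100–200%, giving finer control in the normal range. A sketch of both directions of that mapping (free functions here; the diff computes this inline):

```rust
// Maps volume 0.0..=2.0 onto slider position 0.0..=1.0, with the
// breakpoint at volume 1.0 <-> slider 0.7.
fn volume_to_slider(volume: f32) -> f32 {
    if volume <= 1.0 { volume * 0.7 } else { 0.7 + (volume - 1.0) * 0.3 }
}

// Inverse mapping: slider position back to volume.
fn slider_to_volume(slider: f32) -> f32 {
    if slider <= 0.7 { slider / 0.7 } else { 1.0 + (slider - 0.7) / 0.3 }
}

fn main() {
    // Breakpoint and endpoints land where the comment says.
    assert!((volume_to_slider(1.0) - 0.7).abs() < 1e-6);
    assert!((slider_to_volume(1.0) - 2.0).abs() < 1e-6);
    // The two halves are consistent: round-tripping is stable.
    for v in [0.0f32, 0.5, 1.0, 1.5, 2.0] {
        assert!((slider_to_volume(volume_to_slider(v)) - v).abs() < 1e-5);
    }
    println!("volume mapping ok");
}
```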
```diff
@@ -892,7 +919,7 @@ impl TimelinePane
         document: &lightningbeam_core::document::Document,
         active_layer_id: &Option<uuid::Uuid>,
         selection: &lightningbeam_core::selection::Selection,
-        midi_event_cache: &std::collections::HashMap<u32, Vec<(f64, u8, bool)>>,
+        midi_event_cache: &std::collections::HashMap<u32, Vec<(f64, u8, u8, bool)>>,
         raw_audio_cache: &std::collections::HashMap<usize, (Vec<f32>, u32, u32)>,
         waveform_gpu_dirty: &mut std::collections::HashSet<usize>,
         target_format: wgpu::TextureFormat,
@@ -1191,7 +1218,7 @@ impl TimelinePane
         if let Some((samples, sr, ch)) = raw_audio_cache.get(audio_pool_index) {
             let total_frames = samples.len() / (*ch).max(1) as usize;
             let audio_file_duration = total_frames as f64 / *sr as f64;
-            let screen_size = ui.ctx().screen_rect().size();
+            let screen_size = ui.ctx().content_rect().size();

             let pending_upload = if waveform_gpu_dirty.contains(audio_pool_index) {
                 waveform_gpu_dirty.remove(audio_pool_index);
@@ -1228,7 +1255,7 @@ impl TimelinePane
                 pixels_per_second: self.pixels_per_second as f32,
                 audio_duration: audio_file_duration as f32,
                 sample_rate: *sr as f32,
-                clip_start_time: instance_start as f32,
+                clip_start_time: clip_screen_start,
                 trim_start: preview_trim_start as f32,
                 tex_width: crate::waveform_gpu::tex_width() as f32,
                 total_frames: total_frames as f32,
@@ -1321,7 +1348,7 @@ impl TimelinePane
     fn handle_input(
         &mut self,
         ui: &mut egui::Ui,
-        full_timeline_rect: egui::Rect,
+        _full_timeline_rect: egui::Rect,
         ruler_rect: egui::Rect,
         content_rect: egui::Rect,
         header_rect: egui::Rect,
@@ -1331,7 +1358,7 @@ impl TimelinePane
         selection: &mut lightningbeam_core::selection::Selection,
         pending_actions: &mut Vec<Box<dyn lightningbeam_core::action::Action>>,
         playback_time: &mut f64,
-        is_playing: &mut bool,
+        _is_playing: &mut bool,
         audio_controller: Option<&std::sync::Arc<std::sync::Mutex<daw_backend::EngineController>>>,
     ) {
         // Don't allocate the header area for input - let widgets handle it directly
@@ -1759,8 +1786,10 @@ impl TimelinePane
         }

         // Distinguish between mouse wheel (discrete) and trackpad (smooth)
+        // Only handle scroll when mouse is over the timeline area
         let mut handled = false;
-        ui.input(|i| {
+        let pointer_over_timeline = response.hovered() || ui.rect_contains_pointer(header_rect);
+        if pointer_over_timeline { ui.input(|i| {
             for event in &i.raw.events {
                 if let egui::Event::MouseWheel { unit, delta, modifiers, .. } = event {
                     match unit {
@@ -1787,10 +1816,10 @@ impl TimelinePane
                     }
                 }
             }
-        });
+        }); }

         // Handle scroll_delta for trackpad panning (when Ctrl not held)
-        if !handled {
+        if pointer_over_timeline && !handled {
             let scroll_delta = ui.input(|i| i.smooth_scroll_delta);
             if scroll_delta.x.abs() > 0.0 || scroll_delta.y.abs() > 0.0 {
                 // Horizontal scroll: pan timeline (inverted: positive delta scrolls left/earlier in time)
@@ -2268,7 +2297,7 @@ impl PaneRenderer for TimelinePane
             *shared.dragging_asset = None;
         } else {
             // Get document dimensions for centering and create clip instance
-            let (center_x, center_y, mut clip_instance) = {
+            let (_center_x, _center_y, clip_instance) = {
                 let doc = shared.action_executor.document();
                 let center_x = doc.width / 2.0;
                 let center_y = doc.height / 2.0;
```
```diff
@@ -215,7 +215,7 @@ impl VirtualPianoPane
         // Handle interaction (skip if a black key is being interacted with)
         let key_id = ui.id().with(("white_key", note));
-        let response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());
+        let _response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());

         // Visual feedback for pressed keys (check both pressed_notes and current pointer state)
         let pointer_over_key = ui.input(|i| {
@@ -298,7 +298,7 @@ impl VirtualPianoPane
         // Handle interaction (same as white keys)
         let key_id = ui.id().with(("black_key", note));
-        let response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());
+        let _response = ui.interact(key_rect, key_id, egui::Sense::click_and_drag());

         // Visual feedback for pressed keys (check both pressed_notes and current pointer state)
         let pointer_over_key = ui.input(|i| {
```
```diff
@@ -192,7 +192,7 @@ impl PreferencesDialog
             ui.label("Default BPM:");
             ui.add(
                 egui::DragValue::new(&mut self.working_prefs.bpm)
-                    .clamp_range(20..=300)
+                    .range(20..=300)
                     .speed(1.0),
             );
         });
@@ -201,7 +201,7 @@ impl PreferencesDialog
             ui.label("Default Framerate:");
             ui.add(
                 egui::DragValue::new(&mut self.working_prefs.framerate)
-                    .clamp_range(1..=120)
+                    .range(1..=120)
                     .speed(1.0)
                     .suffix(" fps"),
             );
@@ -211,7 +211,7 @@ impl PreferencesDialog
             ui.label("Default File Width:");
             ui.add(
                 egui::DragValue::new(&mut self.working_prefs.file_width)
-                    .clamp_range(100..=10000)
+                    .range(100..=10000)
                     .speed(10.0)
                     .suffix(" px"),
             );
@@ -221,7 +221,7 @@ impl PreferencesDialog
             ui.label("Default File Height:");
             ui.add(
                 egui::DragValue::new(&mut self.working_prefs.file_height)
-                    .clamp_range(100..=10000)
+                    .range(100..=10000)
                     .speed(10.0)
                     .suffix(" px"),
             );
@@ -231,7 +231,7 @@ impl PreferencesDialog
             ui.label("Scroll Speed:");
             ui.add(
                 egui::DragValue::new(&mut self.working_prefs.scroll_speed)
-                    .clamp_range(0.1..=10.0)
+                    .range(0.1..=10.0)
                     .speed(0.1),
             );
         });
@@ -245,7 +245,7 @@ impl PreferencesDialog
         ui.horizontal(|ui| {
             ui.label("Audio Buffer Size:");

-            egui::ComboBox::from_id_source("audio_buffer_size")
+            egui::ComboBox::from_id_salt("audio_buffer_size")
                 .selected_text(format!("{} samples", self.working_prefs.audio_buffer_size))
                 .show_ui(ui, |ui| {
                     ui.selectable_value(
@@ -292,7 +292,7 @@ impl PreferencesDialog
         ui.horizontal(|ui| {
             ui.label("Theme:");

-            egui::ComboBox::from_id_source("theme_mode")
+            egui::ComboBox::from_id_salt("theme_mode")
                 .selected_text(format!("{:?}", self.working_prefs.theme_mode))
                 .show_ui(ui, |ui| {
                     ui.selectable_value(
```
```diff
@@ -4,4 +4,3 @@

 pub mod dialog;

-pub use dialog::{PreferencesDialog, PreferencesSaveResult};
```
```diff
@@ -46,27 +46,6 @@ pub struct Style
     // Add more properties as needed
 }

-impl Style {
-    /// Merge another style into this one (other's properties override if present)
-    pub fn merge(&mut self, other: &Style) {
-        if other.background_color.is_some() {
-            self.background_color = other.background_color;
-        }
-        if other.border_color.is_some() {
-            self.border_color = other.border_color;
-        }
-        if other.text_color.is_some() {
-            self.text_color = other.text_color;
-        }
-        if other.width.is_some() {
-            self.width = other.width;
-        }
-        if other.height.is_some() {
-            self.height = other.height;
-        }
-    }
-}
-
 #[derive(Debug, Clone)]
 pub struct Theme {
     light_variables: HashMap<String, String>,
@@ -229,21 +208,13 @@ impl Theme
         }
     }

-    /// Get a CSS variable value and parse as color (backward compatibility helper)
-    /// This allows old code using theme.color("variable-name") to work
-    pub fn color(&self, var_name: &str) -> Option<egui::Color32> {
-        // Try light variables first, then dark variables
-        let value = self.light_variables.get(var_name)
-            .or_else(|| self.dark_variables.get(var_name))?;
-        parse_hex_color(value)
-    }
-
     /// Get the number of loaded selectors
     pub fn len(&self) -> usize {
         self.light_styles.len()
     }

     /// Check if theme has no styles
+    #[allow(dead_code)] // Used in tests
     pub fn is_empty(&self) -> bool {
         self.light_styles.is_empty()
     }
```
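The hunk above deletes the unused `Style::merge`. For reference, the CSS-like merge pattern it implemented — each `Option` field of `other` overrides the corresponding field of `self` only when it is `Some` — can be sketched in isolation. Fields are simplified to plain types here; the real struct uses egui color types:

```rust
// Simplified stand-in for the theme Style struct (two fields are enough
// to show the pattern; colors reduced to u32 for the sketch).
#[derive(Default, Debug, PartialEq, Clone)]
struct Style {
    background_color: Option<u32>,
    width: Option<f32>,
}

impl Style {
    /// Merge another style into this one (other's properties override if present).
    fn merge(&mut self, other: &Style) {
        if other.background_color.is_some() {
            self.background_color = other.background_color;
        }
        if other.width.is_some() {
            self.width = other.width;
        }
    }
}

fn main() {
    let mut base = Style { background_color: Some(0xFF0000), width: None };
    base.merge(&Style { background_color: None, width: Some(120.0) });
    assert_eq!(base.background_color, Some(0xFF0000)); // None in `other`: untouched
    assert_eq!(base.width, Some(120.0));               // Some in `other`: overridden
    println!("style merge ok");
}
```

This mirrors how CSS cascading resolves a more-specific selector over a base one: unspecified properties fall through to the earlier value.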
@ -14,9 +14,6 @@ use wgpu::util::DeviceExt;
|
||||||
/// Fixed texture width (power of 2) for all waveform textures
|
/// Fixed texture width (power of 2) for all waveform textures
|
||||||
const TEX_WIDTH: u32 = 2048;
|
const TEX_WIDTH: u32 = 2048;
|
||||||
|
|
||||||
/// Maximum number of texture segments per audio clip
|
|
||||||
const MAX_SEGMENTS: u32 = 16;
|
|
||||||
|
|
||||||
/// GPU resources for all waveform textures, stored in CallbackResources
|
/// GPU resources for all waveform textures, stored in CallbackResources
|
||||||
pub struct WaveformGpuResources {
|
pub struct WaveformGpuResources {
|
||||||
/// Per-audio-pool-index GPU data
|
/// Per-audio-pool-index GPU data
|
||||||
|
|
@ -34,6 +31,7 @@ pub struct WaveformGpuResources {
|
||||||
}
|
}
|
||||||
|
|
||||||
/// GPU data for a single audio file
|
/// GPU data for a single audio file
|
||||||
|
#[allow(dead_code)] // textures/texture_views must stay alive to back bind groups; metadata for future use
|
||||||
pub struct WaveformGpuEntry {
|
pub struct WaveformGpuEntry {
|
||||||
/// One texture per segment (for long audio split across multiple textures)
|
/// One texture per segment (for long audio split across multiple textures)
|
||||||
pub textures: Vec<wgpu::Texture>,
|
pub textures: Vec<wgpu::Texture>,
|
||||||
|
|
@ -45,8 +43,10 @@ pub struct WaveformGpuEntry {
|
||||||
pub uniform_buffers: Vec<wgpu::Buffer>,
|
pub uniform_buffers: Vec<wgpu::Buffer>,
|
||||||
/// Frames covered by each texture segment
|
/// Frames covered by each texture segment
|
||||||
pub frames_per_segment: u32,
|
pub frames_per_segment: u32,
|
||||||
/// Total frame count
|
/// Total frame count of data currently in the texture
|
||||||
pub total_frames: u64,
|
pub total_frames: u64,
|
||||||
|
/// Allocated texture height (may be larger than needed for current total_frames)
|
||||||
|
pub tex_height: u32,
|
||||||
/// Sample rate
|
/// Sample rate
|
||||||
pub sample_rate: u32,
|
pub sample_rate: u32,
|
||||||
/// Number of channels in source audio
|
/// Number of channels in source audio
|
||||||
|
|
@ -273,14 +273,100 @@ impl WaveformGpuResources {
|
||||||
sample_rate: u32,
|
sample_rate: u32,
|
||||||
channels: u32,
|
channels: u32,
|
||||||
) -> Vec<wgpu::CommandBuffer> {
|
) -> Vec<wgpu::CommandBuffer> {
|
||||||
// Remove old entry if exists
|
let new_total_frames = samples.len() / channels.max(1) as usize;
|
||||||
self.entries.remove(&pool_index);
|
if new_total_frames == 0 {
|
||||||
|
|
||||||
let total_frames = samples.len() / channels.max(1) as usize;
|
|
||||||
if total_frames == 0 {
|
|
||||||
return Vec::new();
|
return Vec::new();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// If entry exists and texture is large enough, do an incremental update
|
||||||
|
let incremental = if let Some(entry) = self.entries.get(&pool_index) {
|
||||||
|
let new_tex_height = (new_total_frames as u32 + TEX_WIDTH - 1) / TEX_WIDTH;
|
||||||
|
if new_tex_height <= entry.tex_height && new_total_frames > entry.total_frames as usize {
|
||||||
|
Some((entry.total_frames as usize, entry.tex_height))
|
||||||
|
} else if new_total_frames <= entry.total_frames as usize {
|
||||||
|
return Vec::new(); // No new data
|
||||||
|
} else {
|
||||||
|
None // Texture too small, need full recreate
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
+            None // No entry yet
+        };
+
+        if let Some((old_frames, tex_height)) = incremental {
+            // Write only the NEW rows into the existing texture
+            let start_row = old_frames as u32 / TEX_WIDTH;
+            let end_row = (new_total_frames as u32 + TEX_WIDTH - 1) / TEX_WIDTH;
+            let rows_to_write = end_row - start_row;
+
+            let row_texel_count = (TEX_WIDTH * rows_to_write) as usize;
+            let mut row_data: Vec<half::f16> = vec![half::f16::ZERO; row_texel_count * 4];
+
+            let row_start_frame = start_row as usize * TEX_WIDTH as usize;
+            for frame in 0..(rows_to_write as usize * TEX_WIDTH as usize) {
+                let global_frame = row_start_frame + frame;
+                if global_frame >= new_total_frames {
+                    break;
+                }
+                let sample_offset = global_frame * channels as usize;
+                let left = if sample_offset < samples.len() {
+                    samples[sample_offset]
+                } else {
+                    0.0
+                };
+                let right = if channels >= 2 && sample_offset + 1 < samples.len() {
+                    samples[sample_offset + 1]
+                } else {
+                    left
+                };
+                let texel_offset = frame * 4;
+                row_data[texel_offset] = half::f16::from_f32(left);
+                row_data[texel_offset + 1] = half::f16::from_f32(left);
+                row_data[texel_offset + 2] = half::f16::from_f32(right);
+                row_data[texel_offset + 3] = half::f16::from_f32(right);
+            }
+
+            let entry = self.entries.get(&pool_index).unwrap();
+            queue.write_texture(
+                wgpu::TexelCopyTextureInfo {
+                    texture: &entry.textures[0],
+                    mip_level: 0,
+                    origin: wgpu::Origin3d { x: 0, y: start_row, z: 0 },
+                    aspect: wgpu::TextureAspect::All,
+                },
+                bytemuck::cast_slice(&row_data),
+                wgpu::TexelCopyBufferLayout {
+                    offset: 0,
+                    bytes_per_row: Some(TEX_WIDTH * 8),
+                    rows_per_image: Some(rows_to_write),
+                },
+                wgpu::Extent3d {
+                    width: TEX_WIDTH,
+                    height: rows_to_write,
+                    depth_or_array_layers: 1,
+                },
+            );
+
+            // Regenerate mipmaps
+            let mip_count = compute_mip_count(TEX_WIDTH, tex_height);
+            let cmds = self.generate_mipmaps(
+                device,
+                &entry.textures[0],
+                TEX_WIDTH,
+                tex_height,
+                mip_count,
+                new_total_frames as u32,
+            );
+
+            // Update total_frames after borrow of entry is done
+            self.entries.get_mut(&pool_index).unwrap().total_frames = new_total_frames as u64;
+            return cmds;
+        }
+
+        // Full create (first upload or texture needs to grow)
+        self.entries.remove(&pool_index);
+
+        let total_frames = new_total_frames;
+
         let max_frames_per_segment = (TEX_WIDTH as u64)
             * (device.limits().max_texture_dimension_2d as u64);
         let segment_count =
@@ -325,7 +411,6 @@ impl WaveformGpuResources {
         });

         // Pack raw samples into Rgba16Float data for mip 0
-        // R=left_min=left_sample, G=left_max=left_sample, B=right_min, A=right_max
         let texel_count = (TEX_WIDTH * tex_height) as usize;
         let mut mip0_data: Vec<half::f16> = vec![half::f16::ZERO; texel_count * 4];

@@ -341,14 +426,14 @@ impl WaveformGpuResources {
             let right = if channels >= 2 && sample_offset + 1 < samples.len() {
                 samples[sample_offset + 1]
             } else {
-                left // Mono: duplicate left to right
+                left
             };

             let texel_offset = frame * 4;
-            mip0_data[texel_offset] = half::f16::from_f32(left); // R = left_min
-            mip0_data[texel_offset + 1] = half::f16::from_f32(left); // G = left_max
-            mip0_data[texel_offset + 2] = half::f16::from_f32(right); // B = right_min
-            mip0_data[texel_offset + 3] = half::f16::from_f32(right); // A = right_max
+            mip0_data[texel_offset] = half::f16::from_f32(left);
+            mip0_data[texel_offset + 1] = half::f16::from_f32(left);
+            mip0_data[texel_offset + 2] = half::f16::from_f32(right);
+            mip0_data[texel_offset + 3] = half::f16::from_f32(right);
         }

         // Upload mip 0
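The hunk above packs one audio frame into a single RGBA texel as (L, L, R, R), duplicating a mono signal into the right channel. A standalone sketch of that packing, assuming interleaved `f32` samples; `pack_frame` is an illustrative helper, not a function from this codebase:

```rust
// Illustrative sketch of the per-frame texel packing shown in the diff above.
// `pack_frame` is a hypothetical name; the real code writes straight into mip0_data.

/// Pack one frame of interleaved samples into (R, G, B, A) = (L, L, R, R).
/// Mono input duplicates the left sample into the right channel.
fn pack_frame(samples: &[f32], frame: usize, channels: usize) -> [f32; 4] {
    let offset = frame * channels;
    let left = samples.get(offset).copied().unwrap_or(0.0);
    let right = if channels >= 2 {
        samples.get(offset + 1).copied().unwrap_or(left)
    } else {
        left
    };
    [left, left, right, right]
}

fn main() {
    // Stereo: frame 1 of interleaved [L0, R0, L1, R1]
    let stereo = [0.1, -0.1, 0.5, -0.5];
    assert_eq!(pack_frame(&stereo, 1, 2), [0.5, 0.5, -0.5, -0.5]);

    // Mono: right channel mirrors left
    let mono = [0.25, 0.75];
    assert_eq!(pack_frame(&mono, 1, 1), [0.75, 0.75, 0.75, 0.75]);
}
```

Duplicating min and max into the same texel at mip 0 lets the mipmap reduction pass compute true min/max envelopes at coarser levels without a special base case.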
@@ -362,7 +447,7 @@ impl WaveformGpuResources {
             bytemuck::cast_slice(&mip0_data),
             wgpu::TexelCopyBufferLayout {
                 offset: 0,
-                bytes_per_row: Some(TEX_WIDTH * 8), // 4 channels × 2 bytes (f16)
+                bytes_per_row: Some(TEX_WIDTH * 8),
                 rows_per_image: Some(tex_height),
             },
             wgpu::Extent3d {
@@ -389,7 +474,7 @@ impl WaveformGpuResources {
             ..Default::default()
         });

-        // Create uniform buffer placeholder (will be filled per-draw in paint)
+        // Create uniform buffer placeholder
         let uniform_buffer = device.create_buffer(&wgpu::BufferDescriptor {
             label: Some(&format!("waveform_{}_seg{}_uniforms", pool_index, seg)),
             size: std::mem::size_of::<WaveformParams>() as u64,
@@ -432,6 +517,7 @@ impl WaveformGpuResources {
             uniform_buffers,
             frames_per_segment,
             total_frames: total_frames as u64,
+            tex_height: (total_frames as u32 + TEX_WIDTH - 1) / TEX_WIDTH,
             sample_rate,
             channels,
         },
@@ -612,12 +698,6 @@ fn compute_mip_count(width: u32, height: u32) -> u32 {
     (max_dim as f32).log2().floor() as u32 + 1
 }

-/// Calculate how many texture segments are needed for a given frame count
-pub fn segment_count_for_frames(total_frames: u64, max_texture_height: u32) -> u32 {
-    let max_frames_per_segment = TEX_WIDTH as u64 * max_texture_height as u64;
-    ((total_frames + max_frames_per_segment - 1) / max_frames_per_segment) as u32
-}
-
 /// Get the fixed texture width used for all waveform textures
 pub fn tex_width() -> u32 {
     TEX_WIDTH
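The incremental path in the first hunk rewrites only the texture rows touched by newly appended frames, using ceiling division on the fixed texture width. That row arithmetic can be sketched in isolation; the `TEX_WIDTH` value here is an assumption for illustration, not necessarily the project's constant:

```rust
// Standalone sketch of the incremental-upload row math from the waveform diff above.
// TEX_WIDTH = 1024 is an illustrative assumption.
const TEX_WIDTH: u32 = 1024;

/// Rows [start, end) of a TEX_WIDTH-wide texture that must be rewritten
/// when the frame count grows from `old_frames` to `new_frames`.
fn rows_to_rewrite(old_frames: u32, new_frames: u32) -> (u32, u32) {
    let start_row = old_frames / TEX_WIDTH; // row containing the first new frame
    let end_row = (new_frames + TEX_WIDTH - 1) / TEX_WIDTH; // ceiling division
    (start_row, end_row)
}

fn main() {
    // Growing from 2048 to 2500 frames touches only row 2 (frames 2048..3071).
    assert_eq!(rows_to_rewrite(2048, 2500), (2, 3));
    // A partially filled last row is rewritten again when more frames arrive.
    assert_eq!(rows_to_rewrite(2500, 2600), (2, 3));
}
```

Re-uploading whole rows (rather than individual texels) keeps `write_texture` calls rectangular and the `bytes_per_row` layout constant, at the cost of rewriting the partially filled last row on every append.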
@@ -72,6 +72,7 @@ fn key_to_char(key: egui::Key, shift: bool) -> Option<char> {
 }

 /// Response from the IME text field widget
+#[allow(dead_code)] // Standard widget response fields; callers will use as features expand
 pub struct ImeTextFieldResponse {
     /// The egui response for the text field area
     pub response: egui::Response,