Merge branch 'main' into rust-ui

This commit is contained in:
Skyler Lehmkuhl 2025-11-28 07:00:22 -05:00
commit 5761d48f1b
48 changed files with 5136 additions and 2326 deletions

View File

@@ -1,3 +1,17 @@
+# 0.8.1-alpha:
+Changes:
+- Rewrite timeline UI
+- Add start screen
+- Move audio engine to backend
+- Add node editor for audio synthesis
+- Add factory presets for instruments
+- Add MIDI input support
+- Add BPM and time signature handling
+- Add metronome
+- Add preset layouts for different tasks
+- Add video import
+- Add animation curves for object properties
 # 0.7.14-alpha:
 Changes:
 - Moving frames can now be undone

README.md (142 lines changed)
View File

@@ -1,133 +1,51 @@
 # Lightningbeam
-An open-source vector animation tool built with Tauri. A spiritual successor to Macromedia Flash 8 / Adobe Animate.
+A free and open-source 2D multimedia editor combining vector animation, audio production, and video editing in a single application.
-[![Version](https://img.shields.io/badge/version-0.7.14--alpha-orange)](https://github.com/skykooler/Lightningbeam/releases)
+## Screenshots
-## Overview
+![Animation View](screenshots/animation.png)
-Lightningbeam is a cross-platform vector animation application for creating keyframe-based animations and interactive content. Originally started in 2010 as an open-source alternative to Adobe Flash, the project has been rewritten using modern web technologies (JavaScript/Canvas) with a Tauri-based native desktop application wrapper.
+![Music Editing View](screenshots/music.png)
-## Current Status
+![Video Editing View](screenshots/video.png)
-**⚠️ Alpha Software**: Lightningbeam is in active development and not yet feature-complete. The codebase is currently undergoing significant refactoring, particularly the timeline system which is being migrated from frame-based to curve-based animation.
+## Current Features
-Current branch `new_timeline` implements a major timeline redesign inspired by GarageBand, featuring hierarchical tracks and animation curve visualization.
+**Vector Animation**
+- Draw and animate vector shapes with keyframe-based timeline
+- Non-destructive editing workflow
-## Features
+**Audio Production**
+- Multi-track audio recording
+- MIDI sequencing with synthesized and sampled instruments
+- Integrated DAW functionality
-Current functionality includes:
+**Video Editing**
+- Basic video timeline and editing (early stage)
+- FFmpeg-based video decoding
-- **Vector Drawing Tools**: Pen, brush, line, rectangle, ellipse, polygon tools
-- **Keyframe Animation**: Timeline-based animation with interpolation
-- **Shape Tweening**: Morph between different vector shapes
-- **Motion Tweening**: Smooth object movement with curve-based interpolation
-- **Layer System**: Multiple layers with visibility controls
-- **Hierarchical Objects**: Group objects and edit nested timelines
-- **Audio Support**: Import MP3 audio files
-- **Video Export**: Export animations as MP4 or WebM
-- **Transform Tools**: Move, rotate, and scale objects
-- **Color Tools**: Color picker, paint bucket with flood fill
-- **Undo/Redo**: Full history management
-- **Copy/Paste**: Duplicate objects and keyframes
+## Technical Stack
-## Installation
+- **Frontend:** Vanilla JavaScript
+- **Backend:** Rust (Tauri framework)
+- **Audio:** cpal + dasp for audio processing
+- **Video:** FFmpeg for encode/decode
-### Pre-built Releases
+## Project Status
-Download the latest release for your platform from the [Releases page](https://github.com/skykooler/Lightningbeam/releases).
+Lightningbeam is under active development. Current focus is on core functionality and architecture. Full project export is not yet fully implemented.
-Supported platforms:
-- Linux (AppImage, .deb, .rpm)
-- macOS
-- Windows
-- Web (limited functionality)
+### Known Architectural Challenge
-### Building from Source
+The current Tauri implementation hits IPC bandwidth limitations when streaming decoded video frames from Rust to JavaScript. Tauri's IPC layer has significant serialization overhead (on the order of a few MB/s), which is insufficient for real-time high-resolution video rendering.
-**Prerequisites:**
-- [pnpm](https://pnpm.io/) package manager
-- [Rust](https://rustup.rs/) toolchain (installed automatically by Tauri)
-- Platform-specific dependencies for Tauri (see [Tauri Prerequisites](https://tauri.app/v1/guides/getting-started/prerequisites))
+I'm currently exploring a full Rust rewrite using wgpu/egui to eliminate the IPC bottleneck and handle rendering entirely in native code.
-**Build steps:**
+## Project History
-```bash
-# Clone the repository
-git clone https://github.com/skykooler/Lightningbeam.git
-cd Lightningbeam
+Lightningbeam evolved from earlier multimedia editing projects I've worked on since 2010, including the FreeJam DAW. The current JavaScript/Tauri iteration began in November 2023.
-# Install dependencies
-pnpm install
+## Goals
-# Run in development mode
-pnpm tauri dev
-# Build for production
-pnpm tauri build
-```
-**Note for Linux users:** `pnpm tauri dev` works on any distribution, but `pnpm tauri build` currently only works on Ubuntu due to limitations in Tauri's AppImage generation. If you're on a non-Ubuntu distro, you can build in an Ubuntu container/VM or use the development mode instead.
-## Quick Start
-1. Launch Lightningbeam
-2. Create a new file (File → New)
-3. Select a drawing tool from the toolbar
-4. Draw shapes on the canvas
-5. Create keyframes on the timeline to animate objects
-6. Use motion or shape tweens to interpolate between keyframes
-7. Export your animation (File → Export → Video)
-## File Format
-Lightningbeam uses the `.beam` file extension. Files are stored in JSON format and contain all project data including vector shapes, keyframes, layers, and animation curves.
-**Note**: The file format specification is not yet documented and may change during development.
-## Known Limitations
-### Platform-Specific Issues
-- **Linux**: Pinch-to-zoom gestures zoom the entire window instead of individual canvases. This is a [known Tauri/GTK WebView limitation](https://github.com/tauri-apps/tauri/discussions/3843) with no current workaround.
-- **macOS**: Limited testing; some platform-specific bugs may exist.
-- **Windows**: Minimal testing; application has been confirmed to run but may have undiscovered issues.
-### Web Version Limitations
-The web version has several limitations compared to desktop:
-- Restricted file system access
-- Keyboard shortcut conflicts with browser
-- Higher audio latency
-- No native file association
-- Memory limitations with video export
-### General Limitations
-- The current timeline system is being replaced; legacy frame-based features may be unstable
-- Many features and optimizations are still in development
-- Performance benchmarking has not been completed
-- File format may change between versions
-## Contributing
-Contributions are currently limited while the codebase undergoes restructuring. Once the timeline refactor is complete and the code is better organized, the project will be more open to external contributions.
-If you encounter bugs or have feature requests, please open an issue on GitHub.
-## Credits
-Lightningbeam is built with:
-- [Tauri](https://tauri.app/) - Desktop application framework
-- [FFmpeg](https://ffmpeg.org/) - Video encoding/decoding
-- Various JavaScript libraries for drawing, compression, and utilities
-## License
-**License not yet determined.** The author is considering the MIT License for maximum simplicity and adoption. Contributors should await license clarification before submitting code.
----
-**Repository**: https://github.com/skykooler/Lightningbeam
-**Version**: 0.7.14-alpha
-**Status**: Active Development
+Create a comprehensive FOSS alternative for 2D-focused multimedia work, integrating animation, audio, and video editing in a unified workflow.
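The bandwidth claim in the "Known Architectural Challenge" section above is easy to sanity-check with back-of-envelope arithmetic. The frame-size math is exact; the ~3 MB/s throughput figure below is only an illustrative stand-in for the paragraph's "few MB/s", not a measurement:

```rust
// Sketch: why "a few MB/s" of IPC cannot carry real-time 1080p video.
// The ~3 MB/s channel figure is an assumption for illustration only.

/// Bytes for one uncompressed RGBA frame (4 bytes per pixel).
fn frame_bytes(width: usize, height: usize) -> usize {
    width * height * 4
}

/// Upper bound on frame rate achievable over a channel of the given bandwidth.
fn max_fps(ipc_bytes_per_sec: f64, bytes_per_frame: usize) -> f64 {
    ipc_bytes_per_sec / bytes_per_frame as f64
}

fn main() {
    let per_frame = frame_bytes(1920, 1080); // 8_294_400 bytes, roughly 8.3 MB
    let fps = max_fps(3.0 * 1024.0 * 1024.0, per_frame);
    // Far below the ~30 fps needed for playback, hence the wgpu/egui exploration.
    println!("{} bytes per frame, ~{:.2} fps over IPC", per_frame, fps);
}
```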

View File

@@ -457,6 +457,7 @@ dependencies = [
  "dasp_rms",
  "dasp_sample",
  "dasp_signal",
+ "hound",
  "midir",
  "midly",
  "pathdiff",
@@ -578,6 +579,12 @@ version = "0.5.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
+[[package]]
+name = "hound"
+version = "3.5.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "62adaabb884c94955b19907d60019f4e145d091c75345379e70d1ee696f7854f"
 [[package]]
 name = "indexmap"
 version = "1.9.3"

View File

@@ -16,6 +16,11 @@ rand = "0.8"
 base64 = "0.22"
 pathdiff = "0.2"
+# Audio export
+hound = "3.5"
+# TODO: Add MP3 support with a different crate
+# mp3lame-encoder API is too complex, need to find a better option
 # Node-based audio graph dependencies
 dasp_graph = "0.11"
 dasp_signal = "0.11"

View File

@@ -1,21 +1,68 @@
-/// Clip ID type
-pub type ClipId = u32;
+/// Audio clip instance ID type
+pub type AudioClipInstanceId = u32;
-/// Audio clip that references data in the AudioPool
+/// Type alias for backwards compatibility
+pub type ClipId = AudioClipInstanceId;
+/// Audio clip instance that references content in the AudioClipPool
+///
+/// This represents a placed instance of audio content on the timeline.
+/// The actual audio data is stored in the AudioClipPool and referenced by `audio_pool_index`.
+///
+/// ## Timing Model
+/// - `internal_start` / `internal_end`: Define the region of the source audio to play (trimming)
+/// - `external_start` / `external_duration`: Define where the clip appears on the timeline and how long
+///
+/// ## Looping
+/// If `external_duration` is greater than `internal_end - internal_start`,
+/// the clip will seamlessly loop back to `internal_start` when it reaches `internal_end`.
 #[derive(Debug, Clone)]
-pub struct Clip {
-    pub id: ClipId,
+pub struct AudioClipInstance {
+    pub id: AudioClipInstanceId,
     pub audio_pool_index: usize,
-    pub start_time: f64, // Position on timeline in seconds
-    pub duration: f64,   // Clip duration in seconds
-    pub offset: f64,     // Offset into audio file in seconds
-    pub gain: f32,       // Clip-level gain
+    /// Start position within the audio content (seconds)
+    pub internal_start: f64,
+    /// End position within the audio content (seconds)
+    pub internal_end: f64,
+    /// Start position on the timeline (seconds)
+    pub external_start: f64,
+    /// Duration on the timeline (seconds) - can be longer than internal duration for looping
+    pub external_duration: f64,
+    /// Clip-level gain
+    pub gain: f32,
 }
-impl Clip {
-    /// Create a new clip
+/// Type alias for backwards compatibility
+pub type Clip = AudioClipInstance;
+impl AudioClipInstance {
+    /// Create a new audio clip instance
     pub fn new(
-        id: ClipId,
+        id: AudioClipInstanceId,
         audio_pool_index: usize,
+        internal_start: f64,
+        internal_end: f64,
+        external_start: f64,
+        external_duration: f64,
     ) -> Self {
+        Self {
+            id,
+            audio_pool_index,
+            internal_start,
+            internal_end,
+            external_start,
+            external_duration,
+            gain: 1.0,
+        }
+    }
+    /// Create a clip instance from legacy parameters (for backwards compatibility)
+    /// Maps old start_time/duration/offset to the new timing model
+    pub fn from_legacy(
+        id: AudioClipInstanceId,
+        audio_pool_index: usize,
         start_time: f64,
         duration: f64,
@@ -24,22 +71,64 @@ impl Clip {
         Self {
             id,
             audio_pool_index,
-            start_time,
-            duration,
-            offset,
+            internal_start: offset,
+            internal_end: offset + duration,
+            external_start: start_time,
+            external_duration: duration,
             gain: 1.0,
         }
     }
-    /// Check if this clip is active at a given timeline position
+    /// Check if this clip instance is active at a given timeline position
     pub fn is_active_at(&self, time_seconds: f64) -> bool {
-        let clip_end = self.start_time + self.duration;
-        time_seconds >= self.start_time && time_seconds < clip_end
+        time_seconds >= self.external_start && time_seconds < self.external_end()
     }
-    /// Get the end time of this clip on the timeline
+    /// Get the end time of this clip instance on the timeline
+    pub fn external_end(&self) -> f64 {
+        self.external_start + self.external_duration
+    }
+    /// Get the end time of this clip instance on the timeline
+    /// (Alias for external_end(), for backwards compatibility)
     pub fn end_time(&self) -> f64 {
-        self.start_time + self.duration
+        self.external_end()
     }
+    /// Get the start time on the timeline
+    /// (Alias for external_start, for backwards compatibility)
+    pub fn start_time(&self) -> f64 {
+        self.external_start
+    }
+    /// Get the internal (content) duration
+    pub fn internal_duration(&self) -> f64 {
+        self.internal_end - self.internal_start
+    }
+    /// Check if this clip instance loops
+    pub fn is_looping(&self) -> bool {
+        self.external_duration > self.internal_duration()
+    }
+    /// Get the position within the audio content for a given timeline position.
+    /// Returns None if the timeline position is outside this clip instance.
+    /// Handles looping automatically.
+    pub fn get_content_position(&self, timeline_pos: f64) -> Option<f64> {
+        if timeline_pos < self.external_start || timeline_pos >= self.external_end() {
+            return None;
+        }
+        let relative_pos = timeline_pos - self.external_start;
+        let internal_duration = self.internal_duration();
+        if internal_duration <= 0.0 {
+            return None;
+        }
+        // Wrap around for looping
+        let content_offset = relative_pos % internal_duration;
+        Some(self.internal_start + content_offset)
+    }
     /// Set clip gain
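The timing model documented above can be exercised in isolation. This is a standalone sketch that mirrors the wrap-around arithmetic of `get_content_position` rather than calling the crate itself:

```rust
// Standalone sketch of the clip timing model: internal_* trims the source
// content, external_* places it on the timeline, and an external_duration
// longer than the internal span makes the content loop.
struct ClipTiming {
    internal_start: f64,
    internal_end: f64,
    external_start: f64,
    external_duration: f64,
}

impl ClipTiming {
    /// Map a timeline position to a position within the source content.
    fn content_position(&self, timeline_pos: f64) -> Option<f64> {
        let external_end = self.external_start + self.external_duration;
        if timeline_pos < self.external_start || timeline_pos >= external_end {
            return None;
        }
        let internal_duration = self.internal_end - self.internal_start;
        if internal_duration <= 0.0 {
            return None;
        }
        // Wrap into the trimmed region so the clip loops seamlessly.
        let relative = timeline_pos - self.external_start;
        Some(self.internal_start + relative % internal_duration)
    }
}

fn main() {
    // A 2 s slice of source audio (1.0..3.0) placed at t = 10 s, looped for 5 s.
    let clip = ClipTiming {
        internal_start: 1.0,
        internal_end: 3.0,
        external_start: 10.0,
        external_duration: 5.0,
    };
    assert_eq!(clip.content_position(10.5), Some(1.5)); // first pass
    assert_eq!(clip.content_position(12.5), Some(1.5)); // wrapped after one loop
    assert_eq!(clip.content_position(15.0), None);      // past external end
}
```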

View File

@@ -1,8 +1,9 @@
 use crate::audio::buffer_pool::BufferPool;
-use crate::audio::clip::ClipId;
-use crate::audio::midi::{MidiClip, MidiClipId, MidiEvent};
+use crate::audio::clip::{AudioClipInstance, ClipId};
+use crate::audio::metronome::Metronome;
+use crate::audio::midi::{MidiClip, MidiClipId, MidiClipInstance, MidiEvent};
 use crate::audio::node_graph::{nodes::*, AudioGraph};
-use crate::audio::pool::AudioPool;
+use crate::audio::pool::AudioClipPool;
 use crate::audio::project::Project;
 use crate::audio::recording::{MidiRecordingState, RecordingState};
 use crate::audio::track::{Track, TrackId, TrackNode};
@@ -15,7 +16,7 @@ use std::sync::Arc;
 /// Audio engine for Phase 6: hierarchical tracks with groups
 pub struct Engine {
     project: Project,
-    audio_pool: AudioPool,
+    audio_pool: AudioClipPool,
     buffer_pool: BufferPool,
     playhead: u64, // Playhead position in samples
     sample_rate: u32,
@@ -55,6 +56,9 @@ pub struct Engine {
     // MIDI input manager for external MIDI devices
     midi_input_manager: Option<MidiInputManager>,
+    // Metronome for click track
+    metronome: Metronome,
 }
impl Engine {
@@ -74,7 +78,7 @@ impl Engine {
         Self {
             project: Project::new(sample_rate),
-            audio_pool: AudioPool::new(),
+            audio_pool: AudioClipPool::new(),
             buffer_pool: BufferPool::new(8, buffer_size), // 8 buffers should handle deep nesting
             playhead: 0,
             sample_rate,
@@ -96,6 +100,7 @@ impl Engine {
             recording_progress_counter: 0,
             midi_recording_state: None,
             midi_input_manager: None,
+            metronome: Metronome::new(sample_rate),
         }
}
@@ -159,12 +164,12 @@ impl Engine {
     }
     /// Get mutable reference to audio pool
-    pub fn audio_pool_mut(&mut self) -> &mut AudioPool {
+    pub fn audio_pool_mut(&mut self) -> &mut AudioClipPool {
         &mut self.audio_pool
     }
     /// Get reference to audio pool
-    pub fn audio_pool(&self) -> &AudioPool {
+    pub fn audio_pool(&self) -> &AudioClipPool {
         &self.audio_pool
     }
@@ -235,9 +240,15 @@ impl Engine {
         let playhead_seconds = self.playhead as f64 / self.sample_rate as f64;
         // Render the entire project hierarchy into the mix buffer
+        // Note: We need to use a raw pointer to avoid borrow checker issues
+        // The midi_clip_pool is part of project, so we extract a reference before the mutable borrow
+        let midi_pool_ptr = &self.project.midi_clip_pool as *const _;
+        // SAFETY: The midi_clip_pool is not mutated during render, only read
+        let midi_pool_ref = unsafe { &*midi_pool_ptr };
         self.project.render(
             &mut self.mix_buffer,
             &self.audio_pool,
+            midi_pool_ref,
             &mut self.buffer_pool,
             playhead_seconds,
             self.sample_rate,
@@ -247,6 +258,15 @@
         // Copy mix to output
         output.copy_from_slice(&self.mix_buffer);
+        // Mix in metronome clicks
+        self.metronome.process(
+            output,
+            self.playhead,
+            self.playing,
+            self.sample_rate,
+            self.channels,
+        );
         // Update playhead (convert total samples to frames)
         self.playhead += (output.len() / self.channels as usize) as u64;
@@ -300,10 +320,12 @@
             let clip_id = recording.clip_id;
             let track_id = recording.track_id;
-            // Update clip duration in project
+            // Update clip duration in project as recording progresses
             if let Some(crate::audio::track::TrackNode::Audio(track)) = self.project.get_track_mut(track_id) {
                 if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
-                    clip.duration = duration;
+                    // Update both internal_end and external_duration as recording progresses
+                    clip.internal_end = clip.internal_start + duration;
+                    clip.external_duration = duration;
                 }
             }
@@ -370,33 +392,58 @@
                 }
             }
             Command::MoveClip(track_id, clip_id, new_start_time) => {
+                // Moving just changes external_start, external_duration stays the same
                 match self.project.get_track_mut(track_id) {
                     Some(crate::audio::track::TrackNode::Audio(track)) => {
                         if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
-                            clip.start_time = new_start_time;
+                            clip.external_start = new_start_time;
                         }
                     }
                     Some(crate::audio::track::TrackNode::Midi(track)) => {
-                        if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
-                            clip.start_time = new_start_time;
+                        // Note: clip_id here is the pool clip ID, not instance ID
+                        if let Some(instance) = track.clip_instances.iter_mut().find(|c| c.clip_id == clip_id) {
+                            instance.external_start = new_start_time;
                         }
                     }
                     _ => {}
                 }
             }
-            Command::TrimClip(track_id, clip_id, new_start_time, new_duration, new_offset) => {
+            Command::TrimClip(track_id, clip_id, new_internal_start, new_internal_end) => {
+                // Trim changes which portion of the source content is used
+                // Also updates external_duration to match the internal duration (no looping after trim)
                 match self.project.get_track_mut(track_id) {
                     Some(crate::audio::track::TrackNode::Audio(track)) => {
                         if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
-                            clip.start_time = new_start_time;
-                            clip.duration = new_duration;
-                            clip.offset = new_offset;
+                            clip.internal_start = new_internal_start;
+                            clip.internal_end = new_internal_end;
+                            // By default, trimming sets external_duration to match the internal duration
+                            clip.external_duration = new_internal_end - new_internal_start;
                         }
                     }
                     Some(crate::audio::track::TrackNode::Midi(track)) => {
+                        // Note: clip_id here is the pool clip ID, not instance ID
+                        if let Some(instance) = track.clip_instances.iter_mut().find(|c| c.clip_id == clip_id) {
+                            instance.internal_start = new_internal_start;
+                            instance.internal_end = new_internal_end;
+                            // By default, trimming sets external_duration to match the internal duration
+                            instance.external_duration = new_internal_end - new_internal_start;
+                        }
                     }
                     _ => {}
                 }
             }
+            Command::ExtendClip(track_id, clip_id, new_external_duration) => {
+                // Extend changes the external duration (enables looping if > internal duration)
+                match self.project.get_track_mut(track_id) {
+                    Some(crate::audio::track::TrackNode::Audio(track)) => {
+                        if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
-                            clip.start_time = new_start_time;
-                            clip.duration = new_duration;
+                            clip.external_duration = new_external_duration;
+                        }
+                    }
+                    Some(crate::audio::track::TrackNode::Midi(track)) => {
+                        // Note: clip_id here is the pool clip ID, not instance ID
+                        if let Some(instance) = track.clip_instances.iter_mut().find(|c| c.clip_id == clip_id) {
+                            instance.external_duration = new_external_duration;
+                        }
+                    }
+                    _ => {}
@@ -461,10 +508,10 @@ impl Engine {
                     pool_index, pool_size);
             }
-            // Create a new clip with unique ID
+            // Create a new clip instance with unique ID using legacy parameters
             let clip_id = self.next_clip_id;
             self.next_clip_id += 1;
-            let clip = crate::audio::clip::Clip::new(
+            let clip = AudioClipInstance::from_legacy(
                 clip_id,
                 pool_index,
                 start_time,
@@ -490,37 +537,57 @@
             Command::CreateMidiClip(track_id, start_time, duration) => {
                 // Get the next MIDI clip ID from the atomic counter
                 let clip_id = self.next_midi_clip_id_atomic.fetch_add(1, Ordering::Relaxed);
-                let clip = MidiClip::new(clip_id, start_time, duration);
-                let _ = self.project.add_midi_clip(track_id, clip);
-                // Notify UI about the new clip with its ID
+                // Create clip content in the pool
+                let clip = MidiClip::empty(clip_id, duration, format!("MIDI Clip {}", clip_id));
+                self.project.midi_clip_pool.add_existing_clip(clip);
+                // Create an instance for this clip on the track
+                let instance_id = self.project.next_midi_clip_instance_id();
+                let instance = MidiClipInstance::from_full_clip(instance_id, clip_id, duration, start_time);
+                if let Some(crate::audio::track::TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
+                    track.clip_instances.push(instance);
+                }
+                // Notify UI about the new clip with its ID (using clip_id for now)
                 let _ = self.event_tx.push(AudioEvent::ClipAdded(track_id, clip_id));
             }
             Command::AddMidiNote(track_id, clip_id, time_offset, note, velocity, duration) => {
-                // Add a MIDI note event to the specified clip
-                if let Some(crate::audio::track::TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
-                    if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
+                // Add a MIDI note event to the specified clip in the pool
+                // Note: clip_id here refers to the clip in the pool, not the instance
+                if let Some(clip) = self.project.midi_clip_pool.get_clip_mut(clip_id) {
                     // Timestamp is now in seconds (sample-rate independent)
                     let note_on = MidiEvent::note_on(time_offset, 0, note, velocity);
-                        clip.events.push(note_on);
+                    clip.add_event(note_on);
                     // Add note off event
                     let note_off_time = time_offset + duration;
                     let note_off = MidiEvent::note_off(note_off_time, 0, note, 64);
-                        clip.events.push(note_off);
-                        // Sort events by timestamp (using partial_cmp for f64)
-                        clip.events.sort_by(|a, b| a.timestamp.partial_cmp(&b.timestamp).unwrap());
-                    }
-                }
-            }
-            Command::AddLoadedMidiClip(track_id, clip) => {
-                // Add a pre-loaded MIDI clip to the track
-                let _ = self.project.add_midi_clip(track_id, clip);
-            }
-            Command::UpdateMidiClipNotes(track_id, clip_id, notes) => {
-                // Update all notes in a MIDI clip
+                    clip.add_event(note_off);
+                } else {
+                    // Try legacy behavior: look for instance on track and find its clip
+                    if let Some(crate::audio::track::TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
-                        if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
+                        if let Some(instance) = track.clip_instances.iter().find(|c| c.clip_id == clip_id) {
+                            let actual_clip_id = instance.clip_id;
+                            if let Some(clip) = self.project.midi_clip_pool.get_clip_mut(actual_clip_id) {
+                                let note_on = MidiEvent::note_on(time_offset, 0, note, velocity);
+                                clip.add_event(note_on);
+                                let note_off_time = time_offset + duration;
+                                let note_off = MidiEvent::note_off(note_off_time, 0, note, 64);
+                                clip.add_event(note_off);
+                            }
+                        }
+                    }
+                }
+            }
+            Command::AddLoadedMidiClip(track_id, clip, start_time) => {
+                // Add a pre-loaded MIDI clip to the track with the given start time
+                let _ = self.project.add_midi_clip_at(track_id, clip, start_time);
+            }
+            Command::UpdateMidiClipNotes(_track_id, clip_id, notes) => {
+                // Update all notes in a MIDI clip (directly in the pool)
+                if let Some(clip) = self.project.midi_clip_pool.get_clip_mut(clip_id) {
                     // Clear existing events
                     clip.events.clear();
@@ -540,7 +607,6 @@ impl Engine {
                     clip.events.sort_by(|a, b| a.timestamp.partial_cmp(&b.timestamp).unwrap());
                 }
             }
-            }
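The `CreateMidiClip` handling above separates clip content (stored once in `midi_clip_pool`) from placed instances on tracks. This is a minimal standalone sketch of that pool/instance indirection, with illustrative types rather than the crate's real API:

```rust
use std::collections::HashMap;

// Sketch of the pool/instance split: MIDI content lives once in a central
// pool; tracks hold lightweight instances that reference it by id.
struct PoolClip {
    notes: Vec<u8>, // illustrative stand-in for the real event list
}

struct ClipInstance {
    clip_id: u32,        // id of the shared content in the pool
    external_start: f64, // where this placement sits on the timeline
}

struct Pool {
    clips: HashMap<u32, PoolClip>,
}

impl Pool {
    fn get(&self, id: u32) -> Option<&PoolClip> {
        self.clips.get(&id)
    }
}

fn main() {
    let mut pool = Pool { clips: HashMap::new() };
    pool.clips.insert(7, PoolClip { notes: vec![60, 64, 67] });
    // Two placements of the same content at different timeline positions:
    let a = ClipInstance { clip_id: 7, external_start: 0.0 };
    let b = ClipInstance { clip_id: 7, external_start: 4.0 };
    // Editing the pooled clip is seen through every instance.
    pool.clips.get_mut(&7).unwrap().notes.push(72);
    assert_eq!(pool.get(a.clip_id).unwrap().notes.len(), 4);
    assert_eq!(pool.get(b.clip_id).unwrap().notes.len(), 4);
    let _ = (a.external_start, b.external_start);
}
```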
Command::RequestBufferPoolStats => {
// Send buffer pool statistics back to UI
let stats = self.buffer_pool.stats();
@@ -714,7 +780,7 @@ impl Engine {
         self.project = Project::new(self.sample_rate);
         // Clear audio pool
-        self.audio_pool = AudioPool::new();
+        self.audio_pool = AudioClipPool::new();
         // Reset buffer pool (recreate with same settings)
         let buffer_size = 512 * self.channels as usize;
@@ -774,6 +840,10 @@
                 }
             }
+            Command::SetMetronomeEnabled(enabled) => {
+                self.metronome.set_enabled(enabled);
+            }
             // Node graph commands
             Command::GraphAddNode(track_id, node_type, x, y) => {
                 eprintln!("[DEBUG] GraphAddNode received: track_id={}, node_type={}, x={}, y={}", track_id, node_type, x, y);
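As background for the metronome wiring above, here is a standalone sketch of how a click track can locate beat boundaries from a BPM value and the sample playhead. The BPM, sample rate, and buffer size are illustrative assumptions; this is not the `Metronome` implementation:

```rust
// Sketch: finding beat boundaries inside an audio buffer.
// Convert BPM to samples-per-beat, then test each frame for a beat crossing.

fn samples_per_beat(sample_rate: u32, bpm: f64) -> f64 {
    sample_rate as f64 * 60.0 / bpm
}

/// Frame offsets within a buffer starting at `playhead` that land on a beat.
fn beat_offsets(playhead: u64, frames: u64, sample_rate: u32, bpm: f64) -> Vec<u64> {
    let spb = samples_per_beat(sample_rate, bpm);
    (0..frames)
        .filter(|i| {
            let pos = (playhead + i) as f64;
            // A beat starts wherever floor(pos / spb) increments, plus sample 0.
            (pos / spb).floor() != ((pos - 1.0) / spb).floor() || playhead + i == 0
        })
        .collect()
}

fn main() {
    // 120 BPM at 48 kHz: a beat every 24_000 samples.
    let offsets = beat_offsets(23_900, 256, 48_000, 120.0);
    // The buffer spans samples 23_900..24_156, so the beat at 24_000
    // falls at offset 100; a click would be mixed in starting there.
    assert_eq!(offsets, vec![100]);
}
```

In a real engine the click itself would be a short synthesized blip mixed into the output buffer at each returned offset.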
@@ -1096,12 +1166,21 @@
                 // Write to file
                 if let Ok(json) = preset.to_json() {
-                    if let Err(e) = std::fs::write(&preset_path, json) {
+                    match std::fs::write(&preset_path, json) {
+                        Ok(_) => {
+                            // Emit success event with path
+                            let _ = self.event_tx.push(AudioEvent::GraphPresetSaved(
+                                track_id,
+                                preset_path.clone()
+                            ));
+                        }
+                        Err(e) => {
                             let _ = self.event_tx.push(AudioEvent::GraphConnectionError(
                                 track_id,
                                 format!("Failed to save preset: {}", e)
                             ));
+                        }
+                    }
                 } else {
                     let _ = self.event_tx.push(AudioEvent::GraphConnectionError(
                         track_id,
@@ -1212,7 +1291,7 @@
                 }
             }
-            Command::MultiSamplerAddLayer(track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max) => {
+            Command::MultiSamplerAddLayer(track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode) => {
                 use crate::audio::node_graph::nodes::MultiSamplerNode;
                 if let Some(TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
@@ -1226,7 +1305,7 @@
                     unsafe {
                         let multi_sampler_node = &mut *node_ptr;
-                        if let Err(e) = multi_sampler_node.load_layer_from_file(&file_path, key_min, key_max, root_key, velocity_min, velocity_max) {
+                        if let Err(e) = multi_sampler_node.load_layer_from_file(&file_path, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode) {
                             eprintln!("Failed to add sample layer: {}", e);
                         }
                     }
@@ -1234,7 +1313,7 @@
                 }
             }
-            Command::MultiSamplerUpdateLayer(track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max) => {
+            Command::MultiSamplerUpdateLayer(track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode) => {
                 use crate::audio::node_graph::nodes::MultiSamplerNode;
                 if let Some(TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
@@ -1248,7 +1327,7 @@
                     unsafe {
                         let multi_sampler_node = &mut *node_ptr;
-                        if let Err(e) = multi_sampler_node.update_layer(layer_index, key_min, key_max, root_key, velocity_min, velocity_max) {
+                        if let Err(e) = multi_sampler_node.update_layer(layer_index, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode) {
                             eprintln!("Failed to update sample layer: {}", e);
                         }
                     }
@@ -1412,19 +1491,16 @@
             ))),
                 }
             }
-            Query::GetMidiClip(track_id, clip_id) => {
-                if let Some(TrackNode::Midi(track)) = self.project.get_track(track_id) {
-                    if let Some(clip) = track.clips.iter().find(|c| c.id == clip_id) {
+            Query::GetMidiClip(_track_id, clip_id) => {
+                // Get MIDI clip data from the pool
+                if let Some(clip) = self.project.midi_clip_pool.get_clip(clip_id) {
                     use crate::command::MidiClipData;
                     QueryResponse::MidiClipData(Ok(MidiClipData {
                         duration: clip.duration,
                         events: clip.events.clone(),
                     }))
                 } else {
-                        QueryResponse::MidiClipData(Err(format!("Clip {} not found in track {}", clip_id, track_id)))
-                    }
-                } else {
-                    QueryResponse::MidiClipData(Err(format!("Track {} not found or is not a MIDI track", track_id)))
+                    QueryResponse::MidiClipData(Err(format!("Clip {} not found in pool", clip_id)))
                 }
             }
@@ -1592,6 +1668,17 @@
                 None => QueryResponse::PoolFileInfo(Err(format!("Pool index {} not found", pool_index))),
                 }
             }
+            Query::ExportAudio(settings, output_path) => {
+                // Perform export directly - this will block the audio thread, but that's okay
+                // since we're exporting and not playing back anyway
+                // Use raw pointer to get midi_pool reference before mutable borrow of project
+                let midi_pool_ptr: *const _ = &self.project.midi_clip_pool;
+                let midi_pool_ref = unsafe { &*midi_pool_ptr };
+                match crate::audio::export_audio(&mut self.project, &self.audio_pool, midi_pool_ref, &settings, &output_path) {
+                    Ok(()) => QueryResponse::AudioExported(Ok(())),
+                    Err(e) => QueryResponse::AudioExported(Err(e)),
+                }
+            }
};
// Send response back
@@ -1623,9 +1710,10 @@
         let clip = crate::audio::clip::Clip::new(
             clip_id,
             0, // Temporary pool index, will be updated on finalization
-            start_time,
-            0.0, // Duration starts at 0, will be updated during recording
-            0.0,
+            0.0,        // internal_start
+            0.0,        // internal_end - duration starts at 0, will be updated during recording
+            start_time, // external_start (timeline position)
+            start_time, // external_duration - will be updated during recording
         );
         // Add clip to track
@@ -1784,12 +1872,10 @@
         eprintln!("[MIDI_RECORDING] Stopping MIDI recording for clip_id={}, track_id={}, captured {} notes, duration={:.3}s",
             clip_id, track_id, note_count, recording_duration);
-        // Update the MIDI clip using the existing UpdateMidiClipNotes logic
-        eprintln!("[MIDI_RECORDING] Looking for track {} to update clip", track_id);
-        if let Some(crate::audio::track::TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
-            eprintln!("[MIDI_RECORDING] Found MIDI track, looking for clip {}", clip_id);
-            if let Some(clip) = track.clips.iter_mut().find(|c| c.id == clip_id) {
-                eprintln!("[MIDI_RECORDING] Found clip, clearing and adding {} notes", note_count);
+        // Update the MIDI clip in the pool (new model: clips are stored centrally in the pool)
+        eprintln!("[MIDI_RECORDING] Looking for clip {} in midi_clip_pool", clip_id);
+        if let Some(clip) = self.project.midi_clip_pool.get_clip_mut(clip_id) {
+            eprintln!("[MIDI_RECORDING] Found clip in pool, clearing and adding {} notes", note_count);
             // Clear existing events
             clip.events.clear();
@@ -1815,11 +1901,18 @@
             // Sort events by timestamp (using partial_cmp for f64)
             clip.events.sort_by(|a, b| a.timestamp.partial_cmp(&b.timestamp).unwrap());
             eprintln!("[MIDI_RECORDING] Updated clip {} with {} notes ({} events)", clip_id, note_count, clip.events.len());
-        } else {
-            eprintln!("[MIDI_RECORDING] ERROR: Clip {} not found on track!", clip_id);
+            // Also update the clip instance's internal_end and external_duration to match the recording duration
+            if let Some(crate::audio::track::TrackNode::Midi(track)) = self.project.get_track_mut(track_id) {
+                if let Some(instance) = track.clip_instances.iter_mut().find(|i| i.clip_id == clip_id) {
+                    instance.internal_end = recording_duration;
+                    instance.external_duration = recording_duration;
+                    eprintln!("[MIDI_RECORDING] Updated clip instance timing: internal_end={:.3}s, external_duration={:.3}s",
+                        instance.internal_end, instance.external_duration);
+                }
+            }
         } else {
-            eprintln!("[MIDI_RECORDING] ERROR: Track {} not found or not a MIDI track!", track_id);
+            eprintln!("[MIDI_RECORDING] ERROR: Clip {} not found in pool!", clip_id);
         }
         // Send event to UI
@@ -1906,13 +1999,20 @@ impl EngineController {
         let _ = self.command_tx.push(Command::SetTrackSolo(track_id, solo));
     }
-    /// Move a clip to a new timeline position
+    /// Move a clip to a new timeline position (changes external_start)
     pub fn move_clip(&mut self, track_id: TrackId, clip_id: ClipId, new_start_time: f64) {
         let _ = self.command_tx.push(Command::MoveClip(track_id, clip_id, new_start_time));
     }
-    pub fn trim_clip(&mut self, track_id: TrackId, clip_id: ClipId, new_start_time: f64, new_duration: f64, new_offset: f64) {
-        let _ = self.command_tx.push(Command::TrimClip(track_id, clip_id, new_start_time, new_duration, new_offset));
+    /// Trim a clip's internal boundaries (changes which portion of the source content is used)
+    /// This also resets external_duration to match the internal duration (disables looping)
+    pub fn trim_clip(&mut self, track_id: TrackId, clip_id: ClipId, new_internal_start: f64, new_internal_end: f64) {
+        let _ = self.command_tx.push(Command::TrimClip(track_id, clip_id, new_internal_start, new_internal_end));
     }
+    /// Extend or shrink a clip's external duration (enables looping if > internal duration)
+    pub fn extend_clip(&mut self, track_id: TrackId, clip_id: ClipId, new_external_duration: f64) {
+        let _ = self.command_tx.push(Command::ExtendClip(track_id, clip_id, new_external_duration));
+    }
/// Send a generic command to the audio thread
@@ -2036,9 +2136,9 @@
         let _ = self.command_tx.push(Command::AddMidiNote(track_id, clip_id, time_offset, note, velocity, duration));
     }
-    /// Add a pre-loaded MIDI clip to a track
-    pub fn add_loaded_midi_clip(&mut self, track_id: TrackId, clip: MidiClip) {
-        let _ = self.command_tx.push(Command::AddLoadedMidiClip(track_id, clip));
+    /// Add a pre-loaded MIDI clip to a track at the given timeline position
+    pub fn add_loaded_midi_clip(&mut self, track_id: TrackId, clip: MidiClip, start_time: f64) {
+        let _ = self.command_tx.push(Command::AddLoadedMidiClip(track_id, clip, start_time));
     }
/// Update all notes in a MIDI clip
@ -2165,6 +2265,11 @@ impl EngineController {
let _ = self.command_tx.push(Command::SetActiveMidiTrack(track_id));
}
/// Enable or disable the metronome click track
pub fn set_metronome_enabled(&mut self, enabled: bool) {
let _ = self.command_tx.push(Command::SetMetronomeEnabled(enabled));
}
// Node graph operations
/// Add a node to a track's instrument graph
@ -2231,13 +2336,13 @@ impl EngineController {
}
/// Add a sample layer to a MultiSampler node
pub fn multi_sampler_add_layer(&mut self, track_id: TrackId, node_id: u32, file_path: String, key_min: u8, key_max: u8, root_key: u8, velocity_min: u8, velocity_max: u8) {
let _ = self.command_tx.push(Command::MultiSamplerAddLayer(track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max));
pub fn multi_sampler_add_layer(&mut self, track_id: TrackId, node_id: u32, file_path: String, key_min: u8, key_max: u8, root_key: u8, velocity_min: u8, velocity_max: u8, loop_start: Option<usize>, loop_end: Option<usize>, loop_mode: crate::audio::node_graph::nodes::LoopMode) {
let _ = self.command_tx.push(Command::MultiSamplerAddLayer(track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode));
}
/// Update a MultiSampler layer's configuration
pub fn multi_sampler_update_layer(&mut self, track_id: TrackId, node_id: u32, layer_index: usize, key_min: u8, key_max: u8, root_key: u8, velocity_min: u8, velocity_max: u8) {
let _ = self.command_tx.push(Command::MultiSamplerUpdateLayer(track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max));
pub fn multi_sampler_update_layer(&mut self, track_id: TrackId, node_id: u32, layer_index: usize, key_min: u8, key_max: u8, root_key: u8, velocity_min: u8, velocity_max: u8, loop_start: Option<usize>, loop_end: Option<usize>, loop_mode: crate::audio::node_graph::nodes::LoopMode) {
let _ = self.command_tx.push(Command::MultiSamplerUpdateLayer(track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode));
}
/// Remove a layer from a MultiSampler node
@ -2524,4 +2629,25 @@ impl EngineController {
Err("Query timeout".to_string())
}
/// Export audio to a file
pub fn export_audio<P: AsRef<std::path::Path>>(&mut self, settings: &crate::audio::ExportSettings, output_path: P) -> Result<(), String> {
// Send export query
if let Err(_) = self.query_tx.push(Query::ExportAudio(settings.clone(), output_path.as_ref().to_path_buf())) {
return Err("Failed to send export query - queue full".to_string());
}
// Wait for response (with longer timeout since export can take a while)
let start = std::time::Instant::now();
let timeout = std::time::Duration::from_secs(300); // 5 minute timeout for export
while start.elapsed() < timeout {
if let Ok(QueryResponse::AudioExported(result)) = self.query_response_rx.pop() {
return result;
}
std::thread::sleep(std::time::Duration::from_millis(100));
}
Err("Export timeout".to_string())
}
}
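`export_audio` above blocks by polling the lock-free response queue until a deadline. The same poll-with-timeout pattern, sketched standalone with a `std::sync::mpsc` channel standing in for the engine's queues (the channel and payload types here are illustrative, not engine types):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Poll a response channel until a value arrives or the deadline passes,
// mirroring the try-pop-then-sleep loop in export_audio.
fn wait_for<T>(rx: &mpsc::Receiver<T>, timeout: Duration) -> Result<T, String> {
    let start = Instant::now();
    while start.elapsed() < timeout {
        if let Ok(v) = rx.try_recv() {
            return Ok(v);
        }
        thread::sleep(Duration::from_millis(10));
    }
    Err("timeout".to_string())
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        tx.send(42u32).unwrap(); // stands in for QueryResponse::AudioExported
    });
    assert_eq!(wait_for(&rx, Duration::from_secs(5)), Ok(42));
    println!("ok");
}
```

In the engine the polling loop is the pragmatic choice because the queues are real-time-safe ring buffers rather than blocking channels; with plain `mpsc` one would reach for `recv_timeout` instead.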

View File

@ -0,0 +1,266 @@
use super::buffer_pool::BufferPool;
use super::midi_pool::MidiClipPool;
use super::pool::AudioPool;
use super::project::Project;
use std::path::Path;
/// Supported export formats
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ExportFormat {
Wav,
Flac,
// TODO: Add MP3 support
}
impl ExportFormat {
/// Get the file extension for this format
pub fn extension(&self) -> &'static str {
match self {
ExportFormat::Wav => "wav",
ExportFormat::Flac => "flac",
}
}
}
/// Export settings for rendering audio
#[derive(Debug, Clone)]
pub struct ExportSettings {
/// Output format
pub format: ExportFormat,
/// Sample rate for export
pub sample_rate: u32,
/// Number of channels (1 = mono, 2 = stereo)
pub channels: u32,
/// Bit depth (16 or 24) - only for WAV/FLAC
pub bit_depth: u16,
/// MP3 bitrate in kbps (128, 192, 256, 320)
pub mp3_bitrate: u32,
/// Start time in seconds
pub start_time: f64,
/// End time in seconds
pub end_time: f64,
}
impl Default for ExportSettings {
fn default() -> Self {
Self {
format: ExportFormat::Wav,
sample_rate: 44100,
channels: 2,
bit_depth: 16,
mp3_bitrate: 320,
start_time: 0.0,
end_time: 60.0,
}
}
}
/// Export the project to an audio file
///
/// This performs offline rendering, processing the entire timeline
/// in chunks to generate the final audio file.
pub fn export_audio<P: AsRef<Path>>(
project: &mut Project,
pool: &AudioPool,
midi_pool: &MidiClipPool,
settings: &ExportSettings,
output_path: P,
) -> Result<(), String> {
// Render the project to memory
let samples = render_to_memory(project, pool, midi_pool, settings)?;
// Write to file based on format
match settings.format {
ExportFormat::Wav => write_wav(&samples, settings, output_path)?,
ExportFormat::Flac => write_flac(&samples, settings, output_path)?,
}
Ok(())
}
/// Render the project to memory
fn render_to_memory(
project: &mut Project,
pool: &AudioPool,
midi_pool: &MidiClipPool,
settings: &ExportSettings,
) -> Result<Vec<f32>, String> {
// Calculate total number of frames
let duration = settings.end_time - settings.start_time;
let total_frames = (duration * settings.sample_rate as f64).round() as usize;
let total_samples = total_frames * settings.channels as usize;
println!("Export: duration={:.3}s, total_frames={}, total_samples={}, channels={}",
duration, total_frames, total_samples, settings.channels);
// Render in chunks to avoid memory issues
const CHUNK_FRAMES: usize = 4096;
let chunk_samples = CHUNK_FRAMES * settings.channels as usize;
// Create buffer for rendering
let mut render_buffer = vec![0.0f32; chunk_samples];
let mut buffer_pool = BufferPool::new(16, chunk_samples);
// Collect all rendered samples
let mut all_samples = Vec::with_capacity(total_samples);
let mut playhead = settings.start_time;
let chunk_duration = CHUNK_FRAMES as f64 / settings.sample_rate as f64;
// Render the entire timeline in chunks
while playhead < settings.end_time {
// Clear the render buffer
render_buffer.fill(0.0);
// Render this chunk
project.render(
&mut render_buffer,
pool,
midi_pool,
&mut buffer_pool,
playhead,
settings.sample_rate,
settings.channels,
);
// Calculate how many samples we actually need from this chunk
let remaining_time = settings.end_time - playhead;
let samples_needed = if remaining_time < chunk_duration {
// Calculate frames needed and ensure it's a whole number
let frames_needed = (remaining_time * settings.sample_rate as f64).round() as usize;
let samples = frames_needed * settings.channels as usize;
// Ensure we don't exceed chunk size
samples.min(chunk_samples)
} else {
chunk_samples
};
// Append to output
all_samples.extend_from_slice(&render_buffer[..samples_needed]);
playhead += chunk_duration;
}
println!("Export: rendered {} samples total", all_samples.len());
// Verify the sample count is a multiple of channels
if all_samples.len() % settings.channels as usize != 0 {
return Err(format!(
"Sample count {} is not a multiple of channel count {}",
all_samples.len(),
settings.channels
));
}
Ok(all_samples)
}
/// Write WAV file using hound
fn write_wav<P: AsRef<Path>>(
samples: &[f32],
settings: &ExportSettings,
output_path: P,
) -> Result<(), String> {
let spec = hound::WavSpec {
channels: settings.channels as u16,
sample_rate: settings.sample_rate,
bits_per_sample: settings.bit_depth,
sample_format: hound::SampleFormat::Int,
};
let mut writer = hound::WavWriter::create(output_path, spec)
.map_err(|e| format!("Failed to create WAV file: {}", e))?;
// Write samples
match settings.bit_depth {
16 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 32767.0) as i16;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
24 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 8388607.0) as i32;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
_ => return Err(format!("Unsupported bit depth: {}", settings.bit_depth)),
}
writer.finalize()
.map_err(|e| format!("Failed to finalize WAV file: {}", e))?;
Ok(())
}
/// Write "FLAC" output (placeholder: hound can only encode WAV-format data)
fn write_flac<P: AsRef<Path>>(
samples: &[f32],
settings: &ExportSettings,
output_path: P,
) -> Result<(), String> {
// hound has no FLAC encoder, so for now this writes WAV-format data to the
// .flac path; swap in a dedicated FLAC encoder to produce real FLAC output
let spec = hound::WavSpec {
channels: settings.channels as u16,
sample_rate: settings.sample_rate,
bits_per_sample: settings.bit_depth,
sample_format: hound::SampleFormat::Int,
};
let mut writer = hound::WavWriter::create(output_path, spec)
.map_err(|e| format!("Failed to create FLAC file: {}", e))?;
// Write samples (same as WAV for now)
match settings.bit_depth {
16 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 32767.0) as i16;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
24 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 8388607.0) as i32;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
_ => return Err(format!("Unsupported bit depth: {}", settings.bit_depth)),
}
writer.finalize()
.map_err(|e| format!("Failed to finalize FLAC file: {}", e))?;
Ok(())
}
// TODO: Add MP3 export support once a suitable encoder library is chosen
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_export_settings_default() {
let settings = ExportSettings::default();
assert_eq!(settings.format, ExportFormat::Wav);
assert_eq!(settings.sample_rate, 44100);
assert_eq!(settings.channels, 2);
assert_eq!(settings.bit_depth, 16);
}
#[test]
fn test_format_extension() {
assert_eq!(ExportFormat::Wav.extension(), "wav");
assert_eq!(ExportFormat::Flac.extension(), "flac");
}
}
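The 16-bit quantization step used by `write_wav` (clamp to [-1.0, 1.0], then scale to the signed 16-bit range) can be checked in isolation; a minimal sketch:

```rust
// Minimal sketch of write_wav's float-to-PCM conversion: clamp the sample
// to [-1.0, 1.0], then scale by 32767 and truncate to i16.
fn to_i16(sample: f32) -> i16 {
    (sample.clamp(-1.0, 1.0) * 32767.0) as i16
}

fn main() {
    assert_eq!(to_i16(0.0), 0);
    assert_eq!(to_i16(1.5), 32767);   // out-of-range input is clipped
    assert_eq!(to_i16(-2.0), -32767); // -1.0 maps to -32767, not i16::MIN
    println!("ok");
}
```

The 24-bit path is identical with a scale factor of 8388607 (2^23 - 1) and an `i32` target.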

View File

@ -0,0 +1,169 @@
/// Metronome for providing click track during playback
pub struct Metronome {
enabled: bool,
bpm: f32,
time_signature_numerator: u32,
time_signature_denominator: u32,
last_beat: i64, // Last beat number that was played (-1 = none)
// Pre-generated click samples (mono)
high_click: Vec<f32>, // Accent click for first beat
low_click: Vec<f32>, // Normal click for other beats
// Click playback state
click_position: usize, // Current position in the click sample (0 = not playing)
playing_high_click: bool, // Which click we're currently playing
#[allow(dead_code)]
sample_rate: u32,
}
impl Metronome {
/// Create a new metronome with pre-generated click sounds
pub fn new(sample_rate: u32) -> Self {
let (high_click, low_click) = Self::generate_clicks(sample_rate);
Self {
enabled: false,
bpm: 120.0,
time_signature_numerator: 4,
time_signature_denominator: 4,
last_beat: -1,
high_click,
low_click,
click_position: 0,
playing_high_click: false,
sample_rate,
}
}
/// Generate woodblock-style click samples
fn generate_clicks(sample_rate: u32) -> (Vec<f32>, Vec<f32>) {
let click_duration_ms = 10.0; // 10ms click
let click_samples = ((sample_rate as f32 * click_duration_ms) / 1000.0) as usize;
// High click (accent): 1200 Hz + 2400 Hz (higher pitched woodblock)
let high_freq1 = 1200.0;
let high_freq2 = 2400.0;
let mut high_click = Vec::with_capacity(click_samples);
for i in 0..click_samples {
let t = i as f32 / sample_rate as f32;
let envelope = 1.0 - (i as f32 / click_samples as f32); // Linear decay
let envelope = envelope * envelope; // Square for faster decay
// Mix two sine waves for woodblock character
let sample = 0.3 * (2.0 * std::f32::consts::PI * high_freq1 * t).sin()
+ 0.2 * (2.0 * std::f32::consts::PI * high_freq2 * t).sin();
// Add a low-amplitude sine as a cheap stand-in for an attack noise transient
let noise = (i as f32 * 0.1).sin() * 0.1;
high_click.push((sample + noise) * envelope * 0.5); // Scale down to avoid clipping
}
// Low click: 800 Hz + 1600 Hz (lower pitched woodblock)
let low_freq1 = 800.0;
let low_freq2 = 1600.0;
let mut low_click = Vec::with_capacity(click_samples);
for i in 0..click_samples {
let t = i as f32 / sample_rate as f32;
let envelope = 1.0 - (i as f32 / click_samples as f32);
let envelope = envelope * envelope;
let sample = 0.3 * (2.0 * std::f32::consts::PI * low_freq1 * t).sin()
+ 0.2 * (2.0 * std::f32::consts::PI * low_freq2 * t).sin();
let noise = (i as f32 * 0.1).sin() * 0.1;
low_click.push((sample + noise) * envelope * 0.4); // Slightly quieter than high click
}
(high_click, low_click)
}
/// Enable or disable the metronome
pub fn set_enabled(&mut self, enabled: bool) {
self.enabled = enabled;
if !enabled {
self.last_beat = -1; // Reset beat tracking when disabled
self.click_position = 0; // Stop any playing click
} else {
// When enabling, don't trigger a click until the next beat
self.click_position = usize::MAX; // Set to max to prevent immediate click
}
}
/// Update BPM and time signature
pub fn update_timing(&mut self, bpm: f32, time_signature: (u32, u32)) {
self.bpm = bpm;
self.time_signature_numerator = time_signature.0;
self.time_signature_denominator = time_signature.1;
}
/// Process audio and mix in metronome clicks
pub fn process(
&mut self,
output: &mut [f32],
playhead_samples: u64,
playing: bool,
sample_rate: u32,
channels: u32,
) {
if !self.enabled || !playing {
self.click_position = 0; // Reset if not playing
return;
}
let frames = output.len() / channels as usize;
for frame in 0..frames {
let current_sample = playhead_samples + frame as u64;
// Calculate current beat number
let current_time_seconds = current_sample as f64 / sample_rate as f64;
let beats_per_second = self.bpm as f64 / 60.0;
let current_beat = (current_time_seconds * beats_per_second).floor() as i64;
// Check if we crossed a beat boundary
if current_beat != self.last_beat && current_beat >= 0 {
self.last_beat = current_beat;
// Only trigger a click if we're not in the "just enabled" state
if self.click_position != usize::MAX {
// Determine which click to play
// Beat 1 of each measure gets the accent (high click)
let beat_in_measure = (current_beat as u32 % self.time_signature_numerator) as usize;
let is_first_beat = beat_in_measure == 0;
// Start playing the appropriate click
self.playing_high_click = is_first_beat;
self.click_position = 0; // Start from beginning of click
} else {
// We just got enabled - reset position but don't play yet
self.click_position = self.high_click.len(); // Set past end so no click plays
}
}
// Continue playing click sample if we're currently in one
let click = if self.playing_high_click {
&self.high_click
} else {
&self.low_click
};
if self.click_position < click.len() {
let click_sample = click[self.click_position];
// Mix into all channels
for ch in 0..channels as usize {
let output_idx = frame * channels as usize + ch;
output[output_idx] += click_sample;
}
self.click_position += 1;
}
}
}
}
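The beat-boundary detection in `process()` reduces to one formula: the beat index at a given playhead position is `floor(seconds * bpm / 60)`, and a click fires whenever that index changes between frames. A standalone sketch of that arithmetic (the function name is illustrative):

```rust
// Beat index for a playhead position, mirroring the metronome's per-frame math.
fn beat_at(sample: u64, sample_rate: u32, bpm: f64) -> i64 {
    let seconds = sample as f64 / sample_rate as f64;
    (seconds * bpm / 60.0).floor() as i64
}

fn main() {
    // At 120 BPM and 48 kHz, one beat = 0.5 s = 24_000 samples.
    assert_eq!(beat_at(0, 48_000, 120.0), 0);
    assert_eq!(beat_at(23_999, 48_000, 120.0), 0);
    assert_eq!(beat_at(24_000, 48_000, 120.0), 1);
    // In 4/4, the accented downbeat is where beat % numerator == 0.
    assert_eq!(beat_at(96_000, 48_000, 120.0) % 4, 0);
    println!("ok");
}
```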

View File

@ -63,73 +63,216 @@ impl MidiEvent {
}
}
/// MIDI clip ID type
/// MIDI clip ID type (for clips stored in the pool)
pub type MidiClipId = u32;
/// MIDI clip containing a sequence of MIDI events
/// MIDI clip instance ID type (for instances placed on tracks)
pub type MidiClipInstanceId = u32;
/// MIDI clip content - stores the actual MIDI events
///
/// This represents the content data stored in the MidiClipPool.
/// Events have timestamps relative to the start of the clip (0.0 = clip beginning).
#[derive(Debug, Clone)]
pub struct MidiClip {
pub id: MidiClipId,
pub events: Vec<MidiEvent>,
pub start_time: f64, // Position on timeline in seconds
pub duration: f64, // Clip duration in seconds
pub loop_enabled: bool,
pub duration: f64, // Total content duration in seconds
pub name: String,
}
impl MidiClip {
/// Create a new MIDI clip
pub fn new(id: MidiClipId, start_time: f64, duration: f64) -> Self {
/// Create a new MIDI clip with content
pub fn new(id: MidiClipId, events: Vec<MidiEvent>, duration: f64, name: String) -> Self {
let mut clip = Self {
id,
events,
duration,
name,
};
// Sort events by timestamp
clip.events.sort_by(|a, b| a.timestamp.partial_cmp(&b.timestamp).unwrap());
clip
}
/// Create an empty MIDI clip
pub fn empty(id: MidiClipId, duration: f64, name: String) -> Self {
Self {
id,
events: Vec::new(),
start_time,
duration,
loop_enabled: false,
name,
}
}
/// Add a MIDI event to the clip
pub fn add_event(&mut self, event: MidiEvent) {
self.events.push(event);
// Keep events sorted by timestamp (using partial_cmp for f64)
// Keep events sorted by timestamp
self.events.sort_by(|a, b| a.timestamp.partial_cmp(&b.timestamp).unwrap());
}
/// Get the end time of the clip
pub fn end_time(&self) -> f64 {
self.start_time + self.duration
/// Get events within a time range (relative to clip start)
/// Used by MidiClipInstance to fetch events for a given portion of the content
pub fn get_events_in_range(&self, start: f64, end: f64) -> Vec<MidiEvent> {
self.events
.iter()
.filter(|e| e.timestamp >= start && e.timestamp < end)
.copied()
.collect()
}
}
/// MIDI clip instance - a reference to MidiClip content with timeline positioning
///
/// ## Timing Model
/// - `internal_start` / `internal_end`: Define the region of the source clip to play (trimming)
/// - `external_start` / `external_duration`: Define where the instance appears on the timeline and how long
///
/// ## Looping
/// If `external_duration` is greater than `internal_end - internal_start`,
/// the instance will seamlessly loop back to `internal_start` when it reaches `internal_end`.
#[derive(Debug, Clone)]
pub struct MidiClipInstance {
pub id: MidiClipInstanceId,
pub clip_id: MidiClipId, // Reference to MidiClip in pool
/// Start position within the clip content (seconds)
pub internal_start: f64,
/// End position within the clip content (seconds)
pub internal_end: f64,
/// Start position on the timeline (seconds)
pub external_start: f64,
/// Duration on the timeline (seconds) - can be longer than internal duration for looping
pub external_duration: f64,
}
impl MidiClipInstance {
/// Create a new MIDI clip instance
pub fn new(
id: MidiClipInstanceId,
clip_id: MidiClipId,
internal_start: f64,
internal_end: f64,
external_start: f64,
external_duration: f64,
) -> Self {
Self {
id,
clip_id,
internal_start,
internal_end,
external_start,
external_duration,
}
}
/// Get events that should be triggered in a given time range
/// Create an instance that uses the full clip content (no trimming, no looping)
pub fn from_full_clip(
id: MidiClipInstanceId,
clip_id: MidiClipId,
clip_duration: f64,
external_start: f64,
) -> Self {
Self {
id,
clip_id,
internal_start: 0.0,
internal_end: clip_duration,
external_start,
external_duration: clip_duration,
}
}
/// Get the internal (content) duration
pub fn internal_duration(&self) -> f64 {
self.internal_end - self.internal_start
}
/// Get the end time on the timeline
pub fn external_end(&self) -> f64 {
self.external_start + self.external_duration
}
/// Check if this instance loops
pub fn is_looping(&self) -> bool {
self.external_duration > self.internal_duration()
}
/// Get the end time on the timeline (for backwards compatibility)
pub fn end_time(&self) -> f64 {
self.external_end()
}
/// Get the start time on the timeline (for backwards compatibility)
pub fn start_time(&self) -> f64 {
self.external_start
}
/// Check if this instance overlaps with a time range
pub fn overlaps_range(&self, range_start: f64, range_end: f64) -> bool {
self.external_start < range_end && self.external_end() > range_start
}
/// Get events that should be triggered in a given timeline range
///
/// Returns events along with their absolute timestamps in samples
/// This handles:
/// - Trimming (internal_start/internal_end)
/// - Looping (when external duration > internal duration)
/// - Time mapping from timeline to clip content
///
/// Returns events with timestamps adjusted to timeline time (not clip-relative)
pub fn get_events_in_range(
&self,
clip: &MidiClip,
range_start_seconds: f64,
range_end_seconds: f64,
_sample_rate: u32,
) -> Vec<MidiEvent> {
let mut result = Vec::new();
// Check if clip overlaps with the range
if range_start_seconds >= self.end_time() || range_end_seconds <= self.start_time {
// Check if instance overlaps with the range
if !self.overlaps_range(range_start_seconds, range_end_seconds) {
return result;
}
// Calculate the intersection
let play_start = range_start_seconds.max(self.start_time);
let play_end = range_end_seconds.min(self.end_time());
let internal_duration = self.internal_duration();
if internal_duration <= 0.0 {
return result;
}
// Position within the clip
let clip_position_seconds = play_start - self.start_time;
let clip_end_seconds = play_end - self.start_time;
// Calculate how many complete loops fit in the external duration
let num_loops = if self.external_duration > internal_duration {
(self.external_duration / internal_duration).ceil() as usize
} else {
1
};
// Find events in this range
// Note: event.timestamp is now in seconds relative to clip start
// Use half-open interval [start, end) to avoid triggering events twice
for event in &self.events {
if event.timestamp >= clip_position_seconds && event.timestamp < clip_end_seconds {
result.push(*event);
let external_end = self.external_end();
for loop_idx in 0..num_loops {
let loop_offset = loop_idx as f64 * internal_duration;
// Get events from the clip that fall within the internal range
for event in &clip.events {
// Skip events outside the trimmed region
if event.timestamp < self.internal_start || event.timestamp >= self.internal_end {
continue;
}
// Convert to timeline time
let relative_content_time = event.timestamp - self.internal_start;
let timeline_time = self.external_start + loop_offset + relative_content_time;
// Check if within current buffer range and instance bounds
if timeline_time >= range_start_seconds
&& timeline_time < range_end_seconds
&& timeline_time < external_end
{
let mut adjusted_event = *event;
adjusted_event.timestamp = timeline_time;
result.push(adjusted_event);
}
}
}
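The trim-and-loop mapping implemented by `get_events_in_range` reduces to one function: a content-time event in loop iteration `k` lands on the timeline at `external_start + k * internal_duration + (timestamp - internal_start)`, and is dropped if it falls outside the trimmed region or past `external_end`. A standalone sketch (the `Inst` struct is an illustrative stand-in for `MidiClipInstance`):

```rust
// Illustrative stand-in for MidiClipInstance's four timing fields.
struct Inst {
    internal_start: f64,
    internal_end: f64,
    external_start: f64,
    external_duration: f64,
}

impl Inst {
    // Map a clip-content timestamp to timeline time for loop iteration k,
    // returning None if the event is trimmed out or falls past external_end.
    fn timeline_time(&self, event_ts: f64, k: usize) -> Option<f64> {
        if event_ts < self.internal_start || event_ts >= self.internal_end {
            return None; // outside the trimmed region
        }
        let internal = self.internal_end - self.internal_start;
        let t = self.external_start + k as f64 * internal + (event_ts - self.internal_start);
        (t < self.external_start + self.external_duration).then_some(t)
    }
}

fn main() {
    // 2 s of content (1.0..3.0) looped across 5 s of timeline starting at 10 s.
    let inst = Inst { internal_start: 1.0, internal_end: 3.0, external_start: 10.0, external_duration: 5.0 };
    assert_eq!(inst.timeline_time(1.5, 0), Some(10.5));
    assert_eq!(inst.timeline_time(1.5, 2), Some(14.5)); // third pass still fits
    assert_eq!(inst.timeline_time(2.9, 2), None);       // would land at 15.9, past the end
    assert_eq!(inst.timeline_time(0.5, 0), None);       // trimmed out
    println!("ok");
}
```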

View File

@ -0,0 +1,101 @@
use std::collections::HashMap;
use super::midi::{MidiClip, MidiClipId, MidiEvent};
/// Pool for storing MIDI clip content
/// Similar to AudioClipPool but for MIDI data
pub struct MidiClipPool {
clips: HashMap<MidiClipId, MidiClip>,
next_id: MidiClipId,
}
impl MidiClipPool {
/// Create a new empty MIDI clip pool
pub fn new() -> Self {
Self {
clips: HashMap::new(),
next_id: 1, // Start at 1 so 0 can indicate "no clip"
}
}
/// Add a new clip to the pool with the given events and duration
/// Returns the ID of the newly created clip
pub fn add_clip(&mut self, events: Vec<MidiEvent>, duration: f64, name: String) -> MidiClipId {
let id = self.next_id;
self.next_id += 1;
let clip = MidiClip::new(id, events, duration, name);
self.clips.insert(id, clip);
id
}
/// Add an existing clip to the pool (used when loading projects)
/// The clip's ID is preserved
pub fn add_existing_clip(&mut self, clip: MidiClip) {
// Update next_id to avoid collisions
if clip.id >= self.next_id {
self.next_id = clip.id + 1;
}
self.clips.insert(clip.id, clip);
}
/// Get a clip by ID
pub fn get_clip(&self, id: MidiClipId) -> Option<&MidiClip> {
self.clips.get(&id)
}
/// Get a mutable clip by ID
pub fn get_clip_mut(&mut self, id: MidiClipId) -> Option<&mut MidiClip> {
self.clips.get_mut(&id)
}
/// Remove a clip from the pool
pub fn remove_clip(&mut self, id: MidiClipId) -> Option<MidiClip> {
self.clips.remove(&id)
}
/// Duplicate a clip, returning the new clip's ID
pub fn duplicate_clip(&mut self, id: MidiClipId) -> Option<MidiClipId> {
let clip = self.clips.get(&id)?;
let new_id = self.next_id;
self.next_id += 1;
let mut new_clip = clip.clone();
new_clip.id = new_id;
new_clip.name = format!("{} (copy)", clip.name);
self.clips.insert(new_id, new_clip);
Some(new_id)
}
/// Get all clip IDs in the pool
pub fn clip_ids(&self) -> Vec<MidiClipId> {
self.clips.keys().copied().collect()
}
/// Get the number of clips in the pool
pub fn len(&self) -> usize {
self.clips.len()
}
/// Check if the pool is empty
pub fn is_empty(&self) -> bool {
self.clips.is_empty()
}
/// Clear all clips from the pool
pub fn clear(&mut self) {
self.clips.clear();
self.next_id = 1;
}
/// Get an iterator over all clips
pub fn iter(&self) -> impl Iterator<Item = (&MidiClipId, &MidiClip)> {
self.clips.iter()
}
}
impl Default for MidiClipPool {
fn default() -> Self {
Self::new()
}
}
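The pool's ID discipline has two invariants: IDs start at 1 (0 means "no clip"), and `add_existing_clip` must bump `next_id` past any loaded clip's ID so later allocations never collide. A minimal standalone sketch of just that invariant (clip content replaced with a `String` for brevity):

```rust
use std::collections::HashMap;

// Minimal sketch of MidiClipPool's ID allocation; clip content is a String here.
struct Pool {
    clips: HashMap<u32, String>,
    next_id: u32,
}

impl Pool {
    fn new() -> Self {
        Self { clips: HashMap::new(), next_id: 1 } // 0 is reserved for "no clip"
    }
    fn add(&mut self, name: String) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.clips.insert(id, name);
        id
    }
    fn add_existing(&mut self, id: u32, name: String) {
        if id >= self.next_id {
            self.next_id = id + 1; // keep future IDs collision-free
        }
        self.clips.insert(id, name);
    }
}

fn main() {
    let mut p = Pool::new();
    assert_eq!(p.add("a".into()), 1);
    p.add_existing(10, "loaded".into()); // e.g. restored from a saved project
    assert_eq!(p.add("b".into()), 11);   // allocation skips past id 10
    println!("ok");
}
```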

View File

@ -3,7 +3,10 @@ pub mod bpm_detector;
pub mod buffer_pool;
pub mod clip;
pub mod engine;
pub mod export;
pub mod metronome;
pub mod midi;
pub mod midi_pool;
pub mod node_graph;
pub mod pool;
pub mod project;
@ -13,10 +16,13 @@ pub mod track;
pub use automation::{AutomationLane, AutomationLaneId, AutomationPoint, CurveType, ParameterId};
pub use buffer_pool::BufferPool;
pub use clip::{Clip, ClipId};
pub use clip::{AudioClipInstance, AudioClipInstanceId, Clip, ClipId};
pub use engine::{Engine, EngineController};
pub use midi::{MidiClip, MidiClipId, MidiEvent};
pub use pool::{AudioFile as PoolAudioFile, AudioPool};
pub use export::{export_audio, ExportFormat, ExportSettings};
pub use metronome::Metronome;
pub use midi::{MidiClip, MidiClipId, MidiClipInstance, MidiClipInstanceId, MidiEvent};
pub use midi_pool::MidiClipPool;
pub use pool::{AudioClipPool, AudioFile as PoolAudioFile, AudioPool};
pub use project::Project;
pub use recording::RecordingState;
pub use sample_loader::{load_audio_file, SampleData};

View File

@ -472,7 +472,7 @@ impl AudioGraph {
// This is safe because each output buffer is independent
let buffer = &mut node.output_buffers[i] as *mut Vec<f32>;
unsafe {
let slice = &mut (*buffer)[..process_size.min((*buffer).len())];
let slice = &mut (&mut *buffer)[..process_size.min((*buffer).len())];
output_slices.push(slice);
}
}
@ -733,6 +733,9 @@ impl AudioGraph {
root_key: info.root_key,
velocity_min: info.velocity_min,
velocity_max: info.velocity_max,
loop_start: info.loop_start,
loop_end: info.loop_end,
loop_mode: info.loop_mode,
}
})
.collect();
@ -938,6 +941,9 @@ impl AudioGraph {
layer.root_key,
layer.velocity_min,
layer.velocity_max,
layer.loop_start,
layer.loop_end,
layer.loop_mode,
);
}
} else if let Some(ref path) = layer.file_path {
@ -950,6 +956,9 @@ impl AudioGraph {
layer.root_key,
layer.velocity_min,
layer.velocity_max,
layer.loop_start,
layer.loop_end,
layer.loop_mode,
) {
eprintln!("Failed to load sample layer from {}: {}", resolved_path, e);
}

View File

@ -63,7 +63,7 @@ pub use math::MathNode;
pub use midi_input::MidiInputNode;
pub use midi_to_cv::MidiToCVNode;
pub use mixer::MixerNode;
pub use multi_sampler::MultiSamplerNode;
pub use multi_sampler::{MultiSamplerNode, LoopMode};
pub use noise::NoiseGeneratorNode;
pub use oscillator::OscillatorNode;
pub use oscilloscope::OscilloscopeNode;

View File

@ -7,6 +7,16 @@ const PARAM_ATTACK: u32 = 1;
const PARAM_RELEASE: u32 = 2;
const PARAM_TRANSPOSE: u32 = 3;
/// Loop playback mode
#[derive(Clone, Copy, Debug, PartialEq, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum LoopMode {
/// Play sample once, no looping
OneShot,
/// Loop continuously between loop_start and loop_end
Continuous,
}
/// Metadata about a loaded sample layer (for preset serialization)
#[derive(Clone, Debug)]
pub struct LayerInfo {
@ -16,6 +26,9 @@ pub struct LayerInfo {
pub root_key: u8,
pub velocity_min: u8,
pub velocity_max: u8,
pub loop_start: Option<usize>, // Loop start point in samples
pub loop_end: Option<usize>, // Loop end point in samples
pub loop_mode: LoopMode,
}
/// Single sample with velocity range and key range
@ -32,6 +45,11 @@ struct SampleLayer {
// Velocity range: 0-127
velocity_min: u8,
velocity_max: u8,
// Loop points (in samples)
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
}
impl SampleLayer {
@ -43,6 +61,9 @@ impl SampleLayer {
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) -> Self {
Self {
sample_data,
@ -52,6 +73,9 @@ impl SampleLayer {
root_key,
velocity_min,
velocity_max,
loop_start,
loop_end,
loop_mode,
}
}
@ -62,6 +86,114 @@ impl SampleLayer {
&& velocity >= self.velocity_min
&& velocity <= self.velocity_max
}
/// Auto-detect loop points using autocorrelation to find a good loop region
/// Returns (loop_start, loop_end) in samples
fn detect_loop_points(sample_data: &[f32], sample_rate: f32) -> Option<(usize, usize)> {
if sample_data.len() < (sample_rate * 0.5) as usize {
return None; // Need at least 0.5 seconds of audio
}
// Look for loop in the sustain region (skip attack/decay, avoid release)
// For sustained instruments, look in the middle 50% of the sample
let search_start = (sample_data.len() as f32 * 0.25) as usize;
let search_end = (sample_data.len() as f32 * 0.75) as usize;
if search_end <= search_start {
return None;
}
// Find the best loop point using autocorrelation
// For sustained instruments like brass/woodwind, we want longer loops
let min_loop_length = (sample_rate * 0.1) as usize; // Min 0.1s loop (more stable)
let max_loop_length = (sample_rate * 10.0) as usize; // Max 10 second loop
let mut best_correlation = -1.0;
let mut best_loop_start = search_start;
let mut best_loop_end = search_end;
// Try different loop lengths from LONGEST to SHORTEST
// This way we prefer longer loops and stop early if we find a good one
let length_step = ((sample_rate * 0.05) as usize).max(512); // 50ms steps
let actual_max_length = max_loop_length.min(search_end - search_start);
// Manually iterate backwards since step_by().rev() doesn't work on RangeInclusive<usize>
let mut loop_length = actual_max_length;
while loop_length >= min_loop_length {
// Try different starting points in the sustain region (finer steps)
let start_step = ((sample_rate * 0.02) as usize).max(256); // 20ms steps
for start in (search_start..search_end - loop_length).step_by(start_step) {
let end = start + loop_length;
if end > search_end {
break;
}
// Calculate correlation between loop end and loop start
let correlation = Self::calculate_loop_correlation(sample_data, start, end);
if correlation > best_correlation {
best_correlation = correlation;
best_loop_start = start;
best_loop_end = end;
}
}
// If we found a good enough loop, stop searching shorter ones
if best_correlation > 0.8 {
break;
}
// Decrement loop_length, with underflow protection
if loop_length < length_step {
break;
}
loop_length -= length_step;
}
// Lower threshold since longer loops are harder to match perfectly
if best_correlation > 0.6 {
Some((best_loop_start, best_loop_end))
} else {
// Fallback: use a reasonable chunk of the sustain region
let fallback_length = ((search_end - search_start) / 2).max(min_loop_length);
Some((search_start, search_start + fallback_length))
}
}
/// Calculate how well the audio loops at the given points
/// Returns correlation value between -1.0 and 1.0 (higher is better)
fn calculate_loop_correlation(sample_data: &[f32], loop_start: usize, loop_end: usize) -> f32 {
let loop_length = loop_end - loop_start;
let window_size = (loop_length / 10).max(128).min(2048); // Compare last 10% of loop
if loop_end + window_size >= sample_data.len() {
return -1.0;
}
// Compare the end of the loop region with the beginning
let region1_start = loop_end - window_size;
let region2_start = loop_start;
let mut sum_xy = 0.0;
let mut sum_x2 = 0.0;
let mut sum_y2 = 0.0;
for i in 0..window_size {
let x = sample_data[region1_start + i];
let y = sample_data[region2_start + i];
sum_xy += x * y;
sum_x2 += x * x;
sum_y2 += y * y;
}
// Normalized cross-correlation (Pearson-style, without mean-centering)
let denominator = (sum_x2 * sum_y2).sqrt();
if denominator > 0.0 {
sum_xy / denominator
} else {
-1.0
}
}
}
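`calculate_loop_correlation` scores a candidate loop by normalizing the dot product of the two compared windows by the product of their magnitudes, so identical windows score 1.0 and inverted windows -1.0. The same scoring, sketched without the loop-point indexing:

```rust
// Normalized cross-correlation of two equal-length windows, as used for
// loop scoring: dot product divided by the product of the magnitudes.
fn correlation(a: &[f32], b: &[f32]) -> f32 {
    let (mut xy, mut x2, mut y2) = (0.0f32, 0.0f32, 0.0f32);
    for (&x, &y) in a.iter().zip(b) {
        xy += x * y;
        x2 += x * x;
        y2 += y * y;
    }
    let denom = (x2 * y2).sqrt();
    if denom > 0.0 { xy / denom } else { -1.0 }
}

fn main() {
    let w: Vec<f32> = (0..128).map(|i| (i as f32 * 0.2).sin()).collect();
    let inv: Vec<f32> = w.iter().map(|x| -x).collect();
    assert!((correlation(&w, &w) - 1.0).abs() < 1e-5);  // perfect loop
    assert!((correlation(&w, &inv) + 1.0).abs() < 1e-5); // worst case
    println!("ok");
}
```

Since there is no mean-centering, silent windows hit the `denom > 0.0` guard and score -1.0, which keeps them from being chosen as loop points.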
/// Active voice playing a sample
@ -75,6 +207,10 @@ struct Voice {
// Envelope
envelope_phase: EnvelopePhase,
envelope_value: f32,
// Loop crossfade state
crossfade_buffer: Vec<f32>, // Stores samples from before loop_start for crossfading
crossfade_length: usize, // Length of crossfade in samples (default 1000 = ~20ms @ 48kHz)
}
#[derive(Debug, Clone, Copy, PartialEq)]
@ -94,6 +230,8 @@ impl Voice {
is_active: true,
envelope_phase: EnvelopePhase::Attack,
envelope_value: 0.0,
crossfade_buffer: Vec::new(),
crossfade_length: 1000, // ~20ms at 48kHz (longer for smoother loops)
}
}
}
@ -166,6 +304,9 @@ impl MultiSamplerNode {
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) {
let layer = SampleLayer::new(
sample_data,
@ -175,6 +316,9 @@ impl MultiSamplerNode {
root_key,
velocity_min,
velocity_max,
loop_start,
loop_end,
loop_mode,
);
self.layers.push(layer);
}
@ -188,10 +332,25 @@ impl MultiSamplerNode {
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) -> Result<(), String> {
use crate::audio::sample_loader::load_audio_file;
let sample_data = load_audio_file(path)?;
// Auto-detect loop points if not provided and mode is Continuous
let (final_loop_start, final_loop_end) = if loop_mode == LoopMode::Continuous && loop_start.is_none() && loop_end.is_none() {
if let Some((start, end)) = SampleLayer::detect_loop_points(&sample_data.samples, sample_data.sample_rate as f32) {
(Some(start), Some(end))
} else {
(None, None)
}
} else {
(loop_start, loop_end)
};
self.add_layer(
sample_data.samples,
sample_data.sample_rate as f32,
@ -200,6 +359,9 @@ impl MultiSamplerNode {
root_key,
velocity_min,
velocity_max,
final_loop_start,
final_loop_end,
loop_mode,
);
// Store layer metadata for preset serialization
@ -210,6 +372,9 @@ impl MultiSamplerNode {
root_key,
velocity_min,
velocity_max,
loop_start: final_loop_start,
loop_end: final_loop_end,
loop_mode,
});
Ok(())
@ -236,6 +401,9 @@ impl MultiSamplerNode {
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) -> Result<(), String> {
if layer_index >= self.layers.len() {
return Err("Layer index out of bounds".to_string());
@ -247,6 +415,9 @@ impl MultiSamplerNode {
self.layers[layer_index].root_key = root_key;
self.layers[layer_index].velocity_min = velocity_min;
self.layers[layer_index].velocity_max = velocity_max;
self.layers[layer_index].loop_start = loop_start;
self.layers[layer_index].loop_end = loop_end;
self.layers[layer_index].loop_mode = loop_mode;
// Update the layer info
if layer_index < self.layer_infos.len() {
@ -255,6 +426,9 @@ impl MultiSamplerNode {
self.layer_infos[layer_index].root_key = root_key;
self.layer_infos[layer_index].velocity_min = velocity_min;
self.layer_infos[layer_index].velocity_max = velocity_max;
self.layer_infos[layer_index].loop_start = loop_start;
self.layer_infos[layer_index].loop_end = loop_end;
self.layer_infos[layer_index].loop_mode = loop_mode;
}
Ok(())
@ -429,10 +603,71 @@ impl AudioNode for MultiSamplerNode {
let speed_adjusted = speed * (layer.sample_rate / sample_rate as f32);
for frame in 0..frames {
// Read sample with linear interpolation
// Read sample with linear interpolation and loop handling
let playhead = voice.playhead;
let sample = if !layer.sample_data.is_empty() && playhead >= 0.0 {
let mut sample = 0.0;
if !layer.sample_data.is_empty() && playhead >= 0.0 {
let index = playhead.floor() as usize;
// Check if we need to handle looping
if layer.loop_mode == LoopMode::Continuous {
if let (Some(loop_start), Some(loop_end)) = (layer.loop_start, layer.loop_end) {
// Validate loop points
if loop_start < loop_end && loop_end <= layer.sample_data.len() {
// Fill crossfade buffer on first loop with samples just before loop_start
// These will be crossfaded with the beginning of the loop for seamless looping
if voice.crossfade_buffer.is_empty() && loop_start >= voice.crossfade_length {
let crossfade_start = loop_start.saturating_sub(voice.crossfade_length);
voice.crossfade_buffer = layer.sample_data[crossfade_start..loop_start].to_vec();
}
// Check if we've reached the loop end
if index >= loop_end {
// Wrap around to loop start
let loop_length = loop_end - loop_start;
let offset_from_end = index - loop_end;
let wrapped_index = loop_start + (offset_from_end % loop_length);
voice.playhead = wrapped_index as f32 + (playhead - playhead.floor());
}
// Read sample at current position
let current_index = voice.playhead.floor() as usize;
if current_index < layer.sample_data.len() {
let frac = voice.playhead - voice.playhead.floor();
let sample1 = layer.sample_data[current_index];
let sample2 = if current_index + 1 < layer.sample_data.len() {
layer.sample_data[current_index + 1]
} else {
layer.sample_data[loop_start] // Wrap to loop start for interpolation
};
sample = sample1 + (sample2 - sample1) * frac;
// Apply crossfade only at the END of loop
// Crossfade the end of loop with samples BEFORE loop_start
if current_index >= loop_start && current_index < loop_end {
if !voice.crossfade_buffer.is_empty() {
let crossfade_len = voice.crossfade_length.min(voice.crossfade_buffer.len());
// Only crossfade over the last crossfade_length samples of the loop,
// blending the loop tail with the samples immediately before loop_start
if current_index >= loop_end - crossfade_len && current_index < loop_end {
let crossfade_pos = current_index - (loop_end - crossfade_len);
if crossfade_pos < voice.crossfade_buffer.len() {
let end_sample = sample; // Current sample in the loop tail
let pre_loop_sample = voice.crossfade_buffer[crossfade_pos]; // Matching sample from just before loop_start
// Equal-power crossfade: fade out end, fade in pre-loop
let fade_ratio = crossfade_pos as f32 / crossfade_len as f32;
let fade_out = (1.0 - fade_ratio).sqrt();
let fade_in = fade_ratio.sqrt();
sample = end_sample * fade_out + pre_loop_sample * fade_in;
}
}
}
}
}
} else {
// Invalid loop points, play normally
if index < layer.sample_data.len() {
let frac = playhead - playhead.floor();
let sample1 = layer.sample_data[index];
@ -441,13 +676,36 @@ impl AudioNode for MultiSamplerNode {
} else {
0.0
};
sample1 + (sample2 - sample1) * frac
} else {
0.0
sample = sample1 + (sample2 - sample1) * frac;
}
}
} else {
// No loop points defined, play normally
if index < layer.sample_data.len() {
let frac = playhead - playhead.floor();
let sample1 = layer.sample_data[index];
let sample2 = if index + 1 < layer.sample_data.len() {
layer.sample_data[index + 1]
} else {
0.0
};
sample = sample1 + (sample2 - sample1) * frac;
}
}
} else {
// OneShot mode - play normally without looping
if index < layer.sample_data.len() {
let frac = playhead - playhead.floor();
let sample1 = layer.sample_data[index];
let sample2 = if index + 1 < layer.sample_data.len() {
layer.sample_data[index + 1]
} else {
0.0
};
sample = sample1 + (sample2 - sample1) * frac;
}
}
}
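Each branch above reads the sample with the same linear-interpolation pattern. Factored out as a standalone sketch (the helper `lerp_sample` is hypothetical, not a function in this file; it assumes `playhead >= 0.0`, as guarded by the caller above):

```rust
// Linear interpolation between adjacent samples at a fractional playhead
// position, as used in each playback branch above. Assumes playhead >= 0.0.
fn lerp_sample(data: &[f32], playhead: f32) -> f32 {
    let index = playhead.floor() as usize;
    if index >= data.len() {
        return 0.0; // past the end of the sample
    }
    let frac = playhead - playhead.floor();
    let s1 = data[index];
    // Interpolate toward the next sample, or silence at the final sample.
    let s2 = if index + 1 < data.len() { data[index + 1] } else { 0.0 };
    s1 + (s2 - s1) * frac
}

fn main() {
    let data = [0.0f32, 1.0, 0.5];
    assert_eq!(lerp_sample(&data, 0.5), 0.5);  // halfway between 0.0 and 1.0
    assert_eq!(lerp_sample(&data, 1.5), 0.75); // halfway between 1.0 and 0.5
    assert_eq!(lerp_sample(&data, 5.0), 0.0);  // past the end
}
```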
// Process envelope
match voice.envelope_phase {
@ -484,7 +742,8 @@ impl AudioNode for MultiSamplerNode {
// Advance playhead
voice.playhead += speed_adjusted;
// Stop if we've reached the end
// Stop if we've reached the end (only for OneShot mode)
if layer.loop_mode == LoopMode::OneShot {
if voice.playhead >= layer.sample_data.len() as f32 {
voice.is_active = false;
break;
@ -492,6 +751,7 @@ impl AudioNode for MultiSamplerNode {
}
}
}
}
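The equal-power crossfade applied at the loop seam can be checked independently. In this minimal sketch (the helper name is illustrative), the gains `sqrt(1-t)` and `sqrt(t)` keep the summed power constant across the fade, whereas a linear crossfade would dip about 3 dB at the midpoint:

```rust
// Equal-power crossfade gains as used at the loop seam: fade-out sqrt(1-t)
// and fade-in sqrt(t), so fade_out^2 + fade_in^2 == 1 for all t in [0, 1].
fn equal_power_gains(t: f32) -> (f32, f32) {
    ((1.0 - t).sqrt(), t.sqrt())
}

fn main() {
    for i in 0..=10 {
        let t = i as f32 / 10.0;
        let (fade_out, fade_in) = equal_power_gains(t);
        // Summed power stays at 1.0; a linear crossfade would sag to 0.5 at t = 0.5.
        assert!((fade_out * fade_out + fade_in * fade_in - 1.0).abs() < 1e-5);
    }
}
```

Equal-power fades are the usual choice for blending uncorrelated material (here, the loop tail against pre-loop samples), since they keep perceived loudness steady through the seam.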
fn reset(&mut self) {
self.voices.clear();

View File

@ -1,5 +1,6 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use super::nodes::LoopMode;
/// Sample data for preset serialization
#[derive(Debug, Clone, Serialize, Deserialize)]
@ -37,6 +38,16 @@ pub struct LayerData {
pub root_key: u8,
pub velocity_min: u8,
pub velocity_max: u8,
#[serde(skip_serializing_if = "Option::is_none")]
pub loop_start: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none")]
pub loop_end: Option<usize>,
#[serde(default = "default_loop_mode")]
pub loop_mode: LoopMode,
}
fn default_loop_mode() -> LoopMode {
LoopMode::OneShot
}
/// Serializable representation of a node graph preset

View File

@ -119,13 +119,16 @@ impl AudioFile {
}
}
/// Pool of shared audio files
pub struct AudioPool {
/// Pool of shared audio files (audio clip content)
pub struct AudioClipPool {
files: Vec<AudioFile>,
}
impl AudioPool {
/// Create a new empty audio pool
/// Type alias for backwards compatibility
pub type AudioPool = AudioClipPool;
impl AudioClipPool {
/// Create a new empty audio clip pool
pub fn new() -> Self {
Self {
files: Vec::new(),
@ -301,7 +304,7 @@ impl AudioPool {
}
}
impl Default for AudioPool {
impl Default for AudioClipPool {
fn default() -> Self {
Self::new()
}
@ -335,8 +338,8 @@ pub struct AudioPoolEntry {
pub embedded_data: Option<EmbeddedAudioData>,
}
impl AudioPool {
/// Serialize the audio pool for project saving
impl AudioClipPool {
/// Serialize the audio clip pool for project saving
///
/// Files smaller than 10MB are embedded as base64.
/// Larger files are stored as relative paths to the project file.

View File

@ -1,19 +1,27 @@
use super::buffer_pool::BufferPool;
use super::clip::Clip;
use super::midi::{MidiClip, MidiEvent};
use super::pool::AudioPool;
use super::midi::{MidiClip, MidiClipId, MidiClipInstance, MidiClipInstanceId, MidiEvent};
use super::midi_pool::MidiClipPool;
use super::pool::AudioClipPool;
use super::track::{AudioTrack, Metatrack, MidiTrack, RenderContext, TrackId, TrackNode};
use std::collections::HashMap;
/// Project manages the hierarchical track structure
/// Project manages the hierarchical track structure and clip pools
///
/// Tracks are stored in a flat HashMap but can be organized into groups,
/// forming a tree structure. Groups render their children recursively.
///
/// Clip content is stored in pools (MidiClipPool), while tracks store
/// clip instances that reference the pool content.
pub struct Project {
tracks: HashMap<TrackId, TrackNode>,
next_track_id: TrackId,
root_tracks: Vec<TrackId>, // Top-level tracks (not in any group)
sample_rate: u32, // System sample rate
/// Pool for MIDI clip content
pub midi_clip_pool: MidiClipPool,
/// Next MIDI clip instance ID (for generating unique IDs)
next_midi_clip_instance_id: MidiClipInstanceId,
}
impl Project {
@ -24,6 +32,8 @@ impl Project {
next_track_id: 0,
root_tracks: Vec::new(),
sample_rate,
midi_clip_pool: MidiClipPool::new(),
next_midi_clip_instance_id: 1,
}
}
@ -241,21 +251,81 @@ impl Project {
}
}
/// Add a MIDI clip to a MIDI track
pub fn add_midi_clip(&mut self, track_id: TrackId, clip: MidiClip) -> Result<(), &'static str> {
/// Add a MIDI clip instance to a MIDI track
/// The clip content should already exist in the midi_clip_pool
pub fn add_midi_clip_instance(&mut self, track_id: TrackId, instance: MidiClipInstance) -> Result<(), &'static str> {
if let Some(TrackNode::Midi(track)) = self.tracks.get_mut(&track_id) {
track.add_clip(clip);
track.add_clip_instance(instance);
Ok(())
} else {
Err("Track not found or is not a MIDI track")
}
}
/// Create a new MIDI clip in the pool and add an instance to a track
/// Returns (clip_id, instance_id) on success
pub fn create_midi_clip_with_instance(
&mut self,
track_id: TrackId,
events: Vec<MidiEvent>,
duration: f64,
name: String,
external_start: f64,
) -> Result<(MidiClipId, MidiClipInstanceId), &'static str> {
// Verify track exists and is a MIDI track
if !matches!(self.tracks.get(&track_id), Some(TrackNode::Midi(_))) {
return Err("Track not found or is not a MIDI track");
}
// Create clip in pool
let clip_id = self.midi_clip_pool.add_clip(events, duration, name);
// Create instance
let instance_id = self.next_midi_clip_instance_id;
self.next_midi_clip_instance_id += 1;
let instance = MidiClipInstance::from_full_clip(instance_id, clip_id, duration, external_start);
// Add instance to track
if let Some(TrackNode::Midi(track)) = self.tracks.get_mut(&track_id) {
track.add_clip_instance(instance);
}
Ok((clip_id, instance_id))
}
/// Generate a new unique MIDI clip instance ID
pub fn next_midi_clip_instance_id(&mut self) -> MidiClipInstanceId {
let id = self.next_midi_clip_instance_id;
self.next_midi_clip_instance_id += 1;
id
}
/// Legacy method for backwards compatibility - creates clip and instance from old MidiClip format
pub fn add_midi_clip(&mut self, track_id: TrackId, clip: MidiClip) -> Result<(), &'static str> {
self.add_midi_clip_at(track_id, clip, 0.0)
}
/// Add a MIDI clip to the pool and create an instance at the given timeline position
pub fn add_midi_clip_at(&mut self, track_id: TrackId, clip: MidiClip, start_time: f64) -> Result<(), &'static str> {
// Add the clip to the pool (it already has events and duration)
let duration = clip.duration;
let clip_id = clip.id;
self.midi_clip_pool.add_existing_clip(clip);
// Create an instance that uses the full clip at the given position
let instance_id = self.next_midi_clip_instance_id();
let instance = MidiClipInstance::from_full_clip(instance_id, clip_id, duration, start_time);
self.add_midi_clip_instance(track_id, instance)
}
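The pool/instance split used by `create_midi_clip_with_instance` and `add_midi_clip_at` reduces to a small pattern (all type names below are illustrative, not the actual Lightningbeam types): clip content is stored once in a pool, and lightweight instances reference it by id while carrying their own timeline position:

```rust
use std::collections::HashMap;

// Minimal sketch of the clip-pool/instance indirection: content lives once
// in the pool; tracks hold instances that reference it by id.
struct Pool<T> {
    clips: HashMap<u64, T>,
    next_id: u64,
}

impl<T> Pool<T> {
    fn new() -> Self { Self { clips: HashMap::new(), next_id: 0 } }
    fn add(&mut self, content: T) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.clips.insert(id, content);
        id
    }
    fn get(&self, id: u64) -> Option<&T> { self.clips.get(&id) }
}

struct Instance { clip_id: u64, external_start: f64 }

fn main() {
    let mut pool = Pool::new();
    let clip_id = pool.add(vec![60u8, 64, 67]); // shared note content
    // Two instances at different timeline positions share one copy of the content.
    let a = Instance { clip_id, external_start: 0.0 };
    let b = Instance { clip_id, external_start: 4.0 };
    assert_eq!(pool.get(a.clip_id), pool.get(b.clip_id));
    assert_ne!(a.external_start, b.external_start);
}
```

This is what makes duplicating a clip across the timeline cheap: editing the pooled content updates every instance that references it.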
/// Render all root tracks into the output buffer
pub fn render(
&mut self,
output: &mut [f32],
pool: &AudioPool,
audio_pool: &AudioClipPool,
midi_pool: &MidiClipPool,
buffer_pool: &mut BufferPool,
playhead_seconds: f64,
sample_rate: u32,
@ -278,7 +348,8 @@ impl Project {
self.render_track(
track_id,
output,
pool,
audio_pool,
midi_pool,
buffer_pool,
ctx,
any_solo,
@ -292,7 +363,8 @@ impl Project {
&mut self,
track_id: TrackId,
output: &mut [f32],
pool: &AudioPool,
audio_pool: &AudioClipPool,
midi_pool: &MidiClipPool,
buffer_pool: &mut BufferPool,
ctx: RenderContext,
any_solo: bool,
@ -336,11 +408,11 @@ impl Project {
match self.tracks.get_mut(&track_id) {
Some(TrackNode::Audio(track)) => {
// Render audio track directly into output
track.render(output, pool, ctx.playhead_seconds, ctx.sample_rate, ctx.channels);
track.render(output, audio_pool, ctx.playhead_seconds, ctx.sample_rate, ctx.channels);
}
Some(TrackNode::Midi(track)) => {
// Render MIDI track directly into output
track.render(output, ctx.playhead_seconds, ctx.sample_rate, ctx.channels);
track.render(output, midi_pool, ctx.playhead_seconds, ctx.sample_rate, ctx.channels);
}
Some(TrackNode::Group(group)) => {
// Get children IDs, check if this group is soloed, and transform context
@ -360,7 +432,8 @@ impl Project {
self.render_track(
child_id,
&mut group_buffer,
pool,
audio_pool,
midi_pool,
buffer_pool,
child_ctx,
any_solo,

View File

@ -1,9 +1,10 @@
use super::automation::{AutomationLane, AutomationLaneId, ParameterId};
use super::clip::Clip;
use super::midi::{MidiClip, MidiEvent};
use super::clip::AudioClipInstance;
use super::midi::{MidiClipInstance, MidiEvent};
use super::midi_pool::MidiClipPool;
use super::node_graph::AudioGraph;
use super::node_graph::nodes::{AudioInputNode, AudioOutputNode};
use super::pool::AudioPool;
use super::pool::AudioClipPool;
use std::collections::HashMap;
/// Track ID type
@ -285,11 +286,12 @@ impl Metatrack {
}
}
/// MIDI track with MIDI clips and a node-based instrument
/// MIDI track with MIDI clip instances and a node-based instrument
pub struct MidiTrack {
pub id: TrackId,
pub name: String,
pub clips: Vec<MidiClip>,
/// Clip instances placed on this track (reference clips in the MidiClipPool)
pub clip_instances: Vec<MidiClipInstance>,
pub instrument_graph: AudioGraph,
pub volume: f32,
pub muted: bool,
@ -310,7 +312,7 @@ impl MidiTrack {
Self {
id,
name,
clips: Vec::new(),
clip_instances: Vec::new(),
instrument_graph: AudioGraph::new(sample_rate, default_buffer_size),
volume: 1.0,
muted: false,
@ -346,9 +348,9 @@ impl MidiTrack {
self.automation_lanes.remove(&lane_id).is_some()
}
/// Add a MIDI clip to this track
pub fn add_clip(&mut self, clip: MidiClip) {
self.clips.push(clip);
/// Add a MIDI clip instance to this track
pub fn add_clip_instance(&mut self, instance: MidiClipInstance) {
self.clip_instances.push(instance);
}
/// Set track volume
@ -420,6 +422,7 @@ impl MidiTrack {
pub fn render(
&mut self,
output: &mut [f32],
midi_pool: &MidiClipPool,
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
@ -427,18 +430,19 @@ impl MidiTrack {
let buffer_duration_seconds = output.len() as f64 / (sample_rate as f64 * channels as f64);
let buffer_end_seconds = playhead_seconds + buffer_duration_seconds;
// Collect MIDI events from all clips that overlap with current time range
// Collect MIDI events from all clip instances that overlap with current time range
let mut midi_events = Vec::new();
for clip in &self.clips {
let events = clip.get_events_in_range(
for instance in &self.clip_instances {
// Get the clip content from the pool
if let Some(clip) = midi_pool.get_clip(instance.clip_id) {
let events = instance.get_events_in_range(
clip,
playhead_seconds,
buffer_end_seconds,
sample_rate,
);
// Events now have timestamps in seconds relative to clip start
midi_events.extend(events);
}
}
// Add live MIDI events (from virtual keyboard or MIDI controllers)
// This allows real-time input to be heard during playback/recording
@ -480,11 +484,12 @@ impl MidiTrack {
}
}
/// Audio track with clips
/// Audio track with audio clip instances
pub struct AudioTrack {
pub id: TrackId,
pub name: String,
pub clips: Vec<Clip>,
/// Audio clip instances (reference content in the AudioClipPool)
pub clips: Vec<AudioClipInstance>,
pub volume: f32,
pub muted: bool,
pub solo: bool,
@ -560,8 +565,8 @@ impl AudioTrack {
self.automation_lanes.remove(&lane_id).is_some()
}
/// Add a clip to this track
pub fn add_clip(&mut self, clip: Clip) {
/// Add an audio clip instance to this track
pub fn add_clip(&mut self, clip: AudioClipInstance) {
self.clips.push(clip);
}
@ -590,7 +595,7 @@ impl AudioTrack {
pub fn render(
&mut self,
output: &mut [f32],
pool: &AudioPool,
pool: &AudioClipPool,
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
@ -602,10 +607,10 @@ impl AudioTrack {
let mut clip_buffer = vec![0.0f32; output.len()];
let mut rendered = 0;
// Render all active clips into the temporary buffer
// Render all active clip instances into the temporary buffer
for clip in &self.clips {
// Check if clip overlaps with current buffer time range
if clip.start_time < buffer_end_seconds && clip.end_time() > playhead_seconds {
if clip.external_start < buffer_end_seconds && clip.external_end() > playhead_seconds {
rendered += self.render_clip(
clip,
&mut clip_buffer,
@ -667,12 +672,13 @@ impl AudioTrack {
volume
}
/// Render a single clip into the output buffer
/// Render a single audio clip instance into the output buffer
/// Handles looping when external_duration > internal_duration
fn render_clip(
&self,
clip: &Clip,
clip: &AudioClipInstance,
output: &mut [f32],
pool: &AudioPool,
pool: &AudioClipPool,
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
@ -680,46 +686,94 @@ impl AudioTrack {
let buffer_duration_seconds = output.len() as f64 / (sample_rate as f64 * channels as f64);
let buffer_end_seconds = playhead_seconds + buffer_duration_seconds;
// Determine the time range we need to render (intersection of buffer and clip)
let render_start_seconds = playhead_seconds.max(clip.start_time);
let render_end_seconds = buffer_end_seconds.min(clip.end_time());
// Determine the time range we need to render (intersection of buffer and clip external bounds)
let render_start_seconds = playhead_seconds.max(clip.external_start);
let render_end_seconds = buffer_end_seconds.min(clip.external_end());
// If no overlap, return early
if render_start_seconds >= render_end_seconds {
return 0;
}
// Calculate offset into the output buffer (in interleaved samples)
let output_offset_seconds = render_start_seconds - playhead_seconds;
let output_offset_samples = (output_offset_seconds * sample_rate as f64 * channels as f64) as usize;
// Calculate position within the clip's audio file (in seconds)
let clip_position_seconds = render_start_seconds - clip.start_time + clip.offset;
// Calculate how many samples to render in the output
let render_duration_seconds = render_end_seconds - render_start_seconds;
let samples_to_render = (render_duration_seconds * sample_rate as f64 * channels as f64) as usize;
let samples_to_render = samples_to_render.min(output.len() - output_offset_samples);
// Get the slice of output buffer to write to
if output_offset_samples + samples_to_render > output.len() {
let internal_duration = clip.internal_duration();
if internal_duration <= 0.0 {
return 0;
}
let output_slice = &mut output[output_offset_samples..output_offset_samples + samples_to_render];
// Calculate combined gain
let combined_gain = clip.gain * self.volume;
// Render from pool with sample rate conversion
// Pass the time position in seconds, let the pool handle sample rate conversion
pool.render_from_file(
let mut total_rendered = 0;
// Process the render range sample by sample (or in chunks for efficiency)
// For looping clips, we need to handle wrap-around at the loop boundary
let samples_per_second = sample_rate as f64 * channels as f64;
// For now, render in a simpler way - iterate through the timeline range
// and use get_content_position for each sample position
let output_start_offset = ((render_start_seconds - playhead_seconds) * samples_per_second) as usize;
let output_end_offset = ((render_end_seconds - playhead_seconds) * samples_per_second) as usize;
if output_end_offset > output.len() || output_start_offset > output.len() {
return 0;
}
// If not looping, we can render in one chunk (more efficient)
if !clip.is_looping() {
// Simple case: no looping
let content_start = clip.get_content_position(render_start_seconds).unwrap_or(clip.internal_start);
let output_len = output.len();
let output_slice = &mut output[output_start_offset..output_end_offset.min(output_len)];
total_rendered = pool.render_from_file(
clip.audio_pool_index,
output_slice,
clip_position_seconds,
content_start,
combined_gain,
sample_rate,
channels,
)
);
} else {
// Looping case: need to handle wrap-around at loop boundaries
// Render in segments, one per loop iteration
let mut timeline_pos = render_start_seconds;
let mut output_offset = output_start_offset;
while timeline_pos < render_end_seconds && output_offset < output.len() {
// Calculate position within the loop
let relative_pos = timeline_pos - clip.external_start;
let loop_offset = relative_pos % internal_duration;
let content_pos = clip.internal_start + loop_offset;
// Calculate how much we can render before hitting the loop boundary
let time_to_loop_end = internal_duration - loop_offset;
let time_to_render_end = render_end_seconds - timeline_pos;
let chunk_duration = time_to_loop_end.min(time_to_render_end);
let chunk_samples = (chunk_duration * samples_per_second) as usize;
let chunk_samples = chunk_samples.min(output.len() - output_offset);
if chunk_samples == 0 {
break;
}
let output_slice = &mut output[output_offset..output_offset + chunk_samples];
let rendered = pool.render_from_file(
clip.audio_pool_index,
output_slice,
content_pos,
combined_gain,
sample_rate,
channels,
);
total_rendered += rendered;
output_offset += chunk_samples;
timeline_pos += chunk_duration;
}
}
total_rendered
}
}
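The looping arithmetic in `render_clip` maps a timeline position into the clip's source content by taking the position relative to `external_start` modulo `internal_duration`. A minimal sketch under those assumptions (the helper function is illustrative):

```rust
// Map a timeline position to a position inside the clip's source content,
// wrapping when the external duration exceeds the internal duration (looping).
// Mirrors the looping branch of render_clip; assumes internal_duration > 0.
fn content_position(
    timeline: f64,
    external_start: f64,
    internal_start: f64,
    internal_duration: f64,
) -> f64 {
    let relative = timeline - external_start;
    internal_start + relative % internal_duration
}

fn main() {
    // A 2-second source region starting at content time 0.5, placed at timeline 10.0.
    assert_eq!(content_position(10.0, 10.0, 0.5, 2.0), 0.5); // start of first pass
    assert_eq!(content_position(11.5, 10.0, 0.5, 2.0), 2.0); // late in first pass
    assert_eq!(content_position(12.0, 10.0, 0.5, 2.0), 0.5); // wrapped to second pass
    assert_eq!(content_position(15.0, 10.0, 0.5, 2.0), 1.5); // partway through third pass
}
```

The chunked rendering above exists because `render_from_file` reads contiguously: each chunk stops at the loop boundary, where the content position jumps back to `internal_start`.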

View File

@ -3,6 +3,7 @@ use crate::audio::{
TrackId,
};
use crate::audio::buffer_pool::BufferPoolStats;
use crate::audio::node_graph::nodes::LoopMode;
use crate::io::WaveformPeak;
/// Commands sent from UI/control thread to audio thread
@ -27,10 +28,14 @@ pub enum Command {
SetTrackSolo(TrackId, bool),
// Clip management commands
/// Move a clip to a new timeline position
/// Move a clip to a new timeline position (track_id, clip_id, new_external_start)
MoveClip(TrackId, ClipId, f64),
/// Trim a clip (track_id, clip_id, new_start_time, new_duration, new_offset)
TrimClip(TrackId, ClipId, f64, f64, f64),
/// Trim a clip's internal boundaries (track_id, clip_id, new_internal_start, new_internal_end)
/// This changes which portion of the source content is used
TrimClip(TrackId, ClipId, f64, f64),
/// Extend/shrink a clip's external duration (track_id, clip_id, new_external_duration)
/// If duration > internal duration, the clip will loop
ExtendClip(TrackId, ClipId, f64),
// Metatrack management commands
/// Create a new metatrack with a name
@ -66,8 +71,8 @@ pub enum Command {
CreateMidiClip(TrackId, f64, f64),
/// Add a MIDI note to a clip (track_id, clip_id, time_offset, note, velocity, duration)
AddMidiNote(TrackId, MidiClipId, f64, u8, u8, f64),
/// Add a pre-loaded MIDI clip to a track
AddLoadedMidiClip(TrackId, MidiClip),
/// Add a pre-loaded MIDI clip to a track (track_id, clip, start_time)
AddLoadedMidiClip(TrackId, MidiClip, f64),
/// Update MIDI clip notes (track_id, clip_id, notes: Vec<(start_time, note, velocity, duration)>)
/// NOTE: May need to switch to individual note operations if this becomes slow on clips with many notes
UpdateMidiClipNotes(TrackId, MidiClipId, Vec<(f64, u8, u8, f64)>),
@ -118,6 +123,10 @@ pub enum Command {
/// Set the active MIDI track for external MIDI input routing (track_id or None)
SetActiveMidiTrack(Option<TrackId>),
// Metronome command
/// Enable or disable the metronome click track
SetMetronomeEnabled(bool),
// Node graph commands
/// Add a node to a track's instrument graph (track_id, node_type, position_x, position_y)
GraphAddNode(TrackId, String, f32, f32),
@ -147,10 +156,10 @@ pub enum Command {
/// Load a sample into a SimpleSampler node (track_id, node_id, file_path)
SamplerLoadSample(TrackId, u32, String),
/// Add a sample layer to a MultiSampler node (track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max)
MultiSamplerAddLayer(TrackId, u32, String, u8, u8, u8, u8, u8),
/// Update a MultiSampler layer's configuration (track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max)
MultiSamplerUpdateLayer(TrackId, u32, usize, u8, u8, u8, u8, u8),
/// Add a sample layer to a MultiSampler node (track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode)
MultiSamplerAddLayer(TrackId, u32, String, u8, u8, u8, u8, u8, Option<usize>, Option<usize>, LoopMode),
/// Update a MultiSampler layer's configuration (track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode)
MultiSamplerUpdateLayer(TrackId, u32, usize, u8, u8, u8, u8, u8, Option<usize>, Option<usize>, LoopMode),
/// Remove a layer from a MultiSampler node (track_id, node_id, layer_index)
MultiSamplerRemoveLayer(TrackId, u32, usize),
@ -211,6 +220,8 @@ pub enum AudioEvent {
GraphStateChanged(TrackId),
/// Preset fully loaded (track_id) - emitted after all nodes and samples are loaded
GraphPresetLoaded(TrackId),
/// Preset has been saved to file (track_id, preset_path)
GraphPresetSaved(TrackId, String),
}
/// Synchronous queries sent from UI thread to audio thread
@ -246,6 +257,8 @@ pub enum Query {
GetPoolWaveform(usize, usize),
/// Get file info from audio pool (pool_index) - returns (duration, sample_rate, channels)
GetPoolFileInfo(usize),
/// Export audio to file (settings, output_path)
ExportAudio(crate::audio::ExportSettings, std::path::PathBuf),
}
/// Oscilloscope data from a node
@ -303,4 +316,6 @@ pub enum QueryResponse {
PoolWaveform(Result<Vec<crate::io::WaveformPeak>, String>),
/// Pool file info (duration, sample_rate, channels)
PoolFileInfo(Result<(f64, u32, u32), String>),
/// Audio exported
AudioExported(Result<(), String>),
}

View File

@ -157,9 +157,8 @@ pub fn load_midi_file<P: AsRef<Path>>(
(final_delta_ticks as f64 / ticks_per_beat) * (microseconds_per_beat / 1_000_000.0);
let duration_seconds = accumulated_time + final_delta_time;
// Create the MIDI clip
let mut clip = MidiClip::new(clip_id, 0.0, duration_seconds);
clip.events = events;
// Create the MIDI clip (content only, positioning happens when creating instance)
let clip = MidiClip::new(clip_id, events, duration_seconds, "Imported MIDI".to_string());
Ok(clip)
}

View File

@ -9,6 +9,7 @@ use std::time::Duration;
pub struct MidiInputManager {
connections: Arc<Mutex<Vec<ActiveMidiConnection>>>,
active_track_id: Arc<Mutex<Option<TrackId>>>,
#[allow(dead_code)]
command_tx: Arc<Mutex<rtrb::Producer<Command>>>,
}
@ -74,6 +75,26 @@ impl MidiInputManager {
// Get all available MIDI input ports
let ports = midi_in.ports();
// Get list of currently available device names
let mut available_devices = Vec::new();
for port in &ports {
if let Ok(port_name) = midi_in.port_name(port) {
available_devices.push(port_name);
}
}
// Remove disconnected devices from our connections list
{
let mut conns = connections.lock().unwrap();
let before_count = conns.len();
conns.retain(|conn| available_devices.contains(&conn.device_name));
let after_count = conns.len();
if before_count != after_count {
println!("MIDI: Removed {} disconnected device(s)", before_count - after_count);
}
}
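The device-pruning step above keeps only connections whose device is still present in the current port scan. `Vec::retain` in isolation (function and device names here are illustrative):

```rust
// Drop connections whose device name no longer appears in the available list,
// returning how many were removed - the same retain-based pruning used above.
fn prune_disconnected(connected: &mut Vec<String>, available: &[String]) -> usize {
    let before = connected.len();
    connected.retain(|name| available.contains(name));
    before - connected.len()
}

fn main() {
    let mut conns = vec!["Keystation 49".to_string(), "Launchpad X".to_string()];
    let available = vec!["Keystation 49".to_string()];
    let removed = prune_disconnected(&mut conns, &available);
    assert_eq!(removed, 1);
    assert_eq!(conns, vec!["Keystation 49".to_string()]);
}
```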
// Get list of already connected device names
let connected_devices: Vec<String> = {
let conns = connections.lock().unwrap();
@ -125,16 +146,9 @@ impl MidiInputManager {
connection,
});
println!("MIDI: Connected to: {}", port_name);
// Need to recreate MidiInput for next iteration
midi_in = MidiInput::new("Lightningbeam")
.map_err(|e| format!("Failed to recreate MIDI input: {}", e))?;
}
Err(e) => {
eprintln!("MIDI: Failed to connect to {}: {}", port_name, e);
// Recreate MidiInput to continue with other ports
midi_in = MidiInput::new("Lightningbeam")
.map_err(|e| format!("Failed to recreate MIDI input: {}", e))?;
}
}
}

View File

@ -847,8 +847,7 @@ fn execute_command(
// Load the MIDI file
match load_midi_file(file_path, app.next_clip_id, 48000) {
Ok(mut midi_clip) => {
midi_clip.start_time = start_time;
Ok(midi_clip) => {
let clip_id = midi_clip.id;
let duration = midi_clip.duration;
let event_count = midi_clip.events.len();
@ -882,8 +881,8 @@ fn execute_command(
app.add_clip(track_id, clip_id, start_time, duration, file_path.to_string(), notes);
app.next_clip_id += 1;
// Send to audio engine
controller.add_loaded_midi_clip(track_id, midi_clip);
// Send to audio engine with the start_time (clip content is separate from timeline position)
controller.add_loaded_midi_clip(track_id, midi_clip, start_time);
app.set_status(format!("Loaded {} ({} events, {:.2}s) to track {} at {:.2}s",
file_path, event_count, duration, track_id, start_time));

BIN
screenshots/animation.png Normal file (112 KiB)

BIN
screenshots/music.png Normal file (173 KiB)

BIN
screenshots/video.png Normal file (1.2 MiB)
1902
src-tauri/Cargo.lock generated

File diff suppressed because it is too large

View File

@ -31,9 +31,9 @@ env_logger = "0.11"
daw-backend = { path = "../daw-backend" }
cpal = "0.15"
rtrb = "0.3"
tokio = { version = "1", features = ["sync", "time"] }
# Video decoding
ffmpeg-next = "7.0"
lru = "0.12"
# WebSocket for frame streaming (disable default features to remove tracing, but keep handshake)
@ -47,6 +47,13 @@ bytemuck = { version = "1.14", features = ["derive"] }
raw-window-handle = "0.6"
image = "0.24"
[target.'cfg(target_os = "macos")'.dependencies]
ffmpeg-next = { version = "7.0", features = ["build"] }
[target.'cfg(not(target_os = "macos"))'.dependencies]
ffmpeg-next = "7.0"
[profile.dev]
opt-level = 1 # Enable basic optimizations in debug mode for audio decoding performance

View File

@ -1,9 +1,11 @@
use daw_backend::{AudioEvent, AudioSystem, EngineController, EventEmitter, WaveformPeak};
use daw_backend::audio::pool::AudioPoolEntry;
use ffmpeg_next::ffi::FF_LOSS_COLORQUANT;
use std::sync::{Arc, Mutex};
use std::collections::HashMap;
use std::path::Path;
use tauri::{Emitter, Manager};
use tokio::sync::oneshot;
#[derive(serde::Serialize)]
pub struct AudioFileMetadata {
@ -39,6 +41,8 @@ pub struct AudioState {
pub(crate) next_graph_node_id: u32,
// Track next node ID for each VoiceAllocator template (VoiceAllocator backend ID -> next template node ID)
pub(crate) template_node_counters: HashMap<u32, u32>,
// Pending preset save notifications (preset_path -> oneshot sender)
pub(crate) preset_save_waiters: Arc<Mutex<HashMap<String, oneshot::Sender<()>>>>,
}
impl Default for AudioState {
@ -51,6 +55,7 @@ impl Default for AudioState {
next_track_id: 0,
next_pool_index: 0,
next_graph_node_id: 0,
preset_save_waiters: Arc::new(Mutex::new(HashMap::new())),
template_node_counters: HashMap::new(),
}
}
@ -59,10 +64,20 @@ impl Default for AudioState {
/// Implementation of EventEmitter that uses Tauri's event system
struct TauriEventEmitter {
app_handle: tauri::AppHandle,
preset_save_waiters: Arc<Mutex<HashMap<String, oneshot::Sender<()>>>>,
}
impl EventEmitter for TauriEventEmitter {
fn emit(&self, event: AudioEvent) {
// Handle preset save notifications
if let AudioEvent::GraphPresetSaved(_, ref preset_path) = event {
if let Ok(mut waiters) = self.preset_save_waiters.lock() {
if let Some(sender) = waiters.remove(preset_path) {
let _ = sender.send(());
}
}
}
// Serialize the event to the format expected by the frontend
let serialized_event = match event {
AudioEvent::PlaybackPosition(time) => {
@@ -98,6 +113,9 @@ impl EventEmitter for TauriEventEmitter {
AudioEvent::GraphPresetLoaded(track_id) => {
SerializedAudioEvent::GraphPresetLoaded { track_id }
}
AudioEvent::GraphPresetSaved(track_id, preset_path) => {
SerializedAudioEvent::GraphPresetSaved { track_id, preset_path }
}
AudioEvent::MidiRecordingStopped(track_id, clip_id, note_count) => {
SerializedAudioEvent::MidiRecordingStopped { track_id, clip_id, note_count }
}
@@ -140,7 +158,10 @@ pub async fn audio_init(
}
// Create TauriEventEmitter
let emitter = Arc::new(TauriEventEmitter { app_handle });
let emitter = Arc::new(TauriEventEmitter {
app_handle,
preset_save_waiters: audio_state.preset_save_waiters.clone(),
});
// Get buffer size from audio_state (default is 256)
let buffer_size = audio_state.buffer_size;
@@ -201,6 +222,20 @@ pub async fn audio_stop(state: tauri::State<'_, Arc<Mutex<AudioState>>>) -> Resu
}
}
#[tauri::command]
pub async fn set_metronome_enabled(
state: tauri::State<'_, Arc<Mutex<AudioState>>>,
enabled: bool
) -> Result<(), String> {
let mut audio_state = state.lock().unwrap();
if let Some(controller) = &mut audio_state.controller {
controller.set_metronome_enabled(enabled);
Ok(())
} else {
Err("Audio not initialized".to_string())
}
}
#[tauri::command]
pub async fn audio_test_beep(state: tauri::State<'_, Arc<Mutex<AudioState>>>) -> Result<(), String> {
let mut audio_state = state.lock().unwrap();
@@ -372,13 +407,28 @@ pub async fn audio_trim_clip(
state: tauri::State<'_, Arc<Mutex<AudioState>>>,
track_id: u32,
clip_id: u32,
new_start_time: f64,
new_duration: f64,
new_offset: f64,
internal_start: f64,
internal_end: f64,
) -> Result<(), String> {
let mut audio_state = state.lock().unwrap();
if let Some(controller) = &mut audio_state.controller {
controller.trim_clip(track_id, clip_id, new_start_time, new_duration, new_offset);
controller.trim_clip(track_id, clip_id, internal_start, internal_end);
Ok(())
} else {
Err("Audio not initialized".to_string())
}
}
#[tauri::command]
pub async fn audio_extend_clip(
state: tauri::State<'_, Arc<Mutex<AudioState>>>,
track_id: u32,
clip_id: u32,
new_external_duration: f64,
) -> Result<(), String> {
let mut audio_state = state.lock().unwrap();
if let Some(controller) = &mut audio_state.controller {
controller.extend_clip(track_id, clip_id, new_external_duration);
Ok(())
} else {
Err("Audio not initialized".to_string())
@@ -567,11 +617,8 @@ pub async fn audio_load_midi_file(
let sample_rate = audio_state.sample_rate;
if let Some(controller) = &mut audio_state.controller {
// Load and parse the MIDI file
let mut clip = daw_backend::load_midi_file(&path, 0, sample_rate)?;
// Set the start time
clip.start_time = start_time;
// Load and parse the MIDI file (clip content only, no positioning)
let clip = daw_backend::load_midi_file(&path, 0, sample_rate)?;
let duration = clip.duration;
// Extract note data from MIDI events
@@ -597,8 +644,8 @@ pub async fn audio_load_midi_file(
}
}
// Add the loaded MIDI clip to the track
controller.add_loaded_midi_clip(track_id, clip);
// Add the loaded MIDI clip to the track at the specified start_time
controller.add_loaded_midi_clip(track_id, clip, start_time);
Ok(MidiFileMetadata {
duration,
@@ -925,6 +972,49 @@ pub async fn graph_load_preset(
}
}
#[tauri::command]
pub async fn graph_load_preset_from_json(
state: tauri::State<'_, Arc<Mutex<AudioState>>>,
track_id: u32,
preset_json: String,
) -> Result<(), String> {
use daw_backend::GraphPreset;
use std::io::Write;
let mut audio_state = state.lock().unwrap();
// Parse the preset JSON to count nodes
let preset = GraphPreset::from_json(&preset_json)
.map_err(|e| format!("Failed to parse preset: {}", e))?;
// Update the node ID counter to account for nodes in the preset
let node_count = preset.nodes.len() as u32;
audio_state.next_graph_node_id = node_count;
if let Some(controller) = &mut audio_state.controller {
// Write JSON to a temporary file
let temp_path = std::env::temp_dir().join(format!("lb_temp_preset_{}.json", track_id));
let mut file = std::fs::File::create(&temp_path)
.map_err(|e| format!("Failed to create temp file: {}", e))?;
file.write_all(preset_json.as_bytes())
.map_err(|e| format!("Failed to write temp file: {}", e))?;
drop(file);
// Load from the temp file
controller.graph_load_preset(track_id, temp_path.to_string_lossy().to_string());
// Clean up temp file (after a delay to allow loading)
std::thread::spawn(move || {
std::thread::sleep(std::time::Duration::from_millis(500));
let _ = std::fs::remove_file(temp_path);
});
Ok(())
} else {
Err("Audio not initialized".to_string())
}
}
#[derive(serde::Serialize)]
pub struct PresetInfo {
pub name: String,
@@ -1122,9 +1212,21 @@ pub async fn multi_sampler_add_layer(
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: Option<String>,
) -> Result<(), String> {
use daw_backend::audio::node_graph::nodes::LoopMode;
let mut audio_state = state.lock().unwrap();
// Parse loop mode string to enum
let loop_mode_enum = match loop_mode.as_deref() {
Some("continuous") => LoopMode::Continuous,
Some("oneshot") | Some("one-shot") => LoopMode::OneShot,
_ => LoopMode::OneShot, // Default
};
if let Some(controller) = &mut audio_state.controller {
controller.multi_sampler_add_layer(
track_id,
@@ -1135,6 +1237,9 @@
root_key,
velocity_min,
velocity_max,
loop_start,
loop_end,
loop_mode_enum,
);
Ok(())
} else {
@@ -1150,6 +1255,9 @@ pub struct LayerInfo {
pub root_key: u8,
pub velocity_min: u8,
pub velocity_max: u8,
pub loop_start: Option<usize>,
pub loop_end: Option<usize>,
pub loop_mode: String,
}
#[tauri::command]
@@ -1161,7 +1269,15 @@ pub async fn multi_sampler_get_layers(
eprintln!("[multi_sampler_get_layers] FUNCTION CALLED with track_id: {}, node_id: {}", track_id, node_id);
use daw_backend::GraphPreset;
// Set up oneshot channel to wait for preset save completion
let (tx, rx) = oneshot::channel();
let (temp_path_str, preset_save_waiters) = {
let mut audio_state = state.lock().unwrap();
// Clone preset_save_waiters first before any mutable borrows
let preset_save_waiters = audio_state.preset_save_waiters.clone();
if let Some(controller) = &mut audio_state.controller {
// Use preset serialization to get node data including layers
// Use timestamp to ensure unique temp file for each query to avoid conflicts
@@ -1173,6 +1289,12 @@ pub async fn multi_sampler_get_layers(
let temp_path_str = temp_path.to_string_lossy().to_string();
eprintln!("[multi_sampler_get_layers] Temp path: {}", temp_path_str);
// Register waiter for this preset path
{
let mut waiters = preset_save_waiters.lock().unwrap();
waiters.insert(temp_path_str.clone(), tx);
}
controller.graph_save_preset(
track_id,
temp_path_str.clone(),
@@ -1181,9 +1303,33 @@
vec![]
);
// Give the audio thread time to process
std::thread::sleep(std::time::Duration::from_millis(50));
(temp_path_str, preset_save_waiters)
} else {
eprintln!("[multi_sampler_get_layers] Audio not initialized");
return Err("Audio not initialized".to_string());
}
};
// Wait for preset save event with timeout
eprintln!("[multi_sampler_get_layers] Waiting for preset save completion...");
match tokio::time::timeout(std::time::Duration::from_secs(5), rx).await {
Ok(Ok(())) => {
eprintln!("[multi_sampler_get_layers] Preset save complete, reading file...");
}
Ok(Err(_)) => {
eprintln!("[multi_sampler_get_layers] Preset save channel closed");
return Ok(Vec::new());
}
Err(_) => {
eprintln!("[multi_sampler_get_layers] Timeout waiting for preset save");
// Clean up waiter
let mut waiters = preset_save_waiters.lock().unwrap();
waiters.remove(&temp_path_str);
return Ok(Vec::new());
}
}
let temp_path = std::path::PathBuf::from(&temp_path_str);
// Read the temp file and parse it
eprintln!("[multi_sampler_get_layers] Reading temp file...");
match std::fs::read_to_string(&temp_path) {
@@ -1208,13 +1354,22 @@ pub async fn multi_sampler_get_layers(
// Check if it's a MultiSampler
if let daw_backend::audio::node_graph::preset::SampleData::MultiSampler { layers } = sample_data {
eprintln!("[multi_sampler_get_layers] Returning {} layers", layers.len());
return Ok(layers.iter().map(|layer| LayerInfo {
return Ok(layers.iter().map(|layer| {
let loop_mode_str = match layer.loop_mode {
daw_backend::audio::node_graph::nodes::LoopMode::Continuous => "continuous",
daw_backend::audio::node_graph::nodes::LoopMode::OneShot => "oneshot",
};
LayerInfo {
file_path: layer.file_path.clone().unwrap_or_default(),
key_min: layer.key_min,
key_max: layer.key_max,
root_key: layer.root_key,
velocity_min: layer.velocity_min,
velocity_max: layer.velocity_max,
loop_start: layer.loop_start,
loop_end: layer.loop_end,
loop_mode: loop_mode_str.to_string(),
}
}).collect());
} else {
eprintln!("[multi_sampler_get_layers] sample_data is not MultiSampler type");
@@ -1233,10 +1388,6 @@ pub async fn multi_sampler_get_layers(
Ok(Vec::new()) // Return empty list if file doesn't exist
}
}
} else {
eprintln!("[multi_sampler_get_layers] Audio not initialized");
Err("Audio not initialized".to_string())
}
}
#[tauri::command]
@@ -1250,9 +1401,21 @@ pub async fn multi_sampler_update_layer(
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: Option<String>,
) -> Result<(), String> {
use daw_backend::audio::node_graph::nodes::LoopMode;
let mut audio_state = state.lock().unwrap();
// Parse loop mode string to enum
let loop_mode_enum = match loop_mode.as_deref() {
Some("continuous") => LoopMode::Continuous,
Some("oneshot") | Some("one-shot") => LoopMode::OneShot,
_ => LoopMode::OneShot, // Default
};
if let Some(controller) = &mut audio_state.controller {
controller.multi_sampler_update_layer(
track_id,
@@ -1263,6 +1426,9 @@
root_key,
velocity_min,
velocity_max,
loop_start,
loop_end,
loop_mode_enum,
);
Ok(())
} else {
@@ -1418,6 +1584,7 @@ pub enum SerializedAudioEvent {
GraphConnectionError { track_id: u32, message: String },
GraphStateChanged { track_id: u32 },
GraphPresetLoaded { track_id: u32 },
GraphPresetSaved { track_id: u32, preset_path: String },
}
// audio_get_events command removed - events are now pushed via Tauri event system
@@ -1502,3 +1669,45 @@ pub async fn audio_load_track_graph(
Err("Audio not initialized".to_string())
}
}
#[tauri::command]
pub async fn audio_export(
state: tauri::State<'_, Arc<Mutex<AudioState>>>,
output_path: String,
format: String,
sample_rate: u32,
channels: u32,
bit_depth: u16,
mp3_bitrate: u32,
start_time: f64,
end_time: f64,
) -> Result<(), String> {
let mut audio_state = state.lock().unwrap();
if let Some(controller) = &mut audio_state.controller {
// Parse format
let export_format = match format.as_str() {
"wav" => daw_backend::audio::ExportFormat::Wav,
"flac" => daw_backend::audio::ExportFormat::Flac,
_ => return Err(format!("Unsupported format: {}", format)),
};
// Create export settings
let settings = daw_backend::audio::ExportSettings {
format: export_format,
sample_rate,
channels,
bit_depth,
mp3_bitrate,
start_time,
end_time,
};
// Call export through controller
controller.export_audio(&settings, &output_path)?;
Ok(())
} else {
Err("Audio not initialized".to_string())
}
}
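The preset-save handshake added above (register a one-shot sender keyed by the preset path, trigger the save, then await the completion event with a timeout instead of sleeping a fixed 50 ms) can be sketched with std channels alone. This is an illustrative stand-in: the real code keeps the map in Tauri-managed state and uses tokio's `oneshot`.

```rust
use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};
use std::time::Duration;

// Shared map of pending save notifications (preset path -> sender),
// mirroring preset_save_waiters in AudioState.
type Waiters = Arc<Mutex<HashMap<String, mpsc::Sender<()>>>>;

// Caller side: register a waiter for `path` and keep the receiving end.
fn register_waiter(waiters: &Waiters, path: &str) -> mpsc::Receiver<()> {
    let (tx, rx) = mpsc::channel();
    waiters.lock().unwrap().insert(path.to_string(), tx);
    rx
}

// Event side: on a "preset saved" event for `path`, signal the matching
// waiter (if any) and drop it from the map so it fires at most once.
fn notify_saved(waiters: &Waiters, path: &str) -> bool {
    match waiters.lock().unwrap().remove(path) {
        Some(tx) => tx.send(()).is_ok(),
        None => false,
    }
}

fn main() {
    let waiters: Waiters = Arc::new(Mutex::new(HashMap::new()));
    let rx = register_waiter(&waiters, "/tmp/lb_temp_preset.json");

    // Simulate the audio thread emitting GraphPresetSaved for this path.
    let w = Arc::clone(&waiters);
    std::thread::spawn(move || {
        notify_saved(&w, "/tmp/lb_temp_preset.json");
    });

    // The caller blocks with a timeout rather than a fixed sleep.
    match rx.recv_timeout(Duration::from_secs(5)) {
        Ok(()) => println!("preset save confirmed"),
        Err(_) => println!("timed out waiting for preset save"),
    }
}
```

On timeout the caller can remove its own entry from the map, as `multi_sampler_get_layers` does, so a late event finds no stale sender.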


@@ -1,13 +1,14 @@
use std::{path::PathBuf, sync::{Arc, Mutex}};
use tauri_plugin_log::{Target, TargetKind};
use log::{trace, info, debug, warn, error};
use tracing_subscriber::EnvFilter;
use chrono::Local;
use tauri::{AppHandle, Manager, Url, WebviewUrl, WebviewWindowBuilder};
mod audio;
mod video;
mod frame_streamer;
mod renderer;
mod render_window;
mod video_server;
#[derive(Default)]
@@ -15,12 +16,6 @@ struct AppState {
counter: u32,
}
struct RenderWindowState {
handle: Option<render_window::RenderWindowHandle>,
canvas_offset: (i32, i32), // Canvas position relative to window
canvas_size: (u32, u32),
}
// Learn more about Tauri commands at https://tauri.app/develop/calling-rust/
#[tauri::command]
fn greet(name: &str) -> String {
@@ -49,170 +44,44 @@ fn error(msg: String) {
}
#[tauri::command]
fn get_frame_streamer_port(
frame_streamer: tauri::State<'_, Arc<Mutex<frame_streamer::FrameStreamer>>>,
) -> u16 {
let streamer = frame_streamer.lock().unwrap();
streamer.port()
}
async fn open_folder_dialog(app: AppHandle, title: String) -> Result<Option<String>, String> {
use tauri_plugin_dialog::DialogExt;
// Render window commands
#[tauri::command]
fn render_window_create(
x: i32,
y: i32,
width: u32,
height: u32,
canvas_offset_x: i32,
canvas_offset_y: i32,
app: tauri::AppHandle,
state: tauri::State<'_, Arc<Mutex<RenderWindowState>>>,
) -> Result<(), String> {
let mut render_state = state.lock().unwrap();
let folder = app.dialog()
.file()
.set_title(&title)
.blocking_pick_folder();
if render_state.handle.is_some() {
return Err("Render window already exists".to_string());
}
let handle = render_window::spawn_render_window(x, y, width, height)?;
render_state.handle = Some(handle);
render_state.canvas_offset = (canvas_offset_x, canvas_offset_y);
render_state.canvas_size = (width, height);
// Start a background thread to poll main window position
let state_clone = state.inner().clone();
let app_clone = app.clone();
std::thread::spawn(move || {
let mut last_pos: Option<(i32, i32)> = None;
loop {
std::thread::sleep(std::time::Duration::from_millis(50));
if let Some(main_window) = app_clone.get_webview_window("main") {
if let Ok(pos) = main_window.outer_position() {
let current_pos = (pos.x, pos.y);
// Only update if position actually changed
if last_pos != Some(current_pos) {
eprintln!("[WindowSync] Main window position: {:?}", current_pos);
let render_state = state_clone.lock().unwrap();
if let Some(handle) = &render_state.handle {
let new_x = pos.x + render_state.canvas_offset.0;
let new_y = pos.y + render_state.canvas_offset.1;
handle.set_position(new_x, new_y);
last_pos = Some(current_pos);
} else {
break; // No handle, exit thread
}
}
} else {
break; // Window closed, exit thread
}
} else {
break; // Main window gone, exit thread
}
}
});
Ok(())
Ok(folder.map(|path| path.to_string()))
}
#[tauri::command]
fn render_window_update_gradient(
top_r: f32,
top_g: f32,
top_b: f32,
top_a: f32,
bottom_r: f32,
bottom_g: f32,
bottom_b: f32,
bottom_a: f32,
state: tauri::State<'_, Arc<Mutex<RenderWindowState>>>,
) -> Result<(), String> {
let render_state = state.lock().unwrap();
async fn read_folder_files(path: String) -> Result<Vec<String>, String> {
use std::fs;
if let Some(handle) = &render_state.handle {
handle.update_gradient(
[top_r, top_g, top_b, top_a],
[bottom_r, bottom_g, bottom_b, bottom_a],
);
Ok(())
} else {
Err("Render window not created".to_string())
let entries = fs::read_dir(&path)
.map_err(|e| format!("Failed to read directory: {}", e))?;
let audio_extensions = vec!["wav", "aif", "aiff", "flac", "mp3", "ogg"];
let mut files = Vec::new();
for entry in entries {
let entry = entry.map_err(|e| format!("Failed to read entry: {}", e))?;
let path = entry.path();
if path.is_file() {
if let Some(ext) = path.extension() {
let ext_str = ext.to_string_lossy().to_lowercase();
if audio_extensions.contains(&ext_str.as_str()) {
if let Some(filename) = path.file_name() {
files.push(filename.to_string_lossy().to_string());
}
}
#[tauri::command]
fn render_window_set_position(
x: i32,
y: i32,
state: tauri::State<'_, Arc<Mutex<RenderWindowState>>>,
) -> Result<(), String> {
let render_state = state.lock().unwrap();
if let Some(handle) = &render_state.handle {
handle.set_position(x, y);
Ok(())
} else {
Err("Render window not created".to_string())
}
}
#[tauri::command]
fn render_window_sync_position(
app: tauri::AppHandle,
state: tauri::State<'_, Arc<Mutex<RenderWindowState>>>,
) -> Result<(), String> {
let render_state = state.lock().unwrap();
if let Some(main_window) = app.get_webview_window("main") {
if let Ok(pos) = main_window.outer_position() {
if let Some(handle) = &render_state.handle {
let new_x = pos.x + render_state.canvas_offset.0;
let new_y = pos.y + render_state.canvas_offset.1;
eprintln!("[Manual Sync] Updating to ({}, {})", new_x, new_y);
handle.set_position(new_x, new_y);
Ok(())
} else {
Err("Render window not created".to_string())
}
} else {
Err("Could not get window position".to_string())
}
} else {
Err("Main window not found".to_string())
}
}
#[tauri::command]
fn render_window_set_size(
width: u32,
height: u32,
state: tauri::State<'_, Arc<Mutex<RenderWindowState>>>,
) -> Result<(), String> {
let render_state = state.lock().unwrap();
if let Some(handle) = &render_state.handle {
handle.set_size(width, height);
Ok(())
} else {
Err("Render window not created".to_string())
}
}
#[tauri::command]
fn render_window_close(
state: tauri::State<'_, Arc<Mutex<RenderWindowState>>>,
) -> Result<(), String> {
let mut render_state = state.lock().unwrap();
if let Some(handle) = render_state.handle.take() {
handle.close();
Ok(())
} else {
Err("Render window not created".to_string())
}
Ok(files)
}
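The extension filter in `read_folder_files` above can be factored into a small case-insensitive predicate for testing; `is_audio_file` is an illustrative name, not a function in the codebase.

```rust
use std::path::Path;

// True when the file name has one of the audio extensions accepted by
// read_folder_files (compared case-insensitively).
fn is_audio_file(name: &str) -> bool {
    const AUDIO_EXTENSIONS: [&str; 6] = ["wav", "aif", "aiff", "flac", "mp3", "ogg"];
    Path::new(name)
        .extension()
        .map(|ext| {
            let ext = ext.to_string_lossy().to_lowercase();
            AUDIO_EXTENSIONS.contains(&ext.as_str())
        })
        .unwrap_or(false)
}

fn main() {
    for name in ["kick.WAV", "loop.flac", "notes.txt", "README"] {
        println!("{name}: {}", is_audio_file(name));
    }
}
```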
use tauri::PhysicalSize;
@@ -300,26 +169,17 @@ fn handle_file_associations(app: AppHandle, files: Vec<PathBuf>) {
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
// Initialize env_logger with Error level only
env_logger::Builder::from_default_env()
.filter_level(log::LevelFilter::Error)
.init();
// Initialize WebSocket frame streamer
let frame_streamer = frame_streamer::FrameStreamer::new()
.expect("Failed to start frame streamer");
eprintln!("[App] Frame streamer started on port {}", frame_streamer.port());
let pkg_name = env!("CARGO_PKG_NAME").to_string();
// Initialize video HTTP server
let video_server = video_server::VideoServer::new()
.expect("Failed to start video server");
eprintln!("[App] Video server started on port {}", video_server.port());
tauri::Builder::default()
.manage(Mutex::new(AppState::default()))
.manage(Arc::new(Mutex::new(audio::AudioState::default())))
.manage(Arc::new(Mutex::new(video::VideoState::default())))
.manage(Arc::new(Mutex::new(frame_streamer)))
.manage(Arc::new(Mutex::new(RenderWindowState {
handle: None,
canvas_offset: (0, 0),
canvas_size: (0, 0),
})))
.manage(Arc::new(Mutex::new(video_server)))
.setup(|app| {
#[cfg(any(windows, target_os = "linux"))] // Windows/Linux needs different handling from macOS
{
@@ -355,48 +215,39 @@ pub fn run() {
}
Ok(())
})
// .plugin(
// tauri_plugin_log::Builder::new()
// .filter(|metadata| {
// // ONLY allow Error-level logs, block everything else
// metadata.level() == log::Level::Error
// })
// .timezone_strategy(tauri_plugin_log::TimezoneStrategy::UseLocal)
// .format(|out, message, record| {
// let date = Local::now().format("%Y-%m-%d %H:%M:%S").to_string();
// out.finish(format_args!(
// "{}[{}] {}",
// date,
// record.level(),
// message
// ))
// })
// .targets([
// Target::new(TargetKind::Stdout),
// // LogDir locations:
// // Linux: /home/user/.local/share/org.lightningbeam.core/logs
// // macOS: /Users/user/Library/Logs/org.lightningbeam.core/logs
// // Windows: C:\Users\user\AppData\Local\org.lightningbeam.core\logs
// Target::new(TargetKind::LogDir { file_name: Some("logs".to_string()) }),
// Target::new(TargetKind::Webview),
// ])
// .build()
// )
.plugin(
tauri_plugin_log::Builder::new()
.timezone_strategy(tauri_plugin_log::TimezoneStrategy::UseLocal)
.format(|out, message, record| {
let date = Local::now().format("%Y-%m-%d %H:%M:%S").to_string();
out.finish(format_args!(
"{}[{}] {}",
date,
record.level(),
message
))
})
.targets([
Target::new(TargetKind::Stdout),
// LogDir locations:
// Linux: /home/user/.local/share/org.lightningbeam.core/logs
// macOS: /Users/user/Library/Logs/org.lightningbeam.core/logs
// Windows: C:\Users\user\AppData\Local\org.lightningbeam.core\logs
Target::new(TargetKind::LogDir { file_name: Some("logs".to_string()) }),
Target::new(TargetKind::Webview),
])
.build()
)
.plugin(tauri_plugin_dialog::init())
.plugin(tauri_plugin_fs::init())
.plugin(tauri_plugin_shell::init())
.invoke_handler(tauri::generate_handler![
greet, trace, debug, info, warn, error, create_window, get_frame_streamer_port,
render_window_create,
render_window_update_gradient,
render_window_set_position,
render_window_set_size,
render_window_sync_position,
render_window_close,
greet, trace, debug, info, warn, error, create_window,
audio::audio_init,
audio::audio_reset,
audio::audio_play,
audio::audio_stop,
audio::set_metronome_enabled,
audio::audio_seek,
audio::audio_test_beep,
audio::audio_set_track_parameter,
@@ -405,6 +256,7 @@ pub fn run() {
audio::audio_add_clip,
audio::audio_move_clip,
audio::audio_trim_clip,
audio::audio_extend_clip,
audio::audio_start_recording,
audio::audio_stop_recording,
audio::audio_pause_recording,
@@ -431,6 +283,7 @@ pub fn run() {
audio::graph_set_output_node,
audio::graph_save_preset,
audio::graph_load_preset,
audio::graph_load_preset_from_json,
audio::graph_list_presets,
audio::graph_delete_preset,
audio::graph_get_state,
@@ -451,10 +304,17 @@ pub fn run() {
audio::audio_resolve_missing_file,
audio::audio_serialize_track_graph,
audio::audio_load_track_graph,
audio::audio_export,
video::video_load_file,
video::video_stream_frame,
video::video_get_frame,
video::video_get_frames_batch,
video::video_set_cache_size,
open_folder_dialog,
read_folder_files,
video::video_get_pool_info,
video::video_ipc_benchmark,
video::video_get_transcode_status,
video::video_allow_asset,
])
// .manage(window_counter)
.build(tauri::generate_context!())
@@ -482,4 +342,5 @@ pub fn run() {
}
},
);
tracing_subscriber::fmt().with_env_filter(EnvFilter::new(format!("{}=trace", pkg_name))).init();
}


@@ -1,7 +1,7 @@
{
"$schema": "https://schema.tauri.app/config/2",
"productName": "Lightningbeam",
"version": "0.7.14-alpha",
"version": "0.8.1-alpha",
"identifier": "org.lightningbeam.core",
"build": {
"frontendDist": "../src"


@@ -2612,4 +2612,56 @@ export const actions = {
}
},
},
clearNodeGraph: {
execute: async (action) => {
// Get the current graph state to find all node IDs
const graphStateJson = await invoke('graph_get_state', { trackId: action.trackId });
const graphState = JSON.parse(graphStateJson);
// Remove all nodes from backend
for (const node of graphState.nodes) {
try {
await invoke("graph_remove_node", {
trackId: action.trackId,
nodeId: node.id,
});
} catch (e) {
console.error(`Failed to remove node ${node.id}:`, e);
}
}
// Reload the graph from backend
if (context.reloadNodeEditor) {
await context.reloadNodeEditor();
}
// Update minimap
if (context.updateMinimap) {
setTimeout(() => context.updateMinimap(), 100);
}
},
rollback: async (action) => {
// Restore the entire graph from the saved preset JSON
try {
await invoke("graph_load_preset_from_json", {
trackId: action.trackId,
presetJson: action.savedGraphJson,
});
// Reload the graph editor to show the restored nodes
if (context.reloadNodeEditor) {
await context.reloadNodeEditor();
}
// Update minimap
if (context.updateMinimap) {
setTimeout(() => context.updateMinimap(), 100);
}
} catch (e) {
console.error('Failed to restore graph:', e);
alert('Failed to restore graph: ' + e);
}
},
},
};

src/assets/metronome.svg Normal file (10 lines)

@@ -0,0 +1,10 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<!-- Metronome body (trapezoid) -->
<path d="M12 4 L8 20 L16 20 Z" fill="currentColor" stroke="currentColor" stroke-width="1.5" stroke-linejoin="miter"/>
<!-- Base -->
<rect x="6" y="20" width="12" height="2" fill="currentColor"/>
<!-- Pendulum arm -->
<line x1="12" y1="8" x2="14" y2="16" stroke="currentColor" stroke-width="1.5" stroke-linecap="round"/>
<!-- Pendulum weight -->
<circle cx="14" cy="16" r="1.5" fill="currentColor"/>
</svg>



@@ -172,7 +172,7 @@ function serializeLayoutNode(element, depth = 0) {
// that matches the panes object keys, not the name property
const dataName = element.getAttribute("data-pane-name");
// Convert kebab-case to camelCase (e.g., "timeline-v2" -> "timelineV2")
// Convert kebab-case to camelCase (e.g., "preset-browser" -> "presetBrowser")
const camelCaseName = dataName.replace(/-([a-z0-9])/g, (g) => g[1].toUpperCase());
console.log(`${indent} -> Found pane: ${camelCaseName}`);


@@ -32,7 +32,7 @@ export const defaultLayouts = {
type: "vertical-grid",
percent: 30,
children: [
{ type: "pane", name: "timelineV2" },
{ type: "pane", name: "timeline" },
{ type: "pane", name: "stage" }
]
},
@@ -63,7 +63,7 @@ export const defaultLayouts = {
{ type: "pane", name: "infopanel" }
]
},
{ type: "pane", name: "timelineV2" }
{ type: "pane", name: "timeline" }
]
}
]
@@ -81,7 +81,7 @@ export const defaultLayouts = {
type: "vertical-grid",
percent: 50,
children: [
{ type: "pane", name: "timelineV2" },
{ type: "pane", name: "timeline" },
{ type: "pane", name: "nodeEditor"}
]
},
@@ -107,7 +107,7 @@
percent: 50,
children: [
{ type: "pane", name: "stage" },
{ type: "pane", name: "timelineV2" }
{ type: "pane", name: "timeline" }
]
},
{
@@ -142,7 +142,7 @@
percent: 50,
children: [
{ type: "pane", name: "infopanel" },
{ type: "pane", name: "timelineV2" }
{ type: "pane", name: "timeline" }
]
}
]
@@ -168,7 +168,7 @@
percent: 70,
children: [
{ type: "pane", name: "stage" },
{ type: "pane", name: "timelineV2" }
{ type: "pane", name: "timeline" }
]
},
{ type: "pane", name: "infopanel" }
@@ -196,7 +196,7 @@
percent: 70,
children: [
{ type: "pane", name: "infopanel" },
{ type: "pane", name: "timelineV2" }
{ type: "pane", name: "timeline" }
]
}
]
@@ -223,7 +223,7 @@
percent: 60,
children: [
{ type: "pane", name: "infopanel" },
{ type: "pane", name: "timelineV2" }
{ type: "pane", name: "timeline" }
]
}
]

File diff suppressed because it is too large


@@ -1179,12 +1179,12 @@ class AudioTrack {
name: clip.name,
startTime: clip.startTime,
duration: clip.duration,
offset: clip.offset || 0, // Default to 0 if not present
};
// Restore audio-specific fields
if (clip.poolIndex !== undefined) {
clipData.poolIndex = clip.poolIndex;
clipData.offset = clip.offset;
}
// Restore MIDI-specific fields


@@ -883,7 +883,8 @@ export const nodeTypes = {
<input type="range" data-node="${nodeId}" data-param="3" min="-24" max="24" value="0" step="1">
</div>
<div class="node-param" style="margin-top: 4px;">
<button class="add-layer-btn" data-node="${nodeId}" style="width: 100%; padding: 4px; font-size: 10px;">Add Sample Layer</button>
<button class="add-layer-btn" data-node="${nodeId}" style="width: 100%; padding: 4px; font-size: 10px; margin-bottom: 2px;">Add Sample Layer</button>
<button class="import-folder-btn" data-node="${nodeId}" style="width: 100%; padding: 4px; font-size: 10px;">Import Folder...</button>
</div>
<div id="sample-layers-container-${nodeId}" class="sample-layers-container">
<table id="sample-layers-table-${nodeId}" class="sample-layers-table">


@@ -42,6 +42,12 @@ export let context = {
recordingTrackId: null,
recordingClipId: null,
playPauseButton: null, // Reference to play/pause button for updating appearance
// MIDI activity indicator
lastMidiInputTime: 0, // Timestamp (Date.now()) of last MIDI input
// Metronome state
metronomeEnabled: false,
metronomeButton: null, // Reference to metronome button for updating appearance
metronomeGroup: null, // Reference to metronome button group for showing/hiding
};
// Application configuration
@@ -91,7 +97,7 @@ export let config = {
currentLayout: "animation", // Current active layout key
defaultLayout: "animation", // Default layout for new files
showStartScreen: false, // Show layout picker on startup (disabled for now)
restoreLayoutFromFile: false, // Restore layout when opening files
restoreLayoutFromFile: true, // Restore layout when opening files
customLayouts: [] // User-saved custom layouts
};


@@ -213,6 +213,23 @@ button {
user-select: none;
}
/* Maximize button in pane headers */
.maximize-btn {
margin-left: auto;
margin-right: 8px;
padding: 4px 8px;
background: none;
border: 1px solid var(--foreground-color);
color: var(--text-primary);
cursor: pointer;
border-radius: 3px;
font-size: 14px;
}
.maximize-btn:hover {
background-color: var(--surface-light);
}
.pane {
user-select: none;
}
@@ -675,7 +692,7 @@ button {
margin: 0px;
}
#popupMenu li {
color: var(--keyframe);
color: var(--text-primary);
list-style-type: none;
display: flex;
align-items: center; /* Vertically center the image and text */
@@ -773,7 +790,7 @@ button {
background-color: var(--shade);
}
#popupMenu li {
color: var(--background-color);
color: var(--text-primary);
}
#popupMenu li:hover {
background-color: var(--surface-light);
@@ -1022,6 +1039,24 @@ button {
animation: pulse 1s ease-in-out infinite;
}
/* Metronome Button - Inline SVG with currentColor */
.playback-btn-metronome {
color: var(--text-primary);
}
.playback-btn-metronome svg {
width: 18px;
height: 18px;
display: block;
margin: auto;
}
/* Active metronome state - use highlight color */
.playback-btn-metronome.active {
background-color: var(--highlight);
border-color: var(--highlight);
}
/* Dark mode playback button adjustments */
@media (prefers-color-scheme: dark) {
.playback-btn {
@@ -1209,6 +1244,7 @@ button {
border-bottom: 1px solid #3d3d3d;
display: flex;
align-items: center;
justify-content: space-between;
padding: 0 16px;
z-index: 200;
user-select: none;
@@ -1249,6 +1285,22 @@ button {
border-color: #5d5d5d;
}
.node-graph-clear-btn {
padding: 4px 12px;
background: #d32f2f;
border: 1px solid #b71c1c;
border-radius: 3px;
color: white;
font-size: 12px;
cursor: pointer;
transition: background 0.2s;
}
.node-graph-clear-btn:hover {
background: #e53935;
border-color: #c62828;
}
.exit-template-btn:active {
background: #5d5d5d;
}


@@ -24,7 +24,7 @@ class TimelineState {
this.rulerHeight = 30 // Height of time ruler in pixels
// Snapping (Phase 5)
this.snapToFrames = false // Whether to snap keyframes to frame boundaries
this.snapToFrames = true // Whether to snap keyframes to frame boundaries (default: on)
}
/**

File diff suppressed because it is too large


@@ -9,12 +9,24 @@
export async function waitForAppReady(timeout = 5000) {
await browser.waitForApp();
// Check for "Create New File" dialog and click Create if present
// Check for "Animation" card on start screen and click it if present
// The card has a label div with text "Animation"
const animationCard = await browser.$('.focus-card-label*=Animation');
if (await animationCard.isExisting()) {
// Click the parent focus-card element
const card = await animationCard.parentElement();
await card.waitForClickable({ timeout: 2000 });
await card.click();
await browser.pause(1000); // Wait longer for animation view to load
} else {
// Legacy: Check for "Create New File" dialog and click Create if present
const createButton = await browser.$('button*=Create');
if (await createButton.isExisting()) {
await createButton.waitForClickable({ timeout: 2000 });
await createButton.click();
await browser.pause(500); // Wait for dialog to close
}
}
// Wait for the main canvas to be present
const canvas = await browser.$('canvas');


@@ -2,12 +2,30 @@
* Canvas interaction utilities for UI testing
*/
/**
* Reset canvas scroll/pan to origin
*/
export async function resetCanvasView() {
await browser.execute(function() {
if (window.context && window.context.stageWidget) {
window.context.stageWidget.offsetX = 0;
window.context.stageWidget.offsetY = 0;
// Trigger redraw to apply the reset
if (window.context.updateUI) {
window.context.updateUI();
}
}
});
await browser.pause(100); // Wait for canvas to reset
}
/**
* Click at specific coordinates on the canvas
* @param {number} x - X coordinate relative to canvas
* @param {number} y - Y coordinate relative to canvas
*/
export async function clickCanvas(x, y) {
await resetCanvasView();
await browser.clickCanvas(x, y);
await browser.pause(100); // Wait for render
}
@@ -20,6 +38,7 @@ export async function clickCanvas(x, y) {
* @param {number} toY - Ending Y coordinate
*/
export async function dragCanvas(fromX, fromY, toX, toY) {
await resetCanvasView();
await browser.dragCanvas(fromX, fromY, toX, toY);
await browser.pause(200); // Wait for render
}
@@ -31,17 +50,21 @@ export async function dragCanvas(fromX, fromY, toX, toY) {
* @param {number} width - Rectangle width
* @param {number} height - Rectangle height
* @param {boolean} filled - Whether to fill the shape (default: true)
* @param {string} color - Fill color in hex format (e.g., '#ff0000')
*/
export async function drawRectangle(x, y, width, height, filled = true) {
export async function drawRectangle(x, y, width, height, filled = true, color = null) {
// Select the rectangle tool
await selectTool('rectangle');
// Set fill option
await browser.execute((filled) => {
// Set fill option and color if provided
await browser.execute((filled, color) => {
if (window.context) {
window.context.fillShape = filled;
if (color) {
window.context.fillStyle = color;
}
}, filled);
}
}, filled, color);
// Draw by dragging from start to end point
await dragCanvas(x, y, x + width, y + height);
@@ -192,14 +215,24 @@ export async function doubleClickCanvas(x, y) {
export async function setPlayheadTime(time) {
await browser.execute(function(timeValue) {
if (window.context && window.context.activeObject) {
// Set time on both the active object and timeline state
window.context.activeObject.currentTime = timeValue;
// Update timeline widget if it exists
if (window.context.timelineWidget && window.context.timelineWidget.timelineState) {
window.context.timelineWidget.timelineState.currentTime = timeValue;
}
// Trigger timeline redraw to show updated playhead position
if (window.context.timelineWidget && window.context.timelineWidget.requestRedraw) {
window.context.timelineWidget.requestRedraw();
}
// Trigger stage redraw to show shapes at new time
if (window.context.updateUI) {
window.context.updateUI();
}
}
}, time);
await browser.pause(100);
await browser.pause(200);
}
/**

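The playhead times used throughout these tests map to frame numbers via the project frame rate (e.g. 0.333 s ≈ frame 10 at 30 fps). A minimal sketch of that conversion, as hypothetical helpers not present in the codebase:

```javascript
// Convert between frame numbers and playhead time in seconds.
// Hypothetical helpers; they assume frame = time * fps, matching
// comments like "time 0.333 (frame 10 at 30fps)" in the specs.
function frameToTime(frame, fps) {
  return frame / fps;
}

function timeToFrame(time, fps) {
  return Math.round(time * fps);
}

console.log(frameToTime(10, 30).toFixed(3)); // "0.333"
console.log(timeToFrame(0.5, 24)); // 12
```
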
tests/helpers/manual.js Normal file

@@ -0,0 +1,68 @@
/**
* Manual testing utilities for user-in-the-loop verification
* These helpers pause execution and wait for user confirmation
*/
/**
* Pause and wait for user to verify something visually with a confirm dialog
* @param {string} message - What the user should verify
* @param {boolean} waitForConfirm - If true, show confirm dialog and wait for user input
* @throws {Error} If user clicks Cancel to indicate verification failed
*/
export async function verifyManually(message, waitForConfirm = true) {
console.log('\n=== MANUAL VERIFICATION ===');
console.log(message);
console.log('===========================\n');
if (waitForConfirm) {
// Show a confirm dialog in the browser and wait for user response
const result = await browser.execute(function(msg) {
return confirm(msg);
}, message);
if (!result) {
console.log('User clicked Cancel - verification failed');
throw new Error('Manual verification failed: User clicked Cancel');
} else {
console.log('User clicked OK - verification passed');
}
return result;
} else {
// Just pause for observation
await browser.pause(3000);
return true;
}
}
/**
* Add a visual marker/annotation to describe what should be visible
* @param {string} description - Description of current state
*/
export async function logStep(description) {
console.log(`\n>>> STEP: ${description}`);
}
/**
* Extended pause with a description of what's happening
* @param {string} action - What action just occurred
* @param {number} pauseTime - How long to pause
*/
export async function pauseAndDescribe(action, pauseTime = 2000) {
console.log(`>>> ${action}`);
await browser.pause(pauseTime);
}
/**
* Ask user a yes/no question via confirm dialog
* @param {string} question - Question to ask the user
* @returns {Promise<boolean>} True if user clicked OK, false if Cancel
*/
export async function askUser(question) {
console.log(`\n>>> QUESTION: ${question}`);
const result = await browser.execute(function(msg) {
return confirm(msg);
}, question);
console.log(`User answered: ${result ? 'YES (OK)' : 'NO (Cancel)'}`);
return result;
}
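The pass/fail behavior of `verifyManually` reduces to one rule: a falsy answer aborts the test by throwing. That core can be sketched standalone (hypothetical `gate` function, with the `confirm()` dialog replaced by a plain boolean so it runs outside a browser session):

```javascript
// Core of the manual-verification gate: a falsy answer fails the test.
// The real helper gets `answered` from a confirm() dialog via
// browser.execute(); here it is a plain parameter for illustration.
function gate(message, answered) {
  if (!answered) {
    throw new Error(`Manual verification failed: ${message}`);
  }
  return true;
}

gate('shape is visible', true); // passes, returns true
// gate('shape is visible', false); // would throw
```
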


@@ -14,6 +14,7 @@ import {
clickCanvas
} from '../helpers/canvas.js';
import { assertShapeExists } from '../helpers/assertions.js';
import { verifyManually, logStep } from '../helpers/manual.js';
describe('Group Editing', () => {
before(async () => {
@@ -71,6 +72,7 @@ describe('Group Editing', () => {
it('should handle nested group editing with correct positioning', async () => {
// Create first group with two shapes
await logStep('Drawing two rectangles for inner group');
await drawRectangle(400, 100, 60, 60);
await drawRectangle(480, 100, 60, 60);
await selectMultipleShapes([
@@ -80,24 +82,47 @@ describe('Group Editing', () => {
await useKeyboardShortcut('g', true);
await browser.pause(300);
await verifyManually('VERIFY: Do you see two rectangles grouped together?\nClick OK if yes, Cancel if no');
// Verify both shapes exist
await assertShapeExists(430, 130, 'First shape should exist');
await assertShapeExists(510, 130, 'Second shape should exist');
// Create another shape and group everything together
await logStep('Drawing third rectangle and creating nested group');
await drawRectangle(400, 180, 60, 60);
await selectMultipleShapes([
{ x: 470, y: 130 }, // Center of first group
{ x: 430, y: 210 } // Center of new shape
]);
// Select both the group and the new shape by dragging a selection box
// We need to start from well outside the shapes to avoid hitting them
// The first group spans x=400-540, y=100-160
// The third shape spans x=400-460, y=180-240
await selectTool('select');
await dragCanvas(390, 90, 550, 250); // Start from outside all shapes
await browser.pause(200);
await useKeyboardShortcut('g', true);
await browser.pause(300);
await verifyManually('VERIFY: All three rectangles now grouped together (nested group)?\nClick OK if yes, Cancel if no');
// Double-click to enter outer group
await logStep('Double-clicking to enter outer group');
await doubleClickCanvas(470, 130);
await browser.pause(300);
await verifyManually('VERIFY: Are we now inside the outer group?\nClick OK if yes, Cancel if no');
// Double-click again to enter inner group
await logStep('Double-clicking again to enter inner group');
await doubleClickCanvas(470, 130);
await browser.pause(300);
await verifyManually(
'VERIFY: Are we now inside the inner group?\n' +
'Can you see the two original rectangles at their original positions?\n' +
'First at (430, 130), second at (510, 130)?\n\n' +
'Click OK if yes, Cancel if no'
);
// All shapes should still be at their original positions
await assertShapeExists(430, 130, 'First shape should maintain position in nested group');


@@ -0,0 +1,279 @@
/**
* MANUAL Timeline Animation Tests
* Run these with visual verification - watch the app window as tests execute
*
* To run: pnpm wdio run wdio.conf.js --spec tests/specs/manual/timeline-manual.test.js
*/
import { describe, it, before, afterEach } from 'mocha';
import { waitForAppReady } from '../../helpers/app.js';
import {
drawRectangle,
selectMultipleShapes,
selectTool,
dragCanvas,
clickCanvas,
setPlayheadTime,
getPlayheadTime,
addKeyframe,
useKeyboardShortcut
} from '../../helpers/canvas.js';
import { verifyManually, logStep, pauseAndDescribe } from '../../helpers/manual.js';
describe('MANUAL: Timeline Animation', () => {
before(async () => {
await waitForAppReady();
});
afterEach(async () => {
// Close any open dialogs by accepting them
try {
await browser.execute(function() {
// Close any open confirm/alert dialogs
// This is a no-op if no dialog is open
});
} catch (e) {
// Ignore errors
}
// Pause briefly to show final state before ending
await browser.pause(1000);
console.log('\n>>> Test completed. Session will restart for next test.\n');
});
it('TEST 1: Group animation - draw, group, keyframe, move group', async () => {
await logStep('Drawing a RED rectangle at (100, 100) with size 100x100');
await drawRectangle(100, 100, 100, 100, true, '#ff0000');
await pauseAndDescribe('RED rectangle drawn', 200);
await verifyManually(
'VERIFY: Do you see a RED filled rectangle at the top-left area?\n' +
'It should be centered around (150, 150)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Selecting the RED rectangle by dragging a selection box over it');
await selectMultipleShapes([{ x: 150, y: 150 }]);
await pauseAndDescribe('RED rectangle selected', 200);
await verifyManually(
'VERIFY: Is the RED rectangle now selected? (Should have selection indicators)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Grouping the selected rectangle (Ctrl+G)');
await useKeyboardShortcut('g', true);
await pauseAndDescribe('RED rectangle grouped', 200);
await verifyManually(
'VERIFY: Was the rectangle grouped? (May look similar but is now a group)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Selecting the group by dragging a selection box over it');
await selectMultipleShapes([{ x: 150, y: 150 }]);
await pauseAndDescribe('Group selected', 200);
await logStep('Moving playhead to time 0.333 (frame 10 at 30fps)');
await setPlayheadTime(0.333);
await pauseAndDescribe('Playhead moved to 0.333s - WAIT for UI to update', 300);
await verifyManually(
'VERIFY: Did the playhead indicator move on the timeline?\n' +
'It should be at approximately frame 10\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Adding a keyframe at current position');
await addKeyframe();
await pauseAndDescribe('Keyframe added', 200);
await verifyManually(
'VERIFY: Was a keyframe added? (Should see a keyframe marker on timeline)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Dragging the selected group to move it right (from x=150 to x=250)');
await dragCanvas(150, 150, 250, 150);
await pauseAndDescribe('Group moved to the right', 300);
await verifyManually(
'VERIFY: Did the RED rectangle move to the right?\n' +
'It should now be centered around (250, 150)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Moving playhead back to time 0 (frame 1)');
await setPlayheadTime(0);
await pauseAndDescribe('Playhead back at start', 300);
await verifyManually(
'VERIFY: Did the RED rectangle jump back to its original position (x=150)?\n' +
'This confirms the group animation is working!\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Moving playhead to middle (time 0.166, frame 5)');
await setPlayheadTime(0.166);
await pauseAndDescribe('Playhead at middle frame', 300);
await verifyManually(
'VERIFY: Is the RED rectangle now between the two positions?\n' +
'It should be around x=200 (interpolated halfway)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Moving playhead back and forth to show animation');
await setPlayheadTime(0);
await browser.pause(300);
await setPlayheadTime(0.333);
await browser.pause(300);
await setPlayheadTime(0);
await browser.pause(300);
await setPlayheadTime(0.333);
await browser.pause(300);
await verifyManually(
'VERIFY: Did you see the RED rectangle animate back and forth?\n' +
'This demonstrates the timeline animation is working!\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('TEST 1 COMPLETE - Showing completion alert');
const completionShown = await browser.execute(function() {
alert('TEST 1 COMPLETE - Click OK to finish');
return true;
});
await browser.pause(2000); // Wait for alert to be dismissed before ending test
});
it('TEST 2: Shape tween - draw shape, add keyframes, modify edges', async () => {
await logStep('Resetting playhead to time 0 at start of test');
await setPlayheadTime(0);
await pauseAndDescribe('Playhead reset to time 0', 200);
await logStep('Drawing a BLUE rectangle at (400, 100)');
await drawRectangle(400, 100, 80, 80, true, '#0000ff');
await pauseAndDescribe('BLUE rectangle drawn', 200);
await verifyManually(
'VERIFY: Do you see a BLUE filled rectangle?\n' +
'It should be at (400, 100) with size 80x80\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Selecting the BLUE rectangle');
await selectMultipleShapes([{ x: 440, y: 140 }]);
await pauseAndDescribe('BLUE rectangle selected', 200);
await verifyManually(
'VERIFY: Is the BLUE rectangle selected?\n' +
'(An initial keyframe should be automatically added at time 0)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Moving playhead to time 0.5 (frame 12 at 24fps)');
await setPlayheadTime(0.5);
await pauseAndDescribe('Playhead moved to 0.5s - WAIT for UI to update', 300);
await verifyManually(
'VERIFY: Did the playhead move to 0.5s on the timeline?\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Adding a keyframe at time 0.5');
await addKeyframe();
await pauseAndDescribe('Keyframe added at 0.5s', 200);
await verifyManually(
'VERIFY: Was a keyframe added at 0.5s?\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Clicking away from the shape to deselect it');
await clickCanvas(600, 300);
await pauseAndDescribe('Shape deselected', 100);
await verifyManually(
'VERIFY: Is the BLUE rectangle now deselected?\n' +
'(No selection indicators around it)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Dragging the right edge of the BLUE rectangle to curve/extend it');
await dragCanvas(480, 140, 530, 140);
await pauseAndDescribe('Dragged right edge of BLUE rectangle', 300);
await verifyManually(
'VERIFY: Did the right edge of the BLUE rectangle get curved/pulled out?\n' +
'The shape should now be modified/stretched to the right\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Moving playhead back to time 0');
await setPlayheadTime(0);
await pauseAndDescribe('Playhead back at start', 300);
await verifyManually(
'VERIFY: Did the BLUE rectangle return to its original rectangular shape?\n' +
'The edge modification should not be visible at time 0\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Moving playhead to middle between keyframes (time 0.25, frame 6)');
await setPlayheadTime(0.25);
await pauseAndDescribe('Playhead at middle (0.25s) - halfway between frame 0 and frame 12', 300);
await verifyManually(
'VERIFY: Is the BLUE rectangle shape somewhere between the two versions?\n' +
'It should be partially morphed (shape tween interpolation)\n' +
'Halfway between the original rectangle and the curved version\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('TEST 2 COMPLETE - Showing completion alert');
const completionShown = await browser.execute(function() {
alert('TEST 2 COMPLETE - Click OK to finish');
return true;
});
await browser.pause(2000); // Wait for alert to be dismissed before ending test
});
it('TEST 3: Test dragging unselected shape edge', async () => {
await logStep('Resetting playhead to time 0');
await setPlayheadTime(0);
await pauseAndDescribe('Playhead reset to time 0', 100);
await logStep('Drawing a GREEN rectangle at (200, 250) WITHOUT selecting it');
await drawRectangle(200, 250, 100, 100, true, '#00ff00');
await pauseAndDescribe('GREEN rectangle drawn (not selected)', 100);
await verifyManually(
'VERIFY: GREEN rectangle should be visible but NOT selected\n' +
'(No selection indicators around it)\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('Switching to select tool');
await selectTool('select');
await pauseAndDescribe('Select tool activated', 100);
await logStep('Dragging from the right edge (x=300) of GREEN rectangle to extend it');
await dragCanvas(300, 300, 350, 300);
await pauseAndDescribe('Dragged the right edge of GREEN rectangle', 200);
await verifyManually(
'VERIFY: What happened to the GREEN rectangle?\n\n' +
'Expected: The right edge should be curved/pulled out to x=350\n' +
'Did the edge get modified as expected?\n\n' +
'Click OK if yes, Cancel if no'
);
await logStep('TEST 3 COMPLETE - Showing completion alert');
const completionShown = await browser.execute(function() {
alert('TEST 3 COMPLETE - Click OK to finish');
return true;
});
await browser.pause(2000); // Wait for alert to be dismissed before ending test
});
});


@@ -48,9 +48,9 @@ describe('Shape Drawing', () => {
});
it('should draw large rectangles', async () => {
// Draw a large rectangle
await drawRectangle(50, 300, 400, 200);
await assertShapeExists(250, 400, 'Large rectangle should exist at center');
// Draw a large rectangle (canvas is ~350px tall, so keep within bounds)
await drawRectangle(50, 50, 400, 250);
await assertShapeExists(250, 175, 'Large rectangle should exist at center');
});
});


@@ -19,6 +19,7 @@ import {
getPixelColor
} from '../helpers/canvas.js';
import { assertShapeExists } from '../helpers/assertions.js';
import { verifyManually, logStep } from '../helpers/manual.js';
describe('Timeline Animation', () => {
before(async () => {
@@ -44,10 +45,19 @@ describe('Timeline Animation', () => {
await addKeyframe();
await browser.pause(200);
await logStep('About to drag selected shape - dragging selected shapes does not move them yet');
// Shape is selected, so dragging from its center will move it
await dragCanvas(150, 150, 250, 150);
await browser.pause(300);
await verifyManually(
'VERIFY: Did the shape move to x=250?\n' +
'Expected: Shape at x=250\n' +
'Note: Dragging selected shapes is not implemented yet\n\n' +
'Click OK if at x=250, Cancel if not'
);
// At frame 10, shape should be at the new position (moved 100px to the right)
await assertShapeExists(250, 150, 'Shape should be at new position at frame 10');
@@ -55,6 +65,11 @@ describe('Timeline Animation', () => {
await setPlayheadTime(0);
await browser.pause(200);
await verifyManually(
'VERIFY: Did the shape return to original position (x=150)?\n\n' +
'Click OK if yes, Cancel if no'
);
// Shape should be at original position
await assertShapeExists(150, 150, 'Shape should be at original position at frame 1');
@@ -62,6 +77,11 @@ describe('Timeline Animation', () => {
await setPlayheadTime(0.166);
await browser.pause(200);
await verifyManually(
'VERIFY: Is the shape interpolated at x=200 (halfway)?\n\n' +
'Click OK if yes, Cancel if no'
);
// Shape should be interpolated between the two positions
// At frame 5 (halfway), shape should be around x=200 (halfway between 150 and 250)
await assertShapeExists(200, 150, 'Shape should be interpolated at frame 5');
@@ -91,74 +111,85 @@ describe('Timeline Animation', () => {
it('should handle multiple keyframes on the same shape', async () => {
// Draw a shape
await drawRectangle(100, 300, 80, 80);
await drawRectangle(100, 100, 80, 80);
// Select it
await selectMultipleShapes([{ x: 140, y: 340 }]);
await selectMultipleShapes([{ x: 140, y: 140 }]);
await browser.pause(200);
// Keyframe 1: time 0 (original position at x=140, y=340)
// Keyframe 1: time 0 (original position at x=140, y=140)
// Keyframe 2: time 0.333 (move right)
await setPlayheadTime(0.333);
await addKeyframe();
await browser.pause(200);
await logStep('Dragging selected shape (not implemented yet)');
// Shape should still be selected, drag to move
await dragCanvas(140, 340, 200, 340);
await dragCanvas(140, 140, 200, 140);
await browser.pause(300);
// Keyframe 3: time 0.666 (move down but stay within canvas)
await verifyManually('VERIFY: Did shape move to x=200? (probably not)\nClick OK if at x=200, Cancel if not');
// Keyframe 3: time 0.666 (move down)
await setPlayheadTime(0.666);
await addKeyframe();
await browser.pause(200);
// Drag to move down (y=380 instead of 400 to stay in canvas)
await dragCanvas(200, 340, 200, 380);
// Drag to move down
await dragCanvas(200, 140, 200, 180);
await browser.pause(300);
await verifyManually('VERIFY: Did shape move to y=180?\nClick OK if yes, Cancel if no');
// Verify positions at each keyframe
await setPlayheadTime(0);
await browser.pause(200);
await assertShapeExists(140, 340, 'Shape at keyframe 1 (x=140, y=340)');
await verifyManually('VERIFY: Shape at original position (x=140, y=140)?\nClick OK if yes, Cancel if no');
await assertShapeExists(140, 140, 'Shape at keyframe 1 (x=140, y=140)');
await setPlayheadTime(0.333);
await browser.pause(200);
await assertShapeExists(200, 340, 'Shape at keyframe 2 (x=200, y=340)');
await verifyManually('VERIFY: Shape at x=200, y=140?\nClick OK if yes, Cancel if no');
await assertShapeExists(200, 140, 'Shape at keyframe 2 (x=200, y=140)');
await setPlayheadTime(0.666);
await browser.pause(200);
await assertShapeExists(200, 380, 'Shape at keyframe 3 (x=200, y=380)');
await verifyManually('VERIFY: Shape at x=200, y=180?\nClick OK if yes, Cancel if no');
await assertShapeExists(200, 180, 'Shape at keyframe 3 (x=200, y=180)');
// Check interpolation between keyframe 1 and 2 (at t=0.166, halfway)
await setPlayheadTime(0.166);
await browser.pause(200);
await assertShapeExists(170, 340, 'Shape interpolated between kf1 and kf2');
await verifyManually('VERIFY: Shape interpolated at x=170, y=140?\nClick OK if yes, Cancel if no');
await assertShapeExists(170, 140, 'Shape interpolated between kf1 and kf2');
// Check interpolation between keyframe 2 and 3 (at t=0.5, halfway)
await setPlayheadTime(0.5);
await browser.pause(200);
await assertShapeExists(200, 360, 'Shape interpolated between kf2 and kf3');
await verifyManually('VERIFY: Shape interpolated at x=200, y=160?\nClick OK if yes, Cancel if no');
await assertShapeExists(200, 160, 'Shape interpolated between kf2 and kf3');
});
});
describe('Group/Object Animation', () => {
it('should animate group position across keyframes', async () => {
// Create a group with two shapes
await drawRectangle(300, 300, 60, 60);
await drawRectangle(380, 300, 60, 60);
await drawRectangle(300, 100, 60, 60);
await drawRectangle(380, 100, 60, 60);
await selectMultipleShapes([
{ x: 330, y: 330 },
{ x: 410, y: 330 }
{ x: 330, y: 130 },
{ x: 410, y: 130 }
]);
await useKeyboardShortcut('g', true);
await browser.pause(300);
// Verify both shapes exist at frame 1
await assertShapeExists(330, 330, 'First shape at frame 1');
await assertShapeExists(410, 330, 'Second shape at frame 1');
await assertShapeExists(330, 130, 'First shape at frame 1');
await assertShapeExists(410, 130, 'Second shape at frame 1');
// Select the group by dragging a selection box over it
await selectMultipleShapes([{ x: 370, y: 330 }]);
await selectMultipleShapes([{ x: 370, y: 130 }]);
await browser.pause(200);
// Move to frame 10 and add keyframe
@@ -166,30 +197,37 @@ describe('Timeline Animation', () => {
await addKeyframe();
await browser.pause(200);
await logStep('Dragging group down');
// Group is selected, so dragging will move it
// Drag from center of group down (but keep it within canvas bounds)
await dragCanvas(370, 330, 370, 380);
// Drag from center of group down
await dragCanvas(370, 130, 370, 200);
await browser.pause(300);
// At frame 10, group should be at new position (moved 50px down)
await assertShapeExists(330, 380, 'First shape at new position at frame 10');
await assertShapeExists(410, 380, 'Second shape at new position at frame 10');
await verifyManually('VERIFY: Did the group move down to y=200?\nClick OK if yes, Cancel if no');
// At frame 10, group should be at new position (moved down)
await assertShapeExists(330, 200, 'First shape at new position at frame 10');
await assertShapeExists(410, 200, 'Second shape at new position at frame 10');
// Go to frame 1
await setPlayheadTime(0);
await browser.pause(200);
await verifyManually('VERIFY: Did group return to original position (y=130)?\nClick OK if yes, Cancel if no');
// Group should be at original position
await assertShapeExists(330, 330, 'First shape at original position at frame 1');
await assertShapeExists(410, 330, 'Second shape at original position at frame 1');
await assertShapeExists(330, 130, 'First shape at original position at frame 1');
await assertShapeExists(410, 130, 'Second shape at original position at frame 1');
// Go to frame 5 (middle, t=0.166)
await setPlayheadTime(0.166);
await browser.pause(200);
// Group should be interpolated (halfway between y=330 and y=380, so y=355)
await assertShapeExists(330, 355, 'First shape interpolated at frame 5');
await assertShapeExists(410, 355, 'Second shape interpolated at frame 5');
await verifyManually('VERIFY: Is group interpolated at y=165 (halfway)?\nClick OK if yes, Cancel if no');
// Group should be interpolated (halfway between y=130 and y=200, so y=165)
await assertShapeExists(330, 165, 'First shape interpolated at frame 5');
await assertShapeExists(410, 165, 'Second shape interpolated at frame 5');
});
it('should maintain relative positions of shapes within animated group', async () => {
@@ -233,41 +271,47 @@ describe('Timeline Animation', () => {
describe('Interpolation', () => {
it('should smoothly interpolate between keyframes', async () => {
// Draw a simple shape
await drawRectangle(500, 100, 50, 50);
await drawRectangle(100, 100, 50, 50);
// Select it
await selectMultipleShapes([{ x: 525, y: 125 }]);
await selectMultipleShapes([{ x: 125, y: 125 }]);
await browser.pause(200);
// Keyframe at start (x=525)
// Keyframe at start (x=125)
await setPlayheadTime(0);
await browser.pause(100);
// Keyframe at end (1 second = frame 30, move to x=725)
// Keyframe at end (1 second = frame 30, move to x=325)
await setPlayheadTime(1.0);
await addKeyframe();
await browser.pause(200);
await dragCanvas(525, 125, 725, 125);
await logStep('Dragging shape (selected shapes cannot be dragged yet)');
await dragCanvas(125, 125, 325, 125);
await browser.pause(300);
await verifyManually('VERIFY: Did shape move to x=325? (probably not)\nClick OK if at x=325, Cancel if not');
// Check multiple intermediate frames for smooth interpolation
// Total movement: 200px over 1 second
// At 25% (0.25s), x should be 525 + 50 = 575
// At 25% (0.25s), x should be 125 + 50 = 175
await setPlayheadTime(0.25);
await browser.pause(200);
await assertShapeExists(575, 125, 'Shape at 25% interpolation');
await verifyManually('VERIFY: Shape at x=175 (25% interpolation)?\nClick OK if yes, Cancel if no');
await assertShapeExists(175, 125, 'Shape at 25% interpolation');
// At 50% (0.5s), x should be 525 + 100 = 625
// At 50% (0.5s), x should be 125 + 100 = 225
await setPlayheadTime(0.5);
await browser.pause(200);
await assertShapeExists(625, 125, 'Shape at 50% interpolation');
await verifyManually('VERIFY: Shape at x=225 (50% interpolation)?\nClick OK if yes, Cancel if no');
await assertShapeExists(225, 125, 'Shape at 50% interpolation');
// At 75% (0.75s), x should be 525 + 150 = 675
// At 75% (0.75s), x should be 125 + 150 = 275
await setPlayheadTime(0.75);
await browser.pause(200);
await assertShapeExists(675, 125, 'Shape at 75% interpolation');
await verifyManually('VERIFY: Shape at x=275 (75% interpolation)?\nClick OK if yes, Cancel if no');
await assertShapeExists(275, 125, 'Shape at 75% interpolation');
});
});
});
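The interpolation assertions in these specs assume plain linear easing between keyframes. The expected positions can be derived with a simple lerp (hypothetical helper; the app may support other easing curves):

```javascript
// Linear interpolation of a property value between two keyframes
// (t0, v0) and (t1, v1) at time t. Assumes linear easing.
function lerpAt(t, t0, v0, t1, v1) {
  const u = (t - t0) / (t1 - t0); // normalized progress in [0, 1]
  return v0 + u * (v1 - v0);
}

// Keyframes from the last test: x=125 at t=0, x=325 at t=1.0
console.log(lerpAt(0.25, 0, 125, 1.0, 325)); // 175
console.log(lerpAt(0.5, 0, 125, 1.0, 325)); // 225
console.log(lerpAt(0.75, 0, 125, 1.0, 325)); // 275
```
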