Compare commits


69 Commits

Author SHA1 Message Date
Skyler Lehmkuhl c8d1c66033 Bump version to 0.8.1-alpha 2025-11-24 01:10:18 -05:00
Skyler Lehmkuhl cbdd277184 Add audio export 2025-11-24 01:03:31 -05:00
Skyler Lehmkuhl 2e1485afb1 Update readme 2025-11-23 22:13:43 -05:00
Skyler Lehmkuhl b3fe9eaabe Update readme 2025-11-23 21:58:34 -05:00
Skyler Lehmkuhl 5d84522a74 add clear node graph button 2025-11-12 11:23:46 -05:00
Skyler Lehmkuhl 30aa639460 Add looped instrument samples and auto-detection of loop points 2025-11-12 08:53:08 -05:00
Skyler Lehmkuhl 3296d3ab6e work on tests 2025-11-12 08:52:16 -05:00
Skyler Lehmkuhl a1e2368468 test improvements 2025-11-08 10:03:51 -05:00
Skyler Lehmkuhl 7ec69ce950 Merge branch 'new_timeline' of github.com:skykooler/Lightningbeam into new_timeline 2025-11-06 23:42:50 -05:00
Skyler Lehmkuhl b82d2b7278 codify new timeline 2025-11-06 23:42:45 -05:00
Skyler Lehmkuhl 47e1954efe try to improve performance 2025-11-06 22:36:02 -05:00
Skyler Lehmkuhl 430ecb0ae6 use native player to speed up playback 2025-11-06 11:36:56 -05:00
Skyler Lehmkuhl e51a6b803d Add metronome 2025-11-06 10:59:25 -05:00
Skyler Lehmkuhl e97dc5695f draw midi input indicator 2025-11-06 09:12:48 -05:00
Skyler Lehmkuhl 09426e21f4 use channel and jpeg compression to speed up playback 2025-11-06 06:42:12 -05:00
Skyler Lehmkuhl 3c5a24e0b6 video backend 2025-11-06 06:04:39 -05:00
Skyler Lehmkuhl 07dc7efbe4 Rename Layer to VectorLayer 2025-11-05 19:18:11 -05:00
Skyler Lehmkuhl 5320e14745 midi hotplug 2025-11-03 09:48:38 -05:00
Skyler Lehmkuhl 06314dbf57 Add MIDI input 2025-11-03 06:16:17 -05:00
Skyler Lehmkuhl f6a91abccd shift virtual keyboard 2025-11-03 05:08:34 -05:00
Skyler Lehmkuhl 3b0e5b7ada fix save/load bugs 2025-11-03 04:38:31 -05:00
Skyler Lehmkuhl 1ee86af94d File save/load for audio projects 2025-11-03 02:46:43 -05:00
Skyler Lehmkuhl 9702a501bd add BPM detection 2025-11-02 09:53:34 -05:00
Skyler Lehmkuhl 66c4746767 use nodes for audio tracks 2025-11-02 06:33:10 -05:00
Skyler Lehmkuhl 988bbfd1a9 Add automation and drag nodes into connections in the graph 2025-11-02 01:27:22 -05:00
Skyler Lehmkuhl 0ae168cbca Add bit crusher, constant, math, envelope follower, phaser, ring modulator, sample and hold, and vocoder nodes 2025-10-29 03:14:01 -04:00
Skyler Lehmkuhl dc32fc4200 MIDI recording 2025-10-29 01:50:45 -04:00
Skyler Lehmkuhl 6e7e90fe57 Lay out preset instruments better 2025-10-28 21:00:27 -04:00
Skyler Lehmkuhl d496d796dd Add CV visualizer to oscilloscope node 2025-10-28 20:19:25 -04:00
Skyler Lehmkuhl d7dc423fe3 Remove old SimpleSynth and effect system 2025-10-28 20:19:08 -04:00
Skyler Lehmkuhl 2cdde33e37 Add minimap and node search to node graph 2025-10-28 10:27:54 -04:00
Skyler Lehmkuhl a379266f99 Add undo/redo support for node graph editor 2025-10-28 09:53:57 -04:00
Skyler Lehmkuhl 9d6eaa5bba Node graph improvements and fixes 2025-10-28 08:51:53 -04:00
Skyler Lehmkuhl d2354e4864 Fix sampled instrument loading 2025-10-28 05:50:44 -04:00
Skyler Lehmkuhl e426da0f5b Update README 2025-10-28 04:19:20 -04:00
Skyler Lehmkuhl 8e6ea82f92 Load factory preset instruments 2025-10-28 04:19:05 -04:00
Skyler Lehmkuhl f1bcf16ddc Add preset instruments 2025-10-28 04:18:18 -04:00
Skyler Lehmkuhl 2e9699b524 Add sampler nodes and startup screen 2025-10-28 01:32:51 -04:00
Skyler Lehmkuhl e57ae51397 Fix preset loading, add LFO, noise, pan and splitter nodes 2025-10-25 07:29:14 -04:00
Skyler Lehmkuhl 139946fb75 Add presets and make graph follow selected layer/track 2025-10-25 05:31:18 -04:00
Skyler Lehmkuhl 16f4a2a359 Add audio node graph editing 2025-10-25 03:29:54 -04:00
Skyler Lehmkuhl 19e99fa8bf Update piano roll icon 2025-10-24 01:37:21 -04:00
Skyler Lehmkuhl 6b8679fa87 fix pane split/join menu 2025-10-24 00:28:24 -04:00
Skyler Lehmkuhl 4b1d9dc851 Fix UI selection when dragging pane borders 2025-10-23 23:25:22 -04:00
Skyler Lehmkuhl 976b41cb83 Add piano roll track editing 2025-10-23 23:10:56 -04:00
Skyler Lehmkuhl 3de1b05fb3 Add custom layouts, piano pane, midi file import 2025-10-23 21:15:17 -04:00
Skyler Lehmkuhl c46c28c9bb Add timestamp window 2025-10-23 06:21:02 -04:00
Skyler Lehmkuhl 9649fe173b Rename views to keyframe, curve and segment and update defaults 2025-10-23 05:38:10 -04:00
Skyler Lehmkuhl 5e1a30d812 add timeline markings 2025-10-23 05:00:13 -04:00
Skyler Lehmkuhl 8be10b8213 send playback events from backend to use as time reference 2025-10-23 04:30:52 -04:00
Skyler Lehmkuhl d2fa167179 use tauri events instead of polling to fix race condition in recording stop 2025-10-23 03:59:01 -04:00
Skyler Lehmkuhl 20c3b820a3 Record audio tracks 2025-10-23 01:08:45 -04:00
Skyler Lehmkuhl 48ec738027 add recording and reset function 2025-10-22 20:06:02 -04:00
Skyler Lehmkuhl 9699e1e1ea Migrate from frame-centric to AnimationData system
Replaces legacy Frame-based object positioning and shape management with
AnimationData curves throughout the codebase. This enables time-based
animation instead of discrete frame indices, providing smoother playback
and more flexible keyframe editing.

Key changes:
- Remove currentFrame getter and frame.keys lookups
- Replace setFrameNum() with setTime() for continuous time navigation
- Add Layer.addShape()/removeShape() with AnimationData integration
- Migrate actions (move, group, delete, z-order) to use animation curves
- Update keyboard shortcuts and drag operations to modify curves directly
- Leave "holes" in shapeIndex values for proper undo/redo support

Rendering now fully driven by AnimationData curves (exists, zOrder,
shapeIndex for shapes; x, y, rotation, scale for objects).
2025-10-20 01:56:53 -04:00
Skyler Lehmkuhl 5a72743209 UI tests 2025-10-20 00:44:47 -04:00
Skyler Lehmkuhl 97b9ff71b7 Fix curve issues 2025-10-19 18:45:17 -04:00
Skyler Lehmkuhl a8c81c8352 fix volume 2025-10-18 23:59:44 -04:00
Skyler Lehmkuhl 5e91882d01 Use buffer pool 2025-10-18 23:45:27 -04:00
Skyler Lehmkuhl d4fb8b721a better time stretching 2025-10-18 23:28:20 -04:00
Skyler Lehmkuhl f9e2d36f3a add metatracks 2025-10-18 22:56:38 -04:00
Skyler Lehmkuhl 242f494219 fix clicking 2025-10-18 21:55:28 -04:00
Skyler Lehmkuhl 7ef562917a midi import in daw backend 2025-10-18 21:46:40 -04:00
Skyler Lehmkuhl e45659ddfd Work on timeline 2025-10-18 21:32:59 -04:00
Skyler Lehmkuhl 9414bdcd74 Work on daw backend 2025-10-18 18:09:07 -04:00
Skyler Lehmkuhl 87d2036f07 Complete Phase 5: Timeline curve interaction and nested animation support
Phase 5 adds interactive curve editing, proper interpolation visualization,
and automatic segment keyframe management for nested animations.

Timeline curve interaction features:
- Add keyframe creation by clicking in expanded curve view
- Implement keyframe dragging with snapping support
- Add multi-keyframe selection (Shift/Ctrl modifiers)
- Support constrained dragging (Shift: vertical, Ctrl: horizontal)
- Add keyframe deletion via right-click context menu
- Display hover tooltips showing keyframe values
- Respect interpolation modes in curve visualization:
  * Linear: straight lines
  * Bezier: smooth curves with tangent handles
  * Step/Hold: horizontal hold then vertical jump
  * Zero: jump to zero and back

Nested animation improvements:
- Add bidirectional parent references:
  * Layer.parentObject → GraphicsObject
  * AnimationData.parentLayer → Layer
  * GraphicsObject.parentLayer → Layer
- Auto-update segment keyframes when nested animation duration changes
- Update both time and value of segment end keyframe
- Fix parameter ordering (required before optional) in constructors

Bug fixes:
- Fix nested object rendering offset (transformCanvas applied twice)
- Fix curve visualization ignoring interpolation mode
2025-10-15 19:08:59 -04:00
Skyler Lehmkuhl 1936e91327 Implement Timeline V2 Phase 2: Track hierarchy with selection and scrolling
Phase 2 Implementation:
- Added TrackHierarchy class to build and manage hierarchical track structure
- Track display with expand/collapse triangles for layers and groups
- Hierarchical indentation for visual hierarchy
- Track selection syncs with stage selection (shapes, objects, layers)
- Vertical scrolling for track area when many tracks present
- Horizontal scrolling in ruler area for timeline navigation

Timeline Integration:
- Set TimelineV2 as default timeline on app load
- Timeline automatically updates when shapes added or grouped
- Trigger timeline redraw in renderLayers() for efficient batching

Selection System:
- Clicking tracks selects corresponding objects/shapes on stage
- Selected tracks highlighted in timeline
- Updates context.selection and context.shapeselection arrays
- Stores oldselection/oldshapeselection for undo support
- Calls updateUI() and updateMenu() to sync UI state

Visual Improvements:
- Use predefined colors from styles.js (no hardcoded colors)
- Alternating track background colors for readability
- Selection highlighting with predefined highlight color
- Type indicators for tracks: [L]ayer, [G]roup, [S]hape

Mouse Interactions:
- Click ruler area to move playhead
- Click track expand/collapse triangles to show/hide children
- Click track name to select object/shape
- Scroll wheel in ruler area for horizontal timeline scroll
- Scroll wheel in track area for vertical track list scroll
- Adjusts hit testing for vertical scroll offset
2025-10-15 01:47:37 -04:00
Skyler Lehmkuhl 6c79914ffb Work on moving things to animation curves 2025-10-15 00:41:51 -04:00
Skyler Lehmkuhl 7bade4517c Move frames to animation curves 2025-10-13 22:41:08 -04:00
Skyler Lehmkuhl 9f338ba6dc Start refactoring 2025-10-05 23:08:31 -04:00
172 changed files with 62564 additions and 4176 deletions


@@ -1,3 +1,17 @@
+# 0.8.1-alpha:
+Changes:
+- Rewrite timeline UI
+- Add start screen
+- Move audio engine to backend
+- Add node editor for audio synthesis
+- Add factory presets for instruments
+- Add MIDI input support
+- Add BPM handling and time signature
+- Add metronome
+- Add preset layouts for different tasks
+- Add video import
+- Add animation curves for object properties
 # 0.7.14-alpha:
 Changes:
 - Moving frames can now be undone


@@ -1,7 +1,51 @@
-# Lightningbeam 2
+# Lightningbeam
-This README needs content. This is Lightningbeam rewritten with Tauri.
+A free and open-source 2D multimedia editor combining vector animation, audio production, and video editing in a single application.
-To test:
-`pnpm tauri dev`
+## Screenshots
+![Animation View](screenshots/animation.png)
+![Music Editing View](screenshots/music.png)
+![Video Editing View](screenshots/video.png)
+## Current Features
+**Vector Animation**
+- Draw and animate vector shapes with a keyframe-based timeline
+- Non-destructive editing workflow
+**Audio Production**
+- Multi-track audio recording
+- MIDI sequencing with synthesized and sampled instruments
+- Integrated DAW functionality
+**Video Editing**
+- Basic video timeline and editing (early stage)
+- FFmpeg-based video decoding
+## Technical Stack
+- **Frontend:** Vanilla JavaScript
+- **Backend:** Rust (Tauri framework)
+- **Audio:** cpal + dasp for audio processing
+- **Video:** FFmpeg for encode/decode
+## Project Status
+Lightningbeam is under active development. The current focus is on core functionality and architecture. Full project export is not yet fully implemented.
+### Known Architectural Challenge
+The current Tauri implementation hits IPC bandwidth limits when streaming decoded video frames from Rust to JavaScript. Tauri's IPC layer has significant serialization overhead (throughput on the order of a few MB/s), which is insufficient for real-time high-resolution video rendering.
+I'm currently exploring a full Rust rewrite using wgpu/egui to eliminate the IPC bottleneck and handle rendering entirely in native code.
+## Project History
+Lightningbeam evolved from earlier multimedia editing projects I've worked on since 2010, including the FreeJam DAW. The current JavaScript/Tauri iteration began in November 2023.
+## Goals
+Create a comprehensive FOSS alternative for 2D-focused multimedia work, integrating animation, audio, and video editing in a unified workflow.

BIN
daw-backend/C2.mp3 Normal file

Binary file not shown.

1983
daw-backend/Cargo.lock generated Normal file

File diff suppressed because it is too large

43
daw-backend/Cargo.toml Normal file

@@ -0,0 +1,43 @@
[package]
name = "daw-backend"
version = "0.1.0"
edition = "2021"
[dependencies]
cpal = "0.15"
symphonia = { version = "0.5", features = ["all"] }
rtrb = "0.3"
midly = "0.5"
midir = "0.9"
serde = { version = "1.0", features = ["derive"] }
ratatui = "0.26"
crossterm = "0.27"
rand = "0.8"
base64 = "0.22"
pathdiff = "0.2"
# Audio export
hound = "3.5"
# TODO: Add MP3 support with a different crate
# mp3lame-encoder API is too complex, need to find a better option
# Node-based audio graph dependencies
dasp_graph = "0.11"
dasp_signal = "0.11"
dasp_sample = "0.11"
dasp_interpolate = "0.11"
dasp_envelope = "0.11"
dasp_ring_buffer = "0.11"
dasp_peak = "0.11"
dasp_rms = "0.11"
petgraph = "0.6"
serde_json = "1.0"
[dev-dependencies]
[profile.release]
opt-level = 3
lto = true
[profile.dev]
opt-level = 1 # Faster compile times while keeping reasonable runtime performance

BIN
daw-backend/Fade.wav Normal file

Binary file not shown.

Binary file not shown.

File diff suppressed because it is too large


@@ -0,0 +1,72 @@
use daw_backend::load_midi_file;
fn main() {
let clip = load_midi_file("darude-sandstorm.mid", 0, 44100).unwrap();
println!("Clip duration: {:.2}s", clip.duration);
println!("Total events: {}", clip.events.len());
println!("\nEvent summary:");
let mut note_on_count = 0;
let mut note_off_count = 0;
let mut other_count = 0;
for event in &clip.events {
if event.is_note_on() {
note_on_count += 1;
} else if event.is_note_off() {
note_off_count += 1;
} else {
other_count += 1;
}
}
println!(" Note On events: {}", note_on_count);
println!(" Note Off events: {}", note_off_count);
println!(" Other events: {}", other_count);
// Show events around 28 seconds
println!("\nEvents around 28 seconds (27-29s):");
let sample_rate = 44100.0;
let start_sample = (27.0 * sample_rate) as u64;
let end_sample = (29.0 * sample_rate) as u64;
for (i, event) in clip.events.iter().enumerate() {
if event.timestamp >= start_sample && event.timestamp <= end_sample {
let time_sec = event.timestamp as f64 / sample_rate;
let event_type = if event.is_note_on() {
"NoteOn"
} else if event.is_note_off() {
"NoteOff"
} else {
"Other"
};
println!(" [{:4}] {:.3}s: {} ch={} note={} vel={}",
i, time_sec, event_type, event.channel(), event.data1, event.data2);
}
}
// Check for stuck notes - note ons without corresponding note offs
println!("\nChecking for unmatched notes...");
let mut active_notes = std::collections::HashMap::new();
for (i, event) in clip.events.iter().enumerate() {
if event.is_note_on() {
let key = (event.channel(), event.data1);
active_notes.insert(key, i);
} else if event.is_note_off() {
let key = (event.channel(), event.data1);
active_notes.remove(&key);
}
}
if !active_notes.is_empty() {
println!("Found {} notes that never got note-off events:", active_notes.len());
for ((ch, note), event_idx) in active_notes.iter().take(10) {
let time_sec = clip.events[*event_idx].timestamp as f64 / sample_rate;
println!(" Note {} on channel {} at {:.2}s (event #{})", note, ch, time_sec, event_idx);
}
} else {
println!("All notes have matching note-off events!");
}
}


@@ -0,0 +1,74 @@
use daw_backend::load_midi_file;
fn main() {
let clip = load_midi_file("darude-sandstorm.mid", 0, 44100).unwrap();
println!("Clip duration: {:.3}s", clip.duration);
println!("Total events: {}", clip.events.len());
// Show the last 30 events
println!("\nLast 30 events:");
let sample_rate = 44100.0;
let start_idx = clip.events.len().saturating_sub(30);
for (i, event) in clip.events.iter().enumerate().skip(start_idx) {
let time_sec = event.timestamp as f64 / sample_rate;
let event_type = if event.is_note_on() {
"NoteOn "
} else if event.is_note_off() {
"NoteOff"
} else {
"Other "
};
println!(" [{:4}] {:.3}s: {} ch={} note={:3} vel={:3}",
i, time_sec, event_type, event.channel(), event.data1, event.data2);
}
// Find notes that are still active at the end of the clip
println!("\nNotes active at end of clip ({:.3}s):", clip.duration);
let mut active_notes = std::collections::HashMap::new();
for event in &clip.events {
let time_sec = event.timestamp as f64 / sample_rate;
if event.is_note_on() {
let key = (event.channel(), event.data1);
active_notes.insert(key, time_sec);
} else if event.is_note_off() {
let key = (event.channel(), event.data1);
active_notes.remove(&key);
}
}
if !active_notes.is_empty() {
println!("Found {} notes still active after all events:", active_notes.len());
for ((ch, note), start_time) in &active_notes {
println!(" Channel {} Note {} started at {:.3}s (no note-off before clip end)",
ch, note, start_time);
}
} else {
println!("All notes are turned off by the end!");
}
// Check maximum polyphony
println!("\nAnalyzing polyphony...");
let mut max_polyphony = 0;
let mut current_notes = std::collections::HashSet::new();
for event in &clip.events {
if event.is_note_on() {
let key = (event.channel(), event.data1);
current_notes.insert(key);
max_polyphony = max_polyphony.max(current_notes.len());
} else if event.is_note_off() {
let key = (event.channel(), event.data1);
current_notes.remove(&key);
}
}
println!("Maximum simultaneous notes: {}", max_polyphony);
println!("Available synth voices: 16");
if max_polyphony > 16 {
println!("WARNING: Polyphony exceeds available voices! Voice stealing will occur.");
}
}


@@ -0,0 +1,279 @@
/// Automation system for parameter modulation over time
use serde::{Deserialize, Serialize};
/// Unique identifier for automation lanes
pub type AutomationLaneId = u32;
/// Unique identifier for parameters that can be automated
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub enum ParameterId {
/// Track volume
TrackVolume,
/// Track pan
TrackPan,
/// Effect parameter (effect_index, param_id)
EffectParameter(usize, u32),
/// Metatrack time stretch
TimeStretch,
/// Metatrack offset
TimeOffset,
}
/// Type of interpolation curve between automation points
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
pub enum CurveType {
/// Linear interpolation (straight line)
Linear,
/// Exponential curve (smooth acceleration)
Exponential,
/// S-curve (ease in/out)
SCurve,
/// Step (no interpolation, jump to next value)
Step,
}
/// A single automation point
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
pub struct AutomationPoint {
/// Time in seconds
pub time: f64,
/// Parameter value (normalized 0.0 to 1.0, or actual value depending on parameter)
pub value: f32,
/// Curve type to next point
pub curve: CurveType,
}
impl AutomationPoint {
/// Create a new automation point
pub fn new(time: f64, value: f32, curve: CurveType) -> Self {
Self { time, value, curve }
}
}
/// An automation lane for a specific parameter
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AutomationLane {
/// Unique identifier for this lane
pub id: AutomationLaneId,
/// Which parameter this lane controls
pub parameter_id: ParameterId,
/// Sorted list of automation points
points: Vec<AutomationPoint>,
/// Whether this lane is enabled
pub enabled: bool,
}
impl AutomationLane {
/// Create a new automation lane
pub fn new(id: AutomationLaneId, parameter_id: ParameterId) -> Self {
Self {
id,
parameter_id,
points: Vec::new(),
enabled: true,
}
}
/// Add an automation point, maintaining sorted order
pub fn add_point(&mut self, point: AutomationPoint) {
// Find insertion position to maintain sorted order
let pos = self.points.binary_search_by(|p| {
p.time.partial_cmp(&point.time).unwrap_or(std::cmp::Ordering::Equal)
});
match pos {
Ok(idx) => {
// Replace existing point at same time
self.points[idx] = point;
}
Err(idx) => {
// Insert at correct position
self.points.insert(idx, point);
}
}
}
/// Remove point at specific time
pub fn remove_point_at_time(&mut self, time: f64, tolerance: f64) -> bool {
if let Some(idx) = self.points.iter().position(|p| (p.time - time).abs() < tolerance) {
self.points.remove(idx);
true
} else {
false
}
}
/// Remove all points
pub fn clear(&mut self) {
self.points.clear();
}
/// Get all points
pub fn points(&self) -> &[AutomationPoint] {
&self.points
}
/// Get value at a specific time with interpolation
pub fn evaluate(&self, time: f64) -> Option<f32> {
if !self.enabled || self.points.is_empty() {
return None;
}
// Before first point
if time <= self.points[0].time {
return Some(self.points[0].value);
}
// After last point
if time >= self.points[self.points.len() - 1].time {
return Some(self.points[self.points.len() - 1].value);
}
// Find surrounding points
for i in 0..self.points.len() - 1 {
let p1 = &self.points[i];
let p2 = &self.points[i + 1];
if time >= p1.time && time <= p2.time {
return Some(interpolate(p1, p2, time));
}
}
None
}
/// Get number of points
pub fn point_count(&self) -> usize {
self.points.len()
}
}
/// Interpolate between two automation points based on curve type
fn interpolate(p1: &AutomationPoint, p2: &AutomationPoint, time: f64) -> f32 {
// Calculate normalized position between points (0.0 to 1.0)
let t = if p2.time == p1.time {
0.0
} else {
((time - p1.time) / (p2.time - p1.time)) as f32
};
// Apply curve
let curved_t = match p1.curve {
CurveType::Linear => t,
CurveType::Exponential => {
// Exponential curve: y = x^2
t * t
}
CurveType::SCurve => {
// Smooth S-curve using smoothstep
smoothstep(t)
}
CurveType::Step => {
// Step: hold value until next point
return p1.value;
}
};
// Linear interpolation with curved t
p1.value + (p2.value - p1.value) * curved_t
}
/// Smoothstep function for S-curve interpolation
/// Returns a smooth curve from 0 to 1
#[inline]
fn smoothstep(t: f32) -> f32 {
// Clamp to [0, 1]
let t = t.clamp(0.0, 1.0);
// 3t^2 - 2t^3
t * t * (3.0 - 2.0 * t)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_add_points_sorted() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(2.0, 0.5, CurveType::Linear));
lane.add_point(AutomationPoint::new(1.0, 0.3, CurveType::Linear));
lane.add_point(AutomationPoint::new(3.0, 0.8, CurveType::Linear));
assert_eq!(lane.points().len(), 3);
assert_eq!(lane.points()[0].time, 1.0);
assert_eq!(lane.points()[1].time, 2.0);
assert_eq!(lane.points()[2].time, 3.0);
}
#[test]
fn test_replace_point_at_same_time() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(1.0, 0.3, CurveType::Linear));
lane.add_point(AutomationPoint::new(1.0, 0.5, CurveType::Linear));
assert_eq!(lane.points().len(), 1);
assert_eq!(lane.points()[0].value, 0.5);
}
#[test]
fn test_linear_interpolation() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(0.0, 0.0, CurveType::Linear));
lane.add_point(AutomationPoint::new(1.0, 1.0, CurveType::Linear));
assert_eq!(lane.evaluate(0.0), Some(0.0));
assert_eq!(lane.evaluate(0.5), Some(0.5));
assert_eq!(lane.evaluate(1.0), Some(1.0));
}
#[test]
fn test_step_interpolation() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(0.0, 0.5, CurveType::Step));
lane.add_point(AutomationPoint::new(1.0, 1.0, CurveType::Step));
assert_eq!(lane.evaluate(0.0), Some(0.5));
assert_eq!(lane.evaluate(0.5), Some(0.5));
assert_eq!(lane.evaluate(0.99), Some(0.5));
assert_eq!(lane.evaluate(1.0), Some(1.0));
}
#[test]
fn test_evaluate_outside_range() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(1.0, 0.5, CurveType::Linear));
lane.add_point(AutomationPoint::new(2.0, 1.0, CurveType::Linear));
// Before first point
assert_eq!(lane.evaluate(0.0), Some(0.5));
// After last point
assert_eq!(lane.evaluate(3.0), Some(1.0));
}
#[test]
fn test_disabled_lane() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(0.0, 0.5, CurveType::Linear));
lane.enabled = false;
assert_eq!(lane.evaluate(0.0), None);
}
#[test]
fn test_remove_point() {
let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
lane.add_point(AutomationPoint::new(1.0, 0.5, CurveType::Linear));
lane.add_point(AutomationPoint::new(2.0, 0.8, CurveType::Linear));
assert!(lane.remove_point_at_time(1.0, 0.001));
assert_eq!(lane.points().len(), 1);
assert_eq!(lane.points()[0].time, 2.0);
}
}
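
A minimal usage sketch of the lane API above, assuming these types are re-exported at the crate root as the module file later in this diff suggests:

```rust
use daw_backend::{AutomationLane, AutomationPoint, CurveType, ParameterId};

fn main() {
    let mut lane = AutomationLane::new(0, ParameterId::TrackVolume);
    // Fade from silence to full volume over two seconds with an S-curve.
    lane.add_point(AutomationPoint::new(0.0, 0.0, CurveType::SCurve));
    lane.add_point(AutomationPoint::new(2.0, 1.0, CurveType::Linear));

    // evaluate() holds the end values outside the point range and
    // interpolates with the left-hand point's curve type inside it.
    assert_eq!(lane.evaluate(-1.0), Some(0.0)); // before the first point
    assert_eq!(lane.evaluate(1.0), Some(0.5));  // smoothstep(0.5) == 0.5
    assert_eq!(lane.evaluate(5.0), Some(1.0));  // after the last point
}
```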


@@ -0,0 +1,310 @@
/// BPM Detection using autocorrelation and onset detection
///
/// This module provides both offline analysis (for audio import)
/// and real-time streaming analysis (for the BPM detector node)
use std::collections::VecDeque;
/// Detects BPM from a complete audio buffer (offline analysis)
pub fn detect_bpm_offline(audio: &[f32], sample_rate: u32) -> Option<f32> {
if audio.is_empty() {
return None;
}
// Convert to mono if needed (already mono in our case)
// Downsample for efficiency (analyze every 4th sample for faster processing)
let downsampled: Vec<f32> = audio.iter().step_by(4).copied().collect();
let effective_sample_rate = sample_rate / 4;
// Detect onsets using energy-based method
let onsets = detect_onsets(&downsampled, effective_sample_rate);
if onsets.len() < 4 {
return None;
}
// Calculate onset strength function for autocorrelation
let onset_envelope = calculate_onset_envelope(&onsets, downsampled.len(), effective_sample_rate);
// Further downsample onset envelope for BPM analysis
// For 60-200 BPM (1-3.33 Hz), we only need ~10 Hz sample rate by Nyquist
// Use 100 Hz for good margin (100 samples per second)
let tempo_sample_rate = 100.0;
let downsample_factor = (effective_sample_rate as f32 / tempo_sample_rate) as usize;
let downsampled_envelope: Vec<f32> = onset_envelope
.iter()
.step_by(downsample_factor.max(1))
.copied()
.collect();
    // Use autocorrelation to find the fundamental period
    detect_bpm_autocorrelation(&downsampled_envelope, tempo_sample_rate as u32)
}
/// Calculate an onset envelope from detected onsets
fn calculate_onset_envelope(onsets: &[usize], total_length: usize, sample_rate: u32) -> Vec<f32> {
// Create a sparse representation of onsets with exponential decay
let mut envelope = vec![0.0; total_length];
let decay_samples = (sample_rate as f32 * 0.05) as usize; // 50ms decay
for &onset in onsets {
if onset < total_length {
envelope[onset] = 1.0;
// Add exponential decay after onset
for i in 1..decay_samples.min(total_length - onset) {
let decay_value = (-3.0 * i as f32 / decay_samples as f32).exp();
envelope[onset + i] = f32::max(envelope[onset + i], decay_value);
}
}
}
envelope
}
/// Detect BPM using autocorrelation on onset envelope
fn detect_bpm_autocorrelation(onset_envelope: &[f32], sample_rate: u32) -> Option<f32> {
// BPM range: 60-200 BPM
let min_bpm = 60.0;
let max_bpm = 200.0;
let min_lag = (60.0 * sample_rate as f32 / max_bpm) as usize;
let max_lag = (60.0 * sample_rate as f32 / min_bpm) as usize;
if max_lag >= onset_envelope.len() / 2 {
return None;
}
// Calculate autocorrelation for tempo range
let mut best_lag = min_lag;
let mut best_correlation = 0.0;
for lag in min_lag..=max_lag {
let mut correlation = 0.0;
let mut count = 0;
for i in 0..(onset_envelope.len() - lag) {
correlation += onset_envelope[i] * onset_envelope[i + lag];
count += 1;
}
if count > 0 {
correlation /= count as f32;
// Bias toward faster tempos slightly (common in EDM)
let bias = 1.0 + (lag as f32 - min_lag as f32) / (max_lag - min_lag) as f32 * 0.1;
correlation /= bias;
if correlation > best_correlation {
best_correlation = correlation;
best_lag = lag;
}
}
}
// Convert best lag to BPM
let bpm = 60.0 * sample_rate as f32 / best_lag as f32;
// Check for octave errors by testing multiples
// Common ranges: 60-90 (slow), 90-140 (medium), 140-200 (fast)
let half_bpm = bpm / 2.0;
let double_bpm = bpm * 2.0;
let quad_bpm = bpm * 4.0;
// Choose the octave that falls in the most common range (100-180 BPM for EDM/pop)
let final_bpm = if quad_bpm >= 100.0 && quad_bpm <= 200.0 {
// Very slow detection, multiply by 4
quad_bpm
} else if double_bpm >= 100.0 && double_bpm <= 200.0 {
// Slow detection, multiply by 2
double_bpm
} else if bpm >= 100.0 && bpm <= 200.0 {
// Already in good range
bpm
} else if half_bpm >= 100.0 && half_bpm <= 200.0 {
// Too fast detection, divide by 2
half_bpm
} else {
// Outside ideal range, use as-is
bpm
};
// Round to nearest 0.5 BPM for cleaner values
Some((final_bpm * 2.0).round() / 2.0)
}
/// Detect onsets (beat events) in audio using energy-based method
fn detect_onsets(audio: &[f32], sample_rate: u32) -> Vec<usize> {
let mut onsets = Vec::new();
// Window size for energy calculation (~20ms)
let window_size = ((sample_rate as f32 * 0.02) as usize).max(1);
let hop_size = window_size / 2;
if audio.len() < window_size {
return onsets;
}
// Calculate energy for each window
let mut energies = Vec::new();
let mut pos = 0;
while pos + window_size <= audio.len() {
let window = &audio[pos..pos + window_size];
let energy: f32 = window.iter().map(|&s| s * s).sum();
energies.push(energy / window_size as f32); // Normalize
pos += hop_size;
}
if energies.len() < 3 {
return onsets;
}
// Calculate energy differences (onset strength)
let mut onset_strengths = Vec::new();
for i in 1..energies.len() {
let diff = (energies[i] - energies[i - 1]).max(0.0); // Only positive changes
onset_strengths.push(diff);
}
// Find threshold (adaptive)
let mean_strength: f32 = onset_strengths.iter().sum::<f32>() / onset_strengths.len() as f32;
let threshold = mean_strength * 1.5; // 1.5x mean
// Peak picking with minimum distance
let min_distance = sample_rate as usize / 10; // Minimum 100ms between onsets
let mut last_onset = 0;
for (i, &strength) in onset_strengths.iter().enumerate() {
if strength > threshold {
let sample_pos = (i + 1) * hop_size;
// Check if it's a local maximum and far enough from last onset
let is_local_max = (i == 0 || onset_strengths[i - 1] <= strength) &&
(i == onset_strengths.len() - 1 || onset_strengths[i + 1] < strength);
if is_local_max && (onsets.is_empty() || sample_pos - last_onset >= min_distance) {
onsets.push(sample_pos);
last_onset = sample_pos;
}
}
}
onsets
}
/// Real-time BPM detector for streaming audio
pub struct BpmDetectorRealtime {
sample_rate: u32,
// Circular buffer for recent audio (e.g., 10 seconds)
audio_buffer: VecDeque<f32>,
max_buffer_samples: usize,
// Current BPM estimate
current_bpm: f32,
// Update interval (samples)
samples_since_update: usize,
update_interval: usize,
// Smoothing
bpm_history: VecDeque<f32>,
history_size: usize,
}
impl BpmDetectorRealtime {
pub fn new(sample_rate: u32, buffer_duration_seconds: f32) -> Self {
let max_buffer_samples = (sample_rate as f32 * buffer_duration_seconds) as usize;
let update_interval = sample_rate as usize; // Update every 1 second
Self {
sample_rate,
audio_buffer: VecDeque::with_capacity(max_buffer_samples),
max_buffer_samples,
current_bpm: 120.0, // Default BPM
samples_since_update: 0,
update_interval,
bpm_history: VecDeque::with_capacity(8),
history_size: 8,
}
}
/// Process a chunk of audio and return current BPM estimate
pub fn process(&mut self, audio: &[f32]) -> f32 {
// Add samples to buffer
for &sample in audio {
if self.audio_buffer.len() >= self.max_buffer_samples {
self.audio_buffer.pop_front();
}
self.audio_buffer.push_back(sample);
}
self.samples_since_update += audio.len();
// Periodically re-analyze
if self.samples_since_update >= self.update_interval && self.audio_buffer.len() > self.sample_rate as usize {
self.samples_since_update = 0;
// Convert buffer to slice for analysis
let buffer_vec: Vec<f32> = self.audio_buffer.iter().copied().collect();
if let Some(detected_bpm) = detect_bpm_offline(&buffer_vec, self.sample_rate) {
// Add to history for smoothing
if self.bpm_history.len() >= self.history_size {
self.bpm_history.pop_front();
}
self.bpm_history.push_back(detected_bpm);
// Use median of recent detections for stability
let mut sorted_history: Vec<f32> = self.bpm_history.iter().copied().collect();
sorted_history.sort_by(|a, b| a.partial_cmp(b).unwrap());
self.current_bpm = sorted_history[sorted_history.len() / 2];
}
}
self.current_bpm
}
pub fn get_bpm(&self) -> f32 {
self.current_bpm
}
pub fn reset(&mut self) {
self.audio_buffer.clear();
self.bpm_history.clear();
self.samples_since_update = 0;
self.current_bpm = 120.0;
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_120_bpm_detection() {
let sample_rate = 48000;
let bpm = 120.0;
let beat_interval = 60.0 / bpm;
let beat_samples = (sample_rate as f32 * beat_interval) as usize;
// Generate 8 beats
let mut audio = vec![0.0; beat_samples * 8];
for beat in 0..8 {
let pos = beat * beat_samples;
// Add a sharp transient at each beat
for i in 0..100 {
audio[pos + i] = (1.0 - i as f32 / 100.0) * 0.8;
}
}
let detected = detect_bpm_offline(&audio, sample_rate);
assert!(detected.is_some());
let detected_bpm = detected.unwrap();
// Allow 5% tolerance
assert!((detected_bpm - bpm).abs() / bpm < 0.05,
"Expected ~{} BPM, got {}", bpm, detected_bpm);
}
}
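
A hedged usage sketch of both detectors, mirroring the unit test's synthetic click track; the `daw_backend::audio` module path is an assumption based on the `crate::audio::` imports elsewhere in this diff:

```rust
use daw_backend::audio::bpm_detector::{detect_bpm_offline, BpmDetectorRealtime};

fn main() {
    // Synthesize 8 beats at 120 BPM: a sharp decaying transient every 0.5 s.
    let sample_rate = 48_000u32;
    let beat_samples = (sample_rate / 2) as usize;
    let mut audio = vec![0.0f32; beat_samples * 8];
    for beat in 0..8 {
        for i in 0..100 {
            audio[beat * beat_samples + i] = (1.0 - i as f32 / 100.0) * 0.8;
        }
    }

    if let Some(bpm) = detect_bpm_offline(&audio, sample_rate) {
        println!("offline estimate: {bpm:.1} BPM");
    }

    // Streaming variant: feed ~1 s chunks; the estimate is median-smoothed
    // over the last 8 detections, so it stabilizes after a few seconds.
    let mut detector = BpmDetectorRealtime::new(sample_rate, 10.0);
    for chunk in audio.chunks(sample_rate as usize) {
        detector.process(chunk);
    }
    println!("streaming estimate: {:.1} BPM", detector.get_bpm());
}
```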


@@ -0,0 +1,149 @@
use std::sync::atomic::{AtomicUsize, Ordering};
/// Pool of reusable audio buffers for recursive group rendering
///
/// This pool allows groups to acquire temporary buffers for submixing
/// child tracks without allocating memory in the audio thread.
pub struct BufferPool {
buffers: Vec<Vec<f32>>,
available: Vec<usize>,
buffer_size: usize,
/// Tracks the number of times a buffer had to be allocated (not reused)
/// This should be zero during steady-state playback
total_allocations: AtomicUsize,
/// Peak number of buffers simultaneously in use
peak_usage: AtomicUsize,
}
impl BufferPool {
/// Create a new buffer pool
///
/// # Arguments
/// * `initial_capacity` - Number of buffers to pre-allocate
/// * `buffer_size` - Size of each buffer in samples
pub fn new(initial_capacity: usize, buffer_size: usize) -> Self {
let mut buffers = Vec::with_capacity(initial_capacity);
let mut available = Vec::with_capacity(initial_capacity);
// Pre-allocate buffers
for i in 0..initial_capacity {
buffers.push(vec![0.0; buffer_size]);
available.push(i);
}
Self {
buffers,
available,
buffer_size,
total_allocations: AtomicUsize::new(0),
peak_usage: AtomicUsize::new(0),
}
}
/// Acquire a buffer from the pool
///
/// Returns a zeroed buffer ready for use. If no buffers are available,
/// allocates a new one (though this should be avoided in the audio thread).
pub fn acquire(&mut self) -> Vec<f32> {
// Track peak usage
let current_in_use = self.buffers.len() - self.available.len();
let peak = self.peak_usage.load(Ordering::Relaxed);
if current_in_use > peak {
self.peak_usage.store(current_in_use, Ordering::Relaxed);
}
if let Some(idx) = self.available.pop() {
// Reuse an existing buffer
let mut buf = std::mem::take(&mut self.buffers[idx]);
buf.fill(0.0);
buf
} else {
// No buffers available, allocate a new one
// This should be rare if the pool is sized correctly
self.total_allocations.fetch_add(1, Ordering::Relaxed);
vec![0.0; self.buffer_size]
}
}
/// Release a buffer back to the pool
///
/// # Arguments
/// * `buffer` - The buffer to return to the pool
    pub fn release(&mut self, buffer: Vec<f32>) {
        // Only return buffers of the correct size to the pool
        if buffer.len() == self.buffer_size {
            // Refill a slot emptied by acquire() instead of always pushing a
            // new one, so repeated acquire/release cycles don't grow the
            // buffer list with the empty husks left behind by mem::take
            if let Some(idx) = self.buffers.iter().position(|b| b.is_empty()) {
                self.buffers[idx] = buffer;
                self.available.push(idx);
            } else {
                let idx = self.buffers.len();
                self.buffers.push(buffer);
                self.available.push(idx);
            }
        }
        // Otherwise, drop the buffer (wrong size, shouldn't happen normally)
    }
/// Get the configured buffer size
pub fn buffer_size(&self) -> usize {
self.buffer_size
}
/// Get the number of available buffers
pub fn available_count(&self) -> usize {
self.available.len()
}
/// Get the total number of buffers in the pool
pub fn total_count(&self) -> usize {
self.buffers.len()
}
/// Get the total number of allocations that occurred (excluding pre-allocated buffers)
///
/// This should be zero during steady-state playback. If non-zero, the pool
/// should be resized to avoid allocations in the audio thread.
pub fn allocation_count(&self) -> usize {
self.total_allocations.load(Ordering::Relaxed)
}
/// Get the peak number of buffers simultaneously in use
///
/// Use this to determine the optimal initial_capacity for your workload.
pub fn peak_usage(&self) -> usize {
self.peak_usage.load(Ordering::Relaxed)
}
/// Reset allocation statistics
///
/// Useful for benchmarking steady-state performance after warmup.
pub fn reset_stats(&mut self) {
self.total_allocations.store(0, Ordering::Relaxed);
self.peak_usage.store(0, Ordering::Relaxed);
}
/// Get comprehensive pool statistics
pub fn stats(&self) -> BufferPoolStats {
BufferPoolStats {
total_buffers: self.total_count(),
available_buffers: self.available_count(),
in_use_buffers: self.total_count() - self.available_count(),
peak_usage: self.peak_usage(),
total_allocations: self.allocation_count(),
buffer_size: self.buffer_size,
}
}
}
/// Statistics about buffer pool usage
#[derive(Debug, Clone, Copy)]
pub struct BufferPoolStats {
pub total_buffers: usize,
pub available_buffers: usize,
pub in_use_buffers: usize,
pub peak_usage: usize,
pub total_allocations: usize,
pub buffer_size: usize,
}
impl Default for BufferPool {
fn default() -> Self {
        // Default: 8 buffers of 4096 samples (~43 ms per buffer at 48 kHz
        // stereo, or ~85 ms mono)
Self::new(8, 4096)
}
}
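
A short sketch of the intended acquire/release cycle and how the statistics guide pool sizing (crate-root re-export of `BufferPool` assumed, as in the module file below):

```rust
use daw_backend::BufferPool;

fn main() {
    // Pre-allocate four 4096-sample buffers outside the audio thread.
    let mut pool = BufferPool::new(4, 4096);

    let a = pool.acquire(); // zeroed buffer reused from the pool
    let b = pool.acquire();
    assert_eq!(pool.available_count(), 2);

    pool.release(a);
    pool.release(b);

    // During steady-state playback allocation_count() should stay at zero;
    // if it grows, raise initial_capacity to at least peak_usage().
    assert_eq!(pool.allocation_count(), 0);
    println!("{:?}", pool.stats());
}
```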


@@ -0,0 +1,49 @@
/// Clip ID type
pub type ClipId = u32;
/// Audio clip that references data in the AudioPool
#[derive(Debug, Clone)]
pub struct Clip {
pub id: ClipId,
pub audio_pool_index: usize,
pub start_time: f64, // Position on timeline in seconds
pub duration: f64, // Clip duration in seconds
pub offset: f64, // Offset into audio file in seconds
pub gain: f32, // Clip-level gain
}
impl Clip {
/// Create a new clip
pub fn new(
id: ClipId,
audio_pool_index: usize,
start_time: f64,
duration: f64,
offset: f64,
) -> Self {
Self {
id,
audio_pool_index,
start_time,
duration,
offset,
gain: 1.0,
}
}
/// Check if this clip is active at a given timeline position
pub fn is_active_at(&self, time_seconds: f64) -> bool {
let clip_end = self.start_time + self.duration;
time_seconds >= self.start_time && time_seconds < clip_end
}
/// Get the end time of this clip on the timeline
pub fn end_time(&self) -> f64 {
self.start_time + self.duration
}
/// Set clip gain
pub fn set_gain(&mut self, gain: f32) {
self.gain = gain.max(0.0);
}
}
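
The timeline arithmetic above in action (constructor argument order as defined in `Clip::new`, crate-root re-export assumed):

```rust
use daw_backend::Clip;

fn main() {
    // A 4-second clip placed at t = 10 s, reading its source audio
    // starting 1 s into pool entry 0.
    let mut clip = Clip::new(1, 0, 10.0, 4.0, 1.0);
    clip.set_gain(0.5);

    assert!(clip.is_active_at(10.0));  // start is inclusive
    assert!(!clip.is_active_at(14.0)); // end is exclusive
    assert_eq!(clip.end_time(), 14.0);
}
```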

File diff suppressed because it is too large


@@ -0,0 +1,262 @@
use super::buffer_pool::BufferPool;
use super::pool::AudioPool;
use super::project::Project;
use std::path::Path;
/// Supported export formats
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ExportFormat {
Wav,
Flac,
// TODO: Add MP3 support
}
impl ExportFormat {
/// Get the file extension for this format
pub fn extension(&self) -> &'static str {
match self {
ExportFormat::Wav => "wav",
ExportFormat::Flac => "flac",
}
}
}
/// Export settings for rendering audio
#[derive(Debug, Clone)]
pub struct ExportSettings {
/// Output format
pub format: ExportFormat,
/// Sample rate for export
pub sample_rate: u32,
/// Number of channels (1 = mono, 2 = stereo)
pub channels: u32,
/// Bit depth (16 or 24) - only for WAV/FLAC
pub bit_depth: u16,
/// MP3 bitrate in kbps (128, 192, 256, 320)
pub mp3_bitrate: u32,
/// Start time in seconds
pub start_time: f64,
/// End time in seconds
pub end_time: f64,
}
impl Default for ExportSettings {
fn default() -> Self {
Self {
format: ExportFormat::Wav,
sample_rate: 44100,
channels: 2,
bit_depth: 16,
mp3_bitrate: 320,
start_time: 0.0,
end_time: 60.0,
}
}
}
/// Export the project to an audio file
///
/// This performs offline rendering, processing the entire timeline
/// in chunks to generate the final audio file.
pub fn export_audio<P: AsRef<Path>>(
project: &mut Project,
pool: &AudioPool,
settings: &ExportSettings,
output_path: P,
) -> Result<(), String> {
// Render the project to memory
let samples = render_to_memory(project, pool, settings)?;
// Write to file based on format
match settings.format {
ExportFormat::Wav => write_wav(&samples, settings, output_path)?,
ExportFormat::Flac => write_flac(&samples, settings, output_path)?,
}
Ok(())
}
/// Render the project to memory
fn render_to_memory(
project: &mut Project,
pool: &AudioPool,
settings: &ExportSettings,
) -> Result<Vec<f32>, String> {
// Calculate total number of frames
let duration = settings.end_time - settings.start_time;
let total_frames = (duration * settings.sample_rate as f64).round() as usize;
let total_samples = total_frames * settings.channels as usize;
println!("Export: duration={:.3}s, total_frames={}, total_samples={}, channels={}",
duration, total_frames, total_samples, settings.channels);
// Render in chunks to avoid memory issues
const CHUNK_FRAMES: usize = 4096;
let chunk_samples = CHUNK_FRAMES * settings.channels as usize;
// Create buffer for rendering
let mut render_buffer = vec![0.0f32; chunk_samples];
let mut buffer_pool = BufferPool::new(16, chunk_samples);
// Collect all rendered samples
let mut all_samples = Vec::with_capacity(total_samples);
let mut playhead = settings.start_time;
let chunk_duration = CHUNK_FRAMES as f64 / settings.sample_rate as f64;
// Render the entire timeline in chunks
while playhead < settings.end_time {
// Clear the render buffer
render_buffer.fill(0.0);
// Render this chunk
project.render(
&mut render_buffer,
pool,
&mut buffer_pool,
playhead,
settings.sample_rate,
settings.channels,
);
// Calculate how many samples we actually need from this chunk
let remaining_time = settings.end_time - playhead;
let samples_needed = if remaining_time < chunk_duration {
// Calculate frames needed and ensure it's a whole number
let frames_needed = (remaining_time * settings.sample_rate as f64).round() as usize;
let samples = frames_needed * settings.channels as usize;
// Ensure we don't exceed chunk size
samples.min(chunk_samples)
} else {
chunk_samples
};
// Append to output
all_samples.extend_from_slice(&render_buffer[..samples_needed]);
playhead += chunk_duration;
}
println!("Export: rendered {} samples total", all_samples.len());
// Verify the sample count is a multiple of channels
if all_samples.len() % settings.channels as usize != 0 {
return Err(format!(
"Sample count {} is not a multiple of channel count {}",
all_samples.len(),
settings.channels
));
}
Ok(all_samples)
}
/// Write WAV file using hound
fn write_wav<P: AsRef<Path>>(
samples: &[f32],
settings: &ExportSettings,
output_path: P,
) -> Result<(), String> {
let spec = hound::WavSpec {
channels: settings.channels as u16,
sample_rate: settings.sample_rate,
bits_per_sample: settings.bit_depth,
sample_format: hound::SampleFormat::Int,
};
let mut writer = hound::WavWriter::create(output_path, spec)
.map_err(|e| format!("Failed to create WAV file: {}", e))?;
// Write samples
match settings.bit_depth {
16 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 32767.0) as i16;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
24 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 8388607.0) as i32;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
_ => return Err(format!("Unsupported bit depth: {}", settings.bit_depth)),
}
writer.finalize()
.map_err(|e| format!("Failed to finalize WAV file: {}", e))?;
Ok(())
}
/// Write a FLAC export (NOTE: hound only writes WAV containers, so this
/// currently emits WAV-format PCM at the requested path; true FLAC
/// encoding is still TODO)
fn write_flac<P: AsRef<Path>>(
samples: &[f32],
settings: &ExportSettings,
output_path: P,
) -> Result<(), String> {
    // For now we use hound, which only writes WAV containers, so the output
    // is WAV-format PCM; a dedicated FLAC encoder should replace this
let spec = hound::WavSpec {
channels: settings.channels as u16,
sample_rate: settings.sample_rate,
bits_per_sample: settings.bit_depth,
sample_format: hound::SampleFormat::Int,
};
let mut writer = hound::WavWriter::create(output_path, spec)
.map_err(|e| format!("Failed to create FLAC file: {}", e))?;
// Write samples (same as WAV for now)
match settings.bit_depth {
16 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 32767.0) as i16;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
24 => {
for &sample in samples {
let clamped = sample.max(-1.0).min(1.0);
let pcm_value = (clamped * 8388607.0) as i32;
writer.write_sample(pcm_value)
.map_err(|e| format!("Failed to write sample: {}", e))?;
}
}
_ => return Err(format!("Unsupported bit depth: {}", settings.bit_depth)),
}
writer.finalize()
.map_err(|e| format!("Failed to finalize FLAC file: {}", e))?;
Ok(())
}
// TODO: Add MP3 export support with a better library
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_export_settings_default() {
let settings = ExportSettings::default();
assert_eq!(settings.format, ExportFormat::Wav);
assert_eq!(settings.sample_rate, 44100);
assert_eq!(settings.channels, 2);
assert_eq!(settings.bit_depth, 16);
}
#[test]
fn test_format_extension() {
assert_eq!(ExportFormat::Wav.extension(), "wav");
assert_eq!(ExportFormat::Flac.extension(), "flac");
}
}
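
A sketch of driving the exporter; `Project` and `AudioPool` construction is omitted because their implementations are suppressed in this diff, and the crate-root re-exports are assumed from the module file below:

```rust
use daw_backend::{export_audio, AudioPool, ExportFormat, ExportSettings, Project};

fn export_mix(project: &mut Project, pool: &AudioPool) -> Result<(), String> {
    // Render the first 30 seconds of the timeline to 24-bit / 48 kHz stereo WAV.
    let settings = ExportSettings {
        format: ExportFormat::Wav,
        sample_rate: 48_000,
        bit_depth: 24,
        end_time: 30.0,
        ..ExportSettings::default()
    };
    export_audio(project, pool, &settings, "mix.wav")
}
```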


@@ -0,0 +1,169 @@
/// Metronome for providing click track during playback
pub struct Metronome {
enabled: bool,
bpm: f32,
time_signature_numerator: u32,
time_signature_denominator: u32,
last_beat: i64, // Last beat number that was played (-1 = none)
// Pre-generated click samples (mono)
high_click: Vec<f32>, // Accent click for first beat
low_click: Vec<f32>, // Normal click for other beats
// Click playback state
click_position: usize, // Current position in the click sample (0 = not playing)
playing_high_click: bool, // Which click we're currently playing
#[allow(dead_code)]
sample_rate: u32,
}
impl Metronome {
/// Create a new metronome with pre-generated click sounds
pub fn new(sample_rate: u32) -> Self {
let (high_click, low_click) = Self::generate_clicks(sample_rate);
Self {
enabled: false,
bpm: 120.0,
time_signature_numerator: 4,
time_signature_denominator: 4,
last_beat: -1,
high_click,
low_click,
click_position: 0,
playing_high_click: false,
sample_rate,
}
}
/// Generate woodblock-style click samples
fn generate_clicks(sample_rate: u32) -> (Vec<f32>, Vec<f32>) {
let click_duration_ms = 10.0; // 10ms click
let click_samples = ((sample_rate as f32 * click_duration_ms) / 1000.0) as usize;
// High click (accent): 1200 Hz + 2400 Hz (higher pitched woodblock)
let high_freq1 = 1200.0;
let high_freq2 = 2400.0;
let mut high_click = Vec::with_capacity(click_samples);
for i in 0..click_samples {
let t = i as f32 / sample_rate as f32;
let envelope = 1.0 - (i as f32 / click_samples as f32); // Linear decay
let envelope = envelope * envelope; // Square for faster decay
// Mix two sine waves for woodblock character
let sample = 0.3 * (2.0 * std::f32::consts::PI * high_freq1 * t).sin()
+ 0.2 * (2.0 * std::f32::consts::PI * high_freq2 * t).sin();
// Add a bit of noise for attack transient
let noise = (i as f32 * 0.1).sin() * 0.1;
high_click.push((sample + noise) * envelope * 0.5); // Scale down to avoid clipping
}
// Low click: 800 Hz + 1600 Hz (lower pitched woodblock)
let low_freq1 = 800.0;
let low_freq2 = 1600.0;
let mut low_click = Vec::with_capacity(click_samples);
for i in 0..click_samples {
let t = i as f32 / sample_rate as f32;
let envelope = 1.0 - (i as f32 / click_samples as f32);
let envelope = envelope * envelope;
let sample = 0.3 * (2.0 * std::f32::consts::PI * low_freq1 * t).sin()
+ 0.2 * (2.0 * std::f32::consts::PI * low_freq2 * t).sin();
let noise = (i as f32 * 0.1).sin() * 0.1;
low_click.push((sample + noise) * envelope * 0.4); // Slightly quieter than high click
}
(high_click, low_click)
}
/// Enable or disable the metronome
pub fn set_enabled(&mut self, enabled: bool) {
self.enabled = enabled;
if !enabled {
self.last_beat = -1; // Reset beat tracking when disabled
self.click_position = 0; // Stop any playing click
} else {
// When enabling, don't trigger a click until the next beat
self.click_position = usize::MAX; // Set to max to prevent immediate click
}
}
/// Update BPM and time signature
pub fn update_timing(&mut self, bpm: f32, time_signature: (u32, u32)) {
self.bpm = bpm;
self.time_signature_numerator = time_signature.0;
self.time_signature_denominator = time_signature.1;
}
/// Process audio and mix in metronome clicks
pub fn process(
&mut self,
output: &mut [f32],
playhead_samples: u64,
playing: bool,
sample_rate: u32,
channels: u32,
) {
if !self.enabled || !playing {
self.click_position = 0; // Reset if not playing
return;
}
let frames = output.len() / channels as usize;
for frame in 0..frames {
let current_sample = playhead_samples + frame as u64;
// Calculate current beat number
let current_time_seconds = current_sample as f64 / sample_rate as f64;
let beats_per_second = self.bpm as f64 / 60.0;
let current_beat = (current_time_seconds * beats_per_second).floor() as i64;
// Check if we crossed a beat boundary
if current_beat != self.last_beat && current_beat >= 0 {
self.last_beat = current_beat;
// Only trigger a click if we're not in the "just enabled" state
if self.click_position != usize::MAX {
// Determine which click to play
// Beat 1 of each measure gets the accent (high click)
let beat_in_measure = (current_beat as u32 % self.time_signature_numerator) as usize;
let is_first_beat = beat_in_measure == 0;
// Start playing the appropriate click
self.playing_high_click = is_first_beat;
self.click_position = 0; // Start from beginning of click
} else {
// We just got enabled - reset position but don't play yet
self.click_position = self.high_click.len(); // Set past end so no click plays
}
}
// Continue playing click sample if we're currently in one
let click = if self.playing_high_click {
&self.high_click
} else {
&self.low_click
};
if self.click_position < click.len() {
let click_sample = click[self.click_position];
// Mix into all channels
for ch in 0..channels as usize {
let output_idx = frame * channels as usize + ch;
output[output_idx] += click_sample;
}
self.click_position += 1;
}
}
}
}
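
A minimal sketch of wiring the metronome into an audio callback; the interleaved-stereo buffer layout follows the `process()` signature above, and the crate-root re-export is assumed:

```rust
use daw_backend::Metronome;

fn main() {
    let sample_rate = 48_000;
    let mut metronome = Metronome::new(sample_rate);
    metronome.update_timing(100.0, (3, 4)); // 100 BPM in 3/4 time
    metronome.set_enabled(true);

    // One 1024-frame callback of interleaved stereo, starting at t = 0.
    // Clicks are mixed (added) on top of whatever is already in `output`.
    let mut output = vec![0.0f32; 1024 * 2];
    metronome.process(&mut output, 0, true, sample_rate, 2);
}
```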


@@ -0,0 +1,138 @@
/// MIDI event representing a single MIDI message
#[derive(Debug, Clone, Copy, serde::Serialize, serde::Deserialize)]
pub struct MidiEvent {
/// Time position within the clip in seconds (sample-rate independent)
pub timestamp: f64,
/// MIDI status byte (includes channel)
pub status: u8,
/// First data byte (note number, CC number, etc.)
pub data1: u8,
/// Second data byte (velocity, CC value, etc.)
pub data2: u8,
}
impl MidiEvent {
/// Create a new MIDI event
pub fn new(timestamp: f64, status: u8, data1: u8, data2: u8) -> Self {
Self {
timestamp,
status,
data1,
data2,
}
}
/// Create a note on event
pub fn note_on(timestamp: f64, channel: u8, note: u8, velocity: u8) -> Self {
Self {
timestamp,
status: 0x90 | (channel & 0x0F),
data1: note,
data2: velocity,
}
}
/// Create a note off event
pub fn note_off(timestamp: f64, channel: u8, note: u8, velocity: u8) -> Self {
Self {
timestamp,
status: 0x80 | (channel & 0x0F),
data1: note,
data2: velocity,
}
}
/// Check if this is a note on event (with non-zero velocity)
pub fn is_note_on(&self) -> bool {
(self.status & 0xF0) == 0x90 && self.data2 > 0
}
/// Check if this is a note off event (or note on with zero velocity)
pub fn is_note_off(&self) -> bool {
(self.status & 0xF0) == 0x80 || ((self.status & 0xF0) == 0x90 && self.data2 == 0)
}
/// Get the MIDI channel (0-15)
pub fn channel(&self) -> u8 {
self.status & 0x0F
}
/// Get the message type (upper 4 bits of status)
pub fn message_type(&self) -> u8 {
self.status & 0xF0
}
}
/// MIDI clip ID type
pub type MidiClipId = u32;
/// MIDI clip containing a sequence of MIDI events
#[derive(Debug, Clone)]
pub struct MidiClip {
pub id: MidiClipId,
pub events: Vec<MidiEvent>,
pub start_time: f64, // Position on timeline in seconds
pub duration: f64, // Clip duration in seconds
pub loop_enabled: bool,
}
impl MidiClip {
/// Create a new MIDI clip
pub fn new(id: MidiClipId, start_time: f64, duration: f64) -> Self {
Self {
id,
events: Vec::new(),
start_time,
duration,
loop_enabled: false,
}
}
/// Add a MIDI event to the clip
pub fn add_event(&mut self, event: MidiEvent) {
self.events.push(event);
// Keep events sorted by timestamp (using partial_cmp for f64)
self.events.sort_by(|a, b| a.timestamp.partial_cmp(&b.timestamp).unwrap());
}
/// Get the end time of the clip
pub fn end_time(&self) -> f64 {
self.start_time + self.duration
}
    /// Get events that should be triggered in a given time range
    ///
    /// Returns the events whose timestamps (in seconds, relative to the clip
    /// start) fall within the intersection of the range and this clip
pub fn get_events_in_range(
&self,
range_start_seconds: f64,
range_end_seconds: f64,
_sample_rate: u32,
) -> Vec<MidiEvent> {
let mut result = Vec::new();
// Check if clip overlaps with the range
if range_start_seconds >= self.end_time() || range_end_seconds <= self.start_time {
return result;
}
// Calculate the intersection
let play_start = range_start_seconds.max(self.start_time);
let play_end = range_end_seconds.min(self.end_time());
// Position within the clip
let clip_position_seconds = play_start - self.start_time;
let clip_end_seconds = play_end - self.start_time;
// Find events in this range
// Note: event.timestamp is now in seconds relative to clip start
// Use half-open interval [start, end) to avoid triggering events twice
for event in &self.events {
if event.timestamp >= clip_position_seconds && event.timestamp < clip_end_seconds {
result.push(*event);
}
}
result
}
}
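
Putting the helpers above together: building a clip and querying a render window (the window is half-open, so each event fires exactly once across consecutive windows). Crate-root re-exports are assumed as in the module file below:

```rust
use daw_backend::{MidiClip, MidiEvent};

fn main() {
    // A 2-second clip starting at 4 s on the timeline.
    let mut clip = MidiClip::new(0, 4.0, 2.0);
    // Middle C for half a second on channel 0, velocity 100.
    clip.add_event(MidiEvent::note_on(0.0, 0, 60, 100));
    clip.add_event(MidiEvent::note_off(0.5, 0, 60, 0));

    // A render window of [4.0, 4.25) picks up only the note-on.
    let events = clip.get_events_in_range(4.0, 4.25, 48_000);
    assert_eq!(events.len(), 1);
    assert!(events[0].is_note_on());
    assert_eq!(events[0].channel(), 0);
}
```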


@@ -0,0 +1,27 @@
pub mod automation;
pub mod bpm_detector;
pub mod buffer_pool;
pub mod clip;
pub mod engine;
pub mod export;
pub mod metronome;
pub mod midi;
pub mod node_graph;
pub mod pool;
pub mod project;
pub mod recording;
pub mod sample_loader;
pub mod track;
pub use automation::{AutomationLane, AutomationLaneId, AutomationPoint, CurveType, ParameterId};
pub use buffer_pool::BufferPool;
pub use clip::{Clip, ClipId};
pub use engine::{Engine, EngineController};
pub use export::{export_audio, ExportFormat, ExportSettings};
pub use metronome::Metronome;
pub use midi::{MidiClip, MidiClipId, MidiEvent};
pub use pool::{AudioFile as PoolAudioFile, AudioPool};
pub use project::Project;
pub use recording::RecordingState;
pub use sample_loader::{load_audio_file, SampleData};
pub use track::{AudioTrack, Metatrack, MidiTrack, RenderContext, Track, TrackId, TrackNode};

File diff suppressed because it is too large Load Diff


@@ -0,0 +1,10 @@
mod graph;
mod node_trait;
mod types;
pub mod nodes;
pub mod preset;
pub use graph::{Connection, GraphNode, AudioGraph};
pub use node_trait::AudioNode;
pub use preset::{GraphPreset, PresetMetadata, SerializedConnection, SerializedNode};
pub use types::{ConnectionError, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};


@@ -0,0 +1,79 @@
use super::types::{NodeCategory, NodePort, Parameter};
use crate::audio::midi::MidiEvent;
/// Custom node trait for audio processing nodes
///
/// All nodes must be Send to be usable in the audio thread.
/// Nodes should be real-time safe: no allocations, no blocking operations.
pub trait AudioNode: Send {
/// Node category for UI organization
fn category(&self) -> NodeCategory;
/// Input port definitions
fn inputs(&self) -> &[NodePort];
/// Output port definitions
fn outputs(&self) -> &[NodePort];
/// User-facing parameters
fn parameters(&self) -> &[Parameter];
/// Set parameter by ID
fn set_parameter(&mut self, id: u32, value: f32);
/// Get parameter by ID
fn get_parameter(&self, id: u32) -> f32;
/// Process audio buffers
///
/// # Arguments
/// * `inputs` - Audio/CV input buffers for each input port
/// * `outputs` - Audio/CV output buffers for each output port
/// * `midi_inputs` - MIDI event buffers for each MIDI input port
/// * `midi_outputs` - MIDI event buffers for each MIDI output port
/// * `sample_rate` - Current sample rate in Hz
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
midi_inputs: &[&[MidiEvent]],
midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
);
/// Handle MIDI events (for nodes with MIDI inputs)
fn handle_midi(&mut self, _event: &MidiEvent) {
// Default: do nothing
}
/// Reset internal state (clear delays, resonances, etc.)
fn reset(&mut self);
/// Get the node type name (for serialization)
fn node_type(&self) -> &str;
/// Get a unique identifier for this node instance
fn name(&self) -> &str;
/// Clone this node into a new boxed instance
/// Required for VoiceAllocator to create multiple instances
fn clone_node(&self) -> Box<dyn AudioNode>;
/// Get oscilloscope data if this is an oscilloscope node
/// Returns None for non-oscilloscope nodes
fn get_oscilloscope_data(&self, _sample_count: usize) -> Option<Vec<f32>> {
None
}
/// Get oscilloscope CV data if this is an oscilloscope node
/// Returns None for non-oscilloscope nodes
fn get_oscilloscope_cv_data(&self, _sample_count: usize) -> Option<Vec<f32>> {
None
}
/// Downcast to `&mut dyn Any` for type-specific operations
fn as_any_mut(&mut self) -> &mut dyn std::any::Any;
/// Downcast to `&dyn Any` for type-specific read-only operations
fn as_any(&self) -> &dyn std::any::Any;
}
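
For orientation, a minimal sketch of an implementation of this trait (GainNode and its single parameter are hypothetical, not part of this diff; the imports assume the re-exports in the node_graph mod.rs above):

use crate::audio::midi::MidiEvent;
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};

pub struct GainNode {
    name: String,
    gain: f32,
    inputs: Vec<NodePort>,
    outputs: Vec<NodePort>,
    parameters: Vec<Parameter>,
}

impl GainNode {
    pub fn new(name: impl Into<String>) -> Self {
        Self {
            name: name.into(),
            gain: 1.0,
            inputs: vec![NodePort::new("Audio In", SignalType::Audio, 0)],
            outputs: vec![NodePort::new("Audio Out", SignalType::Audio, 0)],
            parameters: vec![Parameter::new(0, "Gain", 0.0, 4.0, 1.0, ParameterUnit::Generic)],
        }
    }
}

impl AudioNode for GainNode {
    fn category(&self) -> NodeCategory { NodeCategory::Utility }
    fn inputs(&self) -> &[NodePort] { &self.inputs }
    fn outputs(&self) -> &[NodePort] { &self.outputs }
    fn parameters(&self) -> &[Parameter] { &self.parameters }
    fn set_parameter(&mut self, id: u32, value: f32) {
        if id == 0 { self.gain = value.clamp(0.0, 4.0); }
    }
    fn get_parameter(&self, id: u32) -> f32 {
        if id == 0 { self.gain } else { 0.0 }
    }
    fn process(
        &mut self,
        inputs: &[&[f32]],
        outputs: &mut [&mut [f32]],
        _midi_inputs: &[&[MidiEvent]],
        _midi_outputs: &mut [&mut Vec<MidiEvent>],
        _sample_rate: u32,
    ) {
        if inputs.is_empty() || outputs.is_empty() { return; }
        // Real-time safe: no allocation, just a scaled copy of the input
        for (out, inp) in outputs[0].iter_mut().zip(inputs[0].iter()) {
            *out = *inp * self.gain;
        }
    }
    fn reset(&mut self) {}
    fn node_type(&self) -> &str { "Gain" }
    fn name(&self) -> &str { &self.name }
    fn clone_node(&self) -> Box<dyn AudioNode> {
        Box::new(Self {
            name: self.name.clone(),
            gain: self.gain,
            inputs: self.inputs.clone(),
            outputs: self.outputs.clone(),
            parameters: self.parameters.clone(),
        })
    }
    fn as_any_mut(&mut self) -> &mut dyn std::any::Any { self }
    fn as_any(&self) -> &dyn std::any::Any { self }
}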

View File

@@ -0,0 +1,46 @@
#!/bin/bash
for file in *.rs; do
if [ "$file" = "mod.rs" ]; then
continue
fi
echo "Processing $file"
# Create a backup
cp "$file" "$file.bak"
# Add as_any() method right after as_any_mut()
awk '
{
lines[NR] = $0
if (/fn as_any_mut\(&mut self\)/) {
# Found as_any_mut, look for its closing brace
found_method = NR
}
if (found_method > 0 && /^    }$/ && !inserted) {
closing_brace = NR
inserted = 1
}
}
END {
for (i = 1; i <= NR; i++) {
print lines[i]
if (i == closing_brace) {
print ""
print " fn as_any(&self) -> &dyn std::any::Any {"
print " self"
print " }"
}
}
}
' "$file.bak" > "$file"
# Verify the change was made
if grep -q "fn as_any(&self)" "$file"; then
echo " ✓ Successfully added as_any() to $file"
rm "$file.bak"
else
echo " ✗ Failed to add as_any() to $file - restoring backup"
mv "$file.bak" "$file"
fi
done

View File

@@ -0,0 +1,223 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_ATTACK: u32 = 0;
const PARAM_DECAY: u32 = 1;
const PARAM_SUSTAIN: u32 = 2;
const PARAM_RELEASE: u32 = 3;
#[derive(Debug, Clone, Copy, PartialEq)]
enum EnvelopeStage {
Idle,
Attack,
Decay,
Sustain,
Release,
}
/// ADSR Envelope Generator
/// Outputs a CV signal (0.0-1.0) based on gate input and ADSR parameters
pub struct ADSRNode {
name: String,
attack: f32, // seconds
decay: f32, // seconds
sustain: f32, // level (0.0-1.0)
release: f32, // seconds
stage: EnvelopeStage,
level: f32, // current envelope level
gate_was_high: bool,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl ADSRNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Gate", SignalType::CV, 0),
];
let outputs = vec![
NodePort::new("Envelope Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_ATTACK, "Attack", 0.001, 5.0, 0.01, ParameterUnit::Time),
Parameter::new(PARAM_DECAY, "Decay", 0.001, 5.0, 0.1, ParameterUnit::Time),
Parameter::new(PARAM_SUSTAIN, "Sustain", 0.0, 1.0, 0.7, ParameterUnit::Generic),
Parameter::new(PARAM_RELEASE, "Release", 0.001, 5.0, 0.2, ParameterUnit::Time),
];
Self {
name,
attack: 0.01,
decay: 0.1,
sustain: 0.7,
release: 0.2,
stage: EnvelopeStage::Idle,
level: 0.0,
gate_was_high: false,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for ADSRNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_ATTACK => self.attack = value.clamp(0.001, 5.0),
PARAM_DECAY => self.decay = value.clamp(0.001, 5.0),
PARAM_SUSTAIN => self.sustain = value.clamp(0.0, 1.0),
PARAM_RELEASE => self.release = value.clamp(0.001, 5.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_ATTACK => self.attack,
PARAM_DECAY => self.decay,
PARAM_SUSTAIN => self.sustain,
PARAM_RELEASE => self.release,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let sample_rate_f32 = sample_rate as f32;
// CV signals are mono
let frames = output.len();
for frame in 0..frames {
// Read gate input (if available)
let gate_high = if !inputs.is_empty() && frame < inputs[0].len() {
inputs[0][frame] > 0.5 // Gate is high if CV > 0.5
} else {
false
};
// Detect gate transitions
if gate_high && !self.gate_was_high {
// Note on: Start attack
self.stage = EnvelopeStage::Attack;
} else if !gate_high && self.gate_was_high {
// Note off: Start release
self.stage = EnvelopeStage::Release;
}
self.gate_was_high = gate_high;
// Process envelope stage
match self.stage {
EnvelopeStage::Idle => {
self.level = 0.0;
}
EnvelopeStage::Attack => {
// Rise from current level to 1.0
let increment = 1.0 / (self.attack * sample_rate_f32);
self.level += increment;
if self.level >= 1.0 {
self.level = 1.0;
self.stage = EnvelopeStage::Decay;
}
}
EnvelopeStage::Decay => {
// Fall from 1.0 to sustain level
let target = self.sustain;
let decrement = (1.0 - target) / (self.decay * sample_rate_f32);
self.level -= decrement;
if self.level <= target {
self.level = target;
self.stage = EnvelopeStage::Sustain;
}
}
EnvelopeStage::Sustain => {
// Hold at sustain level
self.level = self.sustain;
}
EnvelopeStage::Release => {
// Fall from current level to 0.0
let decrement = self.level / (self.release * sample_rate_f32);
self.level -= decrement;
if self.level <= 0.001 {
self.level = 0.0;
self.stage = EnvelopeStage::Idle;
}
}
}
// Write envelope value (CV is mono)
output[frame] = self.level;
}
}
fn reset(&mut self) {
self.stage = EnvelopeStage::Idle;
self.level = 0.0;
self.gate_was_high = false;
}
fn node_type(&self) -> &str {
"ADSR"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
attack: self.attack,
decay: self.decay,
sustain: self.sustain,
release: self.release,
stage: EnvelopeStage::Idle, // Reset state
level: 0.0, // Reset level
gate_was_high: false, // Reset gate
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
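
A test-style sketch of driving the node above; buffer length and sample rate are arbitrary, and parameter id 0 is PARAM_ATTACK per the constants at the top of this file:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn gate_drives_envelope() {
        let mut adsr = ADSRNode::new("env1");
        adsr.set_parameter(0, 0.005); // PARAM_ATTACK, in seconds

        let gate = vec![1.0f32; 512];    // gate held high for the whole block
        let mut env = vec![0.0f32; 512]; // mono CV output

        let inputs: [&[f32]; 1] = [&gate];
        let mut outputs: [&mut [f32]; 1] = [&mut env];
        adsr.process(&inputs, &mut outputs, &[], &mut [], 48_000);

        // The envelope rises toward 1.0 at the attack rate, then decays
        // toward sustain; dropping the gate below 0.5 on a later block
        // would start the release stage.
        assert!(env[511] > env[0]);
    }
}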

View File

@@ -0,0 +1,127 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
/// Audio input node - receives audio from audio track clip playback
/// This node acts as the entry point for audio tracks, injecting clip audio into the effects graph
pub struct AudioInputNode {
name: String,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
/// Internal buffer to hold injected audio from clips
/// This is filled externally by AudioTrack::render() before graph processing
audio_buffer: Vec<f32>,
}
impl AudioInputNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
// Audio input node has no inputs - audio is injected externally
let inputs = vec![];
// Outputs stereo audio
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
Self {
name,
inputs,
outputs,
audio_buffer: Vec::new(),
}
}
/// Inject audio from clip playback into this node
/// Should be called by AudioTrack::render() before processing the graph
pub fn inject_audio(&mut self, audio: &[f32]) {
self.audio_buffer.clear();
self.audio_buffer.extend_from_slice(audio);
}
/// Clear the internal audio buffer
pub fn clear_buffer(&mut self) {
self.audio_buffer.clear();
}
}
impl AudioNode for AudioInputNode {
fn category(&self) -> NodeCategory {
NodeCategory::Input
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&[] // No parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let len = output.len().min(self.audio_buffer.len());
// Copy audio from internal buffer to output
if len > 0 {
output[..len].copy_from_slice(&self.audio_buffer[..len]);
}
// Clear any remaining samples in output
if output.len() > len {
output[len..].fill(0.0);
}
}
fn reset(&mut self) {
self.audio_buffer.clear();
}
fn node_type(&self) -> &str {
"AudioInput"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
audio_buffer: Vec::new(), // Don't clone the buffer, start fresh
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
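
A test-style sketch of the injection flow described above (buffer sizes and the 48 kHz rate are arbitrary assumptions):

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn injected_audio_passes_through() {
        let mut input_node = AudioInputNode::new("clip_in");

        // Stereo interleaved audio, as a track's clips would render it
        let clip_audio = vec![0.25f32; 256 * 2];
        input_node.inject_audio(&clip_audio);

        let mut out = vec![0.0f32; 256 * 2];
        let mut outputs: [&mut [f32]; 1] = [&mut out];
        input_node.process(&[], &mut outputs, &[], &mut [], 48_000);

        // The injected audio is copied to the output; had the injected
        // buffer been shorter than the block, the tail would be zero-filled.
        assert_eq!(out[0], 0.25);
    }
}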

View File

@@ -0,0 +1,159 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_ATTACK: u32 = 0;
const PARAM_RELEASE: u32 = 1;
/// Audio to CV converter (Envelope Follower)
/// Converts audio amplitude to control voltage
pub struct AudioToCVNode {
name: String,
envelope: f32, // Current envelope value
attack: f32, // Attack time in seconds
release: f32, // Release time in seconds
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl AudioToCVNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_ATTACK, "Attack", 0.001, 1.0, 0.01, ParameterUnit::Time),
Parameter::new(PARAM_RELEASE, "Release", 0.001, 1.0, 0.1, ParameterUnit::Time),
];
Self {
name,
envelope: 0.0,
attack: 0.01,
release: 0.1,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for AudioToCVNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_ATTACK => self.attack = value.clamp(0.001, 1.0),
PARAM_RELEASE => self.release = value.clamp(0.001, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_ATTACK => self.attack,
PARAM_RELEASE => self.release,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio input is stereo (interleaved L/R), CV output is mono
let audio_frames = input.len() / 2;
let cv_frames = output.len();
let frames = audio_frames.min(cv_frames);
// Calculate attack and release coefficients
let sample_rate_f32 = sample_rate as f32;
let attack_coeff = (-1.0 / (self.attack * sample_rate_f32)).exp();
let release_coeff = (-1.0 / (self.release * sample_rate_f32)).exp();
for frame in 0..frames {
// Get stereo samples
let left = input[frame * 2];
let right = input[frame * 2 + 1];
// Calculate RMS-like value (average of absolute values for simplicity)
let amplitude = (left.abs() + right.abs()) / 2.0;
// Envelope follower with attack/release
if amplitude > self.envelope {
// Attack: follow signal up quickly
self.envelope = amplitude * (1.0 - attack_coeff) + self.envelope * attack_coeff;
} else {
// Release: decay slowly
self.envelope = amplitude * (1.0 - release_coeff) + self.envelope * release_coeff;
}
// Output CV (mono)
output[frame] = self.envelope;
}
}
fn reset(&mut self) {
self.envelope = 0.0;
}
fn node_type(&self) -> &str {
"AudioToCV"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
envelope: 0.0, // Reset envelope
attack: self.attack,
release: self.release,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}

View File

@@ -0,0 +1,288 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
use serde::{Deserialize, Serialize};
use std::sync::{Arc, RwLock};
/// Interpolation type for automation curves
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum InterpolationType {
Linear,
Bezier,
Step,
Hold,
}
/// A single keyframe in an automation curve
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AutomationKeyframe {
/// Time in seconds (absolute project time)
pub time: f64,
/// CV output value
pub value: f32,
/// Interpolation type to next keyframe
pub interpolation: InterpolationType,
/// Bezier ease-out control point (for bezier interpolation)
pub ease_out: (f32, f32),
/// Bezier ease-in control point (for bezier interpolation)
pub ease_in: (f32, f32),
}
impl AutomationKeyframe {
pub fn new(time: f64, value: f32) -> Self {
Self {
time,
value,
interpolation: InterpolationType::Linear,
ease_out: (0.58, 1.0),
ease_in: (0.42, 0.0),
}
}
}
/// Automation Input Node - outputs CV signal controlled by timeline curves
pub struct AutomationInputNode {
name: String,
display_name: String, // User-editable name shown in UI
keyframes: Vec<AutomationKeyframe>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
/// Shared playback time (set by the graph before processing)
playback_time: Arc<RwLock<f64>>,
}
impl AutomationInputNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
Self {
name,
display_name: "Automation".to_string(),
keyframes: Vec::new(),
outputs,
parameters: Vec::new(),
playback_time: Arc::new(RwLock::new(0.0)),
}
}
/// Set the playback time (called by graph before processing)
pub fn set_playback_time(&mut self, time: f64) {
if let Ok(mut playback) = self.playback_time.write() {
*playback = time;
}
}
/// Get the display name (shown in UI)
pub fn display_name(&self) -> &str {
&self.display_name
}
/// Set the display name
pub fn set_display_name(&mut self, name: String) {
self.display_name = name;
}
/// Add a keyframe to the curve (maintains sorted order by time)
pub fn add_keyframe(&mut self, keyframe: AutomationKeyframe) {
// Find insertion position to maintain sorted order
let pos = self.keyframes.binary_search_by(|kf| {
kf.time.partial_cmp(&keyframe.time).unwrap_or(std::cmp::Ordering::Equal)
});
match pos {
Ok(idx) => {
// Replace existing keyframe at same time
self.keyframes[idx] = keyframe;
}
Err(idx) => {
// Insert at correct position
self.keyframes.insert(idx, keyframe);
}
}
}
/// Remove keyframe at specific time (with tolerance)
pub fn remove_keyframe_at_time(&mut self, time: f64, tolerance: f64) -> bool {
if let Some(idx) = self.keyframes.iter().position(|kf| (kf.time - time).abs() < tolerance) {
self.keyframes.remove(idx);
true
} else {
false
}
}
/// Update an existing keyframe
pub fn update_keyframe(&mut self, keyframe: AutomationKeyframe) {
// Remove old keyframe at this time, then add new one
self.remove_keyframe_at_time(keyframe.time, 0.001);
self.add_keyframe(keyframe);
}
/// Get all keyframes
pub fn keyframes(&self) -> &[AutomationKeyframe] {
&self.keyframes
}
/// Clear all keyframes
pub fn clear_keyframes(&mut self) {
self.keyframes.clear();
}
/// Evaluate curve at a specific time
fn evaluate_at_time(&self, time: f64) -> f32 {
if self.keyframes.is_empty() {
return 0.0;
}
// Before first keyframe
if time <= self.keyframes[0].time {
return self.keyframes[0].value;
}
// After last keyframe
let last_idx = self.keyframes.len() - 1;
if time >= self.keyframes[last_idx].time {
return self.keyframes[last_idx].value;
}
// Find bracketing keyframes
for i in 0..self.keyframes.len() - 1 {
let kf1 = &self.keyframes[i];
let kf2 = &self.keyframes[i + 1];
if time >= kf1.time && time <= kf2.time {
return self.interpolate(kf1, kf2, time);
}
}
0.0
}
/// Interpolate between two keyframes
fn interpolate(&self, kf1: &AutomationKeyframe, kf2: &AutomationKeyframe, time: f64) -> f32 {
// Calculate normalized position between keyframes (0.0 to 1.0)
let t = if kf2.time == kf1.time {
0.0
} else {
((time - kf1.time) / (kf2.time - kf1.time)) as f32
};
match kf1.interpolation {
InterpolationType::Linear => {
// Simple linear interpolation
kf1.value + (kf2.value - kf1.value) * t
}
InterpolationType::Bezier => {
// Cubic bezier interpolation using control points
let eased_t = self.cubic_bezier_ease(t, kf1.ease_out, kf2.ease_in);
kf1.value + (kf2.value - kf1.value) * eased_t
}
InterpolationType::Step | InterpolationType::Hold => {
// Hold value until next keyframe
kf1.value
}
}
}
/// Cubic bezier easing function
fn cubic_bezier_ease(&self, t: f32, ease_out: (f32, f32), ease_in: (f32, f32)) -> f32 {
// Simplified cubic bezier for 0,0 -> easeOut -> easeIn -> 1,1
let u = 1.0 - t;
3.0 * u * u * t * ease_out.1 +
3.0 * u * t * t * ease_in.1 +
t * t * t
}
}
impl AudioNode for AutomationInputNode {
fn category(&self) -> NodeCategory {
NodeCategory::Input
}
fn inputs(&self) -> &[NodePort] {
&[] // No inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
// Get the starting playback time
let playhead = if let Ok(playback) = self.playback_time.read() {
*playback
} else {
0.0
};
// Calculate time per sample
let sample_duration = 1.0 / sample_rate as f64;
// Evaluate curve for each sample
for i in 0..length {
let time = playhead + (i as f64 * sample_duration);
output[i] = self.evaluate_at_time(time);
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"AutomationInput"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
display_name: self.display_name.clone(),
keyframes: self.keyframes.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
playback_time: Arc::new(RwLock::new(0.0)),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
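
A test-style sketch of the keyframe API above: two linear keyframes form a ramp, and a block rendered from the midpoint starts halfway up it (times and values are illustrative):

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn linear_ramp_midpoint() {
        let mut node = AutomationInputNode::new("cutoff_auto");
        node.add_keyframe(AutomationKeyframe::new(0.0, 0.0));
        node.add_keyframe(AutomationKeyframe::new(1.0, 1.0));
        node.set_playback_time(0.5); // the graph sets this before processing

        let mut cv = vec![0.0f32; 64];
        let mut outputs: [&mut [f32]; 1] = [&mut cv];
        node.process(&[], &mut outputs, &[], &mut [], 48_000);

        assert!((cv[0] - 0.5).abs() < 1e-3); // halfway along the ramp
    }
}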

View File

@@ -0,0 +1,195 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_BIT_DEPTH: u32 = 0;
const PARAM_SAMPLE_RATE_REDUCTION: u32 = 1;
const PARAM_MIX: u32 = 2;
/// Bit Crusher effect - reduces bit depth and sample rate for lo-fi sound
pub struct BitCrusherNode {
name: String,
bit_depth: f32, // 1 to 16 bits
sample_rate_reduction: f32, // 1 to 48000 Hz (crushed sample rate)
mix: f32, // 0.0 to 1.0 (dry/wet)
// State for sample rate reduction
hold_left: f32,
hold_right: f32,
hold_counter: f32,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl BitCrusherNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_BIT_DEPTH, "Bit Depth", 1.0, 16.0, 8.0, ParameterUnit::Generic),
Parameter::new(PARAM_SAMPLE_RATE_REDUCTION, "Sample Rate", 100.0, 48000.0, 8000.0, ParameterUnit::Frequency),
Parameter::new(PARAM_MIX, "Mix", 0.0, 1.0, 1.0, ParameterUnit::Generic),
];
Self {
name,
bit_depth: 8.0,
sample_rate_reduction: 8000.0,
mix: 1.0,
hold_left: 0.0,
hold_right: 0.0,
hold_counter: 0.0,
sample_rate: 48000,
inputs,
outputs,
parameters,
}
}
/// Quantize sample to specified bit depth
fn quantize(&self, sample: f32) -> f32 {
// Calculate number of quantization levels
let levels = 2.0_f32.powf(self.bit_depth);
// Quantize: scale up, round, scale back down
let scaled = sample * levels / 2.0;
let quantized = scaled.round();
quantized * 2.0 / levels
}
}
impl AudioNode for BitCrusherNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_BIT_DEPTH => self.bit_depth = value.clamp(1.0, 16.0),
PARAM_SAMPLE_RATE_REDUCTION => self.sample_rate_reduction = value.clamp(100.0, 48000.0),
PARAM_MIX => self.mix = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_BIT_DEPTH => self.bit_depth,
PARAM_SAMPLE_RATE_REDUCTION => self.sample_rate_reduction,
PARAM_MIX => self.mix,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
// Calculate sample hold period
let hold_period = self.sample_rate as f32 / self.sample_rate_reduction;
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
// Sample rate reduction: hold samples
if self.hold_counter <= 0.0 {
// Time to sample a new value
self.hold_left = self.quantize(left_in);
self.hold_right = self.quantize(right_in);
self.hold_counter = hold_period;
}
self.hold_counter -= 1.0;
// Mix dry and wet
let wet_left = self.hold_left;
let wet_right = self.hold_right;
output[frame * 2] = left_in * (1.0 - self.mix) + wet_left * self.mix;
output[frame * 2 + 1] = right_in * (1.0 - self.mix) + wet_right * self.mix;
}
}
fn reset(&mut self) {
self.hold_left = 0.0;
self.hold_right = 0.0;
self.hold_counter = 0.0;
}
fn node_type(&self) -> &str {
"BitCrusher"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
bit_depth: self.bit_depth,
sample_rate_reduction: self.sample_rate_reduction,
mix: self.mix,
hold_left: 0.0,
hold_right: 0.0,
hold_counter: 0.0,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
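
A standalone spot-check of the quantize() mapping above (test-style; the free function mirrors the method body for illustration):

#[cfg(test)]
mod tests {
    #[test]
    fn quantize_steps() {
        fn quantize(sample: f32, bit_depth: f32) -> f32 {
            let levels = 2.0_f32.powf(bit_depth);
            (sample * levels / 2.0).round() * 2.0 / levels
        }
        assert_eq!(quantize(0.5, 8.0), 0.5);   // lands exactly on an 8-bit step
        assert_eq!(quantize(0.503, 8.0), 0.5); // nearby values collapse to the same step
        assert_eq!(quantize(0.3, 2.0), 0.5);   // at 2 bits the steps are 0.5 wide
    }
}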

View File

@ -0,0 +1,165 @@
use crate::audio::bpm_detector::BpmDetectorRealtime;
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_SMOOTHING: u32 = 0;
/// BPM Detector Node - analyzes audio input and outputs tempo as CV
/// The CV output encodes BPM / 1000 (e.g., 120 BPM -> 0.12)
pub struct BpmDetectorNode {
name: String,
detector: BpmDetectorRealtime,
smoothing: f32, // Smoothing factor for output (0-1)
last_output: f32, // For smooth transitions
sample_rate: u32, // Current sample rate
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl BpmDetectorNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("BPM CV", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_SMOOTHING, "Smoothing", 0.0, 1.0, 0.9, ParameterUnit::Percent),
];
// Use 10 second buffer for analysis
let detector = BpmDetectorRealtime::new(48000, 10.0);
Self {
name,
detector,
smoothing: 0.9,
last_output: 120.0,
sample_rate: 48000,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for BpmDetectorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_SMOOTHING => self.smoothing = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_SMOOTHING => self.smoothing,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
// Recreate detector if sample rate changed
if sample_rate != self.sample_rate {
self.sample_rate = sample_rate;
self.detector = BpmDetectorRealtime::new(sample_rate, 10.0);
}
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
let input = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0]
} else {
// Fill output with last known BPM
for i in 0..length {
output[i] = self.last_output / 1000.0; // Scale BPM for CV (e.g., 120 BPM -> 0.12)
}
return;
};
// Process audio through detector
let detected_bpm = self.detector.process(input);
// Apply smoothing
let target_bpm = detected_bpm;
let smoothed_bpm = self.last_output * self.smoothing + target_bpm * (1.0 - self.smoothing);
self.last_output = smoothed_bpm;
// Output BPM as CV (scaled down for typical CV range)
// BPM / 1000 gives us reasonable CV values (60-180 BPM -> 0.06-0.18)
let cv_value = smoothed_bpm / 1000.0;
// Fill entire output buffer with current BPM value
for i in 0..length {
output[i] = cv_value;
}
}
fn reset(&mut self) {
self.detector.reset();
self.last_output = 120.0;
}
fn node_type(&self) -> &str {
"BpmDetector"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
detector: BpmDetectorRealtime::new(self.sample_rate, 10.0),
smoothing: self.smoothing,
last_output: self.last_output,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
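
Since the scaling above is simply BPM / 1000, a downstream consumer recovers tempo with a single multiply (cv_to_bpm is an illustrative helper, not part of this diff):

fn cv_to_bpm(cv_value: f32) -> f32 {
    cv_value * 1000.0 // a CV of 0.12 means 120 BPM
}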

View File

@@ -0,0 +1,242 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const PARAM_RATE: u32 = 0;
const PARAM_DEPTH: u32 = 1;
const PARAM_WET_DRY: u32 = 2;
const MAX_DELAY_MS: f32 = 50.0;
const BASE_DELAY_MS: f32 = 15.0;
/// Chorus effect using modulated delay lines
pub struct ChorusNode {
name: String,
rate: f32, // LFO rate in Hz (0.1 to 5 Hz)
depth: f32, // Modulation depth 0.0 to 1.0
wet_dry: f32, // 0.0 = dry only, 1.0 = wet only
// Delay buffers for left and right channels
delay_buffer_left: Vec<f32>,
delay_buffer_right: Vec<f32>,
write_position: usize,
max_delay_samples: usize,
sample_rate: u32,
// LFO state
lfo_phase: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl ChorusNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_RATE, "Rate", 0.1, 5.0, 1.0, ParameterUnit::Frequency),
Parameter::new(PARAM_DEPTH, "Depth", 0.0, 1.0, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_WET_DRY, "Wet/Dry", 0.0, 1.0, 0.5, ParameterUnit::Generic),
];
// Allocate the delay buffer for a default 48 kHz rate; process() resizes it if the actual rate differs
let max_delay_samples = ((MAX_DELAY_MS / 1000.0) * 48000.0) as usize;
Self {
name,
rate: 1.0,
depth: 0.5,
wet_dry: 0.5,
delay_buffer_left: vec![0.0; max_delay_samples],
delay_buffer_right: vec![0.0; max_delay_samples],
write_position: 0,
max_delay_samples,
sample_rate: 48000,
lfo_phase: 0.0,
inputs,
outputs,
parameters,
}
}
fn read_interpolated_sample(&self, buffer: &[f32], delay_samples: f32) -> f32 {
// Linear interpolation for smooth delay modulation
let delay_samples = delay_samples.clamp(0.0, (self.max_delay_samples - 1) as f32);
let read_pos_float = self.write_position as f32 - delay_samples;
let read_pos_float = if read_pos_float < 0.0 {
read_pos_float + self.max_delay_samples as f32
} else {
read_pos_float
};
let read_pos_int = read_pos_float.floor() as usize;
let frac = read_pos_float - read_pos_int as f32;
let sample1 = buffer[read_pos_int % self.max_delay_samples];
let sample2 = buffer[(read_pos_int + 1) % self.max_delay_samples];
sample1 * (1.0 - frac) + sample2 * frac
}
}
impl AudioNode for ChorusNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_RATE => {
self.rate = value.clamp(0.1, 5.0);
}
PARAM_DEPTH => {
self.depth = value.clamp(0.0, 1.0);
}
PARAM_WET_DRY => {
self.wet_dry = value.clamp(0.0, 1.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_RATE => self.rate,
PARAM_DEPTH => self.depth,
PARAM_WET_DRY => self.wet_dry,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.max_delay_samples = ((MAX_DELAY_MS / 1000.0) * sample_rate as f32) as usize;
self.delay_buffer_left.resize(self.max_delay_samples, 0.0);
self.delay_buffer_right.resize(self.max_delay_samples, 0.0);
self.write_position = 0;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
let dry_gain = 1.0 - self.wet_dry;
let wet_gain = self.wet_dry;
let base_delay_samples = (BASE_DELAY_MS / 1000.0) * self.sample_rate as f32;
let max_modulation_samples = (MAX_DELAY_MS - BASE_DELAY_MS) / 1000.0 * self.sample_rate as f32;
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
// Unipolar LFO: sine mapped to 0..1, then scaled by depth
let lfo_value = ((self.lfo_phase * 2.0 * PI).sin() * 0.5 + 0.5) * self.depth;
// Calculate modulated delay time
let delay_samples = base_delay_samples + lfo_value * max_modulation_samples;
// Read delayed samples with interpolation
let left_delayed = self.read_interpolated_sample(&self.delay_buffer_left, delay_samples);
let right_delayed = self.read_interpolated_sample(&self.delay_buffer_right, delay_samples);
// Mix dry and wet signals
output[frame * 2] = left_in * dry_gain + left_delayed * wet_gain;
output[frame * 2 + 1] = right_in * dry_gain + right_delayed * wet_gain;
// Write to delay buffer
self.delay_buffer_left[self.write_position] = left_in;
self.delay_buffer_right[self.write_position] = right_in;
// Advance write position
self.write_position = (self.write_position + 1) % self.max_delay_samples;
// Advance LFO phase
self.lfo_phase += self.rate / self.sample_rate as f32;
if self.lfo_phase >= 1.0 {
self.lfo_phase -= 1.0;
}
}
}
fn reset(&mut self) {
self.delay_buffer_left.fill(0.0);
self.delay_buffer_right.fill(0.0);
self.write_position = 0;
self.lfo_phase = 0.0;
}
fn node_type(&self) -> &str {
"Chorus"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
rate: self.rate,
depth: self.depth,
wet_dry: self.wet_dry,
delay_buffer_left: vec![0.0; self.max_delay_samples],
delay_buffer_right: vec![0.0; self.max_delay_samples],
write_position: 0,
max_delay_samples: self.max_delay_samples,
sample_rate: self.sample_rate,
lfo_phase: 0.0,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
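
A standalone check of the fractional-delay blend in read_interpolated_sample() above: a fractional position of 0.25 weights the two neighboring samples 0.75 and 0.25 (test-style sketch):

#[cfg(test)]
mod tests {
    #[test]
    fn fractional_delay_blend() {
        // Mirrors sample1 * (1.0 - frac) + sample2 * frac from above
        fn lerp(sample1: f32, sample2: f32, frac: f32) -> f32 {
            sample1 * (1.0 - frac) + sample2 * frac
        }
        assert_eq!(lerp(1.0, 0.0, 0.25), 0.75);
    }
}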

View File

@@ -0,0 +1,261 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_THRESHOLD: u32 = 0;
const PARAM_RATIO: u32 = 1;
const PARAM_ATTACK: u32 = 2;
const PARAM_RELEASE: u32 = 3;
const PARAM_MAKEUP_GAIN: u32 = 4;
const PARAM_KNEE: u32 = 5;
/// Compressor node for dynamic range compression
pub struct CompressorNode {
name: String,
threshold_db: f32,
ratio: f32,
attack_ms: f32,
release_ms: f32,
makeup_gain_db: f32,
knee_db: f32,
// State
envelope: f32, // smoothed linear gain applied to the signal (1.0 = no reduction)
attack_coeff: f32,
release_coeff: f32,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl CompressorNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_THRESHOLD, "Threshold", -60.0, 0.0, -20.0, ParameterUnit::Decibels),
Parameter::new(PARAM_RATIO, "Ratio", 1.0, 20.0, 4.0, ParameterUnit::Generic),
Parameter::new(PARAM_ATTACK, "Attack", 0.1, 100.0, 5.0, ParameterUnit::Time),
Parameter::new(PARAM_RELEASE, "Release", 10.0, 1000.0, 50.0, ParameterUnit::Time),
Parameter::new(PARAM_MAKEUP_GAIN, "Makeup", 0.0, 24.0, 0.0, ParameterUnit::Decibels),
Parameter::new(PARAM_KNEE, "Knee", 0.0, 12.0, 3.0, ParameterUnit::Decibels),
];
let sample_rate = 44100;
let attack_coeff = Self::ms_to_coeff(5.0, sample_rate);
let release_coeff = Self::ms_to_coeff(50.0, sample_rate);
Self {
name,
threshold_db: -20.0,
ratio: 4.0,
attack_ms: 5.0,
release_ms: 50.0,
makeup_gain_db: 0.0,
knee_db: 3.0,
envelope: 1.0, // unity gain until the detector reacts
attack_coeff,
release_coeff,
sample_rate,
inputs,
outputs,
parameters,
}
}
/// Convert milliseconds to exponential smoothing coefficient
fn ms_to_coeff(time_ms: f32, sample_rate: u32) -> f32 {
let time_seconds = time_ms / 1000.0;
let samples = time_seconds * sample_rate as f32;
(-1.0 / samples).exp()
}
fn update_coefficients(&mut self) {
self.attack_coeff = Self::ms_to_coeff(self.attack_ms, self.sample_rate);
self.release_coeff = Self::ms_to_coeff(self.release_ms, self.sample_rate);
}
/// Convert linear amplitude to dB
fn linear_to_db(linear: f32) -> f32 {
if linear > 0.0 {
20.0 * linear.log10()
} else {
-160.0
}
}
/// Convert dB to linear gain
fn db_to_linear(db: f32) -> f32 {
10.0_f32.powf(db / 20.0)
}
/// Calculate gain reduction for a given input level
fn calculate_gain_reduction(&self, input_db: f32) -> f32 {
let threshold = self.threshold_db;
let knee = self.knee_db;
let ratio = self.ratio;
// Soft knee implementation
if input_db < threshold - knee / 2.0 {
// Below threshold - no compression
0.0
} else if input_db > threshold + knee / 2.0 {
// Above threshold - full compression
let overshoot = input_db - threshold;
overshoot * (1.0 - 1.0 / ratio)
} else {
// In knee region - gradual compression
let overshoot = input_db - threshold + knee / 2.0;
let knee_factor = overshoot / knee;
overshoot * knee_factor * (1.0 - 1.0 / ratio) / 2.0
}
}
fn process_sample(&mut self, input: f32) -> f32 {
// Detect input level (using absolute value as simple peak detector)
let input_level = input.abs();
// Convert to dB
let input_db = Self::linear_to_db(input_level);
// Calculate target gain reduction
let target_gr_db = self.calculate_gain_reduction(input_db);
let target_gr_linear = Self::db_to_linear(-target_gr_db);
// Smooth envelope with attack/release
let coeff = if target_gr_linear < self.envelope {
self.attack_coeff // Attack (faster response to louder signal)
} else {
self.release_coeff // Release (slower response when signal gets quieter)
};
self.envelope = target_gr_linear + coeff * (self.envelope - target_gr_linear);
// Apply compression and makeup gain
let makeup_linear = Self::db_to_linear(self.makeup_gain_db);
input * self.envelope * makeup_linear
}
}
impl AudioNode for CompressorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_THRESHOLD => self.threshold_db = value,
PARAM_RATIO => self.ratio = value,
PARAM_ATTACK => {
self.attack_ms = value;
self.update_coefficients();
}
PARAM_RELEASE => {
self.release_ms = value;
self.update_coefficients();
}
PARAM_MAKEUP_GAIN => self.makeup_gain_db = value,
PARAM_KNEE => self.knee_db = value,
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_THRESHOLD => self.threshold_db,
PARAM_RATIO => self.ratio,
PARAM_ATTACK => self.attack_ms,
PARAM_RELEASE => self.release_ms,
PARAM_MAKEUP_GAIN => self.makeup_gain_db,
PARAM_KNEE => self.knee_db,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.update_coefficients();
}
let input = inputs[0];
let output = &mut outputs[0];
let len = input.len().min(output.len());
for i in 0..len {
output[i] = self.process_sample(input[i]);
}
}
fn reset(&mut self) {
self.envelope = 1.0; // idle gain is unity, not silence
}
fn node_type(&self) -> &str {
"Compressor"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
threshold_db: self.threshold_db,
ratio: self.ratio,
attack_ms: self.attack_ms,
release_ms: self.release_ms,
makeup_gain_db: self.makeup_gain_db,
knee_db: self.knee_db,
envelope: 1.0, // Reset to unity gain for clone
attack_coeff: self.attack_coeff,
release_coeff: self.release_coeff,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
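
Worked numbers for calculate_gain_reduction() above at the default settings (threshold -20 dB, ratio 4, knee 3 dB):

// input -30 dB: below threshold - knee/2, so no reduction (0 dB)
// input -10 dB: overshoot = 10 dB, reduction = 10 * (1 - 1/4) = 7.5 dB,
//               so the sample leaves at -17.5 dB before makeup gain
// input -20 dB: inside the knee; overshoot = 1.5 dB, knee_factor = 0.5,
//               reduction = 1.5 * 0.5 * 0.75 / 2 ≈ 0.28 dB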

View File

@@ -0,0 +1,121 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_VALUE: u32 = 0;
/// Constant CV source - outputs a constant voltage
/// Useful for providing fixed values to CV inputs, offsets, etc.
pub struct ConstantNode {
name: String,
value: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl ConstantNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_VALUE, "Value", -10.0, 10.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
value: 0.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for ConstantNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_VALUE => self.value = value.clamp(-10.0, 10.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_VALUE => self.value,
_ => 0.0,
}
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
// Fill output with constant value
for i in 0..length {
output[i] = self.value;
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Constant"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
value: self.value,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}

View File

@@ -0,0 +1,219 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_DELAY_TIME: u32 = 0;
const PARAM_FEEDBACK: u32 = 1;
const PARAM_WET_DRY: u32 = 2;
const MAX_DELAY_SECONDS: f32 = 2.0;
/// Stereo delay node with feedback
pub struct DelayNode {
name: String,
delay_time: f32, // seconds
feedback: f32, // 0.0 to 0.95
wet_dry: f32, // 0.0 = dry only, 1.0 = wet only
// Delay buffers for left and right channels
delay_buffer_left: Vec<f32>,
delay_buffer_right: Vec<f32>,
write_position: usize,
max_delay_samples: usize,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl DelayNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_DELAY_TIME, "Delay Time", 0.001, MAX_DELAY_SECONDS, 0.5, ParameterUnit::Time),
Parameter::new(PARAM_FEEDBACK, "Feedback", 0.0, 0.95, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_WET_DRY, "Wet/Dry", 0.0, 1.0, 0.5, ParameterUnit::Generic),
];
// Allocate the delay buffer for a default 48 kHz rate; process() resizes it if the actual rate differs
let max_delay_samples = (MAX_DELAY_SECONDS * 48000.0) as usize;
Self {
name,
delay_time: 0.5,
feedback: 0.5,
wet_dry: 0.5,
delay_buffer_left: vec![0.0; max_delay_samples],
delay_buffer_right: vec![0.0; max_delay_samples],
write_position: 0,
max_delay_samples,
sample_rate: 48000,
inputs,
outputs,
parameters,
}
}
fn get_delay_samples(&self) -> usize {
(self.delay_time * self.sample_rate as f32) as usize
}
fn read_delayed_sample(&self, buffer: &[f32], delay_samples: usize) -> f32 {
// Calculate read position (wrap around)
let read_pos = if self.write_position >= delay_samples {
self.write_position - delay_samples
} else {
self.max_delay_samples + self.write_position - delay_samples
};
buffer[read_pos % self.max_delay_samples]
}
}
impl AudioNode for DelayNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_DELAY_TIME => {
self.delay_time = value.clamp(0.001, MAX_DELAY_SECONDS);
}
PARAM_FEEDBACK => {
self.feedback = value.clamp(0.0, 0.95);
}
PARAM_WET_DRY => {
self.wet_dry = value.clamp(0.0, 1.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_DELAY_TIME => self.delay_time,
PARAM_FEEDBACK => self.feedback,
PARAM_WET_DRY => self.wet_dry,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.max_delay_samples = (MAX_DELAY_SECONDS * sample_rate as f32) as usize;
self.delay_buffer_left.resize(self.max_delay_samples, 0.0);
self.delay_buffer_right.resize(self.max_delay_samples, 0.0);
self.write_position = 0;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
let delay_samples = self.get_delay_samples().max(1).min(self.max_delay_samples - 1);
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
// Read delayed samples
let left_delayed = self.read_delayed_sample(&self.delay_buffer_left, delay_samples);
let right_delayed = self.read_delayed_sample(&self.delay_buffer_right, delay_samples);
// Mix dry and wet signals
let dry_gain = 1.0 - self.wet_dry;
let wet_gain = self.wet_dry;
let left_out = left_in * dry_gain + left_delayed * wet_gain;
let right_out = right_in * dry_gain + right_delayed * wet_gain;
output[frame * 2] = left_out;
output[frame * 2 + 1] = right_out;
// Write to delay buffer with feedback
self.delay_buffer_left[self.write_position] = left_in + left_delayed * self.feedback;
self.delay_buffer_right[self.write_position] = right_in + right_delayed * self.feedback;
// Advance write position
self.write_position = (self.write_position + 1) % self.max_delay_samples;
}
}
fn reset(&mut self) {
self.delay_buffer_left.fill(0.0);
self.delay_buffer_right.fill(0.0);
self.write_position = 0;
}
fn node_type(&self) -> &str {
"Delay"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
delay_time: self.delay_time,
feedback: self.feedback,
wet_dry: self.wet_dry,
delay_buffer_left: vec![0.0; self.max_delay_samples],
delay_buffer_right: vec![0.0; self.max_delay_samples],
write_position: 0,
max_delay_samples: self.max_delay_samples,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
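
Two quick consequences of the defaults above, following directly from get_delay_samples() and the feedback write:

// At 48 kHz, delay_time = 0.5 s reads 24_000 samples behind the write head.
// With feedback = 0.5, each successive echo is half the previous one's
// amplitude, so the repeats decay by about 6 dB per pass.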

View File

@@ -0,0 +1,265 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_DRIVE: u32 = 0;
const PARAM_TYPE: u32 = 1;
const PARAM_TONE: u32 = 2;
const PARAM_MIX: u32 = 3;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum DistortionType {
SoftClip = 0,
HardClip = 1,
Tanh = 2,
Asymmetric = 3,
}
impl DistortionType {
fn from_f32(value: f32) -> Self {
match value.round() as i32 {
1 => DistortionType::HardClip,
2 => DistortionType::Tanh,
3 => DistortionType::Asymmetric,
_ => DistortionType::SoftClip,
}
}
}
/// Distortion node with multiple waveshaping algorithms
pub struct DistortionNode {
name: String,
drive: f32, // 0.01 to 20.0 (linear gain)
distortion_type: DistortionType,
tone: f32, // 0.0 to 1.0 (low-pass filter cutoff)
mix: f32, // 0.0 to 1.0 (dry/wet)
// Tone filter state (simple one-pole low-pass)
filter_state_left: f32,
filter_state_right: f32,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl DistortionNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_DRIVE, "Drive", 0.01, 20.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_TYPE, "Type", 0.0, 3.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_TONE, "Tone", 0.0, 1.0, 0.7, ParameterUnit::Generic),
Parameter::new(PARAM_MIX, "Mix", 0.0, 1.0, 1.0, ParameterUnit::Generic),
];
Self {
name,
drive: 1.0,
distortion_type: DistortionType::SoftClip,
tone: 0.7,
mix: 1.0,
filter_state_left: 0.0,
filter_state_right: 0.0,
sample_rate: 44100,
inputs,
outputs,
parameters,
}
}
/// Soft clipping using a quadratic knee:
/// linear through |x| <= 1, then a smooth bend that saturates at 1.5 by |x| = 2
fn soft_clip(&self, x: f32) -> f32 {
let x = x.clamp(-2.0, 2.0);
let magnitude = x.abs();
if magnitude <= 1.0 {
x
} else {
// Continuous in value and slope at the |x| = 1 knee
x.signum() * (magnitude - (magnitude - 1.0).powi(2) / 2.0)
}
}
/// Hard clipping
fn hard_clip(&self, x: f32) -> f32 {
x.clamp(-1.0, 1.0)
}
/// Hyperbolic tangent waveshaping
fn tanh_distortion(&self, x: f32) -> f32 {
x.tanh()
}
/// Asymmetric waveshaping (different curves for positive/negative)
fn asymmetric(&self, x: f32) -> f32 {
if x >= 0.0 {
// Positive: soft clip
self.soft_clip(x)
} else {
// Negative: harder clip
self.hard_clip(x * 1.5) / 1.5
}
}
/// Apply waveshaping based on type
fn apply_waveshaping(&self, x: f32) -> f32 {
match self.distortion_type {
DistortionType::SoftClip => self.soft_clip(x),
DistortionType::HardClip => self.hard_clip(x),
DistortionType::Tanh => self.tanh_distortion(x),
DistortionType::Asymmetric => self.asymmetric(x),
}
}
/// Simple one-pole low-pass filter for tone control
fn apply_tone_filter(&mut self, input: f32, is_left: bool) -> f32 {
// Tone parameter controls cutoff frequency (0 = dark, 1 = bright)
// Map tone to filter coefficient (0.1 to 0.99)
let coeff = 0.1 + self.tone * 0.89;
let state = if is_left {
&mut self.filter_state_left
} else {
&mut self.filter_state_right
};
*state = *state * coeff + input * (1.0 - coeff);
*state
}
fn process_sample(&mut self, input: f32, is_left: bool) -> f32 {
// Apply drive (input gain)
let driven = input * self.drive;
// Apply waveshaping
let distorted = self.apply_waveshaping(driven);
// Apply tone control (low-pass filter to tame harshness)
let filtered = self.apply_tone_filter(distorted, is_left);
// Apply output gain compensation and mix
let output_gain = 1.0 / (1.0 + self.drive * 0.2); // Compensate for loudness increase
let wet = filtered * output_gain;
let dry = input;
// Mix dry and wet
dry * (1.0 - self.mix) + wet * self.mix
}
}
impl AudioNode for DistortionNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_DRIVE => self.drive = value.clamp(0.01, 20.0),
PARAM_TYPE => self.distortion_type = DistortionType::from_f32(value),
PARAM_TONE => self.tone = value.clamp(0.0, 1.0),
PARAM_MIX => self.mix = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_DRIVE => self.drive,
PARAM_TYPE => self.distortion_type as i32 as f32,
PARAM_TONE => self.tone,
PARAM_MIX => self.mix,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
output[frame * 2] = self.process_sample(left_in, true);
output[frame * 2 + 1] = self.process_sample(right_in, false);
}
}
fn reset(&mut self) {
self.filter_state_left = 0.0;
self.filter_state_right = 0.0;
}
fn node_type(&self) -> &str {
"Distortion"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
drive: self.drive,
distortion_type: self.distortion_type,
tone: self.tone,
mix: self.mix,
filter_state_left: 0.0, // Reset state for clone
filter_state_right: 0.0,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
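
Spot-check values for the soft_clip() curve above (inputs are clamped to [-2, 2] first, so the output never exceeds 1.5):

// soft_clip(0.5) == 0.5  , the linear region passes signals through
// soft_clip(1.0) == 1.0  , the knee is continuous in value and slope
// soft_clip(2.0) == 1.5  , fully saturated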

View File

@@ -0,0 +1,166 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_ATTACK: u32 = 0;
const PARAM_RELEASE: u32 = 1;
/// Envelope Follower - extracts amplitude envelope from audio signal
/// Outputs a CV signal that follows the loudness of the input
pub struct EnvelopeFollowerNode {
name: String,
attack_time: f32, // seconds
release_time: f32, // seconds
envelope: f32, // current envelope level
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl EnvelopeFollowerNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_ATTACK, "Attack", 0.001, 1.0, 0.01, ParameterUnit::Time),
Parameter::new(PARAM_RELEASE, "Release", 0.001, 1.0, 0.1, ParameterUnit::Time),
];
Self {
name,
attack_time: 0.01,
release_time: 0.1,
envelope: 0.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for EnvelopeFollowerNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_ATTACK => self.attack_time = value.clamp(0.001, 1.0),
PARAM_RELEASE => self.release_time = value.clamp(0.001, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_ATTACK => self.attack_time,
PARAM_RELEASE => self.release_time,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
let input = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0]
} else {
&[]
};
// Calculate filter coefficients
// One-pole filter: y[n] = y[n-1] + coefficient * (x[n] - y[n-1])
let sample_duration = 1.0 / sample_rate as f32;
// Time constant τ = time to reach ~63% of target
// Coefficient = 1 - e^(-1/(τ * sample_rate))
// Simplified approximation: coefficient ≈ sample_duration / time_constant
let attack_coeff = (sample_duration / self.attack_time).min(1.0);
let release_coeff = (sample_duration / self.release_time).min(1.0);
// Process each sample
for i in 0..length {
// Get absolute value of input (rectify)
let input_level = if i < input.len() {
input[i].abs()
} else {
0.0
};
// Apply attack or release
let coeff = if input_level > self.envelope {
attack_coeff // Rising - use attack time
} else {
release_coeff // Falling - use release time
};
// One-pole filter
self.envelope += (input_level - self.envelope) * coeff;
output[i] = self.envelope;
}
}
fn reset(&mut self) {
self.envelope = 0.0;
}
fn node_type(&self) -> &str {
"EnvelopeFollower"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
attack_time: self.attack_time,
release_time: self.release_time,
envelope: 0.0, // Reset state for clone
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
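
A worked coefficient from the approximation above (one_pole_coeff is an illustrative helper, not part of this diff):

fn one_pole_coeff(time_seconds: f32, sample_rate: u32) -> f32 {
    (1.0 / sample_rate as f32 / time_seconds).min(1.0)
}
// one_pole_coeff(0.01, 48_000) ≈ 0.0021: each sample closes about 0.21% of
// the remaining gap, so the envelope covers roughly 63% of a step in 10 ms,
// matching the one-pole time-constant reading.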

View File

@@ -0,0 +1,267 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use crate::dsp::biquad::BiquadFilter;
// Low band (peaking)
const PARAM_LOW_FREQ: u32 = 0;
const PARAM_LOW_GAIN: u32 = 1;
// Mid band (peaking)
const PARAM_MID_FREQ: u32 = 2;
const PARAM_MID_GAIN: u32 = 3;
const PARAM_MID_Q: u32 = 4;
// High band (peaking)
const PARAM_HIGH_FREQ: u32 = 5;
const PARAM_HIGH_GAIN: u32 = 6;
/// 3-Band Parametric EQ Node
/// All three bands use peaking filters at different frequencies
pub struct EQNode {
name: String,
// Parameters
low_freq: f32,
low_gain_db: f32,
low_q: f32,
mid_freq: f32,
mid_gain_db: f32,
mid_q: f32,
high_freq: f32,
high_gain_db: f32,
high_q: f32,
// Filters (stereo)
low_filter_left: BiquadFilter,
low_filter_right: BiquadFilter,
mid_filter_left: BiquadFilter,
mid_filter_right: BiquadFilter,
high_filter_left: BiquadFilter,
high_filter_right: BiquadFilter,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl EQNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_LOW_FREQ, "Low Freq", 20.0, 500.0, 100.0, ParameterUnit::Frequency),
Parameter::new(PARAM_LOW_GAIN, "Low Gain", -24.0, 24.0, 0.0, ParameterUnit::Decibels),
Parameter::new(PARAM_MID_FREQ, "Mid Freq", 200.0, 5000.0, 1000.0, ParameterUnit::Frequency),
Parameter::new(PARAM_MID_GAIN, "Mid Gain", -24.0, 24.0, 0.0, ParameterUnit::Decibels),
Parameter::new(PARAM_MID_Q, "Mid Q", 0.1, 10.0, 0.707, ParameterUnit::Generic),
Parameter::new(PARAM_HIGH_FREQ, "High Freq", 2000.0, 20000.0, 8000.0, ParameterUnit::Frequency),
Parameter::new(PARAM_HIGH_GAIN, "High Gain", -24.0, 24.0, 0.0, ParameterUnit::Decibels),
];
let sample_rate = 44100;
// Initialize filters - all peaking
let low_filter_left = BiquadFilter::peaking(100.0, 1.0, 0.0, sample_rate as f32);
let low_filter_right = BiquadFilter::peaking(100.0, 1.0, 0.0, sample_rate as f32);
let mid_filter_left = BiquadFilter::peaking(1000.0, 0.707, 0.0, sample_rate as f32);
let mid_filter_right = BiquadFilter::peaking(1000.0, 0.707, 0.0, sample_rate as f32);
let high_filter_left = BiquadFilter::peaking(8000.0, 1.0, 0.0, sample_rate as f32);
let high_filter_right = BiquadFilter::peaking(8000.0, 1.0, 0.0, sample_rate as f32);
Self {
name,
low_freq: 100.0,
low_gain_db: 0.0,
low_q: 1.0,
mid_freq: 1000.0,
mid_gain_db: 0.0,
mid_q: 0.707,
high_freq: 8000.0,
high_gain_db: 0.0,
high_q: 1.0,
low_filter_left,
low_filter_right,
mid_filter_left,
mid_filter_right,
high_filter_left,
high_filter_right,
sample_rate,
inputs,
outputs,
parameters,
}
}
fn update_filters(&mut self) {
let sr = self.sample_rate as f32;
// Update low band peaking filter
self.low_filter_left.set_peaking(self.low_freq, self.low_q, self.low_gain_db, sr);
self.low_filter_right.set_peaking(self.low_freq, self.low_q, self.low_gain_db, sr);
// Update mid band peaking filter
self.mid_filter_left.set_peaking(self.mid_freq, self.mid_q, self.mid_gain_db, sr);
self.mid_filter_right.set_peaking(self.mid_freq, self.mid_q, self.mid_gain_db, sr);
// Update high band peaking filter
self.high_filter_left.set_peaking(self.high_freq, self.high_q, self.high_gain_db, sr);
self.high_filter_right.set_peaking(self.high_freq, self.high_q, self.high_gain_db, sr);
}
}
impl AudioNode for EQNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_LOW_FREQ => {
self.low_freq = value;
self.update_filters();
}
PARAM_LOW_GAIN => {
self.low_gain_db = value;
self.update_filters();
}
PARAM_MID_FREQ => {
self.mid_freq = value;
self.update_filters();
}
PARAM_MID_GAIN => {
self.mid_gain_db = value;
self.update_filters();
}
PARAM_MID_Q => {
self.mid_q = value;
self.update_filters();
}
PARAM_HIGH_FREQ => {
self.high_freq = value;
self.update_filters();
}
PARAM_HIGH_GAIN => {
self.high_gain_db = value;
self.update_filters();
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_LOW_FREQ => self.low_freq,
PARAM_LOW_GAIN => self.low_gain_db,
PARAM_MID_FREQ => self.mid_freq,
PARAM_MID_GAIN => self.mid_gain_db,
PARAM_MID_Q => self.mid_q,
PARAM_HIGH_FREQ => self.high_freq,
PARAM_HIGH_GAIN => self.high_gain_db,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.update_filters();
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
for frame in 0..frames_to_process {
let mut left = input[frame * 2];
let mut right = input[frame * 2 + 1];
// Process through all three bands
left = self.low_filter_left.process_sample(left, 0);
left = self.mid_filter_left.process_sample(left, 0);
left = self.high_filter_left.process_sample(left, 0);
right = self.low_filter_right.process_sample(right, 1);
right = self.mid_filter_right.process_sample(right, 1);
right = self.high_filter_right.process_sample(right, 1);
output[frame * 2] = left;
output[frame * 2 + 1] = right;
}
}
fn reset(&mut self) {
self.low_filter_left.reset();
self.low_filter_right.reset();
self.mid_filter_left.reset();
self.mid_filter_right.reset();
self.high_filter_left.reset();
self.high_filter_right.reset();
}
fn node_type(&self) -> &str {
"EQ"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
let mut node = Self::new(self.name.clone());
node.low_freq = self.low_freq;
node.low_gain_db = self.low_gain_db;
node.mid_freq = self.mid_freq;
node.mid_gain_db = self.mid_gain_db;
node.mid_q = self.mid_q;
node.high_freq = self.high_freq;
node.high_gain_db = self.high_gain_db;
node.sample_rate = self.sample_rate;
node.update_filters();
Box::new(node)
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
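
A minimal wiring sketch, not part of the change itself, showing how a host could drive EQNode directly through the trait methods above. The crate::audio::nodes re-export path is assumed (see the mod.rs later in this diff), and buffer sizes are arbitrary.

use crate::audio::node_graph::AudioNode;
use crate::audio::nodes::EQNode; // assumed re-export path

fn demo() {
    let mut eq = EQNode::new("Master EQ");
    eq.set_parameter(3, 6.0); // PARAM_MID_GAIN: +6 dB around the 1 kHz default
    let input = vec![0.0f32; 512]; // 256 interleaved stereo frames
    let mut out = vec![0.0f32; 512];
    let inputs: [&[f32]; 1] = [&input];
    let mut outputs: [&mut [f32]; 1] = [&mut out];
    eq.process(&inputs, &mut outputs, &[], &mut [], 48_000);
}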

View File

@ -0,0 +1,209 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use crate::dsp::biquad::BiquadFilter;
const PARAM_CUTOFF: u32 = 0;
const PARAM_RESONANCE: u32 = 1;
const PARAM_TYPE: u32 = 2;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum FilterType {
Lowpass = 0,
Highpass = 1,
}
impl FilterType {
fn from_f32(value: f32) -> Self {
match value.round() as i32 {
1 => FilterType::Highpass,
_ => FilterType::Lowpass,
}
}
}
/// Filter node using biquad implementation
pub struct FilterNode {
name: String,
filter: BiquadFilter,
cutoff: f32,
resonance: f32,
filter_type: FilterType,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl FilterNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
NodePort::new("Cutoff CV", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_CUTOFF, "Cutoff", 20.0, 20000.0, 1000.0, ParameterUnit::Frequency),
Parameter::new(PARAM_RESONANCE, "Resonance", 0.1, 10.0, 0.707, ParameterUnit::Generic),
Parameter::new(PARAM_TYPE, "Type", 0.0, 1.0, 0.0, ParameterUnit::Generic),
];
let filter = BiquadFilter::lowpass(1000.0, 0.707, 44100.0);
Self {
name,
filter,
cutoff: 1000.0,
resonance: 0.707,
filter_type: FilterType::Lowpass,
sample_rate: 44100,
inputs,
outputs,
parameters,
}
}
fn update_filter(&mut self) {
match self.filter_type {
FilterType::Lowpass => {
self.filter.set_lowpass(self.cutoff, self.resonance, self.sample_rate as f32);
}
FilterType::Highpass => {
self.filter.set_highpass(self.cutoff, self.resonance, self.sample_rate as f32);
}
}
}
}
impl AudioNode for FilterNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_CUTOFF => {
self.cutoff = value.clamp(20.0, 20000.0);
self.update_filter();
}
PARAM_RESONANCE => {
self.resonance = value.clamp(0.1, 10.0);
self.update_filter();
}
PARAM_TYPE => {
self.filter_type = FilterType::from_f32(value);
self.update_filter();
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_CUTOFF => self.cutoff,
PARAM_RESONANCE => self.resonance,
PARAM_TYPE => self.filter_type as i32 as f32,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.update_filter();
}
let input = inputs[0];
let output = &mut outputs[0];
let len = input.len().min(output.len());
// Copy input to output
output[..len].copy_from_slice(&input[..len]);
// Check for CV modulation (modulates cutoff)
if inputs.len() > 1 && !inputs[1].is_empty() {
// CV input modulates cutoff frequency
// For now, just use the base cutoff - per-sample modulation would be expensive
// TODO: Sample CV at frame rate and update filter periodically
}
// Apply filter (processes stereo interleaved)
self.filter.process_buffer(&mut output[..len], 2);
}
fn reset(&mut self) {
self.filter.reset();
}
fn node_type(&self) -> &str {
"Filter"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
// Create new filter with same parameters but reset state
let mut new_filter = BiquadFilter::new();
// Set filter to match current type (argument order is cutoff, resonance,
// sample_rate, matching update_filter above)
match self.filter_type {
FilterType::Lowpass => {
new_filter.set_lowpass(self.cutoff, self.resonance, self.sample_rate as f32);
}
FilterType::Highpass => {
new_filter.set_highpass(self.cutoff, self.resonance, self.sample_rate as f32);
}
}
Box::new(Self {
name: self.name.clone(),
filter: new_filter,
cutoff: self.cutoff,
resonance: self.resonance,
filter_type: self.filter_type,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
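
The BiquadFilter internals are not part of this diff; as a reference, this is the standard RBJ Audio-EQ-Cookbook lowpass that such a filter typically implements. Illustrative only, not the crate's actual code.

use std::f32::consts::PI;

struct Biquad {
    b0: f32, b1: f32, b2: f32, a1: f32, a2: f32,
    x1: f32, x2: f32, y1: f32, y2: f32,
}

impl Biquad {
    fn lowpass(cutoff: f32, q: f32, sample_rate: f32) -> Self {
        let w0 = 2.0 * PI * cutoff / sample_rate;
        let (sin_w0, cos_w0) = w0.sin_cos();
        let alpha = sin_w0 / (2.0 * q);
        let a0 = 1.0 + alpha; // normalize every coefficient by a0
        Self {
            b0: ((1.0 - cos_w0) / 2.0) / a0,
            b1: (1.0 - cos_w0) / a0,
            b2: ((1.0 - cos_w0) / 2.0) / a0,
            a1: (-2.0 * cos_w0) / a0,
            a2: (1.0 - alpha) / a0,
            x1: 0.0, x2: 0.0, y1: 0.0, y2: 0.0,
        }
    }

    fn process(&mut self, x: f32) -> f32 {
        // Direct Form I difference equation
        let y = self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
            - self.a1 * self.y1 - self.a2 * self.y2;
        self.x2 = self.x1;
        self.x1 = x;
        self.y2 = self.y1;
        self.y1 = y;
        y
    }
}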

View File

@ -0,0 +1,251 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const PARAM_RATE: u32 = 0;
const PARAM_DEPTH: u32 = 1;
const PARAM_FEEDBACK: u32 = 2;
const PARAM_WET_DRY: u32 = 3;
const MAX_DELAY_MS: f32 = 10.0;
const BASE_DELAY_MS: f32 = 1.0;
/// Flanger effect using modulated delay lines with feedback
pub struct FlangerNode {
name: String,
rate: f32, // LFO rate in Hz (0.1 to 10 Hz)
depth: f32, // Modulation depth 0.0 to 1.0
feedback: f32, // Feedback amount -0.95 to 0.95
wet_dry: f32, // 0.0 = dry only, 1.0 = wet only
// Delay buffers for left and right channels
delay_buffer_left: Vec<f32>,
delay_buffer_right: Vec<f32>,
write_position: usize,
max_delay_samples: usize,
sample_rate: u32,
// LFO state
lfo_phase: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl FlangerNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_RATE, "Rate", 0.1, 10.0, 0.5, ParameterUnit::Frequency),
Parameter::new(PARAM_DEPTH, "Depth", 0.0, 1.0, 0.7, ParameterUnit::Generic),
Parameter::new(PARAM_FEEDBACK, "Feedback", -0.95, 0.95, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_WET_DRY, "Wet/Dry", 0.0, 1.0, 0.5, ParameterUnit::Generic),
];
// Allocate max delay buffer size
let max_delay_samples = ((MAX_DELAY_MS / 1000.0) * 48000.0) as usize;
Self {
name,
rate: 0.5,
depth: 0.7,
feedback: 0.5,
wet_dry: 0.5,
delay_buffer_left: vec![0.0; max_delay_samples],
delay_buffer_right: vec![0.0; max_delay_samples],
write_position: 0,
max_delay_samples,
sample_rate: 48000,
lfo_phase: 0.0,
inputs,
outputs,
parameters,
}
}
fn read_interpolated_sample(&self, buffer: &[f32], delay_samples: f32) -> f32 {
// Linear interpolation for smooth delay modulation
let delay_samples = delay_samples.clamp(0.0, (self.max_delay_samples - 1) as f32);
let read_pos_float = self.write_position as f32 - delay_samples;
let read_pos_float = if read_pos_float < 0.0 {
read_pos_float + self.max_delay_samples as f32
} else {
read_pos_float
};
let read_pos_int = read_pos_float.floor() as usize;
let frac = read_pos_float - read_pos_int as f32;
let sample1 = buffer[read_pos_int % self.max_delay_samples];
let sample2 = buffer[(read_pos_int + 1) % self.max_delay_samples];
sample1 * (1.0 - frac) + sample2 * frac
}
}
impl AudioNode for FlangerNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_RATE => {
self.rate = value.clamp(0.1, 10.0);
}
PARAM_DEPTH => {
self.depth = value.clamp(0.0, 1.0);
}
PARAM_FEEDBACK => {
self.feedback = value.clamp(-0.95, 0.95);
}
PARAM_WET_DRY => {
self.wet_dry = value.clamp(0.0, 1.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_RATE => self.rate,
PARAM_DEPTH => self.depth,
PARAM_FEEDBACK => self.feedback,
PARAM_WET_DRY => self.wet_dry,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.max_delay_samples = ((MAX_DELAY_MS / 1000.0) * sample_rate as f32) as usize;
self.delay_buffer_left.resize(self.max_delay_samples, 0.0);
self.delay_buffer_right.resize(self.max_delay_samples, 0.0);
self.write_position = 0;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
let dry_gain = 1.0 - self.wet_dry;
let wet_gain = self.wet_dry;
let base_delay_samples = (BASE_DELAY_MS / 1000.0) * self.sample_rate as f32;
let max_modulation_samples = (MAX_DELAY_MS - BASE_DELAY_MS) / 1000.0 * self.sample_rate as f32;
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
// Generate LFO value (sine wave, 0 to 1)
let lfo_value = ((self.lfo_phase * 2.0 * PI).sin() * 0.5 + 0.5) * self.depth;
// Calculate modulated delay time
let delay_samples = base_delay_samples + lfo_value * max_modulation_samples;
// Read delayed samples with interpolation
let left_delayed = self.read_interpolated_sample(&self.delay_buffer_left, delay_samples);
let right_delayed = self.read_interpolated_sample(&self.delay_buffer_right, delay_samples);
// Mix dry and wet signals
output[frame * 2] = left_in * dry_gain + left_delayed * wet_gain;
output[frame * 2 + 1] = right_in * dry_gain + right_delayed * wet_gain;
// Write to delay buffer with feedback
self.delay_buffer_left[self.write_position] = left_in + left_delayed * self.feedback;
self.delay_buffer_right[self.write_position] = right_in + right_delayed * self.feedback;
// Advance write position
self.write_position = (self.write_position + 1) % self.max_delay_samples;
// Advance LFO phase
self.lfo_phase += self.rate / self.sample_rate as f32;
if self.lfo_phase >= 1.0 {
self.lfo_phase -= 1.0;
}
}
}
fn reset(&mut self) {
self.delay_buffer_left.fill(0.0);
self.delay_buffer_right.fill(0.0);
self.write_position = 0;
self.lfo_phase = 0.0;
}
fn node_type(&self) -> &str {
"Flanger"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
rate: self.rate,
depth: self.depth,
feedback: self.feedback,
wet_dry: self.wet_dry,
delay_buffer_left: vec![0.0; self.max_delay_samples],
delay_buffer_right: vec![0.0; self.max_delay_samples],
write_position: 0,
max_delay_samples: self.max_delay_samples,
sample_rate: self.sample_rate,
lfo_phase: 0.0,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
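
Back-of-envelope numbers for the constants above at 48 kHz (a standalone aside): the modulated delay sweeps from the 1 ms base toward 10 ms as the LFO rises, scaled by depth.

fn main() {
    let sample_rate = 48_000.0_f32;
    let base = (1.0 / 1000.0) * sample_rate;           // BASE_DELAY_MS -> 48 samples
    let max_mod = (10.0 - 1.0) / 1000.0 * sample_rate; // (MAX - BASE) -> 432 samples
    for lfo in [0.0_f32, 0.5, 1.0] {
        let delay = base + lfo * max_mod;
        println!("lfo = {lfo}: {delay} samples = {:.2} ms", delay / sample_rate * 1000.0);
    }
}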

View File

@ -0,0 +1,311 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
// Parameters for the FM synth
const PARAM_ALGORITHM: u32 = 0;
const PARAM_OP1_RATIO: u32 = 1;
const PARAM_OP1_LEVEL: u32 = 2;
const PARAM_OP2_RATIO: u32 = 3;
const PARAM_OP2_LEVEL: u32 = 4;
const PARAM_OP3_RATIO: u32 = 5;
const PARAM_OP3_LEVEL: u32 = 6;
const PARAM_OP4_RATIO: u32 = 7;
const PARAM_OP4_LEVEL: u32 = 8;
/// FM Algorithm types (inspired by DX7)
/// Algorithm determines how operators modulate each other
#[derive(Debug, Clone, Copy, PartialEq)]
enum FMAlgorithm {
/// Stack: serial chain, op4 modulates op3 modulates op2 modulates op1 (most harmonic)
Stack = 0,
/// Parallel: all four operators straight to the output (organ-like)
Parallel = 1,
/// Bell: op2 modulates op1, op4 modulates op3, both carriers to output
Bell = 2,
/// Dual: two independent 2-op stacks (2->1 and 4->3), each to output
Dual = 3,
}
impl FMAlgorithm {
fn from_u32(value: u32) -> Self {
match value {
0 => FMAlgorithm::Stack,
1 => FMAlgorithm::Parallel,
2 => FMAlgorithm::Bell,
3 => FMAlgorithm::Dual,
_ => FMAlgorithm::Stack,
}
}
}
/// Single FM operator (oscillator)
struct FMOperator {
phase: f32,
frequency_ratio: f32, // Multiplier of base frequency (e.g., 1.0, 2.0, 0.5)
level: f32, // Output amplitude 0.0-1.0
}
impl FMOperator {
fn new() -> Self {
Self {
phase: 0.0,
frequency_ratio: 1.0,
level: 1.0,
}
}
/// Process one sample with optional frequency modulation
fn process(&mut self, base_freq: f32, modulation: f32, sample_rate: f32) -> f32 {
let freq = base_freq * self.frequency_ratio;
// Phase modulation (PM, which sounds like FM)
let output = (self.phase * 2.0 * PI + modulation).sin() * self.level;
// Advance phase
self.phase += freq / sample_rate;
if self.phase >= 1.0 {
self.phase -= 1.0;
}
output
}
fn reset(&mut self) {
self.phase = 0.0;
}
}
/// 4-operator FM synthesizer node
pub struct FMSynthNode {
name: String,
algorithm: FMAlgorithm,
// Four operators
operators: [FMOperator; 4],
// Current frequency from V/oct input
current_frequency: f32,
gate_active: bool,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl FMSynthNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("V/Oct", SignalType::CV, 0),
NodePort::new("Gate", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_ALGORITHM, "Algorithm", 0.0, 3.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_OP1_RATIO, "Op1 Ratio", 0.25, 16.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_OP1_LEVEL, "Op1 Level", 0.0, 1.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_OP2_RATIO, "Op2 Ratio", 0.25, 16.0, 2.0, ParameterUnit::Generic),
Parameter::new(PARAM_OP2_LEVEL, "Op2 Level", 0.0, 1.0, 0.8, ParameterUnit::Generic),
Parameter::new(PARAM_OP3_RATIO, "Op3 Ratio", 0.25, 16.0, 3.0, ParameterUnit::Generic),
Parameter::new(PARAM_OP3_LEVEL, "Op3 Level", 0.0, 1.0, 0.6, ParameterUnit::Generic),
Parameter::new(PARAM_OP4_RATIO, "Op4 Ratio", 0.25, 16.0, 4.0, ParameterUnit::Generic),
Parameter::new(PARAM_OP4_LEVEL, "Op4 Level", 0.0, 1.0, 0.4, ParameterUnit::Generic),
];
Self {
name,
algorithm: FMAlgorithm::Stack,
operators: [
FMOperator::new(),
FMOperator::new(),
FMOperator::new(),
FMOperator::new(),
],
current_frequency: 440.0,
gate_active: false,
sample_rate: 48000,
inputs,
outputs,
parameters,
}
}
/// Convert V/oct CV to frequency
fn voct_to_freq(voct: f32) -> f32 {
440.0 * 2.0_f32.powf(voct)
}
/// Process FM synthesis based on current algorithm
fn process_algorithm(&mut self) -> f32 {
if !self.gate_active {
return 0.0;
}
let base_freq = self.current_frequency;
let sr = self.sample_rate as f32;
match self.algorithm {
FMAlgorithm::Stack => {
// Serial chain: op4 modulates op3 modulates op2 modulates op1 (the carrier)
let op4_out = self.operators[3].process(base_freq, 0.0, sr);
let op3_out = self.operators[2].process(base_freq, op4_out * 2.0, sr);
let op2_out = self.operators[1].process(base_freq, op3_out * 2.0, sr);
let op1_out = self.operators[0].process(base_freq, op2_out * 2.0, sr);
op1_out
}
FMAlgorithm::Parallel => {
// All operators output directly (no modulation)
let op1_out = self.operators[0].process(base_freq, 0.0, sr);
let op2_out = self.operators[1].process(base_freq, 0.0, sr);
let op3_out = self.operators[2].process(base_freq, 0.0, sr);
let op4_out = self.operators[3].process(base_freq, 0.0, sr);
(op1_out + op2_out + op3_out + op4_out) * 0.25
}
FMAlgorithm::Bell => {
// op2 modulates op1, op4 modulates op3; both carriers to output
let op2_out = self.operators[1].process(base_freq, 0.0, sr);
let op1_out = self.operators[0].process(base_freq, op2_out * 2.0, sr);
let op4_out = self.operators[3].process(base_freq, 0.0, sr);
let op3_out = self.operators[2].process(base_freq, op4_out * 2.0, sr);
(op1_out + op3_out) * 0.5
}
FMAlgorithm::Dual => {
// Two 2-op stacks: op2 -> op1 and op4 -> op3, each carrier to output
let op2_out = self.operators[1].process(base_freq, 0.0, sr);
let op1_out = self.operators[0].process(base_freq, op2_out * 2.0, sr);
let op4_out = self.operators[3].process(base_freq, 0.0, sr);
let op3_out = self.operators[2].process(base_freq, op4_out * 2.0, sr);
(op1_out + op3_out) * 0.5
}
}
}
}
impl AudioNode for FMSynthNode {
fn category(&self) -> NodeCategory {
NodeCategory::Generator
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_ALGORITHM => {
self.algorithm = FMAlgorithm::from_u32(value as u32);
}
PARAM_OP1_RATIO => self.operators[0].frequency_ratio = value.clamp(0.25, 16.0),
PARAM_OP1_LEVEL => self.operators[0].level = value.clamp(0.0, 1.0),
PARAM_OP2_RATIO => self.operators[1].frequency_ratio = value.clamp(0.25, 16.0),
PARAM_OP2_LEVEL => self.operators[1].level = value.clamp(0.0, 1.0),
PARAM_OP3_RATIO => self.operators[2].frequency_ratio = value.clamp(0.25, 16.0),
PARAM_OP3_LEVEL => self.operators[2].level = value.clamp(0.0, 1.0),
PARAM_OP4_RATIO => self.operators[3].frequency_ratio = value.clamp(0.25, 16.0),
PARAM_OP4_LEVEL => self.operators[3].level = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_ALGORITHM => self.algorithm as u32 as f32,
PARAM_OP1_RATIO => self.operators[0].frequency_ratio,
PARAM_OP1_LEVEL => self.operators[0].level,
PARAM_OP2_RATIO => self.operators[1].frequency_ratio,
PARAM_OP2_LEVEL => self.operators[1].level,
PARAM_OP3_RATIO => self.operators[2].frequency_ratio,
PARAM_OP3_LEVEL => self.operators[2].level,
PARAM_OP4_RATIO => self.operators[3].frequency_ratio,
PARAM_OP4_LEVEL => self.operators[3].level,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
self.sample_rate = sample_rate;
let output = &mut outputs[0];
let frames = output.len() / 2;
for frame in 0..frames {
// Read CV inputs
// Buffers are stereo-interleaved; saturating_sub guards against a
// usize underflow panic on single-sample buffers
let voct = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0][frame.min((inputs[0].len() / 2).saturating_sub(1)) * 2]
} else {
0.0
};
let gate = if inputs.len() > 1 && !inputs[1].is_empty() {
inputs[1][frame.min((inputs[1].len() / 2).saturating_sub(1)) * 2]
} else {
0.0
};
// Update state
self.current_frequency = Self::voct_to_freq(voct);
self.gate_active = gate > 0.5;
// Generate sample
let sample = self.process_algorithm() * 0.3; // Scale down to prevent clipping
// Output stereo (same signal to both channels)
output[frame * 2] = sample;
output[frame * 2 + 1] = sample;
}
}
fn reset(&mut self) {
for op in &mut self.operators {
op.reset();
}
self.gate_active = false;
}
fn node_type(&self) -> &str {
"FMSynth"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self::new(self.name.clone()))
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
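
A standalone spot-check of voct_to_freq (formula copied from above): 0 V is A4 at 440 Hz and each volt doubles or halves the frequency, the same mapping the MidiToCV node later in this diff emits.

fn voct_to_freq(voct: f32) -> f32 {
    440.0 * 2.0_f32.powf(voct)
}

fn main() {
    assert!((voct_to_freq(0.0) - 440.0).abs() < 1e-3);    // A4
    assert!((voct_to_freq(1.0) - 880.0).abs() < 1e-3);    // one octave up
    assert!((voct_to_freq(-0.75) - 261.63).abs() < 0.01); // middle C
}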

View File

@ -0,0 +1,138 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_GAIN: u32 = 0;
/// Gain/volume control node
pub struct GainNode {
name: String,
gain: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl GainNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
NodePort::new("Gain CV", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_GAIN, "Gain", 0.0, 2.0, 1.0, ParameterUnit::Generic),
];
Self {
name,
gain: 1.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for GainNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_GAIN => self.gain = value.clamp(0.0, 2.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_GAIN => self.gain,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
// Process by frames, not samples
let frames = input.len().min(output.len()) / 2;
for frame in 0..frames {
// Calculate final gain
let mut final_gain = self.gain;
// CV input acts as a VCA (voltage-controlled amplifier)
// CV ranges from 0.0 (silence) to 1.0 (full gain parameter value)
if inputs.len() > 1 && frame < inputs[1].len() {
let cv = inputs[1][frame];
final_gain *= cv; // Multiply gain by CV (0.0 = silence, 1.0 = full gain)
}
// Apply gain to both channels
output[frame * 2] = input[frame * 2] * final_gain; // Left
output[frame * 2 + 1] = input[frame * 2 + 1] * final_gain; // Right
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Gain"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
gain: self.gain,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
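
A test-style sketch of the VCA behaviour the comments describe, assumed to live inside the crate (the nodes module path is a guess): with the gain parameter at its default 1.0 and a constant 0.5 CV, every output sample is the input at half amplitude.

use crate::audio::node_graph::AudioNode;
use crate::audio::nodes::GainNode; // assumed re-export path

#[test]
fn cv_acts_as_vca() {
    let mut gain = GainNode::new("VCA");
    let input = vec![1.0f32; 8]; // 4 full-scale stereo frames
    let cv = vec![0.5f32; 4];    // one CV value per frame
    let mut out = vec![0.0f32; 8];
    let inputs: [&[f32]; 2] = [&input, &cv];
    let mut outputs: [&mut [f32]; 1] = [&mut out];
    gain.process(&inputs, &mut outputs, &[], &mut [], 48_000);
    assert!(out.iter().all(|&s| (s - 0.5).abs() < 1e-6));
}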

View File

@ -0,0 +1,230 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
use rand::Rng;
const PARAM_FREQUENCY: u32 = 0;
const PARAM_AMPLITUDE: u32 = 1;
const PARAM_WAVEFORM: u32 = 2;
const PARAM_PHASE_OFFSET: u32 = 3;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum LFOWaveform {
Sine = 0,
Triangle = 1,
Saw = 2,
Square = 3,
Random = 4,
}
impl LFOWaveform {
fn from_f32(value: f32) -> Self {
match value.round() as i32 {
1 => LFOWaveform::Triangle,
2 => LFOWaveform::Saw,
3 => LFOWaveform::Square,
4 => LFOWaveform::Random,
_ => LFOWaveform::Sine,
}
}
}
/// Low Frequency Oscillator node for modulation
pub struct LFONode {
name: String,
frequency: f32,
amplitude: f32,
waveform: LFOWaveform,
phase_offset: f32,
phase: f32,
last_random_value: f32,
next_random_value: f32,
random_phase: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl LFONode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_FREQUENCY, "Frequency", 0.01, 20.0, 1.0, ParameterUnit::Frequency),
Parameter::new(PARAM_AMPLITUDE, "Amplitude", 0.0, 1.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_WAVEFORM, "Waveform", 0.0, 4.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_PHASE_OFFSET, "Phase", 0.0, 1.0, 0.0, ParameterUnit::Generic),
];
let mut rng = rand::thread_rng();
Self {
name,
frequency: 1.0,
amplitude: 1.0,
waveform: LFOWaveform::Sine,
phase_offset: 0.0,
phase: 0.0,
last_random_value: rng.gen_range(-1.0..1.0),
next_random_value: rng.gen_range(-1.0..1.0),
random_phase: 0.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for LFONode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_FREQUENCY => self.frequency = value.clamp(0.01, 20.0),
PARAM_AMPLITUDE => self.amplitude = value.clamp(0.0, 1.0),
PARAM_WAVEFORM => self.waveform = LFOWaveform::from_f32(value),
PARAM_PHASE_OFFSET => self.phase_offset = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_FREQUENCY => self.frequency,
PARAM_AMPLITUDE => self.amplitude,
PARAM_WAVEFORM => self.waveform as i32 as f32,
PARAM_PHASE_OFFSET => self.phase_offset,
_ => 0.0,
}
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let sample_rate_f32 = sample_rate as f32;
// CV signals are mono
for sample_idx in 0..output.len() {
let current_phase = (self.phase + self.phase_offset) % 1.0;
// Generate waveform sample based on waveform type
let raw_sample = match self.waveform {
LFOWaveform::Sine => (current_phase * 2.0 * PI).sin(),
LFOWaveform::Triangle => {
// Triangle: starts at 1, falls to -1 at mid-cycle, then rises back to 1
4.0 * (current_phase - 0.5).abs() - 1.0
}
LFOWaveform::Saw => {
// Sawtooth: ramp from -1 to 1
2.0 * current_phase - 1.0
}
LFOWaveform::Square => {
if current_phase < 0.5 { 1.0 } else { -1.0 }
}
LFOWaveform::Random => {
// Sample & hold random values with smooth interpolation
// Interpolate between last and next random value
let t = self.random_phase;
self.last_random_value * (1.0 - t) + self.next_random_value * t
}
};
// Scale to 0-1 range and apply amplitude
let sample = (raw_sample * 0.5 + 0.5) * self.amplitude;
output[sample_idx] = sample;
// Update phase
self.phase += self.frequency / sample_rate_f32;
if self.phase >= 1.0 {
self.phase -= 1.0;
// For random waveform, generate new random value at each cycle
if self.waveform == LFOWaveform::Random {
self.last_random_value = self.next_random_value;
let mut rng = rand::thread_rng();
self.next_random_value = rng.gen_range(-1.0..1.0);
self.random_phase = 0.0;
}
}
// Update random interpolation phase
if self.waveform == LFOWaveform::Random {
self.random_phase += self.frequency / sample_rate_f32;
if self.random_phase >= 1.0 {
self.random_phase -= 1.0;
}
}
}
}
fn reset(&mut self) {
self.phase = 0.0;
self.random_phase = 0.0;
let mut rng = rand::thread_rng();
self.last_random_value = rng.gen_range(-1.0..1.0);
self.next_random_value = rng.gen_range(-1.0..1.0);
}
fn node_type(&self) -> &str {
"LFO"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
frequency: self.frequency,
amplitude: self.amplitude,
waveform: self.waveform,
phase_offset: self.phase_offset,
phase: 0.0, // Reset phase for new instance
last_random_value: self.last_random_value,
next_random_value: self.next_random_value,
random_phase: 0.0,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
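
Standalone spot-check of the triangle and saw formulas above at their phase extremes (before the 0-1 rescale and amplitude stage):

fn main() {
    let tri = |p: f32| 4.0 * (p - 0.5).abs() - 1.0;
    let saw = |p: f32| 2.0 * p - 1.0;
    assert_eq!(tri(0.0), 1.0);  // triangle peaks at phase 0
    assert_eq!(tri(0.5), -1.0); // ...and bottoms out mid-cycle
    assert_eq!(saw(0.0), -1.0); // saw ramps from -1...
    assert_eq!(saw(1.0), 1.0);  // ...up to +1 over one cycle
}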

View File

@ -0,0 +1,223 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_THRESHOLD: u32 = 0;
const PARAM_RELEASE: u32 = 1;
const PARAM_CEILING: u32 = 2;
/// Limiter node for preventing audio peaks from exceeding a threshold
/// Essentially a compressor with infinite ratio and very fast attack
pub struct LimiterNode {
name: String,
threshold_db: f32,
release_ms: f32,
ceiling_db: f32,
// State
envelope: f32,
release_coeff: f32,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl LimiterNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_THRESHOLD, "Threshold", -60.0, 0.0, -1.0, ParameterUnit::Decibels),
Parameter::new(PARAM_RELEASE, "Release", 1.0, 500.0, 50.0, ParameterUnit::Time),
Parameter::new(PARAM_CEILING, "Ceiling", -60.0, 0.0, 0.0, ParameterUnit::Decibels),
];
let sample_rate = 44100;
let release_coeff = Self::ms_to_coeff(50.0, sample_rate);
Self {
name,
threshold_db: -1.0,
release_ms: 50.0,
ceiling_db: 0.0,
envelope: 1.0, // Gain envelope starts at unity (no reduction)
release_coeff,
sample_rate,
inputs,
outputs,
parameters,
}
}
/// Convert milliseconds to exponential smoothing coefficient
fn ms_to_coeff(time_ms: f32, sample_rate: u32) -> f32 {
let time_seconds = time_ms / 1000.0;
let samples = time_seconds * sample_rate as f32;
(-1.0 / samples).exp()
}
fn update_coefficients(&mut self) {
self.release_coeff = Self::ms_to_coeff(self.release_ms, self.sample_rate);
}
/// Convert linear amplitude to dB
fn linear_to_db(linear: f32) -> f32 {
if linear > 0.0 {
20.0 * linear.log10()
} else {
-160.0
}
}
/// Convert dB to linear gain
fn db_to_linear(db: f32) -> f32 {
10.0_f32.powf(db / 20.0)
}
fn process_sample(&mut self, input: f32) -> f32 {
// Detect input level (using absolute value as peak detector)
let input_level = input.abs();
// Convert to dB
let input_db = Self::linear_to_db(input_level);
// Calculate gain reduction needed
// If above threshold, apply infinite ratio (hard limit)
let target_gr_db = if input_db > self.threshold_db {
input_db - self.threshold_db // Amount of overshoot to reduce
} else {
0.0
};
let target_gr_linear = Self::db_to_linear(-target_gr_db);
// Very fast attack (instant for limiter), but slower release
// Attack coeff is very close to 0 for near-instant response
let attack_coeff = 0.0001; // Extremely fast attack
let coeff = if target_gr_linear < self.envelope {
attack_coeff // Attack (instant response to louder signal)
} else {
self.release_coeff // Release (slower recovery)
};
self.envelope = target_gr_linear + coeff * (self.envelope - target_gr_linear);
// Apply limiting and output ceiling
let limited = input * self.envelope;
let ceiling_linear = Self::db_to_linear(self.ceiling_db);
// Hard clip at ceiling
limited.clamp(-ceiling_linear, ceiling_linear)
}
}
impl AudioNode for LimiterNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_THRESHOLD => self.threshold_db = value,
PARAM_RELEASE => {
self.release_ms = value;
self.update_coefficients();
}
PARAM_CEILING => self.ceiling_db = value,
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_THRESHOLD => self.threshold_db,
PARAM_RELEASE => self.release_ms,
PARAM_CEILING => self.ceiling_db,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.update_coefficients();
}
let input = inputs[0];
let output = &mut outputs[0];
let len = input.len().min(output.len());
for i in 0..len {
output[i] = self.process_sample(input[i]);
}
}
fn reset(&mut self) {
self.envelope = 1.0; // Back to unity gain (no reduction)
}
fn node_type(&self) -> &str {
"Limiter"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
threshold_db: self.threshold_db,
release_ms: self.release_ms,
ceiling_db: self.ceiling_db,
envelope: 1.0, // Fresh clone starts at unity gain
release_coeff: self.release_coeff,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
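
Worked numbers for the gain-reduction math above, as a standalone snippet: a full-scale peak at 0 dBFS against the default -1 dB threshold needs 1 dB of reduction, a gain of roughly 0.891.

fn main() {
    let linear_to_db = |x: f32| 20.0 * x.log10();
    let db_to_linear = |db: f32| 10.0_f32.powf(db / 20.0);
    let input_db = linear_to_db(1.0);             // 0 dBFS
    let overshoot = (input_db - (-1.0)).max(0.0); // 1 dB above threshold
    let gain = db_to_linear(-overshoot);
    assert!((gain - 0.891).abs() < 0.001);
}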

View File

@ -0,0 +1,172 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_OPERATION: u32 = 0;
/// Mathematical and logical operations on CV signals
/// Operations:
/// 0 = Add, 1 = Subtract, 2 = Multiply, 3 = Divide
/// 4 = Min, 5 = Max, 6 = Average
/// 7 = Invert (1.0 - x), 8 = Absolute Value
/// 9 = Clamp (0.0 to 1.0), 10 = Wrap (-1.0 to 1.0)
/// 11 = Greater Than, 12 = Less Than, 13 = Equal (with tolerance)
pub struct MathNode {
name: String,
operation: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl MathNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("CV In A", SignalType::CV, 0),
NodePort::new("CV In B", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_OPERATION, "Operation", 0.0, 13.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
operation: 0,
inputs,
outputs,
parameters,
}
}
fn apply_operation(&self, a: f32, b: f32) -> f32 {
match self.operation {
0 => a + b, // Add
1 => a - b, // Subtract
2 => a * b, // Multiply
3 => if b.abs() > 0.0001 { a / b } else { 0.0 }, // Divide (with protection)
4 => a.min(b), // Min
5 => a.max(b), // Max
6 => (a + b) * 0.5, // Average
7 => 1.0 - a, // Invert (ignores b)
8 => a.abs(), // Absolute Value (ignores b)
9 => a.clamp(0.0, 1.0), // Clamp to 0-1 (ignores b)
10 => { // Wrap -1 to 1
let mut result = a;
while result > 1.0 {
result -= 2.0;
}
while result < -1.0 {
result += 2.0;
}
result
},
11 => if a > b { 1.0 } else { 0.0 }, // Greater Than
12 => if a < b { 1.0 } else { 0.0 }, // Less Than
13 => if (a - b).abs() < 0.01 { 1.0 } else { 0.0 }, // Equal (with tolerance)
_ => a, // Unknown operation - pass through
}
}
}
impl AudioNode for MathNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_OPERATION => self.operation = (value as u32).clamp(0, 13),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_OPERATION => self.operation as f32,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
// Process each sample
for i in 0..length {
// Get input A (or 0.0 if not connected)
let a = if !inputs.is_empty() && i < inputs[0].len() {
inputs[0][i]
} else {
0.0
};
// Get input B (or 0.0 if not connected)
let b = if inputs.len() > 1 && i < inputs[1].len() {
inputs[1][i]
} else {
0.0
};
output[i] = self.apply_operation(a, b);
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Math"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
operation: self.operation,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
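
A sketch of the comparator use case (operation 11, Greater Than): two CV streams in, a 0/1 gate out. Module paths assumed as in the other sketches.

use crate::audio::node_graph::AudioNode;
use crate::audio::nodes::MathNode; // assumed re-export path

#[test]
fn greater_than_as_gate() {
    let mut cmp = MathNode::new("Comparator");
    cmp.set_parameter(0, 11.0); // PARAM_OPERATION: Greater Than
    let a = [0.2f32, 0.4, 0.6, 0.8];
    let b = [0.5f32; 4];
    let mut out = [0.0f32; 4];
    let inputs: [&[f32]; 2] = [&a, &b];
    let mut outputs: [&mut [f32]; 1] = [&mut out];
    cmp.process(&inputs, &mut outputs, &[], &mut [], 48_000);
    assert_eq!(out, [0.0, 0.0, 1.0, 1.0]);
}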

View File

@ -0,0 +1,113 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
/// MIDI Input node - receives MIDI events from the track and passes them through
pub struct MidiInputNode {
name: String,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
pending_events: Vec<MidiEvent>,
}
impl MidiInputNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![];
let outputs = vec![
NodePort::new("MIDI Out", SignalType::Midi, 0),
];
Self {
name,
inputs,
outputs,
parameters: vec![],
pending_events: Vec::new(),
}
}
/// Add MIDI events to be processed
pub fn add_midi_events(&mut self, events: Vec<MidiEvent>) {
self.pending_events.extend(events);
}
/// Get pending MIDI events (used for routing to connected nodes)
pub fn take_midi_events(&mut self) -> Vec<MidiEvent> {
std::mem::take(&mut self.pending_events)
}
}
impl AudioNode for MidiInputNode {
fn category(&self) -> NodeCategory {
NodeCategory::Input
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
_inputs: &[&[f32]],
_outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
// MidiInput receives MIDI from external sources (marked as MIDI target)
// and outputs it through the graph
// The MIDI was already placed in midi_outputs by the graph before calling process()
}
fn reset(&mut self) {
self.pending_events.clear();
}
fn node_type(&self) -> &str {
"MidiInput"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
pending_events: Vec::new(),
})
}
fn handle_midi(&mut self, event: &MidiEvent) {
self.pending_events.push(*event);
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
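
A sketch of the feed/drain pattern the comments describe: the host pushes events with add_midi_events and the graph drains them with take_midi_events when routing MIDI. The MidiEvent import is taken from this file; only the MidiInputNode module path is assumed.

use crate::audio::midi::MidiEvent;
use crate::audio::nodes::MidiInputNode; // assumed re-export path

fn feed_and_drain(events: Vec<MidiEvent>) {
    let n = events.len();
    let mut node = MidiInputNode::new("MIDI In");
    node.add_midi_events(events);
    assert_eq!(node.take_midi_events().len(), n); // drains everything
    assert!(node.take_midi_events().is_empty());  // buffer is now empty
}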

View File

@ -0,0 +1,194 @@
use crate::audio::midi::MidiEvent;
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
/// MIDI to CV converter
/// Converts MIDI note events to control voltage signals
pub struct MidiToCVNode {
name: String,
note: u8, // Current MIDI note number
gate: f32, // Gate CV (1.0 when note on, 0.0 when off)
velocity: f32, // Velocity CV (0.0-1.0)
pitch_cv: f32, // Pitch CV (V/Oct: 0V = A4, ±1V per octave)
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl MidiToCVNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
// MIDI input port for receiving MIDI through graph connections
let inputs = vec![
NodePort::new("MIDI In", SignalType::Midi, 0),
];
let outputs = vec![
NodePort::new("V/Oct", SignalType::CV, 0), // V/Oct: 0V = A4, ±1V per octave
NodePort::new("Gate", SignalType::CV, 1), // 1.0 = on, 0.0 = off
NodePort::new("Velocity", SignalType::CV, 2), // 0.0-1.0
];
Self {
name,
note: 60, // Middle C
gate: 0.0,
velocity: 0.0,
pitch_cv: Self::midi_note_to_voct(60),
inputs,
outputs,
parameters: vec![], // No user parameters
}
}
/// Convert MIDI note to V/oct CV (proper V/Oct standard)
/// 0V = A4 (MIDI 69), ±1V per octave
/// Middle C (MIDI 60) = -0.75V, A5 (MIDI 81) = +1.0V
fn midi_note_to_voct(note: u8) -> f32 {
// Standard V/Oct: 0V at A4, 1V per octave (12 semitones)
(note as f32 - 69.0) / 12.0
}
}
impl AudioNode for MidiToCVNode {
fn category(&self) -> NodeCategory {
NodeCategory::Input
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn handle_midi(&mut self, event: &MidiEvent) {
let status = event.status & 0xF0;
match status {
0x90 => {
// Note on
if event.data2 > 0 {
// Velocity > 0 means note on
self.note = event.data1;
self.pitch_cv = Self::midi_note_to_voct(self.note);
self.velocity = event.data2 as f32 / 127.0;
self.gate = 1.0;
} else {
// Velocity = 0 means note off
if event.data1 == self.note {
self.gate = 0.0;
}
}
}
0x80 => {
// Note off
if event.data1 == self.note {
self.gate = 0.0;
}
}
_ => {}
}
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
// Process MIDI events from input buffer
if !midi_inputs.is_empty() {
for event in midi_inputs[0] {
let status = event.status & 0xF0;
match status {
0x90 if event.data2 > 0 => {
// Note on
self.note = event.data1;
self.pitch_cv = Self::midi_note_to_voct(self.note);
self.velocity = event.data2 as f32 / 127.0;
self.gate = 1.0;
}
0x80 | 0x90 => {
// Note off (or note on with velocity 0)
if event.data1 == self.note {
self.gate = 0.0;
}
}
_ => {}
}
}
}
if outputs.len() < 3 {
return;
}
// CV signals are mono
// Use split_at_mut to get multiple mutable references
let (pitch_and_rest, rest) = outputs.split_at_mut(1);
let (gate_and_rest, velocity_slice) = rest.split_at_mut(1);
let pitch_out = &mut pitch_and_rest[0];
let gate_out = &mut gate_and_rest[0];
let velocity_out = &mut velocity_slice[0];
let frames = pitch_out.len();
// Output constant CV values for the entire buffer
for frame in 0..frames {
pitch_out[frame] = self.pitch_cv;
gate_out[frame] = self.gate;
velocity_out[frame] = self.velocity;
}
}
fn reset(&mut self) {
self.gate = 0.0;
self.velocity = 0.0;
}
fn node_type(&self) -> &str {
"MidiToCV"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
note: 60, // Reset to middle C
gate: 0.0, // Reset gate
velocity: 0.0, // Reset velocity
pitch_cv: Self::midi_note_to_voct(60), // Reset pitch
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
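
Standalone spot-check of the V/Oct mapping documented above (formula copied from midi_note_to_voct):

fn midi_note_to_voct(note: u8) -> f32 {
    (note as f32 - 69.0) / 12.0
}

fn main() {
    assert_eq!(midi_note_to_voct(69), 0.0);   // A4 sits at 0 V
    assert_eq!(midi_note_to_voct(60), -0.75); // middle C
    assert_eq!(midi_note_to_voct(81), 1.0);   // A5, one octave up
}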

View File

@ -0,0 +1,153 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_GAIN_1: u32 = 0;
const PARAM_GAIN_2: u32 = 1;
const PARAM_GAIN_3: u32 = 2;
const PARAM_GAIN_4: u32 = 3;
/// Mixer node - combines multiple audio inputs with independent gain controls
pub struct MixerNode {
name: String,
gains: [f32; 4],
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl MixerNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Input 1", SignalType::Audio, 0),
NodePort::new("Input 2", SignalType::Audio, 1),
NodePort::new("Input 3", SignalType::Audio, 2),
NodePort::new("Input 4", SignalType::Audio, 3),
];
let outputs = vec![
NodePort::new("Mixed Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_GAIN_1, "Gain 1", 0.0, 2.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_GAIN_2, "Gain 2", 0.0, 2.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_GAIN_3, "Gain 3", 0.0, 2.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_GAIN_4, "Gain 4", 0.0, 2.0, 1.0, ParameterUnit::Generic),
];
Self {
name,
gains: [1.0, 1.0, 1.0, 1.0],
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for MixerNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_GAIN_1 => self.gains[0] = value.clamp(0.0, 2.0),
PARAM_GAIN_2 => self.gains[1] = value.clamp(0.0, 2.0),
PARAM_GAIN_3 => self.gains[2] = value.clamp(0.0, 2.0),
PARAM_GAIN_4 => self.gains[3] = value.clamp(0.0, 2.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_GAIN_1 => self.gains[0],
PARAM_GAIN_2 => self.gains[1],
PARAM_GAIN_3 => self.gains[2],
PARAM_GAIN_4 => self.gains[3],
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = output.len() / 2;
// Clear output buffer first
output.fill(0.0);
// Mix each input with its gain
for (input_idx, input) in inputs.iter().enumerate().take(4) {
if input_idx >= self.gains.len() {
break;
}
let gain = self.gains[input_idx];
let input_frames = input.len() / 2;
let process_frames = frames.min(input_frames);
for frame in 0..process_frames {
output[frame * 2] += input[frame * 2] * gain; // Left
output[frame * 2 + 1] += input[frame * 2 + 1] * gain; // Right
}
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Mixer"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
gains: self.gains,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
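
A sketch of the gain-weighted sum: two constant inputs, the second attenuated to 0.5, so each output sample is 0.4 * 1.0 + 0.8 * 0.5 = 0.8. Module paths assumed as in the other sketches.

use crate::audio::node_graph::AudioNode;
use crate::audio::nodes::MixerNode; // assumed re-export path

#[test]
fn mixes_with_per_input_gain() {
    let mut mixer = MixerNode::new("Mix");
    mixer.set_parameter(1, 0.5); // PARAM_GAIN_2
    let in1 = vec![0.4f32; 8]; // 4 stereo frames each
    let in2 = vec![0.8f32; 8];
    let mut out = vec![0.0f32; 8];
    let inputs: [&[f32]; 2] = [&in1, &in2];
    let mut outputs: [&mut [f32]; 1] = [&mut out];
    mixer.process(&inputs, &mut outputs, &[], &mut [], 48_000);
    assert!(out.iter().all(|&s| (s - 0.8).abs() < 1e-6));
}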

View File

@ -0,0 +1,83 @@
mod adsr;
mod audio_input;
mod audio_to_cv;
mod automation_input;
mod bit_crusher;
mod bpm_detector;
mod chorus;
mod compressor;
mod constant;
mod delay;
mod distortion;
mod envelope_follower;
mod eq;
mod filter;
mod flanger;
mod fm_synth;
mod gain;
mod lfo;
mod limiter;
mod math;
mod midi_input;
mod midi_to_cv;
mod mixer;
mod multi_sampler;
mod noise;
mod oscillator;
mod oscilloscope;
mod output;
mod pan;
mod phaser;
mod quantizer;
mod reverb;
mod ring_modulator;
mod sample_hold;
mod simple_sampler;
mod slew_limiter;
mod splitter;
mod template_io;
mod vocoder;
mod voice_allocator;
mod wavetable_oscillator;
pub use adsr::ADSRNode;
pub use audio_input::AudioInputNode;
pub use audio_to_cv::AudioToCVNode;
pub use automation_input::{AutomationInputNode, AutomationKeyframe, InterpolationType};
pub use bit_crusher::BitCrusherNode;
pub use bpm_detector::BpmDetectorNode;
pub use chorus::ChorusNode;
pub use compressor::CompressorNode;
pub use constant::ConstantNode;
pub use delay::DelayNode;
pub use distortion::DistortionNode;
pub use envelope_follower::EnvelopeFollowerNode;
pub use eq::EQNode;
pub use filter::FilterNode;
pub use flanger::FlangerNode;
pub use fm_synth::FMSynthNode;
pub use gain::GainNode;
pub use lfo::LFONode;
pub use limiter::LimiterNode;
pub use math::MathNode;
pub use midi_input::MidiInputNode;
pub use midi_to_cv::MidiToCVNode;
pub use mixer::MixerNode;
pub use multi_sampler::{MultiSamplerNode, LoopMode};
pub use noise::NoiseGeneratorNode;
pub use oscillator::OscillatorNode;
pub use oscilloscope::OscilloscopeNode;
pub use output::AudioOutputNode;
pub use pan::PanNode;
pub use phaser::PhaserNode;
pub use quantizer::QuantizerNode;
pub use reverb::ReverbNode;
pub use ring_modulator::RingModulatorNode;
pub use sample_hold::SampleHoldNode;
pub use simple_sampler::SimpleSamplerNode;
pub use slew_limiter::SlewLimiterNode;
pub use splitter::SplitterNode;
pub use template_io::{TemplateInputNode, TemplateOutputNode};
pub use vocoder::VocoderNode;
pub use voice_allocator::VoiceAllocatorNode;
pub use wavetable_oscillator::WavetableOscillatorNode;

View File

@ -0,0 +1,779 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
// Parameters
const PARAM_GAIN: u32 = 0;
const PARAM_ATTACK: u32 = 1;
const PARAM_RELEASE: u32 = 2;
const PARAM_TRANSPOSE: u32 = 3;
/// Loop playback mode
#[derive(Clone, Copy, Debug, PartialEq, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum LoopMode {
/// Play sample once, no looping
OneShot,
/// Loop continuously between loop_start and loop_end
Continuous,
}
/// Metadata about a loaded sample layer (for preset serialization)
#[derive(Clone, Debug)]
pub struct LayerInfo {
pub file_path: String,
pub key_min: u8,
pub key_max: u8,
pub root_key: u8,
pub velocity_min: u8,
pub velocity_max: u8,
pub loop_start: Option<usize>, // Loop start point in samples
pub loop_end: Option<usize>, // Loop end point in samples
pub loop_mode: LoopMode,
}
/// Single sample with velocity range and key range
#[derive(Clone)]
struct SampleLayer {
sample_data: Vec<f32>,
sample_rate: f32,
// Key range: C-1 = 0, C0 = 12, middle C (C4) = 60, C9 = 120
key_min: u8,
key_max: u8,
root_key: u8, // The original pitch of the sample
// Velocity range: 0-127
velocity_min: u8,
velocity_max: u8,
// Loop points (in samples)
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
}
impl SampleLayer {
fn new(
sample_data: Vec<f32>,
sample_rate: f32,
key_min: u8,
key_max: u8,
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) -> Self {
Self {
sample_data,
sample_rate,
key_min,
key_max,
root_key,
velocity_min,
velocity_max,
loop_start,
loop_end,
loop_mode,
}
}
/// Check if this layer matches the given key and velocity
fn matches(&self, key: u8, velocity: u8) -> bool {
key >= self.key_min
&& key <= self.key_max
&& velocity >= self.velocity_min
&& velocity <= self.velocity_max
}
/// Auto-detect loop points using autocorrelation to find a good loop region
/// Returns (loop_start, loop_end) in samples
fn detect_loop_points(sample_data: &[f32], sample_rate: f32) -> Option<(usize, usize)> {
if sample_data.len() < (sample_rate * 0.5) as usize {
return None; // Need at least 0.5 seconds of audio
}
// Look for loop in the sustain region (skip attack/decay, avoid release)
// For sustained instruments, look in the middle 50% of the sample
let search_start = (sample_data.len() as f32 * 0.25) as usize;
let search_end = (sample_data.len() as f32 * 0.75) as usize;
if search_end <= search_start {
return None;
}
// Find the best loop point using autocorrelation
// For sustained instruments like brass/woodwind, we want longer loops
let min_loop_length = (sample_rate * 0.1) as usize; // Min 0.1s loop (more stable)
let max_loop_length = (sample_rate * 10.0) as usize; // Max 10 second loop
let mut best_correlation = -1.0;
let mut best_loop_start = search_start;
let mut best_loop_end = search_end;
// Try different loop lengths from LONGEST to SHORTEST
// This way we prefer longer loops and stop early if we find a good one
let length_step = ((sample_rate * 0.05) as usize).max(512); // 50ms steps
let actual_max_length = max_loop_length.min(search_end - search_start);
// Manually iterate backwards since step_by().rev() doesn't work on RangeInclusive<usize>
let mut loop_length = actual_max_length;
while loop_length >= min_loop_length {
// Try different starting points in the sustain region (finer steps)
let start_step = ((sample_rate * 0.02) as usize).max(256); // 20ms steps
for start in (search_start..search_end - loop_length).step_by(start_step) {
let end = start + loop_length;
if end > search_end {
break;
}
// Calculate correlation between loop end and loop start
let correlation = Self::calculate_loop_correlation(sample_data, start, end);
if correlation > best_correlation {
best_correlation = correlation;
best_loop_start = start;
best_loop_end = end;
}
}
// If we found a good enough loop, stop searching shorter ones
if best_correlation > 0.8 {
break;
}
// Decrement loop_length, with underflow protection
if loop_length < length_step {
break;
}
loop_length -= length_step;
}
// Lower threshold since longer loops are harder to match perfectly
if best_correlation > 0.6 {
Some((best_loop_start, best_loop_end))
} else {
// Fallback: use a reasonable chunk of the sustain region
let fallback_length = ((search_end - search_start) / 2).max(min_loop_length);
Some((search_start, search_start + fallback_length))
}
}
/// Calculate how well the audio loops at the given points
/// Returns correlation value between -1.0 and 1.0 (higher is better)
fn calculate_loop_correlation(sample_data: &[f32], loop_start: usize, loop_end: usize) -> f32 {
let loop_length = loop_end - loop_start;
let window_size = (loop_length / 10).max(128).min(2048); // Compare last 10% of loop
if loop_end + window_size >= sample_data.len() {
return -1.0;
}
// Compare the end of the loop region with the beginning
let region1_start = loop_end - window_size;
let region2_start = loop_start;
let mut sum_xy = 0.0;
let mut sum_x2 = 0.0;
let mut sum_y2 = 0.0;
for i in 0..window_size {
let x = sample_data[region1_start + i];
let y = sample_data[region2_start + i];
sum_xy += x * y;
sum_x2 += x * x;
sum_y2 += y * y;
}
// Normalized cross-correlation (Pearson-style; assumes roughly zero-mean audio)
let denominator = (sum_x2 * sum_y2).sqrt();
if denominator > 0.0 {
sum_xy / denominator
} else {
-1.0
}
}
}
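// Aside (illustrative test, not part of the original change): the
// correlation above rewards period-aligned loop points. With a pure sine,
// an end point a whole number of periods past the start scores near 1.0,
// while a half-period offset inverts the compared windows.
#[cfg(test)]
mod loop_correlation_sketch {
    use super::SampleLayer;

    #[test]
    fn period_aligned_loops_correlate() {
        let period = 48usize;
        let data: Vec<f32> = (0..10_000)
            .map(|i| (i as f32 / period as f32 * 2.0 * std::f32::consts::PI).sin())
            .collect();
        let aligned =
            SampleLayer::calculate_loop_correlation(&data, 1_000, 1_000 + 100 * period);
        let off_by_half =
            SampleLayer::calculate_loop_correlation(&data, 1_000, 1_000 + 100 * period + period / 2);
        assert!(aligned > 0.99);
        assert!(off_by_half < -0.9);
    }
}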
/// Active voice playing a sample
struct Voice {
layer_index: usize,
playhead: f32,
note: u8,
velocity: u8,
is_active: bool,
// Envelope
envelope_phase: EnvelopePhase,
envelope_value: f32,
// Loop crossfade state
crossfade_buffer: Vec<f32>, // Stores samples from before loop_start for crossfading
crossfade_length: usize, // Length of crossfade in samples (e.g., 100 samples = ~2ms @ 48kHz)
}
#[derive(Debug, Clone, Copy, PartialEq)]
enum EnvelopePhase {
Attack,
Sustain,
Release,
}
impl Voice {
fn new(layer_index: usize, note: u8, velocity: u8) -> Self {
Self {
layer_index,
playhead: 0.0,
note,
velocity,
is_active: true,
envelope_phase: EnvelopePhase::Attack,
envelope_value: 0.0,
crossfade_buffer: Vec::new(),
crossfade_length: 1000, // ~20ms at 48kHz (longer for smoother loops)
}
}
}
/// Multi-sample instrument with velocity layers and key zones
pub struct MultiSamplerNode {
name: String,
// Sample layers
layers: Vec<SampleLayer>,
layer_infos: Vec<LayerInfo>, // Metadata about loaded layers
// Voice management
voices: Vec<Voice>,
max_voices: usize,
// Parameters
gain: f32,
attack_time: f32, // seconds
release_time: f32, // seconds
transpose: i8, // semitones
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl MultiSamplerNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("MIDI In", SignalType::Midi, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_GAIN, "Gain", 0.0, 2.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_ATTACK, "Attack", 0.001, 1.0, 0.01, ParameterUnit::Time),
Parameter::new(PARAM_RELEASE, "Release", 0.01, 5.0, 0.1, ParameterUnit::Time),
Parameter::new(PARAM_TRANSPOSE, "Transpose", -24.0, 24.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
layers: Vec::new(),
layer_infos: Vec::new(),
voices: Vec::new(),
max_voices: 16,
gain: 1.0,
attack_time: 0.01,
release_time: 0.1,
transpose: 0,
inputs,
outputs,
parameters,
}
}
/// Add a sample layer
pub fn add_layer(
&mut self,
sample_data: Vec<f32>,
sample_rate: f32,
key_min: u8,
key_max: u8,
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) {
let layer = SampleLayer::new(
sample_data,
sample_rate,
key_min,
key_max,
root_key,
velocity_min,
velocity_max,
loop_start,
loop_end,
loop_mode,
);
self.layers.push(layer);
}
/// Load a sample layer from a file path
pub fn load_layer_from_file(
&mut self,
path: &str,
key_min: u8,
key_max: u8,
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) -> Result<(), String> {
use crate::audio::sample_loader::load_audio_file;
let sample_data = load_audio_file(path)?;
// Auto-detect loop points if not provided and mode is Continuous
let (final_loop_start, final_loop_end) = if loop_mode == LoopMode::Continuous && loop_start.is_none() && loop_end.is_none() {
if let Some((start, end)) = SampleLayer::detect_loop_points(&sample_data.samples, sample_data.sample_rate as f32) {
(Some(start), Some(end))
} else {
(None, None)
}
} else {
(loop_start, loop_end)
};
self.add_layer(
sample_data.samples,
sample_data.sample_rate as f32,
key_min,
key_max,
root_key,
velocity_min,
velocity_max,
final_loop_start,
final_loop_end,
loop_mode,
);
// Store layer metadata for preset serialization
self.layer_infos.push(LayerInfo {
file_path: path.to_string(),
key_min,
key_max,
root_key,
velocity_min,
velocity_max,
loop_start: final_loop_start,
loop_end: final_loop_end,
loop_mode,
});
Ok(())
}
/// Get information about all loaded layers
pub fn get_layers_info(&self) -> &[LayerInfo] {
&self.layer_infos
}
/// Get sample data for a specific layer (for preset embedding)
pub fn get_layer_data(&self, layer_index: usize) -> Option<(Vec<f32>, f32)> {
self.layers.get(layer_index).map(|layer| {
(layer.sample_data.clone(), layer.sample_rate)
})
}
/// Update a layer's configuration
pub fn update_layer(
&mut self,
layer_index: usize,
key_min: u8,
key_max: u8,
root_key: u8,
velocity_min: u8,
velocity_max: u8,
loop_start: Option<usize>,
loop_end: Option<usize>,
loop_mode: LoopMode,
) -> Result<(), String> {
if layer_index >= self.layers.len() {
return Err("Layer index out of bounds".to_string());
}
// Update the layer data
self.layers[layer_index].key_min = key_min;
self.layers[layer_index].key_max = key_max;
self.layers[layer_index].root_key = root_key;
self.layers[layer_index].velocity_min = velocity_min;
self.layers[layer_index].velocity_max = velocity_max;
self.layers[layer_index].loop_start = loop_start;
self.layers[layer_index].loop_end = loop_end;
self.layers[layer_index].loop_mode = loop_mode;
// Update the layer info
if layer_index < self.layer_infos.len() {
self.layer_infos[layer_index].key_min = key_min;
self.layer_infos[layer_index].key_max = key_max;
self.layer_infos[layer_index].root_key = root_key;
self.layer_infos[layer_index].velocity_min = velocity_min;
self.layer_infos[layer_index].velocity_max = velocity_max;
self.layer_infos[layer_index].loop_start = loop_start;
self.layer_infos[layer_index].loop_end = loop_end;
self.layer_infos[layer_index].loop_mode = loop_mode;
}
Ok(())
}
/// Remove a layer
pub fn remove_layer(&mut self, layer_index: usize) -> Result<(), String> {
if layer_index >= self.layers.len() {
return Err("Layer index out of bounds".to_string());
}
self.layers.remove(layer_index);
if layer_index < self.layer_infos.len() {
self.layer_infos.remove(layer_index);
}
// Stop any voices playing this layer
for voice in &mut self.voices {
if voice.layer_index == layer_index {
voice.is_active = false;
} else if voice.layer_index > layer_index {
// Adjust indices for layers that were shifted down
voice.layer_index -= 1;
}
}
Ok(())
}
/// Find the best matching layer for a given note and velocity
fn find_layer(&self, note: u8, velocity: u8) -> Option<usize> {
self.layers
.iter()
.enumerate()
.find(|(_, layer)| layer.matches(note, velocity))
.map(|(index, _)| index)
}
/// Trigger a note
fn note_on(&mut self, note: u8, velocity: u8) {
let transposed_note = (note as i16 + self.transpose as i16).clamp(0, 127) as u8;
if let Some(layer_index) = self.find_layer(transposed_note, velocity) {
// Find an inactive voice; if the pool is full, steal the first voice
let voice_index = self
.voices
.iter()
.position(|v| !v.is_active)
.unwrap_or_else(|| {
// All voices active, reuse the first one
if self.voices.len() < self.max_voices {
self.voices.len()
} else {
0
}
});
let voice = Voice::new(layer_index, note, velocity);
if voice_index < self.voices.len() {
self.voices[voice_index] = voice;
} else {
self.voices.push(voice);
}
}
}
/// Release a note
fn note_off(&mut self, note: u8) {
for voice in &mut self.voices {
if voice.note == note && voice.is_active {
voice.envelope_phase = EnvelopePhase::Release;
}
}
}
}
impl AudioNode for MultiSamplerNode {
fn category(&self) -> NodeCategory {
NodeCategory::Generator
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_GAIN => {
self.gain = value.clamp(0.0, 2.0);
}
PARAM_ATTACK => {
self.attack_time = value.clamp(0.001, 1.0);
}
PARAM_RELEASE => {
self.release_time = value.clamp(0.01, 5.0);
}
PARAM_TRANSPOSE => {
self.transpose = value.clamp(-24.0, 24.0) as i8;
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_GAIN => self.gain,
PARAM_ATTACK => self.attack_time,
PARAM_RELEASE => self.release_time,
PARAM_TRANSPOSE => self.transpose as f32,
_ => 0.0,
}
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let frames = output.len() / 2;
// Clear output
output.fill(0.0);
// Process MIDI events
if !midi_inputs.is_empty() {
for event in midi_inputs[0].iter() {
if event.is_note_on() {
self.note_on(event.data1, event.data2);
} else if event.is_note_off() {
self.note_off(event.data1);
}
}
}
// Extract parameters needed for processing
let gain = self.gain;
let attack_time = self.attack_time;
let release_time = self.release_time;
let transpose = self.transpose;
// Process all active voices
for voice in &mut self.voices {
if !voice.is_active {
continue;
}
if voice.layer_index >= self.layers.len() {
continue;
}
let layer = &self.layers[voice.layer_index];
// Calculate playback speed, applying the transpose offset so that
// transposition shifts pitch as well as layer selection
let semitone_diff = voice.note as i16 + transpose as i16 - layer.root_key as i16;
let speed = 2.0_f32.powf(semitone_diff as f32 / 12.0);
let speed_adjusted = speed * (layer.sample_rate / sample_rate as f32);
for frame in 0..frames {
// Read sample with linear interpolation and loop handling
let playhead = voice.playhead;
let mut sample = 0.0;
if !layer.sample_data.is_empty() && playhead >= 0.0 {
let index = playhead.floor() as usize;
// Check if we need to handle looping
if layer.loop_mode == LoopMode::Continuous {
if let (Some(loop_start), Some(loop_end)) = (layer.loop_start, layer.loop_end) {
// Validate loop points
if loop_start < loop_end && loop_end <= layer.sample_data.len() {
// Fill crossfade buffer on first loop with samples just before loop_start
// These will be crossfaded with the beginning of the loop for seamless looping
if voice.crossfade_buffer.is_empty() && loop_start >= voice.crossfade_length {
let crossfade_start = loop_start.saturating_sub(voice.crossfade_length);
voice.crossfade_buffer = layer.sample_data[crossfade_start..loop_start].to_vec();
}
// Check if we've reached the loop end
if index >= loop_end {
// Wrap around to loop start
let loop_length = loop_end - loop_start;
let offset_from_end = index - loop_end;
let wrapped_index = loop_start + (offset_from_end % loop_length);
voice.playhead = wrapped_index as f32 + (playhead - playhead.floor());
}
// Read sample at current position
let current_index = voice.playhead.floor() as usize;
if current_index < layer.sample_data.len() {
let frac = voice.playhead - voice.playhead.floor();
let sample1 = layer.sample_data[current_index];
let sample2 = if current_index + 1 < layer.sample_data.len() {
layer.sample_data[current_index + 1]
} else {
layer.sample_data[loop_start] // Wrap to loop start for interpolation
};
sample = sample1 + (sample2 - sample1) * frac;
// Apply crossfade only at the END of loop
// Crossfade the end of loop with samples BEFORE loop_start
if current_index >= loop_start && current_index < loop_end {
if !voice.crossfade_buffer.is_empty() {
let crossfade_len = voice.crossfade_length.min(voice.crossfade_buffer.len());
// Crossfade only over the final crossfade_len samples of the loop,
// blending them with the samples captured from just before loop_start
if current_index >= loop_end - crossfade_len && current_index < loop_end {
let crossfade_pos = current_index - (loop_end - crossfade_len);
if crossfade_pos < voice.crossfade_buffer.len() {
let end_sample = sample; // Current sample near the end of the loop
let pre_loop_sample = voice.crossfade_buffer[crossfade_pos]; // Matching sample from just before loop_start
// Equal-power crossfade: fade out end, fade in pre-loop
let fade_ratio = crossfade_pos as f32 / crossfade_len as f32;
let fade_out = (1.0 - fade_ratio).sqrt();
let fade_in = fade_ratio.sqrt();
sample = end_sample * fade_out + pre_loop_sample * fade_in;
}
}
}
}
}
} else {
// Invalid loop points, play normally
if index < layer.sample_data.len() {
let frac = playhead - playhead.floor();
let sample1 = layer.sample_data[index];
let sample2 = if index + 1 < layer.sample_data.len() {
layer.sample_data[index + 1]
} else {
0.0
};
sample = sample1 + (sample2 - sample1) * frac;
}
}
} else {
// No loop points defined, play normally
if index < layer.sample_data.len() {
let frac = playhead - playhead.floor();
let sample1 = layer.sample_data[index];
let sample2 = if index + 1 < layer.sample_data.len() {
layer.sample_data[index + 1]
} else {
0.0
};
sample = sample1 + (sample2 - sample1) * frac;
}
}
} else {
// OneShot mode - play normally without looping
if index < layer.sample_data.len() {
let frac = playhead - playhead.floor();
let sample1 = layer.sample_data[index];
let sample2 = if index + 1 < layer.sample_data.len() {
layer.sample_data[index + 1]
} else {
0.0
};
sample = sample1 + (sample2 - sample1) * frac;
}
}
}
// Process envelope
match voice.envelope_phase {
EnvelopePhase::Attack => {
let attack_samples = attack_time * sample_rate as f32;
voice.envelope_value += 1.0 / attack_samples;
if voice.envelope_value >= 1.0 {
voice.envelope_value = 1.0;
voice.envelope_phase = EnvelopePhase::Sustain;
}
}
EnvelopePhase::Sustain => {
voice.envelope_value = 1.0;
}
EnvelopePhase::Release => {
let release_samples = release_time * sample_rate as f32;
voice.envelope_value -= 1.0 / release_samples;
if voice.envelope_value <= 0.0 {
voice.envelope_value = 0.0;
voice.is_active = false;
}
}
}
let envelope = voice.envelope_value.clamp(0.0, 1.0);
// Apply velocity scaling (0-127 -> 0-1)
let velocity_scale = voice.velocity as f32 / 127.0;
// Mix into output
let final_sample = sample * envelope * velocity_scale * gain;
output[frame * 2] += final_sample;
output[frame * 2 + 1] += final_sample;
// Advance playhead
voice.playhead += speed_adjusted;
// Stop if we've reached the end (only for OneShot mode)
if layer.loop_mode == LoopMode::OneShot {
if voice.playhead >= layer.sample_data.len() as f32 {
voice.is_active = false;
break;
}
}
}
}
}
fn reset(&mut self) {
self.voices.clear();
}
fn node_type(&self) -> &str {
"MultiSampler"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
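// Note: this creates a fresh, empty sampler; loaded sample layers are not carried over to the clone.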
Box::new(Self::new(self.name.clone()))
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
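
The sqrt fades at the loop seam above form an equal-power pair: at every point of the crossfade, fade_out² + fade_in² = 1, so the spliced region keeps constant power instead of dipping in loudness the way a linear crossfade of uncorrelated material would. A minimal standalone sketch of the curve:

```rust
// Equal-power crossfade curve, as used at the loop seam: the squared gains
// always sum to 1, so perceived level stays constant across the splice.
fn main() {
    let len = 8;
    for pos in 0..len {
        let ratio = pos as f32 / len as f32;
        let fade_out = (1.0 - ratio).sqrt();
        let fade_in = ratio.sqrt();
        let power = fade_out * fade_out + fade_in * fade_in;
        println!("pos {}: out={:.3} in={:.3} power={:.3}", pos, fade_out, fade_in, power);
    }
}
```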


@ -0,0 +1,205 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use rand::Rng;
const PARAM_AMPLITUDE: u32 = 0;
const PARAM_COLOR: u32 = 1;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum NoiseColor {
White = 0,
Pink = 1,
}
impl NoiseColor {
fn from_f32(value: f32) -> Self {
match value.round() as i32 {
1 => NoiseColor::Pink,
_ => NoiseColor::White,
}
}
}
/// Noise generator node with white and pink noise
pub struct NoiseGeneratorNode {
name: String,
amplitude: f32,
color: NoiseColor,
// Pink noise state (Paul Kellet's pink noise algorithm)
pink_b0: f32,
pink_b1: f32,
pink_b2: f32,
pink_b3: f32,
pink_b4: f32,
pink_b5: f32,
pink_b6: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl NoiseGeneratorNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_AMPLITUDE, "Amplitude", 0.0, 1.0, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_COLOR, "Color", 0.0, 1.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
amplitude: 0.5,
color: NoiseColor::White,
pink_b0: 0.0,
pink_b1: 0.0,
pink_b2: 0.0,
pink_b3: 0.0,
pink_b4: 0.0,
pink_b5: 0.0,
pink_b6: 0.0,
inputs,
outputs,
parameters,
}
}
/// Generate white noise sample
fn generate_white(&self) -> f32 {
let mut rng = rand::thread_rng();
rng.gen_range(-1.0..1.0)
}
/// Generate pink noise sample using Paul Kellet's algorithm
fn generate_pink(&mut self) -> f32 {
let mut rng = rand::thread_rng();
let white: f32 = rng.gen_range(-1.0..1.0);
self.pink_b0 = 0.99886 * self.pink_b0 + white * 0.0555179;
self.pink_b1 = 0.99332 * self.pink_b1 + white * 0.0750759;
self.pink_b2 = 0.96900 * self.pink_b2 + white * 0.1538520;
self.pink_b3 = 0.86650 * self.pink_b3 + white * 0.3104856;
self.pink_b4 = 0.55000 * self.pink_b4 + white * 0.5329522;
self.pink_b5 = -0.7616 * self.pink_b5 - white * 0.0168980;
let pink = self.pink_b0 + self.pink_b1 + self.pink_b2 + self.pink_b3 + self.pink_b4 + self.pink_b5 + self.pink_b6 + white * 0.5362;
self.pink_b6 = white * 0.115926;
// Scale to approximately -1 to 1
pink * 0.11
}
}
impl AudioNode for NoiseGeneratorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Generator
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_AMPLITUDE => self.amplitude = value.clamp(0.0, 1.0),
PARAM_COLOR => self.color = NoiseColor::from_f32(value),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_AMPLITUDE => self.amplitude,
PARAM_COLOR => self.color as i32 as f32,
_ => 0.0,
}
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
// Process by frames, not samples
let frames = output.len() / 2;
for frame in 0..frames {
let sample = match self.color {
NoiseColor::White => self.generate_white(),
NoiseColor::Pink => self.generate_pink(),
} * self.amplitude;
// Write to both channels (mono source duplicated to stereo)
output[frame * 2] = sample; // Left
output[frame * 2 + 1] = sample; // Right
}
}
fn reset(&mut self) {
self.pink_b0 = 0.0;
self.pink_b1 = 0.0;
self.pink_b2 = 0.0;
self.pink_b3 = 0.0;
self.pink_b4 = 0.0;
self.pink_b5 = 0.0;
self.pink_b6 = 0.0;
}
fn node_type(&self) -> &str {
"NoiseGenerator"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
amplitude: self.amplitude,
color: self.color,
pink_b0: 0.0,
pink_b1: 0.0,
pink_b2: 0.0,
pink_b3: 0.0,
pink_b4: 0.0,
pink_b5: 0.0,
pink_b6: 0.0,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
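
Each `pink_bN` update in `generate_pink` is a one-pole lowpass `y[n] = a*y[n-1] + g*x[n]` with a different coefficient; the staggered cutoffs stack into an approximately 1/f (about -3 dB/octave) spectrum. As a rough orientation, the -3 dB point of a one-pole with coefficient `a` sits near `-ln(a)*fs/2π` (an approximation valid for `a` close to 1; Kellet's coefficients were tuned around 44.1 kHz). A sketch under those assumptions:

```rust
// Approximate cutoff frequencies of the one-pole stages in Kellet's pink
// noise filter, assuming fs = 44.1 kHz (the rate the coefficients target).
fn main() {
    let fs = 44100.0f32;
    for &a in &[0.99886f32, 0.99332, 0.96900, 0.86650, 0.55000] {
        let fc = -a.ln() * fs / (2.0 * std::f32::consts::PI);
        println!("a = {:.5} -> fc ~ {:7.1} Hz", a, fc);
    }
}
```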


@ -0,0 +1,205 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const PARAM_FREQUENCY: u32 = 0;
const PARAM_AMPLITUDE: u32 = 1;
const PARAM_WAVEFORM: u32 = 2;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Waveform {
Sine = 0,
Saw = 1,
Square = 2,
Triangle = 3,
}
impl Waveform {
fn from_f32(value: f32) -> Self {
match value.round() as i32 {
1 => Waveform::Saw,
2 => Waveform::Square,
3 => Waveform::Triangle,
_ => Waveform::Sine,
}
}
}
/// Oscillator node with multiple waveforms
pub struct OscillatorNode {
name: String,
frequency: f32,
amplitude: f32,
waveform: Waveform,
phase: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl OscillatorNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("V/Oct", SignalType::CV, 0),
NodePort::new("FM", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_FREQUENCY, "Frequency", 20.0, 20000.0, 440.0, ParameterUnit::Frequency),
Parameter::new(PARAM_AMPLITUDE, "Amplitude", 0.0, 1.0, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_WAVEFORM, "Waveform", 0.0, 3.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
frequency: 440.0,
amplitude: 0.5,
waveform: Waveform::Sine,
phase: 0.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for OscillatorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Generator
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_FREQUENCY => self.frequency = value.clamp(20.0, 20000.0),
PARAM_AMPLITUDE => self.amplitude = value.clamp(0.0, 1.0),
PARAM_WAVEFORM => self.waveform = Waveform::from_f32(value),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_FREQUENCY => self.frequency,
PARAM_AMPLITUDE => self.amplitude,
PARAM_WAVEFORM => self.waveform as i32 as f32,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let sample_rate_f32 = sample_rate as f32;
// Audio signals are stereo (interleaved L/R)
// Process by frames, not samples
let frames = output.len() / 2;
for frame in 0..frames {
// Start with base frequency
let mut frequency = self.frequency;
// V/Oct input: Standard V/Oct (0V = A4 440Hz, ±1V per octave)
if !inputs.is_empty() && frame < inputs[0].len() {
let voct = inputs[0][frame]; // Read V/Oct CV (mono)
// Convert V/Oct to frequency: f = 440 * 2^(voct)
// voct = 0.0 -> 440 Hz (A4)
// voct = 1.0 -> 880 Hz (A5)
// voct = -0.75 -> 261.6 Hz (C4, middle C)
frequency = 440.0 * 2.0_f32.powf(voct);
}
// FM input: modulates the frequency
if inputs.len() > 1 && frame < inputs[1].len() {
let fm = inputs[1][frame]; // Read FM CV (mono)
frequency *= 1.0 + fm;
}
// Generate waveform sample based on waveform type
let sample = match self.waveform {
Waveform::Sine => (self.phase * 2.0 * PI).sin(),
Waveform::Saw => 2.0 * self.phase - 1.0, // Ramp from -1 to 1
Waveform::Square => {
if self.phase < 0.5 { 1.0 } else { -1.0 }
}
Waveform::Triangle => {
// Triangle: rises from -1 to 1, falls back to -1
4.0 * (self.phase - 0.5).abs() - 1.0
}
} * self.amplitude;
// Write to both channels (mono source duplicated to stereo)
output[frame * 2] = sample; // Left
output[frame * 2 + 1] = sample; // Right
// Update phase once per frame
self.phase += frequency / sample_rate_f32;
if self.phase >= 1.0 {
self.phase -= 1.0;
}
}
}
fn reset(&mut self) {
self.phase = 0.0;
}
fn node_type(&self) -> &str {
"Oscillator"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
frequency: self.frequency,
amplitude: self.amplitude,
waveform: self.waveform,
phase: 0.0, // Reset phase for new instance
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
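
For reference, the V/Oct convention used here (0 V = A4 = 440 Hz, one volt per octave) pins down a few easy-to-check anchor points. A small standalone sketch (the helper name is illustrative, not the project's API):

```rust
// Reference points for the oscillator's V/Oct convention.
fn voct_to_frequency(voct: f32) -> f32 {
    440.0 * 2.0_f32.powf(voct)
}

fn main() {
    assert!((voct_to_frequency(0.0) - 440.0).abs() < 1e-3);   // A4
    assert!((voct_to_frequency(1.0) - 880.0).abs() < 1e-3);   // A5, one octave up
    assert!((voct_to_frequency(-0.75) - 261.63).abs() < 0.1); // C4, middle C
    println!("V/Oct anchor points check out");
}
```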


@ -0,0 +1,310 @@
use crate::audio::midi::MidiEvent;
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use std::sync::{Arc, Mutex};
const PARAM_TIME_SCALE: u32 = 0;
const PARAM_TRIGGER_MODE: u32 = 1;
const PARAM_TRIGGER_LEVEL: u32 = 2;
const BUFFER_SIZE: usize = 96000; // 1 second of interleaved stereo at 48kHz (2 × 48000 samples)
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum TriggerMode {
FreeRunning = 0,
RisingEdge = 1,
FallingEdge = 2,
VoltPerOctave = 3,
}
impl TriggerMode {
fn from_f32(value: f32) -> Self {
match value.round() as i32 {
1 => TriggerMode::RisingEdge,
2 => TriggerMode::FallingEdge,
3 => TriggerMode::VoltPerOctave,
_ => TriggerMode::FreeRunning,
}
}
}
/// Circular buffer for storing audio samples
pub struct CircularBuffer {
buffer: Vec<f32>,
write_pos: usize,
capacity: usize,
}
impl CircularBuffer {
fn new(capacity: usize) -> Self {
Self {
buffer: vec![0.0; capacity],
write_pos: 0,
capacity,
}
}
fn write(&mut self, samples: &[f32]) {
for &sample in samples {
self.buffer[self.write_pos] = sample;
self.write_pos = (self.write_pos + 1) % self.capacity;
}
}
fn read(&self, count: usize) -> Vec<f32> {
let count = count.min(self.capacity);
let mut result = Vec::with_capacity(count);
// Locate where the most recent `count` samples begin, handling wrap-around
let start_pos = if self.write_pos >= count {
self.write_pos - count
} else {
self.capacity - (count - self.write_pos)
};
for i in 0..count {
let pos = (start_pos + i) % self.capacity;
result.push(self.buffer[pos]);
}
result
}
fn clear(&mut self) {
self.buffer.fill(0.0);
self.write_pos = 0;
}
}
/// Oscilloscope node for visualizing audio and CV signals
pub struct OscilloscopeNode {
name: String,
time_scale: f32, // Milliseconds to display (10-1000ms)
trigger_mode: TriggerMode,
trigger_level: f32, // -1.0 to 1.0
last_sample: f32, // For edge detection
voct_value: f32, // Current V/oct input value
sample_counter: usize, // Counter for V/oct triggering
trigger_period: usize, // Period in samples for V/oct triggering
// Shared buffers for reading from Tauri commands
buffer: Arc<Mutex<CircularBuffer>>, // Audio buffer
cv_buffer: Arc<Mutex<CircularBuffer>>, // CV buffer
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl OscilloscopeNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
NodePort::new("V/oct", SignalType::CV, 1),
NodePort::new("CV In", SignalType::CV, 2),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_TIME_SCALE, "Time Scale", 10.0, 1000.0, 100.0, ParameterUnit::Time),
Parameter::new(PARAM_TRIGGER_MODE, "Trigger", 0.0, 3.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_TRIGGER_LEVEL, "Trigger Level", -1.0, 1.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
time_scale: 100.0,
trigger_mode: TriggerMode::FreeRunning,
trigger_level: 0.0,
last_sample: 0.0,
voct_value: 0.0,
sample_counter: 0,
trigger_period: 480, // Default to ~100Hz at 48kHz
buffer: Arc::new(Mutex::new(CircularBuffer::new(BUFFER_SIZE))),
cv_buffer: Arc::new(Mutex::new(CircularBuffer::new(BUFFER_SIZE))),
inputs,
outputs,
parameters,
}
}
/// Get a clone of the buffer Arc for reading from external code (Tauri commands)
pub fn get_buffer(&self) -> Arc<Mutex<CircularBuffer>> {
Arc::clone(&self.buffer)
}
/// Read samples from the buffer (for Tauri commands)
pub fn read_samples(&self, count: usize) -> Vec<f32> {
if let Ok(buffer) = self.buffer.lock() {
buffer.read(count)
} else {
vec![0.0; count]
}
}
/// Read CV samples from the CV buffer (for Tauri commands)
pub fn read_cv_samples(&self, count: usize) -> Vec<f32> {
if let Ok(buffer) = self.cv_buffer.lock() {
buffer.read(count)
} else {
vec![0.0; count]
}
}
/// Clear the buffer
pub fn clear_buffer(&self) {
if let Ok(mut buffer) = self.buffer.lock() {
buffer.clear();
}
if let Ok(mut cv_buffer) = self.cv_buffer.lock() {
cv_buffer.clear();
}
}
/// Convert V/oct to frequency in Hz (matches oscillator convention)
/// 0V = A4 (440 Hz), ±1V per octave
fn voct_to_frequency(voct: f32) -> f32 {
440.0 * 2.0_f32.powf(voct)
}
}
impl AudioNode for OscilloscopeNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_TIME_SCALE => self.time_scale = value.clamp(10.0, 1000.0),
PARAM_TRIGGER_MODE => self.trigger_mode = TriggerMode::from_f32(value),
PARAM_TRIGGER_LEVEL => self.trigger_level = value.clamp(-1.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_TIME_SCALE => self.time_scale,
PARAM_TRIGGER_MODE => self.trigger_mode as i32 as f32,
PARAM_TRIGGER_LEVEL => self.trigger_level,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
let output = &mut outputs[0];
let len = input.len().min(output.len());
// Read V/oct input if available and update trigger period
if inputs.len() > 1 && !inputs[1].is_empty() {
self.voct_value = inputs[1][0]; // Use first sample of V/oct input
let frequency = Self::voct_to_frequency(self.voct_value);
// Period in samples, clamped to at least one sample
let period_samples = (sample_rate as f32 / frequency).max(1.0);
self.trigger_period = period_samples as usize;
}
// Update sample counter for V/oct triggering
if self.trigger_mode == TriggerMode::VoltPerOctave {
self.sample_counter = (self.sample_counter + len) % self.trigger_period;
}
// Pass through audio (copy input to output)
output[..len].copy_from_slice(&input[..len]);
// Capture audio samples to buffer
if let Ok(mut buffer) = self.buffer.lock() {
buffer.write(&input[..len]);
}
// Capture CV samples if CV input is connected (input 2)
if inputs.len() > 2 && !inputs[2].is_empty() {
let cv_input = inputs[2];
if let Ok(mut cv_buffer) = self.cv_buffer.lock() {
cv_buffer.write(&cv_input[..len.min(cv_input.len())]);
}
}
// Update last sample for trigger detection (use left channel, frame 0)
if !input.is_empty() {
self.last_sample = input[0];
}
}
fn reset(&mut self) {
self.last_sample = 0.0;
self.voct_value = 0.0;
self.sample_counter = 0;
self.clear_buffer();
}
fn node_type(&self) -> &str {
"Oscilloscope"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
time_scale: self.time_scale,
trigger_mode: self.trigger_mode,
trigger_level: self.trigger_level,
last_sample: 0.0,
voct_value: 0.0,
sample_counter: 0,
trigger_period: 480,
buffer: Arc::new(Mutex::new(CircularBuffer::new(BUFFER_SIZE))),
cv_buffer: Arc::new(Mutex::new(CircularBuffer::new(BUFFER_SIZE))),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn get_oscilloscope_data(&self, sample_count: usize) -> Option<Vec<f32>> {
Some(self.read_samples(sample_count))
}
fn get_oscilloscope_cv_data(&self, sample_count: usize) -> Option<Vec<f32>> {
Some(self.read_cv_samples(sample_count))
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
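
The `read` method returns the most recent `count` samples in chronological order regardless of wrap-around, which is what the scope UI needs. A minimal standalone illustration of that indexing (a sketch, not the project's API):

```rust
// Ring-buffer read: the last `count` writes come back in chronological order.
fn main() {
    let capacity = 5;
    let mut buffer = vec![0.0f32; capacity];
    let mut write_pos = 0;
    for s in 1..=7 {
        buffer[write_pos] = s as f32;
        write_pos = (write_pos + 1) % capacity;
    }
    let count = 3;
    let start = (write_pos + capacity - count) % capacity;
    let recent: Vec<f32> = (0..count).map(|i| buffer[(start + i) % capacity]).collect();
    println!("{:?}", recent); // [5.0, 6.0, 7.0]
}
```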


@ -0,0 +1,104 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
/// Audio output node - collects audio and passes it to the main output
pub struct AudioOutputNode {
name: String,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
}
impl AudioOutputNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
// Output node has an output for graph consistency, but it's typically the final node
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
Self {
name,
inputs,
outputs,
}
}
}
impl AudioNode for AudioOutputNode {
fn category(&self) -> NodeCategory {
NodeCategory::Output
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&[] // No parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Simply pass through the input to the output
let input = inputs[0];
let output = &mut outputs[0];
let len = input.len().min(output.len());
output[..len].copy_from_slice(&input[..len]);
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"AudioOutput"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}


@ -0,0 +1,176 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const PARAM_PAN: u32 = 0;
/// Stereo panning node using constant-power panning law
/// Converts mono audio to stereo with controllable pan position
pub struct PanNode {
name: String,
pan: f32,
left_gain: f32,
right_gain: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl PanNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
NodePort::new("Pan CV", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_PAN, "Pan", -1.0, 1.0, 0.0, ParameterUnit::Generic),
];
let mut node = Self {
name,
pan: 0.0,
left_gain: 1.0,
right_gain: 1.0,
inputs,
outputs,
parameters,
};
node.update_gains();
node
}
/// Update left/right gains using constant-power panning law
fn update_gains(&mut self) {
// Constant-power panning: pan from -1 to +1 maps to angle 0 to PI/2
let angle = (self.pan + 1.0) * 0.5 * PI / 2.0;
self.left_gain = angle.cos();
self.right_gain = angle.sin();
}
}
impl AudioNode for PanNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_PAN => {
self.pan = value.clamp(-1.0, 1.0);
self.update_gains();
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_PAN => self.pan,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let audio_input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
// Process by frames, not samples
let frames = audio_input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
for frame in 0..frames_to_process {
// Get base pan position
let mut pan = self.pan;
// Add CV modulation if connected
if inputs.len() > 1 && frame < inputs[1].len() {
let cv = inputs[1][frame]; // CV is mono
// Map unipolar CV (0..1) to a bipolar offset (-1..+1) and add to the base pan
pan += cv * 2.0 - 1.0;
pan = pan.clamp(-1.0, 1.0);
}
// Recompute gains every frame, since pan may be modulated per-sample by CV
let angle = (pan + 1.0) * 0.5 * PI / 2.0;
let left_gain = angle.cos();
let right_gain = angle.sin();
// Read stereo input
let left_in = audio_input[frame * 2];
let right_in = audio_input[frame * 2 + 1];
// Mix both input channels with panning
// When pan is -1 (full left), left gets full signal, right gets nothing
// When pan is 0 (center), both get equal signal
// When pan is +1 (full right), right gets full signal, left gets nothing
output[frame * 2] = (left_in + right_in) * left_gain; // Left
output[frame * 2 + 1] = (left_in + right_in) * right_gain; // Right
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Pan"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
pan: self.pan,
left_gain: self.left_gain,
right_gain: self.right_gain,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
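
The panning law above maps pan -1..+1 onto a quarter circle, so the two gains always satisfy left² + right² = 1 ("constant power"): center sits at roughly 0.707/0.707 (-3 dB per side) rather than the 0.5/0.5 of linear panning, avoiding a loudness dip in the middle. A quick numeric check (helper name illustrative):

```rust
// Constant-power panning: for any pan position the squared gains sum to 1.
use std::f32::consts::PI;

fn pan_gains(pan: f32) -> (f32, f32) {
    let angle = (pan + 1.0) * 0.5 * PI / 2.0; // -1..+1 mapped to 0..PI/2
    (angle.cos(), angle.sin())
}

fn main() {
    for &pan in &[-1.0f32, -0.5, 0.0, 0.5, 1.0] {
        let (l, r) = pan_gains(pan);
        println!("pan {:+.1}: L={:.3} R={:.3} power={:.3}", pan, l, r, l * l + r * r);
    }
}
```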


@ -0,0 +1,297 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const PARAM_RATE: u32 = 0;
const PARAM_DEPTH: u32 = 1;
const PARAM_STAGES: u32 = 2;
const PARAM_FEEDBACK: u32 = 3;
const PARAM_WET_DRY: u32 = 4;
const MAX_STAGES: usize = 8;
/// First-order all-pass filter for phaser
struct AllPassFilter {
a1: f32,
zm1_left: f32,
zm1_right: f32,
}
impl AllPassFilter {
fn new() -> Self {
Self {
a1: 0.0,
zm1_left: 0.0,
zm1_right: 0.0,
}
}
fn set_coefficient(&mut self, frequency: f32, sample_rate: f32) {
// First-order all-pass coefficient
// a1 = (tan(π*f/fs) - 1) / (tan(π*f/fs) + 1)
let tan_val = ((PI * frequency) / sample_rate).tan();
self.a1 = (tan_val - 1.0) / (tan_val + 1.0);
}
fn process(&mut self, input: f32, is_left: bool) -> f32 {
let zm1 = if is_left {
&mut self.zm1_left
} else {
&mut self.zm1_right
};
// All-pass filter: y[n] = a1*x[n] + x[n-1] - a1*y[n-1]
let output = self.a1 * input + *zm1;
*zm1 = input - self.a1 * output;
output
}
fn reset(&mut self) {
self.zm1_left = 0.0;
self.zm1_right = 0.0;
}
}
/// Phaser effect using cascaded all-pass filters
pub struct PhaserNode {
name: String,
rate: f32, // LFO rate in Hz (0.1 to 10 Hz)
depth: f32, // Modulation depth 0.0 to 1.0
stages: usize, // Number of all-pass stages (2, 4, 6, or 8)
feedback: f32, // Feedback amount -0.95 to 0.95
wet_dry: f32, // 0.0 = dry only, 1.0 = wet only
// All-pass filters
filters: Vec<AllPassFilter>,
// Feedback buffers
feedback_left: f32,
feedback_right: f32,
// LFO state
lfo_phase: f32,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl PhaserNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_RATE, "Rate", 0.1, 10.0, 0.5, ParameterUnit::Frequency),
Parameter::new(PARAM_DEPTH, "Depth", 0.0, 1.0, 0.7, ParameterUnit::Generic),
Parameter::new(PARAM_STAGES, "Stages", 2.0, 8.0, 6.0, ParameterUnit::Generic),
Parameter::new(PARAM_FEEDBACK, "Feedback", -0.95, 0.95, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_WET_DRY, "Wet/Dry", 0.0, 1.0, 0.5, ParameterUnit::Generic),
];
let mut filters = Vec::with_capacity(MAX_STAGES);
for _ in 0..MAX_STAGES {
filters.push(AllPassFilter::new());
}
Self {
name,
rate: 0.5,
depth: 0.7,
stages: 6,
feedback: 0.5,
wet_dry: 0.5,
filters,
feedback_left: 0.0,
feedback_right: 0.0,
lfo_phase: 0.0,
sample_rate: 48000,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for PhaserNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_RATE => {
self.rate = value.clamp(0.1, 10.0);
}
PARAM_DEPTH => {
self.depth = value.clamp(0.0, 1.0);
}
PARAM_STAGES => {
// Round to even numbers: 2, 4, 6, 8
let stages = (value.round() as usize).clamp(2, 8);
self.stages = if stages % 2 == 0 { stages } else { stages + 1 };
}
PARAM_FEEDBACK => {
self.feedback = value.clamp(-0.95, 0.95);
}
PARAM_WET_DRY => {
self.wet_dry = value.clamp(0.0, 1.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_RATE => self.rate,
PARAM_DEPTH => self.depth,
PARAM_STAGES => self.stages as f32,
PARAM_FEEDBACK => self.feedback,
PARAM_WET_DRY => self.wet_dry,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
let dry_gain = 1.0 - self.wet_dry;
let wet_gain = self.wet_dry;
// Frequency range for all-pass filters (200 Hz to 2000 Hz)
let min_freq = 200.0;
let max_freq = 2000.0;
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
// Generate LFO value (sine wave, 0 to 1)
let lfo_value = (self.lfo_phase * 2.0 * PI).sin() * 0.5 + 0.5;
// Calculate modulated frequency
let frequency = min_freq + (max_freq - min_freq) * lfo_value * self.depth;
// Update all filter coefficients
for filter in self.filters.iter_mut().take(self.stages) {
filter.set_coefficient(frequency, self.sample_rate as f32);
}
// Add feedback
let mut left_sig = left_in + self.feedback_left * self.feedback;
let mut right_sig = right_in + self.feedback_right * self.feedback;
// Process through all-pass filter chain
for i in 0..self.stages {
left_sig = self.filters[i].process(left_sig, true);
right_sig = self.filters[i].process(right_sig, false);
}
// Store feedback
self.feedback_left = left_sig;
self.feedback_right = right_sig;
// Mix dry and wet signals
output[frame * 2] = left_in * dry_gain + left_sig * wet_gain;
output[frame * 2 + 1] = right_in * dry_gain + right_sig * wet_gain;
// Advance LFO phase
self.lfo_phase += self.rate / self.sample_rate as f32;
if self.lfo_phase >= 1.0 {
self.lfo_phase -= 1.0;
}
}
}
fn reset(&mut self) {
for filter in &mut self.filters {
filter.reset();
}
self.feedback_left = 0.0;
self.feedback_right = 0.0;
self.lfo_phase = 0.0;
}
fn node_type(&self) -> &str {
"Phaser"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
let mut filters = Vec::with_capacity(MAX_STAGES);
for _ in 0..MAX_STAGES {
filters.push(AllPassFilter::new());
}
Box::new(Self {
name: self.name.clone(),
rate: self.rate,
depth: self.depth,
stages: self.stages,
feedback: self.feedback,
wet_dry: self.wet_dry,
filters,
feedback_left: 0.0,
feedback_right: 0.0,
lfo_phase: 0.0,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
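
The all-pass coefficient formula determines where the phase shift is centered: a1 sweeps from -1 as f approaches 0, through 0 at f = fs/4, toward +1 near fs/2. Sweeping f with the LFO therefore moves the notches that appear when the phase-shifted signal is mixed back with the dry path. A standalone check of the coefficient curve (a sketch mirroring `set_coefficient` above):

```rust
// First-order all-pass coefficient a1 = (tan(pi*f/fs) - 1) / (tan(pi*f/fs) + 1).
use std::f32::consts::PI;

fn coefficient(frequency: f32, sample_rate: f32) -> f32 {
    let t = (PI * frequency / sample_rate).tan();
    (t - 1.0) / (t + 1.0)
}

fn main() {
    let fs = 48000.0;
    for &f in &[200.0f32, 2000.0, 12000.0] {
        println!("f = {:6.0} Hz -> a1 = {:+.4}", f, coefficient(f, fs));
    }
    // At f = fs/4, tan(pi/4) = 1, so a1 = 0
    assert!(coefficient(12000.0, fs).abs() < 1e-6);
}
```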


@ -0,0 +1,232 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_SCALE: u32 = 0;
const PARAM_ROOT_NOTE: u32 = 1;
/// Quantizer - snaps CV values to musical scales
/// Converts continuous CV into discrete pitch values based on a scale
/// Scale parameter:
/// 0 = Chromatic (all 12 notes)
/// 1 = Major scale
/// 2 = Minor scale (natural)
/// 3 = Pentatonic major
/// 4 = Pentatonic minor
/// 5 = Dorian
/// 6 = Phrygian
/// 7 = Lydian
/// 8 = Mixolydian
/// 9 = Whole tone
/// 10 = Octaves only
pub struct QuantizerNode {
name: String,
scale: u32,
root_note: u32, // 0-11 (C-B)
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl QuantizerNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("CV In", SignalType::CV, 0),
];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
NodePort::new("Gate Out", SignalType::CV, 1), // Trigger when note changes
];
let parameters = vec![
Parameter::new(PARAM_SCALE, "Scale", 0.0, 10.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_ROOT_NOTE, "Root", 0.0, 11.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
scale: 0,
root_note: 0,
inputs,
outputs,
parameters,
}
}
/// Get the scale intervals (semitones from root)
fn get_scale_intervals(&self) -> Vec<u32> {
match self.scale {
0 => vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], // Chromatic
1 => vec![0, 2, 4, 5, 7, 9, 11], // Major
2 => vec![0, 2, 3, 5, 7, 8, 10], // Minor (natural)
3 => vec![0, 2, 4, 7, 9], // Pentatonic major
4 => vec![0, 3, 5, 7, 10], // Pentatonic minor
5 => vec![0, 2, 3, 5, 7, 9, 10], // Dorian
6 => vec![0, 1, 3, 5, 7, 8, 10], // Phrygian
7 => vec![0, 2, 4, 6, 7, 9, 11], // Lydian
8 => vec![0, 2, 4, 5, 7, 9, 10], // Mixolydian
9 => vec![0, 2, 4, 6, 8, 10], // Whole tone
10 => vec![0], // Octaves only
_ => vec![0, 2, 4, 5, 7, 9, 11], // Default to major
}
}
/// Quantize a CV value to the nearest note in the scale
fn quantize(&self, cv: f32) -> f32 {
// Convert V/Oct to MIDI note (standard: 0V = A4 = MIDI 69)
// cv = (midi_note - 69) / 12.0
// midi_note = cv * 12.0 + 69
let input_midi_note = cv * 12.0 + 69.0;
// Clamp to reasonable range
let input_midi_note = input_midi_note.clamp(0.0, 127.0);
// Get scale intervals (relative to root)
let intervals = self.get_scale_intervals();
// Position within the octave, relative to the root
// (e.g., if root is D (2), then C becomes 10, D becomes 0)
let note_in_octave = input_midi_note % 12.0;
let note_relative_to_root = (note_in_octave - self.root_note as f32 + 12.0) % 12.0;
// Find the nearest scale note, also considering each interval an octave up
// or down so values near the top of an octave can snap up to the next root
// instead of jumping down almost an octave within the current one
let mut best_offset = 0i32;
let mut min_distance = f32::MAX;
for &interval in &intervals {
    for &octave_shift in [-12i32, 0, 12].iter() {
        let candidate = interval as i32 + octave_shift;
        let distance = (note_relative_to_root - candidate as f32).abs();
        if distance < min_distance {
            min_distance = distance;
            best_offset = candidate;
        }
    }
}
// The reference point is the root at or below the input note; add the
// chosen offset back to it
let quantized_midi_note = input_midi_note - note_relative_to_root + best_offset as f32;
// Clamp result
let quantized_midi_note = quantized_midi_note.clamp(0.0, 127.0);
// Convert back to V/Oct: voct = (midi_note - 69) / 12.0
(quantized_midi_note - 69.0) / 12.0
}
}
impl AudioNode for QuantizerNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_SCALE => self.scale = (value as u32).clamp(0, 10),
PARAM_ROOT_NOTE => self.root_note = (value as u32).clamp(0, 11),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_SCALE => self.scale as f32,
PARAM_ROOT_NOTE => self.root_note as f32,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
let length = input.len().min(outputs[0].len());
// Split outputs to avoid borrow conflicts
if outputs.len() > 1 {
let (cv_out, gate_out) = outputs.split_at_mut(1);
let cv_output = &mut cv_out[0];
let gate_output = &mut gate_out[0];
let gate_length = length.min(gate_output.len());
let mut last_note: Option<f32> = None;
for i in 0..length {
let quantized = self.quantize(input[i]);
cv_output[i] = quantized;
// Generate gate trigger when note changes
if i < gate_length {
if let Some(prev) = last_note {
gate_output[i] = if (quantized - prev).abs() > 0.001 { 1.0 } else { 0.0 };
} else {
gate_output[i] = 1.0; // First note triggers gate
}
}
last_note = Some(quantized);
}
} else {
// No gate output, just quantize CV
let cv_output = &mut outputs[0];
for i in 0..length {
cv_output[i] = self.quantize(input[i]);
}
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Quantizer"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
scale: self.scale,
root_note: self.root_note,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
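
The quantizer's V/Oct-to-MIDI mapping follows from the same 0 V = A4 convention as the oscillator: cv·12 + 69 gives the MIDI note, and the inverse converts back after snapping. A couple of fixed points, as a standalone sketch (helper names illustrative):

```rust
// Round trip between V/Oct and MIDI note numbers (0 V = A4 = MIDI 69).
fn voct_to_midi(cv: f32) -> f32 { cv * 12.0 + 69.0 }
fn midi_to_voct(note: f32) -> f32 { (note - 69.0) / 12.0 }

fn main() {
    assert_eq!(voct_to_midi(0.0), 69.0);  // A4
    assert_eq!(voct_to_midi(1.0), 81.0);  // A5, one octave up
    assert!((midi_to_voct(60.0) + 0.75).abs() < 1e-6); // middle C = -0.75 V
    println!("V/Oct <-> MIDI mapping is consistent");
}
```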


@ -0,0 +1,321 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_ROOM_SIZE: u32 = 0;
const PARAM_DAMPING: u32 = 1;
const PARAM_WET_DRY: u32 = 2;
// Schroeder/Freeverb topology: a parallel bank of comb filters feeding a series of all-pass filters
// Comb filter delays in samples (the classic Freeverb tunings, originally chosen for 44.1 kHz)
const COMB_DELAYS: [usize; 8] = [1557, 1617, 1491, 1422, 1277, 1356, 1188, 1116];
// All-pass filter delays in samples (from the same source)
const ALLPASS_DELAYS: [usize; 4] = [225, 556, 441, 341];
/// Process a single channel through comb and all-pass filters
fn process_channel(
input: f32,
comb_filters: &mut [CombFilter],
allpass_filters: &mut [AllPassFilter],
) -> f32 {
// Sum parallel comb filters and scale down to prevent excessive gain
// With 8 comb filters, we need to scale the output significantly
let mut output = 0.0;
for comb in comb_filters.iter_mut() {
output += comb.process(input);
}
output *= 0.015; // Scale down the summed comb output
// Series all-pass filters
for allpass in allpass_filters.iter_mut() {
output = allpass.process(output);
}
output
}
/// Single comb filter for reverb
struct CombFilter {
buffer: Vec<f32>,
buffer_size: usize,
filter_store: f32,
write_pos: usize,
damp: f32,
feedback: f32,
}
impl CombFilter {
fn new(size: usize) -> Self {
Self {
buffer: vec![0.0; size],
buffer_size: size,
filter_store: 0.0,
write_pos: 0,
damp: 0.5,
feedback: 0.5,
}
}
fn process(&mut self, input: f32) -> f32 {
let output = self.buffer[self.write_pos];
// One-pole lowpass filter
self.filter_store = output * (1.0 - self.damp) + self.filter_store * self.damp;
self.buffer[self.write_pos] = input + self.filter_store * self.feedback;
self.write_pos = (self.write_pos + 1) % self.buffer_size;
output
}
fn mute(&mut self) {
self.buffer.fill(0.0);
self.filter_store = 0.0;
}
fn set_damp(&mut self, val: f32) {
self.damp = val;
}
fn set_feedback(&mut self, val: f32) {
self.feedback = val;
}
}
/// Single all-pass filter for reverb
struct AllPassFilter {
buffer: Vec<f32>,
buffer_size: usize,
write_pos: usize,
}
impl AllPassFilter {
fn new(size: usize) -> Self {
Self {
buffer: vec![0.0; size],
buffer_size: size,
write_pos: 0,
}
}
fn process(&mut self, input: f32) -> f32 {
let delayed = self.buffer[self.write_pos];
let output = -input + delayed;
self.buffer[self.write_pos] = input + delayed * 0.5;
self.write_pos = (self.write_pos + 1) % self.buffer_size;
output
}
fn mute(&mut self) {
self.buffer.fill(0.0);
}
}
/// Schroeder reverb node with room size and damping controls
pub struct ReverbNode {
name: String,
room_size: f32, // 0.0 to 1.0
damping: f32, // 0.0 to 1.0
wet_dry: f32, // 0.0 = dry only, 1.0 = wet only
// Left channel filters
comb_filters_left: Vec<CombFilter>,
allpass_filters_left: Vec<AllPassFilter>,
// Right channel filters
comb_filters_right: Vec<CombFilter>,
allpass_filters_right: Vec<AllPassFilter>,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl ReverbNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_ROOM_SIZE, "Room Size", 0.0, 1.0, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_DAMPING, "Damping", 0.0, 1.0, 0.5, ParameterUnit::Generic),
Parameter::new(PARAM_WET_DRY, "Wet/Dry", 0.0, 1.0, 0.3, ParameterUnit::Generic),
];
// Create comb filters for both channels
// Right channel has slightly different delays to create stereo effect
let comb_filters_left: Vec<CombFilter> = COMB_DELAYS.iter().map(|&d| CombFilter::new(d)).collect();
let comb_filters_right: Vec<CombFilter> = COMB_DELAYS.iter().map(|&d| CombFilter::new(d + 23)).collect();
// Create all-pass filters for both channels
let allpass_filters_left: Vec<AllPassFilter> = ALLPASS_DELAYS.iter().map(|&d| AllPassFilter::new(d)).collect();
let allpass_filters_right: Vec<AllPassFilter> = ALLPASS_DELAYS.iter().map(|&d| AllPassFilter::new(d + 23)).collect();
let mut node = Self {
name,
room_size: 0.5,
damping: 0.5,
wet_dry: 0.3,
comb_filters_left,
allpass_filters_left,
comb_filters_right,
allpass_filters_right,
inputs,
outputs,
parameters,
};
node.update_filters();
node
}
fn update_filters(&mut self) {
// Room size affects feedback (larger room = more feedback)
let feedback = 0.28 + self.room_size * 0.7;
// Update all comb filters
for comb in &mut self.comb_filters_left {
comb.set_feedback(feedback);
comb.set_damp(self.damping);
}
for comb in &mut self.comb_filters_right {
comb.set_feedback(feedback);
comb.set_damp(self.damping);
}
}
}
impl AudioNode for ReverbNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_ROOM_SIZE => {
self.room_size = value.clamp(0.0, 1.0);
self.update_filters();
}
PARAM_DAMPING => {
self.damping = value.clamp(0.0, 1.0);
self.update_filters();
}
PARAM_WET_DRY => {
self.wet_dry = value.clamp(0.0, 1.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_ROOM_SIZE => self.room_size,
PARAM_DAMPING => self.damping,
PARAM_WET_DRY => self.wet_dry,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let frames = input.len() / 2;
let output_frames = output.len() / 2;
let frames_to_process = frames.min(output_frames);
let dry_gain = 1.0 - self.wet_dry;
let wet_gain = self.wet_dry;
for frame in 0..frames_to_process {
let left_in = input[frame * 2];
let right_in = input[frame * 2 + 1];
// Process both channels
let left_wet = process_channel(
left_in,
&mut self.comb_filters_left,
&mut self.allpass_filters_left,
);
let right_wet = process_channel(
right_in,
&mut self.comb_filters_right,
&mut self.allpass_filters_right,
);
// Mix dry and wet signals
output[frame * 2] = left_in * dry_gain + left_wet * wet_gain;
output[frame * 2 + 1] = right_in * dry_gain + right_wet * wet_gain;
}
}
fn reset(&mut self) {
for comb in &mut self.comb_filters_left {
comb.mute();
}
for comb in &mut self.comb_filters_right {
comb.mute();
}
for allpass in &mut self.allpass_filters_left {
allpass.mute();
}
for allpass in &mut self.allpass_filters_right {
allpass.mute();
}
}
fn node_type(&self) -> &str {
"Reverb"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self::new(self.name.clone()))
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
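
At the heart of the comb stage is a delayed feedback path: an impulse fed into a single comb comes back as echoes spaced by the delay length, each attenuated by the feedback coefficient (which is what Room Size controls above). A toy sketch with a 4-sample delay, the damping lowpass omitted:

```rust
// Impulse response of a bare feedback comb filter: echoes at multiples of the
// delay, each scaled by the feedback gain (damping lowpass left out).
fn main() {
    let delay = 4usize;
    let feedback = 0.5f32;
    let mut buffer = vec![0.0f32; delay];
    let mut pos = 0;
    let mut input = 1.0f32; // impulse at t = 0
    for t in 0..16 {
        let out = buffer[pos];
        buffer[pos] = input + out * feedback;
        pos = (pos + 1) % delay;
        if out != 0.0 {
            println!("t = {:2}: echo {:.4}", t, out); // t = 4, 8, 12 -> 1.0, 0.5, 0.25
        }
        input = 0.0;
    }
}
```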


@ -0,0 +1,145 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_MIX: u32 = 0;
/// Ring Modulator - multiplies two signals together
/// Creates metallic, inharmonic timbres by multiplying carrier and modulator
pub struct RingModulatorNode {
name: String,
mix: f32, // 0.0 = dry (carrier only), 1.0 = fully modulated
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl RingModulatorNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Carrier", SignalType::Audio, 0),
NodePort::new("Modulator", SignalType::Audio, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_MIX, "Mix", 0.0, 1.0, 1.0, ParameterUnit::Generic),
];
Self {
name,
mix: 1.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for RingModulatorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_MIX => self.mix = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_MIX => self.mix,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
// Get carrier input
let carrier = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0]
} else {
&[]
};
// Get modulator input
let modulator = if inputs.len() > 1 && !inputs[1].is_empty() {
inputs[1]
} else {
&[]
};
// Process each sample
for i in 0..length {
let carrier_sample = if i < carrier.len() { carrier[i] } else { 0.0 };
let modulator_sample = if i < modulator.len() { modulator[i] } else { 0.0 };
// Ring modulation: multiply the two signals
let modulated = carrier_sample * modulator_sample;
// Mix between dry (carrier) and wet (modulated)
output[i] = carrier_sample * (1.0 - self.mix) + modulated * self.mix;
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"RingModulator"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
mix: self.mix,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
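
The metallic character comes straight from the math: multiplying two sinusoids produces only their sum and difference frequencies (sin a · sin b = ½[cos(a-b) - cos(a+b)]), which are generally not harmonically related to either input. A standalone numeric check of the identity:

```rust
// Product-to-sum identity behind ring modulation's sum/difference sidebands.
fn main() {
    for &(a, b) in &[(0.3f32, 1.1f32), (2.0, 0.7), (1.4, 2.9)] {
        let product = a.sin() * b.sin();
        let identity = 0.5 * ((a - b).cos() - (a + b).cos());
        assert!((product - identity).abs() < 1e-5);
    }
    println!("sin(a)*sin(b) = 0.5*(cos(a-b) - cos(a+b)) holds");
}
```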


@ -0,0 +1,145 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
/// Sample & Hold - samples input CV when triggered by a gate signal
/// Classic modular synth utility for creating stepped sequences
pub struct SampleHoldNode {
name: String,
held_value: f32,
last_gate: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl SampleHoldNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("CV In", SignalType::CV, 0),
NodePort::new("Gate In", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![];
Self {
name,
held_value: 0.0,
last_gate: 0.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for SampleHoldNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let length = output.len();
// Get CV input
let cv_input = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0]
} else {
&[]
};
// Get Gate input
let gate_input = if inputs.len() > 1 && !inputs[1].is_empty() {
inputs[1]
} else {
&[]
};
// Process each sample
for i in 0..length {
let cv = if i < cv_input.len() { cv_input[i] } else { 0.0 };
let gate = if i < gate_input.len() { gate_input[i] } else { 0.0 };
// Detect rising edge (trigger)
let gate_active = gate > 0.5;
let last_gate_active = self.last_gate > 0.5;
if gate_active && !last_gate_active {
// Rising edge detected - sample the input
self.held_value = cv;
}
self.last_gate = gate;
output[i] = self.held_value;
}
}
fn reset(&mut self) {
self.held_value = 0.0;
self.last_gate = 0.0;
}
fn node_type(&self) -> &str {
"SampleHold"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
held_value: self.held_value,
last_gate: self.last_gate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
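
A standalone sketch (hypothetical buffers, not part of this diff) of the rising-edge logic in process() above:

// Rising-edge sample-and-hold, mirroring the logic in process():
// capture CV when the gate crosses above 0.5, hold it otherwise
fn sample_hold(cv: &[f32], gate: &[f32]) -> Vec<f32> {
    let mut held = 0.0f32;
    let mut last_gate = 0.0f32;
    cv.iter()
        .zip(gate)
        .map(|(&v, &g)| {
            if g > 0.5 && last_gate <= 0.5 {
                held = v; // rising edge: sample the input
            }
            last_gate = g;
            held
        })
        .collect()
}

fn main() {
    let cv = [0.1, 0.2, 0.3, 0.4, 0.5];
    let gate = [0.0, 1.0, 1.0, 0.0, 1.0];
    // Holds 0.2 after the first edge, then 0.5: [0.0, 0.2, 0.2, 0.2, 0.5]
    println!("{:?}", sample_hold(&cv, &gate));
}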

View File

@@ -0,0 +1,286 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::sync::{Arc, Mutex};
// Parameters
const PARAM_GAIN: u32 = 0;
const PARAM_LOOP: u32 = 1;
const PARAM_PITCH_SHIFT: u32 = 2;
/// Simple single-sample playback node with pitch shifting
pub struct SimpleSamplerNode {
name: String,
// Sample data (shared, can be set externally)
sample_data: Arc<Mutex<Vec<f32>>>,
sample_rate_original: f32,
sample_path: Option<String>, // Path to loaded sample file
// Playback state
playhead: f32, // Fractional position in sample
is_playing: bool,
gate_prev: bool,
// Parameters
gain: f32,
loop_enabled: bool,
pitch_shift: f32, // Additional pitch shift in semitones
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl SimpleSamplerNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("V/Oct", SignalType::CV, 0),
NodePort::new("Gate", SignalType::CV, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_GAIN, "Gain", 0.0, 2.0, 1.0, ParameterUnit::Generic),
Parameter::new(PARAM_LOOP, "Loop", 0.0, 1.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_PITCH_SHIFT, "Pitch Shift", -12.0, 12.0, 0.0, ParameterUnit::Generic),
];
Self {
name,
sample_data: Arc::new(Mutex::new(Vec::new())),
sample_rate_original: 48000.0,
sample_path: None,
playhead: 0.0,
is_playing: false,
gate_prev: false,
gain: 1.0,
loop_enabled: false,
pitch_shift: 0.0,
inputs,
outputs,
parameters,
}
}
/// Set the sample data (mono)
pub fn set_sample(&mut self, data: Vec<f32>, sample_rate: f32) {
let mut sample = self.sample_data.lock().unwrap();
*sample = data;
self.sample_rate_original = sample_rate;
}
/// Get the sample data reference (for external loading)
pub fn get_sample_data(&self) -> Arc<Mutex<Vec<f32>>> {
Arc::clone(&self.sample_data)
}
/// Load a sample from a file path
pub fn load_sample_from_file(&mut self, path: &str) -> Result<(), String> {
use crate::audio::sample_loader::load_audio_file;
let sample_data = load_audio_file(path)?;
self.set_sample(sample_data.samples, sample_data.sample_rate as f32);
self.sample_path = Some(path.to_string());
Ok(())
}
/// Get the currently loaded sample path
pub fn get_sample_path(&self) -> Option<&str> {
self.sample_path.as_deref()
}
/// Get the current sample data and sample rate (for preset embedding)
pub fn get_sample_data_for_embedding(&self) -> (Vec<f32>, f32) {
let sample = self.sample_data.lock().unwrap();
(sample.clone(), self.sample_rate_original)
}
/// Convert V/oct CV to playback speed multiplier
/// 0V = 1.0 (original speed), +1V = 2.0 (one octave up), -1V = 0.5 (one octave down)
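/// e.g. +7 semitones of CV (voct ≈ 0.583) with pitch_shift = 0 gives speed 2^(7/12) ≈ 1.498 (a perfect fifth up)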
fn voct_to_speed(&self, voct: f32) -> f32 {
// Add pitch shift parameter
let total_semitones = voct * 12.0 + self.pitch_shift;
2.0_f32.powf(total_semitones / 12.0)
}
/// Read sample at playhead with linear interpolation
fn read_sample(&self, playhead: f32, sample: &[f32]) -> f32 {
if sample.is_empty() {
return 0.0;
}
let index = playhead.floor() as usize;
let frac = playhead - playhead.floor();
if index >= sample.len() {
return 0.0;
}
let sample1 = sample[index];
let sample2 = if index + 1 < sample.len() {
sample[index + 1]
} else if self.loop_enabled {
sample[0] // Loop back to start
} else {
0.0
};
// Linear interpolation
sample1 + (sample2 - sample1) * frac
}
}
impl AudioNode for SimpleSamplerNode {
fn category(&self) -> NodeCategory {
NodeCategory::Generator
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_GAIN => {
self.gain = value.clamp(0.0, 2.0);
}
PARAM_LOOP => {
self.loop_enabled = value > 0.5;
}
PARAM_PITCH_SHIFT => {
self.pitch_shift = value.clamp(-12.0, 12.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_GAIN => self.gain,
PARAM_LOOP => if self.loop_enabled { 1.0 } else { 0.0 },
PARAM_PITCH_SHIFT => self.pitch_shift,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
// Lock the sample data
let sample_data = self.sample_data.lock().unwrap();
if sample_data.is_empty() {
// No sample loaded, output silence
for output in outputs.iter_mut() {
output.fill(0.0);
}
return;
}
let output = &mut outputs[0];
let frames = output.len() / 2;
for frame in 0..frames {
// Read CV inputs
let voct = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0][frame.min((inputs[0].len() / 2).saturating_sub(1)) * 2]
} else {
0.0 // Default to original pitch
};
let gate = if inputs.len() > 1 && !inputs[1].is_empty() {
inputs[1][frame.min((inputs[1].len() / 2).saturating_sub(1)) * 2]
} else {
0.0
};
// Detect gate trigger (rising edge)
let gate_active = gate > 0.5;
if gate_active && !self.gate_prev {
// Trigger: start playback from beginning
self.playhead = 0.0;
self.is_playing = true;
}
self.gate_prev = gate_active;
// Generate sample
let sample = if self.is_playing {
let s = self.read_sample(self.playhead, &sample_data);
// Calculate playback speed from V/Oct
let speed = self.voct_to_speed(voct);
// Advance playhead with resampling
let speed_adjusted = speed * (self.sample_rate_original / sample_rate as f32);
self.playhead += speed_adjusted;
// Check if we've reached the end
if self.playhead >= sample_data.len() as f32 {
if self.loop_enabled {
// Loop back to start
self.playhead = self.playhead % sample_data.len() as f32;
} else {
// Stop playback
self.is_playing = false;
self.playhead = 0.0;
}
}
s * self.gain
} else {
0.0
};
// Output stereo (same signal to both channels)
output[frame * 2] = sample;
output[frame * 2 + 1] = sample;
}
}
fn reset(&mut self) {
self.playhead = 0.0;
self.is_playing = false;
self.gate_prev = false;
}
fn node_type(&self) -> &str {
"SimpleSampler"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
// Share the loaded sample and preserve parameters; playback state starts fresh
let mut node = Self::new(self.name.clone());
node.sample_data = Arc::clone(&self.sample_data);
node.sample_rate_original = self.sample_rate_original;
node.sample_path = self.sample_path.clone();
node.gain = self.gain;
node.loop_enabled = self.loop_enabled;
node.pitch_shift = self.pitch_shift;
Box::new(node)
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
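
A quick standalone check (hypothetical rates, not part of this diff) of the resampling math in process(): the playback speed combines the V/oct pitch ratio with the ratio of source to engine sample rates.

// Playhead advance per output frame, as computed in process():
// speed = 2^(semitones / 12) * (source_rate / engine_rate)
fn main() {
    let source_rate = 44_100.0f32; // sample recorded at 44.1 kHz
    let engine_rate = 48_000.0f32; // engine running at 48 kHz
    let voct = 1.0f32;             // +1 V = one octave up
    let pitch_shift = 0.0f32;
    let speed = 2.0f32.powf((voct * 12.0 + pitch_shift) / 12.0);
    let per_frame = speed * (source_rate / engine_rate);
    // 2.0 * 0.91875 = 1.8375 source frames per output frame
    println!("{per_frame}");
}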

View File

@@ -0,0 +1,164 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
const PARAM_RISE_TIME: u32 = 0;
const PARAM_FALL_TIME: u32 = 1;
/// Slew limiter - limits the rate of change of a CV signal
/// Useful for creating portamento/glide effects and smoothing control signals
pub struct SlewLimiterNode {
name: String,
rise_time: f32, // Time in seconds to rise from 0 to 1
fall_time: f32, // Time in seconds to fall from 1 to 0
last_value: f32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl SlewLimiterNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("CV In", SignalType::CV, 0),
];
let outputs = vec![
NodePort::new("CV Out", SignalType::CV, 0),
];
let parameters = vec![
Parameter::new(PARAM_RISE_TIME, "Rise Time", 0.0, 5.0, 0.01, ParameterUnit::Time),
Parameter::new(PARAM_FALL_TIME, "Fall Time", 0.0, 5.0, 0.01, ParameterUnit::Time),
];
Self {
name,
rise_time: 0.01,
fall_time: 0.01,
last_value: 0.0,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for SlewLimiterNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_RISE_TIME => self.rise_time = value.clamp(0.0, 5.0),
PARAM_FALL_TIME => self.fall_time = value.clamp(0.0, 5.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_RISE_TIME => self.rise_time,
PARAM_FALL_TIME => self.fall_time,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
let output = &mut outputs[0];
let length = input.len().min(output.len());
// Calculate maximum change per sample
let sample_duration = 1.0 / sample_rate as f32;
// Rise/fall rates (units per second)
let rise_rate = if self.rise_time > 0.0001 {
1.0 / self.rise_time
} else {
f32::MAX // No limiting
};
let fall_rate = if self.fall_time > 0.0001 {
1.0 / self.fall_time
} else {
f32::MAX // No limiting
};
for i in 0..length {
let target = input[i];
let difference = target - self.last_value;
let max_change = if difference > 0.0 {
// Rising
rise_rate * sample_duration
} else {
// Falling
fall_rate * sample_duration
};
// Limit the change
let limited_difference = difference.clamp(-max_change, max_change);
self.last_value += limited_difference;
output[i] = self.last_value;
}
}
fn reset(&mut self) {
self.last_value = 0.0;
}
fn node_type(&self) -> &str {
"SlewLimiter"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
rise_time: self.rise_time,
fall_time: self.fall_time,
last_value: self.last_value,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
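
A standalone sketch (hypothetical 1 kHz control rate, not part of this diff) of the per-sample limiting loop above: a unit step with a 10 ms rise time ramps to the target over ten samples.

// Slew-limit a step from 0.0 to 1.0, mirroring the loop in process()
fn main() {
    let rise_time = 0.01f32;      // 10 ms rise time
    let sample_rate = 1_000.0f32; // hypothetical 1 kHz control rate
    let max_step = (1.0 / rise_time) / sample_rate; // 0.1 per sample
    let mut value = 0.0f32;
    let ramp: Vec<f32> = (0..12)
        .map(|_| {
            let difference = 1.0 - value;
            value += difference.clamp(-max_step, max_step);
            value
        })
        .collect();
    // Ramps 0.1, 0.2, ... and settles at 1.0 after ten samples
    println!("{ramp:?}");
}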

View File

@@ -0,0 +1,112 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
/// Splitter node - copies input to multiple outputs for parallel routing
pub struct SplitterNode {
name: String,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl SplitterNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Out 1", SignalType::Audio, 0),
NodePort::new("Out 2", SignalType::Audio, 1),
NodePort::new("Out 3", SignalType::Audio, 2),
NodePort::new("Out 4", SignalType::Audio, 3),
];
let parameters = vec![];
Self {
name,
inputs,
outputs,
parameters,
}
}
}
impl AudioNode for SplitterNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {
// No parameters
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
if inputs.is_empty() || outputs.is_empty() {
return;
}
let input = inputs[0];
// Copy input to all outputs
for output in outputs.iter_mut() {
let len = input.len().min(output.len());
output[..len].copy_from_slice(&input[..len]);
}
}
fn reset(&mut self) {
// No state to reset
}
fn node_type(&self) -> &str {
"Splitter"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}

View File

@@ -0,0 +1,192 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, SignalType};
use crate::audio::midi::MidiEvent;
/// Template Input node - represents the MIDI input for one voice in a VoiceAllocator
pub struct TemplateInputNode {
name: String,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl TemplateInputNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![];
let outputs = vec![
NodePort::new("MIDI Out", SignalType::Midi, 0),
];
Self {
name,
inputs,
outputs,
parameters: vec![],
}
}
}
impl AudioNode for TemplateInputNode {
fn category(&self) -> NodeCategory {
NodeCategory::Input
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
_inputs: &[&[f32]],
_outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
// TemplateInput receives MIDI from VoiceAllocator and outputs it
// The MIDI was already placed in midi_outputs by the graph before calling process()
// So there's nothing to do here - the MIDI is already in the output buffer
}
fn reset(&mut self) {}
fn node_type(&self) -> &str {
"TemplateInput"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn handle_midi(&mut self, _event: &MidiEvent) {
// Nothing to do here: the enclosing graph routes this node's MIDI to connected nodes
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
/// Template Output node - represents the audio output from one voice in a VoiceAllocator
pub struct TemplateOutputNode {
name: String,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl TemplateOutputNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Audio In", SignalType::Audio, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
Self {
name,
inputs,
outputs,
parameters: vec![],
}
}
}
impl AudioNode for TemplateOutputNode {
fn category(&self) -> NodeCategory {
NodeCategory::Output
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, _id: u32, _value: f32) {}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
// Copy input to output - the graph reads from output buffers
if !inputs.is_empty() && !outputs.is_empty() {
let input = inputs[0];
let output = &mut outputs[0];
let len = input.len().min(output.len());
output[..len].copy_from_slice(&input[..len]);
}
}
fn reset(&mut self) {}
fn node_type(&self) -> &str {
"TemplateOutput"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
Box::new(Self {
name: self.name.clone(),
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}

View File

@@ -0,0 +1,370 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const PARAM_BANDS: u32 = 0;
const PARAM_ATTACK: u32 = 1;
const PARAM_RELEASE: u32 = 2;
const PARAM_MIX: u32 = 3;
const MAX_BANDS: usize = 32;
/// Simple bandpass filter using biquad
struct BandpassFilter {
// Biquad coefficients
b0: f32,
b1: f32,
b2: f32,
a1: f32,
a2: f32,
// State variables (separate for modulator and carrier, L/R channels)
mod_z1_left: f32,
mod_z2_left: f32,
mod_z1_right: f32,
mod_z2_right: f32,
car_z1_left: f32,
car_z2_left: f32,
car_z1_right: f32,
car_z2_right: f32,
}
impl BandpassFilter {
fn new() -> Self {
Self {
b0: 0.0,
b1: 0.0,
b2: 0.0,
a1: 0.0,
a2: 0.0,
mod_z1_left: 0.0,
mod_z2_left: 0.0,
mod_z1_right: 0.0,
mod_z2_right: 0.0,
car_z1_left: 0.0,
car_z2_left: 0.0,
car_z1_right: 0.0,
car_z2_right: 0.0,
}
}
fn set_bandpass(&mut self, frequency: f32, q: f32, sample_rate: f32) {
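// Coefficients follow the RBJ Audio EQ Cookbook bandpass (constant 0 dB peak gain variant)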
let omega = 2.0 * PI * frequency / sample_rate;
let sin_omega = omega.sin();
let cos_omega = omega.cos();
let alpha = sin_omega / (2.0 * q);
let a0 = 1.0 + alpha;
self.b0 = alpha / a0;
self.b1 = 0.0;
self.b2 = -alpha / a0;
self.a1 = -2.0 * cos_omega / a0;
self.a2 = (1.0 - alpha) / a0;
}
fn process_modulator(&mut self, input: f32, is_left: bool) -> f32 {
let (z1, z2) = if is_left {
(&mut self.mod_z1_left, &mut self.mod_z2_left)
} else {
(&mut self.mod_z1_right, &mut self.mod_z2_right)
};
// Transposed direct form II biquad: z1/z2 carry filter state rather than raw
// sample history, keeping the feedforward and feedback terms separate
let output = self.b0 * input + *z1;
*z1 = self.b1 * input - self.a1 * output + *z2;
*z2 = self.b2 * input - self.a2 * output;
output
}
fn process_carrier(&mut self, input: f32, is_left: bool) -> f32 {
let (z1, z2) = if is_left {
(&mut self.car_z1_left, &mut self.car_z2_left)
} else {
(&mut self.car_z1_right, &mut self.car_z2_right)
};
let output = self.b0 * input + *z1;
*z1 = self.b1 * input - self.a1 * output + *z2;
*z2 = self.b2 * input - self.a2 * output;
output
}
fn reset(&mut self) {
self.mod_z1_left = 0.0;
self.mod_z2_left = 0.0;
self.mod_z1_right = 0.0;
self.mod_z2_right = 0.0;
self.car_z1_left = 0.0;
self.car_z2_left = 0.0;
self.car_z1_right = 0.0;
self.car_z2_right = 0.0;
}
}
/// Vocoder band with filter and envelope follower
struct VocoderBand {
filter: BandpassFilter,
envelope_left: f32,
envelope_right: f32,
}
impl VocoderBand {
fn new() -> Self {
Self {
filter: BandpassFilter::new(),
envelope_left: 0.0,
envelope_right: 0.0,
}
}
fn reset(&mut self) {
self.filter.reset();
self.envelope_left = 0.0;
self.envelope_right = 0.0;
}
}
/// Vocoder effect - imposes spectral envelope of modulator onto carrier
pub struct VocoderNode {
name: String,
num_bands: usize, // 8 to 32 bands
attack_time: f32, // 0.001 to 0.1 seconds
release_time: f32, // 0.001 to 1.0 seconds
mix: f32, // 0.0 to 1.0
bands: Vec<VocoderBand>,
sample_rate: u32,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl VocoderNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("Modulator", SignalType::Audio, 0),
NodePort::new("Carrier", SignalType::Audio, 1),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_BANDS, "Bands", 8.0, 32.0, 16.0, ParameterUnit::Generic),
Parameter::new(PARAM_ATTACK, "Attack", 0.001, 0.1, 0.01, ParameterUnit::Time),
Parameter::new(PARAM_RELEASE, "Release", 0.001, 1.0, 0.05, ParameterUnit::Time),
Parameter::new(PARAM_MIX, "Mix", 0.0, 1.0, 1.0, ParameterUnit::Generic),
];
let mut bands = Vec::with_capacity(MAX_BANDS);
for _ in 0..MAX_BANDS {
bands.push(VocoderBand::new());
}
let mut node = Self {
name,
num_bands: 16,
attack_time: 0.01,
release_time: 0.05,
mix: 1.0,
bands,
sample_rate: 48000,
inputs,
outputs,
parameters,
};
// Initialize the filter coefficients right away; otherwise they stay zeroed
// (and the node is silent) until the engine reports a different sample rate
node.setup_bands();
node
}
fn setup_bands(&mut self) {
// Distribute bands logarithmically from 200 Hz to 5000 Hz
let min_freq: f32 = 200.0;
let max_freq: f32 = 5000.0;
let q: f32 = 4.0; // Fairly narrow bands
for i in 0..self.num_bands {
let t = i as f32 / (self.num_bands - 1) as f32;
let freq = min_freq * (max_freq / min_freq).powf(t);
self.bands[i].filter.set_bandpass(freq, q, self.sample_rate as f32);
}
}
}
impl AudioNode for VocoderNode {
fn category(&self) -> NodeCategory {
NodeCategory::Effect
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_BANDS => {
let bands = (value.round() as usize).clamp(8, 32);
if bands != self.num_bands {
self.num_bands = bands;
self.setup_bands();
}
}
PARAM_ATTACK => self.attack_time = value.clamp(0.001, 0.1),
PARAM_RELEASE => self.release_time = value.clamp(0.001, 1.0),
PARAM_MIX => self.mix = value.clamp(0.0, 1.0),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_BANDS => self.num_bands as f32,
PARAM_ATTACK => self.attack_time,
PARAM_RELEASE => self.release_time,
PARAM_MIX => self.mix,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if inputs.len() < 2 || outputs.is_empty() {
return;
}
// Update sample rate if changed
if self.sample_rate != sample_rate {
self.sample_rate = sample_rate;
self.setup_bands();
}
let modulator = inputs[0];
let carrier = inputs[1];
let output = &mut outputs[0];
// Audio signals are stereo (interleaved L/R)
let mod_frames = modulator.len() / 2;
let car_frames = carrier.len() / 2;
let out_frames = output.len() / 2;
let frames_to_process = mod_frames.min(car_frames).min(out_frames);
// Calculate envelope follower coefficients
let sample_duration = 1.0 / self.sample_rate as f32;
let attack_coeff = (sample_duration / self.attack_time).min(1.0);
let release_coeff = (sample_duration / self.release_time).min(1.0);
for frame in 0..frames_to_process {
let mod_left = modulator[frame * 2];
let mod_right = modulator[frame * 2 + 1];
let car_left = carrier[frame * 2];
let car_right = carrier[frame * 2 + 1];
let mut out_left = 0.0;
let mut out_right = 0.0;
// Process each band
for i in 0..self.num_bands {
let band = &mut self.bands[i];
// Filter modulator and carrier through bandpass
let mod_band_left = band.filter.process_modulator(mod_left, true);
let mod_band_right = band.filter.process_modulator(mod_right, false);
let car_band_left = band.filter.process_carrier(car_left, true);
let car_band_right = band.filter.process_carrier(car_right, false);
// Extract envelope from modulator band (rectify + smooth)
let mod_level_left = mod_band_left.abs();
let mod_level_right = mod_band_right.abs();
// Envelope follower
let coeff_left = if mod_level_left > band.envelope_left {
attack_coeff
} else {
release_coeff
};
let coeff_right = if mod_level_right > band.envelope_right {
attack_coeff
} else {
release_coeff
};
band.envelope_left += (mod_level_left - band.envelope_left) * coeff_left;
band.envelope_right += (mod_level_right - band.envelope_right) * coeff_right;
// Apply envelope to carrier band
out_left += car_band_left * band.envelope_left;
out_right += car_band_right * band.envelope_right;
}
// Normalize output (roughly compensate for band summing)
let norm_factor = 1.0 / (self.num_bands as f32).sqrt();
out_left *= norm_factor;
out_right *= norm_factor;
// Mix with carrier (dry signal)
output[frame * 2] = car_left * (1.0 - self.mix) + out_left * self.mix;
output[frame * 2 + 1] = car_right * (1.0 - self.mix) + out_right * self.mix;
}
}
fn reset(&mut self) {
for band in &mut self.bands {
band.reset();
}
}
fn node_type(&self) -> &str {
"Vocoder"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
let mut bands = Vec::with_capacity(MAX_BANDS);
for _ in 0..MAX_BANDS {
bands.push(VocoderBand::new());
}
let mut node = Self {
name: self.name.clone(),
num_bands: self.num_bands,
attack_time: self.attack_time,
release_time: self.release_time,
mix: self.mix,
bands,
sample_rate: self.sample_rate,
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
};
node.setup_bands();
Box::new(node)
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
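
A standalone check (not part of this diff) of the logarithmic band spacing computed in setup_bands(): with 16 bands from 200 Hz to 5 kHz, each center frequency sits a fixed ratio above the last.

// Reproduce the band-center computation from setup_bands()
fn main() {
    let min_freq = 200.0f32;
    let max_freq = 5_000.0f32;
    let num_bands = 16usize;
    let centers: Vec<f32> = (0..num_bands)
        .map(|i| {
            let t = i as f32 / (num_bands - 1) as f32;
            min_freq * (max_freq / min_freq).powf(t)
        })
        .collect();
    // First band at 200 Hz, last at 5000 Hz, each ~1.24x the previous
    println!("{centers:?}");
}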

View File

@@ -0,0 +1,353 @@
use crate::audio::midi::MidiEvent;
use crate::audio::node_graph::{AudioNode, AudioGraph, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
const PARAM_VOICE_COUNT: u32 = 0;
const MAX_VOICES: usize = 16; // Maximum allowed voices
const DEFAULT_VOICES: usize = 8;
/// Voice state for voice allocation
#[derive(Clone)]
struct VoiceState {
active: bool,
note: u8,
age: u32, // For voice stealing
pending_events: Vec<MidiEvent>, // MIDI events to send to this voice
}
impl VoiceState {
fn new() -> Self {
Self {
active: false,
note: 0,
age: 0,
pending_events: Vec::new(),
}
}
}
/// VoiceAllocatorNode - A group node that creates N polyphonic instances of its internal graph
///
/// This node acts as a container for a "voice template" graph. At runtime, it creates
/// N instances of that graph (one per voice) and routes MIDI note events to them.
/// All voice outputs are mixed together into a single output.
pub struct VoiceAllocatorNode {
name: String,
/// The template graph (edited by user via UI)
template_graph: AudioGraph,
/// Runtime voice instances (clones of template)
voice_instances: Vec<AudioGraph>,
/// Voice allocation state
voices: [VoiceState; MAX_VOICES],
/// Number of active voices (configurable parameter)
voice_count: usize,
/// Mix buffer for combining voice outputs
mix_buffer: Vec<f32>,
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl VoiceAllocatorNode {
pub fn new(name: impl Into<String>, sample_rate: u32, buffer_size: usize) -> Self {
let name = name.into();
// MIDI input for receiving note events
let inputs = vec![
NodePort::new("MIDI In", SignalType::Midi, 0),
];
// Single mixed audio output
let outputs = vec![
NodePort::new("Mixed Out", SignalType::Audio, 0),
];
// Voice count parameter
let parameters = vec![
Parameter::new(PARAM_VOICE_COUNT, "Voices", 1.0, MAX_VOICES as f32, DEFAULT_VOICES as f32, ParameterUnit::Generic),
];
// Create empty template graph
let template_graph = AudioGraph::new(sample_rate, buffer_size);
// Create voice instances (initially empty clones of template)
let voice_instances: Vec<AudioGraph> = (0..MAX_VOICES)
.map(|_| AudioGraph::new(sample_rate, buffer_size))
.collect();
Self {
name,
template_graph,
voice_instances,
voices: std::array::from_fn(|_| VoiceState::new()),
voice_count: DEFAULT_VOICES,
mix_buffer: vec![0.0; buffer_size * 2], // Stereo
inputs,
outputs,
parameters,
}
}
/// Get mutable reference to template graph (for UI editing)
pub fn template_graph_mut(&mut self) -> &mut AudioGraph {
&mut self.template_graph
}
/// Get reference to template graph (for serialization)
pub fn template_graph(&self) -> &AudioGraph {
&self.template_graph
}
/// Rebuild voice instances from template (called after template is edited)
pub fn rebuild_voices(&mut self) {
// Clone template to all voice instances
for voice in &mut self.voice_instances {
*voice = self.template_graph.clone_graph();
// Find TemplateInput and TemplateOutput nodes
let mut template_input_idx = None;
let mut template_output_idx = None;
for node_idx in voice.node_indices() {
if let Some(node) = voice.get_node(node_idx) {
match node.node_type() {
"TemplateInput" => template_input_idx = Some(node_idx),
"TemplateOutput" => template_output_idx = Some(node_idx),
_ => {}
}
}
}
// Mark ONLY TemplateInput as a MIDI target
// MIDI will flow through graph connections to other nodes (like MidiToCV)
if let Some(input_idx) = template_input_idx {
voice.set_midi_target(input_idx, true);
}
// Set TemplateOutput as output node
voice.set_output_node(template_output_idx);
}
}
/// Find a free voice, or steal the oldest one
fn find_voice_for_note_on(&mut self) -> usize {
// Only search within active voice_count
// First, look for an inactive voice
for (i, voice) in self.voices[..self.voice_count].iter().enumerate() {
if !voice.active {
return i;
}
}
// No free voices, steal the oldest one within voice_count
self.voices[..self.voice_count]
.iter()
.enumerate()
.max_by_key(|(_, v)| v.age)
.map(|(i, _)| i)
.unwrap_or(0)
}
/// Find all voices playing a specific note
fn find_voices_for_note_off(&self, note: u8) -> Vec<usize> {
self.voices[..self.voice_count]
.iter()
.enumerate()
.filter_map(|(i, v)| {
if v.active && v.note == note {
Some(i)
} else {
None
}
})
.collect()
}
}
impl AudioNode for VoiceAllocatorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Utility
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_VOICE_COUNT => {
let new_count = (value.round() as usize).clamp(1, MAX_VOICES);
if new_count != self.voice_count {
self.voice_count = new_count;
// Stop voices beyond the new count
for voice in &mut self.voices[new_count..] {
voice.active = false;
}
}
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_VOICE_COUNT => self.voice_count as f32,
_ => 0.0,
}
}
fn handle_midi(&mut self, event: &MidiEvent) {
let status = event.status & 0xF0;
match status {
0x90 => {
// Note on
if event.data2 > 0 {
let voice_idx = self.find_voice_for_note_on();
self.voices[voice_idx].active = true;
self.voices[voice_idx].note = event.data1;
self.voices[voice_idx].age = 0;
// Store MIDI event for this voice to process
self.voices[voice_idx].pending_events.push(*event);
} else {
// Velocity = 0 means note off - send to ALL voices playing this note
let voice_indices = self.find_voices_for_note_off(event.data1);
for voice_idx in voice_indices {
self.voices[voice_idx].active = false;
self.voices[voice_idx].pending_events.push(*event);
}
}
}
0x80 => {
// Note off - send to ALL voices playing this note
let voice_indices = self.find_voices_for_note_off(event.data1);
for voice_idx in voice_indices {
self.voices[voice_idx].active = false;
self.voices[voice_idx].pending_events.push(*event);
}
}
_ => {
// Other MIDI events (CC, pitch bend, etc.) - send to all active voices
for voice_idx in 0..self.voice_count {
if self.voices[voice_idx].active {
self.voices[voice_idx].pending_events.push(*event);
}
}
}
}
}
fn process(
&mut self,
_inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
_sample_rate: u32,
) {
// Process MIDI events from input (allocate notes to voices)
if !midi_inputs.is_empty() {
for event in midi_inputs[0] {
self.handle_midi(event);
}
}
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let output_len = output.len();
// Process each active voice and mix (only up to voice_count)
for voice_idx in 0..self.voice_count {
let voice_state = &mut self.voices[voice_idx];
if voice_state.active {
voice_state.age = voice_state.age.saturating_add(1);
// Get pending MIDI events for this voice
let midi_events = std::mem::take(&mut voice_state.pending_events);
// IMPORTANT: Process only the slice of mix_buffer that matches output size
// This prevents phase discontinuities in oscillators
let mix_slice = &mut self.mix_buffer[..output_len];
mix_slice.fill(0.0);
// Process this voice's graph with its MIDI events
// Note: playback_time is 0.0 since the voice allocator doesn't track time
self.voice_instances[voice_idx].process(mix_slice, &midi_events, 0.0);
// Mix into output (accumulate)
for (i, sample) in mix_slice.iter().enumerate() {
output[i] += sample;
}
}
}
// Apply normalization to reduce clipping (scale by 1/sqrt of the active voice count)
let active_count = self.voices[..self.voice_count].iter().filter(|v| v.active).count();
if active_count > 1 {
let scale = 1.0 / (active_count as f32).sqrt(); // Use sqrt for better loudness perception
for sample in output.iter_mut() {
*sample *= scale;
}
}
}
fn reset(&mut self) {
for voice in &mut self.voices {
voice.active = false;
voice.pending_events.clear();
}
for graph in &mut self.voice_instances {
graph.reset();
}
self.template_graph.reset();
}
fn node_type(&self) -> &str {
"VoiceAllocator"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
// Clone creates a new VoiceAllocator with the same template graph
// Voice instances will be rebuilt when rebuild_voices() is called
Box::new(Self {
name: self.name.clone(),
template_graph: self.template_graph.clone_graph(),
voice_instances: self.voice_instances.iter().map(|g| g.clone_graph()).collect(),
voices: std::array::from_fn(|_| VoiceState::new()), // Reset voices
voice_count: self.voice_count,
mix_buffer: vec![0.0; self.mix_buffer.len()],
inputs: self.inputs.clone(),
outputs: self.outputs.clone(),
parameters: self.parameters.clone(),
})
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
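
A reduced standalone sketch (hypothetical Voice struct, not part of this diff) of the allocation policy in find_voice_for_note_on(): take the first free voice, otherwise steal the one with the largest age.

// Voice allocation: free voice first, else steal the oldest
#[derive(Clone, Copy)]
struct Voice {
    active: bool,
    age: u32,
}

fn allocate(voices: &[Voice]) -> usize {
    voices
        .iter()
        .position(|v| !v.active)
        .unwrap_or_else(|| {
            voices
                .iter()
                .enumerate()
                .max_by_key(|(_, v)| v.age)
                .map(|(i, _)| i)
                .unwrap_or(0)
        })
}

fn main() {
    let voices = [
        Voice { active: true, age: 42 },
        Voice { active: true, age: 7 },
    ];
    // Both voices are busy, so index 0 (the oldest) gets stolen
    println!("{}", allocate(&voices));
}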

View File

@@ -0,0 +1,294 @@
use crate::audio::node_graph::{AudioNode, NodeCategory, NodePort, Parameter, ParameterUnit, SignalType};
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
const WAVETABLE_SIZE: usize = 256;
// Parameters
const PARAM_WAVETABLE: u32 = 0;
const PARAM_FINE_TUNE: u32 = 1;
const PARAM_POSITION: u32 = 2;
/// Types of preset wavetables
#[derive(Debug, Clone, Copy, PartialEq)]
enum WavetableType {
Sine = 0,
Saw = 1,
Square = 2,
Triangle = 3,
PWM = 4, // Pulse Width Modulated
Harmonic = 5, // Rich harmonics
Inharmonic = 6, // Metallic/bell-like
Digital = 7, // Stepped/digital artifacts
}
impl WavetableType {
fn from_u32(value: u32) -> Self {
match value {
0 => WavetableType::Sine,
1 => WavetableType::Saw,
2 => WavetableType::Square,
3 => WavetableType::Triangle,
4 => WavetableType::PWM,
5 => WavetableType::Harmonic,
6 => WavetableType::Inharmonic,
7 => WavetableType::Digital,
_ => WavetableType::Sine,
}
}
}
/// Generate a wavetable of the specified type
fn generate_wavetable(wave_type: WavetableType) -> Vec<f32> {
let mut table = vec![0.0; WAVETABLE_SIZE];
match wave_type {
WavetableType::Sine => {
for i in 0..WAVETABLE_SIZE {
let phase = (i as f32 / WAVETABLE_SIZE as f32) * 2.0 * PI;
table[i] = phase.sin();
}
}
WavetableType::Saw => {
for i in 0..WAVETABLE_SIZE {
let t = i as f32 / WAVETABLE_SIZE as f32;
table[i] = 2.0 * t - 1.0;
}
}
WavetableType::Square => {
for i in 0..WAVETABLE_SIZE {
table[i] = if i < WAVETABLE_SIZE / 2 { 1.0 } else { -1.0 };
}
}
WavetableType::Triangle => {
for i in 0..WAVETABLE_SIZE {
let t = i as f32 / WAVETABLE_SIZE as f32;
table[i] = if t < 0.5 {
4.0 * t - 1.0
} else {
-4.0 * t + 3.0
};
}
}
WavetableType::PWM => {
// Pulse wave with a fixed 25% duty cycle
let duty = 0.25;
for i in 0..WAVETABLE_SIZE {
table[i] = if (i as f32 / WAVETABLE_SIZE as f32) < duty { 1.0 } else { -1.0 };
}
}
WavetableType::Harmonic => {
// Multiple harmonics for rich sound
for i in 0..WAVETABLE_SIZE {
let phase = (i as f32 / WAVETABLE_SIZE as f32) * 2.0 * PI;
table[i] = phase.sin() * 0.5
+ (phase * 2.0).sin() * 0.25
+ (phase * 3.0).sin() * 0.125
+ (phase * 4.0).sin() * 0.0625;
}
}
WavetableType::Inharmonic => {
// Non-integer harmonics for metallic/bell-like sounds
for i in 0..WAVETABLE_SIZE {
let phase = (i as f32 / WAVETABLE_SIZE as f32) * 2.0 * PI;
table[i] = phase.sin() * 0.4
+ (phase * 2.13).sin() * 0.3
+ (phase * 3.76).sin() * 0.2
+ (phase * 5.41).sin() * 0.1;
}
}
WavetableType::Digital => {
// Stepped waveform with digital artifacts
for i in 0..WAVETABLE_SIZE {
let steps = 8;
let step = (i * steps / WAVETABLE_SIZE) as f32 / steps as f32;
table[i] = step * 2.0 - 1.0;
}
}
}
table
}
/// Wavetable oscillator node
pub struct WavetableOscillatorNode {
name: String,
// Current wavetable
wavetable_type: WavetableType,
wavetable: Vec<f32>,
// Oscillator state
phase: f32,
fine_tune: f32, // -1.0 to 1.0 semitones
position: f32, // 0.0 to 1.0 (for future multi-cycle wavetables)
inputs: Vec<NodePort>,
outputs: Vec<NodePort>,
parameters: Vec<Parameter>,
}
impl WavetableOscillatorNode {
pub fn new(name: impl Into<String>) -> Self {
let name = name.into();
let inputs = vec![
NodePort::new("V/Oct", SignalType::CV, 0),
];
let outputs = vec![
NodePort::new("Audio Out", SignalType::Audio, 0),
];
let parameters = vec![
Parameter::new(PARAM_WAVETABLE, "Wavetable", 0.0, 7.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_FINE_TUNE, "Fine Tune", -1.0, 1.0, 0.0, ParameterUnit::Generic),
Parameter::new(PARAM_POSITION, "Position", 0.0, 1.0, 0.0, ParameterUnit::Generic),
];
let wavetable_type = WavetableType::Sine;
let wavetable = generate_wavetable(wavetable_type);
Self {
name,
wavetable_type,
wavetable,
phase: 0.0,
fine_tune: 0.0,
position: 0.0,
inputs,
outputs,
parameters,
}
}
/// Convert V/oct CV to frequency with fine tune
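/// e.g. voct = 0.0 gives 440 Hz (A4); voct = 1.0 gives 880 Hz (one octave up)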
fn voct_to_freq(&self, voct: f32) -> f32 {
let semitones = voct * 12.0 + self.fine_tune;
440.0 * 2.0_f32.powf(semitones / 12.0)
}
/// Read from wavetable with linear interpolation
fn read_wavetable(&self, phase: f32) -> f32 {
let index = phase * WAVETABLE_SIZE as f32;
let index_floor = index.floor() as usize % WAVETABLE_SIZE;
let index_ceil = (index_floor + 1) % WAVETABLE_SIZE;
let frac = index - index.floor();
// Linear interpolation
let sample1 = self.wavetable[index_floor];
let sample2 = self.wavetable[index_ceil];
sample1 + (sample2 - sample1) * frac
}
}
impl AudioNode for WavetableOscillatorNode {
fn category(&self) -> NodeCategory {
NodeCategory::Generator
}
fn inputs(&self) -> &[NodePort] {
&self.inputs
}
fn outputs(&self) -> &[NodePort] {
&self.outputs
}
fn parameters(&self) -> &[Parameter] {
&self.parameters
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
PARAM_WAVETABLE => {
let new_type = WavetableType::from_u32(value as u32);
if new_type != self.wavetable_type {
self.wavetable_type = new_type;
self.wavetable = generate_wavetable(new_type);
}
}
PARAM_FINE_TUNE => {
self.fine_tune = value.clamp(-1.0, 1.0);
}
PARAM_POSITION => {
self.position = value.clamp(0.0, 1.0);
}
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
PARAM_WAVETABLE => self.wavetable_type as u32 as f32,
PARAM_FINE_TUNE => self.fine_tune,
PARAM_POSITION => self.position,
_ => 0.0,
}
}
fn process(
&mut self,
inputs: &[&[f32]],
outputs: &mut [&mut [f32]],
_midi_inputs: &[&[MidiEvent]],
_midi_outputs: &mut [&mut Vec<MidiEvent>],
sample_rate: u32,
) {
if outputs.is_empty() {
return;
}
let output = &mut outputs[0];
let frames = output.len() / 2;
for frame in 0..frames {
// Read V/Oct input
let voct = if !inputs.is_empty() && !inputs[0].is_empty() {
inputs[0][frame.min((inputs[0].len() / 2).saturating_sub(1)) * 2]
} else {
0.0 // Default to A4 (440 Hz)
};
// Calculate frequency
let freq = self.voct_to_freq(voct);
// Read from wavetable
let sample = self.read_wavetable(self.phase);
// Advance phase
self.phase += freq / sample_rate as f32;
if self.phase >= 1.0 {
self.phase -= 1.0;
}
// Output stereo (same signal to both channels)
output[frame * 2] = sample * 0.5; // Scale down to prevent clipping
output[frame * 2 + 1] = sample * 0.5;
}
}
fn reset(&mut self) {
self.phase = 0.0;
}
fn node_type(&self) -> &str {
"WavetableOscillator"
}
fn name(&self) -> &str {
&self.name
}
fn clone_node(&self) -> Box<dyn AudioNode> {
// Preserve the selected wavetable and tuning; oscillator phase starts fresh
let mut node = Self::new(self.name.clone());
node.wavetable_type = self.wavetable_type;
node.wavetable = self.wavetable.clone();
node.fine_tune = self.fine_tune;
node.position = self.position;
Box::new(node)
}
fn as_any_mut(&mut self) -> &mut dyn std::any::Any {
self
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
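
A worked standalone example (not part of this diff) of the linear interpolation in read_wavetable(), using a tiny 4-entry table in place of the 256-entry one above:

// Wavetable read with linear interpolation, as in read_wavetable()
fn main() {
    let table = [0.0f32, 1.0, 0.0, -1.0];
    let phase = 0.375f32; // position within one cycle, 0.0..1.0
    let index = phase * table.len() as f32;        // 1.5
    let i0 = index.floor() as usize % table.len(); // 1
    let i1 = (i0 + 1) % table.len();               // 2 (wraps at the table end)
    let frac = index - index.floor();              // 0.5
    let sample = table[i0] + (table[i1] - table[i0]) * frac;
    println!("{sample}"); // 0.5, halfway between table[1] and table[2]
}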

View File

@@ -0,0 +1,201 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use super::nodes::LoopMode;
/// Sample data for preset serialization
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum SampleData {
#[serde(rename = "simple_sampler")]
SimpleSampler {
#[serde(skip_serializing_if = "Option::is_none")]
file_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
embedded_data: Option<EmbeddedSampleData>,
},
#[serde(rename = "multi_sampler")]
MultiSampler { layers: Vec<LayerData> },
}
/// Embedded sample data (base64-encoded for JSON compatibility)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmbeddedSampleData {
/// Base64-encoded audio samples (f32 little-endian)
pub data_base64: String,
/// Original sample rate
pub sample_rate: u32,
}
/// Layer data for MultiSampler
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LayerData {
#[serde(skip_serializing_if = "Option::is_none")]
pub file_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub embedded_data: Option<EmbeddedSampleData>,
pub key_min: u8,
pub key_max: u8,
pub root_key: u8,
pub velocity_min: u8,
pub velocity_max: u8,
#[serde(skip_serializing_if = "Option::is_none")]
pub loop_start: Option<usize>,
#[serde(skip_serializing_if = "Option::is_none")]
pub loop_end: Option<usize>,
#[serde(default = "default_loop_mode")]
pub loop_mode: LoopMode,
}
fn default_loop_mode() -> LoopMode {
LoopMode::OneShot
}
/// Serializable representation of a node graph preset
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GraphPreset {
/// Preset metadata
pub metadata: PresetMetadata,
/// Nodes in the graph
pub nodes: Vec<SerializedNode>,
/// Connections between nodes
pub connections: Vec<SerializedConnection>,
/// Which node indices are MIDI targets
pub midi_targets: Vec<u32>,
/// Which node index is the audio output (None if not set)
pub output_node: Option<u32>,
}
/// Metadata about the preset
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PresetMetadata {
/// Preset name
pub name: String,
/// Description of what the preset sounds like
#[serde(default)]
pub description: String,
/// Preset author
#[serde(default)]
pub author: String,
/// Preset version (for compatibility)
#[serde(default = "default_version")]
pub version: u32,
/// Tags for categorization (e.g., "bass", "lead", "pad")
#[serde(default)]
pub tags: Vec<String>,
}
fn default_version() -> u32 {
1
}
/// Serialized node representation
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SerializedNode {
/// Unique ID (node index in the graph)
pub id: u32,
/// Node type (e.g., "Oscillator", "Filter", "ADSR")
pub node_type: String,
/// Parameter values (param_id -> value)
pub parameters: HashMap<u32, f32>,
/// UI position (for visual editor)
#[serde(default)]
pub position: (f32, f32),
/// For VoiceAllocator nodes: the nested template graph
#[serde(skip_serializing_if = "Option::is_none")]
pub template_graph: Option<Box<GraphPreset>>,
/// For sampler nodes: loaded sample data
#[serde(skip_serializing_if = "Option::is_none")]
pub sample_data: Option<SampleData>,
}
/// Serialized connection between nodes
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SerializedConnection {
/// Source node ID
pub from_node: u32,
/// Source port index
pub from_port: usize,
/// Destination node ID
pub to_node: u32,
/// Destination port index
pub to_port: usize,
}
impl GraphPreset {
/// Create a new preset with the given name
pub fn new(name: impl Into<String>) -> Self {
Self {
metadata: PresetMetadata {
name: name.into(),
description: String::new(),
author: String::new(),
version: 1,
tags: Vec::new(),
},
nodes: Vec::new(),
connections: Vec::new(),
midi_targets: Vec::new(),
output_node: None,
}
}
/// Serialize to JSON string
pub fn to_json(&self) -> Result<String, serde_json::Error> {
serde_json::to_string_pretty(self)
}
/// Deserialize from JSON string
pub fn from_json(json: &str) -> Result<Self, serde_json::Error> {
serde_json::from_str(json)
}
/// Add a node to the preset
pub fn add_node(&mut self, node: SerializedNode) {
self.nodes.push(node);
}
/// Add a connection to the preset
pub fn add_connection(&mut self, connection: SerializedConnection) {
self.connections.push(connection);
}
}
impl SerializedNode {
/// Create a new serialized node
pub fn new(id: u32, node_type: impl Into<String>) -> Self {
Self {
id,
node_type: node_type.into(),
parameters: HashMap::new(),
position: (0.0, 0.0),
template_graph: None,
sample_data: None,
}
}
/// Set a parameter value
pub fn set_parameter(&mut self, param_id: u32, value: f32) {
self.parameters.insert(param_id, value);
}
/// Set UI position
pub fn set_position(&mut self, x: f32, y: f32) {
self.position = (x, y);
}
}
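
A minimal usage sketch (not part of this diff; the import path is an assumption) of building a one-node preset and round-tripping it through JSON with the helpers above:

// Assumes these types are in scope, e.g. via `use crate::audio::node_graph::presets::*;`
fn main() -> Result<(), serde_json::Error> {
    let mut preset = GraphPreset::new("Init Patch");
    preset.metadata.tags.push("lead".to_string());

    let mut osc = SerializedNode::new(0, "WavetableOscillator");
    osc.set_parameter(1, 0.25); // param id 1 = Fine Tune on the oscillator above
    osc.set_position(100.0, 50.0);
    preset.add_node(osc);
    preset.output_node = Some(0);

    let json = preset.to_json()?;
    let roundtrip = GraphPreset::from_json(&json)?;
    assert_eq!(roundtrip.metadata.name, "Init Patch");
    Ok(())
}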

View File

@@ -0,0 +1,96 @@
use serde::{Deserialize, Serialize};
/// Three distinct signal types for graph edges
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum SignalType {
/// Audio-rate signals (-1.0 to 1.0 typically) - Blue in UI
Audio,
/// MIDI events (discrete messages) - Green in UI
Midi,
/// Control Voltage (modulation signals, typically 0.0 to 1.0) - Orange in UI
CV,
}
/// Port definition for node inputs/outputs
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NodePort {
pub name: String,
pub signal_type: SignalType,
pub index: usize,
}
impl NodePort {
pub fn new(name: impl Into<String>, signal_type: SignalType, index: usize) -> Self {
Self {
name: name.into(),
signal_type,
index,
}
}
}
/// Node category for UI organization
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum NodeCategory {
Input,
Generator,
Effect,
Utility,
Output,
}
/// User-facing parameter definition
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Parameter {
pub id: u32,
pub name: String,
pub min: f32,
pub max: f32,
pub default: f32,
pub unit: ParameterUnit,
}
impl Parameter {
pub fn new(id: u32, name: impl Into<String>, min: f32, max: f32, default: f32, unit: ParameterUnit) -> Self {
Self {
id,
name: name.into(),
min,
max,
default,
unit,
}
}
}
/// Units for parameter values
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum ParameterUnit {
Generic,
Frequency, // Hz
Decibels, // dB
Time, // seconds
Percent, // 0-100
}
/// Errors that can occur during graph operations
#[derive(Debug, Clone)]
pub enum ConnectionError {
TypeMismatch { expected: SignalType, got: SignalType },
InvalidPort,
WouldCreateCycle,
}
impl std::fmt::Display for ConnectionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
ConnectionError::TypeMismatch { expected, got } => {
write!(f, "Signal type mismatch: expected {:?}, got {:?}", expected, got)
}
ConnectionError::InvalidPort => write!(f, "Invalid port index"),
ConnectionError::WouldCreateCycle => write!(f, "Connection would create a cycle"),
}
}
}
impl std::error::Error for ConnectionError {}
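
A small sketch showing how these types combine; the check_connection helper is hypothetical (the real validation lives in AudioGraph elsewhere in this diff):

// Hypothetical port-compatibility check built on the types above
fn check_connection(from: &NodePort, to: &NodePort) -> Result<(), ConnectionError> {
    if from.signal_type != to.signal_type {
        return Err(ConnectionError::TypeMismatch {
            expected: to.signal_type,
            got: from.signal_type,
        });
    }
    Ok(())
}

fn main() {
    let audio_out = NodePort::new("Audio Out", SignalType::Audio, 0);
    let midi_in = NodePort::new("MIDI In", SignalType::Midi, 0);
    // Prints: Signal type mismatch: expected Midi, got Audio
    if let Err(e) = check_connection(&audio_out, &midi_in) {
        println!("{e}");
    }
}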

View File

@@ -0,0 +1,650 @@
use std::path::{Path, PathBuf};
use std::f32::consts::PI;
use serde::{Deserialize, Serialize};
/// Windowed sinc interpolation for high-quality time stretching
/// This is stateless and can handle arbitrary fractional positions
#[inline]
fn sinc(x: f32) -> f32 {
if x.abs() < 1e-5 {
1.0
} else {
let px = PI * x;
px.sin() / px
}
}
/// Blackman window function
#[inline]
fn blackman_window(x: f32, width: f32) -> f32 {
if x.abs() > width {
0.0
} else {
let a0 = 0.42;
let a1 = 0.5;
let a2 = 0.08;
// Map x from [-width, width] to [0, 1] for proper Blackman window evaluation
let n = (x / width + 1.0) / 2.0;
a0 - a1 * (2.0 * PI * n).cos() + a2 * (4.0 * PI * n).cos()
}
}
/// High-quality windowed sinc interpolation
/// Uses a windowed sinc kernel (32 taps in render_from_file below) for smooth, artifact-free interpolation
/// frac: fractional position to interpolate at (0.0 to 1.0)
/// samples: array of samples centered around the target position
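/// e.g. with a 32-tap kernel, samples[16] lies at the integer position and frac = 0.5 interpolates halfway toward samples[17]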
#[inline]
fn windowed_sinc_interpolate(samples: &[f32], frac: f32) -> f32 {
let mut result = 0.0;
let kernel_size = samples.len();
let half_kernel = (kernel_size / 2) as f32;
for i in 0..kernel_size {
// Distance from interpolation point
// samples[half_kernel] is at position 0, we want to interpolate at position frac
let x = frac + half_kernel - (i as f32);
let sinc_val = sinc(x);
let window_val = blackman_window(x, half_kernel);
result += samples[i] * sinc_val * window_val;
}
result
}
/// Audio file stored in the pool
#[derive(Debug, Clone)]
pub struct AudioFile {
pub path: PathBuf,
pub data: Vec<f32>, // Interleaved samples
pub channels: u32,
pub sample_rate: u32,
pub frames: u64,
}
impl AudioFile {
/// Create a new AudioFile
pub fn new(path: PathBuf, data: Vec<f32>, channels: u32, sample_rate: u32) -> Self {
let frames = (data.len() / channels as usize) as u64;
Self {
path,
data,
channels,
sample_rate,
frames,
}
}
/// Get duration in seconds
pub fn duration_seconds(&self) -> f64 {
self.frames as f64 / self.sample_rate as f64
}
/// Generate a waveform overview with the specified number of peaks
/// This creates a downsampled representation suitable for timeline visualization
pub fn generate_waveform_overview(&self, target_peaks: usize) -> Vec<crate::io::WaveformPeak> {
if self.frames == 0 || target_peaks == 0 {
return Vec::new();
}
let total_frames = self.frames as usize;
let frames_per_peak = (total_frames / target_peaks).max(1);
let actual_peaks = (total_frames + frames_per_peak - 1) / frames_per_peak;
let mut peaks = Vec::with_capacity(actual_peaks);
for peak_idx in 0..actual_peaks {
let start_frame = peak_idx * frames_per_peak;
let end_frame = ((peak_idx + 1) * frames_per_peak).min(total_frames);
let mut min = 0.0f32;
let mut max = 0.0f32;
// Scan all samples in this window
for frame_idx in start_frame..end_frame {
// For multi-channel audio, combine all channels
for ch in 0..self.channels as usize {
let sample_idx = frame_idx * self.channels as usize + ch;
if sample_idx < self.data.len() {
let sample = self.data[sample_idx];
min = min.min(sample);
max = max.max(sample);
}
}
}
peaks.push(crate::io::WaveformPeak { min, max });
}
peaks
}
}
/// Pool of shared audio files
pub struct AudioPool {
files: Vec<AudioFile>,
}
impl AudioPool {
/// Create a new empty audio pool
pub fn new() -> Self {
Self {
files: Vec::new(),
}
}
/// Get the number of files in the pool
pub fn len(&self) -> usize {
self.files.len()
}
/// Check if the pool is empty
pub fn is_empty(&self) -> bool {
self.files.is_empty()
}
/// Get file info for waveform generation (duration, sample_rate, channels)
pub fn get_file_info(&self, pool_index: usize) -> Option<(f64, u32, u32)> {
self.files.get(pool_index).map(|file| {
(file.duration_seconds(), file.sample_rate, file.channels)
})
}
/// Generate waveform overview for a file in the pool
pub fn generate_waveform(&self, pool_index: usize, target_peaks: usize) -> Option<Vec<crate::io::WaveformPeak>> {
self.files.get(pool_index).map(|file| {
file.generate_waveform_overview(target_peaks)
})
}
/// Add an audio file to the pool and return its index
pub fn add_file(&mut self, file: AudioFile) -> usize {
let index = self.files.len();
self.files.push(file);
index
}
/// Get an audio file by index
pub fn get_file(&self, index: usize) -> Option<&AudioFile> {
self.files.get(index)
}
/// Get number of files in the pool
pub fn file_count(&self) -> usize {
self.files.len()
}
/// Render audio from a file in the pool with high-quality windowed sinc interpolation
/// start_time_seconds: position in the audio file to start reading from (in seconds)
/// Returns the number of samples actually rendered
pub fn render_from_file(
&self,
pool_index: usize,
output: &mut [f32],
start_time_seconds: f64,
gain: f32,
engine_sample_rate: u32,
engine_channels: u32,
) -> usize {
let Some(audio_file) = self.files.get(pool_index) else {
return 0;
};
let src_channels = audio_file.channels as usize;
let dst_channels = engine_channels as usize;
let output_frames = output.len() / dst_channels;
// Calculate starting position in source with fractional precision
let src_start_position = start_time_seconds * audio_file.sample_rate as f64;
// Sample rate conversion ratio
let rate_ratio = audio_file.sample_rate as f64 / engine_sample_rate as f64;
// Kernel size for windowed sinc (32 taps = high quality, good performance)
const KERNEL_SIZE: usize = 32;
const HALF_KERNEL: usize = KERNEL_SIZE / 2;
let mut rendered_frames = 0;
// Render frame by frame with windowed sinc interpolation
for output_frame in 0..output_frames {
// Calculate exact fractional position in source
let src_position = src_start_position + (output_frame as f64 * rate_ratio);
let src_frame = src_position.floor() as i32;
let frac = (src_position - src_frame as f64) as f32;
// Check if we've gone past the end of the audio file
if src_frame < 0 || src_frame as usize >= audio_file.frames as usize {
break;
}
// Interpolate each channel
for dst_ch in 0..dst_channels {
let sample = if src_channels == dst_channels {
// Direct channel mapping
let ch_offset = dst_ch;
// Extract channel samples for interpolation
let mut channel_samples = Vec::with_capacity(KERNEL_SIZE);
for i in -(HALF_KERNEL as i32)..(HALF_KERNEL as i32) {
let idx = src_frame + i;
if idx >= 0 && (idx as usize) < audio_file.frames as usize {
let sample_idx = (idx as usize) * src_channels + ch_offset;
channel_samples.push(audio_file.data[sample_idx]);
} else {
channel_samples.push(0.0);
}
}
windowed_sinc_interpolate(&channel_samples, frac)
} else if src_channels == 1 && dst_channels > 1 {
// Mono to stereo - duplicate
let mut channel_samples = Vec::with_capacity(KERNEL_SIZE);
for i in -(HALF_KERNEL as i32)..(HALF_KERNEL as i32) {
let idx = src_frame + i;
if idx >= 0 && (idx as usize) < audio_file.frames as usize {
channel_samples.push(audio_file.data[idx as usize]);
} else {
channel_samples.push(0.0);
}
}
windowed_sinc_interpolate(&channel_samples, frac)
} else if src_channels > 1 && dst_channels == 1 {
// Multi-channel to mono - average all source channels
let mut sum = 0.0;
for src_ch in 0..src_channels {
let mut channel_samples = Vec::with_capacity(KERNEL_SIZE);
for i in -(HALF_KERNEL as i32)..(HALF_KERNEL as i32) {
let idx = src_frame + i;
if idx >= 0 && (idx as usize) < audio_file.frames as usize {
let sample_idx = (idx as usize) * src_channels + src_ch;
channel_samples.push(audio_file.data[sample_idx]);
} else {
channel_samples.push(0.0);
}
}
sum += windowed_sinc_interpolate(&channel_samples, frac);
}
sum / src_channels as f32
} else {
// Mismatched channels - use modulo mapping
let src_ch = dst_ch % src_channels;
let mut channel_samples = Vec::with_capacity(KERNEL_SIZE);
for i in -(HALF_KERNEL as i32)..(HALF_KERNEL as i32) {
let idx = src_frame + i;
if idx >= 0 && (idx as usize) < audio_file.frames as usize {
let sample_idx = (idx as usize) * src_channels + src_ch;
channel_samples.push(audio_file.data[sample_idx]);
} else {
channel_samples.push(0.0);
}
}
windowed_sinc_interpolate(&channel_samples, frac)
};
// Mix into output with gain
let output_idx = output_frame * dst_channels + dst_ch;
output[output_idx] += sample * gain;
}
rendered_frames += 1;
}
rendered_frames * dst_channels
}
}
impl Default for AudioPool {
fn default() -> Self {
Self::new()
}
}
/// Embedded audio data stored as base64 in the project file
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmbeddedAudioData {
/// Base64-encoded audio data
pub data_base64: String,
/// Original file format (wav, mp3, etc.)
pub format: String,
}
/// Serializable audio pool entry for project save/load
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AudioPoolEntry {
/// Index in the audio pool
pub pool_index: usize,
/// Original filename
pub name: String,
/// Path relative to project file (None if embedded)
pub relative_path: Option<String>,
/// Duration in seconds
pub duration: f64,
/// Sample rate
pub sample_rate: u32,
/// Number of channels
pub channels: u32,
/// Embedded audio data (for files < 10MB)
pub embedded_data: Option<EmbeddedAudioData>,
}
impl AudioPool {
/// Serialize the audio pool for project saving
///
/// Files smaller than 10MB are embedded as base64.
/// Larger files are stored as relative paths to the project file.
pub fn serialize(&self, project_path: &Path) -> Result<Vec<AudioPoolEntry>, String> {
let project_dir = project_path.parent()
.ok_or_else(|| "Project path has no parent directory".to_string())?;
let mut entries = Vec::new();
for (index, file) in self.files.iter().enumerate() {
let file_path = &file.path;
let file_path_str = file_path.to_string_lossy();
// Check if this is a temp file (from recording) or previously embedded audio
// Always embed these
let is_temp_file = file_path.starts_with(std::env::temp_dir());
let is_embedded = file_path_str.starts_with("<embedded:");
// Try to get relative path (unless it's a temp/embedded file)
let relative_path = if is_temp_file || is_embedded {
None // Don't store path for temp/embedded files, they'll be embedded
} else if let Some(rel) = pathdiff::diff_paths(file_path, project_dir) {
Some(rel.to_string_lossy().to_string())
} else {
// Fall back to absolute path if relative path fails
Some(file_path.to_string_lossy().to_string())
};
// Check if we should embed this file
// Always embed temp files (recordings) and previously embedded audio,
// otherwise use size threshold
let embedded_data = if is_temp_file || is_embedded || Self::should_embed(file_path) {
// Embed from memory - we already have the audio data loaded
Some(Self::embed_from_memory(file))
} else {
None
};
let entry = AudioPoolEntry {
pool_index: index,
name: file_path
.file_name()
.map(|n| n.to_string_lossy().to_string())
.unwrap_or_else(|| format!("file_{}", index)),
relative_path,
duration: file.duration_seconds(),
sample_rate: file.sample_rate,
channels: file.channels,
embedded_data,
};
entries.push(entry);
}
Ok(entries)
}
/// Check if a file should be embedded (< 10MB)
fn should_embed(file_path: &Path) -> bool {
const TEN_MB: u64 = 10_000_000;
std::fs::metadata(file_path)
.map(|m| m.len() < TEN_MB)
.unwrap_or(false)
}
/// Embed audio from memory (already loaded in the pool)
fn embed_from_memory(audio_file: &AudioFile) -> EmbeddedAudioData {
use base64::{Engine as _, engine::general_purpose};
// Convert the f32 interleaved samples to WAV format bytes
let wav_data = Self::encode_wav(
&audio_file.data,
audio_file.channels,
audio_file.sample_rate
);
let data_base64 = general_purpose::STANDARD.encode(&wav_data);
EmbeddedAudioData {
data_base64,
format: "wav".to_string(),
}
}
/// Encode f32 interleaved samples as WAV file bytes
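///
/// Layout note: the fixed header is 44 bytes (RIFF chunk descriptor 12 + fmt chunk 24
/// + data chunk header 8), and the RIFF size field excludes the first 8 bytes,
/// which is why the file size below is computed as `36 + data_size`.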
fn encode_wav(samples: &[f32], channels: u32, sample_rate: u32) -> Vec<u8> {
let num_samples = samples.len();
let bytes_per_sample = 4; // 32-bit float
let data_size = num_samples * bytes_per_sample;
let file_size = 36 + data_size;
let mut wav_data = Vec::with_capacity(44 + data_size);
// RIFF header
wav_data.extend_from_slice(b"RIFF");
wav_data.extend_from_slice(&(file_size as u32).to_le_bytes());
wav_data.extend_from_slice(b"WAVE");
// fmt chunk
wav_data.extend_from_slice(b"fmt ");
wav_data.extend_from_slice(&16u32.to_le_bytes()); // chunk size
wav_data.extend_from_slice(&3u16.to_le_bytes()); // format code (3 = IEEE float)
wav_data.extend_from_slice(&(channels as u16).to_le_bytes());
wav_data.extend_from_slice(&sample_rate.to_le_bytes());
wav_data.extend_from_slice(&(sample_rate * channels * bytes_per_sample as u32).to_le_bytes()); // byte rate
wav_data.extend_from_slice(&((channels * bytes_per_sample as u32) as u16).to_le_bytes()); // block align
wav_data.extend_from_slice(&32u16.to_le_bytes()); // bits per sample
// data chunk
wav_data.extend_from_slice(b"data");
wav_data.extend_from_slice(&(data_size as u32).to_le_bytes());
// Write samples as little-endian f32
for &sample in samples {
wav_data.extend_from_slice(&sample.to_le_bytes());
}
wav_data
}
/// Load audio pool from serialized entries
///
/// Returns a list of pool indices that failed to load (missing files).
/// The caller should present these to the user for resolution.
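///
/// A hedged sketch of the load path (the surrounding project-file parsing is assumed):
///
/// ```ignore
/// let missing = pool.load_from_serialized(entries, Path::new("/projects/song.lbeam"))?;
/// for pool_index in missing {
///     // e.g. prompt the user to locate the file, then:
///     // pool.resolve_missing_file(pool_index, &located_path)?;
/// }
/// ```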
pub fn load_from_serialized(
&mut self,
entries: Vec<AudioPoolEntry>,
project_path: &Path,
) -> Result<Vec<usize>, String> {
let project_dir = project_path.parent()
.ok_or_else(|| "Project path has no parent directory".to_string())?;
let mut missing_indices = Vec::new();
// Clear existing pool
self.files.clear();
// Find the maximum pool index to determine required size
let max_index = entries.iter()
.map(|e| e.pool_index)
.max()
.unwrap_or(0);
// Ensure we have space for all entries
self.files.resize(max_index + 1, AudioFile::new(PathBuf::new(), Vec::new(), 2, 44100));
for entry in entries {
let success = if let Some(embedded) = entry.embedded_data {
// Load from embedded data
match self.load_from_embedded_into_pool(entry.pool_index, embedded, &entry.name) {
Ok(_) => {
eprintln!("[AudioPool] Successfully loaded embedded audio: {}", entry.name);
true
}
Err(e) => {
eprintln!("[AudioPool] Failed to load embedded audio {}: {}", entry.name, e);
false
}
}
} else if let Some(rel_path) = entry.relative_path {
// Load from file path
let full_path = project_dir.join(&rel_path);
if full_path.exists() {
self.load_file_into_pool(entry.pool_index, &full_path).is_ok()
} else {
eprintln!("[AudioPool] File not found: {:?}", full_path);
false
}
} else {
eprintln!("[AudioPool] Entry has neither embedded data nor path: {}", entry.name);
false
};
if !success {
missing_indices.push(entry.pool_index);
}
}
Ok(missing_indices)
}
/// Load audio from embedded base64 data
fn load_from_embedded_into_pool(
&mut self,
pool_index: usize,
embedded: EmbeddedAudioData,
name: &str,
) -> Result<(), String> {
use base64::{Engine as _, engine::general_purpose};
// Decode base64
let data = general_purpose::STANDARD
.decode(&embedded.data_base64)
.map_err(|e| format!("Failed to decode base64: {}", e))?;
// Write to temporary file for symphonia to decode
let temp_dir = std::env::temp_dir();
let temp_path = temp_dir.join(format!("lightningbeam_embedded_{}.{}", pool_index, embedded.format));
std::fs::write(&temp_path, &data)
.map_err(|e| format!("Failed to write temporary file: {}", e))?;
// Load the temporary file using existing infrastructure
let result = self.load_file_into_pool(pool_index, &temp_path);
// Clean up temporary file
let _ = std::fs::remove_file(&temp_path);
// Update the path to reflect it was embedded
if result.is_ok() && pool_index < self.files.len() {
self.files[pool_index].path = PathBuf::from(format!("<embedded: {}>", name));
}
result
}
/// Load an audio file into a specific pool index
fn load_file_into_pool(&mut self, pool_index: usize, file_path: &Path) -> Result<(), String> {
use symphonia::core::audio::SampleBuffer;
use symphonia::core::codecs::{DecoderOptions, CODEC_TYPE_NULL};
use symphonia::core::formats::FormatOptions;
use symphonia::core::io::MediaSourceStream;
use symphonia::core::meta::MetadataOptions;
use symphonia::core::probe::Hint;
let file = std::fs::File::open(file_path)
.map_err(|e| format!("Failed to open audio file: {}", e))?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
let mut hint = Hint::new();
if let Some(ext) = file_path.extension() {
hint.with_extension(&ext.to_string_lossy());
}
let format_opts = FormatOptions::default();
let metadata_opts = MetadataOptions::default();
let decoder_opts = DecoderOptions::default();
let probed = symphonia::default::get_probe()
.format(&hint, mss, &format_opts, &metadata_opts)
.map_err(|e| format!("Failed to probe audio file: {}", e))?;
let mut format = probed.format;
let track = format
.tracks()
.iter()
.find(|t| t.codec_params.codec != CODEC_TYPE_NULL)
.ok_or_else(|| "No audio track found".to_string())?;
let mut decoder = symphonia::default::get_codecs()
.make(&track.codec_params, &decoder_opts)
.map_err(|e| format!("Failed to create decoder: {}", e))?;
let track_id = track.id;
let sample_rate = track.codec_params.sample_rate.unwrap_or(44100);
let channels = track.codec_params.channels.map(|c| c.count()).unwrap_or(2) as u32;
let mut samples = Vec::new();
let mut sample_buf = None;
loop {
let packet = match format.next_packet() {
Ok(packet) => packet,
Err(_) => break,
};
if packet.track_id() != track_id {
continue;
}
match decoder.decode(&packet) {
Ok(decoded) => {
if sample_buf.is_none() {
let spec = *decoded.spec();
let duration = decoded.capacity() as u64;
sample_buf = Some(SampleBuffer::<f32>::new(duration, spec));
}
if let Some(ref mut buf) = sample_buf {
buf.copy_interleaved_ref(decoded);
samples.extend_from_slice(buf.samples());
}
}
Err(_) => continue,
}
}
let audio_file = AudioFile::new(
file_path.to_path_buf(),
samples,
channels,
sample_rate,
);
if pool_index >= self.files.len() {
return Err(format!("Pool index {} out of bounds", pool_index));
}
self.files[pool_index] = audio_file;
Ok(())
}
/// Resolve a missing audio file by loading from a new path
/// This is called from the UI when the user manually locates a missing file
pub fn resolve_missing_file(&mut self, pool_index: usize, new_path: &Path) -> Result<(), String> {
self.load_file_into_pool(pool_index, new_path)
}
}


@@ -0,0 +1,429 @@
use super::buffer_pool::BufferPool;
use super::clip::Clip;
use super::midi::{MidiClip, MidiEvent};
use super::pool::AudioPool;
use super::track::{AudioTrack, Metatrack, MidiTrack, RenderContext, TrackId, TrackNode};
use std::collections::HashMap;
/// Project manages the hierarchical track structure
///
/// Tracks are stored in a flat HashMap but can be organized into groups,
/// forming a tree structure. Groups render their children recursively.
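///
/// A minimal sketch of building a two-level hierarchy (names are illustrative):
///
/// ```ignore
/// let mut project = Project::new(48000);
/// let drums = project.add_group_track("Drums".into(), None);
/// let _kick = project.add_audio_track("Kick".into(), Some(drums));
/// let _keys = project.add_midi_track("Keys".into(), None);
/// assert_eq!(project.track_count(), 3);
/// ```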
pub struct Project {
tracks: HashMap<TrackId, TrackNode>,
next_track_id: TrackId,
root_tracks: Vec<TrackId>, // Top-level tracks (not in any group)
sample_rate: u32, // System sample rate
}
impl Project {
/// Create a new empty project
pub fn new(sample_rate: u32) -> Self {
Self {
tracks: HashMap::new(),
next_track_id: 0,
root_tracks: Vec::new(),
sample_rate,
}
}
/// Generate a new unique track ID
fn next_id(&mut self) -> TrackId {
let id = self.next_track_id;
self.next_track_id += 1;
id
}
/// Add an audio track to the project
///
/// # Arguments
/// * `name` - Track name
/// * `parent_id` - Optional parent group ID
///
/// # Returns
/// The new track's ID
pub fn add_audio_track(&mut self, name: String, parent_id: Option<TrackId>) -> TrackId {
let id = self.next_id();
let track = AudioTrack::new(id, name, self.sample_rate);
self.tracks.insert(id, TrackNode::Audio(track));
if let Some(parent) = parent_id {
// Add to parent group
if let Some(TrackNode::Group(group)) = self.tracks.get_mut(&parent) {
group.add_child(id);
}
} else {
// Add to root level
self.root_tracks.push(id);
}
id
}
/// Add a group track to the project
///
/// # Arguments
/// * `name` - Group name
/// * `parent_id` - Optional parent group ID
///
/// # Returns
/// The new group's ID
pub fn add_group_track(&mut self, name: String, parent_id: Option<TrackId>) -> TrackId {
let id = self.next_id();
let group = Metatrack::new(id, name);
self.tracks.insert(id, TrackNode::Group(group));
if let Some(parent) = parent_id {
// Add to parent group
if let Some(TrackNode::Group(parent_group)) = self.tracks.get_mut(&parent) {
parent_group.add_child(id);
}
} else {
// Add to root level
self.root_tracks.push(id);
}
id
}
/// Add a MIDI track to the project
///
/// # Arguments
/// * `name` - Track name
/// * `parent_id` - Optional parent group ID
///
/// # Returns
/// The new track's ID
pub fn add_midi_track(&mut self, name: String, parent_id: Option<TrackId>) -> TrackId {
let id = self.next_id();
let track = MidiTrack::new(id, name, self.sample_rate);
self.tracks.insert(id, TrackNode::Midi(track));
if let Some(parent) = parent_id {
// Add to parent group
if let Some(TrackNode::Group(group)) = self.tracks.get_mut(&parent) {
group.add_child(id);
}
} else {
// Add to root level
self.root_tracks.push(id);
}
id
}
/// Remove a track from the project
///
/// If the track is a group, all children are moved to the parent (or root)
pub fn remove_track(&mut self, track_id: TrackId) {
if let Some(node) = self.tracks.remove(&track_id) {
// If it's a group, handle its children
if let TrackNode::Group(group) = node {
// Find the parent of this group
let parent_id = self.find_parent(track_id);
// Move children to parent or root
for child_id in group.children {
if let Some(parent) = parent_id {
if let Some(TrackNode::Group(parent_group)) = self.tracks.get_mut(&parent) {
parent_group.add_child(child_id);
}
} else {
self.root_tracks.push(child_id);
}
}
}
// Remove from parent or root
if let Some(parent_id) = self.find_parent(track_id) {
if let Some(TrackNode::Group(parent)) = self.tracks.get_mut(&parent_id) {
parent.remove_child(track_id);
}
} else {
self.root_tracks.retain(|&id| id != track_id);
}
}
}
/// Find the parent group of a track
fn find_parent(&self, track_id: TrackId) -> Option<TrackId> {
for (id, node) in &self.tracks {
if let TrackNode::Group(group) = node {
if group.children.contains(&track_id) {
return Some(*id);
}
}
}
None
}
/// Move a track to a different group
pub fn move_to_group(&mut self, track_id: TrackId, new_parent_id: TrackId) {
// First remove from current parent
if let Some(old_parent_id) = self.find_parent(track_id) {
if let Some(TrackNode::Group(parent)) = self.tracks.get_mut(&old_parent_id) {
parent.remove_child(track_id);
}
} else {
// Remove from root
self.root_tracks.retain(|&id| id != track_id);
}
// Add to new parent
if let Some(TrackNode::Group(new_parent)) = self.tracks.get_mut(&new_parent_id) {
new_parent.add_child(track_id);
}
}
/// Move a track to the root level (remove from any group)
pub fn move_to_root(&mut self, track_id: TrackId) {
// Remove from current parent if any
if let Some(parent_id) = self.find_parent(track_id) {
if let Some(TrackNode::Group(parent)) = self.tracks.get_mut(&parent_id) {
parent.remove_child(track_id);
}
// Add to root if not already there
if !self.root_tracks.contains(&track_id) {
self.root_tracks.push(track_id);
}
}
}
/// Get a reference to a track node
pub fn get_track(&self, track_id: TrackId) -> Option<&TrackNode> {
self.tracks.get(&track_id)
}
/// Get a mutable reference to a track node
pub fn get_track_mut(&mut self, track_id: TrackId) -> Option<&mut TrackNode> {
self.tracks.get_mut(&track_id)
}
/// Get oscilloscope data from a node in a track's graph
pub fn get_oscilloscope_data(&self, track_id: TrackId, node_id: u32, sample_count: usize) -> Option<(Vec<f32>, Vec<f32>)> {
if let Some(TrackNode::Midi(track)) = self.tracks.get(&track_id) {
let graph = &track.instrument_graph;
let node_idx = petgraph::stable_graph::NodeIndex::new(node_id as usize);
// Get audio data
let audio = graph.get_oscilloscope_data(node_idx, sample_count)?;
// Get CV data (may be empty if no CV input or not an oscilloscope node)
let cv = graph.get_oscilloscope_cv_data(node_idx, sample_count).unwrap_or_default();
return Some((audio, cv));
}
None
}
/// Get all root-level track IDs
pub fn root_tracks(&self) -> &[TrackId] {
&self.root_tracks
}
/// Get the number of tracks in the project
pub fn track_count(&self) -> usize {
self.tracks.len()
}
/// Check if any track is soloed
pub fn any_solo(&self) -> bool {
self.tracks.values().any(|node| node.is_solo())
}
/// Add a clip to an audio track
pub fn add_clip(&mut self, track_id: TrackId, clip: Clip) -> Result<(), &'static str> {
if let Some(TrackNode::Audio(track)) = self.tracks.get_mut(&track_id) {
track.add_clip(clip);
Ok(())
} else {
Err("Track not found or is not an audio track")
}
}
/// Add a MIDI clip to a MIDI track
pub fn add_midi_clip(&mut self, track_id: TrackId, clip: MidiClip) -> Result<(), &'static str> {
if let Some(TrackNode::Midi(track)) = self.tracks.get_mut(&track_id) {
track.add_clip(clip);
Ok(())
} else {
Err("Track not found or is not a MIDI track")
}
}
/// Render all root tracks into the output buffer
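///
/// Intended to be called from the audio thread; a hedged sketch of a callback body,
/// assuming `pool`, `buffers`, and `playhead` live alongside the project:
///
/// ```ignore
/// project.render(&mut output, &pool, &mut buffers, playhead, 48000, 2);
/// playhead += output.len() as f64 / (48000.0 * 2.0); // advance by buffer duration
/// ```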
pub fn render(
&mut self,
output: &mut [f32],
pool: &AudioPool,
buffer_pool: &mut BufferPool,
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
) {
output.fill(0.0);
let any_solo = self.any_solo();
// Create initial render context
let ctx = RenderContext::new(
playhead_seconds,
sample_rate,
channels,
output.len(),
);
// Render each root track
for &track_id in &self.root_tracks.clone() {
self.render_track(
track_id,
output,
pool,
buffer_pool,
ctx,
any_solo,
false, // root tracks are not inside a soloed parent
);
}
}
/// Recursively render a track (audio or group) into the output buffer
fn render_track(
&mut self,
track_id: TrackId,
output: &mut [f32],
pool: &AudioPool,
buffer_pool: &mut BufferPool,
ctx: RenderContext,
any_solo: bool,
parent_is_soloed: bool,
) {
// Check if track should be rendered based on mute/solo
let should_render = match self.tracks.get(&track_id) {
Some(TrackNode::Audio(track)) => {
// If parent is soloed, only check mute state
// Otherwise, check normal solo logic
if parent_is_soloed {
!track.muted
} else {
track.is_active(any_solo)
}
}
Some(TrackNode::Midi(track)) => {
// Same logic for MIDI tracks
if parent_is_soloed {
!track.muted
} else {
track.is_active(any_solo)
}
}
Some(TrackNode::Group(group)) => {
// Same logic for groups
if parent_is_soloed {
!group.muted
} else {
group.is_active(any_solo)
}
}
None => return,
};
if !should_render {
return;
}
// Handle audio track vs MIDI track vs group track
match self.tracks.get_mut(&track_id) {
Some(TrackNode::Audio(track)) => {
// Render audio track directly into output
track.render(output, pool, ctx.playhead_seconds, ctx.sample_rate, ctx.channels);
}
Some(TrackNode::Midi(track)) => {
// Render MIDI track directly into output
track.render(output, ctx.playhead_seconds, ctx.sample_rate, ctx.channels);
}
Some(TrackNode::Group(group)) => {
// Get children IDs, check if this group is soloed, and transform context
let children: Vec<TrackId> = group.children.clone();
let this_group_is_soloed = group.solo;
let child_ctx = group.transform_context(ctx);
// Acquire a temporary buffer for the group mix
let mut group_buffer = buffer_pool.acquire();
group_buffer.resize(output.len(), 0.0);
group_buffer.fill(0.0);
// Recursively render all children into the group buffer
// If this group is soloed (or parent was soloed), children inherit that state
let children_parent_soloed = parent_is_soloed || this_group_is_soloed;
for &child_id in &children {
self.render_track(
child_id,
&mut group_buffer,
pool,
buffer_pool,
child_ctx,
any_solo,
children_parent_soloed,
);
}
// Apply group volume and mix into output
if let Some(TrackNode::Group(group)) = self.tracks.get_mut(&track_id) {
for (out_sample, group_sample) in output.iter_mut().zip(group_buffer.iter()) {
*out_sample += group_sample * group.volume;
}
}
// Release buffer back to pool
buffer_pool.release(group_buffer);
}
None => {}
}
}
/// Stop all notes on all MIDI tracks
pub fn stop_all_notes(&mut self) {
for track in self.tracks.values_mut() {
if let TrackNode::Midi(midi_track) = track {
midi_track.stop_all_notes();
}
}
}
/// Process live MIDI input from all MIDI tracks (called even when not playing)
pub fn process_live_midi(&mut self, output: &mut [f32], sample_rate: u32, channels: u32) {
// Process all MIDI tracks to handle queued live input events
for track in self.tracks.values_mut() {
if let TrackNode::Midi(midi_track) = track {
// Process only queued live events, not clips
midi_track.process_live_input(output, sample_rate, channels);
}
}
}
/// Send a live MIDI note on event to a track's instrument
/// Note: With node-based instruments, MIDI events are handled during the process() call
pub fn send_midi_note_on(&mut self, track_id: TrackId, note: u8, velocity: u8) {
// Queue the MIDI note-on event to the track's live MIDI queue
if let Some(TrackNode::Midi(track)) = self.tracks.get_mut(&track_id) {
let event = MidiEvent::note_on(0.0, 0, note, velocity);
track.queue_live_midi(event);
}
}
/// Send a live MIDI note off event to a track's instrument
pub fn send_midi_note_off(&mut self, track_id: TrackId, note: u8) {
// Queue the MIDI note-off event to the track's live MIDI queue
if let Some(TrackNode::Midi(track)) = self.tracks.get_mut(&track_id) {
let event = MidiEvent::note_off(0.0, 0, note, 0);
track.queue_live_midi(event);
}
}
}
impl Default for Project {
fn default() -> Self {
Self::new(48000) // 48 kHz default; callers construct with the actual device sample rate instead
}
}


@@ -0,0 +1,310 @@
/// Audio recording system for capturing microphone input
use crate::audio::{ClipId, MidiClipId, TrackId};
use crate::io::{WavWriter, WaveformPeak};
use std::collections::HashMap;
use std::path::PathBuf;
/// State of an active recording session
pub struct RecordingState {
/// Track being recorded to
pub track_id: TrackId,
/// Clip ID for the intermediate clip
pub clip_id: ClipId,
/// Path to temporary WAV file
pub temp_file_path: PathBuf,
/// WAV file writer
pub writer: WavWriter,
/// Sample rate of recording
pub sample_rate: u32,
/// Number of channels
pub channels: u32,
/// Timeline start position in seconds
pub start_time: f64,
/// Total frames written to disk
pub frames_written: usize,
/// Accumulation buffer for next flush
pub buffer: Vec<f32>,
/// Number of frames to accumulate before flushing
pub flush_interval_frames: usize,
/// Whether recording is currently paused
pub paused: bool,
/// Number of samples remaining to skip (to discard stale buffer data)
pub samples_to_skip: usize,
/// Waveform peaks generated incrementally during recording
pub waveform: Vec<WaveformPeak>,
/// Temporary buffer for collecting samples for next waveform peak
pub waveform_buffer: Vec<f32>,
/// Number of frames per waveform peak
pub frames_per_peak: usize,
/// All recorded audio data accumulated in memory (for fast finalization)
pub audio_data: Vec<f32>,
}
impl RecordingState {
/// Create a new recording state
pub fn new(
track_id: TrackId,
clip_id: ClipId,
temp_file_path: PathBuf,
writer: WavWriter,
sample_rate: u32,
channels: u32,
start_time: f64,
flush_interval_seconds: f64,
) -> Self {
let flush_interval_frames = (sample_rate as f64 * flush_interval_seconds) as usize;
// Calculate frames per waveform peak
// Target ~300 peaks per second, with a floor of 1000 frames per peak.
// The floor dominates at typical rates: 48_000 / 300 = 160 < 1000, so peaks
// end up 1000 frames long (~48 peaks/s at 48 kHz).
let target_peaks_per_second = 300;
let frames_per_peak = (sample_rate / target_peaks_per_second).max(1000) as usize;
Self {
track_id,
clip_id,
temp_file_path,
writer,
sample_rate,
channels,
start_time,
frames_written: 0,
buffer: Vec::new(),
flush_interval_frames,
paused: false,
samples_to_skip: 0, // Will be set by engine when it knows buffer size
waveform: Vec::new(),
waveform_buffer: Vec::new(),
frames_per_peak,
audio_data: Vec::new(),
}
}
/// Add samples to the accumulation buffer
/// Returns true if a flush occurred
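///
/// A hedged sketch of feeding a capture callback into the recorder
/// (`input` stands in for one interleaved buffer from the input stream):
///
/// ```ignore
/// if state.add_samples(input)? {
///     // a flush to the temporary WAV file just happened
/// }
/// ```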
pub fn add_samples(&mut self, samples: &[f32]) -> Result<bool, std::io::Error> {
if self.paused {
return Ok(false);
}
// Determine which samples to process
let samples_to_process = if self.samples_to_skip > 0 {
let to_skip = self.samples_to_skip.min(samples.len());
self.samples_to_skip -= to_skip;
if to_skip == samples.len() {
// Skip entire batch
return Ok(false);
}
// Skip partial batch and process the rest
&samples[to_skip..]
} else {
samples
};
// Add to disk buffer
self.buffer.extend_from_slice(samples_to_process);
// Add to audio data (accumulate in memory for fast finalization)
self.audio_data.extend_from_slice(samples_to_process);
// Add to waveform buffer and generate peaks incrementally
self.waveform_buffer.extend_from_slice(samples_to_process);
self.generate_waveform_peaks();
// Check if we should flush to disk
let frames_in_buffer = self.buffer.len() / self.channels as usize;
if frames_in_buffer >= self.flush_interval_frames {
self.flush()?;
return Ok(true);
}
Ok(false)
}
/// Generate waveform peaks from accumulated samples
/// This is called incrementally as samples arrive
fn generate_waveform_peaks(&mut self) {
let samples_per_peak = self.frames_per_peak * self.channels as usize;
while self.waveform_buffer.len() >= samples_per_peak {
let mut min = 0.0f32;
let mut max = 0.0f32;
// Scan all samples for this peak
for sample in &self.waveform_buffer[..samples_per_peak] {
min = min.min(*sample);
max = max.max(*sample);
}
self.waveform.push(WaveformPeak { min, max });
// Remove processed samples from waveform buffer
self.waveform_buffer.drain(..samples_per_peak);
}
}
/// Flush accumulated samples to disk
pub fn flush(&mut self) -> Result<(), std::io::Error> {
if self.buffer.is_empty() {
return Ok(());
}
// Write to WAV file
self.writer.write_samples(&self.buffer)?;
// Update frames written
let frames_flushed = self.buffer.len() / self.channels as usize;
self.frames_written += frames_flushed;
// Clear buffer
self.buffer.clear();
Ok(())
}
/// Get current recording duration in seconds
/// Includes both flushed frames and buffered frames
pub fn duration(&self) -> f64 {
let buffered_frames = self.buffer.len() / self.channels as usize;
let total_frames = self.frames_written + buffered_frames;
total_frames as f64 / self.sample_rate as f64
}
/// Finalize the recording and return the temp file path, waveform, and audio data
pub fn finalize(mut self) -> Result<(PathBuf, Vec<WaveformPeak>, Vec<f32>), std::io::Error> {
// Flush any remaining samples to disk
self.flush()?;
// Generate final waveform peak from any remaining samples
if !self.waveform_buffer.is_empty() {
let mut min = 0.0f32;
let mut max = 0.0f32;
for sample in &self.waveform_buffer {
min = min.min(*sample);
max = max.max(*sample);
}
self.waveform.push(WaveformPeak { min, max });
}
// Finalize the WAV file
self.writer.finalize()?;
Ok((self.temp_file_path, self.waveform, self.audio_data))
}
/// Pause recording
pub fn pause(&mut self) {
self.paused = true;
}
/// Resume recording
pub fn resume(&mut self) {
self.paused = false;
}
}
/// Active MIDI note waiting for its noteOff event
#[derive(Debug, Clone)]
struct ActiveMidiNote {
/// MIDI note number (0-127)
note: u8,
/// Velocity (0-127)
velocity: u8,
/// Absolute time when note started (seconds)
start_time: f64,
}
/// State of an active MIDI recording session
pub struct MidiRecordingState {
/// Track being recorded to
pub track_id: TrackId,
/// MIDI clip ID
pub clip_id: MidiClipId,
/// Timeline start position in seconds
pub start_time: f64,
/// Currently active notes (noteOn without matching noteOff)
/// Maps note number to ActiveMidiNote
active_notes: HashMap<u8, ActiveMidiNote>,
/// Completed notes ready to be added to clip
/// Format: (time_offset, note, velocity, duration)
pub completed_notes: Vec<(f64, u8, u8, f64)>,
}
impl MidiRecordingState {
/// Create a new MIDI recording state
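///
/// A minimal sketch of the recording lifecycle (IDs and times are illustrative):
///
/// ```ignore
/// let mut rec = MidiRecordingState::new(track_id, clip_id, 10.0);
/// rec.note_on(60, 100, 10.5);   // C4 pressed half a second into the recording
/// rec.note_off(60, 11.0);       // completed as (0.5, 60, 100, 0.5)
/// rec.close_active_notes(12.0); // end any notes still held at stop time
/// ```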
pub fn new(track_id: TrackId, clip_id: MidiClipId, start_time: f64) -> Self {
Self {
track_id,
clip_id,
start_time,
active_notes: HashMap::new(),
completed_notes: Vec::new(),
}
}
/// Handle a MIDI note on event
pub fn note_on(&mut self, note: u8, velocity: u8, absolute_time: f64) {
// Store this note as active
self.active_notes.insert(note, ActiveMidiNote {
note,
velocity,
start_time: absolute_time,
});
}
/// Handle a MIDI note off event
pub fn note_off(&mut self, note: u8, absolute_time: f64) {
// Find the matching noteOn
if let Some(active_note) = self.active_notes.remove(&note) {
// Calculate relative time offset and duration
let time_offset = active_note.start_time - self.start_time;
let duration = absolute_time - active_note.start_time;
eprintln!("[MIDI_RECORDING_STATE] Completing note {}: note_start={:.3}s, note_end={:.3}s, recording_start={:.3}s, time_offset={:.3}s, duration={:.3}s",
note, active_note.start_time, absolute_time, self.start_time, time_offset, duration);
// Add to completed notes
self.completed_notes.push((
time_offset,
active_note.note,
active_note.velocity,
duration,
));
}
// If no matching noteOn found, ignore the noteOff
}
/// Get all completed notes
pub fn get_notes(&self) -> &[(f64, u8, u8, f64)] {
&self.completed_notes
}
/// Get the number of completed notes
pub fn note_count(&self) -> usize {
self.completed_notes.len()
}
/// Close out all active notes at the given time
/// This should be called when stopping recording to end any held notes
pub fn close_active_notes(&mut self, end_time: f64) {
// Collect all active notes and close them
let active_notes: Vec<_> = self.active_notes.drain().collect();
for (_note_num, active_note) in active_notes {
// Calculate relative time offset and duration
let time_offset = active_note.start_time - self.start_time;
let duration = end_time - active_note.start_time;
// Add to completed notes
self.completed_notes.push((
time_offset,
active_note.note,
active_note.velocity,
duration,
));
}
}
}


@@ -0,0 +1,316 @@
use symphonia::core::audio::{AudioBufferRef, Signal};
use symphonia::core::codecs::{DecoderOptions, CODEC_TYPE_NULL};
use symphonia::core::errors::Error as SymphoniaError;
use symphonia::core::formats::FormatOptions;
use symphonia::core::io::MediaSourceStream;
use symphonia::core::meta::MetadataOptions;
use symphonia::core::probe::Hint;
use std::fs::File;
use std::path::Path;
/// Loaded audio sample data
#[derive(Debug, Clone)]
pub struct SampleData {
/// Audio samples (mono, f32 format)
pub samples: Vec<f32>,
/// Original sample rate
pub sample_rate: u32,
}
/// Load an audio file and decode it to mono f32 samples
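///
/// A minimal usage sketch (the filename is illustrative):
///
/// ```ignore
/// let sample = load_audio_file("kick.wav")?;
/// println!("{} frames at {} Hz", sample.samples.len(), sample.sample_rate);
/// ```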
pub fn load_audio_file(path: impl AsRef<Path>) -> Result<SampleData, String> {
let path = path.as_ref();
// Open the file
let file = File::open(path)
.map_err(|e| format!("Failed to open file: {}", e))?;
// Create a media source stream
let mss = MediaSourceStream::new(Box::new(file), Default::default());
// Create a hint to help the format registry guess the format
let mut hint = Hint::new();
if let Some(extension) = path.extension() {
if let Some(ext_str) = extension.to_str() {
hint.with_extension(ext_str);
}
}
// Probe the media source for a format
let format_opts = FormatOptions::default();
let metadata_opts = MetadataOptions::default();
let probed = symphonia::default::get_probe()
.format(&hint, mss, &format_opts, &metadata_opts)
.map_err(|e| format!("Failed to probe format: {}", e))?;
let mut format = probed.format;
// Find the first audio track
let track = format
.tracks()
.iter()
.find(|t| t.codec_params.codec != CODEC_TYPE_NULL)
.ok_or_else(|| "No audio tracks found".to_string())?;
let track_id = track.id;
let sample_rate = track.codec_params.sample_rate.unwrap_or(48000);
// Create a decoder for the track
let dec_opts = DecoderOptions::default();
let mut decoder = symphonia::default::get_codecs()
.make(&track.codec_params, &dec_opts)
.map_err(|e| format!("Failed to create decoder: {}", e))?;
// Decode all packets
let mut all_samples = Vec::new();
loop {
// Get the next packet
let packet = match format.next_packet() {
Ok(packet) => packet,
Err(SymphoniaError::IoError(e)) if e.kind() == std::io::ErrorKind::UnexpectedEof => {
// End of stream
break;
}
Err(e) => {
return Err(format!("Error reading packet: {}", e));
}
};
// Skip packets that don't belong to the selected track
if packet.track_id() != track_id {
continue;
}
// Decode the packet
let decoded = decoder
.decode(&packet)
.map_err(|e| format!("Failed to decode packet: {}", e))?;
// Convert to f32 samples and mix to mono
let samples = convert_to_mono_f32(&decoded);
all_samples.extend_from_slice(&samples);
}
Ok(SampleData {
samples: all_samples,
sample_rate,
})
}
/// Convert an audio buffer to mono f32 samples
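///
/// Normalization note: signed N-bit PCM maps to float as `s / 2^(N-1)` and
/// unsigned N-bit PCM as `(s - 2^(N-1)) / 2^(N-1)`, so every variant lands in
/// roughly [-1.0, 1.0); multi-channel frames are averaged into one sample.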
fn convert_to_mono_f32(buf: &AudioBufferRef) -> Vec<f32> {
match buf {
AudioBufferRef::F32(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
// Already mono
mono.extend_from_slice(buf.chan(0));
} else {
// Mix down to mono by averaging all channels
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += buf.chan(ch)[frame];
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::U8(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push((sample as f32 - 128.0) / 128.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += (buf.chan(ch)[frame] as f32 - 128.0) / 128.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::U16(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push((sample as f32 - 32768.0) / 32768.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += (buf.chan(ch)[frame] as f32 - 32768.0) / 32768.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::U24(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push((sample.inner() as f32 - 8388608.0) / 8388608.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += (buf.chan(ch)[frame].inner() as f32 - 8388608.0) / 8388608.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::U32(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push((sample as f32 - 2147483648.0) / 2147483648.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += (buf.chan(ch)[frame] as f32 - 2147483648.0) / 2147483648.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::S8(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push(sample as f32 / 128.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += buf.chan(ch)[frame] as f32 / 128.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::S16(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push(sample as f32 / 32768.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += buf.chan(ch)[frame] as f32 / 32768.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::S24(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push(sample.inner() as f32 / 8388608.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += buf.chan(ch)[frame].inner() as f32 / 8388608.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::S32(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push(sample as f32 / 2147483648.0);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += buf.chan(ch)[frame] as f32 / 2147483648.0;
}
mono.push(sum / channels as f32);
}
}
mono
}
AudioBufferRef::F64(buf) => {
let channels = buf.spec().channels.count();
let frames = buf.frames();
let mut mono = Vec::with_capacity(frames);
if channels == 1 {
for &sample in buf.chan(0) {
mono.push(sample as f32);
}
} else {
for frame in 0..frames {
let mut sum = 0.0;
for ch in 0..channels {
sum += buf.chan(ch)[frame] as f32;
}
mono.push(sum / channels as f32);
}
}
mono
}
}
}


@@ -0,0 +1,725 @@
use super::automation::{AutomationLane, AutomationLaneId, ParameterId};
use super::clip::Clip;
use super::midi::{MidiClip, MidiEvent};
use super::node_graph::AudioGraph;
use super::node_graph::nodes::{AudioInputNode, AudioOutputNode};
use super::pool::AudioPool;
use std::collections::HashMap;
/// Track ID type
pub type TrackId = u32;
/// Type alias for backwards compatibility
pub type Track = AudioTrack;
/// Rendering context that carries timing information through the track hierarchy
///
/// This allows metatracks to transform time for their children (time stretch, offset, etc.)
#[derive(Debug, Clone, Copy)]
pub struct RenderContext {
/// Current playhead position in seconds (in transformed time)
pub playhead_seconds: f64,
/// Audio sample rate
pub sample_rate: u32,
/// Number of channels
pub channels: u32,
/// Size of the buffer being rendered (in interleaved samples)
pub buffer_size: usize,
/// Accumulated time stretch factor (1.0 = normal, 0.5 = half speed, 2.0 = double speed)
pub time_stretch: f32,
}
impl RenderContext {
/// Create a new render context
pub fn new(
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
buffer_size: usize,
) -> Self {
Self {
playhead_seconds,
sample_rate,
channels,
buffer_size,
time_stretch: 1.0,
}
}
/// Get the duration of the buffer in seconds
pub fn buffer_duration(&self) -> f64 {
self.buffer_size as f64 / (self.sample_rate as f64 * self.channels as f64)
}
/// Get the end time of the buffer
pub fn buffer_end(&self) -> f64 {
self.playhead_seconds + self.buffer_duration()
}
}
/// Node in the track hierarchy - can be an audio track, MIDI track, or a metatrack
pub enum TrackNode {
Audio(AudioTrack),
Midi(MidiTrack),
Group(Metatrack),
}
impl TrackNode {
/// Get the track ID
pub fn id(&self) -> TrackId {
match self {
TrackNode::Audio(track) => track.id,
TrackNode::Midi(track) => track.id,
TrackNode::Group(group) => group.id,
}
}
/// Get the track name
pub fn name(&self) -> &str {
match self {
TrackNode::Audio(track) => &track.name,
TrackNode::Midi(track) => &track.name,
TrackNode::Group(group) => &group.name,
}
}
/// Get muted state
pub fn is_muted(&self) -> bool {
match self {
TrackNode::Audio(track) => track.muted,
TrackNode::Midi(track) => track.muted,
TrackNode::Group(group) => group.muted,
}
}
/// Get solo state
pub fn is_solo(&self) -> bool {
match self {
TrackNode::Audio(track) => track.solo,
TrackNode::Midi(track) => track.solo,
TrackNode::Group(group) => group.solo,
}
}
/// Set volume
pub fn set_volume(&mut self, volume: f32) {
match self {
TrackNode::Audio(track) => track.set_volume(volume),
TrackNode::Midi(track) => track.set_volume(volume),
TrackNode::Group(group) => group.set_volume(volume),
}
}
/// Set muted state
pub fn set_muted(&mut self, muted: bool) {
match self {
TrackNode::Audio(track) => track.set_muted(muted),
TrackNode::Midi(track) => track.set_muted(muted),
TrackNode::Group(group) => group.set_muted(muted),
}
}
/// Set solo state
pub fn set_solo(&mut self, solo: bool) {
match self {
TrackNode::Audio(track) => track.set_solo(solo),
TrackNode::Midi(track) => track.set_solo(solo),
TrackNode::Group(group) => group.set_solo(solo),
}
}
}
/// Metatrack that contains other tracks with time transformation capabilities
pub struct Metatrack {
pub id: TrackId,
pub name: String,
pub children: Vec<TrackId>,
pub volume: f32,
pub muted: bool,
pub solo: bool,
/// Time stretch factor (0.5 = half speed, 1.0 = normal, 2.0 = double speed)
pub time_stretch: f32,
/// Pitch shift in semitones (for future implementation)
pub pitch_shift: f32,
/// Time offset in seconds (shift content forward/backward in time)
pub offset: f64,
/// Automation lanes for this metatrack
pub automation_lanes: HashMap<AutomationLaneId, AutomationLane>,
next_automation_id: AutomationLaneId,
}
impl Metatrack {
/// Create a new metatrack
pub fn new(id: TrackId, name: String) -> Self {
Self {
id,
name,
children: Vec::new(),
volume: 1.0,
muted: false,
solo: false,
time_stretch: 1.0,
pitch_shift: 0.0,
offset: 0.0,
automation_lanes: HashMap::new(),
next_automation_id: 0,
}
}
/// Add an automation lane to this metatrack
pub fn add_automation_lane(&mut self, parameter_id: ParameterId) -> AutomationLaneId {
let lane_id = self.next_automation_id;
self.next_automation_id += 1;
let lane = AutomationLane::new(lane_id, parameter_id);
self.automation_lanes.insert(lane_id, lane);
lane_id
}
/// Get an automation lane by ID
pub fn get_automation_lane(&self, lane_id: AutomationLaneId) -> Option<&AutomationLane> {
self.automation_lanes.get(&lane_id)
}
/// Get a mutable automation lane by ID
pub fn get_automation_lane_mut(&mut self, lane_id: AutomationLaneId) -> Option<&mut AutomationLane> {
self.automation_lanes.get_mut(&lane_id)
}
/// Remove an automation lane
pub fn remove_automation_lane(&mut self, lane_id: AutomationLaneId) -> bool {
self.automation_lanes.remove(&lane_id).is_some()
}
/// Evaluate automation at a specific time and return effective parameters
pub fn evaluate_automation_at_time(&self, time: f64) -> (f32, f32, f64) {
let mut volume = self.volume;
let mut time_stretch = self.time_stretch;
let mut offset = self.offset;
// Check for automation
for lane in self.automation_lanes.values() {
if !lane.enabled {
continue;
}
match lane.parameter_id {
ParameterId::TrackVolume => {
if let Some(automated_value) = lane.evaluate(time) {
volume = automated_value;
}
}
ParameterId::TimeStretch => {
if let Some(automated_value) = lane.evaluate(time) {
time_stretch = automated_value;
}
}
ParameterId::TimeOffset => {
if let Some(automated_value) = lane.evaluate(time) {
offset = automated_value as f64;
}
}
_ => {}
}
}
(volume, time_stretch, offset)
}
/// Add a child track to this group
pub fn add_child(&mut self, track_id: TrackId) {
if !self.children.contains(&track_id) {
self.children.push(track_id);
}
}
/// Remove a child track from this group
pub fn remove_child(&mut self, track_id: TrackId) {
self.children.retain(|&id| id != track_id);
}
/// Set group volume
pub fn set_volume(&mut self, volume: f32) {
self.volume = volume.max(0.0);
}
/// Set mute state
pub fn set_muted(&mut self, muted: bool) {
self.muted = muted;
}
/// Set solo state
pub fn set_solo(&mut self, solo: bool) {
self.solo = solo;
}
/// Check if this group should be audible given the solo state
pub fn is_active(&self, any_solo: bool) -> bool {
!self.muted && (!any_solo || self.solo)
}
/// Transform a render context for this metatrack's children
///
/// Applies time stretching and offset transformations.
/// Time stretch affects how fast content plays: 0.5 = half speed, 2.0 = double speed
/// Offset shifts content forward/backward in time
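///
/// Worked example: with `offset = 2.0` and `time_stretch = 0.5`, a parent playhead
/// of 4.0s maps to (4.0 - 2.0) * 0.5 = 1.0s of child time, so the children play
/// their first second of content during the parent's fourth second, at half speed.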
pub fn transform_context(&self, ctx: RenderContext) -> RenderContext {
let mut transformed = ctx;
// Apply transformations in order:
// 1. First, subtract offset (positive offset = content appears later)
// At parent time 0.0s with offset=2.0s, child sees -2.0s (before content starts)
// At parent time 2.0s with offset=2.0s, child sees 0.0s (content starts)
let adjusted_playhead = transformed.playhead_seconds - self.offset;
// 2. Then apply time stretch (< 1.0 = slower/half speed, > 1.0 = faster/double speed)
// With stretch=0.5, when parent time is 2.0s, child reads from 1.0s (plays slower, pitches down)
// With stretch=2.0, when parent time is 2.0s, child reads from 4.0s (plays faster, pitches up)
// Note: This creates pitch shift as well - true time stretching would require resampling
transformed.playhead_seconds = adjusted_playhead * self.time_stretch as f64;
// Accumulate time stretch for nested metatracks
transformed.time_stretch *= self.time_stretch;
transformed
}
}
/// MIDI track with MIDI clips and a node-based instrument
pub struct MidiTrack {
pub id: TrackId,
pub name: String,
pub clips: Vec<MidiClip>,
pub instrument_graph: AudioGraph,
pub volume: f32,
pub muted: bool,
pub solo: bool,
/// Automation lanes for this track
pub automation_lanes: HashMap<AutomationLaneId, AutomationLane>,
next_automation_id: AutomationLaneId,
/// Queue for live MIDI input (virtual keyboard, MIDI controllers)
live_midi_queue: Vec<MidiEvent>,
}
impl MidiTrack {
/// Create a new MIDI track with default settings
pub fn new(id: TrackId, name: String, sample_rate: u32) -> Self {
// Use a large buffer size that can accommodate any callback
let default_buffer_size = 8192;
Self {
id,
name,
clips: Vec::new(),
instrument_graph: AudioGraph::new(sample_rate, default_buffer_size),
volume: 1.0,
muted: false,
solo: false,
automation_lanes: HashMap::new(),
next_automation_id: 0,
live_midi_queue: Vec::new(),
}
}
/// Add an automation lane to this track
pub fn add_automation_lane(&mut self, parameter_id: ParameterId) -> AutomationLaneId {
let lane_id = self.next_automation_id;
self.next_automation_id += 1;
let lane = AutomationLane::new(lane_id, parameter_id);
self.automation_lanes.insert(lane_id, lane);
lane_id
}
/// Get an automation lane by ID
pub fn get_automation_lane(&self, lane_id: AutomationLaneId) -> Option<&AutomationLane> {
self.automation_lanes.get(&lane_id)
}
/// Get a mutable automation lane by ID
pub fn get_automation_lane_mut(&mut self, lane_id: AutomationLaneId) -> Option<&mut AutomationLane> {
self.automation_lanes.get_mut(&lane_id)
}
/// Remove an automation lane
pub fn remove_automation_lane(&mut self, lane_id: AutomationLaneId) -> bool {
self.automation_lanes.remove(&lane_id).is_some()
}
/// Add a MIDI clip to this track
pub fn add_clip(&mut self, clip: MidiClip) {
self.clips.push(clip);
}
/// Set track volume
pub fn set_volume(&mut self, volume: f32) {
self.volume = volume.max(0.0);
}
/// Set mute state
pub fn set_muted(&mut self, muted: bool) {
self.muted = muted;
}
/// Set solo state
pub fn set_solo(&mut self, solo: bool) {
self.solo = solo;
}
/// Check if this track should be audible given the solo state
pub fn is_active(&self, any_solo: bool) -> bool {
!self.muted && (!any_solo || self.solo)
}
/// Stop all currently playing notes on this track's instrument
/// Note: With node-based instruments, stopping is handled by ceasing MIDI input
pub fn stop_all_notes(&mut self) {
// Send note-off for all 128 possible MIDI notes to silence the instrument
let mut note_offs = Vec::new();
for note in 0..128 {
note_offs.push(MidiEvent::note_off(0.0, 0, note, 0));
}
// Create a silent buffer to process the note-offs
let buffer_size = 512 * 2; // stereo
let mut silent_buffer = vec![0.0f32; buffer_size];
self.instrument_graph.process(&mut silent_buffer, &note_offs, 0.0);
}
/// Queue a live MIDI event (from virtual keyboard or MIDI controller)
pub fn queue_live_midi(&mut self, event: MidiEvent) {
self.live_midi_queue.push(event);
}
/// Clear the live MIDI queue
pub fn clear_live_midi_queue(&mut self) {
self.live_midi_queue.clear();
}
/// Process only live MIDI input (queued events) without rendering clips
/// This is used when playback is stopped but we want to hear live input
pub fn process_live_input(
&mut self,
output: &mut [f32],
_sample_rate: u32,
_channels: u32,
) {
// Generate audio using instrument graph with live MIDI events
self.instrument_graph.process(output, &self.live_midi_queue, 0.0);
// Clear the queue after processing
self.live_midi_queue.clear();
// Apply track volume (no automation during live input)
for sample in output.iter_mut() {
*sample *= self.volume;
}
}
/// Render this MIDI track into the output buffer
pub fn render(
&mut self,
output: &mut [f32],
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
) {
let buffer_duration_seconds = output.len() as f64 / (sample_rate as f64 * channels as f64);
let buffer_end_seconds = playhead_seconds + buffer_duration_seconds;
// Collect MIDI events from all clips that overlap with current time range
let mut midi_events = Vec::new();
for clip in &self.clips {
let events = clip.get_events_in_range(
playhead_seconds,
buffer_end_seconds,
sample_rate,
);
// Events now have timestamps in seconds relative to clip start
midi_events.extend(events);
}
// Add live MIDI events (from virtual keyboard or MIDI controllers)
// This allows real-time input to be heard during playback/recording
midi_events.extend(self.live_midi_queue.drain(..));
// Generate audio using instrument graph
self.instrument_graph.process(output, &midi_events, playhead_seconds);
// Evaluate and apply automation
let effective_volume = self.evaluate_automation_at_time(playhead_seconds);
// Apply track volume
for sample in output.iter_mut() {
*sample *= effective_volume;
}
}
/// Evaluate automation at a specific time and return the effective volume
fn evaluate_automation_at_time(&self, time: f64) -> f32 {
let mut volume = self.volume;
// Check for volume automation
for lane in self.automation_lanes.values() {
if !lane.enabled {
continue;
}
match lane.parameter_id {
ParameterId::TrackVolume => {
if let Some(automated_value) = lane.evaluate(time) {
volume = automated_value;
}
}
_ => {}
}
}
volume
}
}
/// Audio track with clips
pub struct AudioTrack {
pub id: TrackId,
pub name: String,
pub clips: Vec<Clip>,
pub volume: f32,
pub muted: bool,
pub solo: bool,
/// Automation lanes for this track
pub automation_lanes: HashMap<AutomationLaneId, AutomationLane>,
next_automation_id: AutomationLaneId,
/// Effects processing graph for this audio track
pub effects_graph: AudioGraph,
}
impl AudioTrack {
/// Create a new audio track with default settings
pub fn new(id: TrackId, name: String, sample_rate: u32) -> Self {
// Use a large buffer size that can accommodate any callback
let default_buffer_size = 8192;
// Create the effects graph with default AudioInput -> AudioOutput chain
let mut effects_graph = AudioGraph::new(sample_rate, default_buffer_size);
// Add AudioInput node
let input_node = Box::new(AudioInputNode::new("Audio Input"));
let input_id = effects_graph.add_node(input_node);
// Set position for AudioInput (left side, similar to instrument preset spacing)
effects_graph.set_node_position(input_id, 100.0, 150.0);
// Add AudioOutput node
let output_node = Box::new(AudioOutputNode::new("Audio Output"));
let output_id = effects_graph.add_node(output_node);
// Set position for AudioOutput (right side, spaced apart)
effects_graph.set_node_position(output_id, 500.0, 150.0);
// Connect AudioInput -> AudioOutput
let _ = effects_graph.connect(input_id, 0, output_id, 0);
// Set the AudioOutput node as the graph's output
effects_graph.set_output_node(Some(output_id));
Self {
id,
name,
clips: Vec::new(),
volume: 1.0,
muted: false,
solo: false,
automation_lanes: HashMap::new(),
next_automation_id: 0,
effects_graph,
}
}
/// Add an automation lane to this track
pub fn add_automation_lane(&mut self, parameter_id: ParameterId) -> AutomationLaneId {
let lane_id = self.next_automation_id;
self.next_automation_id += 1;
let lane = AutomationLane::new(lane_id, parameter_id);
self.automation_lanes.insert(lane_id, lane);
lane_id
}
/// Get an automation lane by ID
pub fn get_automation_lane(&self, lane_id: AutomationLaneId) -> Option<&AutomationLane> {
self.automation_lanes.get(&lane_id)
}
/// Get a mutable automation lane by ID
pub fn get_automation_lane_mut(&mut self, lane_id: AutomationLaneId) -> Option<&mut AutomationLane> {
self.automation_lanes.get_mut(&lane_id)
}
/// Remove an automation lane
pub fn remove_automation_lane(&mut self, lane_id: AutomationLaneId) -> bool {
self.automation_lanes.remove(&lane_id).is_some()
}
/// Add a clip to this track
pub fn add_clip(&mut self, clip: Clip) {
self.clips.push(clip);
}
/// Set track volume (0.0 = silence, 1.0 = unity gain, >1.0 = amplification)
pub fn set_volume(&mut self, volume: f32) {
self.volume = volume.max(0.0);
}
/// Set mute state
pub fn set_muted(&mut self, muted: bool) {
self.muted = muted;
}
/// Set solo state
pub fn set_solo(&mut self, solo: bool) {
self.solo = solo;
}
/// Check if this track should be audible given the solo state of all tracks
pub fn is_active(&self, any_solo: bool) -> bool {
!self.muted && (!any_solo || self.solo)
}
/// Render this track into the output buffer at a given timeline position
/// Returns the number of samples actually rendered
pub fn render(
&mut self,
output: &mut [f32],
pool: &AudioPool,
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
) -> usize {
let buffer_duration_seconds = output.len() as f64 / (sample_rate as f64 * channels as f64);
let buffer_end_seconds = playhead_seconds + buffer_duration_seconds;
// Create a temporary buffer for clip rendering
let mut clip_buffer = vec![0.0f32; output.len()];
let mut rendered = 0;
// Render all active clips into the temporary buffer
for clip in &self.clips {
// Check if clip overlaps with current buffer time range
if clip.start_time < buffer_end_seconds && clip.end_time() > playhead_seconds {
rendered += self.render_clip(
clip,
&mut clip_buffer,
pool,
playhead_seconds,
sample_rate,
channels,
);
}
}
// Find and inject audio into the AudioInputNode
let node_indices: Vec<_> = self.effects_graph.node_indices().collect();
for node_idx in node_indices {
if let Some(graph_node) = self.effects_graph.get_graph_node_mut(node_idx) {
if graph_node.node.node_type() == "AudioInput" {
if let Some(input_node) = graph_node.node.as_any_mut().downcast_mut::<AudioInputNode>() {
input_node.inject_audio(&clip_buffer);
break;
}
}
}
}
// Process through the effects graph (this will write to output buffer)
self.effects_graph.process(output, &[], playhead_seconds);
// Evaluate and apply automation
let effective_volume = self.evaluate_automation_at_time(playhead_seconds);
// Apply track volume
for sample in output.iter_mut() {
*sample *= effective_volume;
}
rendered
}
/// Evaluate automation at a specific time and return the effective volume
fn evaluate_automation_at_time(&self, time: f64) -> f32 {
let mut volume = self.volume;
// Check for volume automation
for lane in self.automation_lanes.values() {
if !lane.enabled {
continue;
}
match lane.parameter_id {
ParameterId::TrackVolume => {
if let Some(automated_value) = lane.evaluate(time) {
volume = automated_value;
}
}
_ => {}
}
}
volume
}
/// Render a single clip into the output buffer
fn render_clip(
&self,
clip: &Clip,
output: &mut [f32],
pool: &AudioPool,
playhead_seconds: f64,
sample_rate: u32,
channels: u32,
) -> usize {
let buffer_duration_seconds = output.len() as f64 / (sample_rate as f64 * channels as f64);
let buffer_end_seconds = playhead_seconds + buffer_duration_seconds;
// Determine the time range we need to render (intersection of buffer and clip)
let render_start_seconds = playhead_seconds.max(clip.start_time);
let render_end_seconds = buffer_end_seconds.min(clip.end_time());
// If no overlap, return early
if render_start_seconds >= render_end_seconds {
return 0;
}
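// Worked example (illustrative numbers): with playhead = 4.0s and a clip at
// start_time = 3.0s with offset = 0.5s, rendering starts at max(4.0, 3.0) = 4.0s,
// the output offset is 0 samples, and the clip reads from 4.0 - 3.0 + 0.5 = 1.5s
// into its source audio.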
// Calculate offset into the output buffer (in interleaved samples)
let output_offset_seconds = render_start_seconds - playhead_seconds;
let output_offset_samples = (output_offset_seconds * sample_rate as f64 * channels as f64) as usize;
// Calculate position within the clip's audio file (in seconds)
let clip_position_seconds = render_start_seconds - clip.start_time + clip.offset;
// Calculate how many samples to render in the output
let render_duration_seconds = render_end_seconds - render_start_seconds;
let samples_to_render = (render_duration_seconds * sample_rate as f64 * channels as f64) as usize;
// Guard against an offset at or past the end of the buffer before subtracting,
// so the clamp below cannot underflow
if output_offset_samples >= output.len() {
return 0;
}
let samples_to_render = samples_to_render.min(output.len() - output_offset_samples);
// Get the slice of output buffer to write to
let output_slice = &mut output[output_offset_samples..output_offset_samples + samples_to_render];
// Calculate combined gain
let combined_gain = clip.gain * self.volume;
// Render from pool with sample rate conversion
// Pass the time position in seconds, let the pool handle sample rate conversion
pool.render_from_file(
clip.audio_pool_index,
output_slice,
clip_position_seconds,
combined_gain,
sample_rate,
channels,
)
}
}


@@ -0,0 +1,3 @@
pub mod types;
pub use types::{AudioEvent, Command, MidiClipData, OscilloscopeData, Query, QueryResponse};


@@ -0,0 +1,317 @@
use crate::audio::{
AutomationLaneId, ClipId, CurveType, MidiClip, MidiClipId, ParameterId,
TrackId,
};
use crate::audio::buffer_pool::BufferPoolStats;
use crate::audio::node_graph::nodes::LoopMode;
use crate::io::WaveformPeak;
/// Commands sent from UI/control thread to audio thread
#[derive(Debug, Clone)]
pub enum Command {
// Transport commands
/// Start playback
Play,
/// Stop playback and reset to beginning
Stop,
/// Pause playback (maintains position)
Pause,
/// Seek to a specific position in seconds
Seek(f64),
// Track management commands
/// Set track volume (0.0 = silence, 1.0 = unity gain)
SetTrackVolume(TrackId, f32),
/// Set track mute state
SetTrackMute(TrackId, bool),
/// Set track solo state
SetTrackSolo(TrackId, bool),
// Clip management commands
/// Move a clip to a new timeline position
MoveClip(TrackId, ClipId, f64),
/// Trim a clip (track_id, clip_id, new_start_time, new_duration, new_offset)
TrimClip(TrackId, ClipId, f64, f64, f64),
// Metatrack management commands
/// Create a new metatrack with a name
CreateMetatrack(String),
/// Add a track to a metatrack (track_id, metatrack_id)
AddToMetatrack(TrackId, TrackId),
/// Remove a track from its parent metatrack
RemoveFromMetatrack(TrackId),
// Metatrack transformation commands
/// Set metatrack time stretch factor (track_id, stretch_factor)
/// 0.5 = half speed, 1.0 = normal, 2.0 = double speed
SetTimeStretch(TrackId, f32),
/// Set metatrack time offset in seconds (track_id, offset)
/// Positive = shift content later, negative = shift earlier
SetOffset(TrackId, f64),
/// Set metatrack pitch shift in semitones (track_id, semitones) - for future use
SetPitchShift(TrackId, f32),
// Audio track commands
/// Create a new audio track with a name
CreateAudioTrack(String),
/// Add an audio file to the pool (path, data, channels, sample_rate)
/// Returns the pool index via an AudioEvent
AddAudioFile(String, Vec<f32>, u32, u32),
/// Add a clip to an audio track (track_id, pool_index, start_time, duration, offset)
AddAudioClip(TrackId, usize, f64, f64, f64),
// MIDI commands
/// Create a new MIDI track with a name
CreateMidiTrack(String),
/// Create a new MIDI clip on a track (track_id, start_time, duration)
CreateMidiClip(TrackId, f64, f64),
/// Add a MIDI note to a clip (track_id, clip_id, time_offset, note, velocity, duration)
AddMidiNote(TrackId, MidiClipId, f64, u8, u8, f64),
/// Add a pre-loaded MIDI clip to a track
AddLoadedMidiClip(TrackId, MidiClip),
/// Update MIDI clip notes (track_id, clip_id, notes: Vec<(start_time, note, velocity, duration)>)
/// NOTE: May need to switch to individual note operations if this becomes slow on clips with many notes
UpdateMidiClipNotes(TrackId, MidiClipId, Vec<(f64, u8, u8, f64)>),
// Diagnostics commands
/// Request buffer pool statistics
RequestBufferPoolStats,
// Automation commands
/// Create a new automation lane on a track (track_id, parameter_id)
CreateAutomationLane(TrackId, ParameterId),
/// Add an automation point to a lane (track_id, lane_id, time, value, curve)
AddAutomationPoint(TrackId, AutomationLaneId, f64, f32, CurveType),
/// Remove an automation point at a specific time (track_id, lane_id, time, tolerance)
RemoveAutomationPoint(TrackId, AutomationLaneId, f64, f64),
/// Clear all automation points from a lane (track_id, lane_id)
ClearAutomationLane(TrackId, AutomationLaneId),
/// Remove an automation lane (track_id, lane_id)
RemoveAutomationLane(TrackId, AutomationLaneId),
/// Enable/disable an automation lane (track_id, lane_id, enabled)
SetAutomationLaneEnabled(TrackId, AutomationLaneId, bool),
// Recording commands
/// Start recording on a track (track_id, start_time)
StartRecording(TrackId, f64),
/// Stop the current recording
StopRecording,
/// Pause the current recording
PauseRecording,
/// Resume the current recording
ResumeRecording,
// MIDI Recording commands
/// Start MIDI recording on a track (track_id, clip_id, start_time)
StartMidiRecording(TrackId, MidiClipId, f64),
/// Stop the current MIDI recording
StopMidiRecording,
// Project commands
/// Reset the entire project (remove all tracks, clear audio pool, reset state)
Reset,
// Live MIDI input commands
/// Send a live MIDI note on event to a track's instrument (track_id, note, velocity)
SendMidiNoteOn(TrackId, u8, u8),
/// Send a live MIDI note off event to a track's instrument (track_id, note)
SendMidiNoteOff(TrackId, u8),
/// Set the active MIDI track for external MIDI input routing (track_id or None)
SetActiveMidiTrack(Option<TrackId>),
// Metronome command
/// Enable or disable the metronome click track
SetMetronomeEnabled(bool),
// Node graph commands
/// Add a node to a track's instrument graph (track_id, node_type, position_x, position_y)
GraphAddNode(TrackId, String, f32, f32),
/// Add a node to a VoiceAllocator's template graph (track_id, voice_allocator_node_id, node_type, position_x, position_y)
GraphAddNodeToTemplate(TrackId, u32, String, f32, f32),
/// Remove a node from a track's instrument graph (track_id, node_index)
GraphRemoveNode(TrackId, u32),
/// Connect two nodes in a track's graph (track_id, from_node, from_port, to_node, to_port)
GraphConnect(TrackId, u32, usize, u32, usize),
/// Connect nodes in a VoiceAllocator template (track_id, voice_allocator_node_id, from_node, from_port, to_node, to_port)
GraphConnectInTemplate(TrackId, u32, u32, usize, u32, usize),
/// Disconnect two nodes in a track's graph (track_id, from_node, from_port, to_node, to_port)
GraphDisconnect(TrackId, u32, usize, u32, usize),
/// Set a parameter on a node (track_id, node_index, param_id, value)
GraphSetParameter(TrackId, u32, u32, f32),
/// Set which node receives MIDI events (track_id, node_index, enabled)
GraphSetMidiTarget(TrackId, u32, bool),
/// Set which node is the audio output (track_id, node_index)
GraphSetOutputNode(TrackId, u32),
/// Save current graph as a preset (track_id, preset_path, preset_name, description, tags)
GraphSavePreset(TrackId, String, String, String, Vec<String>),
/// Load a preset into a track's graph (track_id, preset_path)
GraphLoadPreset(TrackId, String),
/// Save a VoiceAllocator's template graph as a preset (track_id, voice_allocator_id, preset_path, preset_name)
GraphSaveTemplatePreset(TrackId, u32, String, String),
/// Load a sample into a SimpleSampler node (track_id, node_id, file_path)
SamplerLoadSample(TrackId, u32, String),
/// Add a sample layer to a MultiSampler node (track_id, node_id, file_path, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode)
MultiSamplerAddLayer(TrackId, u32, String, u8, u8, u8, u8, u8, Option<usize>, Option<usize>, LoopMode),
/// Update a MultiSampler layer's configuration (track_id, node_id, layer_index, key_min, key_max, root_key, velocity_min, velocity_max, loop_start, loop_end, loop_mode)
MultiSamplerUpdateLayer(TrackId, u32, usize, u8, u8, u8, u8, u8, Option<usize>, Option<usize>, LoopMode),
/// Remove a layer from a MultiSampler node (track_id, node_id, layer_index)
MultiSamplerRemoveLayer(TrackId, u32, usize),
// Automation Input Node commands
/// Add or update a keyframe on an AutomationInput node (track_id, node_id, time, value, interpolation, ease_out, ease_in)
AutomationAddKeyframe(TrackId, u32, f64, f32, String, (f32, f32), (f32, f32)),
/// Remove a keyframe from an AutomationInput node (track_id, node_id, time)
AutomationRemoveKeyframe(TrackId, u32, f64),
/// Set the display name of an AutomationInput node (track_id, node_id, name)
AutomationSetName(TrackId, u32, String),
}
/// Events sent from audio thread back to UI/control thread
#[derive(Debug, Clone)]
pub enum AudioEvent {
/// Current playback position in seconds
PlaybackPosition(f64),
/// Playback has stopped (reached end of audio)
PlaybackStopped,
/// Audio buffer underrun detected
BufferUnderrun,
/// A new track was created (track_id, is_metatrack, name)
TrackCreated(TrackId, bool, String),
/// An audio file was added to the pool (pool_index, path)
AudioFileAdded(usize, String),
/// A clip was added to a track (track_id, clip_id)
ClipAdded(TrackId, ClipId),
/// Buffer pool statistics response
BufferPoolStats(BufferPoolStats),
/// Automation lane created (track_id, lane_id, parameter_id)
AutomationLaneCreated(TrackId, AutomationLaneId, ParameterId),
/// Recording started (track_id, clip_id)
RecordingStarted(TrackId, ClipId),
/// Recording progress update (clip_id, current_duration)
RecordingProgress(ClipId, f64),
/// Recording stopped (clip_id, pool_index, waveform)
RecordingStopped(ClipId, usize, Vec<WaveformPeak>),
/// Recording error (error_message)
RecordingError(String),
/// MIDI recording stopped (track_id, clip_id, note_count)
MidiRecordingStopped(TrackId, MidiClipId, usize),
/// MIDI recording progress (track_id, clip_id, duration, notes)
/// Notes format: (start_time, note, velocity, duration)
MidiRecordingProgress(TrackId, MidiClipId, f64, Vec<(f64, u8, u8, f64)>),
/// Project has been reset
ProjectReset,
/// MIDI note started playing (note, velocity)
NoteOn(u8, u8),
/// MIDI note stopped playing (note)
NoteOff(u8),
// Node graph events
/// Node added to graph (track_id, node_index, node_type)
GraphNodeAdded(TrackId, u32, String),
/// Connection error occurred (track_id, error_message)
GraphConnectionError(TrackId, String),
/// Graph state changed (for full UI sync)
GraphStateChanged(TrackId),
/// Preset fully loaded (track_id) - emitted after all nodes and samples are loaded
GraphPresetLoaded(TrackId),
/// Preset has been saved to file (track_id, preset_path)
GraphPresetSaved(TrackId, String),
}
/// Synchronous queries sent from UI thread to audio thread
#[derive(Debug)]
pub enum Query {
/// Get the current graph state as JSON (track_id)
GetGraphState(TrackId),
/// Get a voice allocator's template graph state as JSON (track_id, voice_allocator_id)
GetTemplateState(TrackId, u32),
/// Get oscilloscope data from a node (track_id, node_id, sample_count)
GetOscilloscopeData(TrackId, u32, usize),
/// Get MIDI clip data (track_id, clip_id)
GetMidiClip(TrackId, MidiClipId),
/// Get keyframes from an AutomationInput node (track_id, node_id)
GetAutomationKeyframes(TrackId, u32),
/// Get the display name of an AutomationInput node (track_id, node_id)
GetAutomationName(TrackId, u32),
/// Serialize audio pool for project saving (project_path)
SerializeAudioPool(std::path::PathBuf),
/// Load audio pool from serialized entries (entries, project_path)
LoadAudioPool(Vec<crate::audio::pool::AudioPoolEntry>, std::path::PathBuf),
/// Resolve a missing audio file (pool_index, new_path)
ResolveMissingAudioFile(usize, std::path::PathBuf),
/// Serialize a track's effects/instrument graph (track_id, project_path)
SerializeTrackGraph(TrackId, std::path::PathBuf),
/// Load a track's effects/instrument graph (track_id, preset_json, project_path)
LoadTrackGraph(TrackId, String, std::path::PathBuf),
/// Create a new audio track (name) - returns track ID synchronously
CreateAudioTrackSync(String),
/// Create a new MIDI track (name) - returns track ID synchronously
CreateMidiTrackSync(String),
/// Get waveform data from audio pool (pool_index, target_peaks)
GetPoolWaveform(usize, usize),
/// Get file info from audio pool (pool_index) - returns (duration, sample_rate, channels)
GetPoolFileInfo(usize),
/// Export audio to file (settings, output_path)
ExportAudio(crate::audio::ExportSettings, std::path::PathBuf),
}
/// Oscilloscope data from a node
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct OscilloscopeData {
/// Audio samples
pub audio: Vec<f32>,
/// CV samples (may be empty if no CV input)
pub cv: Vec<f32>,
}
/// MIDI clip data for serialization
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct MidiClipData {
pub duration: f64,
pub events: Vec<crate::audio::midi::MidiEvent>,
}
/// Automation keyframe data for serialization
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct AutomationKeyframeData {
pub time: f64,
pub value: f32,
pub interpolation: String,
pub ease_out: (f32, f32),
pub ease_in: (f32, f32),
}
/// Responses to synchronous queries
#[derive(Debug)]
pub enum QueryResponse {
/// Graph state as JSON string
GraphState(Result<String, String>),
/// Oscilloscope data samples
OscilloscopeData(Result<OscilloscopeData, String>),
/// MIDI clip data
MidiClipData(Result<MidiClipData, String>),
/// Automation keyframes
AutomationKeyframes(Result<Vec<AutomationKeyframeData>, String>),
/// Automation node name
AutomationName(Result<String, String>),
/// Serialized audio pool entries
AudioPoolSerialized(Result<Vec<crate::audio::pool::AudioPoolEntry>, String>),
/// Audio pool loaded (returns list of missing pool indices)
AudioPoolLoaded(Result<Vec<usize>, String>),
/// Audio file resolved
AudioFileResolved(Result<(), String>),
/// Track graph serialized as JSON
TrackGraphSerialized(Result<String, String>),
/// Track graph loaded
TrackGraphLoaded(Result<(), String>),
/// Track created (returns track ID)
TrackCreated(Result<TrackId, String>),
/// Pool waveform data
PoolWaveform(Result<Vec<crate::io::WaveformPeak>, String>),
/// Pool file info (duration, sample_rate, channels)
PoolFileInfo(Result<(f64, u32, u32), String>),
/// Audio exported
AudioExported(Result<(), String>),
}

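These four enums are the whole control-plane protocol between the UI and the audio thread: commands and queries go one way, events and responses come back, all over lock-free queues. A minimal wiring sketch, assuming only the rtrb ring-buffer API already used throughout this diff (queue capacities are illustrative):

use daw_backend::{AudioEvent, Command};

fn main() {
    // UI -> audio thread commands, audio -> UI events.
    let (mut cmd_tx, mut cmd_rx) = rtrb::RingBuffer::<Command>::new(512);
    let (mut evt_tx, mut evt_rx) = rtrb::RingBuffer::<AudioEvent>::new(256);

    // UI side: push never blocks; it fails only when the queue is full.
    cmd_tx.push(Command::Play).expect("command queue full");

    // Audio side (normally inside the engine): drain pending commands.
    while let Ok(cmd) = cmd_rx.pop() {
        if matches!(cmd, Command::Play) {
            let _ = evt_tx.push(AudioEvent::PlaybackPosition(0.0));
        }
    }

    // UI side: poll events without ever blocking the audio thread.
    while let Ok(event) = evt_rx.pop() {
        println!("{:?}", event);
    }
}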

@ -0,0 +1,175 @@
use std::f32::consts::PI;
/// Biquad filter implementation (2-pole IIR filter)
///
/// Transfer function: H(z) = (b0 + b1*z^-1 + b2*z^-2) / (1 + a1*z^-1 + a2*z^-2)
#[derive(Clone)]
pub struct BiquadFilter {
// Filter coefficients
b0: f32,
b1: f32,
b2: f32,
a1: f32,
a2: f32,
// State variables (per channel, supporting up to 2 channels).
// Note: the Direct Form II Transposed implementation below only uses
// x1/x2; y1/y2 are vestigial and merely cleared by reset().
x1: [f32; 2],
x2: [f32; 2],
y1: [f32; 2],
y2: [f32; 2],
}
impl BiquadFilter {
/// Create a new biquad filter with unity gain (pass-through)
pub fn new() -> Self {
Self {
b0: 1.0,
b1: 0.0,
b2: 0.0,
a1: 0.0,
a2: 0.0,
x1: [0.0; 2],
x2: [0.0; 2],
y1: [0.0; 2],
y2: [0.0; 2],
}
}
/// Create a lowpass filter
///
/// # Arguments
/// * `frequency` - Cutoff frequency in Hz
/// * `q` - Quality factor (resonance), typically 0.707 for Butterworth
/// * `sample_rate` - Sample rate in Hz
pub fn lowpass(frequency: f32, q: f32, sample_rate: f32) -> Self {
let mut filter = Self::new();
filter.set_lowpass(frequency, q, sample_rate);
filter
}
/// Create a highpass filter
///
/// # Arguments
/// * `frequency` - Cutoff frequency in Hz
/// * `q` - Quality factor (resonance), typically 0.707 for Butterworth
/// * `sample_rate` - Sample rate in Hz
pub fn highpass(frequency: f32, q: f32, sample_rate: f32) -> Self {
let mut filter = Self::new();
filter.set_highpass(frequency, q, sample_rate);
filter
}
/// Create a peaking EQ filter
///
/// # Arguments
/// * `frequency` - Center frequency in Hz
/// * `q` - Quality factor (bandwidth)
/// * `gain_db` - Gain in decibels
/// * `sample_rate` - Sample rate in Hz
pub fn peaking(frequency: f32, q: f32, gain_db: f32, sample_rate: f32) -> Self {
let mut filter = Self::new();
filter.set_peaking(frequency, q, gain_db, sample_rate);
filter
}
/// Set coefficients for a lowpass filter
pub fn set_lowpass(&mut self, frequency: f32, q: f32, sample_rate: f32) {
let omega = 2.0 * PI * frequency / sample_rate;
let sin_omega = omega.sin();
let cos_omega = omega.cos();
let alpha = sin_omega / (2.0 * q);
let a0 = 1.0 + alpha;
self.b0 = ((1.0 - cos_omega) / 2.0) / a0;
self.b1 = (1.0 - cos_omega) / a0;
self.b2 = ((1.0 - cos_omega) / 2.0) / a0;
self.a1 = (-2.0 * cos_omega) / a0;
self.a2 = (1.0 - alpha) / a0;
}
/// Set coefficients for a highpass filter
pub fn set_highpass(&mut self, frequency: f32, q: f32, sample_rate: f32) {
let omega = 2.0 * PI * frequency / sample_rate;
let sin_omega = omega.sin();
let cos_omega = omega.cos();
let alpha = sin_omega / (2.0 * q);
let a0 = 1.0 + alpha;
self.b0 = ((1.0 + cos_omega) / 2.0) / a0;
self.b1 = -(1.0 + cos_omega) / a0;
self.b2 = ((1.0 + cos_omega) / 2.0) / a0;
self.a1 = (-2.0 * cos_omega) / a0;
self.a2 = (1.0 - alpha) / a0;
}
/// Set coefficients for a peaking EQ filter
pub fn set_peaking(&mut self, frequency: f32, q: f32, gain_db: f32, sample_rate: f32) {
let omega = 2.0 * PI * frequency / sample_rate;
let sin_omega = omega.sin();
let cos_omega = omega.cos();
let a_gain = 10.0_f32.powf(gain_db / 40.0);
let alpha = sin_omega / (2.0 * q);
let a0 = 1.0 + alpha / a_gain;
self.b0 = (1.0 + alpha * a_gain) / a0;
self.b1 = (-2.0 * cos_omega) / a0;
self.b2 = (1.0 - alpha * a_gain) / a0;
self.a1 = (-2.0 * cos_omega) / a0;
self.a2 = (1.0 - alpha / a_gain) / a0;
}
/// Process a single sample
///
/// # Arguments
/// * `input` - Input sample
/// * `channel` - Channel index (0 or 1)
///
/// # Returns
/// Filtered output sample
#[inline]
pub fn process_sample(&mut self, input: f32, channel: usize) -> f32 {
let channel = channel.min(1); // Clamp to 0 or 1
// Direct Form II Transposed implementation
let output = self.b0 * input + self.x1[channel];
self.x1[channel] = self.b1 * input - self.a1 * output + self.x2[channel];
self.x2[channel] = self.b2 * input - self.a2 * output;
output
}
/// Process a buffer of interleaved samples
///
/// # Arguments
/// * `buffer` - Interleaved audio samples
/// * `channels` - Number of channels
pub fn process_buffer(&mut self, buffer: &mut [f32], channels: usize) {
if channels == 1 {
// Mono
for sample in buffer.iter_mut() {
*sample = self.process_sample(*sample, 0);
}
} else if channels == 2 {
// Stereo
for frame in buffer.chunks_exact_mut(2) {
frame[0] = self.process_sample(frame[0], 0);
frame[1] = self.process_sample(frame[1], 1);
}
}
}
/// Reset filter state (clear delay lines)
pub fn reset(&mut self) {
self.x1 = [0.0; 2];
self.x2 = [0.0; 2];
self.y1 = [0.0; 2];
self.y2 = [0.0; 2];
}
}
impl Default for BiquadFilter {
fn default() -> Self {
Self::new()
}
}

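A short usage sketch for the filter above (cutoff and buffer contents are illustrative):

use daw_backend::dsp::BiquadFilter;

fn main() {
    let sample_rate = 48_000.0;
    // 1 kHz cutoff, Q = 0.707 (Butterworth response).
    let mut filter = BiquadFilter::lowpass(1_000.0, 0.707, sample_rate);

    // Interleaved stereo: L, R, L, R, ...
    let mut buffer = vec![0.5_f32, -0.5, 0.25, -0.25];
    filter.process_buffer(&mut buffer, 2);

    // Clear the delay lines before reusing the filter on unrelated audio.
    filter.reset();
}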

@ -0,0 +1,3 @@
pub mod biquad;
pub use biquad::BiquadFilter;


@ -0,0 +1,35 @@
/// Audio effect processor trait
///
/// All effects must be Send to be usable in the audio thread.
/// Effects should be real-time safe: no allocations, no blocking operations.
pub trait Effect: Send {
/// Process audio buffer in-place
///
/// # Arguments
/// * `buffer` - Interleaved audio samples to process
/// * `channels` - Number of audio channels (2 for stereo)
/// * `sample_rate` - Sample rate in Hz
fn process(&mut self, buffer: &mut [f32], channels: usize, sample_rate: u32);
/// Set an effect parameter
///
/// # Arguments
/// * `id` - Parameter identifier
/// * `value` - Parameter value (normalized or specific units depending on parameter)
fn set_parameter(&mut self, id: u32, value: f32);
/// Get an effect parameter value
///
/// # Arguments
/// * `id` - Parameter identifier
///
/// # Returns
/// Current parameter value
fn get_parameter(&self, id: u32) -> f32;
/// Reset effect state (clear delays, resonances, etc.)
fn reset(&mut self);
/// Get the effect name
fn name(&self) -> &str;
}

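Implementing the trait is a matter of filling in five methods. A minimal hard-clipper, as a sketch (HardClip and its parameter id 0 are invented for illustration, not part of this diff):

use daw_backend::effects::Effect;

struct HardClip {
    threshold: f32,
}

impl Effect for HardClip {
    fn process(&mut self, buffer: &mut [f32], _channels: usize, _sample_rate: u32) {
        // Real-time safe: in-place, no allocation, no blocking.
        for s in buffer.iter_mut() {
            *s = s.clamp(-self.threshold, self.threshold);
        }
    }
    fn set_parameter(&mut self, id: u32, value: f32) {
        if id == 0 {
            self.threshold = value.clamp(0.0, 1.0);
        }
    }
    fn get_parameter(&self, id: u32) -> f32 {
        if id == 0 { self.threshold } else { 0.0 }
    }
    fn reset(&mut self) {} // stateless
    fn name(&self) -> &str {
        "HardClip"
    }
}

fn main() {
    let mut clip = HardClip { threshold: 0.8 };
    let mut buffer = vec![1.2_f32, -1.5, 0.3];
    clip.process(&mut buffer, 1, 48_000);
    assert_eq!(buffer, vec![0.8, -0.8, 0.3]);
}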

@ -0,0 +1,148 @@
use super::Effect;
use crate::dsp::BiquadFilter;
/// Simple 3-band EQ (low shelf, mid peak, high shelf)
///
/// Parameters:
/// - 0: Low gain in dB (-12.0 to +12.0)
/// - 1: Mid gain in dB (-12.0 to +12.0)
/// - 2: High gain in dB (-12.0 to +12.0)
/// - 3: Low frequency in Hz (default: 250)
/// - 4: Mid frequency in Hz (default: 1000)
/// - 5: High frequency in Hz (default: 8000)
pub struct SimpleEQ {
low_gain: f32,
mid_gain: f32,
high_gain: f32,
low_freq: f32,
mid_freq: f32,
high_freq: f32,
low_filter: BiquadFilter,
mid_filter: BiquadFilter,
high_filter: BiquadFilter,
sample_rate: f32,
}
impl SimpleEQ {
/// Create a new SimpleEQ with flat response
pub fn new() -> Self {
Self {
low_gain: 0.0,
mid_gain: 0.0,
high_gain: 0.0,
low_freq: 250.0,
mid_freq: 1000.0,
high_freq: 8000.0,
low_filter: BiquadFilter::new(),
mid_filter: BiquadFilter::new(),
high_filter: BiquadFilter::new(),
sample_rate: 48000.0, // Default; updated on the first process() call if the real rate differs
}
}
/// Set low band gain in decibels
pub fn set_low_gain(&mut self, gain_db: f32) {
self.low_gain = gain_db.clamp(-12.0, 12.0);
self.update_filters();
}
/// Set mid band gain in decibels
pub fn set_mid_gain(&mut self, gain_db: f32) {
self.mid_gain = gain_db.clamp(-12.0, 12.0);
self.update_filters();
}
/// Set high band gain in decibels
pub fn set_high_gain(&mut self, gain_db: f32) {
self.high_gain = gain_db.clamp(-12.0, 12.0);
self.update_filters();
}
/// Set low band frequency
pub fn set_low_freq(&mut self, freq: f32) {
self.low_freq = freq.clamp(20.0, 500.0);
self.update_filters();
}
/// Set mid band frequency
pub fn set_mid_freq(&mut self, freq: f32) {
self.mid_freq = freq.clamp(200.0, 5000.0);
self.update_filters();
}
/// Set high band frequency
pub fn set_high_freq(&mut self, freq: f32) {
self.high_freq = freq.clamp(2000.0, 20000.0);
self.update_filters();
}
/// Update filter coefficients based on current parameters
fn update_filters(&mut self) {
// Only update if sample rate has been set
if self.sample_rate > 0.0 {
// Use peaking filters for all bands
// Q factor of 1.0 gives a moderate bandwidth
self.low_filter.set_peaking(self.low_freq, 1.0, self.low_gain, self.sample_rate);
self.mid_filter.set_peaking(self.mid_freq, 1.0, self.mid_gain, self.sample_rate);
self.high_filter.set_peaking(self.high_freq, 1.0, self.high_gain, self.sample_rate);
}
}
}
impl Default for SimpleEQ {
fn default() -> Self {
Self::new()
}
}
impl Effect for SimpleEQ {
fn process(&mut self, buffer: &mut [f32], channels: usize, sample_rate: u32) {
// Update sample rate if it changed
let sr = sample_rate as f32;
if (self.sample_rate - sr).abs() > 0.1 {
self.sample_rate = sr;
self.update_filters();
}
// Process through each filter in series
self.low_filter.process_buffer(buffer, channels);
self.mid_filter.process_buffer(buffer, channels);
self.high_filter.process_buffer(buffer, channels);
}
fn set_parameter(&mut self, id: u32, value: f32) {
match id {
0 => self.set_low_gain(value),
1 => self.set_mid_gain(value),
2 => self.set_high_gain(value),
3 => self.set_low_freq(value),
4 => self.set_mid_freq(value),
5 => self.set_high_freq(value),
_ => {}
}
}
fn get_parameter(&self, id: u32) -> f32 {
match id {
0 => self.low_gain,
1 => self.mid_gain,
2 => self.high_gain,
3 => self.low_freq,
4 => self.mid_freq,
5 => self.high_freq,
_ => 0.0,
}
}
fn reset(&mut self) {
self.low_filter.reset();
self.mid_filter.reset();
self.high_filter.reset();
}
fn name(&self) -> &str {
"SimpleEQ"
}
}

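Driving the EQ through the generic Effect interface, as a sketch (gain values are illustrative):

use daw_backend::effects::{Effect, SimpleEQ};

fn main() {
    let mut eq = SimpleEQ::new();
    eq.set_parameter(0, 3.0);  // low  +3 dB
    eq.set_parameter(1, -2.0); // mid  -2 dB
    eq.set_parameter(2, 3.0);  // high +3 dB

    // Interleaved stereo buffer (silent here, just showing the call shape).
    let mut buffer = vec![0.0_f32; 512];
    eq.process(&mut buffer, 2, 48_000);
}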

@ -0,0 +1,97 @@
use super::Effect;
/// Simple gain/volume effect
///
/// Parameters:
/// - 0: Gain in dB (-60.0 to +12.0)
pub struct GainEffect {
gain_db: f32,
gain_linear: f32,
}
impl GainEffect {
/// Create a new gain effect with 0 dB (unity) gain
pub fn new() -> Self {
Self {
gain_db: 0.0,
gain_linear: 1.0,
}
}
/// Create a gain effect with a specific dB value
pub fn with_gain_db(gain_db: f32) -> Self {
// Clamp to the same range as set_gain_db for consistency
let gain_db = gain_db.clamp(-60.0, 12.0);
let gain_linear = db_to_linear(gain_db);
Self {
gain_db,
gain_linear,
}
}
/// Set gain in decibels
pub fn set_gain_db(&mut self, gain_db: f32) {
self.gain_db = gain_db.clamp(-60.0, 12.0);
self.gain_linear = db_to_linear(self.gain_db);
}
/// Get current gain in decibels
pub fn gain_db(&self) -> f32 {
self.gain_db
}
}
impl Default for GainEffect {
fn default() -> Self {
Self::new()
}
}
impl Effect for GainEffect {
fn process(&mut self, buffer: &mut [f32], _channels: usize, _sample_rate: u32) {
for sample in buffer.iter_mut() {
*sample *= self.gain_linear;
}
}
fn set_parameter(&mut self, id: u32, value: f32) {
if id == 0 {
self.set_gain_db(value);
}
}
fn get_parameter(&self, id: u32) -> f32 {
if id == 0 {
self.gain_db
} else {
0.0
}
}
fn reset(&mut self) {
// Gain has no state to reset
}
fn name(&self) -> &str {
"Gain"
}
}
/// Convert decibels to linear gain
#[inline]
fn db_to_linear(db: f32) -> f32 {
if db <= -60.0 {
0.0
} else {
10.0_f32.powf(db / 20.0)
}
}
/// Convert linear gain to decibels
#[inline]
#[allow(dead_code)]
fn linear_to_db(linear: f32) -> f32 {
if linear <= 0.0 {
-60.0
} else {
20.0 * linear.log10()
}
}

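For intuition about the conversion helpers: db_to_linear(-6.0) = 10^(-6/20) ≈ 0.501, so -6 dB roughly halves the amplitude and +6 dB roughly doubles it, while anything at or below the -60 dB floor maps to exact silence rather than a tiny residual gain.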

@ -0,0 +1,11 @@
pub mod effect_trait;
pub mod eq;
pub mod gain;
pub mod pan;
pub mod synth;
pub use effect_trait::Effect;
pub use eq::SimpleEQ;
pub use gain::GainEffect;
pub use pan::PanEffect;
pub use synth::SimpleSynth;


@ -0,0 +1,98 @@
use super::Effect;
/// Stereo panning effect using constant-power panning law
///
/// Parameters:
/// - 0: Pan position (-1.0 = full left, 0.0 = center, +1.0 = full right)
pub struct PanEffect {
pan: f32,
left_gain: f32,
right_gain: f32,
}
impl PanEffect {
/// Create a new pan effect with center panning
pub fn new() -> Self {
let mut effect = Self {
pan: 0.0,
left_gain: 1.0,
right_gain: 1.0,
};
effect.update_gains();
effect
}
/// Create a pan effect with a specific pan position
pub fn with_pan(pan: f32) -> Self {
let mut effect = Self {
pan: pan.clamp(-1.0, 1.0),
left_gain: 1.0,
right_gain: 1.0,
};
effect.update_gains();
effect
}
/// Set pan position (-1.0 = left, 0.0 = center, +1.0 = right)
pub fn set_pan(&mut self, pan: f32) {
self.pan = pan.clamp(-1.0, 1.0);
self.update_gains();
}
/// Get current pan position
pub fn pan(&self) -> f32 {
self.pan
}
/// Update left/right gains using constant-power panning law
fn update_gains(&mut self) {
use std::f32::consts::PI;
// Constant-power panning: pan from -1 to +1 maps to angle 0 to PI/2
let angle = (self.pan + 1.0) * 0.5 * PI / 2.0;
self.left_gain = angle.cos();
self.right_gain = angle.sin();
}
}
impl Default for PanEffect {
fn default() -> Self {
Self::new()
}
}
impl Effect for PanEffect {
fn process(&mut self, buffer: &mut [f32], channels: usize, _sample_rate: u32) {
if channels == 2 {
// Stereo processing
for frame in buffer.chunks_exact_mut(2) {
frame[0] *= self.left_gain;
frame[1] *= self.right_gain;
}
}
// Mono and other channel counts: no panning applied
}
fn set_parameter(&mut self, id: u32, value: f32) {
if id == 0 {
self.set_pan(value);
}
}
fn get_parameter(&self, id: u32) -> f32 {
if id == 0 {
self.pan
} else {
0.0
}
}
fn reset(&mut self) {
// Pan has no state to reset
}
fn name(&self) -> &str {
"Pan"
}
}

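A quick sanity check on the panning law: at center (pan = 0.0) the angle is π/4, so both gains are cos(π/4) = sin(π/4) ≈ 0.707, and left² + right² = 1 at every pan position, which is what keeps perceived loudness constant across the sweep.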

@ -0,0 +1,285 @@
use super::Effect;
use crate::audio::midi::MidiEvent;
use std::f32::consts::PI;
/// Maximum number of simultaneous voices
const MAX_VOICES: usize = 16;
/// Envelope state for a voice
#[derive(Clone, Copy, PartialEq)]
enum EnvelopeState {
Attack,
Sustain,
Release,
Off,
}
/// A single synthesizer voice
#[derive(Clone)]
struct SynthVoice {
active: bool,
note: u8,
channel: u8,
velocity: u8,
phase: f32,
frequency: f32,
age: u32, // For voice stealing
// Envelope
envelope_state: EnvelopeState,
envelope_level: f32, // 0.0 to 1.0
}
impl SynthVoice {
fn new() -> Self {
Self {
active: false,
note: 0,
channel: 0,
velocity: 0,
phase: 0.0,
frequency: 0.0,
age: 0,
envelope_state: EnvelopeState::Off,
envelope_level: 0.0,
}
}
/// Calculate frequency from MIDI note number
fn note_to_frequency(note: u8) -> f32 {
440.0 * 2.0_f32.powf((note as f32 - 69.0) / 12.0)
}
/// Start playing a note
fn note_on(&mut self, channel: u8, note: u8, velocity: u8) {
self.active = true;
self.channel = channel;
self.note = note;
self.velocity = velocity;
self.frequency = Self::note_to_frequency(note);
self.phase = 0.0;
self.age = 0;
self.envelope_state = EnvelopeState::Attack;
self.envelope_level = 0.0; // Start from silence
}
/// Stop playing (start release phase)
fn note_off(&mut self) {
// Don't stop immediately - start release phase
if self.envelope_state != EnvelopeState::Off {
self.envelope_state = EnvelopeState::Release;
}
}
/// Generate one sample
fn process_sample(&mut self, sample_rate: f32) -> f32 {
if self.envelope_state == EnvelopeState::Off {
return 0.0;
}
// Envelope timing constants (in seconds)
const ATTACK_TIME: f32 = 0.005; // 5ms attack
const RELEASE_TIME: f32 = 0.05; // 50ms release
// Update envelope
let attack_increment = 1.0 / (ATTACK_TIME * sample_rate);
let release_decrement = 1.0 / (RELEASE_TIME * sample_rate);
match self.envelope_state {
EnvelopeState::Attack => {
self.envelope_level += attack_increment;
if self.envelope_level >= 1.0 {
self.envelope_level = 1.0;
self.envelope_state = EnvelopeState::Sustain;
}
}
EnvelopeState::Sustain => {
// Stay at full level
self.envelope_level = 1.0;
}
EnvelopeState::Release => {
self.envelope_level -= release_decrement;
if self.envelope_level <= 0.0 {
self.envelope_level = 0.0;
self.envelope_state = EnvelopeState::Off;
self.active = false; // Now we can truly stop
}
}
EnvelopeState::Off => {
return 0.0;
}
}
// Simple sine wave
let sample = (self.phase * 2.0 * PI).sin() * (self.velocity as f32 / 127.0) * 0.3;
// Update phase
self.phase += self.frequency / sample_rate;
if self.phase >= 1.0 {
self.phase -= 1.0;
}
self.age += 1;
// Apply envelope
sample * self.envelope_level
}
}
/// Simple polyphonic synthesizer using sine waves
pub struct SimpleSynth {
voices: Vec<SynthVoice>,
sample_rate: f32,
pub pending_events: Vec<MidiEvent>,
}
impl SimpleSynth {
/// Create a new SimpleSynth
pub fn new() -> Self {
Self {
voices: vec![SynthVoice::new(); MAX_VOICES],
sample_rate: 44100.0,
pending_events: Vec::new(),
}
}
/// Find a free voice, or steal the oldest one
fn find_voice_for_note_on(&mut self) -> usize {
// First, look for an inactive voice
for (i, voice) in self.voices.iter().enumerate() {
if !voice.active {
return i;
}
}
// No free voices, steal the oldest one
self.voices
.iter()
.enumerate()
.max_by_key(|(_, v)| v.age)
.map(|(i, _)| i)
.unwrap_or(0)
}
/// Find the voice playing a specific note on a specific channel
/// Only matches voices in Attack or Sustain state (not already releasing)
fn find_voice_for_note_off(&mut self, channel: u8, note: u8) -> Option<usize> {
self.voices
.iter()
.position(|v| {
v.active
&& v.channel == channel
&& v.note == note
&& (v.envelope_state == EnvelopeState::Attack
|| v.envelope_state == EnvelopeState::Sustain)
})
}
/// Handle a MIDI event
pub fn handle_event(&mut self, event: &MidiEvent) {
if event.is_note_on() {
let voice_idx = self.find_voice_for_note_on();
self.voices[voice_idx].note_on(event.channel(), event.data1, event.data2);
} else if event.is_note_off() {
if let Some(voice_idx) = self.find_voice_for_note_off(event.channel(), event.data1) {
self.voices[voice_idx].note_off();
}
}
}
/// Queue a MIDI event to be processed
pub fn queue_event(&mut self, event: MidiEvent) {
self.pending_events.push(event);
}
/// Stop all currently playing notes immediately (no release envelope)
pub fn all_notes_off(&mut self) {
for voice in &mut self.voices {
voice.active = false;
voice.envelope_state = EnvelopeState::Off;
voice.envelope_level = 0.0;
}
self.pending_events.clear();
}
/// Process all queued events
fn process_events(&mut self) {
// Collect events first to avoid borrowing issues
let events: Vec<MidiEvent> = self.pending_events.drain(..).collect();
for event in events {
self.handle_event(&event);
}
}
}
impl Effect for SimpleSynth {
fn process(&mut self, buffer: &mut [f32], channels: usize, sample_rate: u32) {
self.sample_rate = sample_rate as f32;
// Process any queued MIDI events
self.process_events();
// Generate audio from all active voices
if channels == 1 {
// Mono
for sample in buffer.iter_mut() {
let mut sum = 0.0;
for voice in &mut self.voices {
sum += voice.process_sample(self.sample_rate);
}
*sample += sum;
}
} else if channels == 2 {
// Stereo (duplicate mono signal)
for frame in buffer.chunks_exact_mut(2) {
let mut sum = 0.0;
for voice in &mut self.voices {
sum += voice.process_sample(self.sample_rate);
}
frame[0] += sum;
frame[1] += sum;
}
}
}
fn set_parameter(&mut self, id: u32, value: f32) {
// Parameter 0: Note on
// Parameter 1: Note off
// This is a simple interface for testing without proper MIDI routing
match id {
0 => {
let note = value as u8;
let voice_idx = self.find_voice_for_note_on();
self.voices[voice_idx].note_on(0, note, 100);
}
1 => {
let note = value as u8;
if let Some(voice_idx) = self.find_voice_for_note_off(0, note) {
self.voices[voice_idx].note_off();
}
}
_ => {}
}
}
fn get_parameter(&self, _id: u32) -> f32 {
0.0
}
fn reset(&mut self) {
for voice in &mut self.voices {
voice.note_off();
}
self.pending_events.clear();
}
fn name(&self) -> &str {
"SimpleSynth"
}
}
impl Default for SimpleSynth {
fn default() -> Self {
Self::new()
}
}

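Feeding the synth from queued events, as a sketch (it reuses the MidiEvent::note_on/note_off constructors seen in the MIDI file loader below; queued events are applied at the start of the next processed block, so the timestamps here are nominal):

use daw_backend::effects::{Effect, SimpleSynth};
use daw_backend::MidiEvent;

fn main() {
    let mut synth = SimpleSynth::new();
    // Middle C (note 60) at velocity 100 on channel 0.
    synth.queue_event(MidiEvent::note_on(0.0, 0, 60, 100));

    // One block of interleaved stereo output; the synth mixes into it.
    let mut buffer = vec![0.0_f32; 512];
    synth.process(&mut buffer, 2, 44_100);

    // Note off starts the 50 ms release envelope.
    synth.queue_event(MidiEvent::note_off(0.0, 0, 60, 64));
    synth.process(&mut buffer, 2, 44_100); // release tail
}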

@ -0,0 +1,174 @@
use std::path::Path;
use symphonia::core::audio::SampleBuffer;
use symphonia::core::codecs::DecoderOptions;
use symphonia::core::errors::Error;
use symphonia::core::formats::FormatOptions;
use symphonia::core::io::MediaSourceStream;
use symphonia::core::meta::MetadataOptions;
use symphonia::core::probe::Hint;
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct WaveformPeak {
pub min: f32,
pub max: f32,
}
pub struct AudioFile {
pub data: Vec<f32>,
pub channels: u32,
pub sample_rate: u32,
pub frames: u64,
}
impl AudioFile {
/// Load an audio file from disk and decode it to interleaved f32 samples
pub fn load<P: AsRef<Path>>(path: P) -> Result<Self, String> {
let path = path.as_ref();
// Open the media source
let file = std::fs::File::open(path)
.map_err(|e| format!("Failed to open file: {}", e))?;
let mss = MediaSourceStream::new(Box::new(file), Default::default());
// Create a probe hint using the file extension
let mut hint = Hint::new();
if let Some(extension) = path.extension() {
if let Some(ext_str) = extension.to_str() {
hint.with_extension(ext_str);
}
}
// Probe the media source
let probed = symphonia::default::get_probe()
.format(&hint, mss, &FormatOptions::default(), &MetadataOptions::default())
.map_err(|e| format!("Failed to probe file: {}", e))?;
let mut format = probed.format;
// Find the default audio track
let track = format
.tracks()
.iter()
.find(|t| t.codec_params.codec != symphonia::core::codecs::CODEC_TYPE_NULL)
.ok_or_else(|| "No audio tracks found".to_string())?;
let track_id = track.id;
// Get audio parameters
let codec_params = &track.codec_params;
let channels = codec_params.channels
.ok_or_else(|| "Channel count not specified".to_string())?
.count() as u32;
let sample_rate = codec_params.sample_rate
.ok_or_else(|| "Sample rate not specified".to_string())?;
// Create decoder
let mut decoder = symphonia::default::get_codecs()
.make(&codec_params, &DecoderOptions::default())
.map_err(|e| format!("Failed to create decoder: {}", e))?;
// Decode all packets
let mut audio_data = Vec::new();
let mut sample_buf = None;
loop {
let packet = match format.next_packet() {
Ok(packet) => packet,
Err(Error::ResetRequired) => {
return Err("Decoder reset required (not implemented)".to_string());
}
Err(Error::IoError(e)) if e.kind() == std::io::ErrorKind::UnexpectedEof => {
// End of file
break;
}
Err(e) => {
return Err(format!("Failed to read packet: {}", e));
}
};
// Skip packets for other tracks
if packet.track_id() != track_id {
continue;
}
// Decode the packet
match decoder.decode(&packet) {
Ok(decoded) => {
// Initialize sample buffer on first packet
if sample_buf.is_none() {
let spec = *decoded.spec();
let duration = decoded.capacity() as u64;
sample_buf = Some(SampleBuffer::<f32>::new(duration, spec));
}
// Copy decoded audio to sample buffer
if let Some(ref mut buf) = sample_buf {
buf.copy_interleaved_ref(decoded);
audio_data.extend_from_slice(buf.samples());
}
}
Err(Error::DecodeError(e)) => {
eprintln!("Decode error: {}", e);
continue;
}
Err(e) => {
return Err(format!("Decode failed: {}", e));
}
}
}
let frames = (audio_data.len() / channels as usize) as u64;
Ok(AudioFile {
data: audio_data,
channels,
sample_rate,
frames,
})
}
/// Calculate the duration of the audio file in seconds
pub fn duration(&self) -> f64 {
self.frames as f64 / self.sample_rate as f64
}
/// Generate a waveform overview with the specified number of peaks
/// This creates a downsampled representation suitable for timeline visualization
pub fn generate_waveform_overview(&self, target_peaks: usize) -> Vec<WaveformPeak> {
if self.frames == 0 || target_peaks == 0 {
return Vec::new();
}
let total_frames = self.frames as usize;
let frames_per_peak = (total_frames / target_peaks).max(1);
let actual_peaks = (total_frames + frames_per_peak - 1) / frames_per_peak;
let mut peaks = Vec::with_capacity(actual_peaks);
for peak_idx in 0..actual_peaks {
let start_frame = peak_idx * frames_per_peak;
let end_frame = ((peak_idx + 1) * frames_per_peak).min(total_frames);
let mut min = 0.0f32;
let mut max = 0.0f32;
// Scan all samples in this window
for frame_idx in start_frame..end_frame {
// For multi-channel audio, combine all channels
for ch in 0..self.channels as usize {
let sample_idx = frame_idx * self.channels as usize + ch;
if sample_idx < self.data.len() {
let sample = self.data[sample_idx];
min = min.min(sample);
max = max.max(sample);
}
}
}
peaks.push(WaveformPeak { min, max });
}
peaks
}
}

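Typical use of the loader, as a sketch (the path is a placeholder):

use daw_backend::AudioFile;

fn main() -> Result<(), String> {
    let audio = AudioFile::load("loop.wav")?; // hypothetical path
    println!(
        "{} ch @ {} Hz, {:.2} s",
        audio.channels,
        audio.sample_rate,
        audio.duration()
    );
    // 1000 min/max pairs, enough for timeline drawing.
    let peaks = audio.generate_waveform_overview(1000);
    println!("{} peaks", peaks.len());
    Ok(())
}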

@ -0,0 +1,165 @@
use crate::audio::midi::{MidiClip, MidiClipId, MidiEvent};
use std::fs;
use std::path::Path;
/// Load a MIDI file and convert it to a MidiClip
pub fn load_midi_file<P: AsRef<Path>>(
path: P,
clip_id: MidiClipId,
_sample_rate: u32,
) -> Result<MidiClip, String> {
// Read the MIDI file
let data = fs::read(path.as_ref()).map_err(|e| format!("Failed to read MIDI file: {}", e))?;
// Parse with midly
let smf = midly::Smf::parse(&data).map_err(|e| format!("Failed to parse MIDI file: {}", e))?;
// Convert timing to ticks per second
let ticks_per_beat = match smf.header.timing {
midly::Timing::Metrical(tpb) => tpb.as_int() as f64,
midly::Timing::Timecode(fps, subframe) => {
// For timecode, use the equivalent ticks per second.
// Note: the tempo-scaled conversion below still treats this value as
// ticks per beat, so timecode timing is only approximate.
(fps.as_f32() * subframe as f32) as f64
}
};
// First pass: collect all events with their tick positions and tempo changes
#[derive(Debug)]
enum RawEvent {
Midi {
tick: u64,
channel: u8,
message: midly::MidiMessage,
},
Tempo {
tick: u64,
microseconds_per_beat: f64,
},
}
let mut raw_events = Vec::new();
let mut max_time_ticks = 0u64;
// Collect all events from all tracks with their absolute tick positions
for track in &smf.tracks {
let mut current_tick = 0u64;
for event in track {
current_tick += event.delta.as_int() as u64;
max_time_ticks = max_time_ticks.max(current_tick);
match event.kind {
midly::TrackEventKind::Midi { channel, message } => {
raw_events.push(RawEvent::Midi {
tick: current_tick,
channel: channel.as_int(),
message,
});
}
midly::TrackEventKind::Meta(midly::MetaMessage::Tempo(tempo)) => {
raw_events.push(RawEvent::Tempo {
tick: current_tick,
microseconds_per_beat: tempo.as_int() as f64,
});
}
_ => {
// Ignore other meta events
}
}
}
}
// Sort all events by tick position
raw_events.sort_by_key(|e| match e {
RawEvent::Midi { tick, .. } => *tick,
RawEvent::Tempo { tick, .. } => *tick,
});
// Second pass: convert ticks to timestamps with proper tempo tracking
let mut events = Vec::new();
let mut microseconds_per_beat = 500000.0; // Default: 120 BPM
let mut last_tick = 0u64;
let mut accumulated_time = 0.0; // Time in seconds
for raw_event in raw_events {
match raw_event {
RawEvent::Tempo {
tick,
microseconds_per_beat: new_tempo,
} => {
// Update accumulated time up to this tempo change
let delta_ticks = tick - last_tick;
let delta_time = (delta_ticks as f64 / ticks_per_beat)
* (microseconds_per_beat / 1_000_000.0);
accumulated_time += delta_time;
last_tick = tick;
// Update tempo for future events
microseconds_per_beat = new_tempo;
}
RawEvent::Midi {
tick,
channel,
message,
} => {
// Calculate time for this event
let delta_ticks = tick - last_tick;
let delta_time = (delta_ticks as f64 / ticks_per_beat)
* (microseconds_per_beat / 1_000_000.0);
accumulated_time += delta_time;
last_tick = tick;
// Store timestamp in seconds (sample-rate independent)
let timestamp = accumulated_time;
match message {
midly::MidiMessage::NoteOn { key, vel } => {
let velocity = vel.as_int();
if velocity > 0 {
events.push(MidiEvent::note_on(
timestamp,
channel,
key.as_int(),
velocity,
));
} else {
events.push(MidiEvent::note_off(timestamp, channel, key.as_int(), 64));
}
}
midly::MidiMessage::NoteOff { key, vel } => {
events.push(MidiEvent::note_off(
timestamp,
channel,
key.as_int(),
vel.as_int(),
));
}
midly::MidiMessage::Controller { controller, value } => {
let status = 0xB0 | channel;
events.push(MidiEvent::new(
timestamp,
status,
controller.as_int(),
value.as_int(),
));
}
_ => {
// Ignore other MIDI messages
}
}
}
}
}
// Calculate final clip duration
let final_delta_ticks = max_time_ticks - last_tick;
let final_delta_time =
(final_delta_ticks as f64 / ticks_per_beat) * (microseconds_per_beat / 1_000_000.0);
let duration_seconds = accumulated_time + final_delta_time;
// Create the MIDI clip
let mut clip = MidiClip::new(clip_id, 0.0, duration_seconds);
clip.events = events;
Ok(clip)
}

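A worked example of the tick-to-seconds conversion above: at the default tempo of 500000 µs per beat (120 BPM) and 480 ticks per beat, a delta of 240 ticks advances the accumulated time by (240 / 480) × 0.5 = 0.25 seconds; each tempo meta event rescales that rate for all subsequent deltas.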

@ -0,0 +1,267 @@
use crate::audio::track::TrackId;
use crate::command::Command;
use midir::{MidiInput, MidiInputConnection};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;
/// Manages external MIDI input devices and routes MIDI to the currently active track
pub struct MidiInputManager {
connections: Arc<Mutex<Vec<ActiveMidiConnection>>>,
active_track_id: Arc<Mutex<Option<TrackId>>>,
#[allow(dead_code)]
command_tx: Arc<Mutex<rtrb::Producer<Command>>>,
}
struct ActiveMidiConnection {
#[allow(dead_code)]
device_name: String,
#[allow(dead_code)]
connection: MidiInputConnection<()>,
}
impl MidiInputManager {
/// Create a new MIDI input manager and auto-connect to all available devices
pub fn new(command_tx: rtrb::Producer<Command>) -> Result<Self, String> {
let active_track_id = Arc::new(Mutex::new(None));
let connections = Arc::new(Mutex::new(Vec::new()));
// Wrap command producer in Arc<Mutex> for sharing across MIDI callbacks
let shared_command_tx = Arc::new(Mutex::new(command_tx));
// Connect to all currently available devices
Self::connect_to_devices(&connections, &shared_command_tx, &active_track_id)?;
// Create the manager
let manager = Self {
connections: connections.clone(),
active_track_id: active_track_id.clone(),
command_tx: shared_command_tx.clone(),
};
// Spawn hot-plug monitoring thread
let hotplug_connections = connections.clone();
let hotplug_command_tx = shared_command_tx.clone();
let hotplug_active_id = active_track_id.clone();
thread::spawn(move || {
loop {
thread::sleep(Duration::from_secs(2)); // Check every 2 seconds
// Try to connect to new devices
if let Err(e) = Self::connect_to_devices(
&hotplug_connections,
&hotplug_command_tx,
&hotplug_active_id,
) {
eprintln!("MIDI hot-plug scan error: {}", e);
}
}
});
Ok(manager)
}
/// Connect to all available MIDI devices (skips already connected devices)
fn connect_to_devices(
connections: &Arc<Mutex<Vec<ActiveMidiConnection>>>,
command_tx: &Arc<Mutex<rtrb::Producer<Command>>>,
active_track_id: &Arc<Mutex<Option<TrackId>>>,
) -> Result<(), String> {
// Initialize MIDI input
let mut midi_in = MidiInput::new("Lightningbeam")
.map_err(|e| format!("Failed to initialize MIDI input: {}", e))?;
// Get all available MIDI input ports
let ports = midi_in.ports();
// Get list of currently available device names
let mut available_devices = Vec::new();
for port in &ports {
if let Ok(port_name) = midi_in.port_name(port) {
available_devices.push(port_name);
}
}
// Remove disconnected devices from our connections list
{
let mut conns = connections.lock().unwrap();
let before_count = conns.len();
conns.retain(|conn| available_devices.contains(&conn.device_name));
let after_count = conns.len();
if before_count != after_count {
println!("MIDI: Removed {} disconnected device(s)", before_count - after_count);
}
}
// Get list of already connected device names
let connected_devices: Vec<String> = {
let conns = connections.lock().unwrap();
conns.iter().map(|c| c.device_name.clone()).collect()
};
// Store port info first
let mut port_infos = Vec::new();
for port in &ports {
if let Ok(port_name) = midi_in.port_name(port) {
// Skip if already connected
if !connected_devices.contains(&port_name) {
port_infos.push((port.clone(), port_name));
}
}
}
// If no new devices, return early
if port_infos.is_empty() {
return Ok(());
}
println!("MIDI: Found {} new device(s)", port_infos.len());
// Connect to each new device
for (port, port_name) in port_infos {
println!("MIDI: Connecting to device: {}", port_name);
// Recreate MidiInput for this connection
midi_in = MidiInput::new("Lightningbeam")
.map_err(|e| format!("Failed to recreate MIDI input: {}", e))?;
let device_name = port_name.clone();
let cmd_tx = command_tx.clone();
let active_id = active_track_id.clone();
match midi_in.connect(
&port,
&format!("lightningbeam-{}", port_name),
move |_timestamp, message, _| {
Self::on_midi_message(message, &cmd_tx, &active_id, &device_name);
},
(),
) {
Ok(connection) => {
let mut conns = connections.lock().unwrap();
conns.push(ActiveMidiConnection {
device_name: port_name.clone(),
connection,
});
println!("MIDI: Connected to: {}", port_name);
}
Err(e) => {
eprintln!("MIDI: Failed to connect to {}: {}", port_name, e);
}
}
// midi_in was consumed by connect(); the top of the loop recreates it
// for the next port.
}
let conn_count = connections.lock().unwrap().len();
println!("MIDI Input: Total connected devices: {}", conn_count);
Ok(())
}
/// MIDI input callback - parses MIDI messages and sends commands to audio engine
fn on_midi_message(
message: &[u8],
command_tx: &Mutex<rtrb::Producer<Command>>,
active_track_id: &Arc<Mutex<Option<TrackId>>>,
device_name: &str,
) {
if message.is_empty() {
return;
}
// Get the currently active track
let track_id = {
let active = active_track_id.lock().unwrap();
match *active {
Some(id) => id,
None => {
// No active track, ignore MIDI input
return;
}
}
};
let status_byte = message[0];
let status = status_byte & 0xF0;
let _channel = status_byte & 0x0F;
match status {
0x90 => {
// Note On
if message.len() >= 3 {
let note = message[1];
let velocity = message[2];
// Treat velocity 0 as Note Off (per MIDI spec)
if velocity == 0 {
let mut tx = command_tx.lock().unwrap();
let _ = tx.push(Command::SendMidiNoteOff(track_id, note));
println!("MIDI [{}] Note Off: {} (velocity 0)", device_name, note);
} else {
let mut tx = command_tx.lock().unwrap();
let _ = tx.push(Command::SendMidiNoteOn(track_id, note, velocity));
println!("MIDI [{}] Note On: {} vel {}", device_name, note, velocity);
}
}
}
0x80 => {
// Note Off
if message.len() >= 3 {
let note = message[1];
let mut tx = command_tx.lock().unwrap();
let _ = tx.push(Command::SendMidiNoteOff(track_id, note));
println!("MIDI [{}] Note Off: {}", device_name, note);
}
}
0xB0 => {
// Control Change
if message.len() >= 3 {
let controller = message[1];
let value = message[2];
println!("MIDI [{}] CC: {} = {}", device_name, controller, value);
// TODO: Map to automation lanes in Phase 5
}
}
0xE0 => {
// Pitch Bend
if message.len() >= 3 {
let lsb = message[1] as u16;
let msb = message[2] as u16;
let value = (msb << 7) | lsb;
println!("MIDI [{}] Pitch Bend: {}", device_name, value);
// TODO: Map to pitch automation in Phase 5
}
}
_ => {
// Other MIDI messages (aftertouch, program change, etc.)
// Ignore for now
}
}
}
/// Set the currently active MIDI track
pub fn set_active_track(&self, track_id: Option<TrackId>) {
let mut active = self.active_track_id.lock().unwrap();
*active = track_id;
match track_id {
Some(id) => println!("MIDI Input: Routing to track {}", id),
None => println!("MIDI Input: No active track"),
}
}
/// Get the number of connected devices
pub fn device_count(&self) -> usize {
self.connections.lock().unwrap().len()
}
}

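For reference on the pitch-bend reconstruction above: the 14-bit value ranges over 0..=16383, and msb = 0x40 with lsb = 0x00 gives (64 << 7) | 0 = 8192, the centered no-bend position.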

@ -0,0 +1,9 @@
pub mod audio_file;
pub mod midi_file;
pub mod midi_input;
pub mod wav_writer;
pub use audio_file::{AudioFile, WaveformPeak};
pub use midi_file::load_midi_file;
pub use midi_input::MidiInputManager;
pub use wav_writer::WavWriter;


@ -0,0 +1,125 @@
/// Incremental WAV file writer for streaming audio to disk
use std::fs::File;
use std::io::{self, Seek, SeekFrom, Write};
use std::path::Path;
/// WAV file writer that supports incremental writing
pub struct WavWriter {
file: File,
sample_rate: u32,
channels: u32,
frames_written: usize,
}
impl WavWriter {
/// Create a new WAV file and write initial header
/// The header is written with placeholder sizes that will be updated on finalization
pub fn create(path: impl AsRef<Path>, sample_rate: u32, channels: u32) -> io::Result<Self> {
let mut file = File::create(path)?;
// Write initial WAV header with placeholder sizes
write_wav_header(&mut file, sample_rate, channels, 0)?;
Ok(Self {
file,
sample_rate,
channels,
frames_written: 0,
})
}
/// Append audio samples to the file
/// Expects interleaved f32 samples in range [-1.0, 1.0]
pub fn write_samples(&mut self, samples: &[f32]) -> io::Result<()> {
// Convert f32 samples to 16-bit PCM
let pcm_data: Vec<u8> = samples
.iter()
.flat_map(|&sample| {
let clamped = sample.clamp(-1.0, 1.0);
let pcm_value = (clamped * 32767.0) as i16;
pcm_value.to_le_bytes()
})
.collect();
self.file.write_all(&pcm_data)?;
self.frames_written += samples.len() / self.channels as usize;
Ok(())
}
/// Get the current number of frames written
pub fn frames_written(&self) -> usize {
self.frames_written
}
/// Get the current duration in seconds
pub fn duration(&self) -> f64 {
self.frames_written as f64 / self.sample_rate as f64
}
/// Finalize the WAV file by updating the header with correct sizes
pub fn finalize(mut self) -> io::Result<()> {
// Flush any remaining data
self.file.flush()?;
// Calculate total data size
let data_size = self.frames_written * self.channels as usize * 2; // 2 bytes per sample (16-bit)
// WAV file structure:
// RIFF header (12 bytes): "RIFF" + size + "WAVE"
// fmt chunk (24 bytes): "fmt " + size + format data
// data chunk header (8 bytes): "data" + size
// Total header = 44 bytes
// RIFF chunk size = everything after offset 8 = 4 (WAVE) + 24 (fmt) + 8 (data header) + data_size
let riff_chunk_size = 36 + data_size; // 36 = size from "WAVE" to end of data chunk header
// Seek to RIFF chunk size (offset 4)
self.file.seek(SeekFrom::Start(4))?;
self.file.write_all(&(riff_chunk_size as u32).to_le_bytes())?;
// Seek to data chunk size (offset 40)
self.file.seek(SeekFrom::Start(40))?;
self.file.write_all(&(data_size as u32).to_le_bytes())?;
// Flush and sync to ensure all data is written to disk before file is closed
self.file.flush()?;
self.file.sync_all()?;
Ok(())
}
}
/// Write WAV header with specified parameters
fn write_wav_header(file: &mut File, sample_rate: u32, channels: u32, frames: usize) -> io::Result<()> {
let bytes_per_sample = 2u16; // 16-bit PCM
let data_size = (frames * channels as usize * bytes_per_sample as usize) as u32;
// RIFF chunk size = everything after offset 8
// = 4 (WAVE) + 24 (fmt chunk) + 8 (data chunk header) + data_size
let riff_chunk_size = 36 + data_size;
// RIFF header
file.write_all(b"RIFF")?;
file.write_all(&riff_chunk_size.to_le_bytes())?;
file.write_all(b"WAVE")?;
// fmt chunk
file.write_all(b"fmt ")?;
file.write_all(&16u32.to_le_bytes())?; // fmt chunk size
file.write_all(&1u16.to_le_bytes())?; // PCM format
file.write_all(&(channels as u16).to_le_bytes())?;
file.write_all(&sample_rate.to_le_bytes())?;
let byte_rate = sample_rate * channels * bytes_per_sample as u32;
file.write_all(&byte_rate.to_le_bytes())?;
let block_align = channels as u16 * bytes_per_sample;
file.write_all(&block_align.to_le_bytes())?;
file.write_all(&(bytes_per_sample * 8).to_le_bytes())?; // bits per sample
// data chunk header
file.write_all(b"data")?;
file.write_all(&data_size.to_le_bytes())?;
Ok(())
}

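An end-to-end sketch writing one second of a 440 Hz sine as mono 16-bit WAV (the output path is a placeholder):

use daw_backend::WavWriter;
use std::f32::consts::PI;

fn main() -> std::io::Result<()> {
    let sample_rate = 48_000;
    let mut writer = WavWriter::create("sine.wav", sample_rate, 1)?; // hypothetical path
    let samples: Vec<f32> = (0..sample_rate)
        .map(|n| (2.0 * PI * 440.0 * n as f32 / sample_rate as f32).sin() * 0.5)
        .collect();
    writer.write_samples(&samples)?;
    assert_eq!(writer.frames_written(), sample_rate as usize);
    writer.finalize() // patches the RIFF/data sizes in the header
}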
daw-backend/src/lib.rs Normal file

@ -0,0 +1,235 @@
// DAW Backend - Phase 6: Hierarchical Tracks
//
// A DAW backend with timeline-based playback, clips, audio pool, effects, and hierarchical track groups.
// Supports multiple tracks, mixing, per-track volume/mute/solo, shared audio data, effect chains, and nested groups.
// Uses lock-free command queues, cpal for audio I/O, and symphonia for audio file decoding.
pub mod audio;
pub mod command;
pub mod dsp;
pub mod effects;
pub mod io;
pub mod tui;
// Re-export commonly used types
pub use audio::{
AudioPool, AudioTrack, AutomationLane, AutomationLaneId, AutomationPoint, BufferPool, Clip, ClipId, CurveType, Engine, EngineController,
Metatrack, MidiClip, MidiClipId, MidiEvent, MidiTrack, ParameterId, PoolAudioFile, Project, RecordingState, RenderContext, Track, TrackId,
TrackNode,
};
pub use audio::node_graph::{GraphPreset, AudioGraph, PresetMetadata, SerializedConnection, SerializedNode};
pub use command::{AudioEvent, Command, OscilloscopeData};
pub use command::types::AutomationKeyframeData;
pub use io::{load_midi_file, AudioFile, WaveformPeak, WavWriter};
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
/// Trait for emitting audio events to external systems (UI, logging, etc.)
/// This allows the DAW backend to remain framework-agnostic
pub trait EventEmitter: Send + Sync {
/// Emit an audio event
fn emit(&self, event: AudioEvent);
}
/// Simple audio system that handles cpal initialization internally
pub struct AudioSystem {
pub controller: EngineController,
pub stream: cpal::Stream,
pub sample_rate: u32,
pub channels: u32,
}
impl AudioSystem {
/// Initialize the audio system with default input and output devices
///
/// # Arguments
/// * `event_emitter` - Optional event emitter for pushing events to external systems
/// * `buffer_size` - Audio buffer size in frames (128, 256, 512, 1024, etc.)
/// Smaller = lower latency but higher CPU usage. Default: 256
pub fn new(
event_emitter: Option<std::sync::Arc<dyn EventEmitter>>,
buffer_size: u32,
) -> Result<Self, String> {
let host = cpal::default_host();
// Get output device
let output_device = host
.default_output_device()
.ok_or("No output device available")?;
let default_output_config = output_device.default_output_config().map_err(|e| e.to_string())?;
let sample_rate = default_output_config.sample_rate().0;
let channels = default_output_config.channels() as u32;
// Create queues
let (command_tx, command_rx) = rtrb::RingBuffer::new(512); // Larger buffer for MIDI + UI commands
let (event_tx, event_rx) = rtrb::RingBuffer::new(256);
let (query_tx, query_rx) = rtrb::RingBuffer::new(16); // Smaller buffer for synchronous queries
let (query_response_tx, query_response_rx) = rtrb::RingBuffer::new(16);
// Create input ringbuffer for recording (large buffer for audio samples)
// Buffer size: 10 seconds of audio at 48kHz stereo = 48000 * 2 * 10 = 960000 samples
let input_buffer_size = (sample_rate * channels * 10) as usize;
let (mut input_tx, input_rx) = rtrb::RingBuffer::new(input_buffer_size);
// Create engine
let mut engine = Engine::new(sample_rate, channels, command_rx, event_tx, query_rx, query_response_tx);
engine.set_input_rx(input_rx);
let controller = engine.get_controller(command_tx, query_tx, query_response_rx);
// Initialize MIDI input manager for external MIDI devices
// Create a separate command channel for MIDI input
let (midi_command_tx, midi_command_rx) = rtrb::RingBuffer::new(256);
match io::MidiInputManager::new(midi_command_tx) {
Ok(midi_manager) => {
println!("MIDI input initialized successfully");
engine.set_midi_input_manager(midi_manager);
engine.set_midi_command_rx(midi_command_rx);
}
Err(e) => {
eprintln!("Warning: Failed to initialize MIDI input: {}", e);
eprintln!("External MIDI controllers will not be available");
}
}
// Build output stream with configurable buffer size
let mut output_config: cpal::StreamConfig = default_output_config.clone().into();
// Set the requested buffer size
output_config.buffer_size = cpal::BufferSize::Fixed(buffer_size);
// Scratch buffer reused across callbacks; assumes the device never asks
// for more than 16384 interleaved samples per callback.
let mut output_buffer = vec![0.0f32; 16384];
// Log audio configuration
println!("Audio Output Configuration:");
println!(" Sample Rate: {} Hz", output_config.sample_rate.0);
println!(" Channels: {}", output_config.channels);
println!(" Buffer Size: {:?}", output_config.buffer_size);
// Calculate expected latency
if let cpal::BufferSize::Fixed(size) = output_config.buffer_size {
let latency_ms = (size as f64 / output_config.sample_rate.0 as f64) * 1000.0;
println!(" Expected Latency: {:.2} ms", latency_ms);
}
let mut first_callback = true;
let output_stream = output_device
.build_output_stream(
&output_config,
move |data: &mut [f32], _: &cpal::OutputCallbackInfo| {
if first_callback {
let frames = data.len() / output_config.channels as usize;
let latency_ms = (frames as f64 / output_config.sample_rate.0 as f64) * 1000.0;
println!("Audio callback buffer size: {} samples ({} frames, {:.2} ms latency)",
data.len(), frames, latency_ms);
first_callback = false;
}
let buf = &mut output_buffer[..data.len()];
buf.fill(0.0);
engine.process(buf);
data.copy_from_slice(buf);
},
|err| eprintln!("Output stream error: {}", err),
None,
)
.map_err(|e| e.to_string())?;
// Get input device
let input_device = match host.default_input_device() {
Some(device) => device,
None => {
eprintln!("Warning: No input device available, recording will be disabled");
// Start output stream and return without input
output_stream.play().map_err(|e| e.to_string())?;
// Spawn emitter thread if provided
if let Some(emitter) = event_emitter {
Self::spawn_emitter_thread(event_rx, emitter);
}
return Ok(Self {
controller,
stream: output_stream,
sample_rate,
channels,
});
}
};
// Get input config matching output sample rate and channels if possible
let input_config = match input_device.default_input_config() {
Ok(config) => {
let mut cfg: cpal::StreamConfig = config.into();
// Try to match output sample rate and channels
cfg.sample_rate = cpal::SampleRate(sample_rate);
cfg.channels = channels as u16;
cfg
}
Err(e) => {
eprintln!("Warning: Could not get input config: {}, recording will be disabled", e);
output_stream.play().map_err(|e| e.to_string())?;
// Spawn emitter thread if provided
if let Some(emitter) = event_emitter {
Self::spawn_emitter_thread(event_rx, emitter);
}
return Ok(Self {
controller,
stream: output_stream,
sample_rate,
channels,
});
}
};
// Build input stream that feeds into the ringbuffer
let input_stream = input_device
.build_input_stream(
&input_config,
move |data: &[f32], _: &cpal::InputCallbackInfo| {
// Push input samples to ringbuffer for recording
for &sample in data {
let _ = input_tx.push(sample);
}
},
|err| eprintln!("Input stream error: {}", err),
None,
)
.map_err(|e| e.to_string())?;
// Start both streams
output_stream.play().map_err(|e| e.to_string())?;
input_stream.play().map_err(|e| e.to_string())?;
// Leak the input stream to keep it alive
Box::leak(Box::new(input_stream));
// Spawn emitter thread if provided
if let Some(emitter) = event_emitter {
Self::spawn_emitter_thread(event_rx, emitter);
}
Ok(Self {
controller,
stream: output_stream,
sample_rate,
channels,
})
}
/// Spawn a background thread to emit events from the ringbuffer
fn spawn_emitter_thread(mut event_rx: rtrb::Consumer<AudioEvent>, emitter: std::sync::Arc<dyn EventEmitter>) {
std::thread::spawn(move || {
loop {
// Wait for events and emit them
if let Ok(event) = event_rx.pop() {
emitter.emit(event);
} else {
// No events available, sleep briefly to avoid busy-waiting
std::thread::sleep(std::time::Duration::from_millis(1));
}
}
});
}
}

daw-backend/src/main.rs Normal file

@@ -0,0 +1,84 @@
use daw_backend::{AudioEvent, AudioSystem, EventEmitter};
use daw_backend::tui::run_tui;
use std::env;
use std::sync::{Arc, Mutex};
/// Event emitter that pushes events to a ringbuffer for the TUI
struct TuiEventEmitter {
tx: Arc<Mutex<rtrb::Producer<AudioEvent>>>,
}
impl TuiEventEmitter {
fn new(tx: rtrb::Producer<AudioEvent>) -> Self {
Self {
tx: Arc::new(Mutex::new(tx)),
}
}
}
impl EventEmitter for TuiEventEmitter {
fn emit(&self, event: AudioEvent) {
if let Ok(mut tx) = self.tx.lock() {
let _ = tx.push(event);
}
}
}
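// Any frontend can implement EventEmitter. A minimal hypothetical sketch
// (assuming AudioEvent derives Debug) that just logs events instead of
// queuing them for a UI:
//
//     struct LogEmitter;
//     impl EventEmitter for LogEmitter {
//         fn emit(&self, event: AudioEvent) {
//             // Runs on the emitter thread, not the audio callback,
//             // so printing here is fine.
//             eprintln!("audio event: {:?}", event);
//         }
//     }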
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Check if user wants the old CLI mode
let args: Vec<String> = env::args().collect();
if args.len() > 1 && args[1] == "--help" {
print_usage();
return Ok(());
}
println!("Lightningbeam DAW - Starting TUI...\n");
println!("Controls:");
println!(" ESC - Enter Command mode (type commands like 'track MyTrack')");
println!(" i - Enter Play mode (play MIDI notes with keyboard)");
println!(" awsedftgyhujkolp;' - Play MIDI notes (chromatic scale in Play mode)");
println!(" r - Release all notes (in Play mode)");
println!(" SPACE - Play/Pause");
println!(" Ctrl+Q - Quit");
println!("\nStarting audio system...");
// Create event channel for TUI
let (event_tx, event_rx) = rtrb::RingBuffer::new(256);
let emitter = Arc::new(TuiEventEmitter::new(event_tx));
// Initialize audio system with event emitter and default buffer size
let mut audio_system = AudioSystem::new(Some(emitter), 256)?;
println!("Audio system initialized:");
println!(" Sample rate: {} Hz", audio_system.sample_rate);
println!(" Channels: {}", audio_system.channels);
// Create a test MIDI track to verify event handling
audio_system.controller.create_midi_track("Test Track".to_string());
println!("\nTUI starting...\n");
std::thread::sleep(std::time::Duration::from_millis(100)); // Give the engine time to emit the TrackCreated event before the TUI takes over the terminal
// Wrap event receiver for TUI
let event_rx = Arc::new(Mutex::new(event_rx));
// Run the TUI
run_tui(audio_system.controller, event_rx)?;
println!("\nGoodbye!");
Ok(())
}
fn print_usage() {
println!("Lightningbeam DAW - Terminal User Interface");
println!("\nUsage: {} [OPTIONS]", env::args().next().unwrap());
println!("\nOptions:");
println!(" --help Show this help message");
println!("\nThe DAW will start in TUI mode with an empty project.");
println!("Use commands to create tracks and load audio:");
println!(" :track <name> - Create MIDI track");
println!(" :audiotrack <name> - Create audio track");
println!(" :play - Start playback");
println!(" :stop - Stop playback");
println!(" :quit - Exit application");
}

daw-backend/src/tui/mod.rs Normal file

@ -0,0 +1,923 @@
use crate::audio::EngineController;
use crate::command::AudioEvent;
use crate::io::load_midi_file;
use crossterm::{
event::{self, DisableMouseCapture, EnableMouseCapture, Event, KeyCode, KeyModifiers},
execute,
terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use ratatui::{
backend::CrosstermBackend,
layout::{Constraint, Direction, Layout},
style::{Color, Modifier, Style},
text::{Line, Span},
widgets::{Block, Borders, List, ListItem, Paragraph},
Frame, Terminal,
};
use std::io;
use std::sync::{Arc, Mutex};
use std::time::Duration;
/// TUI application mode
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum AppMode {
/// Command mode - type vim-style commands
Command,
/// Play mode - use keyboard to play MIDI notes
Play,
}
/// TUI application state
pub struct TuiApp {
/// Current application mode
mode: AppMode,
/// Command input buffer (for Command mode)
command_input: String,
/// Current playback position (seconds)
playback_position: f64,
/// Whether playback is active
is_playing: bool,
/// Status message to display
status_message: String,
/// List of tracks (track_id, name)
tracks: Vec<(u32, String)>,
/// Currently selected track for MIDI input
selected_track: Option<u32>,
/// Active MIDI notes (currently held down)
active_notes: Vec<u8>,
/// Command history for up/down navigation
command_history: Vec<String>,
/// Current position in command history
history_index: Option<usize>,
/// Clips on timeline: (track_id, clip_id, start_time, duration, name, notes)
/// Notes: Vec<(pitch, time_offset, duration)>
clips: Vec<(u32, u32, f64, f64, String, Vec<(u8, f64, f64)>)>,
/// Next clip ID for locally created clips
next_clip_id: u32,
/// Timeline scroll offset in seconds (start of visible window)
timeline_scroll: f64,
/// Timeline visible duration in seconds (zoom level)
timeline_visible_duration: f64,
}
impl TuiApp {
pub fn new() -> Self {
Self {
mode: AppMode::Command,
command_input: String::new(),
playback_position: 0.0,
is_playing: false,
status_message: "SPACE=play/pause | ←/→ scroll | -/+ zoom | 'i'=Play mode | Type 'help'".to_string(),
tracks: Vec::new(),
selected_track: None,
active_notes: Vec::new(),
command_history: Vec::new(),
history_index: None,
clips: Vec::new(),
next_clip_id: 0,
timeline_scroll: 0.0,
timeline_visible_duration: 10.0, // Show 10 seconds at a time by default
}
}
/// Switch to command mode
pub fn enter_command_mode(&mut self) {
self.mode = AppMode::Command;
self.command_input.clear();
self.history_index = None;
self.status_message = "-- COMMAND -- SPACE=play/pause | ←/→ scroll | -/+ zoom | 'i' for Play mode | Type 'help'".to_string();
}
/// Switch to play mode
pub fn enter_play_mode(&mut self) {
self.mode = AppMode::Play;
self.command_input.clear();
self.status_message = "-- PLAY -- Press '?' for help, 'ESC' for Command mode".to_string();
}
/// Add a character to command input
pub fn push_command_char(&mut self, c: char) {
self.command_input.push(c);
}
/// Remove last character from command input
pub fn pop_command_char(&mut self) {
self.command_input.pop();
}
/// Get the current command input
pub fn command_input(&self) -> &str {
&self.command_input
}
/// Clear command input
pub fn clear_command(&mut self) {
self.command_input.clear();
self.history_index = None;
}
/// Add command to history
pub fn add_to_history(&mut self, command: String) {
if !command.is_empty() && self.command_history.last() != Some(&command) {
self.command_history.push(command);
}
}
/// Navigate command history up
pub fn history_up(&mut self) {
if self.command_history.is_empty() {
return;
}
let new_index = match self.history_index {
None => Some(self.command_history.len() - 1),
Some(0) => Some(0),
Some(i) => Some(i - 1),
};
if let Some(idx) = new_index {
self.history_index = Some(idx);
self.command_input = self.command_history[idx].clone();
}
}
/// Navigate command history down
pub fn history_down(&mut self) {
match self.history_index {
None => {}
Some(i) if i >= self.command_history.len() - 1 => {
self.history_index = None;
self.command_input.clear();
}
Some(i) => {
let new_idx = i + 1;
self.history_index = Some(new_idx);
self.command_input = self.command_history[new_idx].clone();
}
}
}
/// Update playback position and auto-scroll timeline if needed
pub fn update_playback_position(&mut self, position: f64) {
self.playback_position = position;
// Auto-scroll to keep playhead in view when playing
if self.is_playing {
// Keep playhead in the visible window, with some margin
let margin = self.timeline_visible_duration * 0.1; // 10% margin
// If playhead is ahead of visible window, scroll forward
if position > self.timeline_scroll + self.timeline_visible_duration - margin {
self.timeline_scroll = (position - self.timeline_visible_duration * 0.5).max(0.0);
}
// If playhead is behind visible window, scroll backward
else if position < self.timeline_scroll + margin {
self.timeline_scroll = (position - margin).max(0.0);
}
}
}
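// Example: with a 10 s window at scroll 0 the margin is 1 s; once the
// playhead passes 9 s the window re-centers on it (scroll = position - 5 s),
// and if it seeks back before 1 s the window follows it backward.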
/// Set playing state
pub fn set_playing(&mut self, playing: bool) {
self.is_playing = playing;
}
/// Set status message
pub fn set_status(&mut self, message: String) {
self.status_message = message;
}
/// Add a track to the UI
pub fn add_track(&mut self, track_id: u32, name: String) {
self.tracks.push((track_id, name));
// Auto-select first MIDI track for playing
if self.selected_track.is_none() {
self.selected_track = Some(track_id);
}
}
/// Clear all tracks
pub fn clear_tracks(&mut self) {
self.tracks.clear();
self.clips.clear();
self.selected_track = None;
self.next_clip_id = 0;
self.timeline_scroll = 0.0;
}
/// Select a track by index
pub fn select_track(&mut self, index: usize) {
if let Some((track_id, _)) = self.tracks.get(index) {
self.selected_track = Some(*track_id);
}
}
/// Get selected track
pub fn selected_track(&self) -> Option<u32> {
self.selected_track
}
/// Add a clip to the timeline
pub fn add_clip(&mut self, track_id: u32, clip_id: u32, start_time: f64, duration: f64, name: String, notes: Vec<(u8, f64, f64)>) {
self.clips.push((track_id, clip_id, start_time, duration, name, notes));
}
/// Get max timeline duration based on clips
pub fn get_timeline_duration(&self) -> f64 {
self.clips
.iter()
.map(|(_, _, start, dur, _, _)| start + dur)
.max_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal))
.unwrap_or(10.0) // Default to 10 seconds if no clips
}
/// Add an active MIDI note
pub fn add_active_note(&mut self, note: u8) {
if !self.active_notes.contains(&note) {
self.active_notes.push(note);
}
}
/// Remove an active MIDI note
pub fn remove_active_note(&mut self, note: u8) {
self.active_notes.retain(|&n| n != note);
}
/// Get current mode
pub fn mode(&self) -> AppMode {
self.mode
}
/// Scroll timeline left
pub fn scroll_timeline_left(&mut self) {
let scroll_amount = self.timeline_visible_duration * 0.2; // Scroll by 20% of visible duration
self.timeline_scroll = (self.timeline_scroll - scroll_amount).max(0.0);
}
/// Scroll timeline right
pub fn scroll_timeline_right(&mut self) {
let scroll_amount = self.timeline_visible_duration * 0.2; // Scroll by 20% of visible duration
let max_duration = self.get_timeline_duration();
self.timeline_scroll = (self.timeline_scroll + scroll_amount).min(max_duration - self.timeline_visible_duration).max(0.0);
}
/// Zoom timeline in (show less time, more detail)
pub fn zoom_timeline_in(&mut self) {
self.timeline_visible_duration = (self.timeline_visible_duration * 0.8).max(1.0); // Min 1 second visible
}
/// Zoom timeline out (show more time, less detail)
pub fn zoom_timeline_out(&mut self) {
let max_duration = self.get_timeline_duration();
self.timeline_visible_duration = (self.timeline_visible_duration * 1.25).min(max_duration).max(1.0);
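// Note: the zoom steps are exact inverses (0.8 * 1.25 = 1.0), so zooming
// in and back out restores the original window size, up to the clamps.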
}
}
/// Map keyboard keys to MIDI notes
/// Uses chromatic layout: awsedftgyhujkolp;'
/// Covers C4 (MIDI note 60) up to F5, about an octave and a half
pub fn key_to_midi_note(key: KeyCode) -> Option<u8> {
let base = 60; // C4
match key {
KeyCode::Char('a') => Some(base), // C4
KeyCode::Char('w') => Some(base + 1), // C#4
KeyCode::Char('s') => Some(base + 2), // D4
KeyCode::Char('e') => Some(base + 3), // D#4
KeyCode::Char('d') => Some(base + 4), // E4
KeyCode::Char('f') => Some(base + 5), // F4
KeyCode::Char('t') => Some(base + 6), // F#4
KeyCode::Char('g') => Some(base + 7), // G4
KeyCode::Char('y') => Some(base + 8), // G#4
KeyCode::Char('h') => Some(base + 9), // A4
KeyCode::Char('u') => Some(base + 10), // A#4
KeyCode::Char('j') => Some(base + 11), // B4
KeyCode::Char('k') => Some(base + 12), // C5
KeyCode::Char('o') => Some(base + 13), // C#5
KeyCode::Char('l') => Some(base + 14), // D5
KeyCode::Char('p') => Some(base + 15), // D#5
KeyCode::Char(';') => Some(base + 16), // E5
KeyCode::Char('\'') => Some(base + 17), // F5
_ => None,
}
}
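// e.g. key_to_midi_note(KeyCode::Char('k')) == Some(72): 'k' sits one
// octave above 'a', which is C4 (60).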
/// Convert pitch % 8 to a braille dot bit, filling the left column of the
/// cell (dots 1, 2, 3, 7) top to bottom, then the right column (dots 4, 5, 6, 8)
fn pitch_to_braille_bit(pitch_mod_8: u8) -> u8 {
match pitch_mod_8 {
0 => 0x01, // Dot 1
1 => 0x02, // Dot 2
2 => 0x04, // Dot 3
3 => 0x40, // Dot 7
4 => 0x08, // Dot 4
5 => 0x10, // Dot 5
6 => 0x20, // Dot 6
7 => 0x80, // Dot 8
_ => 0x00,
}
}
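// Worked example: if pitches 60 and 67 fall in the same timeline cell,
// 60 % 8 = 4 -> 0x08 (dot 4) and 67 % 8 = 3 -> 0x40 (dot 7), so the cell
// pattern is 0x08 | 0x40 = 0x48 and the glyph is 0x2800 + 0x48 (U+2848).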
/// Draw the timeline view with clips
fn draw_timeline(f: &mut Frame, area: ratatui::layout::Rect, app: &TuiApp) {
let num_tracks = app.tracks.len();
// Use visible duration for the timeline window
let visible_start = app.timeline_scroll;
let visible_end = app.timeline_scroll + app.timeline_visible_duration;
// Create the timeline block with visible range
let block = Block::default()
.borders(Borders::ALL)
.title(format!("Timeline ({:.1}s - {:.1}s) | ←/→ scroll | -/+ zoom", visible_start, visible_end));
let inner_area = block.inner(area);
f.render_widget(block, area);
// Calculate dimensions
let width = inner_area.width as usize;
if width == 0 || num_tracks == 0 {
return;
}
// Fixed track height: 2 lines per track
let track_height = 2;
// Build timeline content with braille characters
let mut lines: Vec<Line> = Vec::new();
for track_idx in 0..num_tracks {
let track_id = if let Some((id, _)) = app.tracks.get(track_idx) {
*id
} else {
continue;
};
// Create exactly 2 lines for this track
for _ in 0..track_height {
let mut spans = Vec::new();
// Build the timeline character by character
for char_x in 0..width {
// Map character position to time, using scroll offset
let time_pos = visible_start + (char_x as f64 / width as f64) * app.timeline_visible_duration;
// Check if playhead is at this position
let is_playhead = (time_pos - app.playback_position).abs() < (app.timeline_visible_duration / width as f64);
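// Each cell spans timeline_visible_duration / width seconds; using that
// span as the tolerance makes the red playhead marker about one cell wide.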
// Find all notes active at this time position on this track
let mut braille_pattern: u8 = 0;
let mut has_notes = false;
for (clip_track_id, _clip_id, clip_start, _clip_duration, _name, notes) in &app.clips {
if *clip_track_id == track_id {
// Check each note in this clip
for (pitch, note_offset, note_duration) in notes {
let note_start = clip_start + note_offset;
let note_end = note_start + note_duration;
// Is this note active at current time position?
if time_pos >= note_start && time_pos < note_end {
let pitch_mod = pitch % 8;
braille_pattern |= pitch_to_braille_bit(pitch_mod);
has_notes = true;
}
}
}
}
// Determine color
let color = if Some(track_id) == app.selected_track {
Color::Yellow
} else {
Color::Cyan
};
// Create span
if is_playhead {
// Playhead: red background
if has_notes {
// Show white notes with red background
let braille_char = char::from_u32(0x2800 + braille_pattern as u32).unwrap_or(' ');
spans.push(Span::styled(braille_char.to_string(), Style::default().fg(Color::White).bg(Color::Red)));
} else {
spans.push(Span::styled(" ", Style::default().bg(Color::Red)));
}
} else if has_notes {
// Show white braille pattern on colored background
let braille_char = char::from_u32(0x2800 + braille_pattern as u32).unwrap_or(' ');
spans.push(Span::styled(braille_char.to_string(), Style::default().fg(Color::White).bg(color)));
} else {
// Empty space
spans.push(Span::raw(" "));
}
}
lines.push(Line::from(spans));
}
}
let paragraph = Paragraph::new(lines);
f.render_widget(paragraph, inner_area);
}
/// Draw the TUI
pub fn draw_ui(f: &mut Frame, app: &TuiApp) {
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([
Constraint::Length(3), // Title bar
Constraint::Min(10), // Main content
Constraint::Length(3), // Status bar
Constraint::Length(1), // Command line
])
.split(f.size());
// Title bar
let title = Paragraph::new("Lightningbeam DAW")
.style(Style::default().fg(Color::Cyan).add_modifier(Modifier::BOLD))
.block(Block::default().borders(Borders::ALL));
f.render_widget(title, chunks[0]);
// Main content area - split into tracks and timeline
let content_chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Percentage(20), Constraint::Percentage(80)])
.split(chunks[1]);
// Tracks list - each track gets 2 lines to match timeline
let track_items: Vec<ListItem> = app
.tracks
.iter()
.map(|(id, name)| {
let style = if app.selected_track == Some(*id) {
Style::default().fg(Color::Yellow).add_modifier(Modifier::BOLD)
} else {
Style::default()
};
// Create a 2-line item: track info on first line, empty second line
let lines = vec![
Line::from(format!("T{}: {}", id, name)),
Line::from(""),
];
ListItem::new(lines).style(style)
})
.collect();
let tracks_list = List::new(track_items)
.block(Block::default().borders(Borders::ALL).title("Tracks"));
f.render_widget(tracks_list, content_chunks[0]);
// Timeline area - split vertically into playback info and timeline view
let timeline_chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Length(4), Constraint::Min(5)])
.split(content_chunks[1]);
// Playback info
let playback_info = vec![
Line::from(vec![
Span::raw("Position: "),
Span::styled(
format!("{:.2}s", app.playback_position),
Style::default().fg(Color::Green),
),
Span::raw(" | Status: "),
Span::styled(
if app.is_playing { "Playing" } else { "Stopped" },
if app.is_playing {
Style::default().fg(Color::Green)
} else {
Style::default().fg(Color::Red)
},
),
]),
Line::from(format!("Active Notes: {}",
app.active_notes
.iter()
.map(|n| format!("{} ", n))
.collect::<String>()
)),
];
let info = Paragraph::new(playback_info)
.block(Block::default().borders(Borders::ALL).title("Playback"));
f.render_widget(info, timeline_chunks[0]);
// Draw timeline
draw_timeline(f, timeline_chunks[1], app);
// Status bar
let mode_indicator = match app.mode {
AppMode::Command => "COMMAND",
AppMode::Play => "PLAY",
};
let status_text = format!("Mode: {} | {}", mode_indicator, app.status_message);
let status_bar = Paragraph::new(status_text)
.style(Style::default().fg(Color::White))
.block(Block::default().borders(Borders::ALL));
f.render_widget(status_bar, chunks[2]);
// Command line
let command_line = if app.mode == AppMode::Command {
format!(":{}", app.command_input)
} else {
String::from("ESC=cmd mode | awsedftgyhujkolp;'=notes | R=release notes | ?=help | SPACE=play/pause")
};
let cmd_widget = Paragraph::new(command_line).style(Style::default().fg(Color::Yellow));
f.render_widget(cmd_widget, chunks[3]);
}
/// Run the TUI application
pub fn run_tui(
mut controller: EngineController,
event_rx: Arc<Mutex<rtrb::Consumer<AudioEvent>>>,
) -> Result<(), Box<dyn std::error::Error>> {
// Setup terminal
enable_raw_mode()?;
let mut stdout = io::stdout();
execute!(stdout, EnterAlternateScreen, EnableMouseCapture)?;
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
// Create app state
let mut app = TuiApp::new();
// Main loop
loop {
// Draw UI
terminal.draw(|f| draw_ui(f, &app))?;
// Poll for audio events
if let Ok(mut rx) = event_rx.lock() {
while let Ok(event) = rx.pop() {
match event {
AudioEvent::PlaybackPosition(pos) => {
app.update_playback_position(pos);
}
AudioEvent::PlaybackStopped => {
app.set_playing(false);
}
AudioEvent::TrackCreated(track_id, _, name) => {
app.add_track(track_id, name);
}
AudioEvent::RecordingStopped(clip_id, _pool_index, _waveform) => {
// Update status
app.set_status(format!("Recording stopped - Clip {}", clip_id));
}
AudioEvent::ProjectReset => {
app.clear_tracks();
app.set_status("Project reset".to_string());
}
_ => {}
}
}
}
// Handle keyboard input with timeout
if event::poll(Duration::from_millis(100))? {
if let Event::Key(key) = event::read()? {
match app.mode() {
AppMode::Command => {
match key.code {
KeyCode::Left => {
// Scroll timeline left only if command buffer is empty
if app.command_input().is_empty() {
app.scroll_timeline_left();
}
}
KeyCode::Right => {
// Scroll timeline right only if command buffer is empty
if app.command_input().is_empty() {
app.scroll_timeline_right();
}
}
KeyCode::Char('-') | KeyCode::Char('_') => {
// Zoom out only if command buffer is empty
if app.command_input().is_empty() {
app.zoom_timeline_out();
}
}
KeyCode::Char('+') | KeyCode::Char('=') => {
// Zoom in only if command buffer is empty
if app.command_input().is_empty() {
app.zoom_timeline_in();
}
}
KeyCode::Char(' ') => {
// Spacebar toggles play/pause only if command buffer is empty
// Otherwise, add space to command
if app.command_input().is_empty() {
if app.is_playing {
controller.pause();
app.set_playing(false);
app.set_status("Paused".to_string());
} else {
controller.play();
app.set_playing(true);
app.set_status("Playing".to_string());
}
} else {
app.push_command_char(' ');
}
}
KeyCode::Esc => {
app.clear_command();
}
KeyCode::Enter => {
let command = app.command_input().to_string();
app.add_to_history(command.clone());
// Execute command
match execute_command(&command, &mut controller, &mut app) {
Err(e) if e == "Quit requested" => {
break; // Exit the application
}
Err(e) => {
app.set_status(format!("Error: {}", e));
}
Ok(_) => {}
}
app.clear_command();
}
KeyCode::Backspace => {
app.pop_command_char();
}
KeyCode::Up => {
app.history_up();
}
KeyCode::Down => {
app.history_down();
}
KeyCode::Char('i') => {
// Only switch to Play mode if command buffer is empty
if app.command_input().is_empty() {
app.enter_play_mode();
} else {
app.push_command_char('i');
}
}
KeyCode::Char(c) => {
app.push_command_char(c);
}
_ => {}
}
}
AppMode::Play => {
// Check for mode switch first
if key.code == KeyCode::Esc {
app.enter_command_mode();
continue;
}
// Check for quit
if key.code == KeyCode::Char('q') && key.modifiers.contains(KeyModifiers::CONTROL) {
break;
}
// Handle MIDI note playing
if let Some(note) = key_to_midi_note(key.code) {
if let Some(track_id) = app.selected_track() {
// Release all previous notes before playing new one
for prev_note in app.active_notes.clone() {
controller.send_midi_note_off(track_id, prev_note);
}
app.active_notes.clear();
// Play the new note
controller.send_midi_note_on(track_id, note, 100);
app.add_active_note(note);
}
} else {
// Handle other play mode shortcuts
match key.code {
KeyCode::Char(' ') => {
// Release all notes and toggle play/pause
if let Some(track_id) = app.selected_track() {
for note in app.active_notes.clone() {
controller.send_midi_note_off(track_id, note);
}
app.active_notes.clear();
}
if app.is_playing {
controller.pause();
app.set_playing(false);
} else {
controller.play();
app.set_playing(true);
}
}
KeyCode::Char('s') if key.modifiers.contains(KeyModifiers::CONTROL) => {
// Release all notes and stop
if let Some(track_id) = app.selected_track() {
for note in app.active_notes.clone() {
controller.send_midi_note_off(track_id, note);
}
app.active_notes.clear();
}
controller.stop();
app.set_playing(false);
}
KeyCode::Char('r') | KeyCode::Char('R') => {
// Release all notes manually (r for release)
if let Some(track_id) = app.selected_track() {
for note in app.active_notes.clone() {
controller.send_midi_note_off(track_id, note);
}
app.active_notes.clear();
}
app.set_status("All notes released".to_string());
}
KeyCode::Char('?') | KeyCode::Char('h') | KeyCode::Char('H') => {
app.set_status("Play Mode: awsedftgyhujkolp;'=notes | R=release | SPACE=play/pause | ESC=command | Ctrl+Q=quit".to_string());
}
_ => {}
}
}
}
}
}
}
}
// Restore terminal
disable_raw_mode()?;
execute!(
terminal.backend_mut(),
LeaveAlternateScreen,
DisableMouseCapture
)?;
terminal.show_cursor()?;
Ok(())
}
/// Execute a command string
fn execute_command(
command: &str,
controller: &mut EngineController,
app: &mut TuiApp,
) -> Result<(), String> {
let parts: Vec<&str> = command.trim().split_whitespace().collect();
if parts.is_empty() {
return Ok(());
}
match parts[0] {
"play" => {
controller.play();
app.set_playing(true);
app.set_status("Playing".to_string());
}
"pause" => {
controller.pause();
app.set_playing(false);
app.set_status("Paused".to_string());
}
"stop" => {
controller.stop();
app.set_playing(false);
app.set_status("Stopped".to_string());
}
"seek" => {
if parts.len() < 2 {
return Err("Usage: seek <seconds>".to_string());
}
let pos: f64 = parts[1].parse().map_err(|_| "Invalid position")?;
controller.seek(pos);
app.set_status(format!("Seeked to {:.2}s", pos));
}
"track" => {
if parts.len() < 2 {
return Err("Usage: track <name>".to_string());
}
let name = parts[1..].join(" ");
controller.create_midi_track(name.clone());
app.set_status(format!("Created MIDI track: {}", name));
}
"audiotrack" => {
if parts.len() < 2 {
return Err("Usage: audiotrack <name>".to_string());
}
let name = parts[1..].join(" ");
controller.create_audio_track(name.clone());
app.set_status(format!("Created audio track: {}", name));
}
"select" => {
if parts.len() < 2 {
return Err("Usage: select <track_number>".to_string());
}
let idx: usize = parts[1].parse().map_err(|_| "Invalid track number")?;
app.select_track(idx);
app.set_status(format!("Selected track {}", idx));
}
"clip" => {
if parts.len() < 4 {
return Err("Usage: clip <track_id> <start_time> <duration>".to_string());
}
let track_id: u32 = parts[1].parse().map_err(|_| "Invalid track ID")?;
let start_time: f64 = parts[2].parse().map_err(|_| "Invalid start time")?;
let duration: f64 = parts[3].parse().map_err(|_| "Invalid duration")?;
// Add clip to local UI state (empty clip, no notes)
let clip_id = app.next_clip_id;
app.next_clip_id += 1;
app.add_clip(track_id, clip_id, start_time, duration, format!("Clip {}", clip_id), Vec::new());
controller.create_midi_clip(track_id, start_time, duration);
app.set_status(format!("Created MIDI clip on track {} at {:.2}s for {:.2}s", track_id, start_time, duration));
}
"loadmidi" => {
if parts.len() < 3 {
return Err("Usage: loadmidi <track_id> <file_path> [start_time]".to_string());
}
let track_id: u32 = parts[1].parse().map_err(|_| "Invalid track ID")?;
let file_path = parts[2];
let start_time: f64 = if parts.len() >= 4 {
parts[3].parse().unwrap_or(0.0)
} else {
0.0
};
// Load the MIDI file
match load_midi_file(file_path, app.next_clip_id, 48000) {
Ok(mut midi_clip) => {
midi_clip.start_time = start_time;
let clip_id = midi_clip.id;
let duration = midi_clip.duration;
let event_count = midi_clip.events.len();
// Extract note data for visualization
let mut notes = Vec::new();
let mut active_notes: std::collections::HashMap<u8, f64> = std::collections::HashMap::new();
let sample_rate = 48000.0; // Must match the sample rate passed to load_midi_file above
for event in &midi_clip.events {
let status = event.status & 0xF0;
let time_seconds = event.timestamp as f64 / sample_rate;
match status {
0x90 if event.data2 > 0 => {
// Note on
active_notes.insert(event.data1, time_seconds);
}
0x80 | 0x90 => {
// Note off (or note on with velocity 0)
if let Some(start) = active_notes.remove(&event.data1) {
let note_duration = time_seconds - start;
notes.push((event.data1, start, note_duration));
}
}
_ => {}
}
}
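// Per the MIDI spec, a note-on (0x90) with velocity 0 means note-off,
// which is why the 0x90 arm above falls through to the note-off branch
// when data2 == 0.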
// Add to local UI state with note data
app.add_clip(track_id, clip_id, start_time, duration, file_path.to_string(), notes);
app.next_clip_id += 1;
// Send to audio engine
controller.add_loaded_midi_clip(track_id, midi_clip);
app.set_status(format!("Loaded {} ({} events, {:.2}s) to track {} at {:.2}s",
file_path, event_count, duration, track_id, start_time));
}
Err(e) => {
return Err(format!("Failed to load MIDI file: {}", e));
}
}
}
"reset" => {
controller.reset();
app.clear_tracks();
app.set_status("Project reset".to_string());
}
"q" | "quit" => {
return Err("Quit requested".to_string());
}
"help" | "h" | "?" => {
// Show comprehensive help
let help_msg = concat!(
"Commands: ",
"play | pause | stop | seek <s> | ",
"track <name> | audiotrack <name> | select <idx> | ",
"clip <track_id> <start> <dur> | ",
"loadmidi <track_id> <file> [start] | ",
"reset | quit | help | ",
"Keys: ←/→ scroll | -/+ zoom"
);
app.set_status(help_msg.to_string());
}
_ => {
return Err(format!("Unknown command: '{}'. Type 'help' for commands", parts[0]));
}
}
Ok(())
}
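// A short hypothetical session using the grammar above ("song.mid" and the
// track index are placeholders):
//
//     :track Lead
//     :clip 0 0.0 4.0
//     :loadmidi 0 song.mid 0.0
//     :play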

daw-backend/test.wav Normal file (binary, not shown)

(new Rust test file; path not shown in this diff)

@@ -0,0 +1,109 @@
use daw_backend::audio::node_graph::{
nodes::{AudioOutputNode, GainNode, OscillatorNode},
ConnectionError, InstrumentGraph, SignalType,
};
#[test]
fn test_basic_node_graph() {
// Create a graph with sample rate 44100 and buffer size 512
let mut graph = InstrumentGraph::new(44100, 512);
// Create nodes
let osc = Box::new(OscillatorNode::new("Oscillator"));
let gain = Box::new(GainNode::new("Gain"));
let output = Box::new(AudioOutputNode::new("Output"));
// Add nodes to graph
let osc_idx = graph.add_node(osc);
let gain_idx = graph.add_node(gain);
let output_idx = graph.add_node(output);
// Connect: Oscillator -> Gain -> Output
assert!(graph.connect(osc_idx, 0, gain_idx, 0).is_ok());
assert!(graph.connect(gain_idx, 0, output_idx, 0).is_ok());
// Set output node
graph.set_output_node(Some(output_idx));
// Set oscillator frequency to 440 Hz
if let Some(node) = graph.get_graph_node_mut(osc_idx) {
node.node.set_parameter(0, 440.0); // Frequency parameter
}
// Process a buffer
let mut output_buffer = vec![0.0f32; 512];
graph.process(&mut output_buffer, &[]);
// Check that we got some audio output (oscillator should produce non-zero samples)
let has_output = output_buffer.iter().any(|&s| s != 0.0);
assert!(has_output, "Expected non-zero audio output from oscillator");
// Check that output is within reasonable bounds
let max_amplitude = output_buffer.iter().map(|s| s.abs()).fold(0.0f32, f32::max);
assert!(max_amplitude <= 1.0, "Output amplitude too high: {}", max_amplitude);
}
#[test]
fn test_connection_type_validation() {
let mut graph = InstrumentGraph::new(44100, 512);
let osc = Box::new(OscillatorNode::new("Oscillator"));
let output = Box::new(AudioOutputNode::new("Output"));
let osc_idx = graph.add_node(osc);
let output_idx = graph.add_node(output);
// This should work (Audio -> Audio)
let result = graph.connect(osc_idx, 0, output_idx, 0);
assert!(result.is_ok());
// Now verify that mismatched signal types are rejected:
// an audio output must not connect to a CV input.
let osc2 = Box::new(OscillatorNode::new("Oscillator2"));
let osc2_idx = graph.add_node(osc2);
// The oscillator's input port 0 expects CV, so feeding it the first
// oscillator's audio output should fail with a TypeMismatch error.
let result = graph.connect(osc_idx, 0, osc2_idx, 0);
assert!(result.is_err());
match result {
Err(ConnectionError::TypeMismatch { expected, got }) => {
assert_eq!(expected, SignalType::CV);
assert_eq!(got, SignalType::Audio);
}
_ => panic!("Expected TypeMismatch error"),
}
}
#[test]
fn test_cycle_detection() {
let mut graph = InstrumentGraph::new(44100, 512);
let gain1 = Box::new(GainNode::new("Gain1"));
let gain2 = Box::new(GainNode::new("Gain2"));
let gain3 = Box::new(GainNode::new("Gain3"));
let g1 = graph.add_node(gain1);
let g2 = graph.add_node(gain2);
let g3 = graph.add_node(gain3);
// Create a chain: g1 -> g2 -> g3
assert!(graph.connect(g1, 0, g2, 0).is_ok());
assert!(graph.connect(g2, 0, g3, 0).is_ok());
// Try to create a cycle: g3 -> g1
let result = graph.connect(g3, 0, g1, 0);
assert!(result.is_err());
match result {
Err(ConnectionError::WouldCreateCycle) => {
// Expected!
}
_ => panic!("Expected WouldCreateCycle error"),
}
}

daw-backend/ttls.mid Normal file (binary, not shown)

package.json

@@ -4,10 +4,18 @@
 "version": "0.1.0",
 "type": "module",
 "scripts": {
-  "tauri": "tauri"
+  "tauri": "tauri",
+  "test": "wdio run wdio.conf.js",
+  "test:watch": "wdio run wdio.conf.js --watch"
 },
 "devDependencies": {
-  "@tauri-apps/cli": "^2"
+  "@tauri-apps/cli": "^2",
+  "@wdio/cli": "^9.20.0",
+  "@wdio/globals": "^9.17.0",
+  "@wdio/local-runner": "8",
+  "@wdio/mocha-framework": "^9.20.0",
+  "@wdio/spec-reporter": "^9.20.0",
+  "webdriverio": "^9.20.0"
 },
 "dependencies": {
 "@ffmpeg/ffmpeg": "^0.12.10",

File diff suppressed because it is too large

screenshots/animation.png Normal file (binary image, 112 KiB)

screenshots/music.png Normal file (binary image, 173 KiB)

screenshots/video.png Normal file (binary image, 1.2 MiB)

src-tauri/Cargo.lock generated (1161 lines changed; diff suppressed because it is too large)

src-tauri/Cargo.toml

@@ -31,3 +31,24 @@ tracing-subscriber = {version = "0.3.19", features = ["env-filter"] }
log = "0.4"
chrono = "0.4"
# DAW backend integration
daw-backend = { path = "../daw-backend" }
cpal = "0.15"
rtrb = "0.3"
tokio = { version = "1", features = ["sync", "time"] }
# Video decoding
ffmpeg-next = "7.0"
lru = "0.12"
image = { version = "0.24", default-features = false, features = ["jpeg"] }
# HTTP server for video streaming
tiny_http = "0.12"
[profile.dev]
opt-level = 1 # Enable basic optimizations in debug mode for audio decoding performance
[profile.release]
opt-level = 3
lto = true

Some files were not shown because too many files have changed in this diff.