Practice like an operator
Run the live forge with hardware or demo input, score timing and dynamics in realtime, then persist the session for trend analysis instead of losing it after the animation ends.
Browser-native MIDI universe
Train with a browser-native forge that respects latency, stores every run, and keeps the roadmap aligned with the last few weeks of music-AI research instead of generic hype.
Operator poster
Realtime posture
Telemetry commit
Off the play path
Persist after the judgement stream, not inside the instrument loop.
Capture contract
UMP-first
Symbolic fidelity now, richer MIDI 2.0 clip workflows later.
Evaluation seam
Audio-ready
The stored run model is prepared for post-symbolic scoring.
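The "telemetry commit off the play path" posture above can be sketched as a capture buffer that the realtime input handler appends to synchronously, with persistence deferred until the judgement stream ends. This is a minimal TypeScript sketch under assumed names: `CapturedEvent`, `SessionCapture`, and the `persist` sink are illustrative, not the shipped MidiverseForge API.

```typescript
// Sketch: keep the instrument loop free of I/O. The handler only appends
// to an in-memory buffer; storage happens after the run ends.
interface CapturedEvent { tMs: number; note: number; velocity: number }

class SessionCapture {
  private buffer: CapturedEvent[] = [];

  // Called from the realtime input path: synchronous, O(1), no awaits.
  record(event: CapturedEvent): void {
    this.buffer.push(event);
  }

  // Called once the run ends: drain the buffer and hand it to storage
  // (e.g. a database insert in the real app). Returns the event count.
  async commit(persist: (run: CapturedEvent[]) => Promise<void>): Promise<number> {
    const run = this.buffer;
    this.buffer = [];
    await persist(run);
    return run.length;
  }
}
```

The design choice this illustrates: latency discipline comes from what the play path is *not* allowed to do, so the commit step owns all I/O and the record step owns none.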
Current stack
Product thesis
The winning product is not a one-shot generator. It is a durable operating loop where human performance, symbolic telemetry, and adaptive AI reinforce each other.
Telemetry off the play path
<40ms
Learning universe
34 modes
MIDI 2.0 capture contract
UMP-first
AI music generation seam
Lyria 3
Evaluation roadmap
Audio-ready
Embodied instrument surfaces
WebXR + MediaPipe
Platform Spine
MidiverseForge is strongest when the public story, runtime safeguards, and forge telemetry model all point at the same operating thesis. This section shows the seams that now hold the whole application together.
Public marketing, protected workspaces, signed demo sessions, Supabase live auth, billing entry points, and route-aware middleware now behave like one system instead of separate prototypes.
Browser MIDI permissions, UMP translation, judgement scoring, capture persistence, and export-ready history are wired as one capture contract.
The UI now states what is genuinely shipped, what is staged for audio-grade evaluation, and which new papers or platform docs are changing the next product moves.
Security headers, public-vs-protected routing, health semantics, and short-lived realtime tokens are now treated as production concerns instead of afterthoughts.
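The UMP-first capture contract can be illustrated with a small translation from a MIDI 1.0 note-on into a MIDI 2.0 UMP 64-bit channel voice packet (message type 0x4). This is a sketch, not the app's translator: the function name is illustrative, the velocity upscale is a plain left shift rather than the spec's fuller min-center-max scaling, and a full translator would also handle running status and the MIDI 1.0 idiom of note-on with velocity 0 meaning note-off.

```typescript
// Translate a MIDI 1.0 note-on into a two-word (64-bit) UMP packet.
function noteOnToUmp(
  group: number,
  channel: number,
  note: number,
  velocity7: number,
): [number, number] {
  // Naive 7-bit -> 16-bit upscale (the spec defines a richer scaling).
  const velocity16 = (velocity7 & 0x7f) << 9;
  const word0 =
    ((0x4 << 28) |             // message type 4: 64-bit channel voice
      ((group & 0xf) << 24) |  // UMP group
      (0x9 << 20) |            // note-on opcode
      ((channel & 0xf) << 16) |
      ((note & 0x7f) << 8)) >>> 0; // low byte: attribute type 0 (none)
  const word1 = (velocity16 << 16) >>> 0; // low 16 bits: attribute data = 0
  return [word0, word1];
}
```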
Release posture
Auth/session state, dashboard payloads, billing posture, and forge configuration all resolve on the server before the UI renders.
Web MIDI access is treated as a first-class browser permission flow with explicit fallback to a demo input stream.
Product copy, scoring seams, archive exports, and roadmap priorities are driven by recent music-AI, live-agent, and education research rather than generic SaaS filler.
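The permission flow with demo fallback described above can be sketched as a small acquisition helper. The helper is generic so the sketch stays self-contained and testable; in the browser you would pass `() => navigator.requestMIDIAccess()` (the real Web MIDI entry point, which triggers the permission prompt). `ForgeInput` and `acquireInput` are illustrative names, not the shipped API.

```typescript
// Either a granted hardware connection or the built-in demo stream.
type ForgeInput<A> = { kind: "hardware"; access: A } | { kind: "demo" };

async function acquireInput<A>(
  requestAccess: () => Promise<A>,
): Promise<ForgeInput<A>> {
  try {
    // Browser: the first call here raises the MIDI permission prompt.
    return { kind: "hardware", access: await requestAccess() };
  } catch {
    // Permission denied or Web MIDI unavailable: stay playable on demo input.
    return { kind: "demo" };
  }
}
```

On grant, the returned `MIDIAccess` object's inputs would feed the forge; on denial, the demo stream keeps the session fully usable, which is what makes the permission flow first-class rather than a hard gate.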
Core Workflows
The right upgrade was not “more features.” It was making every surface reinforce the same operating model: play live, store the run, and route the next action.
Play the live forge with hardware or the demo input stream, score timing and dynamics in realtime, and persist each run for trend analysis instead of losing it when the animation ends.
Dashboard, forge, library, billing, and learn modes now share the same session and account spine, so the app behaves like one product even when infrastructure is partially configured.
Recent music-AI work changes what should be shipped next: low-latency live loops, audio-grade evaluation seams, and multi-session creative support over one-shot novelty.
Instrument Surfaces
The upgraded forge starts with the inputs we can actually support today while preserving the product language for MPE, winds, drums, and future clip workflows.
3D falling notes orbiting your hands with full velocity response
88 keys, sustain pedal, aftertouch
Orbital note cascade
Floating fretboard with string bends as glowing tension lines
MPE per-string expression, pitch bend
Tension line visualization
Pads that explode into particles based on velocity
Multi-zone pads, hi-hat control
Particle explosions
Breath and expression data sculpts the environment in real time
Breath CC, expression, aftertouch
Volumetric particle trails
Full per-note expression with 3D spatial mapping
Per-note pitch, slide, pressure
Spatial expression fields
Build your own integration with the Forge SDK — any MIDI device
Full MIDI 2.0 UMP support
User-defined visualizations
Research Translation
Last verified 2026-04-04. Every signal below ends with a concrete move for MidiverseForge, because “AI music is moving fast” is not a product strategy.
Google's Lyria 3 docs now position the Gemini API as a direct path to 48kHz stereo music generation, with 30-second clip flows, longer Pro outputs, text or image prompting, and mixed audio-plus-text responses.
Product move
Keep MidiverseForge clip-first in the browser: fast loop generation, lyrics-aware parsing, and a clear upgrade path to longer Pro renders for exports and practice backing tracks.
Verified 2026-04-04
Proposes SMDIM, a long-sequence symbolic diffusion approach that improves generation quality and computational efficiency by combining global structure construction with lightweight local refinement.
Product move
Preserve a session model that can scale from short drills to longer clip editing. Saved captures, archive export, and comparison views should stay compatible with longer symbolic timelines.
Verified 2026-04-04
Compares LSTM, Transformer, and hybrid architectures for symbolic music generation and finds that the hybrid approach improves local continuity and global coherence together.
Product move
Prefer hybrid symbolic workflows that preserve local continuity without losing longer-form coherence. Product surfaces should support both immediate drills and multi-section idea development.
Verified 2026-04-04
AILive Mixer targets zero-latency automatic mixing for live performance, reinforcing that real-time music systems cannot afford heavy post-processing on the interactive path.
Product move
Keep telemetry and persistence off the play path. Live interaction quality still wins or loses on latency discipline before any downstream analytics matter.
Verified 2026-04-04
Benchmarks piano performance evaluation and finds that audio foundation models outperform symbolic representations across all 19 perceptual dimensions tested.
Product move
Keep symbolic scoring for instant browser feedback, but leave an explicit seam for audio-grade evaluation. The roadmap should not pretend MIDI-only metrics capture expressive quality completely.
Verified 2026-04-04
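That seam can be sketched as a single evaluator interface: symbolic scoring implements it today, and an audio-grade evaluator can slot in behind it later without touching the capture path. All names here are illustrative, and the timing metric is a deliberately toy stand-in, not the shipped scoring logic.

```typescript
// One scoring seam, two eventual implementations.
interface RunEvent { tMs: number; note: number; velocity: number }

interface RunEvaluator {
  // Returns a normalized score in [0, 1]; higher is better.
  score(run: RunEvent[]): Promise<number>;
}

// Shipped today: instant symbolic feedback in the browser.
class SymbolicEvaluator implements RunEvaluator {
  async score(run: RunEvent[]): Promise<number> {
    if (run.length === 0) return 0;
    // Toy metric: fraction of events within +/-30ms of a 480ms grid.
    const onGrid = run.filter((e) => {
      const phase = e.tMs % 480;
      return phase <= 30 || phase >= 450;
    }).length;
    return onGrid / run.length;
  }
}

// Later: an audio-grade evaluator (e.g. scoring a rendered performance)
// implements the same RunEvaluator interface behind the same seam.
```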
Argues that sustained, reflective multi-session support matters more than single-session novelty in generative music workflows for Deaf and Hard-of-Hearing creators.
Product move
Design around durable multi-session loops rather than one-shot novelty. Reflection, saved history, and continued iteration are stronger product seams than prompt-and-forget generation.
Verified 2026-04-04
Pricing
The free tier is deliberately useful. Pro exists for the musicians who want AI-backed analysis, generated accompaniments, and a stronger operator surface around each saved run.
Access 28 of the 34 learning modes, connect MIDI devices, and track your progress across every session.
AI-powered coaching, music generation, and advanced analytics for serious musicians.
For schools, labs, and hardware teams standardizing on browser-native symbolic tooling.