
Browser-native MIDI universe

MidiverseForge turns practice, capture, and live creation into one operating surface.

Train with a browser-native forge that respects latency, stores every run, and keeps the roadmap aligned with the last few weeks of music-AI research instead of generic hype.

Live MIDI + demo fallback · Verified research window: Feb-Apr 2026 · Stripe, Supabase, WebXR, UMP

Operator poster

One loop for play, telemetry, and AI-backed next actions.

Live

Realtime posture

Telemetry commit

Off the play path

Persist after the judgement stream, not inside the instrument loop.
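
A minimal sketch of that discipline, assuming a hypothetical `/api/runs` endpoint: the MIDI handler only appends to an in-memory buffer, and a separate flush loop commits after scoring has already rendered.

```ts
// telemetry-queue.ts — sketch: keep persistence off the play path.
// `/api/runs` is a hypothetical endpoint, not the real MidiverseForge API.

type TelemetryEvent = {
  tMs: number;      // high-resolution timestamp from performance.now()
  note: number;     // MIDI note number
  velocity: number; // 0-127
};

const buffer: TelemetryEvent[] = [];

// Hot path: called from the MIDI input handler. No awaits, no network.
export function recordEvent(note: number, velocity: number): void {
  buffer.push({ tMs: performance.now(), note, velocity });
}

// Cold path: flush on an interval, after the judgement stream has settled.
export function startFlushLoop(intervalMs = 2000): void {
  setInterval(async () => {
    if (buffer.length === 0) return;
    const batch = buffer.splice(0, buffer.length); // drain without blocking input
    try {
      await fetch("/api/runs", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ events: batch }),
        keepalive: true, // small payloads survive tab navigation
      });
    } catch {
      buffer.unshift(...batch); // re-queue on failure; retry next tick
    }
  }, intervalMs);
}
```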

Capture contract

UMP-first

Symbolic fidelity now, richer MIDI 2.0 clip workflows later.
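
Concretely, UMP-first means an incoming MIDI 1.0 note-on is stored as a 32-bit Universal MIDI Packet rather than as raw legacy bytes. A sketch of that translation (message type 0x2, MIDI 1.0 channel voice, per the MIDI 2.0 UMP spec), not the forge's actual encoder:

```ts
// ump.ts — sketch: wrap a legacy MIDI 1.0 channel-voice message in a
// 32-bit Universal MIDI Packet (UMP message type 0x2) for storage.

export function midi1ToUmp(
  status: number, // e.g. 0x90 = note-on, channel 0
  data1: number,  // note number
  data2: number,  // velocity
  group = 0       // UMP group 0-15
): number {
  const messageType = 0x2; // MIDI 1.0 channel voice inside UMP
  return (
    ((messageType << 28) |
      ((group & 0xf) << 24) |
      ((status & 0xff) << 16) |
      ((data1 & 0x7f) << 8) |
      (data2 & 0x7f)) >>> 0 // keep it an unsigned 32-bit word
  );
}

// Example: note-on, channel 0, middle C, velocity 100
// midi1ToUmp(0x90, 60, 100).toString(16) === "20903c64"
```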

Evaluation seam

Audio-ready

The stored run model is prepared for post-symbolic scoring.

Capture
Score
Jam
Replay

Current stack

Live scoring: Timing + velocity + expression
Generation: Lyria 3 clip + realtime seams
Hardware: Permission-aware Web MIDI access
Persistence: Saved runs, archive, billing posture

Product thesis

The winning product is not a one-shot generator. It is a durable operating loop where human performance, symbolic telemetry, and adaptive AI reinforce each other.

Telemetry off the play path

<40ms

Learning universe

34 modes

MIDI 2.0 capture contract

UMP-first

AI music generation seam

Lyria 3

Evaluation roadmap

Audio-ready

Embodied instrument surfaces

WebXR + MediaPipe

Platform Spine

The repo now reads like a product platform, not a disconnected demo reel.

MidiverseForge is strongest when the public story, runtime safeguards, and forge telemetry model all point at the same operating thesis. This section shows the seams that now hold the whole application together.

Authenticated operator loop

Public marketing, protected workspaces, signed demo sessions, Supabase live auth, billing entry points, and route-aware middleware now behave like one system instead of separate prototypes.

Realtime forge spine

Browser MIDI permissions, UMP translation, judgement scoring, capture persistence, and export-ready history are wired as one capture contract.

Research-driven roadmap

The UI now states what is genuinely shipped, what is staged for audio-grade evaluation, and which new papers or platform docs are changing the next product moves.

Deployment posture

Security headers, public-vs-protected routing, health semantics, and short-lived realtime tokens are now treated as production concerns instead of afterthoughts.
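
A sketch of the routing half of that posture, assuming Next.js-style middleware; the public route list and the `mf_session` cookie name are illustrative, not the real configuration:

```ts
// middleware.ts — sketch: public-vs-protected routing plus security headers.
import { NextResponse, type NextRequest } from "next/server";

const PUBLIC_PREFIXES = ["/", "/pricing", "/research", "/api/health"];

export function middleware(req: NextRequest) {
  const { pathname } = req.nextUrl;
  const isPublic = PUBLIC_PREFIXES.some(
    (p) => pathname === p || (p !== "/" && pathname.startsWith(p))
  );
  const hasSession = req.cookies.has("mf_session"); // hypothetical cookie name

  if (!isPublic && !hasSession) {
    const login = req.nextUrl.clone();
    login.pathname = "/login";
    return NextResponse.redirect(login);
  }

  const res = NextResponse.next();
  // Security headers applied on every response, not just protected routes.
  res.headers.set("X-Frame-Options", "DENY");
  res.headers.set("X-Content-Type-Options", "nosniff");
  res.headers.set("Referrer-Policy", "strict-origin-when-cross-origin");
  return res;
}
```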

Release posture

Backend truth first

Auth/session state, dashboard payloads, billing posture, and forge configuration all resolve on the server before the UI renders.
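
With Supabase that can look roughly like the following server-side loader; the `billing_state` table and its columns are an assumed schema, not the real one:

```tsx
// app/dashboard/page.tsx — sketch: session and billing posture resolve on
// the server before any UI renders. Table and column names are illustrative.
import { cookies } from "next/headers";
import { createServerClient } from "@supabase/ssr";
import { redirect } from "next/navigation";

export default async function DashboardPage() {
  const cookieStore = await cookies();
  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    { cookies: { getAll: () => cookieStore.getAll(), setAll: () => {} } }
  );

  const { data: { user } } = await supabase.auth.getUser();
  if (!user) redirect("/login"); // backend truth: no session, no dashboard

  // Hypothetical billing table; the point is posture is known pre-render.
  const { data: billing } = await supabase
    .from("billing_state")
    .select("tier, status")
    .eq("user_id", user.id)
    .single();

  return <pre>{JSON.stringify({ email: user.email, billing }, null, 2)}</pre>;
}
```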

Permission-aware MIDI UX

Web MIDI access is treated as a first-class browser permission flow with explicit fallback to a demo input stream.
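
In Web MIDI terms, that flow is roughly the following; the canned demo phrase is an assumption:

```ts
// midi-access.ts — sketch: request Web MIDI as an explicit permission,
// fall back to a demo input stream when it is denied or unsupported.

export type NoteHandler = (note: number, velocity: number) => void;

export async function connectMidi(onNote: NoteHandler): Promise<"live" | "demo"> {
  if (!("requestMIDIAccess" in navigator)) return startDemoStream(onNote);

  try {
    const access = await navigator.requestMIDIAccess({ sysex: false });
    for (const input of access.inputs.values()) {
      input.onmidimessage = (e) => {
        const [status, note, velocity] = e.data!;
        if ((status & 0xf0) === 0x90 && velocity > 0) onNote(note, velocity);
      };
    }
    return "live";
  } catch {
    // Permission denied: stay honest and degrade to the demo stream.
    return startDemoStream(onNote);
  }
}

function startDemoStream(onNote: NoteHandler): "demo" {
  // Hypothetical canned phrase so the forge stays playable with no hardware.
  const phrase = [60, 64, 67, 72];
  let i = 0;
  setInterval(() => onNote(phrase[i++ % phrase.length], 96), 500);
  return "demo";
}
```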

Research-backed operator loop

Product copy, scoring seams, archive exports, and roadmap priorities are driven by recent music-AI, live-agent, and education research rather than generic SaaS filler.


Core Workflows

Three loops define the product now.

The right upgrade was not “more features.” It was making every surface reinforce the same operating model: play live, store the run, and route the next action.

Loop 01
Saved captures, comparison cards, export-ready history

Practice like an operator

Run the live forge with hardware or demo input, score timing and dynamics in realtime, then persist the session for trend analysis instead of losing it after the animation ends.
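
A toy version of the realtime score, assuming a target-note model; the timing window and weights below are illustrative, not the forge's actual rubric:

```ts
// scoring.ts — sketch: score a played note against its target on timing
// and dynamics. Windows and weights are illustrative.

export interface TargetNote {
  note: number;
  tMs: number;       // when the note should land
  velocity: number;  // intended dynamic, 0-127
}

export function scoreHit(
  target: TargetNote,
  playedTMs: number,
  playedVelocity: number
): number {
  // Timing: full credit inside ±20ms, fading to zero at ±120ms.
  const dt = Math.abs(playedTMs - target.tMs);
  const timing = Math.max(0, 1 - Math.max(0, dt - 20) / 100);

  // Dynamics: penalize velocity error proportionally.
  const dv = Math.abs(playedVelocity - target.velocity);
  const dynamics = Math.max(0, 1 - dv / 64);

  return Math.round(100 * (0.7 * timing + 0.3 * dynamics));
}

// scoreHit({ note: 60, tMs: 1000, velocity: 90 }, 1015, 80) → 95
```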

Loop 02
Protected workspaces, sane fallbacks, consistent routing

Move between surfaces without context loss

Dashboard, forge, library, billing, and learn modes now share the same session and account spine, so the app behaves like one product even when infrastructure is partially configured.

Loop 03
Roadmap driven by current evidence, not generic AI claims

Translate research into product moves

Recent music-AI work changes what should be shipped next: low-latency live loops, audio-grade evaluation seams, and multi-session creative support over one-shot novelty.

Instrument Surfaces

Every instrument still matters, but now the browser surface is honest about what is ready.

The upgraded forge starts with the inputs we can actually support today while preserving the product language for MPE, winds, drums, and future clip workflows.

Piano

Ready path

3D falling notes orbiting your hands with full velocity response

MIDI

88 keys, sustain pedal, aftertouch

Visual

Orbital note cascade

Guitar (MPE)

Ready path

Floating fretboard with string bends as glowing tension lines

MIDI

MPE per-string expression, pitch bend

Visual

Tension line visualization

Drums

Ready path

Pads that explode into particles based on velocity

MIDI

Multi-zone pads, hi-hat control

Visual

Particle explosions

Winds & Breath

Ready path

Breath and expression data sculpts the environment in real time

MIDI

Breath CC, expression, aftertouch

Visual

Volumetric particle trails

MPE Controllers

Ready path

Full per-note expression with 3D spatial mapping

MIDI

Per-note pitch, slide, pressure

Visual

Spatial expression fields

Custom (SDK)

Ready path

Build your own integration with the Forge SDK — any MIDI device

MIDI

Full MIDI 2.0 UMP support

Visual

User-defined visualizations
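
The Forge SDK surface is not documented on this page, so the following integration sketch is purely illustrative of the shape a custom visualization could take:

```ts
// forge-sdk-example.ts — hypothetical sketch only: every name below is
// illustrative, since the real Forge SDK API is not documented here.

interface ForgeVisual {
  onUmpWord(word: number): void; // receives 32-bit UMP words from capture
  render(ctx: CanvasRenderingContext2D): void;
}

class PulseVisual implements ForgeVisual {
  private intensity = 0;

  onUmpWord(word: number): void {
    const velocity = word & 0x7f; // low byte of a MIDI 1.0-in-UMP voice word
    this.intensity = Math.max(this.intensity, velocity / 127);
  }

  render(ctx: CanvasRenderingContext2D): void {
    ctx.fillStyle = `rgba(120, 200, 255, ${this.intensity})`;
    ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    this.intensity *= 0.92; // decay between frames
  }
}
```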

Research Translation

Recent papers and platform changes are being turned into product constraints.

Last verified 2026-04-04. Every signal below ends with a concrete move for MidiverseForge, because “AI music is moving fast” is not a product strategy.

Google AI Docs · Generation · Official docs · Published Apr 4, 2026

Generate music with Lyria 3

Google's Lyria 3 docs now position the Gemini API as a direct path to 48kHz stereo music generation, with 30-second clip flows, longer Pro outputs, text or image prompting, and mixed audio-plus-text responses.

Product move

Keep MidiverseForge clip-first in the browser: fast loop generation, lyrics-aware parsing, and a clear upgrade path to longer Pro renders for exports and practice backing tracks.

Verified 2026-04-04
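
In practice, clip-first can stay a thin internal seam in front of whichever Lyria endpoint ships; `/api/generate-clip` below is a hypothetical placeholder route, not Google's API:

```ts
// generate-clip.ts — sketch of a clip-first seam. `/api/generate-clip` is a
// hypothetical internal route that would proxy the actual Lyria 3 endpoint.

export interface ClipRequest {
  prompt: string;
  durationSec: 30;      // clip-first: short loops by default
  tier: "free" | "pro"; // pro unlocks longer renders server-side
}

export async function generateClip(req: ClipRequest): Promise<Blob> {
  const res = await fetch("/api/generate-clip", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`clip generation failed: ${res.status}`);
  return res.blob(); // 48kHz stereo audio per the Lyria 3 docs
}
```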

arXiv · Generation · Peer reviewed · Published Feb 28, 2026

Efficient Long-Sequence Diffusion Modeling for Symbolic Music Generation

Proposes SMDIM, a long-sequence symbolic diffusion approach that improves generation quality and computational efficiency by combining global structure construction with lightweight local refinement.

Product move

Preserve a session model that can scale from short drills to longer clip editing. Saved captures, archive export, and comparison views should stay compatible with longer symbolic timelines.

Verified 2026-04-04
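
One way to keep that compatibility is to store runs as open-ended event timelines rather than fixed-length drill records; a sketch with illustrative field names:

```ts
// run-model.ts — sketch: a saved run as an open-ended event timeline, so
// the same shape serves a 30-second drill or a multi-minute clip edit.
// Field names are illustrative.

export interface SavedRun {
  id: string;
  startedAt: string; // ISO timestamp
  mode: string;      // learning mode or free-play
  events: Array<{
    tMs: number;     // offset from run start; no fixed upper bound
    ump: number;     // the stored 32-bit UMP word
  }>;
  score?: number;    // symbolic score; audio-grade score can attach later
}

// A comparison card only needs two runs and a window, regardless of length:
export function sliceRun(run: SavedRun, fromMs: number, toMs: number) {
  return run.events.filter((e) => e.tMs >= fromMs && e.tMs < toMs);
}
```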

arXiv · Generation · Peer reviewed · Published Mar 27, 2026

Fusing Memory and Attention: A study on LSTM, Transformer and Hybrid Architectures for Symbolic Music Generation

Compares LSTM, Transformer, and hybrid architectures for symbolic music generation and finds that the hybrid approach improves local continuity and global coherence together.

Product move

Prefer hybrid symbolic workflows that preserve local continuity without losing longer-form coherence. Product surfaces should support both immediate drills and multi-section idea development.

Verified 2026-04-04

arXiv · Latency · Peer reviewed · Published Mar 16, 2026

AILive Mixer: A Deep Learning based Zero Latency Automatic Music Mixer for Live Music Performances

AILive Mixer targets zero-latency automatic mixing for live performance, reinforcing that real-time music systems cannot afford heavy post-processing on the interactive path.

Product move

Keep telemetry and persistence off the play path. Live interaction quality still wins or loses on latency discipline before any downstream analytics matter.

Verified 2026-04-04

arXiv · Evaluation · Peer reviewed · Published Jan 26, 2026

Audio Foundation Models Outperform Symbolic Representations for Piano Performance Evaluation

Benchmarks piano performance evaluation and finds that audio foundation models outperform symbolic representations across all 19 perceptual dimensions tested.

Product move

Keep symbolic scoring for instant browser feedback, but leave an explicit seam for audio-grade evaluation. The roadmap should not pretend MIDI-only metrics capture expressive quality completely.

Verified 2026-04-04
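
That seam can be made explicit in the types today, before any audio model is wired in; a sketch with assumed names:

```ts
// evaluation-seam.ts — sketch: one evaluator interface, two backends.
// Symbolic runs in the browser now; the audio slot stays empty until an
// audio-grade model is actually wired in. Names are illustrative.

export interface PerformanceEvaluator {
  readonly kind: "symbolic" | "audio";
  evaluate(run: { events: Array<{ tMs: number; ump: number }> }): Promise<number>;
}

export const symbolicEvaluator: PerformanceEvaluator = {
  kind: "symbolic",
  async evaluate(run) {
    // Instant browser feedback from MIDI data alone (placeholder heuristic).
    return run.events.length ? 100 : 0;
  },
};

// Audio-grade evaluation is a roadmap seam, not a shipped claim:
export let audioEvaluator: PerformanceEvaluator | null = null;
```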

arXiv · Workflow · Peer reviewed · Published Mar 9, 2026

From Daily Song to Daily Self: Supporting Reflective Songwriting of Deaf and Hard-of-Hearing Individuals through Generative Music AI

Argues that sustained, reflective multi-session support matters more than single-session novelty in generative music workflows for Deaf and Hard-of-Hearing creators.

Product move

Design around durable multi-session loops rather than one-shot novelty. Reflection, saved history, and continued iteration are stronger product seams than prompt-and-forget generation.

Verified 2026-04-04

Pricing

Start free. Move to pro when the loop becomes part of your daily rig.

The free tier is deliberately useful. Pro exists for the musicians who want AI-backed analysis, generated accompaniments, and a stronger operator surface around each saved run.

Free

$0 forever

Access 28 learning modes, connect MIDI devices, and track your progress across every session.

Start Free
  • All 28 learning modes
  • Web MIDI workbench + demo fallback
  • Progress tracking and streaks
  • Global leaderboards
  • Achievement badges
Recommended

Pro

$12.99/month

AI-powered coaching, music generation, and advanced analytics for serious musicians.

Upgrade To Pro
  • Everything in Free
  • AI coach feedback (Gemini)
  • AI music generation (Lyria 3)
  • Advanced practice analytics
  • Priority matchmaking in duels
  • Export practice history

Enterprise

Custom

For schools, labs, and hardware teams standardizing on browser-native symbolic tooling.

Talk To Us
  • Bulk onboarding
  • Teacher and fleet management surfaces
  • Shared device libraries
  • Custom integration support