
[DotAI2024] Neil Zeghidour – Forging Multimodal Foundations for Voice AI

Neil Zeghidour, co-founder and Chief Modeling Officer at Kyutai, demystified multimodal language models at DotAI 2024. Having moved from Google DeepMind's generative-audio vanguard, where he pioneered text-to-music APIs and neural codecs, to Kyutai's open-science lab, Zeghidour chronicled the genesis of Moshi: the first open-source, real-time voice AI blending text fluency with auditory nuance.

Elevating Text LLMs to Sensory Savants

Zeghidour contextualized the ubiquity of text LLMs, from translation to coding assistance, yet lamented their sensory blindness. True assistants demand perceptual breadth: visual discernment, auditory acuity, and generative expressivity such as image synthesis or fluid spoken discourse.

Moshi embodies this fusion, channeling voice bidirectionally with full-duplex latency under 200 ms. Unlike predecessors, whether Siri's scripted retorts or ChatGPT's turn-taking delays, Moshi interweaves streams, handling interruptions without artifacts via multi-stream modeling: discrete semantic tokens carry linguistic content while acoustic tokens carry prosody and timbre.
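As a rough illustration of the duplex idea, here is a minimal sketch (not Kyutai's implementation) of two token streams advanced in lockstep, so the model conditions on the user's incoming audio even while generating its own frames. `EchoModel`, `duplex_step`, and the frame API are all hypothetical stand-ins.

```python
START = 0  # hypothetical start-of-stream token

class EchoModel:
    """Toy stand-in for a speech LM: 'predicts' by echoing the user's latest token."""
    def predict_next_frame(self, context):
        user_token, _ = context[-1]
        return user_token

def duplex_step(model, user_tokens, model_tokens):
    """Predict the model's next audio frame from BOTH streams so far."""
    # The model's own stream lags one frame behind the user's; pad it so
    # the two streams align per timestep: [(user_t, model_{t-1}), ...]
    padded = [START] + model_tokens
    context = list(zip(user_tokens, padded))
    return model.predict_next_frame(context)

def converse(model, mic_frames):
    """Full-duplex loop: no turn boundaries, one frame in, one frame out."""
    user_stream, model_stream = [], []
    for frame in mic_frames:              # audio arrives continuously
        user_stream.append(frame)
        nxt = duplex_step(model, user_stream, model_stream)
        model_stream.append(nxt)          # emitted while still listening
        yield nxt
```

The point of the structure is that there is no "your turn / my turn" boundary: each timestep holds a token from every stream, which is what lets a real model absorb an interruption mid-utterance.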

This architecture, Zeghidour detailed, disentangles content from timbre, enabling role-aware training. Voice actress Alice's emotive recordings, from whispers to cowboy drawls, seed synthetic dialogues, producing hundreds of thousands of hours from which Moshi learns conversational deference, yielding the floor fluidly.

Unveiling Technical Ingenuity and Open Horizons

Zeghidour dissected Mimi, Kyutai's streaming codec: compressing audio to a tiny fraction of the bitrate of conventional codecs while preserving perceptual quality, it encodes raw waveforms into a manageable stream of tokens for LLM ingestion. Training on vast, permissioned corpora of podcasts and audiobooks, Moshi masters accents, emotions, and interruptions, rivaling human cadence.
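To see why such tokens are "manageable" for an LLM, a back-of-envelope calculation helps. The figures below match Mimi's published configuration (12.5 frames per second, 8 codebooks of 2048 entries each), but treat them as illustrative rather than definitive:

```python
import math

# Back-of-envelope token and bitrate math for a neural audio codec.
# Figures are Mimi's reported configuration, used here illustratively.
FRAME_RATE_HZ = 12.5   # codec frames per second
NUM_CODEBOOKS = 8      # residual quantizers per frame
CODEBOOK_SIZE = 2048   # entries per codebook

bits_per_token = math.log2(CODEBOOK_SIZE)          # 11 bits per token
tokens_per_sec = FRAME_RATE_HZ * NUM_CODEBOOKS     # 100 tokens per second
bitrate_bps = tokens_per_sec * bits_per_token      # 1100 bps, i.e. ~1.1 kbps

# Compare with raw 24 kHz, 16-bit mono PCM:
raw_bps = 24_000 * 16                              # 384,000 bps
compression = raw_bps / bitrate_bps                # roughly 350x smaller

print(f"{tokens_per_sec:.0f} tokens/s, {bitrate_bps/1000:.2f} kbps, "
      f"{compression:.0f}x smaller than raw PCM")
```

A hundred tokens per second is well within what an autoregressive LLM can generate in real time, which is what makes streaming speech-in, speech-out modeling feasible at all.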

Challenges abounded: echo cancellation for full-duplex audio, the subtlety of prosody. Yet open-sourcing the weights, code, and a 60-page technical report democratizes replication, from quantized inference on a MacBook to commercial scaling.

Zeghidour’s Moshi-Moshi vignette hinted at emergent quirks—self-dialogues veering philosophical—while inviting scrutiny via Twitter. Kyutai’s mandate: propel voice agents through transparency, fostering adoption in research and beyond.

In Moshi, Zeghidour glimpsed assistants unbound by the tyranny of text, conversing as kin: a sonic stride toward AGI's empathetic embrace.

Links: