Technology
AssemblyAI Universal-3 Pro Streaming: LLM-as-Decoder ASR Under 300ms Latency
AssemblyAI's Universal-3 Pro Streaming uses an LLM as the decoder, achieving <300ms P50 latency, mid-sentence multilingual switching, and runtime vocabulary injection.
May 1, 2026 · 1 min read