Scale AI launches Voice Showdown, the first real-world benchmark for voice AI, and the results are humbling for some top models


Last Updated: March 21, 2026


Voice AI is moving faster than the tools we use to measure it. Every major AI lab, including OpenAI, Google DeepMind, Anthropic, and xAI, is racing to ship voice models capable of natural, real-time conversation.

But the benchmarks used to evaluate these models are still largely built on synthetic speech, English-only prompts, and scripted test sets that bear little resemblance to how people actually talk.

Scale AI, the large data annotation startup whose founder was poached by Meta last year to lead its Superintelligence Lab, is still going strong and tackling the problem head on: today it launches Voice Showdown, what it calls the first global preference-based arena designed to benchmark voice AI through the lens of real human interaction.

The product offers users a distinctive value proposition: free access to the world's leading frontier models. Through Scale's ChatLab platform, users can interact with top-tier models, which normally require multiple $20-per-month subscriptions, at no cost. In exchange, users participate in occasional blind, head-to-head "battles," choosing which of two anonymized leading voice models delivers the better experience and supplying data for the industry's most authentic human-preference leaderboard of voice AI models.

"Voice AI is really the fastest-moving frontier in AI right now," said Janie Gu, product manager for Showdown at Scale AI. "But the way that we evaluate voice models hasn't kept up."

The results, drawn from thousands of spontaneous voice conversations across more than 60 languages, reveal capability gaps that other benchmarks have consistently missed.

How Scale's Voice Showdown works

Voice Showdown is built on ChatLab, Scale's model-agnostic chat platform where users can freely interact with whichever frontier AI model they choose, at no cost, within a single app. The platform has been available to Scale's global community of over 500,000 annotators, with roughly 300,000 having submitted at least one prompt. Scale is opening the platform to a public waitlist today.

The evaluation mechanism is elegant in its simplicity: while a user is having a natural voice conversation with a model, the system occasionally, on fewer than 5% of all voice prompts, surfaces a blind side-by-side comparison. The same prompt is sent to a second, anonymous model, and the user picks which response they prefer.
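The battle-sampling gate can be sketched in a few lines. This is a hypothetical illustration, not Scale's implementation: the names `BATTLE_RATE` and `handle_prompt` are invented here, and the only detail taken from the article is that battles surface on fewer than 5% of voice prompts.

```python
import random

BATTLE_RATE = 0.05  # per the article: battles appear on fewer than 5% of voice prompts

def handle_prompt(current_model, candidate_models, rng=random.random):
    """Decide whether a voice prompt gets a normal reply or a blind battle."""
    if rng() < BATTLE_RATE:
        # Sample an anonymous challenger; the user sees both responses
        # side by side with no model names attached.
        challenger = random.choice(
            [m for m in candidate_models if m != current_model]
        )
        return {"mode": "battle", "models": [current_model, challenger]}
    return {"mode": "normal", "models": [current_model]}
```

Because the trigger is a low-probability coin flip inside ordinary conversations, prompts are never written for the benchmark; they are simply whatever the user was already saying.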

This design solves three problems that plague existing voice benchmarks.

First, every prompt comes from real human speech, with accents, background noise, half-finished sentences, and conversational filler, rather than synthesized audio generated from text.

Second, the platform spans more than 60 languages across six continents, with over a third of battles occurring in non-English languages including Spanish, Arabic, Japanese, Portuguese, Hindi, and French.

Third, because battles take place within users' actual daily conversations, 81% of prompts are conversational or open-ended, questions without a single correct answer. That rules out automated scoring and makes human preference the only credible signal.

Voice Showdown currently runs two evaluation modes: Dictate (users speak, models respond with text) and Speech-to-Speech, or S2S (users speak, models talk back). A third mode, Full Duplex, which captures real-time, interruptible conversation, is in development.

Incentive-aligned voting

One design detail sets Voice Showdown apart from Chatbot Arena (LM Arena), the text benchmark it most closely resembles. In LM Arena, critics have noted that users sometimes cast throwaway votes with little stake in the outcome. Voice Showdown addresses this directly: after a user votes for the model they preferred, the app switches them to that model for the rest of their conversation. If you voted for GPT-4o Audio over Gemini, you're now talking to GPT-4o Audio. That alignment of outcome with choice discourages casual or dishonest voting.

The system also controls for confounds that could corrupt comparisons: both model responses begin streaming simultaneously (eliminating speed bias), voice gender is matched across both options (eliminating gender preference bias), and neither model is identified by name during voting.

The new voice AI leaderboard every enterprise decision-maker should pay attention to

Voice Showdown launches with 11 frontier models evaluated across 52 model-voice pairs as of March 18, 2026. Not all models support both evaluation modes: the Dictate leaderboard includes 8 models, while S2S includes 6.

Dictate Leaderboard (Speech-In, Text-Out)

In this mode, users provide a spoken prompt and evaluate two side-by-side text responses. Here are the baseline scores:

  1. Gemini 3 Pro (1073)

  2. Gemini 3 Flash (1068)

  3. GPT-4o Audio (1019)

  4. Qwen 3 Omni (1000)

  5. Voxtral Small (925)

  6. Gemma 3n (918)

  7. GPT Realtime (875)

  8. Phi-4 Multimodal (729)

Note: Gemini 3 Pro and Gemini 3 Flash are statistically tied for the top rank.

Speech-to-Speech (S2S) Leaderboard

In this mode, users speak to the model and evaluate two competing audio responses. These are also baseline scores:

  1. Gemini 2.5 Flash Audio (1060)

  2. GPT-4o Audio (1059)

  3. Grok Voice (1024)

  4. Qwen 3 Omni (1000)

  5. GPT Realtime (962)

  6. GPT Realtime 1.5 (920)

Note: Gemini 2.5 Flash Audio and GPT-4o Audio are statistically tied for the top rank in baseline evaluations.
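The scores above are Elo-style ratings derived from pairwise preference votes. The article does not publish Scale's exact rating method, so the sketch below uses the standard Elo update as an assumption; the K-factor of 32 is a conventional choice, not a confirmed parameter.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A wins a battle under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Apply one blind-battle vote to both ratings; point exchange is zero-sum."""
    e_a = expected_score(r_a, r_b)        # pre-vote win probability for A
    s_a = 1.0 if a_won else 0.0           # observed outcome for A
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))
```

With both models at the 1000-point baseline (where Qwen 3 Omni sits in both tables), a single win under these assumptions moves the winner up 16 points and the loser down 16; upsets against higher-rated models move more points.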

Dictate rankings are led by Google's Gemini 3 Pro and Gemini 3 Flash, which are statistically tied at #1 with Elo scores around 1,043-1,044 after style controls.

GPT-4o Audio holds a clear third place. Open-weight models including Gemma 3n, Voxtral Small, and Phi-4 Multimodal trail significantly.

Speech-to-Speech (S2S) rankings show a tighter race at the top, with Gemini 2.5 Flash Audio and GPT-4o Audio statistically tied at #1 in the baseline rankings.

After adjusting for response length and formatting, factors that can inflate perceived quality, GPT-4o Audio pulls ahead (1,102 Elo vs. 1,075 for Gemini 2.5 Flash Audio).

Grok Voice jumps to a close second at 1,093 under style controls, suggesting its raw #3 ranking undersells its actual performance.

Qwen 3 Omni, the open-weight model from Alibaba's Qwen team, performs better on pure preference than its reputation would suggest, ranking fourth in both modes, ahead of several higher-profile names.

"When people come in, they go for the big names," Gu noted. "But on preference, lesser-known models like Qwen actually pull ahead."

Surprises revealed by real-world preference data

Beyond rankings, Voice Showdown's real value is in its failure diagnostics, and those paint a more complicated picture of voice AI than most leaderboards reveal.

The multilingual gap is worse than you think

Language robustness is the starkest differentiator across models. In Dictate, Gemini 3 models lead across essentially every language tested.

In S2S, the winner depends heavily on which language is being spoken: GPT-4o Audio leads in Arabic and Turkish; Gemini 2.5 Flash Audio is strongest in French; Grok Voice is competitive in Japanese and Portuguese.

But the more alarming finding is how frequently some models simply stop responding in the user's language at all.

GPT Realtime 1.5, OpenAI's newer real-time voice model, responds in English to non-English prompts roughly 20% of the time, even in high-resource, officially supported languages like Hindi, Spanish, and Turkish.

Its predecessor, GPT Realtime, mismatches at about half that rate (~10%). Gemini 2.5 Flash Audio and GPT-4o Audio sit at ~7%.

The phenomenon runs in both directions: some models carry non-English context from earlier in a conversation into an English turn, or simply mishear a prompt and generate an unrelated response in the wrong language entirely.

User verbatims from the platform capture the frustration bluntly: "I said I have an interview today with Quest Management and instead of answering, it gave me information about 'Risk Management.'"

"GPT Realtime 1.5 thought I was speaking incoherently and recommended mental health support, while Qwen 3 Omni correctly identified that I was speaking a Nigerian native language."

The reason current benchmarks miss this: they're built on synthetic speech optimized for clean acoustic conditions, and they're rarely multilingual. Real speakers in real environments, with background noise, short utterances, and regional accents, break speech understanding in ways lab conditions don't anticipate.

Voice choice is more than aesthetics

Voice Showdown evaluates models not just at the model level but at the individual voice level, and the variance within a single model's voice catalog is striking.

For one unnamed model in the study, the best-performing voice won 30 percentage points more often than the worst-performing voice from the same underlying model. Both voices share the same reasoning and generation backend. The difference is entirely in audio presentation.

The top-performing voices tend to win or lose on audio understanding and content completeness: whether the model heard you correctly and answered fully. But speech quality remains a deciding factor at the voice-selection level, particularly when models are otherwise comparable. "Voice directly shapes how users evaluate the interaction," Gu said.

Models degrade in conversation

Most benchmarks test a single turn. Voice Showdown tests how models hold up across extended conversations, and the results aren't flattering.

On Turn 1, content quality accounts for 23% of model failures. By Turn 11 and beyond, it becomes the primary failure mode at 43%. Most models see their win rates decline as conversations lengthen, struggling to maintain coherence across multiple exchanges.

GPT Realtime variants are an exception, marginally improving on later turns, consistent with their known strengths on longer contexts and their documented weakness on the brief, noisy utterances that dominate early interactions.

Prompt length shows a complementary pattern: short prompts (under 10 seconds) are dominated by audio understanding failures (38%), while long prompts (over 40 seconds) shift the primary failure toward content quality (31%). Shorter audio gives models less acoustic context to parse; longer requests are understood but harder to answer well.

Why some voice AI models lose

After every S2S comparison, users tag why they preferred one response over the other across three axes: audio understanding, content quality, and speech output. The failure signatures differ meaningfully by model.

Qwen 3 Omni's losses cluster around speech generation: its reasoning is competitive, but users are put off by how it sounds. GPT Realtime 1.5's losses are dominated by audio understanding failures (51%), consistent with its language-switching behavior on challenging prompts. Grok Voice's failures are more evenly balanced across all three axes, indicating no single dominant weakness but no particular strength either.

What's next

The current leaderboard covers turn-based interaction: you speak, the model responds, repeat. But real voice conversations don't work that way. People interrupt, change course mid-sentence, and talk over each other.

Scale says Full Duplex evaluation, designed to capture these real-time dynamics through human preference rather than scripted scenarios or automated metrics, is coming to Showdown next. No current benchmark captures full-duplex interaction through organic human preference data.

The leaderboard is live at scale.com/showdown. A public waitlist to join ChatLab and vote on comparisons is open today, with users receiving free access to frontier voice models including GPT-4o, Gemini, and Grok in exchange for occasional preference votes.

