How to Evaluate a Voice-Over Partner Before a Global Campaign Brief
The framework experienced PMs use to separate a genuine operational partner from a vendor that hands the coordination burden back to you — eight dimensions, the diagnostic questions, and the benchmarks that define good.
01 — The Real Stakes of a Bad Choice
You've been burned before. Maybe it was a broadcaster rejection at 5 PM on a Friday — files delivered at the wrong loudness spec, air date Monday. Maybe it was six markets, twelve artists, no single point of contact, and forty-plus emails trying to confirm which audio file matched which script version. Or maybe it was simpler: a talent who sounded exactly right on demo and completely wrong in session.
The pain is always the same. And it always arrives at the end of the production chain, where there is no time left to absorb it.
Two weeks out from a major multi-market brief, you are not trying to find a vendor. You are trying to prevent a specific class of failure — the kind where VO becomes the thing that delays air date, costs you a client relationship, or requires your weekend.
Before the framework, the context: the failure modes on multi-market VO campaigns are predictable. They follow consistent patterns because they share the same root cause — a vendor who absorbed none of the operational complexity and left it with you.
Six markets, twelve voice talents, no dedicated project management structure. That is 40–60 emails. That is 10–20 hours of sourcing time before a single line is recorded. That is live sessions across timezones with no linguistic support on the call, where you are the last quality gate on a Polish performance you cannot evaluate.
That is broadcast rejections arriving because European delivery was mastered at -18 LUFS when EBU R128 requires -23. That is version control collapsing across thirty-second, fifteen-second, and six-second cuts in three languages, some with mid-production script revisions.
None of this is bad luck. It is what happens when the coordination burden is not absorbed by the vendor.
The eight-dimension scorecard that follows is designed to identify, before the SOW is signed, which kind of partner you are dealing with.
02 — Eight Dimensions That Separate A-List Partners from Pretenders
Dimension 1: Talent Quality and Vetting
The most common failure in multi-market VO is the demo-session gap. A talent delivers a polished reel recorded years ago under perfect conditions. The PM casts. Session day arrives and the performance is flat — nothing like the demo.
The fix is structural, not a matter of trust: custom auditions on the actual campaign script, before any talent enters a shortlist. Not a generic demo. Not a reel. The exact script, read to brief.
Ask what percentage of applicants make it into the vendor's active pool. A marketplace has no meaningful answer — sign-up is open. A curated partner describes a multi-stage gate: creative evaluation, technical quality check, native verification for dialect and regional authenticity. The answer tells you whether you are dealing with a database or a roster.
A first-pass approval rate of 9 out of 10 means one revision round instead of three, and an air date that stays where you put it. Ask for it. If the vendor cannot produce it, that is the answer.
Dimension 2: Cultural Authenticity and Linguistic QA
There is a gap between "speaks the language" and "sounds like they belong in that market." It is where international campaigns succeed or fail, and it is the hardest dimension to evaluate in a briefing call — which is exactly why pretenders survive long enough to damage your campaign.
Linguistic accuracy is necessary but nowhere near sufficient. What matters is idiomatic naturalness, contemporary register, culturally appropriate tone, and emotional delivery that mirrors how people in the target market actually speak today. The history of international advertising is littered with expensive proof of what happens when this fails.
HSBC spent $10 million rebranding after "Assume Nothing" translated to "Do Nothing" in several markets. KFC's "Finger Lickin' Good" became "Eat Your Fingers Off" in Chinese. Pepsi's "Come Alive" told Chinese consumers it would bring their ancestors back from the dead. VO casting produces subtler versions of the same problem — and the damage to brand trust is just as real.
"Warmth" sounds fundamentally different in Japanese than in Brazilian Portuguese. US-style enthusiasm reads as aggressive in German markets.
Treating Spanish as a monolithic language is a structural error, not a casting oversight. Castilian, Mexican, Argentine, and Colombian Spanish differ in vocabulary, slang, rhythm, and emotional register. Parisian and Quebec French are effectively different markets. European and Brazilian Portuguese require separate casting entirely.
The evaluation mechanism that actually works is per-market linguistic QA at multiple stages — native speaker review of pronunciation and tone, accuracy checks against script, cultural resonance validation — not a single native check at the end. More critically: the vendor should have a language supervisor per market who joins live sessions. Without that role on the call, you are the last quality gate on markets you cannot evaluate.
One more dimension that separates a creative partner from a passive executor: transcreation. A vendor that receives a translated script and records it without questioning whether transcreation was needed is not protecting your campaign — they are executing it. McDonald's rendered "I'm lovin' it" in France as "C'est tout ce que j'aime" ("That's everything I love") — entirely different words, same brand impact. That kind of adaptation requires a vendor willing to raise the question before the session, not discover the problem after delivery.
Dimension 3: Global Coverage and Timezone Reach
A revision that arrives at 4 PM local time on the last day before delivery needs to be absorbed without a 24-hour delay. That requires operational infrastructure, not a promise.
19 hours of daily coverage via a global workflow is not a headline. It is the structural requirement for a partner who can absorb last-minute changes at the end of the production chain. The question is not "are you available?" — every vendor says yes. The question is: "At 4 PM London time, with a revised legal line in Brazilian Portuguese and German, what happens in the next four hours?"
The answer reveals whether standby engineers and talent exist, or whether the work re-enters a queue that resets the clock. Ask it verbatim in every evaluation call.
Dimension 4: Team Flexibility and Specialist Roles
A generalist crew wearing five hats is not a multi-market VO partner. It is a coordination risk wearing the right logo.
What a genuine partner provides: a dedicated voice director on staff (not freelance, not ad hoc), a dedicated PM as single point of contact who owns your project from brief to delivery, per-market language supervisors, and backup talent identified before session day. These are not luxury roles. They are the roles that prevent the failure modes outlined above.
The most diagnostic question: ask what happens when two stakeholders give contradictory creative direction during a session. Who resolves it? How? A passive executor escalates it back to you. A partner with a dedicated voice director translates the conflict into a clear performance note and moves the session forward.
When coordination is consolidated rather than fragmented, PMs reclaim an average of 2.1 hours per week across a campaign. Across a twelve-week multi-market production, that is the difference between owning your week and being the person every market is waiting on.
Dimension 5: Proven Experience at Your Project's Scale
Experience is easy to claim and hard to fake. The questions that expose it are specific.
"You've worked with multi-market broadcast campaigns at this scale — show me the pattern of how those projects run." A vendor with genuine depth answers with process: how casting timelines are built, how revision workflows are structured, what triggers a new job versus what's absorbed. A vendor without that depth describes outputs, not operations.
The proxy metrics that matter: 90,000+ jobs delivered across 20 years of experience does not just signal scale. It signals the depth of edge cases absorbed, failure modes encountered and resolved, and cultural markets navigated. A vendor who has worked with 75% of Fortune 500 brands has been tested at the level of scrutiny those brands apply — and survived.
Ask directly whether the vendor has experience with EBU R128 and ATSC A/85 compliance, AICP session delivery standards, and multi-format cutdown production. A partner with broadcast depth answers fluently. A vendor without it deflects to generalities.
Dimension 6: Technical Delivery and Broadcast Compliance
This dimension is where commodity providers expose themselves most clearly — because the specifications are unambiguous, and not knowing them is simply not knowing the job.
European broadcast requires delivery at -23 LUFS integrated (EBU R128), true peak maximum -1 dBTP. US broadcast requires -24 LKFS (ATSC A/85), true peak at -2 dBTP. For commercials under two minutes, EBU R128 Supplement 1 adds a maximum short-term loudness constraint of -18 LUFS. Digital platforms have their own targets: Netflix at -27 LUFS, YouTube at -14 LUFS. A vendor that cannot speak to these numbers fluently in a briefing call is telling you something important about their broadcast experience.
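The numbers above can be pinned down in a delivery matrix built at project start. A minimal sketch follows — the platform keys and the ±0.5 LU tolerance are illustrative assumptions, not an official spec table, so confirm every target against the broadcaster's current delivery documentation:

```python
# Sketch of a per-platform loudness matrix using the targets cited above.
# Tolerance and platform names are assumptions for illustration only.
DELIVERY_MATRIX = {
    # platform:      integrated loudness target, true-peak ceiling
    "broadcast_eu": {"lufs": -23.0, "true_peak_dbtp": -1.0},  # EBU R128
    "broadcast_us": {"lufs": -24.0, "true_peak_dbtp": -2.0},  # ATSC A/85
    "netflix":      {"lufs": -27.0, "true_peak_dbtp": -2.0},
    "youtube":      {"lufs": -14.0, "true_peak_dbtp": -1.0},
}

def check_master(platform: str, measured_lufs: float,
                 measured_peak_dbtp: float, tolerance: float = 0.5) -> list[str]:
    """Return a list of compliance problems for a measured master (empty = pass)."""
    spec = DELIVERY_MATRIX[platform]
    problems = []
    if abs(measured_lufs - spec["lufs"]) > tolerance:
        problems.append(f"loudness {measured_lufs} LUFS vs target {spec['lufs']} LUFS")
    if measured_peak_dbtp > spec["true_peak_dbtp"]:
        problems.append(f"true peak {measured_peak_dbtp} dBTP exceeds {spec['true_peak_dbtp']} dBTP")
    return problems
```

Run against the opening anecdote — a European master delivered at -18 LUFS — `check_master("broadcast_eu", -18.0, -0.8)` flags it immediately, which is exactly the check a competent vendor runs before files ever reach the broadcaster.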
Recording format is settled: 48 kHz, 24-bit depth, uncompressed WAV. Mono delivery for VO elements. MP3 is never acceptable for production masters. Noise floor at -60 dB or lower — premium agencies specify -65 dB to -70 dB. A minimum of 30 seconds of clean room tone should be captured per session for editors to use in crossfades and gap fills.
Room tone must remain consistent throughout — no shifts between takes. A vendor who doesn't capture it routinely is creating work for your editors downstream. Ask whether it's part of their session checklist.
The AICP Audio Session Prep Guidelines mandate specific delivery requirements: OMF/AAF with embedded audio, 5–10 second handles, tracks organized as Narration — Dialog — Music — SFX, a 1-frame audio beep exactly 2 seconds prior to first frame of picture, and stems that sum exactly to the full mix. The vendor should know all of this without being asked.
File naming for multi-market delivery requires a locked convention before recording begins: Brand, Campaign, Spot, Version, Language (ISO 639-1 + ISO 3166-1 codes), Artist ID, Element Type, Date — underscores as delimiters, no spaces, no special characters. Changing naming schemas mid-project generates rework. The delivery matrix mapping every platform's spec should be built at project start, not reconstructed from email chains.
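A locked convention is cheap to enforce mechanically. The sketch below assumes one possible field order and pattern — the exact schema is whatever you lock with the vendor before recording, not this one:

```python
import re

# Illustrative naming schema: Brand_Campaign_Spot_vN_llRR_ArtistID_Element_YYYYMMDD
# (llRR = ISO 639-1 language + ISO 3166-1 region, e.g. ptBR).
# Underscores as delimiters, no spaces, no special characters.
PATTERN = re.compile(
    r"^[A-Za-z0-9]+_[A-Za-z0-9]+_[A-Za-z0-9]+_v\d+"
    r"_[a-z]{2}[A-Z]{2}_[A-Za-z0-9]+_[A-Za-z0-9]+_\d{8}$"
)

def build_name(brand, campaign, spot, version, lang, region,
               artist_id, element, date):
    """Assemble a delivery filename and reject anything off-schema."""
    name = "_".join([brand, campaign, spot, f"v{version}",
                     f"{lang}{region}", artist_id, element, date])
    if not PATTERN.match(name):
        raise ValueError(f"non-compliant filename: {name}")
    return name
```

Validating at build time, rather than eyeballing a delivery folder, is what keeps version control from collapsing across thirty-second, fifteen-second, and six-second cuts in three languages.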
Dimension 7: Transparency in Timelines, Rates, and Revision Policy
Scope creep in VO does not usually happen because the vendor is dishonest. It happens because the revision policy was never defined — and the moment the script changes, nobody agrees on whether that's a pickup, a performance adjustment, or a new job.
What a genuine partner provides before the SOW is signed: all-in quoting with no hidden re-mastering or platform surcharges, a disclosed revision policy that specifies what is included (minor revisions, less than 20% of script, within 48 hours), what constitutes a performance adjustment versus a script change, and at what threshold a revision becomes a new job. Pre-cleared usage rights across territory, medium, and duration before casting begins — not discovered post-production.
"How do you handle scope expansion mid-campaign — if broadcast is added to a digital-only brief, what happens to talent rights and timeline?" The vendor with a documented process answers without hesitation. The vendor without one starts with "it depends."
In 2026, a written AI voice policy is not optional. SAG-AFTRA's Dynamic AI Audio Commercials Waiver requires performer consent for digital voice replica creation, with session fee plus 50% for AI-customized elements. A vendor without a clear policy on AI use of recorded voice is a liability.
Dimension 8: Backup Structure, Not Backup Promises
Every vendor will tell you they can handle last-minute changes. The question is whether that capability is structural or rhetorical.
The standard of evidence is operational: Is backup talent identified by name before session day, not sourced reactively if the primary drops? Is same-day re-record capability documented, and what does it require — standby engineers, cleared studio time, confirmed talent availability? Is there a crisis response protocol that exists in writing, has been activated on real campaigns, and can be described in specific operational terms?
Verbal reassurance — "we're very flexible," "we always find a way" — is not a crisis protocol. It is a risk transfer. When the primary talent goes dark two hours before a session, you want to know the backup is already booked.
20% of jobs delivered within 24 hours is the benchmark for genuine operational readiness at speed. The infrastructure that makes that possible — standby talent, on-call engineers, 19-hour coverage — is the same infrastructure that protects your campaign when the unexpected happens.
The Vendor Scorecard
Use this across evaluation conversations. Score 1–5 per dimension. A genuine multi-market partner scores 4–5 across every row. The dimensions where commodity providers consistently collapse are creative direction and cultural QA — the hardest to fake and the most expensive when absent.
| Dimension | Weight | Score 1–5 |
|---|---|---|
| Talent quality and vetting | 20% | __ / 5 |
| Cultural authenticity and linguistic QA | 20% | __ / 5 |
| Global coverage and timezone reach | 15% | __ / 5 |
| Team structure and specialist roles | 15% | __ / 5 |
| Proven experience at project scale | 10% | __ / 5 |
| Technical delivery and broadcast compliance | 10% | __ / 5 |
| Transparency: timelines, rates, revision policy | 5% | __ / 5 |
| Backup structure and crisis capability | 5% | __ / 5 |
Weight talent quality and cultural QA at 20% each — together they represent 40% of your evaluation. Price is not on this scorecard. A vendor who scores well on cost but collapses on cultural QA and creative direction will cost more in rework, retakes, and crisis management than the initial saving.
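The weighted total is simple arithmetic, but writing it out makes the point concrete. A minimal sketch using the table's weights (the example scores are hypothetical):

```python
# Weights from the scorecard above; must sum to 1.0.
WEIGHTS = {
    "talent_quality": 0.20,
    "cultural_qa": 0.20,
    "global_coverage": 0.15,
    "team_structure": 0.15,
    "proven_experience": 0.10,
    "technical_delivery": 0.10,
    "transparency": 0.05,
    "backup_structure": 0.05,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine 1-5 dimension scores into a weighted total out of 5."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
```

A vendor scoring 5 everywhere except 2 on talent quality and cultural QA lands at 3.8 — which is why collapse on the two 20% dimensions disqualifies regardless of how cheap or fast the vendor is elsewhere.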
Before You Sign: 11 Behavioral Green Flags
The scorecard measures process and capability. This checklist measures how a partner shows up in the room. Both matter.
Green flags in every evaluation conversation
- You know exactly what they do best — no ambiguity about the offer
- They push back when it serves the campaign, not just take the brief
- They follow up without being asked — because they know your inbox is burning
- You get senior expertise on your project, not a pitch-to-junior handoff
- They adapt to your workflow, not the other way around
- Pricing is all-in — no surprises at invoice
- They flag problems before recording, not after delivery
- Communication is proactive — you're informed, not chasing
- They back every claim with proof and reference jobs
- They're small enough to stay sharp and personal
- After the briefing call, the decision feels obvious
03 — Ten Questions That Expose Pretenders in a Briefing Call
Move past capabilities decks immediately. These questions, asked in sequence, create a diagnostic profile no amount of marketing polish can disguise.
On talent and casting
"Walk me through your talent vetting process from application to first booking — what percentage of applicants make it into your active pool?" Then test depth: "If I need Swiss German — specifically Zurich dialect, not Bern — how do you verify authenticity, and how quickly can you present validated options?" The answer reveals whether genuine linguistic infrastructure exists or whether the vendor is discovering this alongside you.
On operations under pressure
"I have a revised legal line affecting Brazilian Portuguese, Mexican Spanish, and German — all on the same campaign. Walk me through exactly what happens in the next four hours." A partner answers with operational specifics. A vendor says "we can accommodate that" and waits for you to ask the next question.
On technical delivery
"What's your file naming convention and delivery spec compliance process — do you deliver platform-specific masters for broadcast TV, Spotify, Meta, and pre-roll?" A partner answers with the exact spec structure. A vendor says "we deliver what you need."
On creative direction
"Do you provide creative direction in sessions, or do you expect us to direct talent? Who directs, and what's their background?" Then the diagnostic killer: "Show me a case where you identified a creative problem in a brief before recording and resolved it proactively." A passive executor has no answer. A creative partner has three examples ready.
On rights and risk
"How do you pre-clear usage rights before we enter a session — territory, medium, duration — and what happens when campaign scope expands to broadcast mid-flight?" And in 2026: "What is your written policy on AI-generated voice and digital replicas?" The response to the AI question alone will tell you whether the vendor is operating at current standards or behind them.
If you are two weeks from a major brief and evaluating a partner you haven't used before — the questions above will compress that evaluation to a single conversation. The answers will tell you whether the partner absorbs operational complexity or redistributes it back to you.
That is the only question that matters. Every dimension in the scorecard above is measuring some version of the same thing: how much of the multi-market VO production will you still own after you hand them the brief?
A world-class partner takes the coordination burden off your desk — casting validation, cultural QA, session direction, technical compliance, rights management, version control, deadline protection — and returns broadcast-ready files. The PM's job on their campaign is creative decisions and client management, not chasing file deliveries across email chains in three languages.
Score for the dimensions that determine this. Everything else follows.
Running a Multi-Market Brief in the Next Few Weeks?
Send it to VoiceArchive and we'll return a quote, a proposed cast, and a timeline before the end of the day — no back-and-forth required.
Send Us Your Brief