Can AI Create ASL?

ASL in Veo 3
Veo 3 from Google is an amazing tool with advanced video generation capabilities, but can it be used to create accurate and culturally appropriate ASL videos? Let's analyze how well it handles the complexities of American Sign Language, including handshape accuracy, facial expressions, fluency, and cultural relevance.

Google’s Veo 3 is a groundbreaking AI text-to-video generation model, transforming text prompts into high-quality videos with synchronized audio. Its creations are proliferating on X, with some hilariously entertaining and others so lifelike they nearly pass the video Turing test, appearing almost indistinguishable from human-made videos.

For the Deaf community, American Sign Language (ASL) is a cornerstone of communication, defined by its intricate linguistic complexity and visual-gestural nature, where hand shapes, movements, and facial expressions convey profound meaning.

Can AI create ASL that captures its linguistic depth and cultural nuances? In this post, I’ve tasked Veo 3 with generating videos of simple ASL phrases, starting with the alphabet and progressing to basic sentences, to evaluate how well it handles this nuanced language. Veo 3 holds immense promise for crafting dynamic visual content, and I’m sure it will change the film industry, but its ability to render accurate and meaningful ASL sequences relies on precise prompting, a deep understanding of ASL’s linguistic structure, and the finesse to handle subtle gestures. By collaborating with the Deaf community to review these outputs, we aim to ensure cultural respect and authenticity while exploring AI’s potential for accessibility.

The following videos were the output.

Veo 3 Test: ASL ABCs

Prompt:

A professional sign language interpreter stands in front of a plain background, looking directly at the camera, signing the ASL alphabet “ABCDEFGHIJKLMNOPQRSTUVWXYZ” with clear handshapes, a neutral but engaged facial expression, and steady eye contact. The scene is well lit, professionally filmed in a medium close-up shot, documentary style. Avoid blurry hands, incorrect signs, or distracting backgrounds.

Evaluation

  • Handshape Accuracy (0-5): 1 – Mostly unintelligible; no clear alphabet sequence due to gibberish, though some vague shapes appear.
  • Movement Fluency (0-5): 1 – Chaotic and disjointed with random flow.
  • Facial Expression (0-5): 0 – No engagement or ASL cues, rendering it expressionless.
  • Clarity and Lighting (0-5): 4 – Well-lit with good resolution, but clarity can’t salvage the content.
  • Overall Authenticity (0-5): 0 – Complete gibberish, far from ASL standards.
  • Total Score: 6/25
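The rubric above is simple enough to tally programmatically. Here is a minimal sketch, assuming the five category names and 0–5 scale exactly as listed in the evaluation; the scores are those our interpreters assigned to the alphabet test.

```python
# Rubric scores for the Veo 3 ASL alphabet test (0-5 per category),
# copied from the evaluation above.
rubric = {
    "Handshape Accuracy": 1,
    "Movement Fluency": 1,
    "Facial Expression": 0,
    "Clarity and Lighting": 4,
    "Overall Authenticity": 0,
}

total = sum(rubric.values())          # add up the five category scores
max_score = 5 * len(rubric)           # five categories, five points each
print(f"Total Score: {total}/{max_score}")  # Total Score: 6/25
```

The same tally applies to every test in this series, so scores stay comparable across videos.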

Comments

Our ASL interpreters said the video was well lit and looked great, but that polish is wasted on unintelligible output. The lack of facial expression further disconnects it from ASL. Scoring 6/25, this highlights Veo 3’s current inability to render ASL accurately, even on something as straightforward as the ABCs.

Veo 3 Test: ASL 1–10

Prompt:

A professional sign language interpreter stands in front of a plain background, looking directly at the camera, signing the numbers 1–10 “1, 2, 3, 4, 5, 6, 7, 8, 9, 10” with clear handshapes, a neutral but engaged facial expression, and steady eye contact. The scene is well lit, professionally filmed in a medium close-up shot, documentary style. Avoid blurry hands, incorrect signs, or distracting backgrounds.

Evaluation

  • Handshape Accuracy (0-5): 0.5 – Mostly gibberish; only the number 1 handshape is marginally correct, with others distorted beyond recognition.
  • Movement Fluency (0-5): 1 – Disjointed and incomplete.
  • Facial Expression (0-5): 0 – No engagement or ASL cues, despite the prompt.
  • Clarity and Lighting (0-5): 4 – Well-lit with good resolution, but clarity highlights the gibberish.
  • Overall Authenticity (0-5): 0.5 – Mostly unusable; the number 1 offers slight recognition, but the rest fails ASL standards.
  • Total Score: 6/25

Comments

Our interpreters found this ASL numbers video largely gibberish, with only the number 1 handshape faintly recognizable. The excellent lighting and resolution only make the failure more apparent, and the absence of facial expression further disconnects it from ASL. Scoring 6/25, the slight success with the number 1 suggests potential, but the overall output demands ASL expert input.

Final Thoughts

Veo 3 represents a mind-blowing, science-fiction-level video creation tool, pushing the boundaries of artificial intelligence with its ability to generate stunning, lifelike visuals. Its capacity to transform text prompts into high-quality, synchronized audio-visual content has sparked widespread awe, with creations proliferating on platforms like X, where they range from hilariously creative to nearly indistinguishable from human-made works, on the edge of passing the video “Turing test”.

However, when it comes to rendering American Sign Language (ASL), this technological marvel falls dramatically flat. Our interpreters at Partners Interpreting reviewed the tested outputs and unanimously confirmed them as gibberish, with only the number 1 handshape in the 1–10 video marginally correct. We also produced a series of other short videos, but their quality was so poor that showcasing them seemed pointless. The lighting, visuals, and realism of the signer’s appearance are incredible, yet the ASL itself is essentially absent from these AI-generated videos. This stark disappointment highlights that getting just one number right out of the extensive range of signs tested falls woefully short of the standard required for a language as rich, vital, and culturally significant as ASL.

This failure underscores a critical gap in Veo 3’s current capabilities. ASL is not merely a series of hand movements but a complex linguistic system that incorporates non-manual markers, regional variations, and nuanced gestures, elements that demand a level of precision and cultural understanding that current AI, including Veo 3, has yet to master. The gibberish output, despite the tool’s impressive visual fidelity, exposes the limitations of applying general video generation technology to specialized linguistic domains without tailored training and expert oversight. As a result, we at Partners Interpreting must advocate strongly for the continued reliance on professional interpreters over AI for now. The accuracy, authenticity, and cultural sensitivity that ASL demands can only be guaranteed by human experts who live and breathe this language every day.

That said, we recognize the potential of AI to evolve and are committed to monitoring its development closely. Future iterations, equipped with refined prompts that incorporate natural pacing and ASL expert guidance, along with enriched training data informed by the Deaf community, might one day bridge this gap. Until that day arrives, however, we urge you to invest in Partners Interpreting (PI) for your ASL needs: use PI, not AI. Our team of certified interpreters delivers reliable, high-quality ASL services that AI cannot yet replicate, ensuring clear communication and respect for the Deaf community. This is not a dismissal of technology but a call to prioritize human expertise where it matters most.

We welcome your insights to push this technology forward. The Deaf community’s expertise is invaluable, and we encourage you to share your thoughts in the comments below. Partners Interpreting remains dedicated to leading this conversation with integrity and collaboration.


