
Google’s Creatability project taps AI to make creative tools more accessible

More than 56 million people in the United States are living with a disability, according to the U.S. Census Bureau, and there's a growing digital divide between those who have a disability and those who don't. Disabled Americans are roughly three times as likely as those without a disability to avoid going online, and 20 percent less likely to own a computer, smartphone, or tablet. Moreover, just 40 percent say they're confident in their ability to use the internet.
In an effort to promote a more accessible web, Google and New York University's Ability Project today launched Creatability, a set of experiments exploring how artificial intelligence (AI) can lend a hand in accommodating people who are blind, deaf, or physically impaired.
The experiments are available on the Creatability website, and Google has open-sourced the code. The company is also soliciting new experiments from developers, who can submit their creations for a chance to be featured.

The experiments include a music-composing tool that lets you create tunes by moving your face, a digital canvas that translates sights and sounds into sketches, and a music visualizer that mimics the effects of synesthesia.
Most leverage PoseNet, a machine learning model built on Google's TensorFlow framework, to detect body joints in images and video. Using any off-the-shelf webcam, you can draw with your face or tap out a tune with your nose. The model runs in JavaScript, and all images are processed on-device, in the browser.
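For a sense of how that works in practice, here is a minimal sketch of nose-driven drawing in the browser using the open-source @tensorflow-models/posenet package. The page elements, confidence threshold, and dot-drawing logic are illustrative assumptions, not Creatability's actual code.

// Minimal sketch: track the user's nose with PoseNet and draw with it.
// Assumes an HTML page with <video id="webcam" autoplay> showing the
// webcam feed and a <canvas id="sketch"> to draw on.
import '@tensorflow/tfjs'; // registers the in-browser TensorFlow.js backend
import * as posenet from '@tensorflow-models/posenet';

async function drawWithYourNose(): Promise<void> {
  const video = document.getElementById('webcam') as HTMLVideoElement;
  const canvas = document.getElementById('sketch') as HTMLCanvasElement;
  const ctx = canvas.getContext('2d')!;

  // Load the pose model once; after this, every frame is processed
  // locally in the browser, so no images leave the device.
  const net = await posenet.load();

  async function frame(): Promise<void> {
    // Estimate a single pose; flipHorizontal mirrors the selfie view.
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    const nose = pose.keypoints.find((k) => k.part === 'nose');
    if (nose && nose.score > 0.5) { // 0.5 is an assumed confidence cutoff
      // Leave a dot wherever the nose points.
      ctx.beginPath();
      ctx.arc(nose.position.x, nose.position.y, 4, 0, 2 * Math.PI);
      ctx.fill();
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

drawWithYourNose();

The same keypoint stream could just as easily drive a synthesizer instead of a canvas, which is essentially how the face-controlled music experiments work.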
Google said it worked with creators in the accessibility community to build Creatability, including composer Jay Alan Zimmerman, who’s deaf; Josh Miele, a blind scientist and designer; Chancey Fleet, a technology educator; and Open Up Music founders Barry Farrimond and Doug Bott, who work with young disabled musicians to build inclusive orchestras.

“We hope these experiments inspire others to unleash their inner artist regardless of ability,” Claire Kearney-Volpe, a designer and researcher at the NYU Ability Project, wrote in a blog post. “Art gives us the ability to point beyond spoken or written language, to unite us, delight, and satisfy. Done right, this process can be enhanced by technology — extending our ability and potential for play.”
It's not the first time AI has been used to build accessible products.
Google's DeepMind division is using AI to generate closed captions for deaf users. In a 2016 joint study with researchers at the University of Oxford, scientists created a model that significantly outperformed a professional lip-reader, transcribing 46.8 percent of words without error across 200 randomly selected clips, compared with the professional's 12.4 percent.
Facebook, meanwhile, has developed captioning tools that describe photos to visually impaired users. Google's Cloud Vision API can identify objects in photos and understand their context. And Microsoft's Seeing AI app can read handwritten text, describe colors and scenes, and more.
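To make that concrete, here is a hedged sketch of label detection with the Cloud Vision API's official Node.js client, @google-cloud/vision. The credentials setup and file name are assumptions; Facebook's and Microsoft's tools run on their own, different stacks.

// Sketch: ask Cloud Vision what it sees in a photo — the kind of label
// detection that underpins automatic image descriptions.
// Assumes Google Cloud credentials are configured in the environment;
// 'photo.jpg' is a placeholder path.
import { ImageAnnotatorClient } from '@google-cloud/vision';

async function describePhoto(fileName: string): Promise<void> {
  const client = new ImageAnnotatorClient();

  // labelDetection returns ranked guesses about the image's contents.
  const [result] = await client.labelDetection(fileName);
  for (const label of result.labelAnnotations ?? []) {
    console.log(`${label.description}: ${(label.score ?? 0).toFixed(2)}`);
  }
}

describePhoto('photo.jpg');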
Source: VentureBeat
