Ctrl-labs raises $28 million from GV and Alexa Fund for neural interfaces
On the software side, the accompanying SDK is “more mature,” with built-out JavaScript and TypeScript toolchains and new prebuilt demos that give an idea of the hardware’s capabilities. Programming is largely done through WebSockets, which provide a full-duplex communication channel.
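To make that concrete, here is a minimal sketch of what talking to such a WebSocket stream could look like in TypeScript. The endpoint, subscription message, and event shape below are illustrative assumptions; the article only establishes that the SDK communicates over WebSockets, and the real protocol is defined by Ctrl-labs’ own documentation.

```typescript
// Assumed shape of a streamed event (not from Ctrl-labs' docs).
interface EmgEvent {
  timestamp: number;   // milliseconds since stream start (assumed)
  channels: number[];  // one reading per electrode (16 on Ctrl-kit)
}

// Hypothetical local endpoint exposed by the SDK.
const socket = new WebSocket("ws://localhost:9999/stream");

socket.onopen = () => {
  // Hypothetical subscription message.
  socket.send(JSON.stringify({ type: "subscribe", stream: "emg" }));
};

socket.onmessage = (msg: MessageEvent) => {
  const event: EmgEvent = JSON.parse(msg.data);
  console.log(`t=${event.timestamp} first channel=${event.channels[0]}`);
};

socket.onerror = (err) => console.error("WebSocket error:", err);
```

Because WebSockets are full-duplex, the same connection can carry both the outbound subscription request and the inbound stream of decoded events, which is what makes the transport a natural fit for continuous sensor data.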
“We’re at the point of the launch where … we want to get it out [to] developers,” Berenzweig said.
The final version of Ctrl-kit will be a single piece of hardware, but it won’t be an entirely self-contained affair. The developer kit has to be tethered to a PC for some processing, though the goal is to shrink that overhead to the point where everything can run on wearable system-on-chips.
The underlying tech remains the same. Ctrl-kit leverages differential electromyography (EMG) to translate mental intent into action, specifically by measuring changes in electrical potential caused by impulses traveling from the brain to hand muscles. Sixteen electrodes monitor the motor neuron signals amplified by the muscle fibers of motor units, and AI algorithms trained using Google’s TensorFlow distinguish between the individual pulses of each nerve.
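For a sense of what that classification step involves, here is a toy sketch using TensorFlow.js, matching the SDK’s JavaScript/TypeScript tooling. The window length, layer sizes, and gesture classes are all assumptions made for illustration; Ctrl-labs has not published its actual model architecture.

```typescript
import * as tf from "@tensorflow/tfjs";

// Toy classifier over a short window of 16-channel EMG samples.
const WINDOW = 50;    // samples per window (assumed)
const CHANNELS = 16;  // electrodes, per the article
const CLASSES = 5;    // e.g. five distinct finger movements (assumed)

const model = tf.sequential({
  layers: [
    tf.layers.flatten({ inputShape: [WINDOW, CHANNELS] }),
    tf.layers.dense({ units: 64, activation: "relu" }),
    tf.layers.dense({ units: CLASSES, activation: "softmax" }),
  ],
});

model.compile({
  optimizer: "adam",
  loss: "categoricalCrossentropy",
  metrics: ["accuracy"],
});

// Real training would use labeled EMG windows; random placeholders here.
const x = tf.randomNormal([256, WINDOW, CHANNELS]);
const y = tf.oneHot(tf.randomUniform([256], 0, CLASSES, "int32"), CLASSES);

model.fit(x, y, { epochs: 3 }).then(() => {
  const probs = model.predict(
    tf.randomNormal([1, WINDOW, CHANNELS])
  ) as tf.Tensor;
  probs.print(); // per-class probabilities for one window
});
```

The point of the sketch is the pipeline shape: windows of multi-channel electrode readings go in, and a probability over discrete intents comes out, which downstream code can treat like any other input event.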
The system works independently of muscle movement; generating a brain activity pattern that Ctrl-labs’ tech can detect requires no more than the firing of a neuron down an axon, or what neuroscientists call an action potential. That puts it a class above wearables that use electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw on the cleaner, clearer signals from motor neurons, and as a result are limited only by the accuracy of the software’s machine learning model and the snugness of the contacts against the skin.
As for what Ctrl-labs expects its early adopters to build with Ctrl-kit, video games top the list — particularly virtual reality games, which Berenzweig believes are a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.) And not too long ago, Ctrl-labs demonstrated a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop with their fingertips.
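A demo like that virtual keyboard could be wired up roughly as follows. The FingerTap event type and the finger-to-character table are invented for illustration; the article describes only the demo’s behavior, not its code.

```typescript
type Finger = "thumb" | "index" | "middle" | "ring" | "pinky";

// Assumed shape of a decoded tap event from the armband.
interface FingerTap {
  hand: "left" | "right";
  finger: Finger;
}

// Hypothetical one-character-per-finger mapping, home-row style.
const KEYMAP: Record<string, string> = {
  "left:pinky": "a", "left:ring": "s", "left:middle": "d",
  "left:index": "f", "left:thumb": " ",
  "right:index": "j", "right:middle": "k", "right:ring": "l",
  "right:pinky": ";", "right:thumb": " ",
};

let buffer = "";

function onTap(tap: FingerTap): void {
  const key = KEYMAP[`${tap.hand}:${tap.finger}`];
  if (key !== undefined) {
    buffer += key;
    console.log(`typed: "${buffer}"`);
  }
}

// Example: a wearer taps the tabletop with their right index finger.
onTap({ hand: "right", finger: "index" });
```

In other words, once the hardware and model have turned nerve impulses into discrete tap events, typing reduces to an ordinary lookup-and-append loop.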
It remains to be seen if Ctrl-labs can succeed where others have failed. In October, Amazon-backed wearables company Thalmic Labs killed its gesture- and motion-guided Myo armband, which similarly tapped the electrical activity in arm muscles to control devices.
Still, it’s managed to attract talent like former Apple autonomous systems engineer Tarin Ziyaee, who’s heading up development at Ctrl-labs’ San Francisco office, and Anthony Moschella, previously vice president of product at Peloton and MakerBot. Moreover, investors like Erik Nordlander, general partner at GV, are convinced that Ctrl-labs’ early momentum, in addition to the robustness of its developer tools, will help it gain an early lead in the brain-machine interface race.
“Ctrl-labs’ development of neural interfaces will empower developers to create novel experiences across a wide variety of applications,” he said. “The company has assembled a team of top neuroscientists, engineers, and developers with deep technology backgrounds, creating human-computer interactions unlike anything we have seen before.”
Source: VentureBeat