Facebook faces new frontiers with real-time brain-to-text research
Thu, 1st Aug 2019

Back in 2017, Facebook announced that it was teaming up with the University of California, San Francisco (UCSF) to develop a wearable device that lets people type words simply by imagining themselves speaking those words aloud.

Two years later, the research is well underway. This week Facebook provided an update on how the research is going, and on its potential application in augmented reality (AR) glasses.

Facebook believes that AR glasses will help to realise a future that's accessible, hands-free, and instant – regardless of distractions, geography, or disabilities.

Facebook's research is still in its early stages, and it builds on the concept of the brain-computer interface (BCI). In the past, the technology has allowed people to feed themselves and to fly a jet simulator, says Facebook Reality Labs research director Mark Chevillet.

But there have been limitations to BCI, because those actions have all required implanted electrodes. Chevillet says the long-term goal is to turn BCI and its possibilities into the reality of a non-invasive, wearable device.

While voice assistants like Siri and Google Assistant are popular at home or in the office, how many people have used them in a crowded room, or on a busy street? That's where 'thinking' your words comes into the equation.

Facebook Reality Labs didn't know if a completely silent speech interface was possible, let alone how it would work with the brain. Currently, implanted electrodes are still the go-to for making BCI work.

Chevillet and UCSF neurosurgeon Edward Chang shared similar goals for BCI technologies, so they conducted their own studies into whether the brain activity associated with speaking could be used to transcribe speech as text on a computer screen.

The researchers worked with a group of volunteers who were already undergoing brain surgery as part of their epilepsy treatment, and who therefore already had electrodes in place. Their algorithm was able to decode a small set of full spoken words and phrases from brain activity in real time. The algorithm can only recognise a tiny vocabulary so far, but it's a start.
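To give a sense of what that kind of decoding involves, here's a toy sketch of closed-vocabulary decoding, not UCSF's actual pipeline: each utterance is reduced to a feature vector of per-channel brain activity, and a standard classifier maps those features to a word in a small vocabulary. The channel counts, vocabulary, and simulated signals below are all illustrative assumptions.

```python
# Toy illustration of closed-vocabulary neural speech decoding.
# Not UCSF's method: real systems record ECoG and use far richer
# models. Here the "brain activity" is simulated random data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

VOCAB = ["yes", "no", "water", "help"]  # tiny closed vocabulary (assumed)
N_CHANNELS = 64                         # assumed electrode channel count
TRIALS_PER_WORD = 50                    # assumed recordings per word

rng = np.random.default_rng(0)

# Simulate one feature vector per utterance, e.g. mean high-gamma
# band power per channel, with a made-up spatial pattern per word.
X, y = [], []
for label, word in enumerate(VOCAB):
    pattern = rng.normal(size=N_CHANNELS)
    trials = pattern + rng.normal(scale=2.0, size=(TRIALS_PER_WORD, N_CHANNELS))
    X.append(trials)
    y += [label] * TRIALS_PER_WORD
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear classifier is a common baseline for small vocabularies;
# each new utterance would then be decoded with clf.predict().
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```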

Facebook and UCSF see potential in near-infrared light, which can measure changes in blood oxygenation through the skull, as a way of making the technology non-invasive.

“We don't expect this system to solve the problem of input for AR anytime soon,” Facebook admits.

“It's currently bulky, slow, and unreliable. But the potential is significant, so we believe it's worthwhile to keep improving this state-of-the-art technology over time. And while measuring oxygenation may never allow us to decode imagined sentences, being able to recognise even a handful of imagined commands, like 'home,' 'select,' and 'delete,' would provide entirely new ways of interacting with today's VR systems — and tomorrow's AR glasses.”
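For background on what 'measuring oxygenation' with light involves: near-infrared systems such as fNIRS shine light at two wavelengths and use the modified Beer-Lambert law to convert changes in light attenuation into changes in oxygenated and deoxygenated haemoglobin. The sketch below is a generic illustration with made-up constants, not a description of Facebook's device.

```python
# Hypothetical sketch: converting near-infrared light measurements
# into changes in oxygenated (HbO) and deoxygenated (HbR) haemoglobin
# via the modified Beer-Lambert law, the standard model behind fNIRS.
# All numeric constants below are illustrative assumptions.
import numpy as np

# Extinction coefficients [1/(mM*cm)] at two wavelengths (illustrative):
# HbR absorbs more strongly near 760 nm, HbO near 850 nm.
E = np.array([[1.5, 3.8],    # 760 nm: [eps_HbO, eps_HbR]
              [2.5, 1.8]])   # 850 nm: [eps_HbO, eps_HbR]

path_length_cm = 3.0   # source-detector separation (assumed)
dpf = 6.0              # differential pathlength factor (assumed)

# Measured change in optical density at each wavelength:
# delta_OD = -log10(I / I_baseline)
intensity_baseline = np.array([1.00, 1.00])
intensity_now = np.array([0.97, 0.95])
delta_od = -np.log10(intensity_now / intensity_baseline)

# Solve the 2x2 linear system for concentration changes [mM].
delta_conc = np.linalg.solve(E, delta_od / (path_length_cm * dpf))
d_hbo, d_hbr = delta_conc
print(f"delta HbO: {d_hbo:+.4f} mM, delta HbR: {d_hbr:+.4f} mM")
```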

Optical technologies commercialised for smartphones and LiDAR could also help to build compact BCI devices that work without implanted electrodes – perhaps they could be the answer, Facebook says.

There are also wider issues that BCI developers need to take into account.

“For example, how can we ensure the devices are safe and secure? Will they work for everyone, regardless of skin tone? And how do we help people manage their privacy and data in the way they want?” Facebook asks.

Chevillet says the researchers have already learned a lot, and the next step is to start talking with the community. BCI technology is still in its infancy, but it's never too early to talk about ethics.

“We can't anticipate or solve all of the ethical issues associated with this technology on our own,” he says.

“What we can do is recognise when the technology has advanced beyond what people know is possible, and make sure that information is delivered back to the community.

“Neuroethical design is one of our program's key pillars — we want to be transparent about what we're working on so that people can tell us their concerns about this technology,” Chevillet concludes.