
Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs — Facebook’s Pittsburgh-based division devoted to augmented reality and virtual reality R&D — described a prototype system capable of reading and decoding study subjects’ brain activity while they speak.

It’s impressive no matter how you slice it: The researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, recording via a technique called electrocorticography (ECoG) — the direct measurement of electrical potentials associated with activity in the cerebral cortex — to capture rich neural signals. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
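To give a rough feel for what decoding an utterance class from neural recordings involves, here is a toy sketch — not the paper’s actual pipeline, which uses far richer phonological models — in which simulated per-electrode ECoG band-power features are classified as "question" vs. "response" with a simple nearest-centroid rule. The electrode count, noise levels, and class means are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES = 16  # hypothetical electrode count, for illustration only

# Simulate high-gamma band power per electrode for two utterance classes,
# each with its own characteristic mean activity pattern.
mean_q = rng.normal(0.0, 1.0, N_ELECTRODES)  # "question" pattern
mean_r = rng.normal(0.0, 1.0, N_ELECTRODES)  # "response" pattern

def simulate(mean, n_trials):
    """Draw noisy trials around a class-specific mean activity pattern."""
    return mean + rng.normal(0.0, 0.5, (n_trials, N_ELECTRODES))

train_q, train_r = simulate(mean_q, 50), simulate(mean_r, 50)
test_q, test_r = simulate(mean_q, 20), simulate(mean_r, 20)

# Nearest-centroid decoder: label each test trial with the closer class mean.
centroid_q = train_q.mean(axis=0)
centroid_r = train_r.mean(axis=0)

def decode(trial):
    d_q = np.linalg.norm(trial - centroid_q)
    d_r = np.linalg.norm(trial - centroid_r)
    return "question" if d_q < d_r else "response"

preds = [decode(x) for x in np.vstack([test_q, test_r])]
truth = ["question"] * 20 + ["response"] * 20
accuracy = sum(p == t for p, t in zip(preds, truth)) / len(truth)
print(f"decoding accuracy on simulated data: {accuracy:.2f}")
```

Real ECoG decoding is vastly harder — signals are nonstationary, class patterns overlap, and the UCSF team layered probabilistic language models on top of the neural features — but the core idea of mapping multi-electrode activity to discrete speech labels is the same.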
