
The world speaks roughly 6,500 languages, and systems from the likes of Google, Facebook, Apple, and Amazon become better at recognizing them every day. The trouble is that not all of those languages have large corpora available, which makes training the data-hungry models underpinning those systems difficult.

That’s why Google researchers are exploring techniques that transfer knowledge from data-rich languages to data-scarce ones. The work has borne fruit in the form of a multilingual speech recognizer that learns to transcribe multiple tongues, recently detailed in a preprint paper accepted at the Interspeech 2019 conference in Graz, Austria. The coauthors say their single end-to-end model recognizes nine Indian languages (Hindi, Marathi, Urdu, Bengali, Tamil, Telugu, Kannada, Malayalam, and Gujarati) with high accuracy, while also demonstrating a “dramatic” improvement in automatic speech recognition (ASR) quality.

Read more here:

