From Wikipedia, the free encyclopedia
LipNet is a deep neural network for visual speech recognition. It was created by Yannis Assael, Brendan Shillingford, Shimon Whiteson and Nando de Freitas, researchers at the University of Oxford. The technique, outlined in a paper in November 2016,[1] is able to decode text from the movement of a speaker's mouth. Traditional visual speech recognition approaches separated the problem into two stages: designing or learning visual features, and prediction. LipNet was the first end-to-end sentence-level lipreading model, learning spatiotemporal visual features and a sequence model simultaneously.[2] Audio-visual speech recognition has enormous practical potential, with applications in improved hearing aids, in medicine (for example, supporting the recovery and wellbeing of critically ill patients),[3] and in speech recognition in noisy environments,[4] such as Nvidia's autonomous vehicles.[5]
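The end-to-end training described above relies on connectionist temporal classification (CTC), which lets the network emit a label per video frame without a frame-level alignment to the transcript. The sketch below is a minimal, hypothetical illustration of the standard CTC best-path decoding step (collapse repeated labels, then drop the blank symbol); it is not code from the LipNet paper, and the character table and blank index are assumptions for the example.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Best-path CTC decoding: merge consecutive repeats, then remove blanks.

    frame_labels: per-frame label indices, as produced by an argmax over the
    network's per-frame output distribution. blank=0 is an assumed convention.
    """
    decoded = []
    prev = None
    for label in frame_labels:
        # A label only contributes when it differs from the previous frame
        # (repeats encode the same character held across frames) and is not blank.
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded


# Hypothetical alphabet: index 0 is the CTC blank, 1..26 map to 'a'..'z'.
ALPHABET = [""] + [chr(ord("a") + i) for i in range(26)]

# Per-frame argmax output for a short clip: blanks separate repeated letters.
frames = [0, 8, 8, 0, 9, 9, 9, 0]  # → "hi"
text = "".join(ALPHABET[i] for i in ctc_greedy_decode(frames))
print(text)
```

The blank symbol is what allows genuine doubled letters in the transcript (e.g. "ll") to be distinguished from one letter held across several frames: a blank between two identical labels prevents them from being merged.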