A group of deep learning researchers at the University of Oxford, in collaboration with Google DeepMind, has applied neural network technology to the very difficult task of reading lips. Named LipNet, the system continually learns in order to recognize new patterns of speech.
LipNet performs lipreading with machine learning, aiming to help people who are hard of hearing, and it could revolutionise speech recognition. As the authors describe it, LipNet is a neural network architecture for lipreading that maps variable-length sequences of video frames to text sequences and is trained end-to-end. It is the first lipreading model to operate at the sentence level, using a single end-to-end, speaker-independent deep model to learn spatiotemporal visual features and a sequence model simultaneously. On the GRID corpus, LipNet achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous state-of-the-art accuracy of 79.6%.
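To map variable-length video to text without frame-level alignments, the LipNet paper trains with connectionist temporal classification (CTC). A minimal sketch of the CTC decoding rule in plain Python, assuming we already have a per-frame label (the argmax of the network's output at each time step); note the actual paper uses beam-search decoding, and the example frame sequence here is purely illustrative:

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """CTC decoding rule: collapse consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:        # keep only label changes (collapse repeats)
            if lab != blank:   # blanks separate repeated characters
                out.append(lab)
            prev = lab
    return "".join(out)

# Hypothetical per-frame argmax labels for the word "bin"
frames = ["b", "b", "-", "i", "i", "-", "-", "n", "n"]
print(ctc_greedy_decode(frames))  # -> "bin"
```

The blank symbol is what lets CTC emit genuinely repeated characters: a true double letter must have a blank between its two runs, which is why collapsing happens before blank removal.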