
Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit

Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20–44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a ‘wide’ yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
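The "wide yet sparse two-layer network" described in the abstract is an error-driven discriminative learner in the Rescorla-Wagner / delta-rule tradition: sparse acoustic cue units connect directly to lexical-meaning outcome units, and weights are adjusted by the prediction error on each learning event. The following is a minimal sketch of that learning scheme; the dimensions, learning rate, cue encoding, and toy data are illustrative assumptions, not the paper's actual setup (which uses some hundred thousand acoustic cues extracted from conversational speech).

```python
import numpy as np

# Illustrative scale only; the paper's network has ~10^5 input units.
N_CUES, N_OUTCOMES = 1000, 50
LEARNING_RATE = 0.01

rng = np.random.default_rng(0)
W = np.zeros((N_CUES, N_OUTCOMES))  # cue-to-meaning association weights


def update(W, cue_idx, outcome_idx, lr=LEARNING_RATE):
    """One delta-rule step: active cues are strengthened toward the
    observed meaning and weakened toward absent meanings (error-driven)."""
    target = np.zeros(N_OUTCOMES)
    target[outcome_idx] = 1.0
    activation = W[cue_idx].sum(axis=0)  # summed support from active cues
    error = target - activation
    W[cue_idx] += lr * error             # same correction for every active cue
    return W


def recognize(W, cue_idx):
    """Discriminate between meanings: pick the most activated outcome."""
    return int(W[cue_idx].sum(axis=0).argmax())


# Toy training data: each word token activates a small sparse cue subset.
vocab_cues = [rng.choice(N_CUES, size=20, replace=False)
              for _ in range(N_OUTCOMES)]
for epoch in range(10):
    for meaning, cues in enumerate(vocab_cues):
        update(W, cues, meaning)

correct = sum(recognize(W, cues) == m for m, cues in enumerate(vocab_cues))
print(f"recognized {correct}/{N_OUTCOMES} meanings")
```

Note that there are no phone or word-form layers anywhere in this sketch: recognition is a single pass from acoustic cues to meaning activations, which is the architectural point the abstract makes.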



Author:Denis Arnold, Fabian Tomaschek, Konstantin Sering, Florence Lopez, R. Harald Baayen
Parent Title (English):PLoS ONE
Editor:Hedderik van Rijn
Document Type:Article
Year of first Publication:2017
Date of Publication (online):2017/04/19
Tag:human learning; learning; speech; speech signal processing; word recognition
First Page:1
Last Page:16
R packages used
Dewey Decimal Classification:400 Language
Open Access?:yes
Leibniz-Classification:Language, Linguistics
Linguistics-Classification:Quantitative Linguistics
Licence (English):Creative Commons - Attribution 4.0 International