Big Data and Little Learners

Abstract

Recent advances in the data sciences, particularly within the area of language technology, have been impressive and non-incremental. For example, within the domain of language translation, the application of deep Long Short-Term Memory (LSTM) neural networks to large bodies of text has resulted in a 60% reduction in translation errors relative to traditional methods, significantly closing the gap between machine and human performance (Wu et al., 2016). Similarly impressive advances have been observed in, e.g., speech recognition (Hinton et al., 2012), syntactic parsing (Dyer et al., 2015), and automatic content extraction (Berant et al., 2015). Excitement is clearly justified: a new era of language technology is emerging. But should this excitement lead to a fundamental rethinking of our theories of child language and cognition?
