Large Language Models and the Argument From the Poverty of the Stimulus
Nur Lan, Emmanuel Chemla, Roni Katzir
January 2024
 

How much of our linguistic knowledge is innate? According to much of theoretical linguistics, a fair amount. One of the best-known (and most contested) kinds of evidence for a large innate endowment is the so-called argument from the poverty of the stimulus (APS). In a nutshell, an APS obtains when human learners systematically make inductive leaps that are not warranted by the linguistic evidence. A weakness of the APS has been that it is very hard to assess what is warranted by the linguistic evidence. Current artificial neural networks appear to offer a handle on this challenge. Wilcox et al. (2021) use such models to examine the available evidence as it pertains to wh-movement. They conclude that the (presumably linguistically neutral) networks acquire an adequate knowledge of wh-movement, thus undermining an APS in this domain. We examine the evidence further and show that the networks do not, in fact, succeed in acquiring wh-movement. More tentatively, our findings suggest that the failure of the networks is due to the insufficient richness of the linguistic input and not to inadequacies of the networks, thus supporting an APS, the first to be based on successful learners exposed to realistic amounts of linguistic input.
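The kind of evaluation at issue can be illustrated with the surprisal-based minimal-pair paradigm standardly used in this literature. The sketch below is ours, not the authors' protocol: the model name ("gpt2"), the example sentences, and the helper total_surprisal are illustrative assumptions. The intuition is that a model that has acquired the filler-gap dependency should find a gap less surprising when a wh-filler is present than when it is absent.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def total_surprisal(sentence: str) -> float:
        """Sum of per-token surprisals (negative log-probabilities, in nats)."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # Labels are shifted internally; .loss is the mean NLL over the
            # ids.shape[1] - 1 predicted tokens, so multiply to get the total.
            loss = model(ids, labels=ids).loss
        return loss.item() * (ids.shape[1] - 1)

    # Hypothetical +filler/+gap vs. -filler/+gap pair: a learner with knowledge
    # of wh-dependencies should prefer the first continuation.
    plus_filler = "I know what the librarian put on the shelf yesterday."
    minus_filler = "I know that the librarian put on the shelf yesterday."
    print(total_surprisal(plus_filler), total_surprisal(minus_filler))

In practice, studies of this kind typically compare full 2x2 paradigms (plus/minus filler, plus/minus gap) and measure surprisal differences at the critical region rather than whole-sentence totals; the sketch shows only the core measurement.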
Reference: lingbuzz/006829
keywords: neural networks, deep learning, filler-gap dependency, syntactic islands, learnability, across-the-board movement, parasitic gaps, subject-aux inversion, language models, syntax
previous versions: v2 [November 2023]
v1 [September 2022]