How statistical learning can play well with Universal Grammar
Lisa Pearl
February 2020

One motivation for Universal Grammar (UG) is that it's what allows children to acquire the linguistic knowledge that they do, as quickly as they do, from the data available to them. One key legacy of Chomsky's work in language acquisition is highlighting the separation between children's hypothesis space of possible representations (typically defined by UG) and how children might navigate that hypothesis space (sometimes defined by UG). While statistical learning is sometimes thought to be at odds with UG, I review how statistical learning can both complement UG and help us refine our ideas about the contents of UG. I first review some cognitively plausible statistical learning mechanisms that can operate over a predefined hypothesis space, like those UG creates. I then review two sets of examples: (i) those where statistical learning allows more efficient navigation through a UG-defined hypothesis space, and (ii) those where statistical learning can replace prior UG proposals for navigating a hypothesis space, and in turn lead to new ideas about how UG might construct the hypothesis space in the first place. I conclude with a brief discussion of how we might make progress on understanding language acquisition by incorporating statistical learning into our UG-based acquisition theories.
Reference: lingbuzz/004772
Published in: Wiley-Blackwell Companion to Chomsky (in press)
keywords: universal grammar, statistical learning, linguistic parameters, parameter setting, linking theories, subset principle, syntactic islands, reinforcement learning, variational learning, tolerance principle, sufficiency principle, bayesian inference, morphology, syntax, phonology
previous versions: v3 [December 2019], v2 [September 2019], v1 [September 2019]

