Learning syntactic parameter settings without triggers by assigning credit and blame
Brandon Prickett, Kaden Holladay, Shay Hucklebridge, Max Nelson, Rajesh Bhatt, Gaja Jarosz, Kyle Johnson, Aleksei Nazarov, Joe Pater
November 2019

Parametric approaches to syntax have been widely adopted since the advent of the Principles & Parameters framework in the early 1980s. Parameters are designed to provide a solution to the linguistic facet of Plato’s Problem—namely, the logical problem of how language, with potentially infinite well-formed expressions, can be acquired on the basis of the finite data the learner encounters (Chomsky, 1981). Parameters limit the hypothesis space for the learner, reducing syntactic acquisition to the setting of a finite number of innate parameters. Providing an explicit theory of how parameters are set has proved to be a challenge. Some theories require the presence of triggers—unambiguous data which entail that one or more parameters need to be set a certain way (Gibson & Wexler, 1994). A challenge for models utilizing triggers is that even simple parametric systems generate languages with no unambiguous data points (Gibson & Wexler, 1994); these languages are unlearnable by trigger-based approaches without further stipulations about the learning process. Other models that avoid the need for triggers require the learning space to be smooth (e.g. Yang, 2002). That is, these models require “a correlation between the similarity of grammars and the languages they generate” (Sakas et al., 2017). Unfortunately for these approaches, realistic language data rarely represents a smooth learning problem (Dresher, 1999). As we demonstrate here, even a relatively small number of parameters can lead to systems that are not efficiently learned by such approaches. In this work, we adapt two domain-general learning algorithms from computational phonology that are not dependent on smoothness or triggers because of their ability to analyze the contributions made by individual parameters. As a baseline, we compare these to Yang’s (2002) Naïve Parameter Learner, which does not perform this kind of analysis and has been shown to require smoothness (Straus, 2008). 
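To make the contrast concrete, here is a minimal sketch of the kind of update rule Yang's (2002) Naïve Parameter Learner uses (the linear reward-penalty scheme). The parameter names, data representation, and learning rate below are illustrative assumptions, not the parametric systems tested in this paper; the point is that all parameters in the sampled grammar are rewarded or punished together, with no analysis of which individual settings were responsible.

```python
import random

def npl_step(probs, datum, parses, rate=0.02):
    """One Naive Parameter Learner update (linear reward-penalty sketch).

    probs: dict mapping parameter name -> P(value = 1), updated in place
    parses: callable(grammar, datum) -> True if the grammar parses the datum
    """
    # Sample a full grammar by flipping each binary parameter independently.
    grammar = {p: int(random.random() < pr) for p, pr in probs.items()}
    success = parses(grammar, datum)
    for p, value in grammar.items():
        pr = probs[p]
        if success:
            # Reward: nudge every sampled value toward 1 probability,
            # whether or not that particular parameter mattered.
            probs[p] = pr + rate * (1 - pr) if value == 1 else pr - rate * pr
        else:
            # Punish: nudge every sampled value away, again wholesale --
            # this is the missing credit/blame analysis the text describes.
            probs[p] = pr - rate * pr if value == 1 else pr + rate * (1 - pr)
    return success

# Toy usage (hypothetical one-parameter language: only head-initial parses):
probs = {'head-initial': 0.5}
for _ in range(500):
    npl_step(probs, None, lambda g, d: g['head-initial'] == 1)
```

In this toy case the single probability climbs toward 1; the learnability problems discussed above arise when many interacting parameters share credit and blame for each parse.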
We apply all three models of acquisition to syntactic learning, testing them on two simple parametric systems containing headedness and movement parameters. Our results show that Yang’s (2002) algorithm is not sufficient for the task, while the other two models succeed—suggesting that future theories of syntactic learning should incorporate analysis of individual parameter settings into their learning.
Reference: lingbuzz/006950
(please use that when you cite this article)
Published in: Proceedings from the Annual Meeting of the Chicago Linguistic Society
keywords: parameter setting, constraint ranking, learning, syntax, maximum entropy, harmonic grammar, expectation driven learning