Adopting Large Language Models as a theory of language does refute Chomsky (but not like you think)
Charles Reiss, Veno Volenec
November 2024
 

This paper is a response to Ambridge & Blything (2024), Piantadosi (2023), and similar claims. We provide a series of arguments showing that Large Language Models are not theories of language at all, and therefore cannot be “better at theoretical linguistics” than theoretical linguistics. What little can be learned from LLMs about the nature of language directly corroborates generative linguistics, e.g., that the competence-performance dichotomy and some equivalent of Universal Grammar are indispensable. LLM approaches fall prey to the Platonic and externalist delusions that arise from ignoring the I-language perspective, on which the subject matter is “a real object rather than an artificial construct” (Chomsky 1986: 28).
Format: [ pdf ]
Reference: lingbuzz/008543
(please use that when you cite this article)
Published in: submitted
keywords: large language models, chatgpt, theoretical linguistics, generative linguistics, syntax, phonology, language acquisition, universal grammar
previous versions: v3 [November 2024]
v2 [November 2024]
v1 [November 2024]
Downloaded: 937 times

 
