Large Linguistic Models: Analyzing theoretical linguistic abilities of LLMs
Gašper Beguš, Maksymilian Dąbkowski, Ryan Rhodes
May 2023
 

The performance of large language models (LLMs) has recently improved to the point where the models perform well on many language tasks. We show here that, for the first time, the models can also generate coherent and valid analyses of linguistic data, and we illustrate the vast potential of large language models for the analysis of their metalinguistic abilities. LLMs are primarily trained on language data in the form of text; analyzing and evaluating their metalinguistic abilities improves our understanding of their general capabilities and sheds new light on theoretical models in linguistics. In this paper, we probe GPT-4's metalinguistic capabilities in three subfields of formal linguistics: syntax, phonology, and semantics. We outline a research program for the metalinguistic analysis of large language models, propose experimental designs, provide general guidelines, discuss limitations, and offer future directions for this line of research. This line of inquiry also exemplifies behavioral interpretability of deep learning, in which a model's knowledge is accessed through explicit prompting rather than through inspection of its internal representations.
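To make the idea of behavioral probing concrete, below is a minimal sketch of the kind of metalinguistic prompt the abstract describes: asking GPT-4 to analyze linguistic data rather than merely produce language. This example is not taken from the paper; the prompt text and the use of the OpenAI Python client are illustrative assumptions only.

```python
# A minimal sketch of metalinguistic prompting (illustrative; not the
# authors' actual stimuli). GPT-4 is asked to produce a linguistic
# analysis, which can then be evaluated for coherence and validity.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical syntax prompt, invented for illustration.
prompt = (
    "Consider the sentence 'The cat that the dog chased ran away.' "
    "Provide a labeled bracketing and identify which noun phrase "
    "the relative clause modifies."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The point of such a setup is that the model's analysis is elicited entirely through its input-output behavior, with no access to weights or activations.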
Format: [ pdf ]
Reference: lingbuzz/007269
(please use that when you cite this article)
keywords: dabkowski, generative ai, linguistics, llms, theoretical linguistics, gpt-4, semantics, syntax, phonology
Downloaded: 776 times

 
