Large Linguistic Models: Analyzing theoretical linguistic abilities of LLMs
Gašper Beguš, Maksymilian Dąbkowski, Ryan Rhodes
January 2025

The performance of large language models (LLMs) has recently improved to the point where the models can perform well on many language tasks. We show here that—for the first time—the models can also generate valid metalinguistic analyses of language data. We outline a research program where the behavioral interpretability of LLMs on these tasks is tested via prompting. LLMs are trained primarily on text—as such, evaluating their metalinguistic abilities improves our understanding of their general capabilities and sheds new light on theoretical models in linguistics. We show that OpenAI's (2024) o1 vastly outperforms other models on tasks involving drawing syntactic trees and phonological generalization. We speculate that OpenAI o1's unique advantage over other models may result from the model's chain-of-thought mechanism, which mimics the structure of human reasoning used in complex cognitive tasks, such as linguistic analysis.
Format: [ pdf ]
Reference: lingbuzz/007269
(please use that when you cite this article)
Published in:
keywords: dabkowski, generative, generate, artificial, intelligence, ai, linguistics, llm, large, language, model, theoretical, theory, formal, linguistics, gpt, transformer, gpt-3.5, turbo, gpt-4, open, openai, o1, meta, llama, 3.1, parameter, reasoning, chain-of-thought, chain, thought, metalinguistic, analysis, analytical, behavioral, interpretability, prompt, prompting, syntactic, phonological, tree, generalization, mechanism, human, reason, reasoning, syntax, phonology
previous versions: v1 [May 2023]
Downloaded: 1061 times

