Mid-level generalizations of generative linguistics: Experimental robustness, cognitive underpinnings and the interdisciplinarity paradox
Evelina Leivada
January 2021
 

This work examines the nature of the so-called “mid-level generalizations of generative linguistics” (MLGs). In 2015, the conference Generative Syntax in the 21st Century: The Road Ahead was held. One of the consensus points that emerged concerned the need to establish a canon, the absence of which was argued to be a major challenge for the field, raising issues of interdisciplinarity and interaction. Addressing this challenge, one outcome of the conference was a list of MLGs: results that are well established and uncontroversially accepted. The aim of the present work is to embed some MLGs into a broader perspective. I take the Cinque hierarchies for adverbs and adjectives and the Final-over-Final Constraint as case studies in order to determine their experimental robustness. It is shown that at least some MLGs prove empirically inadequate when probed through rigorous testing, because they rule out data that are actually attested. I then discuss the nature of some MLGs and show that, in their watered-down versions, they do hold and can be derived from general cognitive/computational biases. This obviates the need to cast them as language-specific principles, in line with the Chomskyan urge to approach Universal Grammar from below.
Format: [ pdf ]
Reference: lingbuzz/005716
(please use that when you cite this article)
Published in: Zeitschrift für Sprachwissenschaft
keywords: universal grammar, adjectives, adverbs, final-over-final constraint, linguistic theory

 
