Joseph Galasso
March 2024

In their 1995 book Speaking Minds, editors Peter Baumgartner and Sabine Payr assembled a fascinating series of interviews with twenty of the most eminent cognitive scientists of the twentieth century. What emerged from these interviews was just how deep-seated and contentious the animosities ran between some of these great minds, and how inevitable it was that the field of AI would eventually splinter into two camps. Like twin phoenixes rising from the cognitive ashes of unfulfilled promises, this Janus-headed monster would take on almost religiously zealous overtones, each side holding the other in contempt and attempting to explain away its rival's shortcomings in what was then an emerging field that held so much promise for future AI. While the interviews seemed fresh and nuanced in 1995, the debates themselves have much earlier antecedents, dating back to pioneers such as Donald Hebb ('neurons that fire together wire together') and brilliant pre-war polymaths such as von Neumann and Turing himself (the Turing Test), all of whom set the stage for the famous post-war debates between Marvin Minsky and Frank Rosenblatt (classmates from the same Bronx High School of Science).
The debates can be articulated in one fell swoop: (i) whether AI and cognitive science (the lineage that would lead to deep learning and our current ChatGPT) should try to emulate the actual inner neurological architecture of the human brain, whereby 'human learning' arises from a singular mode of neuronal binary/digital activity (the nature of which relies heavily on brute-force notions such as locality, frequency, and weighted strengths), or (ii) whether the brain's architecture, then as now so impervious to our complete understanding, should be modeled not on its poorly understood neuronal wiring, but rather on its computational performance and outcomes for such capacities as logic, reasoning, and cause-and-effect. These latter processes are uniquely human and seem rather analog in nature, as they give rise to the symbolic, rule-based procedures of language and 'human understanding'. These Singular vs Dual Mechanism Model debates are ongoing in the field.

The papers collected here amount to some of my thoughts on the topic. The links below are pulled from informal working papers and squibs and represent my thinking on the current state of a potential AI-to-natural-language interface. The last three papers (Section III), particularly 'Why Move?', attempt to capture this AI-to-natural-language interface with regard to the developmental stages of child syntax. This informal e-book is organized into three sections: Section I 'The Neuro Basis for Language', Section II 'Recursive Grammars', and Section III 'Child Language Acquisition'.

*Papers, squibs, and essays on the topic can all be found on my academia site: https://csun.academia.edu/josephgalasso
Reference: lingbuzz/008000
(please use that when you cite this article)
keywords: recursive syntax, basal ganglia grammar, neurolinguistics, artificial intelligence, syntax

