Joe Emonds

The fundamental aim in the linguistic analysis of a language $latex L$…

…is to separate the grammatical sequences which are the sentences of $latex L$ from the ungrammatical sequences which are not sentences of $latex L$ and to study the structure of the grammatical sequences. The grammar of $latex L$ will thus be a device which generates all the grammatical sequences of $latex L$ and none of the ungrammatical ones.

First paragraph of Syntactic Structures: Ch. 2, ‘The Independence of Grammar’

As a student who had been strongly attracted by grammars of $latex L$ (= English, Latin, German, Greek, French), and as the holder of a mathematics MA, I was drawn to the MIT program, via Chomsky’s writings, by the sense that at least preliminary explicit formulations of these grammars of $latex L$ were in sight: not during my stay at MIT, but in, say, a couple of decades.

Like almost everyone else, I was convinced from the first of ‘…the necessity for supplementing a “particular grammar” by a universal grammar if it is to achieve descriptive adequacy’ (Aspects of the Theory of Syntax: 6). Thus, I understood:

(1) Grammar of $latex L_i = UG + G_i$ (= Particular Grammar of $latex L_i$)

These grammars $latex G_i$, supplemented by UG, were to generate all and only the grammatical sequences of the $latex L_i$. So the broad question had two parts: what was UG, perhaps the hardest part, and what were the (formalized, explicit) Particular Grammars, a supposedly easier question. Nonetheless, the second part also seemed intriguing and puzzling, since, beyond some generalities, the exact details of e.g. English and French grammars had little in common. (Kayne’s dissertation, later his book French Syntax, didn’t seem to be a book about English grammar.) Thus, in addition to UG, “the broad question I most wanted to get an answer to” was:

(2) What exactly is the form of particular grammars $latex G_i$ that UG can then ‘supplement’?

A contentful answer would be at least preliminary, formally explicit $latex G_i$ for some language(s), e.g. English or French. These grammars would be integrated with UG (how to do this was of course also part of the question), and would be working hypotheses which research would further formalize, simplify and refine.

What happened, however, was that almost no one took an interest in equation (1). With few exceptions, involving parameters that sort of fizzled out, research proceeded as if any grammatical pattern in some $latex L_i$ could always be decomposed into an interesting UG component plus some downgraded remnant that was ‘low level’, ‘a late rule’, ‘morphology’ or ‘purely lexical’. These unformalized and ad hoc remnants were regularly put aside.

For me, therefore, a more promising alternative was the idea in Borer’s Parametric Syntax that precisely formulated lexical entries of grammatical morphemes, i.e. ‘Grammatical Lexicons’ of closed-class items, were the needed particular grammars $latex G_i$. However, though espoused at times by e.g. Fukui and Speas, Chomsky, and notably Ouhalla, these Grammatical Lexicons are rarely formulated or theoretically developed in syntactic research, beyond occasional focus on isolated morphemes (Hebrew sel, French se).

So the current status of question (2) is “unanswered”; there are still no preliminary explicit formulations of Grammatical Lexicons $latex G_i$. Moreover, outside of HPSG, whose grammars seem unreservedly stipulative and factor out no UG ‘supplement’, generative syntax still largely ignores (2). Yet (2) seems quite meaningful and in no way ill-conceived.

In fact, it is more than meaningful. Without formalized $latex G_i$, generative syntax is not fulfilling the fundamental aim of linguistic analysis, to produce formal Grammars of $latex L_i$. And no serious obstacles “make it a hard question” (certainly real progress in constructing UG is harder), other than lack of interest and the still unspoken hope that work on UG will somehow eventually make answers to (2) trivial.

Why aren’t the answers trivial? Staying with the example of French and English, syntacticians widely take them to be ‘similar.’ In terms of language variety they are. Nonetheless, their Grammatical Lexicons $latex G_e$ and $latex G_f$ (each containing some 400 ±100 items, including affixes and grammatical Ns, Vs, and As) don’t share a single pair of items with the same grammar; recently I even find very ≠ très and a lot ≠ beaucoup. No grammatical preposition, no complementizer, no verbal affix, no negative word, no degree word, no quantifier, no reflexive morpheme, no grammatical verb, no pronoun, no prefix, no article has the same grammar in the two languages. And because these many differences are not even tentatively represented in generative models, the field of syntax knows very little more today than it did in 1960, at least in formal terms, about exactly how French and English differ.
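To make concrete what is being asked for, here is a minimal, purely illustrative sketch of what a tiny fragment of such Grammatical Lexicons could look like when stated explicitly. The notation and every feature value below are invented placeholders, not analyses of very, a lot, très or beaucoup; the only point is that once entries are explicit, ‘same grammar’ becomes a checkable claim rather than an impression.

```python
# Purely illustrative sketch: invented feature values, not serious analyses.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    form: str            # the grammatical morpheme itself
    category: str        # e.g. 'Deg' for a degree word
    modifies: frozenset  # categories the item can modify (placeholder feature)

# Toy fragment of an English Grammatical Lexicon G_e
G_e = {
    "very":  Entry("very",  "Deg", frozenset({"A", "Adv"})),
    "a lot": Entry("a lot", "Deg", frozenset({"V"})),
}

# Toy fragment of a French Grammatical Lexicon G_f
G_f = {
    "très":     Entry("très",     "Deg", frozenset({"A", "Adv", "N"})),
    "beaucoup": Entry("beaucoup", "Deg", frozenset({"V", "N"})),
}

def same_grammar(e1: Entry, e2: Entry) -> bool:
    """Do two entries have identical grammars, apart from their forms?"""
    return (e1.category, e1.modifies) == (e2.category, e2.modifies)

# With these invented features, the translation pairs come out non-equivalent:
print(same_grammar(G_e["very"], G_f["très"]))        # False
print(same_grammar(G_e["a lot"], G_f["beaucoup"]))   # False
```

Of course a real $latex G_i$ would need hundreds of such entries, far richer feature specifications, and an explicit statement of how UG ‘supplements’ them; the sketch only shows that the task is well defined.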

The path to answering (2), then, is for at least some researchers to work on it, after it has been sidetracked and hidden from view for most of the generative period. I hope my book Lexicon and Syntax has been a step in this direction. In general, I have no doubt that generative syntax can answer (2) in interesting and even relatively complete ways, once people decide it is not a distraction from UG but rather the best stepping stone for constructing it. Conversely, if the question of Particular Grammars remains unaddressed, then generative syntax has little to say about ‘the fundamental aim of linguistic analysis’.