Giuseppe Longobardi

I first came to MIT in the Fall of 1979, at Noam’s urging after his visit to Pisa in the Spring of that year, and I subsequently returned as a Visiting Scholar several times (for about six terms between 1979 and 1989). Since the beginning of my student career in the early 1970s I had been fascinated by the question of how aspects of grammatical diversity cluster across languages and how they can be scientifically described.

Thus, the most intriguing problem for me could be formulated as follows:

1) Which (and how abstract) syntactic properties can crosslinguistically vary independently of each other?

In other words, which variable properties are ultimately the real entities of grammatical diversity (regarded as one of the most central features of human culture and cognition)?

In those years at MIT, I found the development of the Principles & Parameters framework an absolutely illuminating way of addressing this problem, as well as the most interesting strategy for comparative work in linguistics since the classical historical method.

Now, over thirty years later, parametric theories have become a standard form of successfully expressing contrastive generalizations and typological clustering of variable grammatical properties.

In this sense, we can claim that the parametric format has attained some high degree of “crosslinguistic” descriptive adequacy. What is more dubious in my view is whether parametric models have achieved further levels of scientific success, first of all whether they are able to address concerns of classical explanatory adequacy, as represented in the following question:

2) Do P&P theories represent realistic models of language acquisition?

This conjecture has gone largely untested, mostly owing to the lack of a reliable and sufficiently wide sample of plausible parameters and to the difficulty of defining a set of triggers for each of them.

In fact, it is very difficult to imagine a viable alternative to a parametric model (broadly understood as any finite set of discrete predetermined choices) of grammar acquisition. In particular, neither empiricist views, undermined by poverty-of-stimulus considerations, nor earlier nativist theories relying on evaluation measures seem to represent such an alternative. Empirically, however, parametric theories are not yet sufficiently corroborated, since nobody has so far indisputably assessed their effectiveness as acquisition models by implementing a parameter-setting system over a large and realistic collection of parameters (Fodor 2001, Yang 2003; cf. Chomsky 1995, 7: “The P&P model is in part a bold speculation rather than a specific hypothesis. Nevertheless, its basic assumptions seem reasonable…. and they do suggest a natural way to resolve the tension between descriptive and explanatory adequacy”).
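To make concrete what “implementing a parameter-setting system” might involve, here is a deliberately idealized sketch (my own toy illustration, not the systems of Fodor 2001 or Yang 2003): grammars are vectors of binary parameters, each input transparently reveals the values of a few parameters (its “triggers”), and the learner flips a mismatched parameter whenever its current grammar fails on an input. The hard part that this sketch idealizes away, and that the text alludes to, is precisely that real triggers are ambiguous and hard to define.

```python
import random

# Toy error-driven parameter setter (illustrative only; all names hypothetical).
N_PARAMS = 8
TARGET = tuple(random.choice([0, 1]) for _ in range(N_PARAMS))  # target grammar

def sample_sentence():
    # Idealization: each "sentence" transparently reveals 1-3 target values.
    revealed = random.sample(range(N_PARAMS), k=random.randint(1, 3))
    return [(i, TARGET[i]) for i in revealed]

def parses(grammar, sentence):
    # The grammar "parses" an input if it matches every revealed trigger value.
    return all(grammar[i] == v for i, v in sentence)

grammar = [0] * N_PARAMS
for _ in range(10_000):
    s = sample_sentence()
    if not parses(grammar, s):
        # Flip one parameter that the failed input implicates.
        i = random.choice([i for i, v in s if grammar[i] != v])
        grammar[i] = 1 - grammar[i]

print(tuple(grammar) == TARGET)  # converges quickly under this idealization
```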

A plausible strategy for finding evidence for P&P or its variants is to collect relatively many hypothetical parameters, set in relatively many languages, though all contained within a single submodule of grammar (in order to reduce the complexity of the task and the risk of missing some of the close interactions between contiguous parameters).

A substantial though still manageable database of this type can be subjected to various tests which are not possible with isolated parameters, e.g. a study of its abstract learnability properties. Thus, this practical approach makes it possible to meaningfully raise questions like 3):

3) Are (fragments of) parametrized grammars mathematically learnable?

However, I think similar databases may even more immediately allow for an original empirical way of testing parametric approaches.

In the same year the P&P model was proposed, David Lightfoot happened to publish his Principles of Diachronic Syntax, now regarded as the forerunner of all the foundational work in historical generative syntax that has boomed over the past 20 years (think of Lightfoot’s notion of “local causes”, Clark and Roberts’ and Berwick and Niyogi’s concept of the “logical problem of language change”, and Keenan’s idea of “inertia”). The glimpses of understanding of syntactic “change” achieved so far permit us, in my view, to evaluate P&P empirically with respect to their “historical adequacy”, i.e. their ability to provide correct insights into the actual history of languages and populations through space and time. It was precisely this kind of success that established linguistics as a well-respected discipline in the 19th century. Therefore, I believe that generative linguistics can gain a great deal of insight, and of respect among neighboring sciences, if questions like 4) are successfully addressed:

4) Do P&P theories represent realistic models of language transmission through time and space?

Can parametric syntax, e.g., provide us with insights about the (pre-)history of human diversity parallel to and better than those achieved by lexical comparative linguistics? After thirty years of P&P, I hope I can still participate in an effort in that direction.

In sum, pursuing problems of the type of 3) and 4) represents, in my opinion, a much more structured and up-to-date way to address the concerns which intrigued me and drew me to MIT in the 1980s. Research programs revolving around such questions now seem feasible and promising. The very concrete possibility of raising and answering them is perhaps the best testimony to the progress of cognitive science since those exciting years.

Haj Ross

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired from ideas coming from ‘reality,’ it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste.

But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities.

In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque the danger signal is up. It would be easy to give examples, to trace specific evolutions into the baroque and the very high baroque, but this would be too technical.

In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this is a necessary condition to conserve the freshness and the vitality of the subject, and that this will remain so in the future.

— John von Neumann

On his biography, a present from John Lawler: http://www-personal.umich.edu/~jlawler/von.neumann.html

As I remember it, my January 1964 mind, the one I had when I left Penn (my Penn MA thesis, a long thing on superlatives which I was supposed to have written before leaving, I finally did finish at MIT in May or June of 1964), was filled with wonder at how beautifully everything grammatical worked! Clockwork! Affix Hopping happened magically, and word boundaries were cleverly inserted where they would do the most good, and I was thrilled.

Phonology was like that too – the first course I took when I got to MIT was 23.762 – Phonology, with Morris. There were insanely clever things going on back then, I remember – like the e/o ablaut in PIE being determined by how many cycles there were internally to a word, all spooky stuff which I had no way of evaluating, knowing nothing of PIE. But that it all worked mechanically, that was the goal, the shining Grail.

There was a slug in the jello, though. In the good old days (1964) grammaticality was yes or no. There were some suggestions from Noam about how some sentences could have sort of similar derivations to the pure and fully grammatical sentences – Noam had written about this in a part of LSLT, and there was another paper of his that I slogged through too. It was vastly clever – but I didn’t buy it. In particular, it seemed not to come even close to being of any help for the piles of messy data I had for superlatives.

There was also one sentence that Zellig Harris had said in the first syntax class I had ever had, when I had arrived at Penn in the fall of 1962. He remarked offhandedly that “some transforms of sentences are more nounlike than others.” That seemed so true, and when I got to MIT and started trying to crank through Peter Rosenbaum’s great dissertation and rules (mechanically, natch), I began to think that Peter’s Poss Ing complements were nounier than were his for to ones. That was really the kernel that launched my long paper on nouniness.

And the fascination with errorless, clockwork-like (ordered!) rules – that took some serious hits. I think that it was Morris who first began to wean me from the goal of making the equation

shorter rules = better rules

something like a credo. Morris would just chuckle at what some student or I would come up with – something tricky that would save one feature, or seven. It seemed heretical, but it WAS Morris, after all, who was laughing. Maybe I was missing a joke somewhere.

And then Morris and I started teaching 23.751 – the first syntax course. And we got together a list of around 50-60 rules, and tried to order them, and a lot of them seemed cool, but there were continual breakdowns – new types of rules (post-cyclic rules, anywhere rules, output conditions, etc.). “The” theory was in constant flux, and clockworkiness just seemed to be a goal adherence to which would have to be put off for a while.

A very long while, as it turned out. The goal of a clocklike grammar came to seem to be completely out of reach, and to be receding faster and faster to boot.

Another broad question which surfaced in my first years at MIT was the Grail of Universal Grammar. At Penn, I hadn’t even tried to think along those lines. It was Paul Postal who most put these thoughts in my mind. And Noam too – his famous Thursday afternoon class. And Noam’s A-over-A condition seemed incredibly cool and so right! But then I started poking it, and a misty understanding of what was eventually going to become my dissertation started emerging from the ooze . . .

So what I now see as the broad questions that I started with – the hope for a purely formal grammar, sharp grammaticality judgements, strong universals – these all crumbled, and I found myself trying to imagine something squishier, rubberier, something more like a poem than like a set of axioms. What I started with was fine but it had to give way pretty soon to an apparently aimless kind of ambling, sashaying towards poeticity.

I worked for around ten years at trying to articulate a non-discrete (= squishy) theory of grammar. What seemed to be necessary were rules that could decrement a sentence’s grammaticality, under certain circumstances. These rules would then output sentences with various degrees of grammaticality, say on a scale of 0–100, where 50 or better was grammatical, and 49 or less was bad, though there would have to be degrees of both goodness and badness. But I was doing this mostly on my own, and the idea that I could present something algorithmic, so that I could turn a crank and out would pop sentences with nice indices of grammaticality, all like clockwork, seemed infinitely far off. The idea of clockwork-like rules was still officially what I was striving for, but I knew it was out of reach. No – not quite. Better: whether someone would reach it someday or not, I myself stopped reaching for it.
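Purely by way of illustration (my own toy rendering, not anything Ross actually built), the squishy idea can be pictured as rules that each knock points off a sentence’s grammaticality score, with 50 as the cutoff; the rule names and penalties below are invented placeholders.

```python
# Toy sketch of "squishy" grammaticality: rules decrement a score out of 100,
# and 50 or better counts as grammatical. Rules and penalties are invented.

def grammaticality(applied_rules, penalties):
    score = 100
    for rule in applied_rules:
        score -= penalties.get(rule, 0)   # each marginal rule application costs points
    return max(score, 0)

PENALTIES = {"extraction_from_island": 40, "resumptive_pronoun": 20}

s1 = grammaticality(["resumptive_pronoun"], PENALTIES)                            # 80: degraded but grammatical
s2 = grammaticality(["extraction_from_island", "resumptive_pronoun"], PENALTIES)  # 40: bad
print(s1, s2, ["grammatical" if s >= 50 else "bad" for s in (s1, s2)])
```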

I notice that I am leaving out that part of linguistics which drained huge amounts of my energy during these years (roughly the decade 1967-1976), namely the Linguistics Wars: Generative vs. Interpretive Semantics. Enough has been written about that to choke a horse (I like best the perspective that Geoff Huck and John Goldsmith offer in their Ideology and Linguistic Theory – Noam Chomsky and the Deep Structure Debates) – there are other things that concern me more for our Fiftieth than this trampled ground.

As I muse backwards, I see two main issues. The first is squibs. These I started writing to myself probably around 1963. George Lakoff, who was then an assistant professor of linguistics at Harvard, and I started trading them back and forth starting around the fall of 1964, if memory serves (which would be a miracle). Robby Lakoff too – she was finishing her Ph.D. at Harvard, on Latin syntax, and she was (and is) an amazing sharp-shooter of a squibber. I no longer remember this, but George tells me that it was me who came up with the name squib. I have since looked up the word in the OED, and it has a history, with many meanings, one of which would fit pretty well with the way we understand the term now, so I may have come across it somewhere, and borrowed it into the syntax that George and I were trying to set up. Whatever.

What I would like to underline here, however, is not the history of the name of these creatures, but rather the change in syntacticians’ understanding of what they were as soon as Linguistic Inquiry started to be published, in 1970. Jay Keyser, the editor, had had the great idea to have a squibs section in LI, and had invited me and Dave Perlmutter to be squibs editors. I was pleased and flattered, probably Dave was too, and off we went.

I remember perceiving vaguely that the squibs that we accepted (after they were reviewed and edited, comme il faut) had changed into something other than the sort of Post-it-sized flashes that squibs had been before they got institutionalized, and tamed. What came out in LI were short notes – great notes, notes with deep consequences, and I am happy to have helped in any way to get them out – but something was missing.

For me, that is. We published very few of what we came to call “mystery squibs.” One mystery squib of mine was a question: what is the source of that in this sentence: “The rules of Clouting and Dragoff apply in that order.”? I am very clear that not everyone feels that such mystery squibs have any right to be published. I remember Morris telling me that one indignant linguist had asked him why their money should be paid to read about what I didn’t know.

The indignation was contagious – I was indignant back, not because I view my ignorance as being more important than other people’s, but because I had come to the conclusion, at the end of my thesis, that what progress seemed to me to be was the ability to ask deeper questions. An unremitting search for higher forms of ignorance. I imagine that broadened questions are automatically also deepened ones, a fascinating inexplicability about the space in which question/insight lives.

At the very bottom of all the squibbing I have done is another unpopular conviction: that despite the immense and brilliant efforts of all of us OWG’s, the extent to which we have succeeded in staking out the basic lay of the land in syntax (or anywhere else), the degree to which we have “covered” syntax, is less than vanishingly small. The best description of a stance that I applaud came from Paul Stoller, an anthropologist friend, who has been working with a Songhay shaman/healer for more than three decades. Paul visited an introductory class I was teaching at Georgetown in the summer of 1985 and told us something like:

There are two stances one can adopt with respect to the process of research. One is: the more I study, the more I know. The other is: the more I study, the more clearly I see how little I know.

The latter stance is of course the one that rhymes most deeply with my soul. I have kept track, more or less, of most of the squibs that I started writing around 1964 – there are now 4700+ on the web, in handwritten form, which I want to electronify and index asap. The field of syntax is infinitely immenser than it was when I was a student at the ’Tute, and I am way out of touch with current research. But my (uninformed) opinion is that only a tiny fraction of the problems which those squibs of mine thrust in your faces has been looked at in any depth.

And what is depth? I have tried to stay somewhat current in my research on pseudoclefts, and the mystery squibs pour in by the fistful, every time I mess around more with pseudos. Which I take as an encouraging sign. The clarity of my understanding of this huge domain has not kept up with the degree of confusion that I feel about things, the most very basic things. I might wish to escape this bind, but I believe that there is no such thing as a non-illusory escape. I think that any sufficiently deep/broad investigation, of this kind of phenomenon, will end up in the same place. This sort of brings me back to John von Neumann. The squibs are my tether – they keep me from getting lost in the beauty of my (many) pet theories.

I am all for explanations and theories, but I side with Gregory Bateson’s father, William Bateson, a great nineteenth-century biologist – the first to use the term “genetics.” He told Gregory to treasure his exceptions, a stance my blood approves. Gregory Bateson, who was one of the greatest minds of the twentieth century, says this when talking of the way he held his mind in his research:

“I want to emphasize that whenever we pride ourselves upon finding a newer, stricter way of thought or exposition; whenever we start insisting too hard upon ‘operationalism’ or symbolic logic or any other of those very essential systems of tramlines, we lose something of the ability to think new thoughts. And equally, of course, whenever we rebel against the sterile rigidity of formal thought and exposition, and let ourselves run wild, we likewise lose. As I see it, the advances in scientific thought come from a combination of loose and strict thinking, and this combination is the most precious tool of science.”

— “Experiments in Thinking about Observed Ethnological Material,” in Steps to an Ecology of Mind, Ballantine Books, New York (1972), pp. 73–75.

I probably err more on the side of letting myself run wild than on that of being overly theoretical. I think that letting go, first of the dream to have clockwork-like rules, and second, of the hubris of thinking that I am getting closer and closer to having all of the basic ducks in a row – abandoning, however wistfully, both of those dreams (or is it really just one single dream?), has been the greatest change in my thinking since I started in the whitewater world of the linguistics department in dear old Building 20 in 1964.

I think that perhaps the most beautiful statement of the stance I wish I could cleave to comes from Thomas Huxley:

“Sit down before fact like a little child, and be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses Nature leads, or you shall learn nothing.”

— T. H. Huxley, quoted in Marilyn Ferguson, “Karl Pribram’s Changing Reality,” in Ken Wilber (ed.), The Holographic Paradigm and Other Paradoxes, Shambhala, Boulder, Colorado (1982), pp. 15–16, http://www.quoteworld.org/quotes/6978

The other thing which I have been working on, this time for a mere 33 years, is poetics. I contracted this disease from my great mentor and pal, Roman Jakobson, in around 1965, when I audited his class (which was always called “Crrooshal Prohblims in Leengveestics”). That year it was on Payeteeks. It seems to me that if we want to understand the deepest parts of a language, we should first go to its greatest writers, and look most carefully at all the pyrotechnics that they can pull out of their hats. If we don’t, we run the lethal danger of not being able to escape Roman’s lance:

A linguist deaf to the poetic functions of language and a literary scholar indifferent to linguistics are equally flagrant anachronisms.

— Roman Jakobson, “Closing Statement,” in Style in Language, Thomas Sebeok (ed.), MIT Press (1960), p. 377.

Of course we will fail miserably in our attempts to understand their densest writing. But it will be a generous failure, heroic, deep.

Andrés Pablo Salanova

What was the broad question that you most wanted to get an answer to during your time in the program?

I was preoccupied with what the correct relationship between linguistic theory and language description should be.

There is of course an easy answer to this, namely that good descriptions let the theory develop, and (“most importantly”) theory informs the questions that we ask when describing a language; while this is obviously true, I don’t feel that grammars have changed significantly thanks to the theoretical developments of the generative age. In part this might be sociological, but I believe it is just as much because theoretical linguistics seems unable to bend itself to characterizing each language in its own terms.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

I thought it was ill-conceived during most of my time at MIT, but since then I’ve again started to feel that it is a relevant question. I think Ken Hale was absolutely right in proposing that the way forward was to train native linguists; this hasn’t happened broadly enough to convince the majority of theoretical linguists of the importance of being truly immersed in the language one wishes to describe.

Raj Singh

What was the broad question that you most wanted to get an answer to during your time in the program?

The question I was hoping to answer when I got to MIT was: when we make pragmatic inferences, what is the principle that tells us when to stop thinking? Given an utterance in a context, language users systematically and reliably come to infer the truth or falsity of various other sentences or propositions, such as happens in focus constructions, implicature, and presupposition accommodation. Since the space of inferences we make is bounded in seemingly non-arbitrary ways, a theory of such inferential capacities requires a theory of these bounds, that is, a theory that tells you when to stop thinking. I was hoping that I would be able to work out a theory of relevance that would provide the required stopping rule: you consider all those propositions that are relevant, and nothing else.

Seminars and meetings with Kai von Fintel, Danny Fox, and Irene Heim taught me that this would not be trivial, partly because of the so-called “symmetry problem”: as soon as we write down some very natural axioms about relevance, one can show that there are sentences that are predicted to be relevant but which never enter into pragmatic reasoning. If the axioms are right, relevance alone will not give us the bounds we need. Something else will be needed to help put a frame around our pragmatic reasoning.
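To see the shape of the problem, consider the standard illustration, worked out here in my own toy terms: for an assertion of “some of the students passed,” the alternatives “all of the students passed” and “some but not all of the students passed” look equally relevant, yet only the first gets negated in pragmatic reasoning. If relevance licensed negating both, the result would contradict the assertion itself, as the little check below shows; the domain size and predicates are hypothetical.

```python
# Toy check of the symmetry problem (illustrative only).
N = 4  # hypothetical domain: 4 students; a world is the number who passed

def some(n):          return n >= 1
def all_(n):          return n == N
def some_not_all(n):  return 1 <= n < N   # the "symmetric" alternative

# Worlds compatible with asserting "some" while negating BOTH alternatives:
surviving = [n for n in range(N + 1)
             if some(n) and not all_(n) and not some_not_all(n)]

print(surviving)  # []: nothing survives, so relevance alone cannot decide
                  # which of the two alternatives is to be negated.
```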

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

The current state of the art suggests that the language faculty itself provides the required frame. More specifically, the space of potential inferences is mechanically derived by the grammar, in a context-independent way, by executing a restricted set of structure-modification operations on the asserted sentence. This provides an upper bound on what may be inferred. Under this architecture, the role of relevance is reduced to merely selecting some subset of these potential inferences for purposes of pragmatic reasoning. As such, it has no chance to create symmetry problems.
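A minimal sketch of what “mechanically derived by the grammar” could look like (my own toy illustration, not any specific published algorithm): potential inferences are generated by substituting items from a small fixed lexicon of alternatives into the asserted structure, with no reference to context; context then only selects among the results. The substitution lexicon below is a hypothetical placeholder.

```python
# Toy, context-independent generation of potential alternatives by substitution.
SUBSTITUTIONS = {
    "some": ["all"],
    "or":   ["and"],
    "warm": ["hot"],
}

def potential_alternatives(sentence):
    """Return all structures obtained by substituting one word with a listed alternative."""
    words = sentence.split()
    alts = set()
    for i, w in enumerate(words):
        for sub in SUBSTITUTIONS.get(w, []):
            alts.add(" ".join(words[:i] + [sub] + words[i + 1:]))
    return alts

print(potential_alternatives("Mary read some of the books"))
# {'Mary read all of the books'}: the grammar fixes this upper bound;
# relevance merely picks which of these become actual inferences.
```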

What currently interests me is the way sets of potential inferences are generated for different pragmatic tasks, as well as the grammar-context interface principles that determine which subsets of these potential inferences will become actual inferences. The implicature system makes use of one set of potential inferences, the accommodation system makes use of another, Maximize Presupposition reasoning another, and so on. How, if at all, are these sets related? What are the general principles from which these sets are generated? Given such sets, how does context decide which subsets to use? What are the mechanisms that convert these sets of potential inferences to actual inferences? Why does UG provide these sets, and not some others?

K. P. Mohanan

What was the broad question that you most wanted to get an answer to during your time in the program?

Following Chomsky, I assume that:

  1. generative linguistic theory is a theory of the biologically rooted mental linguistic system of the human species, and
  2. generative grammars are theories of individual mental linguistic systems that populate the space provided by the human brain-and-mind.

If we take these axioms seriously, what kind of evidence would shed light on the questions that theoretical linguistics investigates, and what kind of conclusions can we draw from them? Our data have traditionally come from speaker judgments on the acceptability/grammaticality of linguistic forms (in syntax and semantics) and the pronunciations in dictionaries, occasionally enhanced by speaker judgments on possible words (in phonology). Are these the best forms of evidence for what we wish to understand? If we expanded our evidence base to include what is (dismissively) labeled as “external evidence,” what kinds of conclusions would we draw?

Extending this further to the current state of theoretical linguistics, I would now like to ask: How would the conclusions from the expanded sets of data match the traditional conclusions, and the conclusions emerging from evidence from corpora (as in some versions of Optimality Phonology/Syntax)?

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

It has not been answered. I still think it is an important question that lies at the foundations of generative linguistic theory: it is not an ill-conceived question, but I personally don’t think we have taken Chomsky seriously enough. As a community, I wonder if we have really understood the implications of his starting point.

If we really wish to take the starting point seriously, we need to abolish the boundaries between the so-called (generative) theoretical linguistics, psycholinguistics, neuro-linguistics, and bio-linguistics. We need a community of inquirers who have a broad understanding of this entire spectrum before they can specialize in their narrow pursuits. For that, the nature of graduate education needs to change. We need a new generation of linguists who are better than we are.

Perhaps the reason is more fundamental. And that has to do with the way we teachers imprison our students within the theories that we have either created or adopted. We do not teach them to identify and formulate novel questions that threaten our own theories. If they do ask novel questions and notice novel phenomena, the tendency is to answer the questions or explain the phenomena within their supervisors’ theories, perhaps with minor modifications that keep the brand name. We do not teach students to unpack competing linguistic theories, compare them, and evaluate them. And most importantly, we do not teach our students how to challenge their teachers and show that the teachers are wrong. As a student at MIT, I was taught to challenge my teachers; what I see now is reverence for teachers and the authorities of established theories. Until we get our act together in linguistics education, I don’t think the question that Noam and Morris guided me to pursue when I was a graduate student can be answered.

Luigi Rizzi

During my years at MIT (1983-1985 and Fall 1986 as a faculty member, 1977-1981 as a visiting scholar), I mainly worked on two research topics:

  • the theory of silent syntactic positions (null pronominals and, more generally, null arguments, traces, etc.);
  • the theory of locality.

There was a third and broader topic, dominant at that time, which was nourished by research on locality and null elements, and offered a framework for more specific technical work in comparative syntax:

  • the proper treatment of language invariance and variation with parametric models.

What led many of us to study null elements at the time was the simple observation that knowledge of elements missing from the physical signal, the postulation of their presence in mental representations, and the way in which they were interpreted, were more likely to spring from inner necessities of the system of mental computations than from association with, or induction from, specific pieces of external input. So, the study of silent positions seemed to offer a privileged access to what everybody was aiming at, an understanding of the system of mental computations. A core idea was that a theory of such elements should include:

  • A characterization of where they can occur: what were called the “formal licensing” conditions.
  • A characterization of how they can be interpreted on the basis of the overt context in which they occur: what was called the procedure of “identification”.

I personally worked a lot on the licensing and identification of null pronominals, with special reference to pro, and of traces, mainly arising from A’ dependencies. The study of null pronominals led to much work on the Null Subject Parameter; more generally, it contributed to developing the parametric approach to comparative syntax through the detailed study of a parametric option with richly articulated comparative consequences.

Another research direction on null elements, stemming from the attempt to work out the “identification” conditions for traces (in this case, the conditions permitting the connection between a trace and its antecedent), led to a long-term project on locality and intervention effects, which gave rise, a few years later, to Relativized Minimality, and then to the Minimal Link Condition, locality on Agree, etc., all conditions which, in retrospect, can be seen as trying to express in slightly different technical ways the concept of minimal search.

Much progress has been made on specific aspects of these three topics, and they all remain live issues in current research. Suffice it to think of:

  1. Recent publications like Biberauer, Holmberg, Roberts, Sheehan, eds. (2009), with contributions bearing on the analysis of Null Subjects in Minimalist Theory.
  2. Much current work on the theory of locality, also in connection with the program of studying the cartography of syntactic structures.
  3. The current lively debate on the parametric approach and Minimalism, and, more generally, on how best to express a theory of syntactic variation (e.g., the Barcelona workshop last year, etc.)

I believe that these topics are so broad and central that they will remain in focus in syntactic research in the years to come. Some significant issues are the following:

  1. The theory of locality has been built around two distinct concepts, which are implemented in formal principles on separate tracks:
    1. Intervention: some kind of structurally defined intervener disrupts a local relation (RM, MLC, but also the Minimal Distance Principle, etc.)
    2. Impenetrability: certain configurations are impenetrable to rules (Island Constraints, Subjacency, CED, Phase Impenetrability, …). Is it necessary to postulate two distinct kinds of principles? Or can one envisage a unification? On what basis?
  2. Traces are generally assumed in minimalist syntax, but the necessity of assuming null pronominals (PRO and pro) is controversial (e.g., under the movement theory of control and the “pronominal affix” approach to null subjects).
    1. Can one really do away with null pronominals?
    2. And what typology of null elements (including also A’ elements like null operators, null topics, etc.) can be assumed?
  3. Is syntactic variation expressible through the notion of parameter?
    1. If so, what is the format and locus of parameters? i.e., how and where are they expressed in a grammatical system?
    2. If not, what is the alternative?

Akira Watanabe

What was the broad question that you most wanted to get an answer to during your time in the program?

The only broad question I was interested in when I was a graduate student was: what shape does UG take? It’s not the kind of question you can get a final answer to before you finish the program (or before your life is over). It’s a matter of discovering new generalizations and coming up with good new ideas. I thought I would be very happy if I could contribute to this project. And of course, as a grad student, there were (and no doubt still are) practical sides, too. In my day, we were required to write up two generals papers by the end of the second year, and had to finish the entire program in four years. With all the progress made in the ’80s, this timetable was getting rather tight for students in the early ’90s. So you had to be looking for new generalizations and new ideas all the time. No other question occupied me.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

After 20 years, I’m interested in the same question, which continues to be fascinating. I think we can say that a lot of progress has been made since then and new vistas have been opened up. At the same time, there still are phenomena that defy a principled account and details that haven’t yet received proper attention. The human language faculty is a very complex entity, with interfaces connecting it to other cognitive domains. Collective efforts are needed to obtain new insights into how it works.

Joe Emonds

The fundamental aim in the linguistic analysis of a language $latex L$…

…is to separate the grammatical sequences which are the sentences of $latex L$ from the ungrammatical sequences which are not sentences of $latex L$ and to study the structure of the grammatical sequences. The grammar of $latex L$ will thus be a device which generates all the grammatical sequences of $latex L$ and none of the ungrammatical ones.

First paragraph of Syntactic Structures: Ch. 2, ‘The Independence of Grammar’

As a student who had been strongly attracted by grammars of $latex L$ (= English, Latin, German, Greek, French) and the holder of a mathematics MA, I was drawn to the MIT program, via Chomsky’s writings, by the sense that at least preliminary explicit formulations of these grammars of $latex L$ were in sight—not during my stay at MIT, but in, say, a couple of decades.

With almost everyone else, I was convinced from the first of ‘…the necessity for supplementing a “particular grammar” by a universal grammar if it is to achieve descriptive adequacy.’ (Aspects of the Theory of Syntax: 6). Thus, I understood,

(1) Grammar of $latex L_i = UG + G_i$ (= Particular Grammar of $latex L_i$)

These grammars $latex G_i$, supplemented by UG, were to generate all and only the grammatical sequences of the $latex L_i$. So, the broad question had two parts: what was UG, perhaps the hardest part, and what were the (formalized, explicit) Particular Grammars, a supposedly easier question. Nonetheless, the second part also seemed intriguing and puzzling, since, beyond some generalities, exact aspects of e.g. English and French grammars had little in common. (Kayne’s dissertation, his later French Syntax, didn’t seem to be a book about English grammar.) Thus, in addition to UG, “the broad question that I most wanted to get an answer to” was:

(2) What exactly is the form of particular grammars $latex G_i$ that UG can then ‘supplement’?

A contentful answer would consist of at least preliminary, formally explicit $latex G_i$ of some language(s), e.g. English, French, etc. These grammars would be integrated with UG (how was of course also part of the question), and would be working hypotheses which research would further formalize, simplify and refine.

What happened, however, was that almost no one got interested in equation (1). With few exceptions, involving parameters that sort of fizzled out, research proceeded as if any grammatical pattern in some $latex L_i$ could always be decomposed into an interesting UG component plus some downgraded remnant that was ‘low level’, ‘a late rule’, ‘morphology’ or ‘purely lexical’. These unformalized and ad hoc remnants were regularly put aside.

For me, therefore, a more promising alternative was the idea in Borer’s Parametric Syntax that precisely formulated lexical entries of grammatical morphemes, or ‘Grammatical Lexicons’ (of closed-class items), were the needed particular grammars $latex G_i$. However, though espoused at times by e.g. Fukui and Speas, Chomsky, and notably Ouhalla, these Grammatical Lexicons are rarely formulated or theoretically developed by research in syntax, beyond occasional focus on isolated morphemes (Hebrew sel, French se).

So the current status of question (2) is “unanswered”; there are still no preliminary explicit formulations of Grammatical Lexicons $latex G_i$. Moreover, outside of HPSG, whose grammars seem unreservedly stipulative and factor out no UG ‘supplement’, generative syntax still largely ignores (2). Yet (2) seems quite meaningful and in no way ill-conceived.

In fact, it is more than meaningful. Without formalized $latex G_i$, generative syntax is not fulfilling the fundamental aim of linguistic analysis, to produce formal Grammars of $latex L_i$. And no serious “obstacles make it a hard question” (certainly real progress in constructing UG is harder), other than lack of interest and the still unspoken hope that work on UG will somehow eventually make answers to (2) trivial.

Why aren’t the answers trivial? Staying with the example of French and English, syntacticians widely take them to be ‘similar.’ In terms of language variety they are. Nonetheless, their Grammatical Lexicons $latex G_e$ and $latex G_f$ (each containing some 400 ± 100 items, including affixes and grammatical Ns, Vs, and As) don’t share a single pair of items with the same grammar; recently I have even found very ≠ très and a lot ≠ beaucoup. No grammatical preposition, no complementizer, no verbal affix, no negative word, no degree word, no quantifier, no reflexive morpheme, no grammatical verb, no pronoun, no prefix, no article has the same grammar in the two languages. And because these many differences are not even tentatively represented in generative models, the field of syntax knows very little more today than it did in 1960, at least in formal terms, about exactly how French and English differ.

The path to answering (2) is then for at least some researchers to work on it, after its being sidetracked and hidden from view for most of the generative period. I hope my book Lexicon and Syntax has been a step in this direction. In general, I have no doubt that generative syntax can answer (2) in interesting and even relatively complete ways, once people decide it is not a distraction from UG, but rather the best stepping stone for constructing it. Conversely, if the question of Particular Grammars remains unaddressed, then generative syntax has little to say on ‘the fundamental aim of linguistic analysis’.

Bob Fiengo

As it happens, there was a very broad question that I was worried about when I was at MIT in the early seventies. I can’t remember how I would have phrased the question at that time, but I would now put it this way:

Can linguistics, given that its data are intuitions about sentences, be a science? I was then conceiving of linguistics as separable from other areas, including psycholinguistics, whose data are not restricted to intuitions. And I was then assuming that other sciences emphatically do not take human intuitions as data. Physics doesn’t. So I was worried about the scientific prospects for linguistics, as I narrowly conceived of it.

I am still not so clear on the answer to this.

San Duanmu

When I was in the program (1986-1990), the broad question I most wanted to get an answer to was how to obtain greater generalizations towards language universals. In my dissertation, I addressed the question of why contour tones could split into level tones in some languages but not in others. A popular proposal at the time, following the principles-and-parameters approach of Chomsky (1981), attributed the difference to a parameter, so that contour tones split in some languages and not in others. But the parameter seemed to restate the problem, rather than explain it. A better solution emerged when I noticed a correlation between the stability of contour tones and the weight of syllables: contour tones tend to split in languages whose syllables are mostly CV and not in languages whose syllables are mostly CVX. The connection between syllable structure and tone split can be made through metrical theory, in particular the Weight-Stress Principle and the determination of tonal domains as stress domains (through word and phrasal stress).

My experience made me wary of parameter-based solutions, which are still common, including their reincarnation in Optimality Theory as factorial typology. In addition, it made me aware of the shortcoming of looking at a problem in isolation, and the importance of considering the interactions among related fields. For example, in my recent work, I have argued that, if we try to account for all phonotactic patterns by phonology alone, then the maximal syllable can look rather large and complicated. However, if we consider both morphological and phonological factors, then the maximal syllable in many languages is smaller than it appears. The search for language universals remains difficult, but I am optimistic that there is a lot to gain.