Carson Schütze

What was the broad question that you most wanted to get an answer to during your time in the program?

There are two broad questions I was seeking answers to when I came into the program in 1992.

One concerned the connection between the lexicon and morphology on the one hand and syntax on the other. At the time the original Minimalist Program manuscript was hot off the presses and I was quite puzzled by what it seemed to assume about this: words entered the syntax already inflected, yet these inflections still had to be “checked” against functional heads (e.g. T, Agr in the case of verbs) by undergoing movement–possibly covertly–in order to be allowed to surface. This seemed first of all to create a look-ahead problem: all sorts of crashes could be caused if the derivation included an inflected word whose features would wind up not matching those of the relevant functional head. Secondly it was unclear what the status of the inflectional features was in the lexicon–they seemed to necessarily be properties of individual words, whereas intuitively one wanted to say that, e.g. -s represented the features 3sg regardless of what word it was a part of. This could only be captured with lexical redundancy rules. And third, there seemed to be no way to derive Baker’s Mirror Principle, insofar as it holds, except by pure stipulation that affix order had to match checking sequence.

The second, broader question was also the reason I signed up for the NSF Research/Training Grant Program, which provided a 5th year for students doing work in acquisition, computation, or language processing–I did some of each, and was the lead editor of the MITWPL volume (#26) that assembled the work through 1995. The question was how to integrate the study of these areas (which I had already done separately at Univ. of Toronto) with the core linguistic theory that we were learning. The area where it was least clear how to go about this was language processing, which had a tradition of hostility toward linguistic theory dating back at least a decade, arguably much longer.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

As to the current state of these questions, I believe the MIT program provided excellent foundations for answers, though the questions are by no means settled.

The question of how to integrate the lexicon, morphology, and syntax was finally made clear with Alec and Morris’s development of Distributed Morphology (DM). For the first time I could see how a derivation could work from start to finish, without having to “finesse” many questions about how the words would end up in the right forms without some omniscient creature compiling the numeration. In the two decades since, DM has been widely adopted, and Late Insertion, which the DM literature strongly advocated, has been even more widely accepted. This notion proved crucial to my dissertation and much of my work since; I believe it has provided new ways to make sense of classic questions such as the “last resort” character of do-support and the nature of default case (to pick two examples I’ve worked on). While there had been major proposals about the architecture of the morphological component before at MIT (Pesetsky’s ideas that fed into Kiparsky’s Lexical Phonology), DM in my view represents a quantum break from the past.

I was helped in dealing with the second question by the hiring of Ted Gibson, who started teaching in my second year and with whom I ended up publishing two journal articles. Already in his dissertation he had shown how theta-roles could play a major role in explaining sentence-level comprehension phenomena (adapting and refining suggestions by Pritchett), and during those early years of the RTG Program a great many syntax and semantics students parlayed their linguistic expertise into experimental designs in collaboration with Ted. In at least one case a student took questions about real-time processing so seriously that they became a central consideration in his redesign of Minimalist derivations as left-to-right rather than bottom-to-top–this was Colin Phillips’s “Parser is Grammar” theory, expounded in his dissertation.

Interestingly, as my own interests in language processing have evolved beyond sentence comprehension to lexical retrieval and to language production, Distributed Morphology has once again turned out to be vital in developing my ideas. When studying morphological decomposition as a processing problem, it turns out to be critical, in my view, to adopt the ‘morphemes as pieces’ assumption that DM shares with some other theories of morphology, and crucially not Anderson’s ‘morphemes as processes’ view; another lesson from DM that’s been crucial is the idea that if there’s a way to analyze something (a word, an idiom, etc.) by decomposing it into smaller pieces, that should be the null hypothesis–one can learn little by assuming a lack of internal structure. But where DM has turned out to be truly central to my conception of a problem is in the study of speech errors. As Roland Pfau deftly showed in his dissertation, DM can be adopted virtually in its entirety as a production model–it embodies many of the same distinctions as Garrett’s (1975) pioneering work, especially the idea that open-class stems and their “encyclopedic” meanings operate in an entirely different plane from syntactic and morphological structure, the latter being fruitfully viewed as of a piece. Using DM as a lens through which to examine speech errors has led me to re-conceive their taxonomy.

To conclude by returning to the general question of the integration of psycholinguistic research and linguistic theory, it is clear that MIT in the 1990s led the way in this endeavor, chiefly by training bona fide linguists who are also bona fide experimentalists and ‘computationalists’. Time has proven that this was the right approach–David Pesetsky and Ken Wexler deserve credit for their vision in establishing the RTG, as does Alec Marantz for extending it into neurolinguistics with the MEG lab–all the leading linguistics programs are now training their students in this way. I am proud to count myself among the first generation of such students from the MIT Linguistics Program.

Maria Rita Manzini

What was the broad question that you most wanted to get an answer to during your time in the program?

I came to MIT with some knowledge of linguistic facts (what they look like, how to organize them) and of (then) current formalisms, as well as with some intuitive grasp of the notion of explanation (in terms of modelling and unification). What MIT taught me is why one would model linguistic data – in other words, that what is really being modelled is a system of knowledge. I think that this resonated with some vague deep question that I had in me (what is thought? what is the connection between language and thought?) and taught me how I could do something focussed (even useful) with it – so that if I had to single out one question that I really care about it would have to be simply the key question of generative linguistics: how do we model linguistic knowledge in the simplest and most realistic way? This question is at the core of Chomsky’s teaching – and it should be acknowledged here that Chomsky is a great teacher, not just in that he taught us great things, but also because he did what good teachers are meant to do, namely hammer home what is important (no matter how many repetitions and variations on the theme it takes) and put everything else in perspective.

For instance, in recent years I have spent much time, together with Leonardo Savoia, modelling variation in natural languages. That afforded me a considerable amount of pleasure: being able to show that so many combinatorial slots would be filled in; being able to find x and its reverse a few kilometers apart (we are talking about Romance dialects); being able to show that typologists’ dreams were just that. Yet if one asked me whether variation or even the modelling of variation was a key concern of mine I would most certainly have to give a negative answer. What I care about is the restrictions that variation data impose on the form of grammar (i.e. our model of linguistic knowledge) – nothing more (or less).

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

I chose to answer point 1 with a general question – ‘the’ general question of generative linguistics. The question is not ill-conceived (despite all critics) – and one only has to read recent articles by Chomsky (for instance the one with Berwick) to see how much progress has been made on it since I was at MIT a few decades ago. Yet it is clear that as part of this general progress, many problematic areas have come to light. I will try to make amends for my all too general answer to query 1 by suggesting a couple of issues that, though difficult to solve, can be meaningfully posed within current frameworks.

The first one has to do with the modular organization of grammar, which has been considerably clarified by minimalism and by the recent phylogenetic (as opposed to ontogenetic) perspective taken by Chomsky. The prevalent wisdom (encoded in the Berwick-Chomsky article) is that syntax and interpretation correspond to an essentially unique module, while PF corresponds to a not clearly delimited number of modules (is there a Morphological Structure in the sense of Distributed Morphology? is linearization carried out in one or in several submodules, à la Raimy?) whose relation to the syntactico-semantic model is at points largely unconstrained. Despite the intuitive appeal of such a model at first sight (language is thought – sounds are mere accident), I think it has too many mysterious aspects – I don’t mean aspects that are not worked out (though that is also a problem, cf. Scheer on PF ‘intermundia’), but aspects which make the design of the language faculty mysterious. Alternative models (equally compatible with known evidence and general epistemological criteria) may return a much more transparent view of this modular organization.

Take a concrete example, morphology, which has been brought to the forefront of research by Distributed Morphology. DM highlights a major issue in the overall organization of grammar concerning the relation between traditionally syntactic phenomena (above ‘word’ level) and traditionally morphological ones (below ‘word’ level, specifically inflectional). In Halle and Marantz’s original proposal and in much subsequent work, the existence of a dedicated morphological component is necessitated by the opacity of ‘exponents’ with respect to underlying terminals (syncretisms, fusions, fissions, zero exponents, etc.), despite the assumed overlapping of morphology and syntax both with respect to rules (Merge) and primitives. In this case, it is not the form of the solution which is problematic, but the question it sets out to answer. Why would one of the interfaces be devoted to opacizing the other – the perfectly transparent LF one? A possible alternative is to give up some rigidity at the syntax-LF interface, i.e. allow for more interpretive enrichments, ambiguity resolutions etc. at the LF interface – so as to have a more transparent model of the PF interface, including (obviously, but crucially) the lexicon.

The overarching issue of the architecture of grammar (i.e. the modular organization of linguistic knowledge) does not arise only at the interfaces – it touches on the nature of syntax itself (the FLN). For instance, in the current trend to ‘syntacticize semantics’ (Cinque and Rizzi’s phrasing), interpretive primitives of grammar are encoded as abstract constituents – and the trend is certainly not restricted to the cartographic approach. v names what is undoubtedly a real interpretive relation, i.e. the application of a cause/agent to an elementary event. But what is the evidence that v corresponds to a headed constituent in syntax? As far as I can see, at the LF interface it is immaterial whether the (causer/agent, elementary event) relation is notated as a constituency (sisterhood/dominance/Spec) relation or not – and that is also true of the attribution of phase status to the relevant configuration in syntax.

At the same time, if one looks at the PF interface, when the v relation is overtly lexicalized, it is not in the form that one would expect, namely that of an auxiliary-like head or perhaps a particle. Lexicalizations of v relations abound, but they consist of incorporated causative/transitive morphology, i.e. inflections of the main verb, or else nominal inflections, notably the so-called accusative Case. One may be mildly surprised by the fact that so few morpho-syntactic reflexes of the v head/constituent are found (perhaps none) – as well as by the fact that all the evidence points to the relevant relation connecting either to the predicate head (transitivizing morphology) or to argumental constituents (Case). With a less core category, like Appl, this is even clearer. Why is Appl never lexicalized except as a verb inflection or as a Case (the so-called dative)? Similarly, the preposition to looks like a head of its own projection, not like a functional projection of V. In a nutshell, functional hierarchies certainly correspond to interpretively real primitives – but they hardly shed any light on the relation that these primitives bear to actual morphosyntactic structure (the crucial question of syntax); at best they encode it, at worst they may obscure it.

I will leave my answer at this point, where the general lines of inquiry that I was trying to highlight break down into the very many smaller questions of current research in syntax – at least those that my own work is concerned with.

Nigel Fabb

The broad question: Literature is experientially special and (yet) it is made of ordinary language: can linguistics help us understand what makes it special?

If there is progress towards answering this question, it is tentative and contested. But a PhD in linguistics from MIT started me off in certain ways, which I’ve tried to pass on to my students and put into my research. First, I had to learn that deep questions must be approached by a technical path, and I think linguistics is still the best model for how to do this, even for something which is not entirely linguistic, such as literature. Right now, I’m trying to solve the problem of profound ineffability (as in literary representations of the sublime), within an entirely computational approach to content (according to which ineffability should be impossible), but solving it by technical means, drawn from linguistics. Second, linguistics explores how different components interact; my PhD was about interfaces (the distribution of work between syntax and morphology), and I believe that understanding how the work is done in different components and how they interface is central to understanding literature. Understanding where literary form is computed is a matter of components and interface: I think it’s mostly derived by inference, as a kind of content, and not computed via a special set of rules as is e.g., phonological or syntactic form. Third, linguistics is not a theory of all of language: some aspects of language are nonlinguistic, and it’s an open question what aspects of the language of literature are nonlinguistic. At present, I see the line of verse and poetic meter as two possibly nonlinguistic aspects of language which, nevertheless, linguistics can help us understand; Morris Halle and I have developed a new theory of meter which shares some component parts with phonology but does not see metrical form as derived from phonology.

Martha McGinnis

The broad question I became most interested in while at MIT was:

How are the structures and categories of linguistic theory represented in the brain?

While the reductionist program of connectionism never did anything for me, at MIT I discovered the fascinating young field of cognitive neuroscience. Even before Alec Marantz managed to get hold of his own MEG machine, I had the chance to work in his MEG analysis lab, alongside my illustrious fellow students Colin Phillips (Linguistics) and David Poeppel (Brain & Cog Sci). Our experiments exploring phonological and lexical representations in the brain were exciting steps in the direction we collectively imagined, and linguistically informed research in this direction still strikes me as some of the most intriguing and potentially fruitful in the field.

The question posed above has most certainly not been resolved, but it continues to be a meaningful one. Stemmer & Whitaker’s (2008) Handbook of the Neuroscience of Language has an entire chapter on the neuroscience of syntax (written by Alan Beretta) that cites extensive research from the last 15 years by a variety of linguistically informed scholars, e.g. Avrutin, Beretta himself, Caplan, Friederici, Grodzinsky, Kaan, Linebarger, Marantz, Phillips, Poeppel, and Pylkkänen. Others (Hagoort, Hickok, Kutas, Levelt, Moro, Ullman, etc.) have contributed to this and other areas of understanding, including phonology, morphology, and the lexicon.

There are various obstacles to linguistically-informed research in neuroscience (which at least partly account for my own decision to pursue more tractable lines of inquiry). Of course there are strategic obstacles, perhaps chief among them that both language and the brain are such complex systems. However, generative linguistics has made tremendous strides in our understanding of human language as a unified system with a common biological basis – thanks in no small part to the influence of Chomsky, Halle, and the many eminent scholars they have trained. Moreover, neuroimaging techniques provide a wealth of new opportunities to study the living brain. Thus, in principle, progress in the cognitive neuroscience of language is now easier to make than it has ever been.

Still, many tactical and logistical obstacles remain. Neuroimaging equipment is expensive. The training required to gather and analyze data, and simply to keep the machinery and software running properly, is extensive. It is challenging to train students deeply enough in two distinct fields (linguistics and cognitive neuroscience) that they can make a significant contribution. This is in part because of stark differences between the fields, in part because of a lack of relevant training in the K-12 school system, and in part because there are few funding opportunities like the Research Training Grant that created MIT’s five-year doctoral program in Psycholinguistics. Neurolinguistics is not a traditional area of linguistics, so there is rarely an existing culture, critical mass, or infrastructure to support scholars working in this area. There are few neurolinguistics jobs for linguistically trained scholars. And it is virtually impossible to conduct neurolinguistics research as a side interest, since it requires such an enormous investment of both money and time.

In short, the scholars who have succeeded in conducting research in this area are true pioneers. I hope and trust that they and their students will continue to expand this exciting new direction for the field.

Mark Baker

Here are my (quick) answers to the two 50th anniversary questions we were posed:

What was the broad question that you most wanted to get an answer to during your time in the program?

The broad question that most obsessed me when I was a student was (and still is): how can striking examples of robust crosslinguistic variation (especially in syntax) best be accounted for in light of the insight that we have a rich and substantive Universal Grammar? For example, can one truly fit both (say) a polysynthetic language and an English-like language into the same theory, doing full justice to both the similarities and the differences?

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

The current status of this question: It is still relevant and well-formed (I believe). Progress has certainly been made on it, thanks to the many rich and insightful analyses of non-Indo-European languages, done largely by MIT graduates and their students, over the last 25 years. But deep questions about the proper form of the answer remain controversial. Furthermore, what remains to be done in this area is still huge, and has been hindered to some extent, both by the attrition of languages around the world, and by the response given to it in many linguistic quarters, favoring “descriptions” of less-studied languages that are shallow and hold to a narrow kind of positivistic empiricism (in my view).

Ray Jackendoff

What was the broad question that you most wanted to get an answer to during your time in the program?

To what degree is the form of sentences determined by syntax, and to what degree by meaning? And to what degree can these two influences be separated?

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

I’ve answered the question to my own satisfaction, in my books Architecture of the Language Faculty, Foundations of Language, and (with Peter Culicover) Simpler Syntax. However, it’s clear that the field as a whole isn’t satisfied with my answer.

Idan Landau

What was the broad question that you most wanted to get an answer to during your time in the program?

As I was sitting in my office (2 years in building 20, 2 years in E39), the broad question that I most wanted to get an answer to was: When will I see daylight the next time? In a week? A month? After my defense?

In the little time left for me to consider other broad questions, I was mostly concerned with the question of control. I wanted to understand the fundamental mechanics of control, since I wasn’t satisfied with the existing proposals (which reduced control to binding or predication). I was also intrigued by the multi-dimensionality of control, and was eager to understand the precise division of labor between semantics, syntax and discourse in the formation and interpretation of control structures.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

The question, being as broad as it is, is still very much open. I don’t think it was ill-conceived, but I do think many people mistakenly assumed that control is a monolithic phenomenon, and that for this reason, the answer must be maximally simple. But it’s not. Ontologically, there are different types of control-like relations, and the big challenge of the field, in my opinion, is to converge on the right ontological classes. The distinction between obligatory and non-obligatory control is indeed a major one, but quite a few sub-distinctions within each of these broad categories are yet to be determined. The question is hard because of the usual inter-dependence of empirical categories and theoretical constructs. If one takes {a,b,c} to be a natural class, excluding {d,e}, then one is led to a particular theoretical outlook; whereas if one takes {a,b} to be a natural class, excluding {c,d,e}, then one is led to a different theoretical outlook. Since the classification itself is theory-laden, we are trapped in a vicious circle.

The way out, I think, is to enrich the empirical base, so that empirical correlations, rather than theoretical considerations, play the decisive role in establishing classes of data. This has proved fruitful, for example, in the incorporation of finite control data into the mainstream literature. And I expect similar progress in other domains. Against the empirical progress, however, there are strong theoretical inclinations, mostly associated with “radical minimalism”. These inclinations, I believe, have not served the field of control well. The heated debates around “movement vs. PRO” analyses of control have hardly produced novel insights or directed attention to neglected phenomena (with very few exceptions). Instead, they have diverted attention from real puzzles, sometimes classical ones, that have yet to be addressed.

Luciana Storto

What was the broad question that you most wanted to get an answer to during your time in the program?

One question that I have always wanted to answer concerns the nature of verb-second (V2) phenomena. My interest was triggered by the fact that Karitiana, the Amerindian language I worked on during my MA and PhD, displays V2 phenomena in declarative sentences. When I entered MIT, the literature on German, Dutch and Frisian V2 explained the complementary distribution between the position of the verb in main (VO) and embedded clauses (OV) as following from verb movement in finite clauses from a basic OV position (through an Infl-final position) to an empty sentence-initial C position; this movement was followed by an obligatory movement of a constituent to the Specifier of C. Karitiana shares exactly the same features with those Germanic languages, except that the constituent movement is not obligatory, but preferred.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

The question about the nature of V2 is a hard one to tackle. The literature on embedded V2 in mainland Scandinavian and Frisian, especially that which tried to account for the co-occurrence of embedded V2 and C, has generated the need to discuss CP recursion. An alternative to the CP recursion analysis put forth to explain embedded V2 in Icelandic and Yiddish is that the embedded V2 clauses have movement of the finite V to I but not to C. Research on Karitiana supports the latter hypothesis, in that it indicates that embedded clauses in the language are VPs dominated by aspectual and/or auxiliary head-final projections to which the verb moves, but not by head-initial CPs. The Karitiana data seems to suggest that the complementary distribution found in V2 languages does not have anything to do with the presence or absence of Cs per se, but with the syntactic, semantic and pragmatic differences between main and embedded clauses. Whether it is true that illocutionary force cannot be expressed in embedded environments is an area in which more research is required. The semantics and pragmatics of embedded root phenomena and non-finite clauses are among the crucial phenomena we need to understand better as a field before we even try to account for V2.

Maria Luisa Zubizarreta

As a naive, young student in the late 70’s and early 80’s, I wondered whether we might ever understand the relation between the algorithmic descriptions provided by linguistic theorizing and the way in which language is wired in the brain. It would seem that 30 years later there is some hope of learning something about this with all those new fMRI and EEG techniques, so trendy nowadays. But I am rather skeptical. If we do not understand how the simple brain of an insect maps onto behavior, what can we expect to learn about the human brain and that complex object called “language”? This is not to say that this question is not worth pursuing, it is just to say that I do not think we have learned much regarding the above question—other than that the contention that language is entirely located in the left hemisphere is wrong. And a breakthrough in this area is probably far away in the future.

The other question that I wondered and still wonder about is how humans learn language, or more narrowly, how they acquire such complex and subtle grammatical intuitions. I do think we have made some (humble) progress in this area and I expect that we will learn much more about this in the decades ahead. Yet, it is a research area of much controversy, as you know. I think it would be very good to have a debate panel that directly addresses that controversy, in a detailed and meaningful way.

Itziar Laka

What was the broad question that you most wanted to get an answer to during your time in the program?

What brought me to the program was the simple notion that language belonged in our minds, that it was a complex species-specific trait rooted in our brains. For someone trained as a philologist (like me), in a tradition where language was conceived as some immaterial, platonic entity whose fundamental nature was hardly ever reflected upon, the idea that language lived in our minds/brains was a powerful one, even though from today’s perspective this might seem surprising. Almost by chance I read some of Noam’s early works (Cartesian Linguistics, Reflections on Language, etc.) and it was mainly that notion of language as part of (cognitive) psychology that really ignited my desire to study linguistics (my original goal was literature). That, and the idea that grammars were precise mechanisms whose formal architecture could be studied and implemented formally, explicitly, generatively. I soon began to pester my friends in bars, telling them about this amazing linguist and his ideas, and I started fantasizing that I might one day be a student in the department, which I imagined very different from how it was. But this was just a far-fetched fantasy. Later on, as I was finishing my bachelor’s degree in Basque Philology, a concatenation of highly improbable and very lucky events led me into the program. When I landed, there was no specific question I sought an answer to; what drove me was a general curiosity about the inner workings of language. I wanted to understand its basic features, and how they could give rise to the great variability of specific grammars.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

In my view, there has been enormous progress in our understanding of the fundamental aspects of language, and the bits and pieces that continue to complete this puzzle come from many fields, not only linguistics. I think our views (at least mine) regarding the place language has in our minds have changed. The neurocognitive foundations upon which language rests appear to be less species- and domain-specific than was generally thought before, but at the same time grammatical computation of meaningful elements stands out as a human-specific trait. The general question of the nature of language and its basic design is a lively one today, and though I know I am very optimistic in my outlook, I do think there are interdisciplinary bridges a researcher can walk now that did not exist when I was a student. The complexity of the problem is not trivial; there are aspects that in my view ought to receive greater consideration in Linguistics – like the impact of time in the course of linguistic computation, the impact of non-grammatical factors, and the nature and architecture of other neurocognitive functions – and doing so would make communication across disciplines interested in the study of language more fruitful. But in my opinion the central questions about the nature and design of human language are meaningful and very much alive. I think the challenge for contemporary Linguistics is to play a central role in the quest for an answer, not to withdraw from it.