Carson Schütze

What was the broad question that you most wanted to get an answer to during your time in the program?

There were two broad questions I was seeking answers to when I came into the program in 1992.

One concerned the connection between the lexicon and morphology on the one hand and syntax on the other. At the time the original Minimalist Program manuscript was hot off the presses, and I was quite puzzled by what it seemed to assume about this: words entered the syntax already inflected, yet these inflections still had to be “checked” against functional heads (e.g. T, Agr in the case of verbs) by undergoing movement–possibly covertly–in order to be allowed to surface. This seemed, first of all, to create a look-ahead problem: all sorts of crashes could be caused if the derivation included an inflected word whose features would wind up not matching those of the relevant functional head. Second, it was unclear what the status of the inflectional features was in the lexicon–they seemed necessarily to be properties of individual words, whereas intuitively one wanted to say that, e.g., -s represented the features 3sg regardless of what word it was a part of. This could only be captured with lexical redundancy rules. And third, there seemed to be no way to derive Baker’s Mirror Principle, insofar as it holds, except by pure stipulation that affix order had to match checking sequence.

The second, broader question was also the reason I signed up for the NSF Research/Training Grant Program, which provided a fifth year for students doing work in acquisition, computation, or language processing–I did some of each, and was the lead editor of the MITWPL volume (#26) that assembled the work through 1995. The question was how to integrate the study of these areas (which I had already pursued separately at the University of Toronto) with the core linguistic theory that we were learning. The area where it was least clear how to go about this was language processing, which had a tradition of hostility toward linguistic theory dating back at least a decade, arguably much longer.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

As to the current state of these questions, I believe the MIT program provided excellent foundations for answers, though the questions are by no means settled.

The question of how to integrate the lexicon, morphology, and syntax finally received a clear answer with Alec and Morris’s development of Distributed Morphology (DM). For the first time I could see how a derivation could work from start to finish, without having to “finesse” many questions about how the words would end up in the right forms without some omniscient creature compiling the numeration. In the two decades since, DM has been widely adopted, and Late Insertion, which the DM literature strongly advocated, has been even more widely accepted. This notion proved crucial to my dissertation and much of my work since; I believe it has provided new ways to make sense of classic questions such as the “last resort” character of do-support and the nature of default case (to pick two examples I’ve worked on). While there had been major proposals about the architecture of the morphological component at MIT before (Pesetsky’s ideas that fed into Kiparsky’s Lexical Phonology), DM in my view represents a decisive break from the past.

I was helped in dealing with the second question by the hiring of Ted Gibson, who started teaching in my second year and with whom I ended up publishing two journal articles. Already in his dissertation he had shown how theta-roles could play a major role in explaining sentence-level comprehension phenomena (adapting and refining suggestions by Pritchett), and during those early years of the RTG Program a great many syntax and semantics students parlayed their linguistic expertise into experimental designs in collaboration with Ted. In at least one case a student took questions about real-time processing so seriously that they became a central consideration in his redesign of Minimalist derivations as left-to-right rather than bottom-to-top: this was Colin Phillips’s “Parser is Grammar” theory, expounded in his dissertation.

Interestingly, as my own interests in language processing have evolved beyond sentence comprehension to lexical retrieval and to language production, Distributed Morphology has once again turned out to be vital in developing my ideas. When studying morphological decomposition as a processing problem, it turns out to be critical, in my view, to adopt the ‘morphemes as pieces’ assumption that DM shares with some other theories of morphology, and crucially not Anderson’s ‘morphemes as processes’ view. Another lesson from DM that has been crucial is the idea that if there is a way to analyze something (a word, an idiom, etc.) by decomposing it into smaller pieces, that should be the null hypothesis–one can learn little by assuming a lack of internal structure. But where DM has turned out to be truly central to my conception of a problem is in the study of speech errors. As Roland Pfau deftly showed in his dissertation, DM can be adopted virtually in its entirety as a production model–it embodies many of the same distinctions as Garrett’s (1975) pioneering work, especially the idea that open-class stems and their “encyclopedic” meanings operate on an entirely different plane from syntactic and morphological structure, the latter being fruitfully viewed as of a piece. Using DM as a lens through which to examine speech errors has led me to re-conceive their taxonomy.

To conclude by returning to the general question of the integration of psycholinguistic research and linguistic theory, it is clear that MIT in the 1990s led the way in this endeavor, chiefly by training bona fide linguists who are also bona fide experimentalists and ‘computationalists’. Time has proven that this was the right approach–David Pesetsky and Ken Wexler deserve credit for their vision in establishing the RTG, as does Alec Marantz for extending it into neurolinguistics with the MEG lab–all the leading linguistics programs are now training their students in this way. I am proud to count myself among the first generation of such students from the MIT Linguistics Program.