Uli Sauerland

What was the broad question that you most wanted to get an answer to during your time in the program?

One of my main questions was to determine the mechanisms of memory storage and access for referents. The restriction to “memory for referents” is meant to exclude memory for syntactic relations such as subject-verb agreement, while including pronouns and traces as core cases, but also possibly other cases where only language-external content is remembered. (I’ve put aside non-linguistic questions, though those were really foremost on my mind at the time, and I’m also applying some hindsight to formulate the question more clearly.)

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

Concern with the question obviously predates my days in linguistics, and a number of pretty good answers are around. The dominant answer is a model based on the notion of a position in an assignment sequence, as in Frege-Tarski type logic. This is, however, a very powerful system, and the question is whether all that power is really necessary. As far as I know there are no good arguments that all of it is needed, though at times I thought differently. Work on the question has mostly been abandoned, since semanticists are usually not concerned about theories that have more power than strictly necessary.
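For readers outside semantics, here is a minimal sketch of the assignment-sequence mechanism in standard Heim–Kratzer-style notation (my illustration, not Sauerland’s own formulation; the pronoun and index are invented for the example): an indexed pronoun is interpreted relative to an assignment g, a function from indices to individuals, and a binder shifts g at a single position.

```latex
% Illustrative only: pronoun interpretation via an assignment sequence.
% g is an assignment, a function from indices to individuals.
\[
  [\![\, \textit{she}_7 \,]\!]^{g} \;=\; g(7)
\]
% A binder with index 7 shifts the assignment at that one position,
% where g[7 \mapsto x] is like g except that it maps 7 to x:
\[
  [\![\, \lambda_7\, \phi \,]\!]^{g} \;=\; \lambda x\,.\, [\![\, \phi \,]\!]^{g[7 \mapsto x]}
\]
```

The power at issue is that g keeps unboundedly many positions simultaneously and independently accessible — more bookkeeping than most attested referential dependencies obviously require.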

Tom Bever

  1. What did I want to get an answer to from studying MIT linguistics?
  2. What is the current status of answers to this question?

Personal answer

  1. I wanted to see if I could be a linguist.
  2. Not sure.

(More) Professional answer.

  1. I was already interested in language acquisition and adult behavior: I had privately translated Jakobson’s Kindersprache, and was the chief RA on an early childhood language acquisition project: my undergraduate thesis was on the emergence of phonology in the first year of life, in relation to neurological development. Jakobson was my advisor for that work and an advisor on the research project, and through him I met Morris, or more felicitously for me, Morris met me. That first meeting made an indelible impression of Morris as a no-nonsense and insightful thinker about science who was prepared to treat even a brash kid as someone to argue with as an equal (he blew away a pompous proposal I had in mind about how to collect all possible data about early child vocalizations – interestingly, there is a project today at MIT with just such ambition). Eventually he invited me to be in the first MIT class. I was applying to psychology at Harvard, and interviewed with Smitty Stevens (the noted psychophysicist), who was a bit stern and instructed me that I should go learn something first and THEN I could be a psychologist. I took this to heart and decided to learn linguistics and be in a more student-friendly place.

    The question I always had in mind was how the brain incorporates and uses language. I pursued a dual career as a grad student in linguistics and in psychology (MIT also had a new program in that), and I was lucky enough, with Morris and Noam’s nomination and support, to be awarded a Harvard Junior Fellowship: this gave me access to the generosity of George Miller at the Harvard Center for Cognitive Studies, where I had several research assistants of my own while I was still a graduate student. I ran many experiments on sentence processing, most of which failed, which was a wonderful learning experience.

    Other accidents abound in this background: for example, Jerry Fodor kindly picked me up daily in his (very small and very cold) Austin-Healey and brought me to school during the first few years of my study: this led to many discussions about the psychology of language and ultimately some early experiments together (clicks and all that).

    H.-L. Teuber, chair of psychology, was extraordinarily supportive. And so on. The only glitch was that Morris would not let me write a thesis on language processing – it wasn’t thought of as a part of linguistics at the time – so I duly did my sentence by analyzing Bloomfield’s analysis of Menomini phonology, and some implications for how to unpack phonological rules – a burning question of the day. Eventually, I got a job at Rockefeller U and have mostly pursued psycholinguistics since then.

  2. Not sure.

(Most) Professional answer.

  1. The major question was and is how to integrate a structural theory of language with models of brain and behavior so that there would be mutual contributions. At the time, Miller and his students were taking the Syntactic Structures model of language structure very seriously as a model of language behavior, especially memory for sentences, but also acquisition, perception and production. When I started graduate school, this movement was in its prime, with great excitement about interpreting linguistic models as psychological models and then subjecting them to experimental “test”. Linguists, including Noam, were publicly skeptical about such efforts, noting that a few linguistic intuitions provide more psychological data than a few years’ worth of experiments – if a given experiment seemed not to support a theoretical structure, so much the worse for the experiment. In the event, the Miller program collapsed as more experiments came in showing that the one-to-one correspondence between linguistic rules and psychological processes was not consistent (most of these were by me, Fodor, Garrett and Slobin). This was the background for our attempts to develop a new way of thinking about the relation between language structure and behavior, a relationship mediated by language acquisition processes and adult behavioral processes setting constraints on learnable and usable languages.

    The strongest themes of the day relating to behavior were: nativism/empiricism, underlying (aka “deep”) structures, rules vs. associative habits. Miller et al. attempted to show that sentences are organized by rule; Noam was arguing, as today, that the child’s data are too impoverished for an associative or pattern learning process, so language must be innate. I became involved early on in experimental attempts to show that deep structures are actually computed as part of sentence processing. A still small voice (well, small anyway) in all of this was the theme that language is a biological object. This had been most famously argued by Lenneberg and clearly was part of Noam’s background thinking. But it was not a major overt focus of the theoretical linguistics projects at hand, which were much more concerned with the architecture of daily syntax, phonology and semantics. When I started out on attempts to show in some detail how language is the result of a maturational and experiential process involving emerging structural abilities, language behavior patterns, experience, and cognitive constraints, it was a lonely adventure for quite some time. The idea that there is a large set of architecturally possible languages which is reduced by “performance” constraints was in the background, but not a prominent part of the research program. I got a lot of gas over it, even from Noam, or at least felt I did.

    The intellectual tension remains between linguistic theory imperialism and psychological functionalism: the issues tend still to co-occur with the controversies over nativism/empiricism, rules/associations, and surface/deep representations, respectively.

    Learning. For example, “parameter setting” acquisition theories maximize the extent to which the infant is equipped with foreknowledge of the typological options for language – “learning” involves recognizing “triggers” that indicate the setting for each parameter. This view often seems compelling because the “alternative” learning theory has usually been limited to some form of associationism, which by itself is definitionally inadequate to account for what is learned: hence it is a straw opposition to parameter setting theory. What is now at issue is the construction of a more complex hypothesis testing model of language learning, which can integrate statistical generalizations with structural adaptations. A number of features of language are now being invoked as supporting this approach. First, the last decade has witnessed an explosion of investigations of the extent to which the statistically supported patterns of language behavior can carry structural information to the infant – it turns out that the extent is much greater than was often thought: but the rub is that inductive models that extract the regularities often require the equivalent of millions of computations to converge on the statistical patterns. This heightens the importance of another series of current studies showing that infants are pretuned to focus on learning only certain kinds of serial patterns – not exactly language, but possible components of language universals. A third development is the resuscitation of the “laws of form” as constraining language to have certain kinds of structures, either because of categorical limitations or because of efficiency considerations. For example, it has long been argued that if language is hierarchical, then hierarchies cannot cross over into each other (see Barbara Partee, née Hall, for the original discussion): that is a categorical law of form. More recently, arguments have appeared about the kinds of phrase structures and interlevel mappings that are most efficient computationally, as explanations of certain language universals.

    Gradually, there may appear a union of certain kinds of inductive models interacting with laws of form and the structural potential of the infant, to explain language learning and language structure.

    All of this ferment is now subsumed under the now popular re-evocation of “biolinguistics”, now trumpeted as the leading idea integrating today’s language sciences. The historical trend in ideas about the generative architecture of syntax has also led in this direction. In Syntactic Structures, virtually every “construction” type (e.g., passive, question, negative) corresponded to its own rule(s). Gradually, this has been whittled down, first by removing “generalized” transformations that integrate propositions, then by formulating constraints on transformations, ending up with GB theory, on which there was one “rule” (“relate/move alpha”) and numerous “theories” (“case theory”, “theta role theory”, “binding”) acting as filters on possible derivations after an initial phrase structure is created by X-bar theory and a rehabilitated version of merge. Finally, today we see a further (ultimate?) simplification in which the surface hierarchical structure is itself built by successive iterations of the same structure building rules – most of the “construction” building work is now carried by the internal organization and constraints of individual lexical items. So over a long period of time, syntactic architectures have moved from a complex set of transformations and a simple lexicon to a complex lexicon and a simple set of recursive tree building processes. The goal now is to specify how syntax is the best possible interface between long evolved conceptual structures and recently evolved efferent motor capacities, such as the vocal tract or the hands.

    This latest development raises new issues for nativism because it is not immediately obvious how parameter setting can work in relation to the minimalist architecture, since many parameters assume a hierarchical organization and/or complete derivation.

    Parameters could apply to the interface between syntax and the phonological component, but again this has them working as filters, without much rationale for their particular forms or evolution. My (recidivist) guess, in which I am no longer rare, is that parameters in large part comprise emergent simplicity constraints on learning and language use: perhaps not at all a part of the universal architecture of language, except insofar as that architecture creates decision points for variable parameters to be established.

    Behavior and “Rules.” Starting in the 1980s there was a burst of interest in connectionist models, spurred by the discovery of various methods of enhancing perceptrons – basically the use of multiple layers – so they can asymptotically master problems that require the full range of logical operators. For several decades, connectionism dominated modeling efforts – it became difficult to get a behavioral finding published without a connectionist model that simulated it. The appeal of models that worked by varying associative activation strengths between conceptual “nodes” was often advertised as based on their similarity to how neurons interact in the brain. Describing language was a recognized goal, worthy of such modeling attempts. A number of toy problems were approached, including, famously, the modeling of the strong vs. weak past verb forms in English. The originators of these models went so far as to echo Zellig Harris’s lament about his rules, originally written 40 years earlier: what linguists take to be rules are actually descriptive approximations that summarize regularities in the statistical patterns of the real linguistic data. In the end the models’ successes have also been the undoing of the enterprise of debunking linguistic nativism. Just as in the statistical modeling of Motherese, the thousands of trials and millions of individual computations involved in learning even the toy problems become an argument that this process cannot be the way the child learns language, nor the way adults process it. Kids must have some innate mechanisms that vastly reduce the hypothesis space.
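    To make concrete why multiple layers were the crucial enhancement, here is a minimal sketch of the textbook point (my illustration, with hand-set weights; the connectionist models of the period learned their weights from data): a single perceptron unit cannot compute exclusive-or, but two layers of the very same units can, because the hidden layer re-represents the input.

```python
# Illustrative only: a hand-wired two-layer network of classical
# threshold units computing XOR, the standard example of a function
# beyond the reach of any single-layer perceptron.

def unit(inputs, weights, threshold):
    """A perceptron unit: fires (1) iff the weighted sum reaches threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def xor(a, b):
    h_or = unit([a, b], [1, 1], 1)        # hidden unit computing a OR b
    h_nand = unit([a, b], [-1, -1], -1)   # hidden unit computing NOT (a AND b)
    return unit([h_or, h_nand], [1, 1], 2)  # output unit: AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

    The same composition trick extends to any Boolean function, which is the sense in which multilayer networks command “the full range of logical operators.”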

    But the minimalist program of sentence structure building in particular has set an even more abstract problem for modeling both acquisition and processing. For example, phrase structure trees are now composed by successive iterations of merge, starting with the most embedded phrase. In the case of a right branching language, this means that the basic structures of sentences are formally constructed starting at their end, working backwards. Clearly this cannot be a viable model of actual serial language processing. We either must give up on a role for linguistic theory as a direct component of processing, or we must configure a model that allows both immediate comprehension and somewhat later assignment of full structure. This has made demonstrations of the “psychological reality” of derivations important. Since the derivations may not be assigned until well into a sentence, the best way to test them is to test the results of their application: this has motivated various studies of the salience of empty categories during processing, most importantly WH-trace and NP-trace: experimental evidence that these inaudible elements are nonetheless present during comprehension is an important motive to build in the theory that predicts their occurrence. Our attempt at this has involved a resuscitation of analysis by synthesis: on this model, we understand sentences initially based on surface patterns, canonical forms and so on; then we assign a more complete syntactic structure based on derivational processes. Various kinds of evidence have emerged in support of this model, including behavioral and neurological facts. The idea that we understand sentences twice does not require that we wait until a clause is complete to assign a derivation; rather it requires that candidate derivations be accessed serially, immediately following the initial analyses based on surface patterns. My co-authored book on sentence comprehension spelled out a range of behavioral data supporting this idea: interestingly, more recent electrophysiological and imaging methods have adduced brain evidence for a dual processing model.
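    The ordering tension can be made concrete with a toy sketch (my illustration; the example sentence and the pair-based encoding of merge are expository assumptions, not any specific minimalist formalization): building a right-branching structure bottom-up means starting from the last word and merging leftward, the reverse of the order in which a listener receives the words.

```python
from functools import reduce

def merge(a, b):
    """Combine two syntactic objects into an unlabeled binary constituent."""
    return (a, b)

# A right-branching toy sentence, heard left to right.
words = ["John", "thinks", "Mary", "left"]

# Bottom-up derivation: start from the most embedded element (the last
# word) and successively merge each preceding word onto the result.
tree = reduce(lambda acc, w: merge(w, acc), reversed(words[:-1]), words[-1])

print(tree)  # ('John', ('thinks', ('Mary', 'left')))
```

    The derivation consumes the string from right to left, which is exactly why it cannot directly double as a model of incremental comprehension.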

    But such a model is also vexed by the fact that in today’s linguistic theory, virtually every additional phrase structure level involves some form of movement, and hence trace. So the number of traces in a full description can sometimes be larger than the number of words. This is astoundingly true in the case of some versions of today’s “distributed morphology” in which individual lexical items can have in effect a derivation utilizing “light verbs” (an uncanny recapitulation and refinement of the best technical aspects of the ontological misadventures with Generative Semantics in the 1970s). How do we go about motivating the choice of which traces are psychologically active during sentence processing and which are not?

    Another approach has been to finesse the problem by building models, and evidence for them, showing that the correct syntactic structure is in fact assigned by a process that operates in real time, and in effect “top-down”. On this view, speakers have several complete syntactic models with (one hopes) strong descriptive equivalence. But the top-down models are driven up front not only by structural patterns, but by statistical indices of the current likelihood of a particular pattern. So, under any circumstances we are faced with the prospect of inelegant humans, who insist on relating meaning and form via several distinct systems, at least one of which is statistical and another structural.

    Biology of language. Of course, “biolinguistics” should be informed by a combination of biologically based fields: aphasiology, neurolinguistics, cognitive neuroscience, genetics and evolutionary theory. The recent explosion of brain imaging methods has to some extent made brain images a replacement for connectionist modeling as an entrée to publication: not always to important or good effect.

    There was a great deal of skepticism for many years about the relevance of brain studies by those in the central dogma territory of Cambridge. For example, a good friend of us all made a telling remark when I told him about an early result involving the N400 (the “surprise” component of the ERP brain wave). The result was that the N400 is especially strong at the end of a sentence like “this is the book the frog read [t]”, suggesting that there is a real effect of the WH-“trace”. His remark was: “you mean the brain knows…[t]?” But the dogged persistence of a few international labs (e.g., Nijmegen, Jerusalem, Montreal, Leipzig, Seattle, and now New York) has begun to bear fruit on basic questions relating to language organization.

    Of great interest now is a glimmer of emerging study of genetic factors in the emergence of language. I am (I hope) contributing to this effort by focusing on differences in language representation and processing as a function of familial handedness. We have documented with at least 20 behavioral paradigms that right handers process language differently when they also have left handers in their family pedigree: the main difference is that they focus on lexical knowledge more readily than syntactic pattern knowledge. Recently (having given up my prejudice against them), brain imaging studies are showing corresponding neurological differences in how the lexicon is organized and when it is accessed during processing. This may give us a handle on what to look for in the maturing language learning brain, as a function of specific polymorphisms associated with phenotypic left handedness. We are beginning to collaborate with laboratories in Leipzig, San Sebastian, Genoa and Trieste on these possibilities.

  2. So where are we today in relation to the original question, interrelating linguistic structure with maturation, behavior and the brain? Many of the specific aspects have been clarified, but an overall theory remains elusive. A few experiments purport to show that component computational processes involved in processing or representing sentences involve uniquely located and/or timed brain processes. But we are a long way from understanding how such demonstrations will accumulate into a meaningful model. As is the usual case, careful serendipity will probably be our best bet.

Applied Coda.

Linguistics is more than a theoretical discipline. Quite a few (even MIT) linguistics graduates work in applied settings, on computational issues, or saving endangered languages, or on reading programs, just to name a few. In my own case, I have concentrated for years on using comprehension models to improve the readability and enjoyability of texts. A sample of basic methods: varying the between-word spacing to coordinate with major comprehension units; varying the clarity of individual letters as a function of their information value. For years, I tried to give away these ideas, but publishers are generally too skittish to accept them. So I patented implementations of these processes and others, and we are marketing them with some emerging success. Morris says this will be the first instance of anyone making money from Linguistics. I’m hoping it will eventually make it possible to endow a chair of interdisciplinary studies as payback to the field. At the moment, the financial value is mostly theoretical.

Heidi Harley

What was the broad question that you most wanted to get an answer to during your time in the program?

I was (and still am) interested in the nature of the syntax/morphology/lexical semantics interface.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

At the time, syntax/morphology/lexical semantics interface issues were mostly understood in the field as being about the syntax-lexicon interface, and about mapping between independent levels of representation. I didn’t understand much about it then, but was very excited and inspired by two strands of work going on among the faculty: Alec Marantz and Morris Halle’s Distributed Morphology approach to the syntax/morphology interface, and Ken Hale and Jay Keyser’s work on the syntax/argument structure interface. In both cases the answer seemed to be that the concept of ‘mapping’ was inadequately predictive/restrictive, and that a more principled and explanatory account was forthcoming if the relationship between the hypothesized representations was simply identity: it’s syntax all the way down, in Marantz’s phrase.

In the intervening years, this has turned out to be a very fruitful idea, in both directions, and I have not yet seen any convincing reason to let go of it. I think we have achieved a significant body of work in this unified perspective which has allowed a lot of insight into the workings of the relationship between UG and conceptual structure, and between UG and surface representations.

My classmate Colin Phillips and I organized a “Morphology/Syntax Workshop” at MIT as a satellite event at the 1994 LSA meeting in Boston, which then resulted in an MITWPL volume of the same name (MITWPL 21). It might be interesting to revisit the work in that volume and consider the differences in understanding that have emerged in the intervening 17 years.

Barbara Partee

What was the broad question that you most wanted to get an answer to during your time in the program?

I didn’t know anything when I started the program, and didn’t have any questions of my own to begin with – I was just absorbing ideas, and becoming curious about various things as I went. I remember wrestling for quite a long time with a question Ed Klima raised in syntax class in my first semester: why is it obligatory to have a relative clause with “those of the boys”? I think that one has been answered, though it wasn’t during my four years; it turned out not to be a matter of syntax.

The main ‘broad’ question that was intriguing me during my last two years was this one: Do transformations ever change meaning?

That problem was one that I cared about, since Klima’s beautiful work on negation in English involved an optional rule changing some to any under negation, yielding two non-synonymous sentences (1a-b) from the same underlying structure. And that violated the then-recently formulated Katz and Postal hypothesis that transformations didn’t change meaning, i.e., that deep structure determined meaning. And that hypothesis seemed very attractive and strong; it even made it into Aspects.

(1)   a.  Sandy didn’t answer some of the questions.
      b.  Sandy didn’t answer any of the questions.

But semantics then was in too amorphous a state for me to want to try to work on it head-on; syntax was much more satisfying to work on. In a third-year seminar where we presented potential dissertation topics, I made a very unsuccessful attempt to solve the problem of some-any alternation syntactically, and (wisely) abandoned it as a dissertation topic because I could only find a very ugly solution involving three separate some’s. Wisely, because that problem needed semantic tools that didn’t then exist and which I wouldn’t have been able to invent.

(As an alternative dissertation topic, I suggested trying to assemble what had been done so far in transformational grammar into a grammar of English. Chomsky, in 1964 no more experienced as a thesis advisor than I was as a thesis writer, said that sounded like a nice idea. I thought a thesis was a one-year take-home exam; at the end of a year, I had written about subjects and objects (about 1/30 of my outline, with an early version of the unaccusativity hypothesis), and turned in what I’d done; that was my dissertation.)

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

Has it been answered? Well, so much has changed that the question can’t be asked in exactly that form any more. But it has been “answered” many times in many ways, and in the more general form, “What is the relation between syntax and semantics?”, remains one of the most interesting and difficult questions in the field. I wouldn’t presume to try to write a short paragraph on what I think the path to an answer might be, when so many of us MIT alumni have written so much about that over the past 50 years. The idea of trying to put our heads together to discuss what obstacles make it so hard and so perennially contentious might be a very constructive exercise – we might be able to make headway on THAT question without fighting too much.

David Caplan

What was the broad question that you most wanted to get an answer to during your time in the program?

At the time, I wanted to know how syntactic structure was represented, and how it was constructed in comprehension. Beyond that, I wanted to see if these models could be used in the diagnosis and treatment of patients with neurological disease. I was on my way to medical school, as I told Morris and Noam. They accepted me despite the fact that I was so interested in psycholinguistics and applications. I cannot say how grateful I am for having been accepted and having been in the program. What I learned there has been the basis for my entire professional life.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

I still think the question of how syntactic structure is represented is a well-formed question, and I still think that an important approach to answering it is logical analysis of expert (i.e., trained linguists’) judgments about features of sentences such as meaning, synonymy, well-formedness, etc. However, these methods have well attested problems, the solutions to which are not apparent (to me). Also, the relation between theories of the structure of syntactic representations and parsing/interpretation has become less clear as models of representations in the MP have become (in a certain sense) more abstract (not that the relation was ever clear). As far as applications to language pathology go, current models of syntactic representations have essentially stopped influencing models of language deficits in adults with neurological disorders; even the most linguistically oriented researchers (e.g., Yosef Grodzinsky) utilize models from around 1995 (he may disagree with this characterization of his work). More broadly, although I am not up to date in other areas, I see a similar isolation of modern work on syntactic representations in the MP framework from other potentially related fields such as child language development. If this view is correct, I think it presents a serious challenge to work that sees the study of “competence” as part of biology. Noam’s perspective that the crisis lies in biology is not as appealing to me as it was 15 years ago.

David Perlmutter

The question that has always struck me as central, when I was a student and ever since, is this:

(1) In what ways do languages differ and in what ways are all human languages alike?

When I was a student it was common for answers to this question to be proposed based on evidence from English alone. In my dissertation and ever since I have tried to enlarge the language base in terms of which this question is discussed. One chapter of my dissertation, in fact, later evolved into a much-discussed parameter of variation.

What has happened to question (1) since my student days is nothing short of amazing. There has been an explosion of research on the most diverse languages, largely due to the development of theoretical constructs capable of handling a far wider range of languages than the constructs in use in my student days could handle. The advances in understanding, I think, have been impressive, although the splintering of the research community along theoretical lines, at least in syntax, has to a considerable extent obscured the real gains that have been made.

The question itself continues to be central to linguistics. It has not been answered, not because it is an ill-conceived question but because, I would say, it is right on the mark. The results of research on typologically diverse languages have brought out greater cross-linguistic differences than were even imagined in my student days. To cite just one example, my own expansion of my research to sign languages alongside spoken ones has made me keenly aware of the possibility of far greater cross-linguistic variation than previously imagined, and has made finding cross-linguistic commonalities much more challenging.

From my own perspective, I think I know far more about how languages work than anyone knew in my student days, but at the same time, what I think remains to be discovered amounts to much more than anyone imagined in my student days. And that, I think, is real progress.

Colin Phillips

What was the broad question that you most wanted to get an answer to during your time in the program?

The broad question that most interested me as a student, and continues to interest me, is the relation between grammatical models and psychological/neural computations. Before coming to MIT I had spent a year at the University of Rochester, where Tom Bever and others were working hard to bring linguists, psychologists and computer scientists together, and right after I arrived at MIT there was a growth in efforts to integrate theoretical and experimental work, thanks to an NSF training grant that David Pesetsky and Ken Wexler secured, plus Alec Marantz’s efforts in cognitive neuroscience. So it was a good time to think about such issues. Since my views on the relation between grammars and real-time processes were perceived to be swimming against the tide, people sometimes assume that I must have encountered hostility at MIT, but that is far from the truth. I found that people were very supportive and open to discussions on these issues.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

I think that the question remains very relevant, and that we’re currently able to pose the question in a rather more articulated fashion than was the case 15 years ago. We have learned a great deal about the grammatical sophistication of real-time processes, and in recent years we have benefited a lot from the use of explicit computational models of information encoding and access in memory. Our understanding of the neuroanatomy and electrophysiology of language is substantially richer than it was in the mid-90s. And linguists are increasingly a part of the conversation in these areas, including a number of very talented young linguists who are now entering the field equipped with skills that we couldn’t have imagined when I was a student. I am currently optimistic about our ability to make good progress on this issue.

Diane Massam

What was the broad question that you most wanted to get an answer to during your time in the program?

a) What is Syntax? (and relatedly, what is Universal Grammar?) How can we be sure it exists? Meaning and Sound/Sign are obviously part of language, but does Syntax really exist? If so, where, and what form does it take in the mind/brain? Even if I am convinced, how can I convince others of its existence and reality?

b) Secondary question: What are the range and limits of syntactic variation?

What is the current status of this question?

a) I still think it is hard to answer, that is, it is always a challenge for me to find ways to answer it to the satisfaction of some of my introductory (and increasingly, even advanced) students. I think syntax and abstract concepts in general are in danger of being sidelined, and we need to advance ways to evidence them.

b) The second question is being answered daily in all the incredible cross-linguistic work that is now being done, though we still need more expertise in endangered languages to better record diversity.

Has it been answered?

a) For me, personally, it was answered and then some, during my time at MIT and beyond, but it keeps re-emerging in my interactions with others.

b) No, but we are learning more all the time.

Did it turn out to be an ill-conceived question?

a) No, especially judging from students’ reactions today.

b) No.

If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

a) The scientific method, sound argumentation, and theoretically informed experimental and inter-disciplinary research. Obstacles include a trend toward superficiality and a belief that only what can be measured computationally is valid.

b) Encouraging fieldwork and native speaker linguists. Obstacles include the way the world is organized, unfortunately, and continued language endangerment.

Lisa Selkirk

As a grad student at MIT (1968-1972) I was interested in both phonology and syntax. I found myself particularly interested in sound patterns that showed an influence of syntax, and in the end opted to write a thesis in this area, which I called The Phrasal Phonology of English and French. It dealt with diverse aspects of the phonology of English function words and with the phonology of French liaison patterns, expanding the theory of syntax-driven phonological boundary placement offered in The Sound Pattern of English.

My graduate education at MIT at that time gave me a profound interest in the nature of the distinct types of grammatical representation and in the interfaces between these in the architecture of the grammar. With time, the understanding of what constitutes phonological representation has expanded beyond what was envisaged in The Sound Pattern of English. My own contribution in this area has been primarily to show motivation for a properly phonological hierarchical constituent structure that is independent of, but systematically related to, syntactic structure. This prosodic constituency in phonological representation is argued to provide the characteristic domains for most apparently syntax-sensitive phenomena of the phonology and phonetics. The question of just how linguistic theory should characterize the effect of syntactic constituency or the syntactic derivation on phonological domain structure continues to be a central theme in my own research (see for example my article ‘The syntax-phonology interface’, to appear in The Handbook of Phonological Theory, 2nd ed., edited by Riggle, Goldsmith and Yu).

Another gift of my graduate education at MIT was an understanding of the advantage to be gained empirically and theoretically by thinking modularly. This has been especially important in trying to understand the intonational pitch pattern of sentences, which in a language like English is subject to a multitude of factors that relate not only to the phonology of prosodic constituency and stress prominence in the sentence, but also to semantic/pragmatic factors like focus, discourse-new status and discourse-givenness, as well as to the tonal representation of the sentence and its phonetic interpretation, and to the effects of the phonetic interpretation of prosodic constituency and prominence, to cite just the grammatical influences. The perspective that coming to an understanding of intonation involves solving the puzzle of just how all these pieces fit together, and of just what contribution each is responsible for making, has given what I think are positive results. (My labors in this area are ongoing, in particular in work with Jonah Katz on phonetic evidence for making a semantic distinction between contrastive focus and discourse-new, and in work with Angelika Kratzer on the semantics and phonology of (contrastive) Focus and Givenness in English.)

It is not possible to say at this point that major issues in the theory of the syntax-phonology interface, or even in the theory of the relevant aspects of phonological representation, have been resolved to the point of achieving broad consensus. Too little data on these questions is available from the world’s languages, or even from single languages, and too few competing theoretical perspectives have been articulated and evaluated with respect to the data available. There have simply been too few scholars at work in this area, perhaps because investigation of these questions requires thinking beyond the confines of single components of grammar, and in particular the readiness to overcome what has come to seem a practical, even natural, divide between phonology and phonetics on the one hand and syntax, semantics and pragmatics on the other.

Seth Cable

For what it’s worth, I wanted to submit my own answers to the two key questions on the invitation:

What was the broad question that you most wanted to get an answer to during your time in the program?

In what ways can the investigation of a single understudied language (Tlingit) advance debates within theoretical linguistics?

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

Well, needless to say, it’s still very much an open question 😉 It has been answered, in part, by my own work, as well as by work by folks like James Crippen (UBC) and Jeff Leer (UAF).