Stephanie Fielding

What was the broad question that you most wanted to get an answer to during your time in the program?

The question I entered the program with was: How can I resurrect my Tribal language? The last speaker had died nearly 100 years before. Although she left some writings in Mohegan, they were not enough to really start anyone speaking.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

The Mohegan language is still unspoken, but the number of language resources continues to grow. Our grammar has been defined and the lexicon has grown to about 1100 words…not counting the inflections. A dictionary has been developed and published, and we have a website with podcasts that contain short lessons on different areas of the language, as well as lists of things with accompanying audio files. A series of 15 lessons covering the basics of the language has been developed and presented to a core group of students at Mohegan and in our virtual classrooms in the Shinnecock, Unkechaug and Montauk communities on Long Island, and to their members who live as far away as Colorado and Saskatchewan. The on-line classes have also served far-flung Mohegans in Wisconsin, Georgia, New York, Massachusetts, Rhode Island and Florida. But still people aren’t speaking.

My enthusiasm for the language continues. I work on it daily, but the difficulty of getting people to use the language continues to plague me. The dictionaries on my desk from related languages are many. My thesis, which includes sound changes from the Wampanoag and Delaware languages, is dog-eared. There is a file on my computer with about 3,000 words that have not yet made it to round two of the process to get into the dictionary. The morphology of the language is becoming clearer. And the culture embedded in the language is starting to show itself more robustly. But still no one speaks.

The Chief, the Medicine Woman and the Pipe Carrier can all rattle off prayers so smoothly that one might think that Mohegan is a spoken language. They make me proud. But had I not translated their prayers for them, I wouldn’t know what they were saying. Not only does no one speak the language, no one hears the language…not even me.

The realization that I have limited years in which to accomplish this goal presses on me, so I occasionally deposit everything that I have, in hard copy and digital form, in the Mohegan Archives for safekeeping. If my dream isn’t realized before I die, someone in another generation will have all of my work available to them, and it won’t be just four small diaries. It will include the database and the research on the lexicon, with thousands of words translated into Mohegan. It will include a grammar, lessons and the plans for an immersion school.

Recently, I learned a language game that claims to get a person fluent in a very narrow, but important, portion of the language. From this basic fluency other vocabulary and grammar can be added. I don’t understand it fully, nor am I sure how to institute this into the workings of the Tribe, but I’m working on it. I didn’t learn this at MIT, but it might be the missing link between learning a language and learning about a language.

Terry Langendoen

I don’t recall having had a “broad question” that I wanted an answer to while I was a grad student in the program. I had had Noam as my undergraduate advisor, and he set me a problem for my undergraduate thesis: to design a finite transducer that is strongly equivalent to a context-free grammar up to any fixed finite bound on self embedding (as center embedding was then called). As I came to realize some years later, the solution in my thesis (completed in May 1961) was not correct, but that question did not occupy me during my grad student days.

I discovered in the early 1970s that even to get strong equivalence for unbounded right and left embedding (or branching) required the use of readjustment rules of the sort Morris had proposed in connection with the intonation pattern of right-branching relative clauses as in “This is the cat that chased the rat …” (to avoid having to match an unbounded number of left and right brackets), and published it in LI in 1975. But that account was not strictly speaking correct even for right and left branching, since the vacuous movements involved didn’t leave traces.

In the late 1990s I made another attempt, and submitted an algorithm for constructing a strongly equivalent FST for an arbitrary binary-branching CFG with up to a fixed bound on center embedding to the website that was set up to celebrate Noam’s 70th birthday, and in 2004 submitted a revised version to Language, which was finally published in 2008, the delay resulting from dithering on my part, not the fault of the editor or the reviewers. In any event, I’d be happy to be part of a discussion of the problem of center embedding (or recursion generally), should that be of interest to a sufficient number of others.
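A minimal sketch of the contrast at stake (this is not Langendoen’s construction; the toy pattern, the word lists, and the function name accepts_bounded are invented for illustration): strings of the form (“the” N)^k V^k mimic center-embedded relative clauses, and once the depth k is capped, a recognizer needs only a counter that never exceeds the cap, which amounts to finitely many states. The unbounded pattern, by contrast, requires matching arbitrarily many nested dependencies and so lies beyond finite-state power.

    # Toy recognizer for the invented fragment described above.
    NOUNS = {"rat", "cat", "dog", "boy"}
    VERBS = {"died", "chased", "bit", "saw"}

    def accepts_bounded(tokens, max_depth=3):
        """Recognize ("the" N)^k V^k for 1 <= k <= max_depth. The input is
        rejected as soon as the counter passes max_depth, so only finitely
        many configurations are reachable -- i.e. this is finite-state."""
        i, depth = 0, 0
        # read the nested subjects: "the rat the cat the dog ..."
        while i + 1 < len(tokens) and tokens[i] == "the" and tokens[i + 1] in NOUNS:
            depth += 1
            i += 2
            if depth > max_depth:        # beyond the fixed bound: reject
                return False
        if depth == 0:
            return False
        # now expect exactly `depth` verbs
        for _ in range(depth):
            if i < len(tokens) and tokens[i] in VERBS:
                i += 1
            else:
                return False
        return i == len(tokens)

    print(accepts_bounded("the rat the cat chased died".split()))         # True  (depth 2)
    print(accepts_bounded(
        "the rat the cat the dog the boy saw bit chased died".split()))   # False (depth 4 > bound)

Removing the max_depth check turns the bounded counter into an unbounded one, which is exactly the step that takes the recognizer out of finite-state territory.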

Avery Andrews

What was the broad question that you most wanted to get an answer to during your time in the program?

My original question was probably something along the lines of:

“Take a finite number of rules and get an infinite number of utterances with different meanings. What a useful something-for-not-very-much-deal. How does it work?”
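A toy illustration of that something-for-not-very-much deal (the grammar fragment and the code are invented for this note, not anything from the program): a handful of rewrite rules, two of them recursive, already yield an unbounded number of distinct sentences. The sketch below simply counts how many distinct strings become derivable as the derivations are allowed to nest more deeply.

    import itertools

    # A tiny context-free grammar; "NP -> the idea that S" and
    # "VP -> knows that S" are the recursive rules.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["Alex"], ["Sam"], ["the idea that", "S"]],
        "VP": [["sleeps"], ["knows that", "S"]],
    }

    def expand(symbol, depth):
        """Yield every string derivable from `symbol` with derivations
        nested at most `depth` levels deep."""
        if symbol not in GRAMMAR:          # terminal: just emit the word(s)
            yield symbol
            return
        if depth == 0:                     # nesting budget exhausted
            return
        for rhs in GRAMMAR[symbol]:
            for parts in itertools.product(*(expand(s, depth - 1) for s in rhs)):
                yield " ".join(parts)

    for d in (2, 4, 6):
        print(d, len(set(expand("S", d))))   # 2, 12, 182 ... and it keeps growing

The rule inventory never changes; only the permitted nesting depth does, and the number of distinct sentences grows without bound.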

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

Current status: much has been learned, plenty more to go.

Ur Shlonsky

What was the broad question that you most wanted to get an answer to during your time in the program?

I didn’t actually formulate it in these terms when I came to MIT, but in retrospect, it was language diversity and formal variation that were the main issues that drove my curiosity. They still do.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

I think Principles & Parameters is still the best theory for conceptualising language variation, although, paradoxically, we have a less clear idea today of what those principles are and in what terms parameters should be formulated.

One idea that has gained ground, and which I think is right, is that ‘parameters’ are keyed to properties of functional heads or, more precisely, to the features that constitute them. So one might ask what features can do.

We know that features can be null or overt, attract a category or not, be interpretable or not, merge with another feature or not and perhaps a few other things. The options are limited.

What I suspect gives rise to language diversity is that these options have to be multiplied by the number of features which UG makes available. Here we arrive at the following question: Of the properties that enter into human thought and belief systems, which ones are represented as grammatical features? This is an interface question par excellence, in my judgement, and should be high on our agenda today.
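One way to make the multiplication concrete (the numbers and the independence assumption are mine, purely for illustration): if each feature independently fixes the four binary options just mentioned (null vs. overt, attracting vs. non-attracting, interpretable vs. uninterpretable, merged with another feature or not), that is 2^4 = 16 settings per feature, and an inventory of F such features yields on the order of 16^F overall settings.

    from itertools import product

    # Hypothetical per-feature options, echoing the four binary choices above;
    # treating them as independent is an illustrative simplification.
    OPTIONS = {
        "realization":      ("null", "overt"),
        "attraction":       ("attracts", "does not attract"),
        "interpretability": ("interpretable", "uninterpretable"),
        "feature merger":   ("merged", "unmerged"),
    }

    per_feature = list(product(*OPTIONS.values()))
    print(len(per_feature))                  # 16 settings for a single feature

    for F in (5, 10, 30):                    # F = size of the feature inventory
        print(F, len(per_feature) ** F)      # the space grows as 16**F

Even a modest feature inventory therefore yields an enormous space of possible grammars, which is one way of seeing how such limited per-feature options could still underwrite wide cross-linguistic diversity.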

Tom Wasow

What was the broad question that you most wanted to get an answer to during your time in the program?

This question presupposes that I knew much more than I actually did when I started graduate school. I had majored in math as an undergraduate at a college with virtually no linguistics. Just about all I knew about linguistics was that I had read Syntactic Structures, and thought it was an exciting application of mathematical methods. I went to graduate school wanting exposure to more of that kind of work, and to learn to do it myself. I didn’t have ideas about language or linguistics that were developed enough to be formulated as questions.

What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.

The sort of rigor that appealed to me in Syntactic Structures had largely disappeared from generative grammar even before I started graduate school (in 1968), although some people were still making an effort to write explicit grammars. For example, at the end of my first syntax class, Haj Ross passed out a handout with all of the rules we had discussed in the course written out fairly formally. And in the late 60s and early 70s there were a few linguists (most notably Stanley Peters) still pursuing what I took to be the Syntactic Structures program, namely formalizing the grammars of languages and examining their properties as formal systems. But those were the exceptions. The most influential generative research at the time was formulated quite inexplicitly, and questions of the formal properties of grammars were considered of marginal interest. Instead, the focus had shifted to language as a property of mind, one that could shed light on human nature. I was quickly sold on that rather different research program, and I still think the most interesting linguistic questions are psycholinguistic ones.

Great strides have been made in the past 40 years, both in terms of developing precise formal models of aspects of natural language and in terms of understanding the unconscious mental processes involved in language acquisition, production, and comprehension. But I think most of that progress has been made by investigators working outside the mainstream of generative grammar. That is mainly because of three methodological shortcomings of most work in generative grammar:

Inexplicitness: Technical terminology is rarely defined precisely, and grammars are virtually never formulated precisely.

Questionable data: Usage data and laboratory experiments have had almost no impact on generative theories, which have been based almost entirely on introspective judgments, casually collected.

Avoidance of quantitative models: Generative grammarians put forward categorical hypotheses to account for phenomena that appear to be highly gradient.

In my opinion, the progress that has been achieved in developing precise models of language as a property of mind has come largely from computational linguists, psycholinguists, and sociolinguists, who do not share these methodological limitations. And over the course of my 40+ years in the field, the influence of generative grammar on these other subfields has steadily decreased.