What was the broad question that you most wanted to get an answer to during your time in the program?
The question I was hoping to answer when I got to MIT was: when we make pragmatic inferences, what is the principle that tells us when to stop thinking? Given an utterance in a context, language users systematically and reliably come to infer the truth or falsity of various other sentences or propositions, as happens with focus constructions, implicatures, and presupposition accommodation. Since the space of inferences we make is bounded in seemingly non-arbitrary ways, a theory of such inferential capacities requires a theory of these bounds, that is, a theory that tells us when to stop thinking. I was hoping to work out a theory of relevance that would provide the required stopping rule: you consider all those propositions that are relevant, and nothing else.
Seminars and meetings with Kai von Fintel, Danny Fox, and Irene Heim taught me that this would not be trivial, partly because of the so-called ‘symmetry problem’: as soon as we write down some very natural axioms about relevance, we can show that there are sentences predicted to be relevant that nevertheless never enter into pragmatic reasoning. If the axioms are right, relevance alone will not give us the bounds we need; something else is needed to put a frame around our pragmatic reasoning.
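To make the symmetry problem concrete, here is a toy illustration in Python (my own sketch, with invented world and proposition encodings, not anyone's published formalization). Take the classic case of asserting ‘some’: if relevance is closed under negation and conjunction, then whenever ‘all’ is relevant, so is its symmetric counterpart ‘some but not all’. Negating either alternative alone is an arbitrary choice, and negating both contradicts the assertion, so relevance by itself predicts no implicature at all:

```python
# Toy model: worlds record how many of three problems John solved.
worlds = range(4)

# Propositions as the sets of worlds in which they are true.
some = {w for w in worlds if w >= 1}           # "John solved some of the problems"
all_ = {w for w in worlds if w == 3}           # "John solved all of the problems"
some_not_all = some - all_                     # "John solved some but not all"

# Negating the alternative "all" strengthens "some" to the attested
# implicature "some but not all":
print(some - all_)                  # {1, 2}

# But the symmetric alternative is just as relevant; negating it instead
# strengthens "some" to the unattested "all":
print(some - some_not_all)          # {3}

# And negating both symmetric alternatives at once contradicts the assertion:
print(some - all_ - some_not_all)   # set()
```

The attested inference is of course the first strengthening; the puzzle is what licenses negating ‘all’ but not its symmetric twin.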
What is the current status of this question? Has it been answered? Did it turn out to be an ill-conceived question? If it’s a meaningful question as yet unanswered, please tell us what you think the path to an answer might be, or what obstacles make it a hard question.
The current state of the art suggests that the language faculty itself provides the required frame. More specifically, the space of potential inferences is mechanically derived by the grammar, in a context-independent way, by executing a restricted set of structure-modification operations on the asserted sentence. This provides an upper bound on what may be inferred. Under this architecture, the role of relevance is reduced to merely selecting some subset of these potential inferences for purposes of pragmatic reasoning. As such, it has no chance to create symmetry problems.
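As a rough sketch of how such an architecture might look (a toy encoding of my own, with made-up lexical scales, not an implemented system), alternative generation can be modeled as blind substitution of scale-mates into the asserted sentence; context never adds to the resulting set, it only selects from it:

```python
# Toy lexical scales: each word maps to its substitutable scale-mates.
# (These entries are invented for illustration.)
SCALES = {
    "some": {"all"},
    "or": {"and"},
    "warm": {"hot"},
}

def alternatives(sentence):
    """Generate alternatives by replacing one word at a time with a
    scale-mate. The set is derived from the structure of the asserted
    sentence alone; context plays no role in generating it."""
    words = sentence.split()
    alts = set()
    for i, word in enumerate(words):
        for mate in SCALES.get(word, ()):
            alts.add(" ".join(words[:i] + [mate] + words[i + 1:]))
    return alts

print(alternatives("John solved some of the problems"))
# {'John solved all of the problems'}
# Note what is absent: nothing here can generate the symmetric
# "John solved some but not all of the problems".
```

Because ‘some but not all’ cannot be obtained by modifying material in the asserted sentence, the symmetric alternative never enters the candidate set, and the symmetry problem does not arise.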
What currently interests me is the way sets of potential inferences are generated for different pragmatic tasks, as well as the grammar-context interface principles that determine which subsets of these potential inferences will become actual inferences. The implicature system makes use of one set of potential inferences, the accommodation system makes use of another, Maximize Presupposition reasoning another, and so on. How, if at all, are these sets related? What are the general principles from which these sets are generated? Given such sets, how does context decide which subsets to use? What are the mechanisms that convert these sets of potential inferences to actual inferences? Why does UG provide these sets, and not some others?