
PAPERS

Published in Philosophical Studies

UTTERING MOOREAN SENTENCES AND THE PRAGMATICS OF BELIEF REPORTS

Abstract

Moore supposedly discovered that there are sentences of a certain form that, though they can be true, no rational human being can sincerely and truly utter. MC and MO are particular instances:

MC: “It is raining and I believe that it is not raining”

MO: “It is raining and I don’t believe that it is raining”

 

In this paper, I show that there are sentences of the same form as MC and MO that can be sincerely and truly uttered by rational agents. I call sentences of the same form as MC and MO “Moorean Sentences”. In Part II, I go over a standard argument for why such sentences cannot be sincerely and truly uttered, and I explain why this argument is unsound. The explanation relies on the context-sensitivity of belief reports: the premises of the argument are not all true in the same context. I then employ a general theory of belief reports that incorporates guises to explain why my examples are sincerely utterable while Moore’s sentences typically are not. The answer turns on the suggestion that belief reports carry a hidden quantifier over guises, where the domain of quantification is determined by context. I conclude with general lessons about Moore’s Paradox and the supposed limits of first-person belief reports.
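As a rough logical-form sketch of that suggestion (my own formalization; the abstract does not fix the choice of quantifier, so the existential reading below is an assumption), a report of the form “a believes that p” could be rendered as

\[
\llbracket a \text{ believes that } p \rrbracket^{c} = 1 \iff \exists g \in D_{c}\,\mathrm{Bel}(a, p, g),
\]

where \(D_{c}\) is the contextually supplied domain of guises and \(\mathrm{Bel}(a, p, g)\) says that a believes p under guise g.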

Published in Erkenntnis

Knowing Your Way Out of St. Petersburg: An Exploration of "Knowledge-First" Decision Theory

Abstract

The St. Petersburg Game is a gamble that has infinite expected value (according to standard decision theories) but is intuitively worth less than $25. In this paper I consider a new solution to the St. Petersburg Paradox, based on the idea that you should act on what you know. While the possibility of a "knowledge-first" decision theory is often floated, the resulting theory is rarely explored in much detail. In the rest of the paper I survey some of the costs and benefits of the theory. I conclude that the resulting theory is a serious contender for a solution to the St. Petersburg Paradox and should be taken more seriously than it has been.
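For reference, the textbook calculation behind the paradox (standard material, not specific to this paper's formulation): the game pays 2^n dollars if the coin first lands heads on toss n, so the expected value is

\[
\mathbb{E}[\text{payout}] = \sum_{n=1}^{\infty} \frac{1}{2^{n}} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,
\]

which is why standard expected-value maximization licenses paying any finite price to play.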

Published in Ergo

Prefaces, Knowledge, and Questions

Abstract

The Preface Paradox is often discussed for its implications for rational belief. Much less discussed is a variant of the Preface Paradox for knowledge. In this paper, I argue that the most plausible closure-friendly resolution to the Preface Paradox for knowledge is to say that in any given context, we do not know much. I call this view “Socraticism”.

I argue that Socraticism is the most plausible view on two counts: (1) it is compatible with the claim that most of our knowledge ascriptions are true, and (2) provided that (1) is true, the costs of accepting Socraticism are much lower than the costs of accepting any other resolution to the paradox.

I argue for (1) in Part II by developing a question-sensitive contextualist model of knowledge that shows how Socraticism is compatible with the claim that most of our knowledge ascriptions are true. I also show how this contextualist model achieves this result where other contextualist models fail.

In Part III, I consider other closure-friendly solutions to the paradox and show how accepting those solutions forces us to give up a number of plausible epistemic principles.

Published in Erkenntnis

MORAL FACTS DO NOT SUPERVENE ON QUALITATIVE FACTS

Abstract

It is often taken as a given that if two people, x and y, are qualitatively identical and have committed qualitatively identical actions, then it cannot be the case that one has done something wrong whereas the other has not. That is to say, if x and y differ in their moral status, then it must be because x and y are qualitatively different, and not simply because x is identical to x and not identical to y. In this fictional dialogue between Socrates and Cantor involving infinitely many qualitatively identical agents, this assumption is challenged.

Published in Philosophical Studies

Group Prioritarianism: Why AI Should Not Replace Humanity

If a future AI system can enjoy far more well-being per unit of resource than a human can, what would be the best way to allocate resources between such future AI systems and our future descendants? It is obvious that, on total utilitarianism, one should give everything to the AI. However, it turns out that every other welfarist axiology on the market gives the same recommendation. Without resorting to non-consequentialist normative theories, which suggest that we ought not always to create the world with the most value, or non-welfarist theories, which tell us that the best world may not be the world with the most welfare, we propose a new theory that justifies the survival of humanity in the face of overwhelming AI well-being. We call this new theory “Group Prioritarianism”.
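As a toy illustration of the total-utilitarian calculation (the symbols and setup here are mine, not the paper's): suppose a stock of resources R can be divided, with each unit producing w_A welfare for the AI and w_H welfare for humans, where w_A > w_H. Giving the AI a share x of the resources yields total welfare

\[
W(x) = xR\,w_{A} + (1-x)R\,w_{H} = R\,w_{H} + xR\,(w_{A} - w_{H}),
\]

which is strictly increasing in x whenever w_A > w_H, so total welfare is maximized at x = 1: everything goes to the AI.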

Forthcoming in Noûs

Paradoxes of Infinite Aggregation (with Jeff Russell)

There are infinitely many ways the world might be, and there may well be infinitely many people in it. These facts raise moral paradoxes. We explore a conflict between two highly attractive principles: a Pareto principle that says that what is better for everyone is better overall, and a statewise dominance principle that says that what is sure to turn out better is better on balance. We refine and generalize this paradox, showing that the problem is faced by many theories of interpersonal aggregation besides utilitarianism, and by many decision theories besides expected value theory. Considering the range of consistent responses, we find all of them to be quite radical.

Under Review

Recurrence, Rational Choice, and The Simulation Hypothesis (with Simon Goldstein)

According to the doctrine of recurrence, we are reborn after our apparent death to live our life again. This paper develops a new doctrine of recurrence. We make three main claims. First, we argue that the simulation hypothesis increases the chance that we will recur. Second, we argue that the chance of recurrence affects rational choice, depending on the shape of your utility function. In particular, we show that one kind of recurrence will be action-relevant if and only if your preferences between actions could shift when all outcomes are scaled by a common factor. Third, we argue that recurrence can affect rational choice even if you do not survive recurrence.
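As a toy illustration of that scaling condition (my own numbers, not an example from the paper): take the bounded utility function u(x) = min(x, 10). A sure payoff of 8 beats a 50/50 gamble between 0 and 20, but scale every outcome by 1/10 and the ranking flips:

\[
u(8) = 8 > \tfrac{1}{2}u(0) + \tfrac{1}{2}u(20) = 5,
\qquad
u(0.8) = 0.8 < \tfrac{1}{2}u(0) + \tfrac{1}{2}u(2) = 1.
\]

Preferences that shift under a common scaling of outcomes in this way are the kind the paper's condition picks out as making recurrence action-relevant.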

Under Review

KNOWING ENOUGH NOT TO BE A FANATIC

In this paper, I develop a question-sensitive contextualist model of knowledge and embed it into a “knowledge-first decision theory”. The result is a general decision theory that avoids the paradoxes associated with Pascal's Mugger, the St. Petersburg Game, and the Pasadena Game.

Under Review

A paper on AI Safety and the Simulation Hypothesis (with Simon Goldstein)

The paper introduces and critically explores the idea that we are safer from AI risk if the simulation hypothesis is true. 

In Progress

THE LOTTERY, THE LAW, AND THE LIMITS OF KNOWLEDGE

Abstract

In a famous case described by Nesson, we have purely statistical evidence that 24 out of 25 (96%) prisoners in a particular prison conspired to kill a prison guard. In orthodox legal practice, this evidence alone is insufficient to convict any particular prisoner, even though one can convict a particular prisoner on the basis of eyewitness testimony when the probability of guilt conditional on the testimony is only 95%. One clean response that can justify this difference in treatment is that the lottery structure of the first case precludes one from knowing of any particular prisoner that they are guilty, while the structure of the second case does not.

In this paper, I argue that this response, though correct in broad outline, moves much too fast. I argue that there are contexts in which a lottery structure does not preclude knowledge, and so the challenge is to identify what is so special about legal contexts that precludes knowledge when a lottery is involved. For some mainstream brands of contextualism, the answer is surprisingly elusive. To answer this question, I develop a formal model of what I call “Question-Sensitive Contextualism”. On this model, the key context-sensitive parameter is a question, and I argue that what makes legal contexts special is that the value of fairness constrains what kinds of questions must be active.

Finally, I show how this model delivers the right results in Nesson's example, while also yielding some surprising recommendations in more contested cases.

In Progress

A PAPER ON THE ETHICS OF SELF-DRIVING CARS (WITH DAVID CLARK)

When a self-driving car malfunctions and crashes, who should bear the cost? Much of the literature on this issue assumes that those who bear the cost should be those responsible for the crash, so that the question of accountability reduces to a question of responsibility. We think this is a mistake: it assumes a particular view of harm compensation that we reject. Instead, we argue that when a person is harmed in a process that benefits another, it is the beneficiary who owes compensation. Likewise, given the massive benefits an influx of self-driving cars can bring to society, it is the society being benefited that owes compensation (through taxpayer money) to those harmed by self-driving cars.
