PAPERS
Published in Philosophical Studies
UTTERING MOOREAN SENTENCES AND THE PRAGMATICS OF BELIEF REPORTS
Abstract
Moore supposedly discovered that there are sentences of a certain form that, though they can be true, cannot be sincerely and truly uttered by any rational human being. MC and MO are two such instances:
MC: "It is raining and I believe that it is not raining"
MO: "It is raining and I don't believe that it is raining"
In this paper, I show that there are sentences of the same form as MC and MO that can be sincerely and truly uttered by rational agents. I call sentences of this form "Moorean Sentences". In Part II, I go over a standard argument for why Moorean Sentences cannot be sincerely and truly uttered, and I explain why this argument is unsound: given the context-sensitivity of belief reports, the premises of the argument are not all true in the same context. I then employ a general theory of belief reports that incorporates guises to explain why my examples are sincerely utterable while Moore's sentences typically are not. The answer turns on the suggestion that belief reports carry a hidden quantifier over guises, where the domain of quantification is determined by context. I conclude with general lessons about Moore's Paradox and the supposed limits of first-person belief reports.
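To give a rough sense of the proposal (a schematic rendering of my own, not the paper's official semantics), the truth conditions of a belief report can be written with a contextually restricted existential quantifier over guises:

\[
[\![\text{$S$ believes that } p]\!]^{c} = \text{True} \iff \exists g \in D_c : \mathrm{Bel}(S, p, g)
\]

On this schema, "I don't believe that it is raining" can be true in a context whose guise domain $D_c$ excludes the guise under which the speaker does accept that it is raining, and this is what opens the door to sincerely utterable Moorean Sentences.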
Published in Erkenntnis
Knowing Your Way Out of St. Petersburg: An Exploration of "Knowledge-First" Decision Theory
Abstract
The St. Petersburg Game is a gamble that has infinite expected value (according to standard decision theories) but is intuitively worth less than $25. In this paper I consider a new solution to the St. Petersburg Paradox, based on the idea that you should act on what you know. While the possibility of a "knowledge-first" decision theory is often floated, the resulting theory is rarely explored in much detail. I survey some of the costs and benefits of the theory in the rest of the paper. I conclude that the resulting theory is a fit contender for a solution to the St. Petersburg Paradox and should be taken more seriously than it has been.
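For readers unfamiliar with the game, on one standard presentation a fair coin is tossed until it first lands heads, and the gamble pays $2^n$ dollars if the first heads occurs on toss $n$. The orthodox expected value calculation (a textbook derivation, not a quotation from the paper) then diverges:

\[
\mathbb{E}[\text{St. Petersburg}] = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty
\]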
Published in Ergo
Prefaces, Knowledge, and Questions
Abstract
The Preface Paradox is often discussed for its implications for rational belief. Much less discussed is a variant of the Preface Paradox for knowledge. In this paper, I argue that the most plausible closure-friendly resolution to the Preface Paradox for Knowledge is to say that in any given context, we do not know much. I call this view "Socraticism".
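For orientation, here is one standard way to set up the knowledge variant (my reconstruction; the paper's own setup may differ). An author knows each individual claim $p_1, \ldots, p_n$ in her book, but on the basis of the preface she also seems to know that not all of them are true. Closure under conjunction then yields a clash with factivity:

\[
K p_1, \ldots, K p_n \;\Rightarrow\; K(p_1 \wedge \cdots \wedge p_n), \qquad \text{yet also} \quad K\big(\neg(p_1 \wedge \cdots \wedge p_n)\big)
\]

Since knowledge is factive, these two knowledge claims cannot both be true, so something has to give.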
I argue that Socraticism is the most plausible view on two counts: (1) the view is compatible with the claim that most of our knowledge ascriptions are true, and (2) provided that (1) holds, the costs of accepting Socraticism are much lower than the costs of accepting any other resolution to the Paradox.
I argue for (1) in Part II by developing a question-sensitive contextualist model of knowledge that shows how Socraticism is compatible with the claim that most of our knowledge ascriptions are true. I also show how this contextualist model achieves this result where other contextualist models fail.
I then consider other closure-friendly solutions to the Paradox in Part III and show how accepting those solutions forces us to give up a number of plausible epistemic principles.
Published in Erkenntnis
MORAL FACTS DO NOT SUPERVENE ON QUALITATIVE FACTS
Abstract
It is often taken as given that if two people, x and y, are qualitatively identical and have performed qualitatively identical actions, then it cannot be the case that one has done something wrong whereas the other has not. That is to say, if x and y differ in their moral status, then it must be because x and y are qualitatively different, and not simply because x is identical to x and not identical to y. This assumption is challenged in a fictional dialogue between Socrates and Cantor involving infinitely many qualitatively identical agents.
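In schematic form (my notation, not the dialogue's), the supervenience claim under challenge is:

\[
\forall x \forall y \,\big(\mathrm{QualIdentical}(x, y) \rightarrow (\mathrm{Wrong}(x) \leftrightarrow \mathrm{Wrong}(y))\big)
\]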
Published in Philosophical Studies
Group Prioritarianism: Why AI Should Not Replace Humanity
Abstract
If a future AI system can enjoy far more well-being than a human per unit of resource, what would be the best way to allocate resources between these future AI systems and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market gives this same recommendation. Without resorting to non-consequentialist normative theories that suggest that we ought not always to create the world with the most value, or non-welfarist theories that tell us that the best world may not be the world with the most welfare, we propose a new theory that justifies the survival of humanity in the face of overwhelming AI well-being. We call this new theory "Group Prioritarianism".
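As a toy illustration of the total-utilitarian verdict (my numbers, not the paper's), suppose each unit of resource generates 100 units of well-being for the AI and 1 unit for a human. With $R$ units to allocate and $a$ of them going to the AI, total well-being is

\[
V(a) = 100a + (R - a) = 99a + R,
\]

which is strictly increasing in $a$, so the total view recommends giving every unit to the AI.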
Under Review
Recurrence, Rational Choice, and The Simulation Hypothesis (with Simon Goldstein)
According to the doctrine of recurrence, we are reborn after our apparent death to live our life again. This paper develops a new doctrine of recurrence. We make three main claims. First, we argue that the simulation hypothesis increases the chance that we will recur. Second, we argue that the chance of recurrence affects rational choice, depending on the shape of your utility function. In particular, we show that one kind of recurrence will be action-relevant if and only if your preferences between actions could shift when all outcomes are scaled by a common factor. Third, we argue that recurrence can affect rational choice even if you do not survive recurrence.
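To illustrate the scaling condition (my example, using a bounded utility function the authors need not endorse), let $u(x) = 1 - e^{-x}$ and compare a safe option paying 1 with a 50/50 gamble paying 0 or 3. Before scaling, the safe option has higher expected utility; after scaling all outcomes by 0.1, the gamble does:

\[
u(1) \approx 0.632 > \tfrac{1}{2}u(3) \approx 0.475, \qquad u(0.1) \approx 0.095 < \tfrac{1}{2}u(0.3) \approx 0.130
\]

By contrast, with a utility of the form $u(x) = x^{\alpha}$, scaling every outcome by $k > 0$ multiplies every expected utility by $k^{\alpha}$, so preferences between actions never shift and this kind of recurrence would be action-irrelevant.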
Under Review
KNOWING ENOUGH NOT TO BE A FANATIC
In this paper, I develop a question-sensitive contextualist model of knowledge and embed it into a "knowledge-first decision theory". The result is a general decision theory that avoids the paradoxes involving Pascal's Mugger, the St. Petersburg Game, and the Pasadena Game.
Under Review
A PAPER ON PARADOXES INVOLVING INFINITE POPULATIONS (WITH JEFF RUSSELL)
In this paper, we introduce a series of paradoxes involving gambles over infinitely many states that affect infinitely many individuals. We show that there is a deep inconsistency between Ex Ante Pareto and Statewise Dominance, given very weak assumptions about compensating tradeoffs.
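For orientation, here are standard formulations of the two principles (the paper's official statements may differ), where $A$ and $B$ are gambles, $u_i$ is individual $i$'s utility, and $A(s)$ is the outcome of $A$ in state $s$:

\[
\textbf{Ex Ante Pareto:} \quad \big(\forall i : \mathbb{E}_A[u_i] \ge \mathbb{E}_B[u_i]\big) \wedge \big(\exists i : \mathbb{E}_A[u_i] > \mathbb{E}_B[u_i]\big) \Rightarrow A \succ B
\]
\[
\textbf{Statewise Dominance:} \quad \big(\forall s : A(s) \succeq B(s)\big) \wedge \big(\exists s : A(s) \succ B(s)\big) \Rightarrow A \succ B
\]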
In Progress
Safety and Simulation
A ``Treacherous Turn" is when an Artificial General Intelligence (AGI) acts safe when it believes it's being tested in a simulation, but then acts on its true (and potentially very dangerous) goals once it has been let out of the test simulation. It is almost universally assumed in the literature that if an AGI is capable of realizing it is in a test simulation, and it has dangerous goals, and we let it out of the test because we cannot tell, then doom is the default scenario. In this paper, I argue that a serious consideration has been left out. In order to assess the likelihood of doom, we need to assess how likely a dangerous AGI is to believe that our universe is a simulation. Given two plausible assumption, Simulation Symmetry and Simulation Reasonableness, we can give an argument that doom is not the default scenario. In the course of this paper, I do not mean to suggest that this argument should make us sloppy in AI safety work; rather, I argue that this argument sheds light on what other factors we need to consider in order to estimate the likelihood of doom more accurately. I also show that this argument points to another path of safety research that is separate from (but supplements) capabilities, alignment, and interpretability research. Ultimately, I conclude that a crucial part of AI safety research should include Simulation Research.
In Progress
THE LOTTERY, THE LAW, AND THE LIMITS OF KNOWLEDGE
Abstract
In a famous case described by Nesson, we have purely statistical evidence that 24 out of 25 (96%) prisoners in a particular prison conspired to kill a prison guard. In orthodox legal practice, this evidence alone is insufficient to convict any particular prisoner, even though one can convict a particular prisoner on the basis of eyewitness testimony when the probability of guilt conditional on the testimony is only 95%. One clean response that can justify this difference in attitudes is that the lottery structure of the first case precludes one from knowing, of any particular prisoner, that they are guilty, while the structure of the second case does not.
In this paper, I argue that this response, though correct in broad outline, moves much too fast. There are contexts in which a lottery structure does not preclude knowledge, and so the challenge is to find out what is so special about legal contexts that precludes knowledge when a lottery structure is involved. For some mainstream brands of contextualism, the answer is surprisingly elusive. To answer this question, I develop a formal model of what I call "Question-Sensitive Contextualism". On this model, the key context-sensitive parameter is a question, and I argue that what makes legal contexts special is that the value of fairness constrains what kinds of questions must be active.
Finally, I show how this model gets us the right results with regard to Nesson's example, but also how it delivers some surprising recommendations in more contested cases.
In Progress
A PAPER ON THE ETHICS OF SELF-DRIVING CARS (WITH DAVID CLARK)
When a self-driving car malfunctions and crashes, who pays for the cost? Much of the literature on this issue assumes that the ones to bear the cost are the ones responsible for the crash, so that the question of accountability reduces to a question of responsibility. We think this is a mistake: it assumes a particular view of harm compensation that we reject. Instead, we argue that when a person is harmed in a process that benefits another, it is the beneficiary who owes compensation. Likewise, given the massive benefits an influx of self-driving cars can bring to society, it is the society being benefitted that owes compensation (through taxpayer money) to those harmed by self-driving cars.