causation

A Model-Invariant Theory of Causation

To Appear in The Philosophical Review.

I provide a theory of causation within the causal modeling framework. In contrast to most of its predecessors, this theory is model-invariant in the following sense: if the theory says that C caused (didn’t cause) E in a causal model, M, then it will continue to say that C caused (didn’t cause) E once we’ve removed an inessential variable from M. I suggest that, if this theory is true, then we should understand a cause as something which transmits deviant or non-inertial behavior to its effect.

A Theory of Structural Determination

2016. Philosophical Studies 173 (1): 159–186.

While structural equations modeling is increasingly used in philosophical theorizing about causation, it remains unclear what it takes for a particular structural equations model to be correct. To the extent that this issue has been addressed, the consensus appears to be that what it takes is for a certain family of causal counterfactuals to be true. I argue that this account faces difficulties in securing the independent manipulability of the structural determination relations represented in a correct structural equations model. I then offer an alternate understanding of structural determination, and I demonstrate that this theory guarantees that structural determination relations are independently manipulable. The account provides a straightforward way of understanding hypothetical interventions, as well as a criterion for distinguishing hypothetical changes in the values of variables which constitute interventions from those which do not. It additionally affords a semantics for causal counterfactual conditionals which is able to yield a clean solution to a problem case for the standard ‘closest possible world’ semantics.

The Emergence of Causation

2015. The Journal of Philosophy 112 (6): 261–308.

Several philosophers have embraced the view that high-level events—events like Zimbabwe’s monetary policy and its hyper-inflation—are causally related if their corresponding low-level, fundamental physical events are causally related. I dub the view which denies this, without denying that high-level events are ever causally related, ‘causal emergentism’. Several extant philosophical theories of causality entail causal emergentism, while others are inconsistent with the thesis. I illustrate this with David Lewis’s two theories of causation, one of which entails causal emergentism, the other of which entails its negation. I then argue for causal emergentism on the grounds that it provides the only adequate means of squaring the apparent plenitude of causal relations between low-level events with the apparent scarcity of causal relations between high-level events. This tension between the apparent abundance of low-level causation and the apparent scarcity of high-level causation has been noted before. However, it has been thought that various theses about the semantics or the pragmatics of causal claims could be used to ameliorate the tension without going in for causal emergentism. I argue that none of the suggested semantic or pragmatic strategies meet with success, and recommend emergentist theories of causality in their stead. As Lewis’s 1973 account illustrates, causal emergentism is consistent with the thesis that all facts reduce to microphysical facts.

chance

A Subjectivist’s Guide to Deterministic Chance

2019. Synthese.

I present an account of deterministic chance which builds upon the physico-mathematical approach to theorizing about deterministic chance known as ‘the method of arbitrary functions’. This approach promisingly yields deterministic probabilities which align with what we take the chances to be—it tells us that there is approximately a 1/2 probability of a spun roulette wheel stopping on black, and approximately a 1/2 probability of a flipped coin landing heads up—but it requires some probabilistic materials to work with. I contend that the right probabilistic materials are found in reasonable initial credence distributions. I note that, with some normative assumptions, the resulting account entails that deterministic chances obey a variant of Lewis’s ‘principal principle’. I additionally argue that deterministic chances, so understood, are capable of explaining long-run frequencies.
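
To give a rough sense of how the method of arbitrary functions delivers these values, here is a schematic statement (the notation is mine, not the paper’s). Let f be a probability density over the initial conditions of the spin, and let D be the deterministic dynamics taking an initial spin speed ω to an outcome. Then

\[
\Pr(\text{black}) \;=\; \int_{\{\omega \,:\, D(\omega) = \text{black}\}} f(\omega)\, \mathrm{d}\omega \;\approx\; \tfrac{1}{2},
\]

since the sets of initial speeds that D maps to black and to red alternate very finely, so the integral comes out near 1/2 for any sufficiently smooth f. On the account sketched above, f is supplied by a reasonable initial credence distribution.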

choice

Riches and Rationality

To Appear in The Australasian Journal of Philosophy.

A one-boxer, Erica, and a two-boxer, Chloe, engage in a familiar debate. The debate begins with Erica asking Chloe: ‘If you’re so smart, then why ain’cha rich?’. As the debate progresses, Chloe is led to endorse a novel causalist theory of rational choice. This new theory allows Chloe to forge a connection between rational choice and long-run riches. In brief: Chloe concludes that it is not long-run wealth but rather long-run wealth creation which is symptomatic of rationality.

The Causal Decision Theorist’s Guide to Managing the News

2020. The Journal of Philosophy 117 (3): 117–149.

According to orthodox causal decision theory, performing an action can give you information about factors outside of your control, but you should not take this information into account when deciding what to do. Causal decision theorists caution against an irrational policy of ‘managing the news’. But, by providing information about factors outside of your control, performing an act can give you two, importantly different, kinds of good news. It can tell you that the world in which you find yourself is good in ways you can’t control, and it can also tell you that the act itself is in a position to make the world better. While the first kind of news does not speak in favor of performing an act, I believe that the second kind of news does. I present a revision of causal decision theory which advises you to manage the news about the good you stand to promote, while ignoring news about the good the world has provided for you.

Review of Newcomb’s Problem, edited by Arif Ahmed

2020. Economics & Philosophy 36 (1): 171–176.

credence

Updating for Externalists

To Appear in Noûs.

The externalist says that your evidence could fail to tell you what evidence you do or do not have. In that case, it could be rational for you to be uncertain about what your evidence is. This is a kind of uncertainty which orthodox Bayesian epistemology has difficulty modeling. For, if externalism is correct, then the orthodox Bayesian learning norms of conditionalization and reflection are inconsistent with each other. I recommend that an externalist Bayesian reject conditionalization. In its stead, I provide a new theory of rational learning for the externalist. I defend this theory by arguing that its advice will be followed by anyone whose learning dispositions maximize expected accuracy. I then explore some of this theory’s consequences for the rationality of epistemic akrasia, peer disagreement, undercutting defeat, and uncertain evidence.
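
For reference, here are standard formulations of the two orthodox norms at issue (the notation is mine, not the paper’s), where C is your credence function at the indicated time and E is the proposition learned:

\[
\text{Conditionalization:}\qquad C_{\text{new}}(A) \;=\; C_{\text{old}}(A \mid E) \quad \text{upon learning } E \text{ (and nothing more)};
\]

\[
\text{Reflection:}\qquad C_{\text{now}}\big(A \mid C_{\text{later}}(A) = x\big) \;=\; x.
\]

The claim above is that, once you can be rationally uncertain what E is, these two norms cannot both be respected, and conditionalization is the one to give up.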

Learning and Value Change

2019. Philosophers’ Imprint 19 (29): 1–22.

Accuracy-first accounts of rational learning attempt to vindicate the intuitive idea that, while rationally formed belief need not be true, it is nevertheless likely to be true. To this end, they attempt to show that the Bayesian’s rational learning norms are a consequence of the rational pursuit of accuracy. Existing accounts fall short of this goal, for they presuppose evidential norms which are not and cannot be vindicated in terms of the single-minded pursuit of accuracy. I propose an alternative account, according to which learning experiences rationalize changes in the way you value accuracy, which in turn rationalize changes in belief. I show that this account is capable of vindicating the Bayesian’s rational learning norms in terms of the single-minded pursuit of accuracy, so long as accuracy is rationally valued.

Diachronic Dutch Books and Evidential Import

2019. Philosophy and Phenomenological Research 99 (1): 49–80.

A handful of well-known arguments (the ‘diachronic Dutch book arguments’) rely upon theorems establishing that, in certain circumstances, you are immune from sure monetary loss (you are not ‘diachronically Dutch bookable’) if and only if you adopt the strategy of conditionalizing (or Jeffrey conditionalizing) on whatever evidence you happen to receive. These theorems require non-trivial assumptions about which evidence you might acquire—in the case of conditionalization, the assumption is that, if you might learn that e, then it is not the case that you might learn something else that is consistent with e. These assumptions cannot be relaxed: when they are, not only will non-(Jeffrey) conditionalizers be immune from diachronic Dutch bookability, but (Jeffrey) conditionalizers will themselves be diachronically Dutch bookable. I argue: 1) that there are epistemic situations in which these assumptions are violated; 2) that this reveals a conflict between the premise that susceptibility to sure monetary loss is irrational, on the one hand, and the view that rational belief revision is a function of your prior beliefs and the acquired evidence alone, on the other; and 3) that this inconsistency demonstrates that diachronic Dutch book arguments for (Jeffrey) conditionalization are invalid.
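
To make the assumption for conditionalization explicit (in my notation, not the paper’s): the theorem assumes that the propositions you might learn are mutually exclusive,

\[
E \neq E' \;\Longrightarrow\; E \wedge E' = \bot \qquad \text{for any propositions } E, E' \text{ you might learn.}
\]

It is this partition-like structure of the potential evidence which, the argument above contends, can fail in real epistemic situations.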

No One Can Serve Two Epistemic Masters

2016. Philosophical Studies 175 (10): 2389–2398.

Consider two epistemic experts—for concreteness, let them be two weather forecasters. Suppose that you aren’t certain that they will issue identical forecasts, and you would like to proportion your degrees of belief to theirs in the following way: first, conditional on either’s forecast of rain being x, you’d like your own degree of belief in rain to be x. Second, conditional on them issuing different forecasts of rain, you’d like your own degree of belief in rain to be some weighted average of the forecast of each. Finally, you’d like your degrees of belief to be given by an orthodox probability measure. Moderate ambitions, all. But you can’t always get what you want.
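
Stated a bit more formally (the symbols are mine, not the paper’s), let R be the proposition that it rains and let F1 and F2 be the two forecasters’ announced probabilities of rain. The three desiderata are then:

\[
\begin{aligned}
&(1)\quad \Pr(R \mid F_1 = x) = x \ \text{ and } \ \Pr(R \mid F_2 = x) = x;\\
&(2)\quad \Pr(R \mid F_1 = x \wedge F_2 = y) = \alpha x + (1 - \alpha)\, y \ \text{ for some fixed } \alpha \in [0, 1], \text{ whenever } x \neq y;\\
&(3)\quad \Pr \text{ is a probability measure.}
\end{aligned}
\]

Given that you are not certain the forecasters will issue identical forecasts, these three conditions cannot always be jointly satisfied.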

How to Learn from Theory-Dependent Evidence

2014. The British Journal for the Philosophy of Science 65 (3): 493–519.

Weisberg provides an argument that neither conditionalization nor Jeffrey conditionalization is capable of accommodating the holist’s claim that beliefs acquired directly from experience can suffer undercutting defeat. I diagnose this failure as stemming from the fact that neither conditionalization nor Jeffrey conditionalization gives any advice about how to rationally respond to theory-dependent evidence, and I propose a novel updating procedure that does tell us how to respond to evidence like this. This holistic updating rule yields conditionalization as a special case in which our evidence is entirely theory-independent. Note: I revise and further generalize this theory in Updating for Externalists.

Drafts

These papers are still being revised. Feedback is very much appreciated.

Two-Dimensional De Se Deference

Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, your future, better-informed self, or the objective chances. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn’t defer to your doctor by aligning your credence in the de se thought ‘I am sick’ with the doctor’s credence in that same de se thought. Nor should you defer to the objective chances by setting your credence in the thought ‘The actual winner wins’ equal to the objective chance that the actual winner wins. Here, I generalize principles of expert deference to handle these kinds of problem cases.

Video of a talk I gave on this material is available here.

The Principle of Indifference and the Principal Principle are Incompatible

The Principle of Indifference (POI) says that, in the absence of evidence, you should distribute your credences evenly. The Principal Principle (PP) says that, in the absence of evidence, you should align your credences with the chances. Richard Pettigrew (2016) appears to accept both the PP and the POI. However, the POI and the PP are incompatible. Abiding by the POI means violating the PP. So Bayesians cannot accept both principles; they must choose which, if either, to endorse.
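
In their usual formal guises (these are the standard formulations, not quotations from the paper), with C a prior credence function, w_1, …, w_n a partition of the possibilities, and ch(A) the objective chance of A:

\[
\text{POI:}\quad C(w_i) = \tfrac{1}{n} \ \text{ for each } w_i; \qquad\qquad \text{PP:}\quad C\big(A \mid \mathrm{ch}(A) = x\big) = x.
\]

The claim above is that abiding by the first requires violating the second.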

Escaping the Cycle

I present a decision problem in which causal decision theory appears to violate the independence of irrelevant alternatives (IIA) and normal-form extensive-form equivalence (NEE). I show that these violations lead to exploitable behavior and long-run poverty. These consequences appear damning, but I urge caution. Causalists can dispute the charge that they violate IIA and NEE in this case by carefully specifying when options in different decision problems are similar enough to be counted as the same.