Chances of a Death Foretold

In Gibbard and Harper’s ‘Death in Damascus’, you must choose to travel to either Damascus or Aleppo. You are rather confident that you will meet Death in whichever city you actually choose, and that traveling to the city you don’t choose would save your life. In the standard version of the case, that’s because Death has made a quite reliable prediction about which city you will choose. Today’s post isn’t about ‘Death in Damascus’. It’s about a superficially similar case in which Death does not predict which city you will choose. Instead, Death simply flips a coin to decide where to go. But before you make up your mind, a reliable oracle tells you that you’ll meet Death. What’s interesting about this version of the case is that, for orthodox CDT, which choice is permissible depends upon when the coin flip takes place.

As I’ll be understanding it here, causal decision theory (CDT) is formulated with the aid of an imaging function, which maps a world $w$ and a proposition $A$ to a probability function, $w_A$, such that $w_A(A) = 1$. The interpretation of this imaging function is that, if $A$ is an act, then $w_A(x)$ is the chance that world $x$ would obtain, were you to choose $A$ at world $w$. CDT then says to select the act, $A$, which maximizes $\mathcal{U}(A)$, where

$$ \mathcal{U}(A) \stackrel{\text{df}}{=} \sum_w \Pr(w) \cdot \sum_x w_A(x) \cdot V(x) $$

and $V(x)$ is the degree to which you desire that world $x$ is actual. The inner sum $\sum_x w_A(x) \cdot V(x)$ is how good you would expect $A$ to make things, were you to choose it at world $w$. $\mathcal{U}(A)$ is your expectation of this quantity, so it measures how good you would expect $A$ to make things, were you to choose it. CDT says to choose the act which you would expect to make things best, were you to choose it.

If there aren’t any chances to speak of, then $w_A$ will put all of its probability on a single world, which we can write just ‘$w_A$’: the world which would have obtained, had you performed $A$ in $w$. In that case, $\mathcal{U}(A) = \sum_w \Pr(w) \cdot V(w_A)$.
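To fix ideas, here is a minimal Python sketch of the $\mathcal{U}$ calculation (my own illustration, with a hypothetical name like cdt_utility; nothing here comes from the decision-theory literature). `prior` is your probability over worlds, `image(w, act)` returns the distribution $w_{act}$, and `value` records $V$:

```python
# A minimal sketch of the CDT utility calculation defined above.
# `prior`: dict mapping each world w to Pr(w)
# `image`: function (w, act) -> dict mapping each world x to w_act(x)
# `value`: dict mapping each world x to its desirability V(x)

def cdt_utility(act, prior, image, value):
    """U(act) = sum over w of Pr(w) * sum over x of w_act(x) * V(x)."""
    return sum(
        pr_w * sum(pr_x * value[x] for x, pr_x in image(w, act).items())
        for w, pr_w in prior.items()
    )
```

When there are no chances to speak of, `image(w, act)` is a point distribution, and the inner sum collapses to $V(w_{act})$, as in the formula above.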

CDT disagrees with its rivals only when there is a correlation between your choice and a state which is causally independent of your choice (a ‘state of nature’). This can happen in two different ways. Firstly, there could be a common cause, $CC$, of your choice, $A$, and the state of nature, $K$.

In this case, so long as the value of the common cause $CC$ is not known, there may be a correlation between $K$ and $A$ (though, if the value of $CC$ is known, then $A$ and $K$ will be probabilistically independent).

For instance, consider:

Death Predicted
Based on knowledge of your brain chemistry, Death made a prediction about whether you would go to Aleppo or Damascus. He awaits in whichever city he predicted. Given that you go to Aleppo, you are 80% confident that Death will await there. And given that you go to Damascus, you are 60% confident that Death will await there.

Your brain chemistry is the common cause of Death’s prediction and your choice. It explains the correlation between where you go and where Death awaits.

In this case, the recommendations of CDT depend upon how confident you are that you’ll end up going to Aleppo. I’ll suppose that avoiding Death is the only thing you care about, and that $V(\text{Death}) = 0$, while $V(\text{Life}) = 1$. Let ‘$A$’ be the proposition that you go to Aleppo, and let ‘$D$’ be the proposition that you go to Damascus. Let $a$ be your probability that you’ll go to Aleppo, so that your probability that Death awaits in Aleppo is $0.8a + 0.4(1-a) = 0.4 + 0.4a$. Since you survive just in case you and Death end up in different cities,

$$ \mathcal{U}(A) = 0.6 - 0.4 a \qquad \text{ and } \qquad \mathcal{U}(D) = 0.4 + 0.4 a $$

If $a > 0.25$, then $\mathcal{U}(D) > \mathcal{U}(A)$. If $a < 0.25$, then $\mathcal{U}(D) < \mathcal{U}(A)$. And if $a = 0.25$, then $\mathcal{U}(D) = \mathcal{U}(A)$. So, if you are likely to go to Aleppo, then CDT recommends that you go to Damascus. If you begin to take this advice to heart, and learn that you have, so that you end up likely to go to Damascus, then CDT changes its mind, and advises you to go to Aleppo. If you follow this advice, and learn that you have, then CDT will change course again, advising you to go to Damascus. And so on.

Deliberational Causal Decision Theorists like Brian Skyrms, James Joyce, and Brad Armendt advise you to vacillate back and forth in this way until you end up exactly 25% likely to choose Aleppo and 75% likely to choose Damascus. At that point, both options have equal utility, and so both options are permissible. Skyrms advises you to perform a mixed act of choosing Aleppo with 25% probability and Damascus with 75% probability, whereas Joyce and Armendt say simply that you are permitted to pick either destination, but none will conclude that you’ve chosen irrationally from the fact that you end up in Aleppo. (My official position is that this is a mistake. Given that you’re more likely to face Death in Aleppo than Damascus, Aleppo is an irrational choice.)
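To see the vacillation and its resting point concretely, here is a toy simulation (the specific update rule is my own simplification, not Skyrms’s actual dynamics): probability is nudged toward whichever act currently has the higher utility, and deliberation settles at $a = 0.25$.

```python
# Toy deliberational dynamics for Death Predicted: nudge the probability
# of going to Aleppo toward whichever act currently looks better.

def u_aleppo(a):
    return 0.6 - 0.4 * a   # U(A), from the formula above

def u_damascus(a):
    return 0.4 + 0.4 * a   # U(D), from the formula above

a = 0.9  # suppose you start out quite likely to go to Aleppo
for _ in range(1000):
    a += 0.01 * (u_aleppo(a) - u_damascus(a))  # flow toward the better act
print(round(a, 3))  # 0.25: the equilibrium where U(A) = U(D)
```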

Unknown common causes aren’t the only way of introducing a correlation between your choice, $A$, and a state of nature, $K$. There could be a correlation because there is a common effect, $CE$, of $A$ and $K$, whose value is known.

In this case, there could be a correlation between $A$ and $K$, even when they are causally independent, and they have no common causes.

For instance, consider:

Death Foretold
Earlier today, Death flipped an indeterministic coin twice to decide whether to go to Aleppo or Damascus. If it landed heads both times, then he decided to go to Damascus. Otherwise, he decided to go to Aleppo. Now you must choose where to go. Before you make your choice, an oracle informs you that you will meet Death tomorrow.

Whether you meet Death is a common effect of your choice and the coin flips. And the oracle’s prophecy allows you to know the value of this common effect. So in Death Foretold, as in Death Predicted, there is a correlation between your choice and Death’s destination.

There are four relevant possibilities:

  • $w_A^A$, in which you choose to go to Aleppo and Death is in Aleppo.
  • $w_A^D$, in which you choose to go to Aleppo and Death is in Damascus.
  • $w_D^A$, in which you choose to go to Damascus and Death is in Aleppo.
  • $w_D^D$, in which you choose to go to Damascus and Death is in Damascus.
And I’ll suppose that $V(w_A^A) = V(w_D^D) = 0$, and $V(w_A^D) = V(w_D^A) = 1$.

Suppose that the oracle’s prophecies are perfectly reliable: you’re certain that she speaks the truth. In that case, the correlation is perfect, and you give positive probability to only the possibilities $w_A^A$ and $w_D^D$. And your probability that $w_A^A$ is actual is just your probability that you choose Aleppo, $a$.

Since the coin has already been flipped, there are no chances to speak of. At $w_A^A$, if you were to choose to go to Damascus, you’d be at the world $w_D^A$ (if you were to go to Aleppo, you’d be at $w_A^A$, since you in fact choose Aleppo at $w_A^A$). And at $w_D^D$, if you were to choose to go to Aleppo, you’d be at the world $w_A^D$ (if you were to go to Damascus, you’d be at $w_D^D$, since you in fact choose Damascus at $w_D^D$).

Again let $a$ be your probability that you’ll go to Aleppo. Then,

$$ \mathcal{U}(A) = 1-a \qquad \text{ and } \qquad \mathcal{U}(D) = a $$
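Spelling this out: only $w_A^A$ and $w_D^D$ receive positive probability, so

$$ \mathcal{U}(A) = \Pr(w_A^A) \cdot V(w_A^A) + \Pr(w_D^D) \cdot V(w_A^D) = a \cdot 0 + (1-a) \cdot 1 = 1-a $$

and, likewise, $\mathcal{U}(D) = a \cdot V(w_D^A) + (1-a) \cdot V(w_D^D) = a \cdot 1 + (1-a) \cdot 0 = a$.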

So, in Death Foretold, CDT leads to exactly the same kind of instability as in Death Predicted. So long as $a > 0.5$, $\mathcal{U}(D) > \mathcal{U}(A)$. If $a < 0.5$, then $\mathcal{U}(D) < \mathcal{U}(A)$. And if $a = 0.5$, then $\mathcal{U}(D) = \mathcal{U}(A)$.

As in Death Predicted, deliberational causal decision theorists will say that either destination is permissible. My own judgment is that this is the correct verdict. But I’m not interested in defending this judgment here. Instead, I want to call attention to the fact that CDT’s verdicts are different if Death flips his coin a bit later in the day.

Death Foretold (v2)
Later today, Death will flip an indeterministic coin twice to decide whether to go to Aleppo or Damascus. If it lands heads both times, then he will decide to go to Damascus. Otherwise, he will decide to go to Aleppo. Now you must choose where to go. Before you make your choice, an oracle informs you that you will meet Death tomorrow.

In this case, there are chances to speak of. At $w_A^A$, if you were to choose to go to Damascus, there’s a 25% chance that you’d be at the world $w_D^D$, and there’s a 75% chance that you’d be at the world $w_D^A$ (since there’s a 25% chance that Death’s coin lands heads twice). Similarly, at $w_A^A$, if you were to choose to go to Aleppo, there’s a 25% chance that you’d be at the world $w_A^D$, and there’s a 75% chance that you’d be at the world $w_A^A$. At $w_D^D$, if you were to choose to go to Damascus, there’s a 25% chance that you’d be at the world $w_D^D$ and a 75% chance that you’d be at $w_D^A$. And, at $w_D^D$, if you were to go to Aleppo, there’s a 25% chance you’d be at $w_A^D$ and a 75% chance you’d be at $w_A^A$.

This makes a difference to the values of $\mathcal{U}(A)$ and $\mathcal{U}(D)$. Now that the coin flip has moved later in the day,

$$ \begin{aligned} \mathcal{U}(A) &= \sum_w \Pr(w) \cdot \sum_x w_A(x) \cdot V(x) \\
&= \Pr(w_A^A) \cdot [ 0.25 \cdot V(w_A^D) + 0.75 \cdot V(w_A^A) ] + \Pr(w_D^D) \cdot [0.25 \cdot V(w_A^D) + 0.75 \cdot V(w_A^A)] \\
&= a \cdot [ 0.25 \cdot 1 + 0.75 \cdot 0 ] + (1-a) \cdot [0.25 \cdot 1 + 0.75 \cdot 0] \\
&= 0.25 \end{aligned} $$

and

$$ \begin{aligned} \mathcal{U}(D) &= \sum_w \Pr(w) \cdot \sum_x w_D(x) \cdot V(x) \\
&= \Pr(w_A^A) \cdot [ 0.25 \cdot V(w_D^D) + 0.75 \cdot V(w_D^A) ] + \Pr(w_D^D) \cdot [0.25 \cdot V(w_D^D) + 0.75 \cdot V(w_D^A)] \\
&= a \cdot [ 0.25 \cdot 0 + 0.75 \cdot 1 ] + (1-a) \cdot [0.25 \cdot 0 + 0.75 \cdot 1] \\
&= 0.75 \end{aligned} $$

So now $\mathcal{U}(D)$ is greater than $\mathcal{U}(A)$, no matter how likely you are to go to Aleppo or Damascus, and deliberational causal decision theorists will say that it is impermissible to go to Aleppo. This seems like the wrong verdict to me, but what seems worse is that deliberational CDT treats Death Foretold differently from Death Foretold (v2). Death’s coin flips are causally independent of your choice; whether they happen in the morning or the evening shouldn’t make a difference with respect to whether it is permissible to go to Aleppo.
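The contrast is easy to check numerically with the cdt_utility sketch from earlier (the encoding of worlds as (your city, Death’s city) pairs is just my own convenience):

```python
# Orthodox imaging for the two versions of Death Foretold. Worlds are
# (your_city, deaths_city) pairs; the oracle's prophecy leaves positive
# probability only on the two worlds where you meet Death.

A_A, A_D, D_A, D_D = ("A", "A"), ("A", "D"), ("D", "A"), ("D", "D")
value = {A_A: 0, A_D: 1, D_A: 1, D_D: 0}

def image_early(w, act):
    # Coin already flipped: Death's city is held fixed under imaging.
    return {(act, w[1]): 1.0}

def image_late(w, act):
    # Coin flips still to come: 25% chance Death heads to Damascus.
    return {(act, "D"): 0.25, (act, "A"): 0.75}

a = 0.4
prior = {A_A: a, D_D: 1 - a}

for image in (image_early, image_late):
    print(round(cdt_utility("A", prior, image, value), 3),
          round(cdt_utility("D", prior, image, value), 3))
# early flip: 0.6 0.4   (i.e., U(A) = 1 - a and U(D) = a)
# late flip:  0.25 0.75 (constant, whatever a is)
```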

Causal decision theorists could try to treat these two cases similarly by using a Rabinowicz-ian (strongly) centered imaging function. This imaging function is like the one I used above, except that, in Death Foretold (v2), it says that, at $w_A^A$, were you to go to Aleppo, there’s a 100% chance that you’d end up at world $w_A^A$; and, at $w_D^D$, were you to go to Damascus, there’s a 100% chance that you’d end up at world $w_D^D$.

Rabinowicz’s theory helps somewhat, but not enough. It still treats Death Foretold (v2) differently from Death Foretold. In the second version of the case, where Death’s coin flips take place later in the day, it says that

$$ \mathcal{U}(A) = 0.25 (1-a) \qquad \text{ and } \qquad \mathcal{U}(D) = 0.75 a $$

Whereas, when Death’s coin flips take place earlier in the day, it says that $\mathcal{U}(A) = 1-a$ and $\mathcal{U}(D) = a$ (since, in that case, there are no chances to speak of, and so Rabinowicz and the orthodox view agree). Suppose that your initial probability for $A$ is 0.4. Then, in Death Foretold, Rabinowicz’s theory says (at least, at the beginning of deliberation) that you must go to Aleppo. However, in Death Foretold (v2), with the same initial probability for $A$, Rabinowicz’s theory says (at least, at first) that you must go to Damascus.
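The residual asymmetry is easy to see with the same machinery: here is a sketch of the strongly centered imaging function for Death Foretold (v2), reusing cdt_utility and value from the earlier snippets (again, my own hypothetical encoding):

```python
def image_rabinowicz(w, act):
    # Strongly centered: the act you actually perform at w images w to
    # itself for certain; a non-actual act gets the chancy image.
    if act == w[0]:
        return {w: 1.0}
    return {(act, "D"): 0.25, (act, "A"): 0.75}

a = 0.4
prior = {("A", "A"): a, ("D", "D"): 1 - a}
print(round(cdt_utility("A", prior, image_rabinowicz, value), 3))  # 0.15 = 0.25(1-a)
print(round(cdt_utility("D", prior, image_rabinowicz, value), 3))  # 0.3  = 0.75a
```

With $a = 0.4$, the no-chance version favors Aleppo ($1-a = 0.6$ vs. $a = 0.4$), while Rabinowicz’s verdict in (v2) favors Damascus ($0.15$ vs. $0.3$): exactly the disagreement just described.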