*I spent the past two days preparing comments on a very interesting paper by Vera Hoffmann-Kolss for the upcoming Society for the Metaphysics of Science meeting. Thinking through the paper got me freshly confused about some matters that I had thought settled, and so I thought I’d write up a blog post on those confusions in an attempt to sort them out.*

It’s tempting to think that counterfactual dependence suffices for causation. But this can’t be quite right. I both played cards and played poker. Had I not played cards, I wouldn’t have played poker. So there is counterfactual dependence between my playing poker and my playing cards. But my playing cards didn’t *cause* me to play poker. The relationship between my playing cards and my playing poker is constitutive, not causal.

Sophisticated counterfactual theories of causation, therefore, do not say that counterfactual dependence suffices for causation. Rather, what they say is that counterfactual dependence between *distinct* events suffices for causation. By ‘distinct’, we mean a bit more than ‘non-identical’. The event of my playing poker is not identical to the event of my playing cards. (If you doubt this, note that they differ causally. I played poker because I didn’t have a pinochle deck—I usually play pinochle. But I certainly didn’t play cards because I didn’t have a pinochle deck.) Rather, ‘distinct’ in this context means something more like ‘not logically related’. If two events are not distinct, then let’s say that they *overlap*.

Worries about overlap plague other theories of causation, too. My playing poker is a minimally sufficient condition for my playing cards, so—unless overlapping conditions are specifically excluded—Mackie’s account of causation will deem them causally related.

Today, I’ll be exploring this problem as it plays out for those who, like myself, think that the causal relata are variable values. For such theorists, the problem is to say when *variables* are distinct, and when they overlap.

# 1. Lewisian Events and Variables

Let me begin by getting clearer about what I mean by a ‘variable’. When I’m at my most careful and pedantic, I like to think of variables as generalized Lewisian events. Lewis thought that an event was a property of a spacetime region. For Lewis, a property is just a class of individuals at worlds—intuitively, the class of individuals possessing the property at those worlds. Thus, for Lewis, an event is just a class of spacetime regions at worlds.

Given a Lewisian event, we may construct a function, $e$, from regions to $\{ 1, \ast \}$. $e ( R )=1$ if the event occurs within $R$, and $e ( R ) =\ast$ otherwise. Here, I follow Lewis in distinguishing regions *within* which an event occurs from regions *in* which an event occurs. An event occurs *in* at most one region in any world. However, it occurs *within* every region which contains the region *in* which it occurs. For every world, there will be a worldly region which contains all the regions at that world. If and only if an event occurs at a world $\omega$, the function $e$ will map the worldly region $R_\omega$ to $1$. Given the class of regions at $\omega$ *within* which an event occurs, we may recover the region *in* which it occurs at $\omega$ by simply taking their intersection. So, just as we may go from a Lewisian event to one of these functions, we may go from one of these functions back to a Lewisian event. A Lewisian event, then, is equivalent to a function from regions to $\{ 1, \ast \}$.

- Lewisian Events
- A Lewisian event, $e$, is a function from spacetime regions at worlds to $\{ 1, \ast \}$. $e ( R ) = 1$ iff $e$ occurs within the region $R$, and $e ( R ) = \ast$ otherwise.

This is a characterization, not a definition. Not just any function from regions to $\{1, \ast\}$ will count as a Lewisian event. For instance, no event occurs *in* more than one spacetime region at any one world. Lewis rules out events which occur in only one possible world—that is, events which map only a single worldly region to $1$. He also rules out events which are too gerrymandered—e.g., any event which is essentially “a fiddling in the presence of a boy whose grandson will first set foot on the moon” (p. 257). Some unified account of which events exist and which do not would be nice, but Lewis has none to offer.

We can understand a Lewisian *variable* as the contrastive generalization of a Lewisian event. A variable $V$ is a function from regions to $\mathbb{R} \cup \{ \ast \}$. If $V( R) = v$, then $v$ is the *value* the variable $V$ takes on within the region $R$. If $V( R) = \ast$, then the variable $V$ is undefined within $R$. As with events, we should distinguish those regions *within* which a variable takes on a value from those regions *in* which it takes on a value. Like events, variables take on a value *in* at most one region per world, but take on a value *within* every region containing the region *in* which it takes on a value. As with events, at each world, we may take the intersection of all regions *within* which a variable takes on a value to recover the region *in* which it takes on that value.

- Lewisian Variables
- A Lewisian variable, $V$, is a function from spacetime regions at worlds to $\mathbb{R} \cup \{ \ast \}$. If $V ( R ) = v$, then $v$ is the value the variable takes on within the region $R$, and if $V ( R ) = \ast$, then the variable is undefined within the region $R$.

This, too, is a characterization and not a definition. Not just any function from worldly regions to $\mathbb{R} \cup \{ \ast \}$ counts as a variable. As in the case of events, we should assume that variables take on a value *in* at most one region per world, and we should rule out certain too gerrymandered functions from regions to $\mathbb{R} \cup \{\ast\}$. There is no variable which takes on the value $x$ within exactly those worlds where my right earlobe is $x$ meters from the last spot the last descendant of Napoleon ever left their glasses, and *in* the region where Stanley Kubrick first dreamt. As with events, it would be nice to have a precise characterization of which variables exist and which do not, but I have none to offer.

As a helpful shorthand, we may allow ourselves to write “$e$” for the set of regions which get mapped to $1$ by the event $e$.
$$
e \,\,:=\,\, \{ R \mid e( R) = 1 \}
$$
And we may allow ourselves to write “$V=v$” for the set of regions which get mapped to $v$ by the function $V$.
$$
V=v \,\,:=\,\, \{ R \mid V( R) = v \}
$$
This is a helpful, and mostly harmless, bit of notation, but for reasons I’ll discuss in the next paragraph, *strictly speaking* we should not conflate the set $\{ R \mid V( R) = v \}$ with the variable value $V=v$. Some other bits of notation: I’ll use “$\mathscr{R}(V)$” for the *range* of the variable $V$—that is, the set of all real numbers to which $V$ maps some region, $\{ v \in \mathbb{R} \mid \exists R : V( R) = v \}$. I’ll use a boldfaced “$\mathbf{v}$” for a *set* of values of $V$, and I’ll therefore use “$V \in \mathbf{v}$” for the set of regions which get mapped to a value within $\mathbf{v}$, $\{ R \mid V(R ) \in \mathbf{v} \}$.
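For concreteness, here is a toy rendering of this notation, modeling a variable as a finite partial function from region labels to values (everything here is an illustrative stipulation of mine, not part of the official characterization; an absent region plays the role of $\ast$):

```python
# A toy Lewisian variable: regions are bare labels, and the variable is a
# dict from regions to values. A region missing from the dict is one where
# the variable is undefined (i.e., maps to "*").
V = {"r1": 1, "r2": 0, "r3": 1}

# The range R(V): every value V takes on within some region
def var_range(var):
    return set(var.values())

# The set "V = v": the regions mapped to v
def value_set(var, v):
    return {R for R, val in var.items() if val == v}

# The set "V in vs": the regions mapped to some value in the set vs
def in_set(var, vs):
    return {R for R, val in var.items() if val in vs}

assert var_range(V) == {0, 1}
assert value_set(V, 1) == {"r1", "r3"}
assert in_set(V, {0, 1}) == {"r1", "r2", "r3"}
```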

Notice that, given these characterizations, a Lewisian event is just a singly-valued Lewisian variable. A multiply-valued Lewisian variable is what we may call a *proper* Lewisian variable. I say that proper Lewisian variables are the *contrastive* generalization of Lewisian events. Why ‘contrastive’? Consider an event like Susan’s stealing the bicycle. This is a function which maps regions to $1$ iff they contain Susan stealing the bicycle, and $\ast$ otherwise. This event may be embedded in two different variables. Firstly, consider a variable we may call *whether Susan steals*. This variable takes on the value $1$ for regions in which Susan steals the bicycle, takes on the value $0$ for regions in which Susan buys the bicycle, and takes on the value $\ast$ otherwise (e.g., for regions which don’t contain Susan, or in which Susan steals something other than the bicycle). Secondly, consider a variable we may call *what Susan steals*. This variable takes on the value $1$ for regions in which Susan steals the bicycle, takes on the value $0$ for regions in which Susan steals the moped, and takes on the value $\ast$ otherwise. Both of these variables take on the value $1$ iff the event of Susan’s stealing the bicycle occurs. However, the variable value *whether Susan steals* $= 1$ is different from the variable value *what Susan steals* $=1$. The difference between them is akin to the difference between the sentences

1. Susan *stole* the bicycle (rather than paying for it).
2. Susan stole *the bicycle* (rather than the moped).

One way of making sense of sentences like (1) and (2) is that (1) *presupposes* that Susan either stole or paid for the bicycle, and *asserts* that she stole it; whereas (2) *presupposes* that Susan either stole the bicycle or the moped, and *asserts* that she stole the bike. Though (1) and (2) assert the same thing, they differ in their presuppositions. Similarly, though the variables *whether Susan steals* and *what Susan steals* take on the value $1$ in precisely the same regions, they differ with respect to their presuppositions. *whether Susan steals* presupposes that Susan either stole the bike or paid for it, while *what Susan steals* presupposes that Susan either steals the bike or the moped. This difference in presupposition makes for a difference in variable value. The variable value *whether Susan steals* $=1$ is a different variable value than *what Susan steals* $=1$. This is for the good, since *whether* Susan stole caused her arrest, but *what* she stole did not (see Dretske 1977). Since we want our theory of causation to mark this difference, and since it would be preferable to not have to increase the arity of the causal relation, it is good that our causal relata have this contrastive character.

It is for this reason that we should be careful to distinguish the set $\{ R \mid V( R) = v \}$ from the variable value $V=v$. If we did not distinguish them, then the variable value *whether Susan steals* $=1$ would be identical to the variable value *what Susan steals* $=1$. Compare: it is common to model propositions as functions from possible worlds to truth-value. If we assume that these functions are total, then there is no harm in shifting back and forth between the functions and the set of worlds which get mapped to ‘true’. Given that the functions are total, these representations are equivalent. One method for representing propositions with presuppositions in this framework is to make the corresponding functions partial. Worlds at which the presupposition fails are not mapped to any truth-value. Once this change is made, we must be careful to distinguish a proposition from the set of worlds at which it is true; we may go from the former to the latter, but not from the latter back to the former. And the situation is parallel when we move from taking *events* to be the causal relata to taking *variable values* to be the causal relata. A variable value presupposes a set of disjoint events, and singles out one of them as occurrent. Thus, from each variable value, we get a corresponding event; but we cannot get from an event back to a corresponding variable value. In what follows, I will use expressions like “$V=v$” to stand for classes of regions at worlds, but we should bear in mind that this is a simplification which is harmless for present purposes, but could quickly become harmful in others.
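The analogy with partial functions can be made concrete in a small sketch (the world labels are my own stipulations): two "propositions" that are true at exactly the same worlds, but with different domains of definition, and so different presuppositions.

```python
# Two "propositions" modeled as partial functions (dicts) from worlds to
# truth values. They are true at exactly the same worlds, yet differ in
# their domain of definition -- their presupposition -- so the set of
# worlds where each is true cannot recover the proposition itself.
whether_steals = {"steal_bike": True, "buy_bike": False}     # presupposes: bike stolen or paid for
what_steals    = {"steal_bike": True, "steal_moped": False}  # presupposes: bike or moped stolen

def true_set(p):
    return {w for w, t in p.items() if t}

assert true_set(whether_steals) == true_set(what_steals) == {"steal_bike"}
assert whether_steals != what_steals  # same true-set, different presupposition
```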

# 2. Lewisian Overlap

Now that we’re clear on what variables are (or at least, what I think they should be, for the purposes of constructing a theory of causation), let’s think through when we should say that variables overlap, and when we should say that they are distinct.

## 2.1. Overlapping Events

Since variables are just the contrastive generalization of events, a nice place to start is with Lewis’s theory of when events overlap, and when they are distinct. To begin with, let’s say that an event $e$ *implies* $f$ iff every region containing the event $e$ also contains the event $f$, or $e \subseteq f$. We can then present the Lewisian account of when events overlap as follows:

- Overlapping Events
- Two events, $e$ and $f$, overlap if:

E1) $e$ implies $f$, $$ e \subseteq f $$

E2) $f$ implies $e$, $$ f \subseteq e $$ or

E3) there is some event, $i$, which is implied by both $e$ and $f$, $$ e \subseteq i \quad \text{ and } \quad f \subseteq i $$

In (E3), we can think of the event $i$ as an event which lies at the intersection of $e$ and $f$. Here, the set theoretic notation can be misleading—keep in mind that, to say that $e \subseteq i$ is to say that any region which *contains* $e$ also *contains* $i$; and to say that $f \subseteq i$ is similarly to say that any region which *contains* $f$ also *contains* $i$. So $i$ is an event which sits (necessarily) at the intersection of the events $e$ and $f$. If there is such an intersective event, then $e$ and $f$ overlap.

(Parenthetically, because of the superficial differences between my presentation of Lewisian events and Lewis’s own, my use of the term “implies” differs from Lewis’s. Nevertheless, **Overlapping Events** follows from the sufficient conditions for overlap which Lewis offers in section 5 of *Events*. At the end of this post, I offer a proof of this fact.)

Lewis introduces condition (E3) because of cases like the following (originally from Kim): I write out the name “Larry” on the whiteboard. In so doing, I write out the letters “Larr”, and I write the letters “rry”. Had I not written the letters “Larr”, I would not have written the letters “rry”. But this dependence is logical, and not causal. Neither event on its own implies the other, so conditions (E1) and (E2) on their own will not tell us that these events overlap. However, (E3) will do the job, since there is the event of writing the letters “rr”. Any region within which I write “Larr” is a region within which I write “rr”; and any region within which I write “rry” is a region within which I write “rr”. So (E3) allows us to correctly rule that my writing “Larr” overlaps with my writing “rry”.
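A minimal way to see the (E3) verdict: model a region by the string written within it, so that an event occurs within a region iff its text appears as a substring. The sample regions below are my own.

```python
# An event occurs within a region (here, a string) iff its text is a substring.
def occurs_within(text, region):
    return text in region

# Some sample regions: the strings written in various possible worlds
regions = ["Larry", "Larr", "rry", "Harry", "rr", "Laura"]

larr = {R for R in regions if occurs_within("Larr", R)}
rry  = {R for R in regions if occurs_within("rry", R)}
rr   = {R for R in regions if occurs_within("rr", R)}

# Neither event implies the other, so (E1) and (E2) are silent ...
assert not larr <= rry and not rry <= larr
# ... but both imply the intersective event of writing "rr", as (E3) requires
assert larr <= rr and rry <= rr
```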

Actually, once we have condition (E3) of **Overlapping Events**, we no longer have any need for conditions (E1) or (E2). That’s because (E1) is just the special case of (E3) where $i=f$, and (E2) is just the special case of (E3) where $i=e$.

## 2.2 Overlapping Variables

Generalizing these conditions to variables, we may give the following sufficient conditions for variables $U$ and $V$ overlapping.

- Overlapping Variables
- Two variables, $U$ and $V$, overlap if:
V1) some value of $U$ implies something non-trivial about the value of $V$, $$ \exists u \in \mathscr{R}(U) \,\,\, \exists \mathbf{v} \subsetneq \mathscr{R}(V) \quad U=u \subseteq V \in \mathbf{v}$$

V2) some value of $V$ implies something non-trivial about the value of $U$, $$ \exists v \in \mathscr{R}(V) \,\,\, \exists \mathbf{u} \subsetneq \mathscr{R}(U) \quad V=v \subseteq U \in \mathbf{u}$$ or

V3) there is some variable, $I$, about whose values both some value of $U$ and some value of $V$ imply something non-trivial, $$ \exists u \in \mathscr{R}(U) \,\,\, \exists \mathbf{i} \subsetneq \mathscr{R}(I) \quad U=u \subseteq I \in \mathbf{i} $$ and $$ \exists v \in \mathscr{R}(V) \,\,\, \exists \mathbf{i} \subsetneq \mathscr{R}(I) \quad V=v \subseteq I \in \mathbf{i} $$

This isn’t the most obvious generalization of **Overlapping Events**. In place of (V1), we might instead have said, “some value of $U$ implies some value of $V$”. This condition would have been strictly weaker, in the sense that it would have classified strictly fewer pairs of variables as overlapping. For illustration, suppose that $U$ and $V$ are ternary variables which, within any spacetime region $R$, are either both undefined or jointly take on one of the following pairs of values.

| $V$ | $1$ | $2$ | $2$ | $0$ | $0$ | $1$ |
|---|---|---|---|---|---|---|
| $U$ | $0$ | $0$ | $1$ | $1$ | $2$ | $2$ |

In this case, no value of $U$ implies any value of $V$ (nor does any value of $V$ imply any value of $U$). Nevertheless, every value of $U$ does imply something non-trivial about the value of $V$. Necessarily, if $U(R ) = u$, then $V(R ) \neq u$, for $u \in \{ 0, 1, 2 \}$. Symmetrically, every value of $V$ implies something non-trivial about the value of $U$. Necessarily, if $V( R)=v$, then $U( R) \neq v$, for $v \in \{ 0, 1, 2 \}$. There is clearly a logical relationship between $U$ and $V$, though the weaker formulation “some value of $U$ implies some value of $V$” wouldn’t allow us to detect it, and so I think it makes sense to opt for my stronger formulation of **Overlapping Variables**.
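Treating each column of the table as a region, the two claims just made can be checked mechanically (the encoding is my own):

```python
# The six permitted (U, V) value pairs; each "region" realizes one pair.
pairs = [(0, 1), (0, 2), (1, 2), (1, 0), (2, 0), (2, 1)]
U = {f"r{i}": u for i, (u, v) in enumerate(pairs)}
V = {f"r{i}": v for i, (u, v) in enumerate(pairs)}

range_U, range_V = set(U.values()), set(V.values())

for u in range_U:
    vs = {V[R] for R in U if U[R] == u}
    # No value of U implies any single value of V ...
    assert len(vs) > 1
    # ... but U = u implies the non-trivial claim V in {0,1,2} \ {u},
    # a proper subset of V's range.
    assert u not in vs and vs < range_V
```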

Notice that, since events are just singly-valued variables, (V1), (V2), and (V3) also give sufficient conditions for the overlap of *events*. In this special case, they reduce back to the Lewisian conditions (E1), (E2), and (E3).

We saw above that, once we have condition (E3) of **Overlapping Events**, we get conditions (E1) and (E2) for free. The same is true of condition (V3) of **Overlapping Variables**. For (V1) is just the special case of (V3) in which $I = V$, and (V2) is just the special case of (V3) in which $I = U$. Moreover, condition (V2) is redundant, once we have condition (V1). If some value of $V$ implies something non-trivial about the value of $U$, $V=v \subseteq U \in \mathbf{u}$, then there must be some value of $U$, $u^* \notin \mathbf{u}$, such that $U = u^* $ implies that $V \neq v$. So there must be some value of $U$ which implies something non-trivial about the value of $V$.

# 3. Woodwardian Overlap

Jim Woodward puts forward a necessary condition for variable distinctness (and therefore, a sufficient condition for variable overlap) called *independent fixability*. Two variables $U$ and $V$ are independently fixable iff, for every value $u \in \mathscr{R}(U)$ and every value $v \in \mathscr{R}(V)$, it is possible to set $U$ to $u$ *via* an intervention while setting $V$ to $v$ *via* an intervention. I’d prefer to not invoke Woodward’s technical notion of an intervention if I don’t have to; and fortunately, it follows from $U$ and $V$ being independently fixable that it is *possible* that $U = u$ and $V = v$, for every pair of values $u$ and $v$. Thus, Woodward’s *independent fixability* entails the following sufficient condition for variable overlap:

- Incompossible Values
- The variables $U$ and $V$ overlap if

IV) there is some value $u \in \mathscr{R}(U)$ and some value $v \in \mathscr{R}(V)$ such that there is no possible world within which $U=u$ and $V=v$.

Now, it’s interesting to note that (IV) is equivalent to condition (V1) from **Overlapping Variables**. (I prove this at the end of the post.) If we think that **Incompossible Values** is strong enough to reveal all cases of variable overlap, then we should think that condition (V1) is all that’s required, and that condition (V3) is too strong.

This is what I thought until recently. I learned better from Hoffmann-Kolss’s paper, mentioned at the beginning of this post. Condition (V1) on its own is too weak. There are pairs of variables, all of whose values are compossible with one another, but which still overlap. Here’s a modification of Hoffmann-Kolss’s case: I will roll a standard six-sided die. Then, consider the variables $O$ and $H$, where
$$
O( R) = \left\{\begin{array}{l l}
1 & \text{ if the die lands on an odd number within $R$} \\
0 & \text{ if the die lands on an even number within $R$} \\
\ast & \text{ otherwise }
\end{array} \right.
$$
and
$$
H( R) = \left\{\begin{array}{l l}
1 & \text{ if the die lands on a high number ($>3$) within $R$} \\
0 & \text{ if the die lands on a low number ($\leqslant 3$) within $R$} \\
\ast & \text{ otherwise }
\end{array} \right.
$$
No value of $O$ implies anything non-trivial about the value of $H$; nor does any value of $H$ imply anything non-trivial about the value of $O$. So (V1) and (IV) rule $O$ and $H$ distinct. However, there is a probabilistic correlation between the values of $O$ and $H$. While the unconditional probability that $O = 1$ is $\frac{1}{2}$, the probability that $O=1$, given that $H=1$, is $\frac{1}{3}$. And this correlation is not causal, but rather logical. So our account of variable overlap should tell us that $O$ and $H$ overlap.
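Treating the six outcomes as the relevant worlds and assuming a fair die, both the compossibility of all value pairs and the logical correlation can be checked directly (the encoding is mine):

```python
from fractions import Fraction

# Worlds are the six die outcomes; O records parity, H records high/low.
outcomes = range(1, 7)
O = {n: 1 if n % 2 == 1 else 0 for n in outcomes}
H = {n: 1 if n > 3 else 0 for n in outcomes}

# Every (O, H) value pair is realized, so (IV) and (V1) see no overlap ...
assert {(O[n], H[n]) for n in outcomes} == {(0, 0), (0, 1), (1, 0), (1, 1)}

# ... yet the values are probabilistically correlated (assuming a fair die)
p_O1 = Fraction(sum(1 for n in outcomes if O[n] == 1), 6)
high = [n for n in outcomes if H[n] == 1]
p_O1_given_H1 = Fraction(sum(1 for n in high if O[n] == 1), len(high))

assert p_O1 == Fraction(1, 2)
assert p_O1_given_H1 == Fraction(1, 3)
```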

# 4. Shared Supervenience Bases

An incredibly natural reaction to this case is to think that the *reason* $O$ and $H$ overlap is that there is the more fine-grained variable $N$, which tells us the exact *number* the die lands on. That is, $N( R) = n$ if $R$ is a region within which the die lands on $n$, for $n \in \{ 1, 2, \dots, 6 \}$, and $N( R) = \ast$ otherwise. The values of $O$ and $H$ *supervene* upon the value of $N$, in the sense that $N$’s values imply the values of $O$ and $H$. Any region within which $N$ takes on an odd value is a region within which $O$ takes on the value $1$; and any region within which $N$ takes on an even value is a region within which $O$ takes on the value $0$. Similarly, any region within which $N$ takes on a value greater than 3 is a region within which $H$ takes on a value of $1$; and any region within which $N$ takes on a value less than or equal to 3 is a region within which $H$ takes on a value of $0$.

For the reasons we encountered above, we will want to generalize this notion of variable supervenience so that it is enough for one variable to supervene upon another that the value of one implies *something* non-trivial about the value of the other. Then, we might think that the right way to rule out overlapping variables like $O$ and $H$ is by saying: if there is a variable, $S$, with values that imply something non-trivial about the value of $U$, and with values that imply something non-trivial about the value of $V$, then $U$ and $V$ overlap. Let’s call this sufficient condition for overlap **Shared Supervenience Base**.

- Shared Supervenience Base
- The variables $U$ and $V$ overlap if

SSB) there is some variable $S$ with a value which implies something non-trivial about the value of $U$, $$ \exists s \in \mathscr{R}(S) \,\,\, \exists \mathbf{u} \subsetneq \mathscr{R}(U) \quad S = s \subseteq U \in \mathbf{u} $$ and a value which implies something non-trivial about the value of $V$, $$ \exists s \in \mathscr{R}(S) \,\,\, \exists \mathbf{v} \subsetneq \mathscr{R}(V) \quad S = s \subseteq V \in \mathbf{v} $$

(**Shared Supervenience Base**, by the way, is essentially the route which Hoffmann-Kolss ends up taking, though there are some superficial differences.)

Notice that (SSB) is strictly stronger than (IV). That is, any variables which (IV) rules overlapping will be ruled overlapping by (SSB); though (SSB) rules some variables overlapping which (IV) does not, like $O$ and $H$. To see that (SSB) will agree with (IV) when it says two variables overlap, recall that (IV) is equivalent to (V1), and then note that (V1) is the special case of (SSB) in which $S = U$.

Notice also that (SSB) is *not* just condition (V3) from **Overlapping Variables**. If some value of $U$ implies something non-trivial about the value of $V$, then let’s say that $U$ implies $V$. Then, (SSB) rules that two variables, $U$ and $V$, overlap when there is some third variable which implies both $U$ and $V$. (V3), on the other hand, rules that two variables, $U$ and $V$, overlap when there is some third variable which is implied by both $U$ and $V$. Using arrows to represent the relation of implication: in (SSB), the arrows run *from* the third variable to both $U$ and $V$; in (V3), they run from both $U$ and $V$ *to* the third variable.

Notice also that (V3) is capable of correctly classifying $O$ and $H$ as overlapping. For both $O$ and $H$ imply something non-trivial about the value of the variable $N$. For instance, $O = 1 \subseteq N \in \{ 1, 3, 5\}$, and $H=1 \subseteq N \in \{4, 5, 6 \}$.
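In this finite setting, the (V3) check is mechanical; here is a sketch (encoding mine) verifying the two implications just stated:

```python
# The fine-grained variable N gives the exact number rolled; O and H are
# the parity and high/low variables from before.
outcomes = range(1, 7)
N = {n: n for n in outcomes}
O = {n: 1 if n % 2 == 1 else 0 for n in outcomes}
H = {n: 1 if n > 3 else 0 for n in outcomes}

range_N = set(N.values())

n_given_O1 = {N[n] for n in outcomes if O[n] == 1}
n_given_H1 = {N[n] for n in outcomes if H[n] == 1}

# Both implications are non-trivial: proper subsets of N's range,
# so (V3) classifies O and H as overlapping, with I = N.
assert n_given_O1 == {1, 3, 5} and n_given_O1 < range_N
assert n_given_H1 == {4, 5, 6} and n_given_H1 < range_N
```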

Since events are just singly-valued variables, (SSB) also gives a sufficient condition for the overlap of events. In this special case, (SSB) says that the events $e$ and $f$ overlap if there is some third event, $s$, which implies both $e$ and $f$, $s \subseteq e$ and $s \subseteq f$.

I believe that we should reject (SSB), and that, instead, we should endorse the Lewisian (V3). To see why, we can just focus on what (SSB) says about events.

Let’s suppose that the assassination of Archduke Ferdinand by Gavrilo Princip is an event. And let’s suppose that it is essentially an assassination, with a gun, *of* Archduke Ferdinand, and *by* Gavrilo Princip. That is, no region gets mapped to $1$ by this event unless it is a region within which Gavrilo Princip shoots Archduke Ferdinand dead. Call the event “$a$”, for *assassination*. It seems important to have $a$ included in our ontology—this event caused the start of World War I, and in order for it to do this, it must exist.

Let’s suppose also that Gavrilo Princip’s pulling the trigger is an event, and that this event is essentially a pulling of a trigger by Gavrilo Princip. That is, no region gets mapped to $1$ by this event unless it is a region within which Gavrilo Princip pulls the trigger of a gun. Call this event “$p$”, for *Princip*.

And let’s additionally suppose that Archduke Ferdinand’s death is an event, and that this event is essentially a dying of Archduke Ferdinand. That is, no region gets mapped to $1$ by this event unless it is a region within which Archduke Ferdinand dies. Call this event “$f$”, for *Ferdinand*.

It seems important to have the events $p$ and $f$ in our ontology, since it seems evident that $p$ caused $f$. For this reason, it is also important that $p$ and $f$ be *distinct*. If we say that they overlap, then our account of causation would incorrectly tell us that $p$ did not cause $f$.

But note that $a$ implies both $p$ and $f$. Any region within which $a$ occurs is a region within which $p$ occurs. And any region within which $a$ occurs is also a region within which $f$ occurs. So (SSB) tells us, incorrectly, that $p$ and $f$ overlap. Therefore, if we accept (SSB), then we could not say that Gavrilo Princip’s pulling the trigger caused the death of Archduke Ferdinand. We’d better not accept (SSB), then.

Notice that the same verdict does not follow from (V3) of the Lewisian **Overlapping Variables**. For $p$ does not imply $a$; nor does $f$ imply $a$. Princip could pull the trigger without assassinating Archduke Ferdinand. So too could the Archduke die without being assassinated by Princip. So I think there is compelling reason to reject (SSB) and to instead endorse (V3). (V3) allows us to correctly rule that $O$ and $H$ overlap without incorrectly classifying $p$ and $f$ as overlapping.
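Here is a toy model of the case, with the occurrence patterns stipulated to match the essences just described (the region labels are mine); it checks exactly the claims in the text.

```python
# Events modeled as sets of regions within which they occur; the region
# names and occurrence patterns are my own stipulations.
a = {"assassination"}               # Princip assassinates the Archduke
p = {"assassination", "misfire"}    # Princip pulls the trigger
f = {"assassination", "illness"}    # the Archduke dies

# The assassination a implies both p and f, so (SSB), specialized to
# events, wrongly classifies p and f as overlapping ...
assert a <= p and a <= f

# ... but neither p nor f implies a: Princip can pull the trigger without
# assassinating the Archduke, and the Archduke can die unassassinated.
assert not (p <= a) and not (f <= a)
```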

# A. Loose Ends

## A.1. Lewis is Committed to Overlapping Events

Above, I claimed that **Overlapping Events** was entailed by Lewis’s sufficient conditions for overlap. Given the variant notation, this is far from obvious. For the curious (and to assuage my own nagging conscience) I’ll give a proof here. Let’s introduce $\hat{e}$ for a function which maps a region $R$ to $1$ iff the event occurs *in* that region (the event’s merely occurring *within* that region is not enough). As before, we can use $\hat{e}$ for the set of regions which get mapped to $1$ by $\hat{e}$. Then $\hat{e}$ will be an event as Lewis formally defined them.

Lewis gave sufficient conditions for overlap in terms of a parthood relation. He said that $\hat{e}$ and $\hat{f}$ overlap if either (1) $\hat{e}$ is a part of $\hat{f}$; (2) $\hat{f}$ is a part of $\hat{e}$; or (3) there is some event $\hat{\imath}$ which is a part of both $\hat{e}$ and $\hat{f}$. What I wish to show here is that $e \subseteq f$ suffices for $\hat{f}$ being a part of $\hat{e}$. This will show that overlap according to **Overlapping Events** suffices for overlap according to Lewis.

On page 255 of *Events*, Lewis defines his *implication* relation as follows,

Let us say that event $e$ implies event $f$ iff, necessarily, if $e$ occurs in a region then also $f$ occurs in that region. Considered as classes, event $e$ is a subclass included in class $f$.

This use of ‘implies’ differs from the one I used above. It is like the one I used above, but applied to events after we put the hats on. To say that $\hat{e}$ implies $\hat{f}$ is to say that $\hat{e} \subseteq \hat{f}$. Later, on page 258, Lewis defines the relation of *being essentially part of* as follows (with minor notational changes)

Let us say that event $f$ is essentially part of event $e$, iff, necessarily, if $e$ occurs in a region, then also $f$ occurs in a subregion included in that region.

If we use $\hat{f} \sqsubseteq \hat{e}$ to stand for this relation, then we have that $$ \hat{f} \sqsubseteq \hat{e} := (\forall R) (\hat{e}( R) = 1 \rightarrow (\exists R') ( R' \subseteq R \wedge \hat{f}( R') = 1) ) $$ We may now prove the following lemma.

- Lemma 1
- If $e \subseteq f$, then $\hat{f} \sqsubseteq \hat{e}$.

*Proof*. Assume that $e \subseteq f$. If $\hat{e}( R) = 1$, then $e(R ) = 1$. And, since $e \subseteq f$, if $e( R) = 1$, then $f( R) =1$. If $f( R) = 1$, then there is some subregion $R' \subseteq R$ such that $\hat{f}( R') = 1$. So, if $\hat{e}( R) = 1$, then there is some subregion $R' \subseteq R$ such that $\hat{f}(R')=1$. So $\hat{f} \sqsubseteq \hat{e}$. $\blacksquare$

Then, on page 259, Lewis defines the relation of *being a part of* as follows (with minor notational changes)

Let us say that occurrent event $f$ is part of occurrent event $e$ iff some occurrent event that implies $f$ is essentially part of some occurrent event that implies $e$.

If we use “$\hat{f}P\hat{e}$” to stand for “$\hat{f}$ is a part of $\hat{e}$”, then this tells us that
$$
\hat{f}P\hat{e} \,\,:=\,\, (\exists \hat{\imath}) (\exists \hat{\jmath}) ( \hat{\imath} \subseteq \hat{f} \wedge \hat{\jmath} \subseteq \hat{e} \wedge \hat{\imath} \sqsubseteq \hat{\jmath} )
$$
We can then prove **Lemma 2**.

- Lemma 2
- If $\hat{f} \sqsubseteq \hat{e}$, then $\hat{f} P \hat{e}$.

*Proof*. Since $\hat{f} \subseteq \hat{f}$ and $\hat{e} \subseteq \hat{e}$, if $\hat{f} \sqsubseteq \hat{e}$, then we may just let $\hat{\imath} = \hat{f}$ and let $\hat{\jmath} = \hat{e}$ in the definition of $\hat{f} P \hat{e}$ above. $\blacksquare$

Putting together **Lemma 1** and **2**, we have that $e \subseteq f$ suffices for $\hat{f}$ being a part of $\hat{e}$. Therefore, a) $e \subseteq f$ suffices for $\hat{f}$ being a part of $\hat{e}$; b) $f \subseteq e$ suffices for $\hat{e}$ being a part of $\hat{f}$; and c) there being an event $i$ such that $e \subseteq i$ and $f \subseteq i$ suffices for there being an event $\hat{\imath}$ such that $\hat{\imath}$ is a part of $\hat{e}$ and $\hat{\imath}$ is a part of $\hat{f}$. So, if two events are overlapping according to **Overlapping Events**, then they will be overlapping according to Lewis.

## A.2. (IV) is Equivalent to (V1)

Above, I claimed that Woodward’s (IV) from **Incompossible Values** is equivalent to (V1) from **Overlapping Variables**. To see this, suppose that (V1) is true, so that $U = u \subseteq V \in \mathbf{v}$, for some $u \in \mathscr{R}(U)$ and some $\mathbf{v} \subsetneq \mathscr{R}(V)$. It follows that there is some value of $V$, $v^* \notin \mathbf{v}$, such that there is no region within which $U = u$ and $V = v^*$. Therefore, there is no worldly region within which $U=u$ and $V = v^*$, and thus $U$ and $V$ overlap according to (IV).

Going in the other direction, suppose that $U$ and $V$ overlap according to (IV). Then, there is some value of $U$—call it ‘$u^*$’—and some value of $V$—call it ‘$v^*$’—such that there is no worldly region within which $U=u^*$ and $V = v^*$. But then, $U=u^*$ implies something non-trivial about the value of $V$, namely, that $V \neq v^*$. So $U$ and $V$ overlap according to (V1) as well.
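For finite variables, this equivalence can also be confirmed by brute force; the following sketch (encoding mine) checks (IV) against (V1) on every assignment of binary $U$, $V$ values to four regions:

```python
from itertools import product

# A "model" assigns each of four regions a pair of (U, V) values.

def v1(pairs):
    # (V1): some value u of U implies "V in vs" for a proper subset vs of V's range
    range_U = {u for u, _ in pairs}
    range_V = {v for _, v in pairs}
    return any({v for u2, v in pairs if u2 == u} < range_V for u in range_U)

def iv(pairs):
    # (IV): some pair of values (u, v) is realized in no region
    realized = set(pairs)
    range_U = {u for u, _ in pairs}
    range_V = {v for _, v in pairs}
    return any((u, v) not in realized for u in range_U for v in range_V)

# Exhaustively check all assignments of binary U, V values to four regions
for model in product(product((0, 1), repeat=2), repeat=4):
    assert v1(list(model)) == iv(list(model))
```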