Tuesday, November 21, 2017


A standard definition of omniscience is:

  • x is omniscient if and only if x knows all truths and does not believe anything but truths.

But knowing all truths and not believing anything but truths is not good enough for omniscience. One can know a proposition without being certain of it, assigning a credence less than 1 to it. But surely such knowledge is not good enough for omniscience. So we need to say: “knows all truths with absolute certainty”.

I wonder if this is good enough. I am a bit worried that maybe one can know all the truths in a given subject area but not understand how they fit together—knowing a proposition about how they fit together might not be good enough for this understanding.

Anyway, it’s kind of interesting that even apart from open theist considerations, omniscience isn’t quite as cut and dried as one might think.

Perfect rationality and omniscience

  1. A perfectly rational agent who is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other.

  2. A perfectly rational agent must believe anything there is overwhelming evidence for.

  3. A perfectly rational agent must have consistent beliefs.

  4. In lottery situations, there is overwhelming evidence for each of a set of inconsistent claims: the claim that one of options 1, 2, 3, … is the case, the claim that option 1 is not the case, the claim that option 2 is not the case, and so on.

  5. So, in lottery situations, a perfectly rational agent has inconsistent beliefs. (2,4)

  6. So, a perfectly rational agent is never in a lottery situation. (3,5)

  7. So, a perfectly rational agent is omniscient. (1,6)
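To put rough numbers on premise 4: in a hypothetical 1000-ticket lottery (the numbers are mine, purely illustrative), each individual claim enjoys overwhelming evidence even though the claims are jointly inconsistent:

```python
# A toy version of premise 4 (a hypothetical 1000-ticket lottery; the
# numbers are mine, purely illustrative).
n = 1000
p_some_option = 1.0             # exactly one option is the case
p_option_i_false = (n - 1) / n  # each "option i is not the case" claim

print(p_option_i_false)  # 0.999: overwhelming evidence for each claim
# Yet the n+1 claims are jointly inconsistent: "some option is the case"
# plus "option i is not the case" for every i cannot all be true.
```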

The standard thing people like to say about arguments like this is that they are a reductio of the conjunction of the premises 2 through 4. But I think it might be interesting to take it as a straightforward argument for the conclusion 7. Maybe one cannot separate out procedural epistemic perfection (perfect rationality) from substantive epistemic perfection (omniscience).

That said, I am inclined to deny 3.

It’s worth noting that this yields another variant on an argument against open theism. For even though I am inclined to think that inconsistency in beliefs is not an imperfection of rationality, it is surely an imperfection simpliciter, and hence a perfect being will not have inconsistent beliefs.

Saturday, November 18, 2017

Bayesianism and anomaly

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

I suspect that often this happens: T is much better confirmed than A. For T tends to be a unified theoretical body that has been confirmed as a whole by a multitude of different kinds of observations, while A is a conjunction of a large number of claims that have been individually confirmed. Suppose, say, that P(T)=0.999 while P(A)=0.9, where all my probabilities are implicitly conditional on some background K. Given the observation E, and the fact that T and A entail its negation, we now know that the conjunction of T and A is false. But we don’t know where the falsehood lies. Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

  1. T is true and A is false

  2. T is false and A is true

  3. both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one given that we are in one of the three is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.
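A quick check of the region arithmetic (my own sketch; note that the middle region comes to 0.001 × 0.9 = 0.0009):

```python
# Checking the region arithmetic, assuming independence, P(T)=0.999, P(A)=0.9.
pT, pA = 0.999, 0.9

r1 = pT * (1 - pA)        # T true,  A false: 0.0999
r2 = (1 - pT) * pA        # T false, A true:  0.0009
r3 = (1 - pT) * (1 - pA)  # both false:       0.0001

total = r1 + r2 + r3      # we now know we are somewhere in these regions
print(round(r1 / total, 3))  # 0.99: the new credence in T
```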

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now the setup ensures:

  1. P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

  2. P(E|∼A ∧ T)=0.5
  3. P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

  4. P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

  5. P(E|T)=P(E|A ∧ T)P(A|T)+P(E|∼A ∧ T)P(∼A|T)=0.05
  6. P(E|∼T)=P(E|A ∧ ∼T)P(A|∼T)+P(E|∼A ∧ ∼T)P(∼A|∼T)=0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.
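For the record, here is the whole computation under the stated assumptions (independence of A and T, P(T)=0.999, P(A)=0.9, and the likelihoods above), as a sketch one can rerun with other numbers:

```python
# The likelihood and Bayes computation, with the assumed numbers from the text.
pT, pA = 0.999, 0.9

pE_T    = 0.0 * pA + 0.5 * (1 - pA)  # P(E|T)  = 0.05
pE_notT = 0.1 * pA + 0.5 * (1 - pA)  # P(E|~T) = 0.14

# Bayes' theorem:
pT_E = pE_T * pT / (pE_T * pT + pE_notT * (1 - pT))
print(round(pT_E, 3))  # 0.997: barely below the prior 0.999
```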

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: we would put almost no weight on someone finding an anomaly in the course of an undergraduate physics lab—not just because it is likely an undergraduate student doing it (though it could be the professor testing the equipment), but because this is ground well gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence? How do we allow anomalies to have a rightful place in undermining theories? The answer is: to undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.

Note that this post weakens, but does not destroy, the central arguments of this paper.

A consideration making the theodical defeat of evil a bit easier

For an evil to be defeated, in the theodical sense, the evil needs to be not only compensated for in the sufferer’s life, but it needs to be interwoven into a good in the sufferer’s life in such a way that the meaning of the evil is radically transformed in that life.

A requirement of the defeat of evil guards against theodicies where the sufferer gets the short end of the stick, the evil being permitted for the sake of goods to other individuals, or abstract impersonal goods like elegant laws of nature. Defeat appears to have an innate intrapersonality to it.

It occurs to me, however, that in heaven the requirement of defeat can sometimes be met through goods that happen to someone other than the sufferer. For all in heaven are friends of the best sort, and as Aristotle says, a friend (of the best sort) is another self, so that what happens to the friend happens to one. So if Alice has suffered an evil and Bob got a proportionate good out of God’s permitting the evil to Alice, if Alice and Bob are friends in the deepest sense, then the evil that happened to Alice is just as much a part of Bob’s life, and the good to Bob is just as much a part of Alice’s. Thus, defeat can be achieved interpersonally given friendship, without any worries about Alice getting the short end of the stick.

And abstract impersonal goods—like aesthetic ones—can become deeply personal through appreciation.

Thus, the intrapersonality condition in defeat can be met more easily than seems at first sight.

Thursday, November 16, 2017

Truth-value open theism

Consider the view that there are truth values about future contingents, but (as Swinburne and van Inwagen think) God doesn’t know future contingents. Call this “truth-value open theism”.

  1. Necessarily, a perfectly rational being believes anything there is overwhelming evidence for.

  2. Given truth-value open theism, God has overwhelming but non-necessitating evidence for some future contingent proposition p.

  3. If God has overwhelming but non-necessitating evidence for some contingent proposition p, there is a possible world where God has overwhelming evidence for p and p is false.

  4. So, if truth-value open theism is true, either (a) there is a possible world where God fails to believe something he has overwhelming evidence for or (b) there is a possible world where God believes something false. (2-3)

  5. So, if truth-value open theism is true, either (a) there is a possible world where God fails to be perfectly rational or (b) there is a possible world where God believes something false. (1,4)

  6. It is an imperfection to possibly fail to be perfectly rational.

  7. It is an imperfection to possibly believe something false.

  8. So, if truth-value open theism is true, God has an imperfection. (6-7)

And God has no imperfections.

To argue for (2), just let p be the proposition that somebody will freely do something wrong over the next month. There is incredibly strong inductive evidence for p, and God has at least that evidence.

A version of the cosmological argument from preservation

Suppose that all immediate causation is simultaneous. The only way to make this fit with the obvious fact that there is diachronic causation is to make diachronic causation be mediate. And there is one standard way of making mediate diachronic causation out of immediate synchronic causation: temporally extended causal relata. Suppose that A lasts from time 0 to time 3, B lasts from time 2 to time 5, and C lasts from time 4 to time 10 (these can be substances or events). Then A can synchronically cause B at time 2 or 3, B can synchronically cause C at time 4 or 5, and one can combine the two immediate synchronic causal relations into a mediate diachronic causal relation between A and C, even though there is no time at which we have both A and C.
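The timing in this example is easy to check mechanically (a small sketch of my own):

```python
# The intervals from the example (my sketch): A lasts 0-3, B lasts 2-5,
# C lasts 4-10, as (start, end) pairs.
A, B, C = (0, 3), (2, 5), (4, 10)

def overlap(x, y):
    """Do two closed intervals share at least one time?"""
    return max(x[0], y[0]) <= min(x[1], y[1])

print(overlap(A, B))  # True: they share times 2-3, so A can synchronically cause B
print(overlap(B, C))  # True: they share times 4-5, so B can synchronically cause C
print(overlap(A, C))  # False: yet A mediately causes C via B
```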

The problem with this approach is explaining the persistence of A, B and C over time. If we believe in irreducibly diachronic causation, then we can say that B’s existence at time 2 causes B’s existence at time 3, and so on. But this move is not available to the defender of purely simultaneous causation, except maybe at the cost of an infinite regress: maybe B’s existence from time 2.00 to time 2.75 causes B’s existence from time 2.50 to time 3.00; but now we ask about the causal relationship between B’s existence at time 2.00 and time 2.75.

So if we are to give a causal explanation of B’s persistence from time 2 to time 5, it will have to be in terms of the simultaneous causal efficacy of some other persisting entity. But this leads to a regress that is intuitively vicious.

Thus, we must come at the end to at least one persisting entity E such that E’s persistence from some time t1 to some time t2 has no causal explanation. And if we started our inquiry by asking about the persistence of something that persists over some times today, then these times t1 and t2 are today.

Even if we allow for some facts to be unexplained contingent “brute” facts, the persistence of ordinary objects over time shouldn’t be like that. Moreover, it doesn’t seem right to suppose that the ultimate explanations of the persistence of objects involve objects whose own persistence is brute. For that makes it ultimately be a brute fact that reality as a whole persists, a brute and surprising fact.

So, plausibly, we have to say that although E’s persistence from t1 to t2 has no causal explanation, it has some other kind of explanation. The most plausible candidate for this kind of explanation is that E is imperishable: that it is logically impossible for E to perish.

Hence, if all immediate causation is simultaneous, very likely there is something imperishable. And the imperishable entity or entities then cause things to exist at the time at which they exist, thereby explaining their persistence.

On the theory that God is the imperishable entity, the above explains why for Aquinas preservation and creation are the same.

(It’s a pity that I don’t think all immediate causation is simultaneous.)

Problem: Suppose E immediately makes B persist from time 2 to time 4, by immediately causing it to exist at all the times from 2 to 4. Surely, though, E exists at time 4 because it existed at time 2. And this “because” is hard to explain.

Response: We can say that B exists at time 4 because of its esse (or act of being) at time 2, provided that (a) B’s esse at time 2 is its being caused by E to exist at time 2, and (b) E causes B to exist at time 4 because (non-causally because) E caused B to exist at time 2. But once we say that B exists at time 4 because of its very own esse at time 2, it seems we’ve saved the “because” claim in the problem.

Two moment presentism

The biggest problem for presentism is the problem of diachronic relations, especially causation. If E is earlier than F and E causes F, then at any given time, this instance of causation will have to either be a relation between two non-existent relata or a relation between one existent and one non-existent relatum, and this is problematic. Here’s a variant on presentism that solves that problem.

Suppose time is discrete, but instead of supposing that a single moment is always actual, suppose that two successive moments are always actual. Thus, if the moments are numbered 0, 1, 2, 3, …, first 0 and 1 are actual, then 1 and 2 are actual, then 2 and 3 are actual, and so on. We then say that the present contains both of the successive moments: the present is not a moment. It is never the case that a single moment is actual, except maybe at the beginning or end of the sequence (those are variants whose strengths and weaknesses need evaluation). Strictly speaking, then, we should label times with pairs of moments: time 1–2, time 2–3, etc. (There are now two variants: on one of them, time 2–3 consists of nothing but the two moments; on the other, it also has an “in between”.)

We then introduce two primitive tense operators: “Just was” and “Is about to be”. Thus, if an object is yellow from times 0 through 2 and blue from time 3 onward, then at time 2–3 it just was yellow and is about to be blue. We can say that an object is F at time 2–3, where Fness is something stative rather than processive, provided that it just was F and is about to be F. We might want to say that it is changing from being F1 to being F2 if it just was F1 and is about to be F2 instead (or maybe there is something more to change than that).
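The view can be made concrete in a toy model (my own formalization, not anything the two-moment presentist is committed to), using the yellow-to-blue example:

```python
# A toy formalization (my own, purely illustrative): a history assigns a
# colour to each discrete moment; the present is a *pair* of moments.
history = {0: "yellow", 1: "yellow", 2: "yellow", 3: "blue", 4: "blue"}

def just_was(prop, present):
    """prop held at the earlier of the two present moments."""
    return prop(history[present[0]])

def is_about_to_be(prop, present):
    """prop holds at the later of the two present moments."""
    return prop(history[present[1]])

def is_statively(prop, present):
    """'Is F' for stative F: just was F and is about to be F."""
    return just_was(prop, present) and is_about_to_be(prop, present)

now = (2, 3)  # time 2-3
print(just_was(lambda c: c == "yellow", now))      # True
print(is_about_to_be(lambda c: c == "blue", now))  # True
print(is_statively(lambda c: c == "yellow", now))  # False: it is changing
```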

We can now get cases of direct diachronic causation between events at neighboring moments, and because both of the moments are present, our “two-moment presentist” will say that when the two moments are both present, causation is a relation between two existent relata, one at the earlier moment and the other at the later. Of course, there will be cases of indirect diachronic causation to talk about, where some event at time 2 causes an event at time 4 by means of an event at time 3, but the two-moment presentist can break this up into two direct instances of diachronic causation, one of which did/does/will take place at time 2–3 and the other of which did/does/will take place at time 3–4.

I bet this view is in the literature. It’s too neat a solution to the problem not to have been noticed.

A spatial "in between"

In my last post I offered the suggestion that someone who thinks time is discrete has reason to think that there is something in between the moments—a continuous unbroken (but perhaps breakable) interval.

I think a similar thought can be had about discrete space.

Consideration 1: Imagine that space is discrete, arranged on a grid pattern, and I touch left and right index fingers together. It could happen that the rightmost spatial points of my left fingertip are side by side with the leftmost spatial points of my right fingertip, but nonetheless my hands aren’t joined into a single solid. One way to represent this setup would be to say that a spatial point in my left fingertip is right next to a spatial point in my right fingertip, but the interval between these spatial points is not within me.

But positing a spatial “in between” isn’t the only solution: distinguishing internal and external geometry is another.

Consideration 2: Zeno’s Stadium argument can be read as noting that if space and time are discrete, then an object moving at one point per unit of time rightward and an equal-length object moving at one point per unit of time leftward can pass by each other without ever being side by side. Positing an “in between”, such that objects may be in “in between” places when they are in between times, may make this less problematic.

Wednesday, November 15, 2017

A non-reductive eternalist theory of change

It is sometimes said that B-theorists see change as reducible to temporal variation of properties—being non-F at t1 but F at t2 (the “at-at theory of change”)—while A-theorists have a deeper view of change.

But isn’t the A-theorist’s view of change just something like: having been non-F but now being F? But that’s just as reductive as the B-theorist’s at-at theory of change, and it seems just as much to be a matter of temporal variation. Both approaches have this feature: they analyze change in terms of the having and not having of a property. Note, also, that the A-theorist who gives the having-been-but-now-being story about change is committed to the at-at theory being logically sufficient for change from being non-F to being F.

I think there may be something to the intuition that the at-at theory doesn’t wholly capture change. But moving to the A-theory does not by itself solve the problem. In fact, I think the B-theory can do better than the best version of the A-theory.

Let me sketch an Aristotelian story about time. Time is discrete. It has moments. But it is not exhausted by moments. In addition to moments there are intervals between moments. These intervals are in fact undivided, though they might be divisible (Aristotle will think they are). At moments, things are. Between moments, things become. Change is when at one moment t1 something is non-F, at the next moment t2 it is F, and during the interval between t1 and t2 it is changing from non-F to F.

On this story, the at-at theory gives a necessary condition for changing from non-F to F, but perhaps not a sufficient one. For suppose temporally gappy existence is possible, so that an object can cease to exist and come back. Then it is conceivable that an object exist at t1 and at t2, but not during the interval between t1 and t2. Such an object might be brought back into existence at t2 with the property of Fness which it lacked at t1, but it wouldn’t have changed from being non-F to being F.

But there is a serious logical difficulty with the above story: the law of excluded middle. Suppose that a banana turns from non-blue (say, yellow) to blue over the interval I from t1 to t2. What happens during the interval? By excluded middle, the banana is non-blue or blue. But which is it? It cannot be non-blue on a part of the interval I and blue on another part, for that would imply a subdivision of the interval on the Aristotelian view of time. So it must be blue over the whole interval or non-blue over the whole interval. But neither option seems satisfactory. The interval is when it is changing from non-blue to blue; it shouldn’t already be at either endpoint during the interval. Thus, it seems, during I the banana is neither non-blue nor blue, which seems a contradiction.

But the B-theorist has a way of blocking the contradiction. She can take one of the standard B-theoretic solutions to the problem of temporary intrinsics and use that. For instance, she can say that the banana is neither blue-during-I nor non-blue-during-I. There is no contradiction here, nor any denial of excluded middle.

What the theory denies is temporalized excluded middle:

  1. For any period of time u, either s during u or (not s) during u

but it affirms:

  2. For any period of time u, either s during u or not (s during u).

A typical presentist is unable to say that. For a typical presentist thinks that if u is present, then s during u if and only if s simpliciter, so that (1) follows from (2), at least if u is present (and then, generalizing, even if it’s not). Such a typical presentism, which identifies present truth with truth simpliciter, is, I think, the best version of the A-theory.
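The difference between (1) and (2) can be displayed in a toy model of the banana case (my construction; the truth-value assignments are stipulated):

```python
# A stipulated toy assignment (mine, not the author's): during the changing
# interval I, neither "blue during I" nor "non-blue during I" holds.
holds_during = {("blue", "I"): False, ("non-blue", "I"): False}

def during(s, u):
    return holds_during[(s, u)]

# Temporalized excluded middle (1) fails at u = I:
print(during("blue", "I") or during("non-blue", "I"))  # False

# Plain excluded middle (2) survives, since the metalanguage is classical:
print(during("blue", "I") or not during("blue", "I"))  # True
```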

Thinking of time as made up of moments and intervals is, I think, quite fruitful.

Tuesday, November 14, 2017

Freedom, responsibility and the open future

Assume the open futurist view on which freedom is incompatible with there being a positive fact about what I choose, and so there are no positive facts about future (non-derivatively) free actions.

Suppose for simplicity that time is discrete. (If it’s not, the argument will be more complicated, but I think not very different.) Suppose that at t2 I freely choose A. Let t1 be the preceding moment of time.


  1. At t2, it is already a fact that I choose A, and so I am no longer free with respect to A.

  2. At t1, I am still free with respect to choosing A, but I am not yet responsible with respect to A.

  3. So, at no time am I both free and responsible with respect to A. (1,2)

This seems counterintuitive to me.

Open theism and divine perfection

  1. It is an imperfection to have been close to certain of something that turned out false.

  2. If open theism is true, God was close to certain of propositions that turned out false.

  3. So, if open theism is true, God has an imperfection.

  4. God has no imperfections.

  5. So, open theism is not true.

I think (1) is very intuitive and (4) is central to theism. It is easy to argue for (2). Consider giant sentence of the form:

  6. Alice’s first free choice on Monday is F1, Bob’s first free choice on Monday is F2, Carol’s first free choice on Monday is F3, …

where the list of names ranges over the names of all people living on Monday, and the Fi are "right", "not right" and "not made" (the last means that the agent will not make any free choices on Monday).

Exactly one proposition of the form (6) ends up being true by the end of Monday.

Suppose we’re back on the Sunday before that Monday. Absent the kind of knowledge of the future that the open theist denies to God, God will rationally assign probabilities to propositions of the form (6). These probabilities will all be astronomically low. Even though Alice may be very virtuous and her next choice is very likely to be right, and Bob is vicious and his next choice is very likely to be wrong, etc., given that any proposition of the form (6) has 7.6 billion conjuncts, the probability of that proposition is tiny.
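To see just how tiny, here is an illustrative calculation (the uniform 0.99 per conjunct is my assumption): even if each of the 7.6 billion conjuncts individually had probability 0.99, the conjunction would be astronomically improbable:

```python
import math

# An illustrative calculation (the uniform 0.99 is my assumption): even if
# each of the 7.6 billion conjuncts had probability 0.99, the conjunction
# of form (6) would be astronomically improbable.
n = 7_600_000_000
p_each = 0.99
log10_p = n * math.log10(p_each)
print(log10_p)  # about -3.3e7, i.e. probability around 10**(-33 million)
```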

Thus, on Sunday God assigns minuscule probabilities to all the propositions of the form (6), and hence God is close to certain of the negations of all such propositions. But come Tuesday, one of these negated propositions turns out to be false. Therefore, on Tuesday—i.e., today—there is a proposition that turned out false that God was close to certain of. And that yields premise (2).

(I mean all my wording to be neutral between the version of open theism where future contingents have a truth value and the one where they do not.)

Moreover, even without considerations of perfections, being close to certain of something that will turn out to be false is surely inimical to any plausible notion of omniscience.

Monday, November 13, 2017

Flying rings

My five-year-old has been really enjoying our Aerobie Pro flying disk, but it has too much range to use at home or in a backyard. The patent has expired, so I designed a 3D-printable version with a similar airfoil profile and customizable diameter and wing-chord. The inner one is 100mm diameter (20mm chord), and can be used indoors. Here are the files.

Open theism and utilitarianism

Here’s an amusing little fact. You can’t be both an open theist and an act utilitarian. For according to the act utilitarian, to fail to maximize utility is wrong. It is impossible for God to do the wrong thing. But given open theism, it does not seem that God can know enough about the future in order to be necessarily able to maximize utility.

Thursday, November 9, 2017

Proportionality in Double Effect is not a simple comparison

It is tempting to make the final “proportionality” condition of the Principle of Double Effect say that the overall consequences of the action are good or neutral, perhaps after screening off any consequences that come through evil (cf. the discussion here).

But “good or neutral” is not a necessary condition for permissibility. Alice is on a bridge above Bob, and sees an active grenade roll towards Bob. If she does nothing, Alice will be shielded by the bridge from the explosion. But instead she leaps off the bridge and covers the grenade with her body, saving Bob’s life at the cost of her own.

If “good or neutral” consequences are required for permissibility, then to evaluate the permissibility of Alice’s action it seems we would need to evaluate whether Alice’s death is a worse thing than Bob’s. Suppose Alice owns three goldfish while Bob owns two goldfish, and in either case the goldfish will be less well cared for by the heirs (and to the same degree). Then Alice’s death is mildly worse than Bob’s death, other things being equal. But it would be absurd to say that Alice acted wrongly in jumping on the grenade because of the impact of this act on her goldfish.

Thus, the proportionality condition in PDE needs to be able to tolerate some differences in the size of the evils, even when these differences disfavor the course of action that is being taken. In other words, although the consequences of jumping on the grenade are slightly worse than those of not doing so, because of the impact on the goldfish, the bad consequences of jumping are not disproportionate to the bad consequences of not jumping.

On the other hand, if it was Bob’s goldfish bowl, rather than Bob, that was near the grenade, the consequences of jumping would be disproportionate to the consequences of not jumping, since Alice’s death is disproportionately bad as compared to the death of Bob’s goldfish.

Objection: The initial case where Alice jumps to save Bob’s life fails to take into account the fact that Alice’s act of self-sacrifice adds great value to the consequences of jumping, because it is a heroic act of self-sacrifice. This added increment of value outweighs the loss to Alice’s extra goldfish, and so I was incorrect to judge that the consequences are mildly negative.

Response: First, it seems to be circular to count the value of the act itself when evaluating the act’s permissibility, since the act itself only has positive value if it is permissible. And anyway one can tweak the case to avoid this difficulty. Suppose that it is known that if Alice does not jump on the grenade, Carl, who is standing beside her, will. And Carl only owns one goldfish. Then whether Alice jumps or not, the world includes a heroic act. And it is better that Carl jump than that Alice do so, other things being equal, as Carl only has one goldfish depending on him. But it is absurd that Alice is forbidden from jumping in order that a man with fewer goldfish might do it in her place.

Question: How much of a difference in value can proportionality tolerate?

Response: I don’t know. And I suspect that this is one of those parameters in ethics that needs explaining.

A simple "construction" of non-measurable sets from coin-toss sequences

Here’s a simple “construction” of a non-measurable set out of coin-toss sequences, i.e., of an event that doesn’t have a well-defined probability, going back to Blackwell and Diaconis, but simplified by me not to use ultrafilters. I’m grateful to John Norton for drawing my attention to this.

Let Ω be the set of all countably infinite coin-toss sequences. If a and b are two such sequences, say that a ∼ b if and only if a and b differ only in finitely many places. Clearly ∼ is an equivalence relation (it is reflexive, symmetric and transitive).

For any infinite coin-toss sequence a, let ra be the reversed sequence: the one that is heads wherever a is tails and vice versa. For any set A of sequences, let rA be the set of the reversals of the sequences in A. Observe that we never have a ∼ ra (the two differ at every place, hence at infinitely many), and that U is an equivalence class under ∼ (i.e., a maximal set all of whose members are ∼-equivalent) if and only if rU is an equivalence class. Also, if U is an equivalence class, then rU ≠ U.
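A finite-prefix sketch (my own illustration, with 1 for heads and 0 for tails, showing only the first six places of the infinite sequences) of the reversal map and the relation ∼:

```python
# A finite-prefix illustration (mine): 1 = heads, 0 = tails; only the first
# six places of the infinite sequences are shown.
def reverse(seq):
    """The map r: flip heads and tails at every place."""
    return tuple(1 - x for x in seq)

def num_differences(a, b):
    return sum(x != y for x, y in zip(a, b))

a = (1, 0, 0, 1, 1, 0)
print(num_differences(a, reverse(a)) == len(a))  # True: a and ra differ at
# every place, so on infinite sequences a ~ ra never holds.

b = (0, 0, 0, 1, 1, 0)  # a with a single toss flipped
print(num_differences(a, b))  # 1: finitely many differences, so a ~ b
```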

Let C be the set of all unordered pairs {U, rU} where U is an equivalence class under ∼. (Note that every equivalence class lies in exactly one such unordered pair.) By the Axiom of Choice (for collections of two-membered sets), choose one member of each pair in C. Call the chosen member “selected”. Then let N be the union of all the selected sets.

Here are two cool properties of N:

  1. Every coin-toss sequence is in exactly one of N and rN.

  2. If a and b are coin-toss sequences that differ in only finitely many places, then a is in N if and only if b is in N.

We can now prove that N is not measurable. Suppose N is measurable. Then by symmetry P(rN)=P(N). By (1) and additivity, 1 = P(N)+P(rN), so P(N)=1/2. But by (2), N is a tail set, i.e., an event independent of any finite subset of the tosses. The Kolmogorov Zero-One Law says that every (measurable) tail set has probability 0 or 1. But that contradicts the fact that P(N)=1/2, so N cannot be measurable.

An interesting property of N is that intuitively we would think that P(N)=1/2, given that for every sequence a, exactly one of a and ra is in N. But if we do say that P(N)=1/2, then no finite number of observations of coin tosses provides any Bayesian information on whether the whole infinite sequence is in N, because by (2) no finite subsequence has any bearing on whether the whole sequence is in N. Thus, if we were to assign N the intuitive probability 1/2, then no matter what finite number of observations we made of coin tosses, our posterior probability that the sequence is in N would still have to be 1/2—we would not be getting any Bayesian convergence. This is another way to see that N is non-measurable—if it were measurable, it would violate Bayesian convergence theorems.

And this is another way of highlighting how non-measurability vitiates Bayesian reasoning (see also this).

We can now use Bayesian convergence to sketch a proof that N is saturated non-measurable, i.e., that if A ⊆ N is measurable, then P(A)=0 and if A ⊇ N is measurable, then P(A)=1. For suppose A ⊆ N is measurable. Suppose that we are sequentially observing coin tosses and forming posteriors for A. These posteriors cannot ever exceed 1/2. Here is why. For a coin toss sequence a, let rna be the sequence obtained by keeping the first n tosses fixed and reversing the rest of the tosses. For any finite sequence o1, ..., on of observations, and any infinite sequence a of coin-tosses compatible with these observations, at most one of a and rna is a member of N (this follows from (1) and the fact that ra ∈ N if and only if rna ∈ N by (2)). By symmetry P(A ∣ o1...on)=P(rnA ∣ rn(o1...on)) (where rnA is the result of applying rn to every member of A). But rn(o1...on) is the same as o1...on, so P(A ∣ o1...on)=P(rnA ∣ o1...on). But A and rnA are disjoint, so P(A ∣ o1...on)+P(rnA ∣ o1...on)≤1 by additivity, and hence P(A ∣ o1...on)≤1/2. Thus, the posteriors for A are always at most 1/2. By Bayesian convergence, however, almost surely the posteriors will converge to 1 or to 0, depending on whether the sequence being observed is actually in A. They cannot converge to 1, so the probability that the sequence is in A must be equal to 0. Thus, P(A)=0. The claim that if A ⊇ N is measurable then P(A)=1 is proved by noting that then Ω − A ⊆ rN (as rN is the complement of N), and so by the above argument with rN in place of N, we have P(Ω − A)=0 and thus P(A)=1.

Tuesday, November 7, 2017

Why might God refrain from creating?

Traditional Jewish and Christian theism holds that God didn’t have to create anything at all. But it is puzzling what motive a perfectly good being would have not to create anything. Here’s a cute (I think) answer:

  • If (and only if) God doesn’t create anything, then everything is God. And that’s a very valuable state of affairs.

Adding infinite guilt

Bob has the belief that there are infinitely many people in a parallel universe, and that they wear numbered jerseys: 1, 2, 3, …. He also believes that he has a system in a laboratory that can cause indigestion to any subset of these people that he can describe to a computer. Bob has good evidence for these beliefs and is (mirabile!) sane.

Consider four scenarios:

  1. Bob attempts to cause indigestion to all the odd-numbered people.

  2. Bob attempts to cause indigestion to all the people whose number is divisible by four.

  3. Bob attempts to cause indigestion to all the people whose number is either odd or divisible by four.

  4. Bob yesterday attempted to cause indigestion to all the odd-numbered people and on a later occasion to all the people whose number is divisible by four.

In each scenario, Bob has done something very bad, indeed apparently infinitely bad: he has attempted infinite mass sickening.

In scenarios 1-3, other things being equal, Bob’s guilt is equal, because the number of people he attempted to cause indigestion to is the same—a countable infinity.
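The claim that the three target sets are the same size can be made vivid with explicit enumerations (a sketch of my own, with hypothetical function names), each mapping the natural numbers onto one of the sets:

```python
# Sketch (my own illustration): enumerate the three target sets of
# scenarios 1-3, witnessing that each is the same countable infinity.

odds    = lambda n: 2 * n + 1      # n-th odd number: 1, 3, 5, ...
by_four = lambda n: 4 * (n + 1)    # n-th multiple of 4: 4, 8, 12, ...

def odds_or_by_four(n):
    """Interleave the two enumerations; since the two sets are disjoint,
    this hits each member of the union exactly once."""
    return odds(n // 2) if n % 2 == 0 else by_four(n // 2)

print([odds_or_by_four(n) for n in range(8)])   # 1, 4, 3, 8, 5, 12, 7, 16
```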

But now we have two arguments about how bad Bob’s action in scenario 4 is. On the one hand, in scenario 4 he has attempted to sicken the exact same people as in scenario 3. So, he is equally guilty in scenario 4 as in scenario 3.

On the other hand, in scenario 4, Bob is guilty of two wrong actions, the action of scenario 1 and that of scenario 2. Moreover, as we saw before, each of these actions on its own makes him just as guilty as the action in scenario 3 does. Doing two wrongs, even two infinite wrongs, is worse than just doing one, if they are all of the same magnitude. So in scenario 4, Bob is guiltier than in scenario 3. One becomes worse off for acquiring more guilt. But if 4 made Bob no guiltier than 3 would have, it would make him no guiltier than 1 would have. In that case, after committing the first wrong in 4, Bob would already have the guilt of 1, and so he would have no guilt-avoidance reason to refrain from the second wrong in 4, which is absurd.

How to resolve this? I think as follows: when accounting guilt, we should look at guilty acts of will rather than consequences or attempted consequences. In scenario 4, although the total attempted harm is the same as in each of scenarios 1-3, there are two guilty acts of will, and that makes Bob guiltier in scenario 4.

We could tell the story in 4 so that there is only one act of will. We could suppose that Bob can self-hypnotize so that today he orders his computer to sicken the odd-numbered people and tomorrow those whose number is divisible by four. In that case, there would be only one act of will, which would be less bad. It’s a bit weird to think that Bob might be better off morally for such self-hypnosis, but I think one can bite the bullet on that.

Evidence that I am dead

I just got evidence that I am dead, in an email that starts:

  Dear expired [organization] member,

You might think this is pretty weak evidence. Maybe “expired” doesn’t mean “dead” here. But the email continues:

  Thank you for your past support of [organization]. Your membership has recently expired, and we would like to take this opportunity to urge you to renew your membership.

But last year I acquired a life membership...

Sorry, I couldn't resist sharing this.

From a dualism to a theory of time

This argument is valid:

  1. Some human mental events are fundamental.

  2. No human mental event happens in an instant.

  3. If presentism is true, every fundamental event happens in an instant.

  4. So, presentism is not true.

Premise (1) is widely accepted by dualists. Premise (2) is very, very plausible. That leaves (3). Here is the thought. Given presentism, that a non-instantaneous event is happening is a conjunctive fact with one conjunct about what is happening now and another conjunct about what happened or will happen. Conjunctive facts are grounded in their conjuncts and hence not fundamental, and for the same reason the event would not be fundamental.

But lest four-dimensionalist dualists cheer, we can continue adding to the argument:

  5. If temporal-parts four-dimensionalism is true, every fundamental event happens in an instant.

  6. So, temporal-parts four-dimensionalism is not true.

For on temporal-parts four-dimensionalism, any temporally extended event will be grounded in its proper temporal parts.

The growing block dualist may be feeling pretty smug. But suppose that we currently have a temporally extended event E that started at t−2 and ends at the present moment t0. At an intermediate time t−1, only a proper part of E existed. A part is either partly grounded in the whole or the whole in the parts. Since the whole doesn’t exist at t−1, the part cannot be grounded in it. So the whole must be partly grounded in the part. But an event that is partly grounded in its part is not fundamental. Hence:

  7. If growing block is true, every fundamental event happens in an instant.

  8. So, growing block is not true.

There is one theory of time left. It is what one might call Aristotelian four-dimensionalism. Aristotelians think that wholes are prior to their parts. An Aristotelian four-dimensionalist thinks that temporal wholes are prior to their temporal parts, so that there are temporally extended fundamental events. We can then complete the argument:

  9. Either presentism, temporal-parts four-dimensionalism, growing block or Aristotelian four-dimensionalism is true.

  10. So, Aristotelian four-dimensionalism is true.

Monday, November 6, 2017

Statistically contrastive explanations of both heads and tails

Say that an explanation e of p rather than q is statistically contrastive if and only if P(p|e)>P(q|e).

For instance, suppose I rolled an indeterministic die and got a six. Then I can give a statistically contrastive explanation of why I rolled more than one (p) rather than rolling one (q). The explanation (e) is that I rolled a fair six-sided die. In that case: P(p|e)=5/6 > 1/6 = P(q|e). Suppose I had rolled a one. Then e would still have been an explanation of the outcome, but not a statistically contrastive one.
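The die probabilities in the example can be checked exactly (a small sketch of my own, computed from the six-point sample space):

```python
# Sketch (my own check): exact probabilities for the fair-die example.
from fractions import Fraction

outcomes = range(1, 7)
prob = lambda event: Fraction(sum(1 for o in outcomes if event(o)), 6)

p = prob(lambda o: o > 1)    # rolled more than one
q = prob(lambda o: o == 1)   # rolled one
print(p, q)                  # 5/6 1/6
```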

One might try to generalize the above remarks to conclude to this thesis:

  1. In indeterministic stochastic setups, there will always be a possible outcome that does not admit of a statistically contrastive explanation.

The intuitive argument for (1) is this. If one indeterministic stochastic outcome is p, either there is or is not a statistically contrastive explanation e of why p rather than not-p is the case. If there is no such statistically contrastive explanation, then the consequent of (1) is indeed true. Suppose that there is a statistically contrastive explanation e, and let q be the negation of p. Then P(p|e)>P(q|e). Thus, e is a statistically contrastive explanation of why p rather than q, but it is obvious that it cannot be a statistically contrastive explanation of why q rather than p.

The intuitive argument for (1) is logically invalid. For it only shows that e is not a statistically contrastive explanation for why q rather than p, while what needed to be shown is that there is no statistically contrastive explanation.

In fact, (1) is false. Here is a counterexample. The indeterministic stochastic situation is Alice’s flipping of a coin. There are two outcomes: heads and tails. But prior to the coin getting flipped, Bob uniformly chooses a random number r such that 0 < r < 1 and loads the coin in such a way that the chance of heads is r. Suppose that in the situation at hand r = 0.8. Let H be the heads outcome and T the tails outcome. Then here is a contrastive explanation for H rather than T:

  • e1: an unfair coin with chance 0.8 of heads was flipped.

Clearly P(H|e1)=0.8 > 0.2 = P(T|e1). But suppose that instead tails was obtained. We can give a contrastive explanation of that, too:

  • e2: an unfair coin with chance at least 0.2 of tails was flipped.

Given only e2, the chance of tails is somewhere between 0.2 and 1.0, with the distribution uniform. Thus, on average, given e2 the chance of tails will be 0.6: P(T|e2)=0.6. And P(H|e2)=1 − P(T|e2)=0.4. Thus, e2 is actually a statistically contrastive explanation of T. And note that something like this will work no matter what value r has as long as it’s strictly between 0 and 1.
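The computation P(T|e2)=0.6 can be checked by simulation (a Monte Carlo sketch of my own; the names are hypothetical):

```python
# Monte Carlo sketch (my own check, not from the post): r is uniform on
# (0,1); conditioning on e2 (the tails chance 1 - r is at least 0.2,
# i.e. r <= 0.8) should make tails come up with probability about 0.6.
import random

random.seed(0)
tails = trials = 0
for _ in range(1_000_000):
    r = random.random()            # Bob's chance of heads for this coin
    if 1 - r >= 0.2:               # condition on e2
        trials += 1
        if random.random() >= r:   # this flip lands tails
            tails += 1

print(round(tails / trials, 2))    # expect about 0.6
```

The estimate agrees with the exact conditional expectation of 1 − r given r ≤ 0.8, namely 0.6.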

It might still be arguable that given indeterministic stochastic situations, something will lack a statistically contrastive explanation. For instance, while we can give a statistically contrastive explanation of heads rather than tails, and one of tails rather than heads, it does not seem that we can give a statistically contrastive explanation of why the coin was loaded exactly to degree 0.8, since that outcome has zero probability. Of course, that’s an outcome of a different stochastic process than the coin flip one, so it doesn't support (1). And the argument needs to be more complicated than the invalid argument for (1).

Cheap Makey Makey alternative

The Makey Makey is a cool electronic gadget that lets kids make a USB controller out of any somewhat conductive stuff, like bananas, play dough, etc. Unfortunately, it's about $50 (there is also a $30 clone). Also, annoyingly, it requires a ground connection for the user. I made a capacitive version that costs about $3 using a $2 stm32f103c8 board. It emulates either a keyboard or a gamepad/joystick.

Here are instructions.

Projection and the imago Dei

There is some pleasing initial symmetry between how a theist (or at least Jew, Christian or Muslim) can explain features of human nature by invoking the doctrine that we are in the image of God and using this explanatory schema:

  1. Humans are (actually, normally or ideally) F because God is actually F

and how an atheist can explain features attributed to God by projection:

  2. The concept of God includes being actually F because humans are (actually, normally or ideally) F.

Note, however, that while schemata (1) and (2) are formally on par, schema (1) has the advantage that it has a broader explanatory scope than (2) does. Schema (1) explains a number of features (whether actual or normative) of the nature of all human beings, while schema (2) only explains a number of features of the thinking of a modest majority (the 55% who are monotheists) of human beings.

There is also another interesting asymmetry between (1) and (2). Theists can without any damage to their intellectual system embrace both (1) and a number of the instances of (2) that the atheist embraces, since given the imago Dei doctrine, projection of normative or ideal human features onto God can be expected to track truth with some probability. On the other hand, the atheist cannot embrace any instances of (1).

Note, too, that evolutionary explanations do not undercut (1), since there can be multiple correct explanations of one phenomenon. (This phenomenon is known to people working on Bayesian inference.)

Saturday, November 4, 2017

Neo-Aristotelian Perspectives on Contemporary Science

The collection Neo-Aristotelian Perspectives on Contemporary Science (eds: Simpson, Koons and Teh) is now available. It's divided into a physical sciences and a life sciences part.

My piece on the Traveling Forms interpretation is in the physical sciences part (interestingly, though, that interpretation is more about us than about physics).

Thursday, November 2, 2017

Four problems and a unified solution

A similar problem occurs in at least four different areas.

  1. Physics: What explains the values of the constants in the laws of nature?

  2. Ethics: What explains parameters in moral laws, such as the degree to which we should favor benefits to our parents over benefits to strangers?

  3. Epistemology: What explains parameters in epistemic principles, such as the parameters in how quickly we should take our evidence to justify inductive generalizations, or how much epistemic weight we should put on simplicity?

  4. Semantics: What explains where the lines are drawn for the extensions of our words?

There are some solutions that have a hope of working in some but not all the areas. For instance, a view on which there is a universe-spawning mechanism that induces random values of the constants in laws of nature solves the physics problem, but does little for the other three.

On the other hand, vagueness solutions to 2-4 have little hope of helping in the physics case. Actually, though, vagueness doesn’t help much in 2-4, because there will still be the question of explaining why the vague regions are where they are and why they are fuzzy in the way they are—we just shift the parameter question.

In some areas, there might be some hope of having a theory on which there are no objective parameters. For instance, Bayesianism holds that the parameters are set by the priors, and subjective Bayesianism then says that there are no objective priors. Non-realist ethical theories do something similar. But such a move in the case of physics is implausible.

In each area, there might be some hope that there are simple and elegant principles that of necessity give rise to and explain the values of the parameters. But that hope has yet to be borne out in any of the four cases.

In each area, one can opt for a brute necessity. But that should be a last resort.

In each area, there are things that can be said that simply shift the question about parameters to a similar question about other parameters. For instance, objective Bayesianism shifts the question of how much epistemic weight we should put on simplicity to the question of priors.

When the questions are so similar, there is significant value in giving a uniform solution. The theist can do that. She does so by opting for these views:

  1. Physics: God makes the universe have the fundamental laws of nature it does.

  2. Ethics: God institutes the fundamental moral principles.

  3. Epistemology: God institutes the fundamental epistemic principles for us.

  4. Semantics: God institutes some fundamental level of our language.

In each of the four cases there is a question of how God does this. And in each there is a “divine command” style answer and a “natural law” style answer, and likely others.

In physics, the “divine command” style answer is occasionalism; in ethics and epistemology it just is “divine command”; and in semantics it is a view on which God is the first speaker and his meanings for fundamental linguistic structures are normative. None of these appeal very much to me, and for the same reason: they all make the relevant features extrinsic to us.

In physics, the “natural law” answer is theistic Aristotelianism: laws supervene on the natures of things, and God chooses which natures to instantiate; theistic natural law is a well-developed ethical theory, and there are analogues in epistemology and semantics, albeit not very popular ones.

Wednesday, November 1, 2017

Theistic Natural Law and the Euthyphro Problem

Theistic Natural Law (TNL) theory seems to be subject to the Euthyphro problem much as divine command theory (DCT) is. On DCT, the Euthyphro problem takes the form of the question:

  1. Why did God command what he commanded rather than commanding otherwise?

On TNL, the Euthyphro problem takes the form of the question:

  2. Why did God create beings with the natures he did rather than creating beings with other natures?

In both cases, one can respond by talking of the essential goodness of God, by virtue of which he makes a good choice as to how to fittingly match the non-normative with the normative features of creatures. In the DCT case, God makes the match by benevolently choosing what sorts of creatures to create and what sorts of commands to give them. In the TNL case, God makes the match by benevolently choosing the non-deontic and deontic features of natures and then creating creatures with these natures. Thus, in the DCT case, God has reason to coordinate the sociality of creatures with the command to cooperate, while in the TNL case God has reason to actualize natures that either both include sociality and the duty to cooperate or to actualize natures that include neither.

So in what way is TNL better off than DCT with regard to the Euthyphro problem? The one thing I can think of in the vicinity is this: TNL allows for there to be deontic features that necessarily every nature includes, and it allows for there to be some deontic features of creatures that are entailed by the non-deontic features. For instance, perhaps every possible nature of an agent includes a prohibition against pointless imposition of torture, and every possible nature of a linguistic agent includes a prohibition against lying. But I am not sure this difference is really relevant to the Euthyphro problem.

I do prefer TNL to DCT, but not because of the Euthyphro problem. My reason for the preference is that many moral obligations appear to be intrinsic features of us.

Of course, the above arguments presuppose a particular picture of how natural law works. But I like that picture.

Captain Proton's Ray Gun

The kids and I are big Star Trek fans (well, the 5-year-old is just a fan, not a big fan, as yet), and my son wanted to have Captain Proton's Ray Gun. Captain Proton is a cheesy character in a fictional series of 1950s movies in Star Trek Voyager. So, I guess, he's a fictional fictional character. I found some photos of a prop, traced the images in Inkscape, exported to OpenSCAD, and made 3D printable files, which are here. I printed it (it prints in two halves that join together), but we have yet to paint it (may not paint it right away, as in silver and gray it will look too much like a real gun at a distance to use outside the house).

Tuesday, October 31, 2017

Infinite grounding regresses

Suppose, as seems possible, that every day for eternity you will toss a coin and get heads.

Then that you will get heads on every future day seems to be grounded in:

  1. You will get heads on day 1, and you will get heads on every day starting with day 2.

And the second conjunct of (1) seems to be grounded in:

  2. You will get heads on day 2, and you will get heads on every day starting with day 3.

And the second conjunct of (2) seems to be grounded in:

  3. You will get heads on day 3, and you will get heads on every day starting with day 4.

And so on.

So, it seems, infinite propositional grounding regresses are possible.

I suspect that infinite existential grounding regresses are not possible, though.

Monday, October 30, 2017

Counseling the lesser evil

A controversial principle in Catholic moral theology is the principle of “counseling the lesser evil”, sometimes confusingly (or confusedly) presented as the “principle of the lesser evil”. The principle is one that the Church has not pronounced on. (For a survey of major historical points, see this piece by Fr. Flannery.)

First, a clarification. Nobody in the debate thinks it is ever permissible to do the lesser evil. The lesser evil is still an evil, and it is never permissible to do evil, no matter what might result from it. The debate is very specifically the following. Suppose someone is determined to do an evil, and cannot be dissuaded from doing some evil or other. Is it permissible to counsel a lesser evil in order to redirect the person from a greater evil? For instance, if someone is about to murder you, and cannot be dissuaded from an evil course of action, are you permitted to counsel theft instead, as on some interpretations the ten men in Jeremiah 41:8 do? (But see quotations in Flannery for other interpretations.)

There is no question that if the potential murderer is redirected to theft, the theft will still be wrong, indeed quite possibly a mortal sin (depending on the amount stolen). The moral question about “the lesser evil” is not about the primary evildoer but about the counselor. On the one hand, it appears that if the counselor’s counsel is sincere, the counselor is wrongfully endorsing an evil—albeit less evil—course of action. Indeed, it seems that the counselor is even intending the evil, albeit as an alternative to a greater evil.

On the other hand, a number of people will have very strong intuitions that it is not wrong to say to a potential murderer “Don’t kill me: here, take my laptop!” (Note: I assume the coerced circumstances do not render this a valid gift, so the potential murderer will indeed be a thief by taking the laptop.)

Let me add that the argument I will give leaves open the question of the advisability of counseling the lesser evil. Often it may be better to inspire the evildoer to do the good thing rather than the lesser of the evils. Moreover, one needs to be extremely wary of any public counseling of the lesser evil, because it is apt to encourage people who are not determined on evil to do the lesser evil. I think it is unlikely that such counseling is often advisable.

So, here’s the argument. Start with this thought. Agents deliberate about options. As they do so, they come to favor some options over others. Eventually, as they narrow in on the decision, they favor one option over all the others. Moreover:

  1. If a deliberating agent in the end favors B over C, typically the agent will not choose C as a result of this deliberation.

There are at least two reasons for the “typically”. First, maybe the agent is irrational. Second, maybe there can be cases of circular favoring structures, so that the agent favors B over C, favors A over B, and favors C over A, so that she ends up choosing C anyway.

Next observe this:

  2. If option B is better than option C, then it is good for a deliberating agent to favor B over C.

This is true regardless of whether B and C are both good options, or B is good and C is bad, or both B and C are bad. It is simply a good thing to favor the better over the worse.

With (1) and (2) in mind, consider a case where the agent has three options: a good A (e.g., going away), a lesser evil B (e.g., theft) and a greater evil C (e.g., murder). By (2) it is good if the agent favors B over C. Suppose the counselor strives to lead the agent who is determined on evil to favor B over C (e.g., by emphasizing the resale value of the laptop, or the likelihood that the police will investigate a murder more thoroughly than a theft, or the greater sinfulness of murder, depending on what is more likely to impress the particular agent). Then the conditions for the Principle of Double Effect can be satisfied on the side of the counselor.

  1. The counselor is pursuing a good end, the agent’s not choosing C.

  2. The counselor’s chosen means to the good end is the agent’s favoring B over C. By (1), such favoring is likely to be effective in fulfilling the counselor’s good end (namely, the agent’s not choosing C) and by (2), such favoring is good.

  3. There is a foreseen but not intended evil of the agent opting for B. It is not intended, because the counselor’s plan of action will be successful whether the agent opts for B (as foreseen) or for A (an unexpected bonus).

  4. The good of the agent’s not choosing C is proportionate to the foreseen evil of the agent’s choosing B, and there is, we may suppose, no better way of achieving the good.

In particular, there is no intention that the agent choose B, or even choose B over C. The intention is that the agent favor B over C, which is all that is typically needed, given (1), for the agent not to choose C.

Note 1: This provides a defense of pretty strong cases of counseling the lesser evil. The argument works even in cases where the agent being counseled wouldn’t have thought of evil B prior to the counseling (that is the case in Jeremiah 41:8). It might even work where B is impossible prior to the counseling. For instance you might unlock your safe in order to make it easier for the agent to steal your money in place of killing you. In so doing, your end is still that C not be done, and the means is that B is favored over C.

Note 2: This solves the problem of bribes.

Note 3: I am not very confident of any of the above.

Friday, October 27, 2017

Bribes and conditional intentions

You are trying to get a permit that you are both morally and legally entitled to, but an official requires a bribe to give you the permit. Are you permitted to pay the bribe?

I always thought: Of course!

But now I think this is more difficult than it has seemed to me. Initially, it seems that your action plan is very simple:

  1. Give the bribe in order that the official give you the permit.

But suppose that you pay the bribe and the official never notices the money slipped onto her desk, though when you lean over her desk, from that angle you look just like her nephew, so she gives you the permit out of nepotism. In that case, while you got what you wanted, you didn’t fulfill your plan–your bribery was not a success. That shows that (1) is only a part of your action plan. More fully, your plan is:

  2. Give the bribe in order that the official be motivated by it (in the usual way bribes motivate) to give you the permit.

But now it seems to be a moral evil that an official be motivated by a bribe to do something, even if the thing she is motivated to do is the right thing. So in setting oneself on plan (2), it seems one intends something immoral.

I wonder if this isn’t a case similar to asking a murderer: “If you are going to kill me, kill me painlessly” (which one might even put in the simple phrase “Kill me painlessly”, with everybody understanding that the request is conditional). In that case, your intention is not that the murderer kill you painlessly, but that:

  3. If the murderer kills you, she kills you painlessly.

And that conditional isn’t a bad thing.

One makes the request of the murderer on the expectation–but certainly neither intention nor hope!–that the antecedent of the conditional will turn out to be true. Nonetheless, one does not intend the consequent.

Perhaps in the bribery case one has a similar intention:

  4. If the official isn’t going to be motivated by duty, she will be motivated by the bribe.

One then gives the bribe on the expectation–but neither intention nor hope–that the official will be unmotivated by duty.

But things aren’t quite that simple. Suppose that I prefer Coca Cola to cocaine, and in a really shady restaurant I place this order:

  5. I’ll have a Coca Cola, but if you can’t do that, then I’ll have some cocaine.

Here I’ve done something wrong: I’ve conditionally procured illegal drugs. But how to distinguish (5) from (3) and (4)?

One psychological difference is that in (5), presumably I desire the cocaine, just not as much as I desire the Coca Cola. But in (3) and (4), I don’t desire the painless killing or the taking of the bribe. (Compare this case: Malefactors will forcibly give you Coca Cola, cocaine or cyanide. You say “I’ll have a Coca Cola, but if you can’t do that, I’ll have some cocaine.” Here, I presume, you don’t desire the cocaine, but it’s better than the cyanide. That’s more like (3) and (4) than like the restaurant version of (5).)

But I don’t really want to rest the relevant moral distinctions on desires.

Here’s what I’d like to say, but I have a hard time making it work out. In (5)–the restaurant coke/cocaine order–when the antecedent of the conditional is met, your will stands behind the consequent. In (3)–the killing case–your will doesn’t stand behind the consequent even when the antecedent of the conditional is met. Even when it is inevitable that you will be killed, you don’t intend to die, but only not to die painfully. But I worry about this. Suppose then you die painlessly. Isn’t your intention not to die painfully satisfied by the painless death, and hence wasn’t the painless death the means to avoiding the painful death? And in the bribery case you intend not to have your request denied, but wasn’t the taking of the bribe the means to the request’s being granted?

Perhaps there is something much simpler, though, that doesn’t involve intentions so much. Perhaps it’s not morally wrong for the official to give the permit because of the bribe. What is wrong is for the official to give the permit solely because of the bribe. But you needn’t intend that. On the contrary, you might have emphasized to the official that you are morally and legally entitled to the permit. There are many ways the bribe can work. It might be the sole motive. But it might also be a partial motive. Or it might be a defeater for a defeater: "It's a lot of trouble to give permits, so I won't bother. But if I get a bribe, then the trouble is worth it." Of course, that still leaves the probably purely hypothetical case where you know that the only way the bribe will work is by being the sole motive. But now it's not so clear that it's permissible to give it.

And in the case of the murderer, you are trying to dissuade her from killing you painfully by drawing her attention to the argument that option C is bad because there is a better–albeit still bad–option B. She might then go for option B or she might go for the good option A. Either way, she refrains from doing C. There is, in fact, a way in which the murder case is easier than the bribe case, because your being killed painlessly is not a means to your avoiding the painful death–it is what occurs in its place. If I am offered coffee or water and I go for the water, my drinking water isn’t a means to avoiding coffee, though it happens in its place.

Thursday, October 26, 2017

A two-stage view of proportionality in the Principle of Double Effect

A question about Double Effect that hasn’t been sufficiently handled is in what way, if any, the good effects that flow from one’s action’s bad effects are screened off when judging proportionality.

It seems that some sort of screening off is needed. Consider this case. An evildoer says that he’ll free five innocents if you kill one innocent; otherwise, he’ll kill them. So you shoot at the innocent’s shirt covering his chest, intending to learn how the fabric is rent by the bullet (knowledge is a good thing!), while foreseeing without intending that the innocent will die, and also foreseeing without intending that the evildoer will free the five.

This is clearly a travesty of double effect reasoning. But the only condition that isn’t obviously satisfied is the proportionality condition. So let’s think about proportionality. Here are two ways to think here:

  1. All good and bad effects count for proportionality. Thus, both the death of the one and the saving of the five count, as does the trivial good of knowing how the shirt rips. Thus proportionality is satisfied: the goods are proportionate.

  2. The good effects that are causally downstream of the bad effects of one’s action don’t count. On this view, it is the intended effect that must be proportionate to the unintended bad effects. Thus, the death of the one counts, and the trivial good of knowing about how the fabric rips counts, but the saving of the five does not count, as it is not intended (if it were intended, the act would be impermissible, of course). But of course the good of knowing how the fabric rips is not proportionate to the death of the one innocent.

Option 2 fits better with the intuition that the initial case was a travesty of double effect reasoning.

But option 2 doesn’t seem to be the right one in all other cases. Suppose I am guarding five innocents sentenced to death by an evil dictator. If I free them, I will be killed. I also know that unless the innocents leave the country, they will be recaptured soon. The innocents are planning to bribe the border officials, which is quite likely to work. But it will be wrong for the border officials to let them escape, because the border officials will have the false belief that these people are justly sentenced, and will be letting them out only because they are venal.

It seems permissible to free the innocents. Here, the unintended but foreseen bad effect is my own death. The good effect is the innocents’ being allowed out of prison. But it seems that if we don’t get to consider effects downstream of bad stuff, we don’t get to consider the fact that the innocents will escape the country, as that’s downstream of the border officials’ venal acceptance of bribes.

Here’s one theory I developed today in conversation with a graduate student. Proportionality is very complex. Perhaps there are two stages.

Stage I: Are the intended good effect and the foreseen bad effects in the same ballpark? This is a very loose proportionality consideration. One life and ten lives are in the same ballpark, but knowing how the fabric rips is far outside that ballpark. If the intended good effect is so much less than the foreseen bad effects that they are not in the same ballpark, proportionality is not met. Here, the good effects that are downstream of the bad effects don’t count.

If the Stage I proportionality condition is violated, the act is wrong. If it’s met, I proceed to Stage II.

Stage II: Now I get to do a proportionality calculation taking into account all the foreseen goods and bads, regardless of how they are connected to intentions.

The proportionality condition now requires a positive evaluation by means of both stages.

On this two-stage theory, shooting the innocent’s shirt in the initial case is wrong, as proportionality is violated at Stage I. On the other hand, the release of the prisoners may be permissible. For the freedom of the innocents is in the same ballpark as my life—it’s a big ballpark—even if they are going to be recaptured. It’s not a trivial good, like the taste of a mint.
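For concreteness, the two-stage test can be put in algorithmic form. The sketch below is only a toy model of mine: the numeric weights and the “same ballpark” threshold are illustrative stipulations, not anything the theory itself supplies.

```python
# Toy model of the two-stage proportionality test. All numbers are
# illustrative stipulations, not part of the theory itself.

def stage_one(intended_good, foreseen_bads, ballpark_factor=100):
    """Stage I: the intended good must be in the same ballpark as the
    foreseen bads; goods downstream of the bad effects don't count here."""
    return intended_good * ballpark_factor >= sum(foreseen_bads)

def stage_two(all_goods, all_bads):
    """Stage II: weigh all foreseen goods and bads, regardless of how
    they are connected to intentions."""
    return sum(all_goods) >= sum(all_bads)

def proportionate(intended_good, foreseen_bads, downstream_goods):
    """Proportionality requires a positive verdict at both stages."""
    return (stage_one(intended_good, foreseen_bads) and
            stage_two([intended_good] + downstream_goods, foreseen_bads))

LIFE, TRIVIAL = 1000, 1

# Shirt case: trivial intended good (how the fabric rips), one death
# foreseen, five lives saved downstream of the death. Fails at Stage I.
print(proportionate(TRIVIAL, [LIFE], [5 * LIFE]))   # False

# Prison case: freeing the five (intended), my death foreseen, their
# escape downstream of the officials' venality. Passes both stages.
print(proportionate(5 * LIFE, [LIFE], [5 * LIFE]))  # True
```

The point of the structure, not the numbers: the five saved lives are excluded at Stage I (they are downstream of the death) but counted at Stage II, which is exactly how the two cases come apart.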

I am not happy with this. It’s too complicated!

Certamen practice machine

This summer, the big kids and I built a practice machine for the Junior Classical League's Certamen competition, based on an Arduino (clone) Mega 2560. It's a "practice machine" as it's not officially approved for tournament use (and perhaps can't be without the clicker shape being changed). Cost is about $80 (including filament), as compared to $400+ for the official version.

Build instructions are here. Code is here.

Monday, October 23, 2017

Existence, causation and individuation

Suppose a cause C produces horses, in the following way:

  • When C produces a horse, a horse instantly comes into existence made out of some mass of non-equine matter M.

  • The genetic makeup of the resulting horse is randomly distributed over all DNA compatible with being a horse.

(Imagine lightning striking a bog and randomly turning the bog mass into a horse.)

So now suppose that in world w1, a female Arabian, Green Lightning, comes into existence as a result of C, while in w2, a male Exmoor pony, Tigger, comes into existence as a result of C.

Presumably, Green Lightning and Tigger are numerically distinct horses. Why are they distinct? Presumably because they are qualitatively different—specifically, because their DNA is different. If in w1 and w2, C respectively produced horses that were exactly alike out of M, those horses would have to have been numerically identical. (Haecceitists will disagree.)

But now we have a puzzle for Aristotelians. Both Green Lightning and Tigger are of the same species. (If you think that breeds or sexes make for different metaphysical species, modify the example and make them be of the same sex and breed, but still very different from each other.) Let the Fs be the qualitative features that Green Lightning and Tigger initially differ in.

  1. The Fs are accidents in the Aristotelian sense: they are accidental to the horses’ horsehood, which is their form.

(They may not be accidents in the contemporary modal sense. It may be that it is impossible for a horse to be of another sex than it is.)


  2. The Fs make Green Lightning be distinct from Tigger.

  3. If what makes Green Lightning be distinct from Tigger are the Fs, then the Fs help make Green Lightning be Green Lightning.

  4. Nothing that helps make x be x can be explanatorily posterior to x.

  5. So, the Fs are not explanatorily posterior to Green Lightning. (2-4)

  6. The accidents of x are explanatorily posterior to x.

  7. So, the Fs are explanatorily posterior to Green Lightning. (1,6)

  8. Contradiction! (5,7)

The case where C makes a horse come into existence from non-equine matter makes the above argument a bit more vivid. In the ordinary case of equine reproduction, a sperm and egg contribute their DNA and give rise to the DNA of the offspring. There it could be argued that the relevant thing that helps make the resulting horse be the horse it is is the DNA in the sperm and the DNA in the egg.

One could conclude that a horse can’t come into existence from matter that doesn’t already contain implicit in it the DNA of the horse. But that is implausible, especially since God could create a horse even without any matter.

This puzzle worries me a lot. I initially thought it was a special puzzle for four-dimensionalist temporal-parts Aristotelianism, because it showed that the first temporal part of the horse was explanatorily prior to the whole, whereas Aristotelianism forbids parts to be prior to wholes. But then I realized that the same point could be made about accidents without reference to four-dimensionalism.

Here is my best solution. There is something about Green Lightning that is prior to her being Green Lightning. It is her being caused by C to exist with the Fs (i.e., her being caused by C to exist as a female Arabian, etc.). Admittedly, that sounds just as much like an accident of Green Lightning as the Fs do. It’s not Green Lightning’s form, so what else could it be but an accident? There is no answer in Aristotle, but there is a potential answer in Aquinas: this could be Green Lightning’s act of being, her esse. And it is not crazy to take Green Lightning’s esse to be something that (a) is prior to Green Lightning, (b) is something Green Lightning could not exist without, and (c) is an individuator of Green Lightning.

This reminds me of a line of thought in the Principle of Sufficient Reason book where I argued that the esse of a contingent being is its being caused. If my present solution is correct, that was only a partial description of the esse of a contingent being. And I think there may well be an argument for the principle that ex nihilo nihil fit in the vicinity, just as in the PSR book—for it is absurd to think that anything contingent could be prior to x if x has no cause, while this esse is something contingent.

Murder by slowdown?

Zeno wants Alice dead and he has the following plan. He slows down Alice’s functioning—say, by cooling her or by sending her around the earth on a spaceship so fast that relativistic time dilation does the job—so much that each second of Alice’s internal time takes a billion years of external time. In six seconds of Alice’s internal time, she’s dead, because the sun runs out of hydrogen and turns into a red giant.

Did Zeno kill Alice or did the sun kill Alice? Both: Zeno kills Alice by shifting her future life into a spatiotemporal position where that life would be destroyed by the sun. This is akin to sending Alice now into the sun on a speeding rocket.

(I am not a lawyer, but I expect Zeno could only be convicted of attempted murder, since a conviction for murder requires the victim to be dead; similarly, I assume that an 80-year-old person who gives someone a poison that takes forty years to work can only be convicted of attempted murder, because by the time the poison does its work, the murderer will be dead.)

But now imagine that Zeno lives in a universe where the earth will be habitable forever. He sets up an automated system that slows down Alice’s internal time to such a degree that in the first year of external time, Alice’s internal time moves ahead by only 3 seconds; in the next external year, it moves ahead by 1.5 seconds; in the next year, by 0.75 seconds; and so on. What happens? Well, Alice still cannot have more than six seconds of life ahead of her. In n years of external time, she will have had 6 − 6/2^n seconds of internal time.

So just as in the first scenario, Zeno has ensured that Alice has less than six seconds of internal time left. It sure sounds like murder. But wait! In the second scenario, it seems that Alice never dies: she is alive this year, just sluggish; she will be alive next year, though even more sluggish; and so on.

But Alice will be dead in exactly six seconds of internal time. So what will be the cause of death? The unfortunate misalignment between Alice’s internal time and the external time of the universe, together with the universe running out of time “once year ω rolls around”? Maybe. I am not sure. This is paradoxical.

There is a way of getting out of this paradox. Suppose internal time must be discrete. Then to slow down Alice’s time means to space out the discrete ticks of her time. Suppose for simplicity that Alice has a hundred ticks per internal second. Then in the first year, she will have 300 ticks. Some time in year ten, the 599th tick of Alice’s future life happens. And the 600th tick will never happen, since that would require her internal time to reach a full six seconds. So, the gradual slowdown story is impossible: the tick rate is zero after the tenth year. The best (or worst?) Zeno can do is ensure that the 599th tick of Alice’s life is the last one. But if that’s what he does, then he causes her death by ensuring that the 600th tick never happens, and there is no gradual slowdown paradox.
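The tick arithmetic can be checked mechanically. Here is a minimal sketch of mine, using exact rational arithmetic so that the approach to the six-second limit is not masked by floating-point rounding:

```python
from fractions import Fraction

def internal_seconds(n):
    """Alice's total internal time after n external years: 6 - 6/2^n seconds."""
    return 6 - Fraction(6, 2**n)

def ticks(n):
    """Completed ticks after n external years, at 100 ticks per internal second."""
    return int(100 * internal_seconds(n))

# Internal time approaches, but never reaches, six seconds.
assert all(internal_seconds(n) < 6 for n in range(1, 200))

# The 599th tick occurs some time in year ten, and the 600th never occurs.
first_year_with_599_ticks = next(n for n in range(1, 200) if ticks(n) >= 599)
print(first_year_with_599_ticks)              # 10
print(max(ticks(n) for n in range(1, 200)))   # 599
```

So under the discrete picture the slowdown scheme simply stalls: tick 599 arrives in year ten, and no tick ever arrives after it.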

Friday, October 20, 2017

Why my present existence can't depend on future events

I find very persuasive arguments like this:

  1. If theory T is true, then whether I exist now depends on some future events.

  2. Facts about what exists now do not depend on future events.

  3. So, theory T is not true.

For instance, some four-dimensionalist solutions to problems of fission, according to which the number of people there are now depends on whether fission will occur, are subject to this criticism.

But I’ve had a nagging worry about arguments like this, that in accepting (2), I am not being faithful to my eternalist four-dimensionalist convictions: why should the present aspects of the four-dimensional me have this sort of priority? Moreover, I didn’t really have an argument for (2). Until today.

Here is an argument for (2). Start with this.

  4. If facts about my present existence depend on future events, then facts about my present existence depend on future events that happen to me.

For instance, suppose that whether I exist now depends on whether some surgeon cuts my brain in half tomorrow. Well, then, some of the events that my present existence depends on will be events that happen entirely to someone else—for instance, whether the surgeon gets to work on time. But other events, such as the cutting or non-cutting of the brain, will happen to me. It would be absurd to think that facts about my present existence or identity depend on future events that happen entirely to something other than me.

Then add:

  5. Any events that happen to me in the future depend on my present existence.

For, such events presuppose my future existence, and my future existence is caused by my present existence.

  6. Circular dependence is impossible.

  7. So, facts about my present existence do not depend on future events.

Note that (6) is a very strong premise, and is one place the argument can get attacked. Many people think that you can have circular dependence when the dependence in the two directions is of a different sort. In the case at hand, facts about my present existence might depend constitutively on future events, while the future events depend causally on my present existence. Nonetheless, I think (6) is true, even if the dependence in the two directions is of a different sort.

Another move is to describe the future events on which my existence depends without reference to me. Don’t describe what the surgeon does as the splitting of my brain, but as the splitting of brain x. Then we could say that the future event of the surgeon’s splitting my brain does depend on my present existence, but my present existence doesn’t depend on that event. Instead, it depends on the future event of the surgeon’s splitting brain x. This objection denies (5): while the splitting of my brain depends on my present existence, the splitting of brain x does not, and yet it happens to me.

I think this is mistaken. The splitting of brain x depends on the future existence of that brain, and that brain depends on me, because parts depend on wholes—that is a deep Aristotelian premise I accept. Thus I think (5) is true. An event that happens to me is an event that involves at least a part of me, and none of my parts could exist without me. Granted, a brain like mine could exist without me. But token events are individuated in part by the things caught up in them. A splitting of a brain merely like mine would be a different event from the splitting of this particular brain. And it is a token event that my present existence is supposed to depend on.

The above argument won’t move non-Aristotelians who think that wholes depend on parts rather than parts depending on wholes. But it works for me. And hence it assuages the worry that in accepting (2), I am being unfaithful to my views about time.

All that said, I don’t really want to affirm (2) in an exceptionless way. If I am a time-traveller born in the year 2200, then my present existence does depend on what will happen in the future. But it only depends on what will happen in the external-time future, not on what will happen in my internal-time future. And, crucially, I think time-travel is only possible when it doesn’t result in causal loops. So even if I am a time-traveller from the future, I cannot affect anything that is causally relevant to whether I will be born, etc. This probably means that if time-travel is possible, it is possible only in very carefully limited settings.

Thursday, October 19, 2017

Conciliationism is false or trivial

Suppose you and I are adding up a column of expenses, but for some reason our only interest is the last digit. You and I know that we are epistemic peers. We’ve both just calculated the last digit, and Carl asks: Is the last digit a one? You and I speak up at the same time. I say: “Very likely; my credence that it’s a one is 0.99.” You say: “Probably not; my credence that it’s a one is 0.27.”

Conciliationists now seem to say that I should lower my credence and you should raise yours.

But now suppose that you determine the credence for the last digit as follows: You do the addition three times, each time knowing that you have an independent 1/10 chance of error. Then you assign your credence as the result of a Bayesian calculation with equal priors over all ten options for the last digit. And since I’m your epistemic peer, I do it the same way. Moreover, while we’re poor at adding digits, we’re really good at Bayesianism—maybe we’ve just memorized a lot of Bayes’ factor related tables. So we don’t make mistakes in Bayesian calculations, but we do at addition.

Now I can reverse engineer your answer. If you say your credence in a one is 0.27, then I know that exactly one of your three calculations yielded a one. For if none of your calculations had yielded a one, your credence that the digit was a one would have been very low, and if two of your calculations had yielded a one, your credence would have been quite high. There are now two options: either you came up with three different answers, or you had a one and then two answers that were the same. In the latter case, it turns out that your credence in a one would have been fairly low, around 0.08. So it must be that your calculations yielded a one and then two other, distinct numbers.

And you can reverse engineer my answer. The only way my credence could be as high as 0.99 is if all three of my calculations yielded a one. So now we both know that my calculations were 1, 1, 1 and yours were 1, x, y, where 1, x, y are all distinct. So now you aggregate this data, and I do the same as your peer. We have six calculations yielding 1, 1, 1, 1, x, y. A Bayesian analysis, given that the chance of error in each calculation is 0.1, yields a posterior probability of 0.997.

So, your credence did go up. But mine went up too. Thus we can have cases where the aggregation of a high credence with a low credence results in an even higher credence.
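The reverse engineering can be checked with a quick Bayesian calculation. The sketch below is mine, and it assumes a particular error model (a miscalculation lands uniformly on one of the nine wrong digits); under that assumption the exact posteriors come out somewhat differently from the illustrative figures above, but the qualitative pattern is the same: pooling the low and the high credence yields a credence higher than either.

```python
def posterior_one(results, p_err=0.1):
    """Posterior probability that the true last digit is 1, given independent
    calculation results, a uniform prior over the ten digits, and an assumed
    error model on which a miscalculation is uniform over the 9 wrong digits."""
    def likelihood(truth):
        p = 1.0
        for r in results:
            p *= (1 - p_err) if r == truth else p_err / 9
        return p
    return likelihood(1) / sum(likelihood(t) for t in range(10))

low = posterior_one([1, 4, 7])              # a one plus two other distinct digits
same = posterior_one([1, 4, 4])             # a one plus a repeated digit: fairly low
high = posterior_one([1, 1, 1])             # three ones: very high
pooled = posterior_one([1, 4, 7, 1, 1, 1])  # all six calculations together

print(round(low, 3), round(high, 5), round(pooled, 6))
assert same < low < 0.5 < high < pooled     # both credences go *up* on pooling
```

The repeated-digit case (`same`) confirms the reverse-engineering step: a one plus two matching answers would have produced a much lower credence than the one actually reported.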

Of course, you may say that the case is a cheat. You and I are not epistemic peers, because we don’t have the same evidence: you have the evidence of your calculations and I have the evidence of mine. But if this counts as a difference of evidence, then the standard example conciliationists give, that of different people splitting a bill in a restaurant, is also not a case of epistemic peerhood. And if the results of internal calculations count as evidence for purposes of peerhood, then there just can’t be any peers who disagree, and conciliationism is trivial.

Wednesday, October 18, 2017

From the finite to the countable

Causal finitism lets you give a metaphysical definition of the finite. Here’s something I just noticed. This yields a metaphysical definition of the countable (phrased in terms of pluralities rather than sets):

  1. The xs are countable provided that it is possible to have a total ordering on the xs such that if a is any of the xs, then there are only finitely many xs smaller (in that ordering) than a.

Here’s an intuitive argument that this definition fits with the usual mathematical one if we have an independently adequate notion of natural numbers. Let N be the natural numbers. Then if the xs are countable, for any a among the xs, define f(a) to be the number of xs smaller than a. Since all finite pluralities are numbered by the natural numbers, f(a) is a natural number. Moreover, f is one-to-one. For suppose that a ≠ b are both xs. By total ordering, either a is less than b or b is less than a. If a is less than b, there will be fewer things less than a than there are less than b, since (i) anything less than a is less than b but not conversely, and (ii) if you take something away from a finite collection, you get a smaller collection. Thus, if a is less than b, then f(a)<f(b). Conversely, if b is less than a, then f(b)<f(a). In either case, f(a)≠f(b), and so f is one-to-one. Since there is a one-to-one map from the xs to the natural numbers, there are only countably many xs.
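Here is a finite-sample illustration of the argument (my example, not from the post): order the integers as 0 < 1 < −1 < 2 < −2 < …, a total order in which every element has only finitely many predecessors, and check that the resulting f is one-to-one.

```python
def key(a):
    """Position of integer a in the ordering 0, 1, -1, 2, -2, ..."""
    return 2 * a - 1 if a > 0 else -2 * a

def f(a, sample):
    """Number of elements of the sample below a in the ordering:
    the map the argument above claims is one-to-one."""
    return sum(1 for b in sample if key(b) < key(a))

sample = list(range(-50, 51))
values = [f(a, sample) for a in sample]
print(len(values) == len(set(values)))  # True: f is one-to-one on the sample
```

On the full plurality of integers, the same f is just the usual bijection with the natural numbers, which is why the integers come out countable on the definition.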

This means that if causal finitism can solve the problem of how to define the finite, we get a solution to the problem of defining the countable as a bonus.

One of the big picture things I’ve lately been thinking about is that, more generally, the concept of the finite is foundationally important and prior to mathematics. Descartes realized this, and he thought that we needed the concept of God to get the concept of the infinite in order to get the concept of the finite in turn. I am not sure we need the concept of God for this purpose.