Wednesday, September 20, 2017

Probabilistic Counterexampler

Every so often someone asks me if some piece of probabilistic reasoning works. For instance, today I got a query from a grad student whether

  1. P(A|C)>P(A|B) implies P(A|B ∨ C)>P(A|B).

Of course, I could think about it each time somebody asks me something. But why think when a computer can solve the problem by brute force?

So, last spring I wrote a quick and dirty python program that looks for counterexamples to questions like that simply by considering situations with three dice, and iterating over all the possible combinations of subsets A, B and C of the state space (with some reduction due to symmetries).

The program is still quick and dirty, but at least the premises and conclusions are no longer hardcoded. You can get it here.

For instance, for the query above, you can run:

python probab-reasoning.py "P(a,c)>P(a,b)" "P(a,b|c)>P(a,b)" 

(The vertical bars are disjunction, not conditional probability. Conditional probability uses commas.) The result is:

a={1}, b={1, 2}, c={1}
a={1}, b={1, 2, 3}, c={1}
a={1}, b={1, 2, 3}, c={1, 2}
a={1}, b={1, 2, 3}, c={1, 3}
a={1}, b={1, 2, 3}, c={1, 4}
...

So, lots of counterexamples. On the other hand, you can do this:

python probab-reasoning.py "P(a)*P(b)==P(a&b)" "P(b)>0" "P(a,b)==P(a)" 

and it will tell you no counterexamples were found. Of course, that doesn’t prove that the result is true, but in this case it is.

The general operation is that you install python and use a commandline to run:

python probab-reasoning.py premise1 premise2 ... conclusion

You can use the variables a, b and c, and the operations & (conjunction), | (disjunction) and ~ (negation) between the events. You can use conditional probability P(a,b) and unconditional probability P(a). You can use standard arithmetical and comparison operators on probabilities. Make sure that you use python’s operators. For instance, equality is ==, not =. You should also use python’s boolean operations when you are not working with events: e.g., “P(a)==1 and P(b)==0.5”.

Any premise or conclusion that requires conditionalization on a probability zero event to evaluate automatically counts as false.

You can use up to five single-letter variables (other than P), and you can also specify the number of sides the die has prior to listing the premises. E.g.:

python probab-reasoning.py 8 "P(a)*P(b)==P(a&b)" "P(b)>0" "P(a,b)==P(a)" 
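For the curious, the core of such a brute-force search is simple. Here is a minimal sketch (my toy reconstruction for this post, not the actual probab-reasoning.py, and with a single die rather than three) that looks for counterexamples to the grad student's query above:

```python
from itertools import chain, combinations

# A toy reconstruction (not the actual probab-reasoning.py): one fair
# n-sided die, events are subsets of the state space, probabilities uniform.

def subsets(space):
    s = sorted(space)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def cond_p(a, b):
    # P(a|b) under the uniform measure; None if P(b) = 0
    return len(a & b) / len(b) if b else None

def counterexamples(n_sides=4):
    # look for a, b, c with P(a|c) > P(a|b) but not P(a | b-or-c) > P(a|b)
    space = range(1, n_sides + 1)
    found = []
    for a in subsets(space):
        for b in subsets(space):
            for c in subsets(space):
                p1, p2 = cond_p(a, c), cond_p(a, b)
                if p1 is None or p2 is None:
                    # a premise conditions on a null event: it counts as
                    # false, so this triple cannot be a counterexample
                    continue
                p3 = cond_p(a, b | c)  # b is nonempty, so b | c is too
                if p1 > p2 and not p3 > p2:
                    found.append((set(a), set(b), set(c)))
    return found

cexs = counterexamples()
print(len(cexs), "counterexamples; first:", cexs[0])
```

Run as-is, the first counterexample it finds is a={1}, b={1, 2}, c={1}, matching the first line of the output listed above.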

Monday, September 18, 2017

Two ways of being vicious

Many of the times when Hitler made a wrong decision, his character thereby deteriorated and he became more vicious. Let’s imagine that Hitler was a decent young man at age 19. Now imagine Schmitler, who lived a life externally just like Hitler’s, but on Twin Earth. Until age 19, Schmitler’s life was just like Hitler’s. But from then on, each time Schmitler made a wrong choice, aliens or angels or God intervened and made sure that the moral deterioration that normally follows upon wrong action never occurred. As it happens, however, Schmitler still made the same choices Hitler did, and made them with freedom and clear understanding of their wickedness.

Thus, presumably unlike Hitler, Schmitler did not morally fall, one wrong action at a time, to the point of a genocidal character. Instead, he committed a series of wrong actions, culminating in genocide, but each action was committed from the same base level of virtue and vice, the same level that both he and Hitler had at age 19. This is improbable, but in a large enough universe all sorts of improbable things will happen.

So, now, here is the oddity. Since Schmitler’s level of virtue and vice at the depth of his moral depredations was the same as at age 19, and at age 19 both he and Hitler were decent young men (or so I assume), it seems we cannot say that Schmitler was a vicious man even while he was committing genocidal atrocities. And yet Schmitler was fully responsible for these atrocities, perhaps more so than Hitler.

I want to say that Schmitler is spectacularly vicious without having much in the way of vices, indeed while having more virtue than vice (he was, I assume, a decent young man), even though that sounds like a contradiction. Schmitler is spectacularly vicious because of what he has done.

This doesn’t sound right, though. Actions are episodic. Being vicious is a state. Hitler was a vicious man while innocently walking his dog on a nice spring day in 1944, even when not doing any wrongs. And we can explain why Hitler was vicious then: he had a character with very nasty vices, even while he was not exercising the vices. But how can we say that Schmitler was vicious then?

Here’s my best answer. Even on that seemingly innocent walk, Schmitler and Hitler were both failing to repent of their evil deeds, failing to set out on the road of reconciliation with their victims. A continuing failure to repent is not something episodic, but something more like a state.

If this is right, then there are two ways of being vicious: by having vices and by being an unrepentant evildoer.

(A difficult question Robert Garcia once asked me is relevant, though: What should we say about people who have done bad things but suffered amnesia?)

Some arguments about the existence of a good theodicy

This argument is valid:

  1. If no good theodicy can be given, some virtuous people’s lives are worthless.

  2. No virtuous person’s life is worthless.

  3. So, a good theodicy can be given.

The thought behind 1 is that unless we accept the sorts of claims that theodicists make about the value of virtue or the value of existence or about an afterlife, some virtuous people live lives of such great suffering, and are so far ignored or worse by others, that their lives are worthless. But once one accepts those sorts of claims, then a good theodicy can be given.

Here is an argument for 2:

  4. It would be offensive to a virtuous person that her life is worthless.

  5. The truth is not offensive to a virtuous person.

  6. So, no virtuous person’s life is worthless.

Perhaps, too, an argument similar to Kant’s arguments about God can be made. We ought to at least hope that each virtuous person’s life has value on balance. But to hope for that is to hope for something like a theodicy. So we ought to hope for something like a theodicy.

The above arguments may not be all that compelling. But at least they counter the argument in the other direction, that it is offensive to say that someone’s sufferings have a theodicy.

Here is yet another argument.

  7. That there is no good theodicy is an utterly depressing claim.

  8. One ought not advocate utterly depressing claims, without very strong moral reason.

  9. There is no very strong moral reason to advocate that there is no good theodicy.

  10. So, one ought not advocate that there is no good theodicy.

The grounds for 8 are pragmatic: utterly depressing claims tend to utterly depress people, and being utterly depressed is very bad. One needs very strong reason to do something that causes a very bad state of affairs. I suppose the main controversial thesis here is 9. Someone who thinks religion is a great evil might deny 9.

Let's not exaggerate the centrality of virtue to ethics

Virtues are important. They are useful: they internalize the moral law and allow us to make the right decision quickly, which we often need to do. They aren’t just time-savers: they shine light on the issues we deliberate over. And the development of virtue allows our freedom to include the two valuable poles that are otherwise in tension: (a) self-origination (via alternate possibilities available when we are developing virtue) and (b) reliable rightness of action. This in turn allows our development of virtue to reflect the self-origination and perfect reliability in divine freedom.

But while virtues are important, they are not essential to ethics. We can imagine beings that only ever make a single, but truly momentous, decision. They come into existence with a clear understanding of the issues involved, and they make their decision, without any habituation before or after. That decision could be a moral one, with a wrong option, a merely permissible option, and a supererogatory option. They would be somewhat like Aquinas’ angels.

We could even imagine beings that make frequent moral choices, like we do, but whose nature does not lead them to habituate in the direction of virtue or vice. Perhaps throughout his life whenever Bill decides whether to keep an onerous promise or not, there is a 90% chance that he will freely decide rightly and a 10% chance that he will freely decide wrongly, a chance he is born and dies with. A society of such beings would be rather alien in many practices. For instance, members of that society could not be held responsible for their character, but only for their choices. Punishment could still be retributive and motivational (for the chance of wrong action might go down when there are extrinsic reasons against wrongdoing). I think such beings would tend to have lower culpability for wrongdoing than we do. For typically when I do wrong as a middle-aged adult, I am doubly guilty for the wrong: (a) I am guilty for the particular wrong choice that I made, and (b) I am guilty for not having yet transformed my character to the point where that choice was not an option. (There are two reasons we hold children less responsible: first, their understanding is less developed, and, second, they haven’t had much time to grow in virtue.)

Nonetheless, while such virtue-less beings would be less responsible, and we wouldn’t want to be them or live among them, they would still have some responsibility, and moral concepts could apply to them.

Saturday, September 16, 2017

Adding a USB charging port to an elliptical machine

Last night I added a USB charging port to our elliptical machine, using a $0.70 buck converter, so that we can exercise while watching TV on a tablet even when the tablet’s battery is running low. Here are instructions.

Note, too, how the tablet is held in place with 3D printed holders. My next elliptical upgrade project will be to make it part of a USB game controller (the other part will be a Wii Nunchuk) so that one can control speed in games with speed of movement.

Friday, September 15, 2017

Four-dimensionalism and caring about identity

In normal situations, diachronic psychological connections and personal identity go together. A view introduced by Parfit is that when the two come apart, what we care about are the connections and not the identity.


This view seems to me to be deeply implausible from a four-dimensional point of view. I am a four-dimensional thing. This four-dimensional thing should prudentially care about what happens to it, and only about what happens to it. The red-and-black four-dimensional thing in the diagram here (up/down represents time; one spatial dimension is omitted) should care about what happens to the red-and-black four-dimensional thing, all along its temporal trunk. This judgment seems completely unaffected by learning that the dark slice represents an episode of amnesia, and that no memories pass from the bottom half to the upper half.

Or take a case of symmetric fission, and suppose that the facts of identity are such that I am the red four-dimensional thing in the diagram on the right. Suppose both branches have full memories of what happens before the fission event. If I am the red four-dimensional thing, I should prudentially care about what happens to the red four-dimensional thing. What happens to the green thing on the right is irrelevant, even if it happens to have in it memories of the pre-split portion of me.

The same is true if the correct account of identity in fission is Parfit’s account, on which one perishes in a split. On this account, if I am the red four-dimensional person in the diagram on the left, surely I should prudentially care only about what happens to the red four-dimensional thing; if I am the green person, I should prudentially care only about what happens to the green one; and if I am the blue one, I should prudentially care only about what happens to the blue one. The fact that both the green and the blue people remember what happened to the red person neither make the green and blue people responsible for what the red person did nor make it prudent for the red person to care about what happens to the green and blue people.

This four-dimensional way of thinking just isn’t how the discussion is normally phrased. The discussion is normally framed in terms of us finding ourselves at some time—perhaps a time before the split in the last diagram—and wondering which future states we should care about. The usual framing is implicitly three-dimensionalist: what should I, a three-dimensional thing at this time, prudentially care about?

But there is an obvious response to my line of thought. My line of thought makes it seem like I am transtemporally caring about what happens. But that’s not right, not even if four-dimensionalism is true. Even if I am four-dimensional, my cares occur at slices. So on four-dimensionalism, the real question isn’t what I, the four-dimensional entity, should prudentially care about, but what my three-dimensional slices, existent at different times, should care about. And once put that way, the obviousness of the fact that if I am the red thing, I should care about what happens to the red thing disappears. For it is not obvious that a slice of the red thing should care only about what happens to other slices of the red thing. Indeed, it is quite compelling to think that the psychological connections between slices A and B matter more than the fact that A and B are in fact both parts of the same entity. (Compare: the psychological connections between me and you would matter more than the fact that you and I are both parts of the same nation, say.) The correct picture is the one here, where the question is whether the opaque red slice should care about the opaque green and opaque blue slices.

In fact, in this four-dimensionalist context, it’s not quite correct to put the Parfit view as “psychological connections matter more than identity”. For identity doesn’t obtain between different slices. Rather, what obtains is co-parthood, an obviously less significant relation.

However, this response, it seems to me, depends on a very common but wrongheaded version of four-dimensionalism. It is I that care, feel and think at different times. My slices don’t care, don’t feel and don’t think. Otherwise, there would be too many carers, feelers and thinkers. If one must have slices in the picture (and I don’t know that that is so), the slices might engage in activities that ground my caring, my feeling and my thinking. But these grounding activities are not caring, feeling or thinking. Similarly, the slices are not responsive to reasons: I am responsive to reasons. The slices might engage in activity that grounds my responsiveness to reasons, but that’s all.

So the question is what cares I prudentially should have at different times. And the answer is obvious: they should be cares about what happens to me at different times.

About the graphics: The images are generated using mikicon’s CC-by-3.0 licensed Gingerbread icon from the Noun Project, exported through this Inkscape plugin and turned into an OpenSCAD program (you will also need my tubemesh library).

Thursday, September 14, 2017

Agents, patients and natural law

Thanks to Adam Myers’ insightful comments, I’ve been thinking about how natural law ethics concerns natures in two ways: on the side of the agent qua agent and on the side of the patient qua patient.

Companionship is good for humans and bad for intelligent sharks, let’s suppose. This means that we have reasons to promote companionship among humans and to hamper companionship among intelligent sharks. That’s a difference in reasons based on a difference in the patients’ nature. Next, let’s suppose that intelligent sharks by nature have a higher degree of self-concern vs. other-concern than humans do. Then the degree to which one has an obligation to promote the very same good–say, the companionship of Socrates–will vary depending on whether one is human or a shark. That’s a difference in reasons based on a difference in the agents’ nature.

I suspect it would make natural law ethics clearer if natural lawyers were always clear on what is due to the agent’s nature and what is due to the patient’s nature, even if in fact their interest were solely in cases where the agent and patient are both human.

Consider, for instance, this plausible thesis:

  • I should typically prioritize my understanding over my fun.

Suppose the thesis is true. But now it’s really interesting to ask if this is true due to my nature qua agent or my nature qua patient. If I should prioritize my understanding over my fun solely because of my nature qua patient, then we could have this situation: Both I and an alien of some particular fun-loving sort should prioritize my understanding over my fun, but likewise both I and the alien should prioritize the alien’s fun over the alien’s understanding, since human understanding is more important than human fun, while the fun of a being like the alien is more important than the understanding of such a being. On this picture, the nature of the patient specifies which goods are more central to a patient of that nature. On the other hand, if I should prioritize my understanding over my fun solely because of my nature qua agent, then quite possibly we are in the interesting position that I should prioritize my understanding over my fun, but also that I should prioritize the alien’s understanding over the alien’s fun, while the alien should prioritize both its and my fun over its and my understanding. For me promoting understanding is a priority while for the alien promoting fun is a priority, regardless of whose understanding and fun they are.

And of course we do have actual and morally relevant cases of interaction across natures:

  • God and humans

  • Angels and humans

  • Humans and brute animals.

Wednesday, September 13, 2017

Probabilities and Boolean operations

When people question the axioms of probability, they may neglect to question the assumption that if A and B have probabilities, so do A-or-B and A-and-B. (Maybe this is because in the textbooks those assumptions are often not enumerated in the neat lists of the “three Kolmogorov axioms”, but are given in a block of text in a preamble.)

First note that as long as one keeps the assumption that if A has a probability, so does not-A, then by De Morgan’s, any counterexample to conjunctions having a probability will yield a counterexample to disjunctions having a probability. So I’ll focus on conjunctions.

I’m thinking that there is reason to question these axioms, in fact two reasons. The first reason, one that I am a bit less impressed with, is that limiting frequency frequentism can easily violate these two axioms. It is easy to come up with cases where A-type events have a limiting frequency, B-type ones do, too, but (A-and-B)-type ones don’t. I’ve argued before that so much the worse for frequentism, but now I am not so sure in light of the second reason.

The second reason is cases like this. You have an event C that has no probability whatsoever–maybe it’s an event of a dart hitting a nonmeasurable set–and a fair indeterministic coin flip causally independent of C. Let H and T be the events of the coin flip being heads or tails. Then let A be the event:

  • (H and C) or (T and not C).

Here’s an argument that P(A)=1/2. Imagine a coin with erasable heads and tails images, and imagine that a trickster prior to flipping a coin is going to decide, using some procedure or other, whether to erase the heads and tails images on the coin and draw them on the other side. “Clearly” (as we philosophers say when we have no further argument!) as long as the trickster has no way of seeing the future, the trickster’s trick will not affect the probabilities of heads or tails. She can’t make the coin be any less or more likely to land heads by changing which side heads lies on. But that’s basically what’s going on in A: we are asking what the probability of heads is, with the convention that if C doesn’t happen, then we’ll have relabeled the two sides.

Another argument that P(A)=1/2 is this (due to a comment by Ian). Either C happens or it doesn’t. No matter which is the case, A has a chance 1/2 of happening.

So A has probability 1/2. But now what is the probability of A-and-H? It is the same as the probability of C-and-H, which by independence is half of the probability of C, and the latter probability is undefined. Half of something undefined is still undefined, so A-and-H has an undefined probability, even though A has a perfectly reasonable probability of 1/2.
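One way to see the force of the relabeling argument: pretend C has some sharp probability p (a stand-in; the whole point is that C actually has none) and check that the value of p washes out of P(A). A quick sketch:

```python
# A toy check of the relabeling argument: pretend C has a sharp
# probability p (a stand-in; on the view in the post, C has none) and
# verify that p washes out of P(A), where A = (H and C) or (T and not C)
# and H is an independent fair coin.

def p_A(p_C):
    p_H = 0.5
    # independence: P(H and C) = P(H)P(C), P(T and not C) = P(T)(1 - P(C)),
    # and the two disjuncts are mutually exclusive
    return p_H * p_C + (1 - p_H) * (1 - p_C)

for p_C in [0.0, 0.1, 0.37, 0.5, 0.99, 1.0]:
    assert abs(p_A(p_C) - 0.5) < 1e-12
print("P(A) = 1/2 whatever value P(C) is given")
```

Of course this cannot model a genuinely non-measurable C; it only shows that P(A) is insensitive to what probability C would have, which is the intuition behind both arguments above.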

A lot of this is nicely handled by interval-valued theories of probability. For we can assign to C the interval [0, 1], and assign to H the sharp probability [1/2, 1/2], and off to the races we go: A has a sharp probability as does H, but their conjunction does not. This is good motivation for interval-valued theories of probability.
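To illustrate, here is my own minimal sketch of the interval idea (not any particular system from the literature), using the Fréchet bounds, which hold for any two events regardless of independence:

```python
# A minimal sketch of an interval-valued assignment (my own toy
# formalization, not a standard library). The Frechet bounds
#   max(0, p + q - 1) <= P(X & Y) <= min(p, q)
# hold for any two events; applied at interval endpoints they show how
# two sharp probabilities can have a merely interval-valued conjunction.

def conj_interval(x, y):
    (xl, xu), (yl, yu) = x, y
    return (max(0.0, xl + yl - 1.0), min(xu, yu))

H = (0.5, 0.5)   # sharp: fair coin
A = (0.5, 0.5)   # sharp, by the argument in the post
C = (0.0, 1.0)   # the non-measurable event: total ignorance

print(conj_interval(A, H))  # (0.0, 0.5): no sharp probability for A & H
print(conj_interval(C, H))  # (0.0, 0.5): likewise for C & H
```

This matches the post: since A-and-H is the same event as C-and-H, and by independence its probability would be half of C’s, the interval [0, 1] for C yields exactly [0, 1/2] for the conjunction, even though A itself is sharply 1/2.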

Tuesday, September 12, 2017

Numerical experimentation and truth in mathematics

Is mathematics about proof or truth?

Sometimes mathematicians perform numerical experiments with computers. Goldbach’s Conjecture says that every even integer n greater than two is the sum of two primes. Numerical experiments have been performed that verified that this is true for every even integer from 4 to 4 × 10^18.
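As a toy version of such an experiment (a sketch checking a far smaller range than the published searches):

```python
# A small version of the numerical experiment: verify Goldbach's
# Conjecture for every even n in a modest range (the published searches
# go to 4 * 10**18; this sketch just goes to 10,000).

def primes_up_to(limit):
    # Sieve of Eratosthenes: sieve[k] is True iff k is prime
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

def goldbach_witness(n, sieve):
    # return a prime p with p and n - p both prime, or None if none exists
    for p in range(2, n // 2 + 1):
        if sieve[p] and sieve[n - p]:
            return p
    return None

LIMIT = 10_000
sieve = primes_up_to(LIMIT)
assert all(goldbach_witness(n, sieve) is not None
           for n in range(4, LIMIT + 1, 2))
print(f"Goldbach verified for every even n in [4, {LIMIT}]")
```

Note that each successful check does double duty: the witness pair is itself the skeleton of a proof of G(n), which is the point made below about provability.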

Let G(n) be the statement that n is the sum of two primes, and let’s restrict ourselves to talking about even n greater than two. So, we have evidence that:

  1. For an impressive sample of values of n, G(n) is true.

This gives us very good inductive evidence that:

  2. For all n, G(n) is true.

And hence:

  3. It is true that: for all n, G(n). I.e., Goldbach’s Conjecture is true.

Can we say a similar thing about provability? The numerical experiments do indeed yield a provability analogue of (1):

  4. For an impressive sample of values of n, G(n) is provable.

For if G(n) is true, then G(n) is provable. The proof would proceed by exhibiting the two primes that add up to n, checking their primeness and proving that they add up to n, all of which can be done. We can now inductively conclude the analogue of (2):

  5. For all n, G(n) is provable.

But here is something interesting. While we can swap the order of the “For all n” and the “is true” operator in (2) and obtain (3), it is logically invalid to swap the order of the “For all n” and the “is provable” operator in (5) to obtain:

  6. It is provable that: for all n, G(n). I.e., Goldbach’s Conjecture is provable.

It is quite possible to have a statement such that (a) for every individual n it is provable, but (b) it is not provable that it holds for every n. (Take a Goedel sentence g that basically says “I am not provable”. For each positive integer n, let H(n) be the statement that n isn’t the Goedel number of a proof of g. Then if g is in fact true, then for each n, H(n) is provably true, since whether n encodes a proof of g is a matter of simple formal verification, but it is not provable that for all n, H(n) is true, since then g would be provable.)

Now, it is the case that (5) is evidence for (6). For there is a decent chance that if Goldbach’s conjecture is true, then it is provable. But we really don’t have much of a handle on how big that “decent chance” is, so we lose a lot of probability when we go from the inductively verified (5) to (6).

In other words, if we take the numerical experiments to give us lots of confidence in something about Goldbach’s conjecture, then that something is truth, not provability.

Furthermore, even if we are willing to tolerate the loss of probability in going from (5) to (6), the most compelling probabilistic route from (5) to (6) seems to take a detour through truth: if G(n) is provable for each n, then Goldbach’s Conjecture is true, and if it’s true, it’s probably provable.

So the practice of numerical experimentation supports the idea that mathematics is after truth. This reminds me of some arguments for scientific realism.

Presentism and multiverses

  1. It is possible to have an island universe whose timeline has no temporal connection to our timeline.

  2. If presentism is true, it is not possible to have something that has no temporal connection to our timeline.

  3. So, presentism is not true.

Presentism and classical theism

  1. If presentism is true, then everything that exists, exists presently.

  2. Anything that exists presently is temporal.

  3. God exists.

  4. So, if presentism is true, then God is temporal.

  5. But God is not temporal.

  6. So, presentism is not true.

Some presentists will be happy to embrace the thesis that God is temporal. But what about presentist classical theists? I suppose they will have to deny (1). Maybe they can replace it with:

  7. If presentism is true, then everything temporal that exists, exists presently.

Presentism is no longer an elegant thesis about the nature of existence, though.

Maybe a better move for the presentist is to deny (2)? There is some reason to do that. God while not being spatial is everywhere. Similarly God is everywhen, and hence he is in the present, too. But I am not sure if being in the present is the same as existing presently.

Monday, September 11, 2017

Supertasks and empirical verification of non-measurability

I have this obsession with probability and non-measurable events—events to which a probability cannot be attached. A Bayesian might think that this obsession is silly, because non-measurable events are just too wild and crazy to come up in practice in any reasonably imaginable situation.

Of course, a lot depends on what “reasonably imaginable” means. But here is something I can imagine, though only by denying one of my favorite philosophical doctrines, causal finitism. I have a Thomson’s Lamp, i.e., a lamp with a toggle switch that can survive infinitely many togglings. I have access to it every day at the following times: 10:30, 10:45, 10:52.5, and so on. Each day, at 10:00 the lamp is off, and nobody else has access to the lamp. At each time when I have access to the lamp, I can either toggle or not toggle its switch.

I now experiment with the lamp by trying out various supertasks (perhaps by programming a supertask machine), during which various combinations of toggling and not toggling happen. For instance, I observe that if I don’t ever toggle the switch, the lamp stays off. If I toggle it a finite number of times, it’s on when that number is odd and off when that number is even. I also notice the following regularities about cases where an infinite number of togglings happens:

  1. The same sequence (e.g., toggle at 10:30, don’t toggle at 10:45, toggle at 10:52.5, etc.) always produces the same result.

  2. Reversing a finite number of decisions in a sequence produces the same outcome when an even number of decisions is reversed, and the opposite outcome when an odd number of decisions is reversed.

(Of course, 1 is a special case of 2.) How fun! I conclude that 1 and 2 are always going to be true.
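For what it’s worth, in the finite case regularities 1 and 2 are just parity facts, easily checked by brute force (the infinite case, of course, is precisely what such a check cannot reach):

```python
from itertools import product

# In the finite case the lamp regularities are parity facts; this sketch
# checks them by brute force over all short toggling sequences (the
# infinite case is exactly what resists such simulation).

def lamp_on(seq):
    # lamp starts off; each True in seq is a toggle
    return sum(seq) % 2 == 1

for seq in product([False, True], repeat=8):
    for i in range(8):
        flipped = list(seq)
        flipped[i] = not flipped[i]
        # reversing one decision reverses the outcome
        # (the odd case of regularity 2)
        assert lamp_on(flipped) != lamp_on(seq)
print("parity regularities verified for all 256 sequences of length 8")
```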

Now I set up a supertask machine. It will toss a fair coin just prior to each of my lamp access times, and it will toggle the switch if the coin is heads and not toggle it if it is tails.

Question: What is the probability that the lamp will be on at 11?

“Answer:” Given 1 and 2, the event that the lamp will be on at 11 is not measurable with respect to the standard (completed) product measure on a countable infinity of coin tosses. (See note 1 here.)

So, given supertasks (and hence the falsity of causal finitism), we could find ourselves in a position where we would have to deal with a non-measurable set.

Natural law love-first metaethics

Start with this Aristotelian thought:

  1. Everything should fulfill its nature, and every “should” fact is a norm specifying the norm of fulfilling one’s nature.

But not every “should” is a moral should. Sheep should have four legs, but a three-legged sheep is not morally defective. Here’s a hypothesis:

  2. A thing morally should A if and only if that thing has a will with an overriding norm of loving everything, and the norm that the thing should A is a specification of that norm.

On this theory, moral norms are norms for the same Aristotelian reason that all other norms are norms—all norms derive from the natures of things. But at the same time, the metaethics is a metaethics of love. What renders a norm a moral norm is its content, that it is a specification of the norm that one should love everything.

Why is it, on this theory, that I should be affable to my neighbor? Because such affability is a specification of the norm of fulfilling my nature. But that needn’t be my practical reason for the affability: rather, that is the explanation of why I should be affable (cf. this). What makes the norm of affability to my neighbor a moral norm? That I have a norm of love of everything, and that the norm of affability specifies that norm.

And we can add:

  3. A thing is a moral agent if and only if it has a will with an overriding norm of loving everything.

One could, perhaps, imagine beings that have a will with an overriding norm of self-benefit. Such beings wouldn’t be moral agents. But we are moral agents. In fact, I suspect the following is true:

  4. Loving everything is the only proper function of the human will.

Given the tight Aristotelian connection between proper function and norms:

  5. All norms on the human will are specifications of the norm of loving everything.

This metaethical theory I think is both a natural law theory and a love-first metaethics. It is a natural law theory in respect of the sources of normativity, and it is a love-first metaethics in respect of the account of moral norms. Thus it marries Aristotle with the Gospel, which is a good thing. I kind of like this theory, though I have a nagging suspicion it has problems.

Reductive accounts of matter

I’ve toyed with identifying materiality with spatiality (much as Descartes did). But here’s another very different reductive idea. Maybe to be material is to have energy. Energy on this view is a physical property, maybe a functional one and maybe a primitive one.

If this view is right, then one might have worlds where there are extended objects in space, but where there is no matter because the physics of these objects is one that doesn’t have room or need for energy.

Note that the sense of “matter” involved here is one on which fields, like the electromagnetic one, are material. I think that in the philosophical usage of “material” and “matter”, this is the right answer. If it turned out that our minds were identical with the electromagnetic fields in our brains, that would surely be a vindication of materialism rather than of dualism.

Now, here’s something I’m worrying about when I think about matter, at least after my rejection of Aristotelian matter. There seem to be multiple properties that are co-extensive with materiality in our world:

  • spatiality

  • energy

  • subjection to the laws of physics (and here there are two variants: subjection to our laws of physics, and subjection to some laws of physics or other; the latter might be circular, though, because maybe “physics” is what governs matter?).

Identifying matter with one or more of them yields a different concept of materiality, with different answers to modal questions. And now I wonder if the question of what matter is is a substantive one or a merely verbal one? On the Aristotelian picture, it was clearly a substantive question. But apart from that picture, it’s looking more and more like a merely verbal question to me.

Non-measurable sets and intuition

Here’s an interesting reason to accept the existence of non-measurable sets (and hence whatever weak fragment of the Axiom of Choice that existence depends on). A basic family of mathematical results in analysis says that measurable real-valued functions on the real line are “close to” being continuous, i.e., that they can be approximated by continuous functions in some appropriate sense. But it is intuitive to think that there “should” be real-valued functions on the real line that are not close to being continuous—there “should” be functions that are very, very messy. So, intuitively, there should be non-measurable functions, and hence non-measurable sets.
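The approximation result alluded to here is presumably Lusin’s theorem, which in one standard form says:

```latex
% Lusin's theorem (one standard form)
\textbf{Theorem.} Let $f : [a,b] \to \mathbb{R}$ be Lebesgue measurable and let
$\varepsilon > 0$. Then there is a compact set $E \subseteq [a,b]$ with
$m([a,b] \setminus E) < \varepsilon$ such that the restriction $f|_E$ is
continuous. Equivalently, there is a continuous $g : [a,b] \to \mathbb{R}$
with $m\{x : f(x) \neq g(x)\} < \varepsilon$.
```

So a measurable function agrees with a continuous one off a set of arbitrarily small measure; the intuition in the post is that there “should” be functions messier than that allows.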

Friday, September 8, 2017

A defense of natural law eudaimonism

My main objection to natural law ethics has for a long time been that it looks egoistic because it is eudaimonistic. One version of that worry is the “one thought too many” objection: You should just do good to your fellow humans because they are who they are, because they are your fellow human beings, or something like that, but definitely not because doing so leads to your flourishing.

I think there is a nice—and probably well-known to people other than me—response to this version of the worry, and to many similar “one thought too many” worries. To put this “one thought too many” worry more abstractly, the worry is that the metaethics will infect the reasons for action in an unacceptable way. But the response should simply be that, first, what metaethics asks is this question:

  1. What makes the reasons for action be reasons for action?

Here, read “reasons” factively as “good reasons” or even “good moral reasons” (I don’t actually distinguish the two, but many do), not as motivations. And, second, insofar as R is my reason for my action, I am acting on account of R, not on account of R being a reason. Compare: what causes the fire is the match, not the match’s being a cause.

Thus, the natural lawyer should say that what makes the fact that an action promotes the good of my neighbor be a reason is that I flourish (in part) by intentionally (under this description) promoting the good of my neighbor. But the reason for the action is that the action promotes the good of my neighbor, not that I flourish by intentionally promoting the good of my neighbor. The natural law answer to the metaethics question (1) is this:

  2. R is on balance a reason for action if and only if, and if so then because, I flourish by acting on R.

We do in fact flourish by intentionally promoting the good of our neighbor. Note that (2) does not by itself yield any egoism in our motivations. We could imagine selfless beings that flourish only insofar as they are intentionally promoting the good of their neighbor as a final end, and who are blighted insofar as they are intentionally promoting their own good or flourishing. We are, of course, not such selfless beings, but we don’t learn the fact that we are not such beings from (2). In fact, (2) is fully logically compatible with us being such beings. Hence, the metaethical theory (2) cannot by itself give rise to the “one thought too many” worry I started the post with. (Of course, some natural lawyers will go beyond (2). They may say that in fact our happiness is the end of all our actions. If so, then I think they are subject to the “one thought too many” worry.)

It is important to add a little bit to the above story. While it is true that “this benefits my friend” is typically reason enough, and that I don’t need to act on the second order fact that “this benefitting my friend is a reason”, we also do have such second order reasons. That there is a reason for an action is itself a reason for action. A parent might tell a child: “You have good reason to do this, but I can’t explain the reason right now.” In that case, the child could well be acting on the second-order reason that there is a first-order reason. (The child could also be acting on a first-order reason to please the parent.)

Here is another kind of case. I start off without any belief about whether R is a reason for action, and R leaves me cold. Maybe I am completely insensitive to considerations of privacy, and the fact that an action promotes someone’s privacy just leaves me completely cold. But I observe my virtuous friends, and see that they are acting on reasons like R, and I notice that their so acting contributes to what I admire about them. I conclude that R is in fact a good reason for action. But that’s purely intellectual. I am still left quite cold and unmotivated by the fact that some proposed action A falls under R. But what I can do at this point is to act on the second-order reason that A falls under a good reason. I can even say what that good reason is. But I cannot act on it itself, because it leaves me cold.

These are, however, non-ideal cases. If I know that R is a good reason, I should strive to form my will to be motivated by R. It will be better to act on R than to act on the knowledge that R is a reason. And thinking about these cases makes the response to the “one thought too many” worry about natural law even more compelling, I think. It does promote my flourishing to promote my flourishing, though I think that it doesn’t promote my flourishing as well as promoting the flourishing of others does. So that kindliness to others promotes my flourishing is a reason for benefiting others, just not as good a reason as that it benefits others. But such “not as good reasons” are important for our moral development: we are not yet in the ideal state, and so that “one thought too many” is still needed.

This helps make me feel a lot better about natural law ethics. Not quite enough to embrace it, though.

Thursday, September 7, 2017

Two kinds of non-measurable events

Non-measurable events are ones to which the probability function in the situation assigns no probability. Philosophically speaking, non-measurable events come in two varieties:

  1. Non-measurable events that should not have any probability assignment.

  2. Non-measurable events that should have a probability assignment.

Type (1) non-measurable events are the kinds of weird events that can be constructed from the Hausdorff and Banach-Tarski paradoxes, as well as perhaps (this is less clear) the Vitali non-measurable sets.

But I think there are also type (2) non-measurable events relative to standard choices of probability functions. For instance, suppose that in each universe of an infinite multiverse a fair coin is tossed countably infinitely often.

How likely is it that in at least one universe all the coin tosses are heads? If the universes form a countable infinity, classical probability theory gives an answer: zero. But if the universes form an uncountable infinity, classical probability theory gives no answer at all—the standard completed product measure makes the event be non-measurable. However, intuitively, there should be an answer in at least some cases. If the number of universes is much larger than the number of possible countable sequences of coin tosses (i.e., is much larger than 2^ω), we would expect the probability to be 1 or close to it. We can coherently extend the standard probability function to give that answer. But we can also coherently extend it to give a different answer, including the answer that the probability of an all-heads universe is zero, even if the number of universes is a gigantic infinite cardinality.
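The zero answer in the countable case follows by a standard computation:

```latex
% All-heads in a single universe is a null event:
P(\text{first } n \text{ tosses heads}) = 2^{-n}
\quad\Rightarrow\quad
P(\text{all tosses heads}) = \lim_{n\to\infty} 2^{-n} = 0.
% And countably many null events together still have probability zero,
% by countable subadditivity:
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) \le \sum_{i=1}^{\infty} P(A_i) = 0,
\qquad A_i = \{\text{universe } i \text{ is all heads}\}.
```

It is exactly the second step that fails for an uncountable family of universes, which is why classical probability theory goes silent there.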

We don’t want to just make up an answer here. We want the answer to be derivable in some way resembling the proof of the theorem that if you toss a coin infinitely many times, you’ve got probability 1 of getting heads at least once.

I suppose we could take it to be a metaphysical axiom that if you have K disjoint collections each with M coin tosses, then if K and M are infinite and K > M, then with probability one at least one collection yields all heads. But it would be nice to have more than just intuition here, and in similar problems.

Wednesday, September 6, 2017

A problem for some Humeans

Suppose that a lot of otherwise ordinary coins come into existence ex nihilo for no cause at all. Then whether a given coin lies heads or tails up is independent of how all the other coins lie in the sense that no information about the other coins will give you any data about how this one lies.

It is crucial here that the coins came into existence causelessly. If the coins came off an assembly line, and a large sample were all heads-up, we would have good reason to think that the causal process favored that arrangement and hence that the next coin to be examined will also be heads-up.
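The contrast between causeless coins and assembly-line coins can be modeled as a contrast between two priors. This is only an illustrative sketch, not anything from the post: the causeless case is modeled as i.i.d. fair coins, and the assembly-line case as coins sharing an unknown bias with a uniform prior (so Laplace’s rule of succession applies).

```python
# Two toy models of how the coins lie (illustrative assumptions):
# 1) Causeless coins: i.i.d. fair. Observing n heads in a row tells you
#    nothing about the next coin.
# 2) Assembly-line coins: a common unknown bias p, uniform prior on p.
#    Laplace's rule of succession gives P(next = H | n heads) = (n+1)/(n+2).

def pred_iid(n_heads):
    # Independence: past observations are evidentially irrelevant.
    return 0.5

def pred_common_cause(n_heads):
    # Posterior predictive under a uniform prior on the bias
    # (rule of succession, with n_heads heads in n_heads tosses so far).
    return (n_heads + 1) / (n_heads + 2)

for n in [0, 10, 1000]:
    print(n, pred_iid(n), pred_common_cause(n))
```

On the first model the predictive probability stays at 1/2 however many heads-up coins are examined, which is the independence thesis of the post; on the second it climbs toward 1, which is why the assembly-line inference is reasonable.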

But now suppose that I know that Humeanism about laws is true, and there is a very, very large number of coins lying in a pile, all of which I know for sure to have come to be there causelessly ex nihilo, and there are no other coins in the universe. Suppose, further, that in fact all the coins happen to lie heads-up. Then when the number of coins is sufficiently large (say, of the order of magnitude of the number of particles in the universe), on Humean grounds it will be a law of nature that coins begin their existence in the heads-up orientation. But if the independence thesis I started the post with is true, then no matter how many coins I examined, I would not have any more reason to think that the next unexamined coin is heads than that it is tails. Thus, in particular, I would not be justified in believing in the heads-up law.

One might worry that I couldn’t know, much less know for sure, that the coins are there causelessly ex nihilo. A reasonable inference from the fact that lots of examined coins are all heads-up would seem to be that they were thus arranged by something or someone. And if I made that inference, then I could reasonably conclude that the coins are all heads-up. But my conclusion, while true and justified, would not be knowledge. I would be in a Gettier situation. My justification depends essentially on the false claim that the coins were arranged by something or someone. So even if one drops the assumption that I know that the coins are there causelessly ex nihilo, I still don’t know that the heads-up law holds. Moreover, my reason for not knowing this has nothing to do with dubious theses about the infallibility of knowledge. I don’t know that the heads-up law holds, whether fallibly or infallibly.

There is no problem for the Humean as yet. After all, there is nothing absurd about there being hypothetical situations where there is a law but we can’t know that it obtains. But for any Humean who additionally thinks that our universe came into existence causelessly, there is a real challenge to explain why the laws of our world are not like the heads-up law—laws that we cannot know from a mere sample of data.

This problem is fatal, I think, to the Humean who thinks that our universe started its existence with a large number of particles. For the properties of the particles would be like the heads-up and tails-up orientations of the coins, and we would not be in a position to know that all particles fall into some small number of types (as the Standard Model of particle physics says they do). But a Humean scientist who doesn’t think the universe has a cause could also think that our universe started its existence with a fairly simple state, say a single super-particle, and this simple state caused all the multitude of particles we observe. In that case, the order-in-multiplicity that we observe would not be causeless, and the above argument would not apply.

Thursday, August 31, 2017

Musings on authority

I have a lot of authority to impose hardships on myself. I can impose hardships on myself in two main ways. I can do something that either is or causes a hardship or risk of hardship to myself. Or I can commit myself to doing something that is or causes me a hardship or risk of hardship (I can commit myself by making a promise or by otherwise putting myself in a position where there is no morally permissible way to avoid the hardship). I have a wide moral latitude to decide which burdens to bear for the sake of which goods, though not an unlimited latitude. The decisions between goods are morally limited by the virtue of prudence. It would be wrong to undertake a 90% risk of death for the sake of a muffin. But it's morally up to me, or at least would be if I had no dependents, whether to undertake a 40% risk of death for the sake of writing a masterpiece. I do have the authority to impose some hardships on my children and my students, but that authority is much more limited: I do not have the authority to impose a 40% risk of death for the sake of writing a masterpiece. My authority to impose hardships on myself is much greater than my authority to impose hardships on others.

One explanation of the difference in the degree of our authority over ourselves and our authority over others is that people's authority over others derives from people's authority over themselves: we give authority over us to others. That is what the contractarian thinks, but it is implausible for familiar reasons (e.g., there aren't enough voluntarily accepted contracts to make contractarianism work). I prefer one of these two stories:

  1. Both authority (of the hardship-imposing kind) over self and authority over others derives from God's authority over us.
  2. Of necessity, some relationships are authority-conferring, and different kinds of relationships are necessarily authority-conferring to different degrees. For instance, identity in a mature person x confers great authority of x with respect to x. Parenthood by a mature person of an immature person confers much authority, but less than identity of a mature person does.

What about God's authority? On view (1), we would expect God to have more authority to impose hardships than anybody else has, including more authority to impose hardships on us than we have with respect to our own selves. What about on view (2)? That's less clear. We would intuitively expect that the God-creature relationship be more authority-conferring than the parent-child one. But how does it compare to identity? It would be religiously uncomfortable to say that someone has more authority over me than God does, even if I am that someone. Can we give a philosophical explanation for this religious intuition? Maybe, but I'm not yet up to it. I think a part of the story is that all our goods are goods by participation in God, that our telos is a telos-by-participation in God as the ultimate final cause of all.

Suppose we could argue that God has more hardship-imposing authority over ourselves than we have over ourselves. Then I think we would have a powerful tool for theodicy. A crucial question in theodicy is whether it is permissible for God to allow hardship H to me for the sake of good G (for myself or another). We would then have a defeasible sufficient condition for this permissibility: if it would not be immorally imprudent for me to allow H to myself for the sake of G, then it would be permissible for God to allow H to me for the sake of G. This is a much stronger criterion than one that is occasionally used in the literature, namely that if I would rationally allow H to myself for the sake of G, then God can permissibly allow it, too.

Tuesday, August 29, 2017

Present moment ethical egoism

One of the least popular ethical theories is present moment ethical egoism (pmee): you ought to do what produces the state that is best for you at the present moment.

But pmee has a very lovely formal feature: it can be used to simulate every other ethical theory of permissibility, simply by changing the value function, the function that ranks states in terms of how good they are for you at the present moment. To simulate theory T, just assign value 1 to one’s presently choosing an action that T says is permissible and value −1 to one’s presently choosing an action that T says is impermissible. In this way, pmee simulates Aquinas, Kant, virtue ethics, utilitarianism, non-present-moment egoism, etc.
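The simulation trick is mechanical enough to sketch in a few lines of Python. The theory T is represented simply as a permissibility predicate on actions; all the names and the toy “theory” here are illustrative.

```python
# Sketch of how pmee can simulate an arbitrary theory T of permissibility.
# T is represented as a predicate on actions; everything here is illustrative.

def pmee_value_function(T_permits):
    """Build a present-moment value function that simulates theory T:
    presently choosing a T-permissible action is worth 1, an impermissible
    one is worth -1."""
    def value(action):
        return 1 if T_permits(action) else -1
    return value

def pmee_permits(value, action):
    # On pmee, an action is permissible iff choosing it produces the best
    # present state; with values in {1, -1}, that means value 1 (assuming
    # some permissible option is available).
    return value(action) == 1

# A toy "theory" that forbids exactly one action type:
T_permits = lambda action: action != "lie"
value = pmee_value_function(T_permits)

for action in ["lie", "tell truth", "stay silent"]:
    # pmee with this value function reproduces T's verdicts exactly.
    assert pmee_permits(value, action) == T_permits(action)
```

The point of the sketch is just that the equivalence holds whatever predicate is plugged in for T_permits, which is why pmee can mimic Aquinas, Kant, or utilitarianism alike.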

This formal feature is not shared by non-egoistic consequentialist theories. For the only way a consequentialist theory can simulate a deontological theory is by assigning an overwhelmingly large negative value to wrong choices. But this gives a result incompatible with many deontological theories, namely that you should choose to commit a murder in order to prevent two other people from doing the same.

The formal feature is also not shared by egoistic but not present-moment theories. For on some deontological theories, it is wrong to commit a murder now in order to prevent oneself from choosing two murders later.

Here is another curious thing. Basically, the only present thing in my present control is my present choice. This means that pmee cannot be a consequentialist theory in the typical sense of the word, because all causal consequences take time, and hence every causal consequence within my present control is in the future. In other words, it is the value of the present choice that pmee needs to focus on (both in itself, and in a larger context).

But once we see that it is the value of the choice itself, and not the causal consequences of the choice, that pmee must base a decision on, then given the fact that the most compelling value that a choice has is its moral value, it seems that pmee tells one that one should do what is morally right. And what is morally right cannot be defined in terms of pmee on pain of circularity.

This is a Parfit-like thought, of course. (Maybe even exactly something from Parfit. It’s been a while since I’ve read him.)

Right and wrong choices

Here's a thought I had that might have theodical applications. Agents tend to be more responsible when they choose rightly than when they choose wrongly. For when one chooses wrongly, one acts against reason. And that cannot but contribute to making one less responsible for the action than had one acted following reason.

Friday, August 25, 2017

The blink of an eye response to the problem of evil

I want to confess something: I do not find the problem of evil compelling. I think to myself: Here, during the blink of an eye, there are horrendous things happening. But there is infinitely long life afterwards if God exists. For all we know, the horrendous things are just a blip in these infinitely long lives. And it just doesn’t seem hard to think that over an infinite future that initial blip could be justified, redeemed, defeated, compensated for with moral adequacy, sublated, etc.

It sounds insensitive to talk of the horrors that people live through as a blip. But a hundred years really is the blink of an eye in the face of eternity.

Wouldn’t we expect a perfect being to make the initial blink of an eye perfect, too? Maybe. But even if so, we would only expect it to be perfect as a beginning to an infinite life that we know next to nothing about. And it is hard to see how we would know what is perfect as a beginning to such a life.

This sounds like sceptical theism. But unlike the sceptical theist, I also think the standard theodicies—soul building, laws of nature, free will, etc.—are basically right. They each attempt to justify God’s permission of some or all evils by reference to things that are indeed good: the gradual building up of a soul, the order of the universe, a rightful autonomy, etc. They all have reasonable stories about how the permission of the evils is needed for these goods. There is, to my mind, only one question about these theodicies: Are these goods worth paying such a terrible price, the price of allowing these horrors?

But in the face of an eternal future, I think the question of price fades for two reasons.

First, the goods gained by soul building and free will last for an infinite amount of time. It will forever be true that one has a soul that was built by these free choices. And the value of orderly laws of nature includes an order that is instrumental to the soul building as well as an order that is aesthetically valuable in itself. The benefits of the former order last for eternity, and the beauty of the laws of nature—even as exhibited during the initial blink of an eye—lasts for ever in memory. It is easy for an infinite duration of a significant good to be worth a very high price! (Don’t the evils last in memory, too? Yes, but while memories of beauty should be beautiful things, memories of evil should not be evils—think of the Church’s memory of the Cross.)

Second, it is very easy for God to compensate people during an infinite future for any undeserved evils they suffered during the initial blip. And typically one has no obligation to prevent someone’s suffering when (a) the prevention would have destroyed an important good and (b) one will compensate the person to an extent much greater than the sufferings. The goods pointed out by the theodicies are important goods, even if we worry that permitting the horrors is too high a price. And no matter how terrible these short-lived sufferings were—even if the short period of time, at most about a mere century, “seemed like eternity”—infinite time is ample space for compensation. (Of course, it would be wrong to intentionally inflict undeserved serious harms on someone even while planning to compensate.)

Objection 1: Can one say this while saying that the fleeting goods of our lives yield a teleological argument for the existence of God?

Response: One can. One can be quite sure from a single paragraph in a novel that it is written by someone with great writing skills. But one can never be sure from a single paragraph in a novel that it is not written by someone with great writing skills. (For all we know, the author was parodying bad writing in that paragraph, and the paragraph reflects great skill. But notice that we cannot say about the great paragraph that maybe the author has no skills but was just parodying great writing.)

Objection 2: It begs the question to suppose our future lives are infinite.

Response: No. If God exists, it is very likely that the future lives of all persons, or at the very least of all persons who do not deserve to be annihilated, will be infinite. The proposition that God exists is equivalent to the disjunction: (God exists and there is eternal life) or (God exists and there is no eternal life). If the argument from evil presupposes the absence of eternal life, it is only an argument against the second disjunct. But most of the probability that God exists lies with the first disjunct, given that P(eternal life|God exists) is high. Hence, the argument doesn't do much unless it addresses the first disjunct.
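The probabilistic point in the response can be made explicit with the law of total probability (writing G for "God exists" and E for "there is eternal life"):

```latex
P(G) = P(G \wedge E) + P(G \wedge \neg E)
     = P(E \mid G)\,P(G) + P(\neg E \mid G)\,P(G).
```

If P(E|G) is close to 1, then nearly all of the probability of G sits on the first conjunct, so an argument from evil that engages only the G-without-eternal-life disjunct barely moves P(G).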

Two sources of discomfort with substitutionary views of atonement

On one family of theories of the atonement, the harsh treatment that justice called for in the light of our sins is imposed on Christ and thereby satisfies retributive justice. Pretty much everybody who thinks about this is at least a little bit uncomfortable with it—some uncomfortable to the point of moral outrage.

It’s useful, I think, to make explicit two primary sources of discomfort:

  1. It seems unjust to Christ that he bear the pain that our sins deserve.

  2. It seems unjust that we are left unpunished.

And it’s also useful to note that these two sources of discomfort are largely independent of one another.

I think that those who are uncomfortable to the point of moral outrage are likely to focus on (1). But it is not hard to resolve (1) given orthodox Christology and Trinitarianism. The burden imposed on Christ is imposed by the will of the Father. But the will of the Father in orthodox theology is numerically identical with the will of the Son. Thus, the burden is imposed on Christ by his own divine will, which he then obeys in his own human will. It is thus technically a burden coming from Christ’s own will, and a burden coming from one’s own will for the sake of others does not threaten injustice.

While (2) is also a source of discomfort, I think it is less commonly a discomfort that rises to the level of moral outrage. Maybe some people do feel outrage at the idea that a mass murderer could be left unpunished if she repentantly accepted Christ into her life and were baptised. But I think it tends to be a moral fault if one feels much outrage at leniency shown to a repentant malefactor.

I also think (2) is the much harder problem. Note, for instance, that the considerations of consent that dissolve (1) seem to do little to help with (2). Imagine that I was a filthy rich CEO of a corporation that was knowingly dumping effluent that caused the deaths of dozens of people and I was justly sentenced to twenty years imprisonment. It would clearly be a failure of justice if I were permitted to find someone else and pay her a hundred million dollars to go to prison in my place—even though there would no doubt be a number of people who would be very eager, of their own free will, to do that for the price.

It would be nice if I could now go on to solve (2). But my main point was to separate out the two sources of discomfort and note their independence.

That said, I did just now have a thought about (2) while talking to a student. Suppose that you do me a very good turn. I say: “How can I ever repay you?” And you say: “Pass it on. Maybe one day you’ll have a chance to do this for someone else. That will be repayment enough.” If I one day pass on the blessing that I’ve received from you, justice has been done to you. The beneficiary of my passing on the blessing rightly substitutes for you. Maybe there is a mirror version of this on the side of punishment?

Sentencing to time served

Sometimes people are sentenced to “time served”: the time they spent in jail prior to trial is retroactively counted as their sentence. But doesn’t justice call for harsh treatment to be imposed as a punishment? The jail time, however, was not imposed on the malefactor as a punishment—it was imposed on a person presumed innocent in order to counter a risk of flight. How can it turn into a punishment retroactively?

Well, one solution is to reject a retributive account of punishment. Another is to say that justice is served by such punishment.

But I think there is a less revisionary approach. Instead of saying that justice calls for harsh treatment to be imposed, say that justice calls for one to ensure something harsh happening as a result of the crime. Sometimes, one ensures a state of affairs by causally imposing it. But one can also ensure a state of affairs by verifying the occurrence of the state of affairs while being committed to causing the state of affairs if that were to fail.

This provides a way for a retributivist to accept the intuition that if someone is paralyzed for life as a result of trying to blow a bank vault, there need be no further call to send them to prison—one may be able to ensure more than sufficient punishment simply by verifying that the paralysis occurred as a result of the crime. Another way for the retributivist to accept that intuition would be to say that while we didn’t impose the paralysis on the robber as a punishment, God did. But the move from imposing to ensuring allows the retributivist to avoid mixing up God in human justice here.

Wednesday, August 23, 2017

Eliminating or reducing parthood

Parthood is a mysterious relation. It would really simplify our picture of the world if we could get rid of it.

There are two standard ways of doing this. The microscopic mereological nihilist says that only the fundamental “small” bits—particles, fields, etc.—exist, and that there are no complex objects like tables, trees and people that are made of such bits. (Though one could be a microscopic mereological nihilist dualist, and hold that people are simple souls.)

The macroscopic mereological nihilist says that big things like organisms do exist, but their commonly supposed constituents, such as particles, do not exist, except in a manner of speaking. We can talk as if there were electrons in us, but there are no electrons in us. The typical macroscopic mereological nihilist is a Thomist who talks of “virtual existence” of electrons in us.

Both the microscopic and macroscopic nihilist get rid of parthood at the cost of ridding themselves of large swathes of objects that common sense accepts. The microscopic nihilist gets rid of the things that are commonly thought to be wholes. The macroscopic nihilist gets rid of the things that are commonly thought to be parts.

But there is a third way of getting rid of parthood that has not been sufficiently explored. The third kind of mereological nihilist would neither deny the existence of things commonly thought to be wholes nor of things commonly thought to be parts. Instead, she would deny the parthood relation that is commonly thought to hold between the micro and the macro things. Parts of the space occupied by me are also occupied by my arms, my legs, my heart, the electrons in these, etc. But these things are not parts of me: they are just substances that happen to be colocated with me. I’ll call this “parthood nihilism”.

This is compatible with a neat picture of organ transplants. If my kidney becomes your kidney, nothing changes with respect to parthood. All that changes is the causal interactions: the kidney that previously was causing certain distributional properties in me starts to cause certain distributional properties in you.

An obvious question is what to say about property inheritance. Whenever my hand is stained purple, I am partly purple. We don’t want this to be just a coincidence. The common-sense parts theorist has a nice explanation: I inherit being partly purple from my hand’s being partly purple (note that the hand is itself only properly partly purple—it isn’t purple inside the bones, say). My partial purpleness derives from the partial purpleness of a part of me.

But the parthood nihilist can accept this kind of property inheritance and give an account of it: the inheritance is causal. My hand’s being partly purple causes me to be partly purple (which is a distributional property of an extended simple). I guess on the standard view, property inheritance is going to be a kind of grounding: my being partly purple occurs in virtue of my hand’s being a part of me and its being partly purple. On the present nihilism, we have simultaneous causation instead of grounding.

Here’s another difficulty: what about gravity (and relevantly similar forces)? I have a mass of 77 kg. If my mass is m1 and yours is m2 and the distance between us is r, there is a force pulling you towards me of magnitude Gm1m2/r^2. But why isn’t that force equal in magnitude to G(m1 + m11 + m12 + m13 + ...)m2/r^2, where m11, m12, m13, ... are the masses of what common sense calls “my parts” (about five kilograms for my head, four for my left arm, four for my right arm, and so on)? After all, wouldn’t all these objects be expected to exert gravitational force?
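The double-counting worry is easy to put numerically. This sketch uses the post’s rough figures for my mass and the masses of a few “parts”, plus a hypothetical second person and distance:

```python
# Illustrative numbers: my mass m1 = 77 kg (from the post), plus the masses
# common sense assigns to a few of "my parts" (5 kg head, 4 kg per arm, also
# from the post). The second mass and the distance are hypothetical.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m1 = 77.0       # my mass, kg
part_masses = [5.0, 4.0, 4.0]  # head, left arm, right arm, kg
m2 = 70.0       # the other person's mass, kg (hypothetical)
r = 2.0         # distance between us, m (hypothetical)

# Newton's law, counting only me:
newton_force = G * m1 * m2 / r**2

# The worry: if the colocated head, arms, etc. each also exerted gravity,
# the total force would be larger by the summed part masses.
double_counted_force = G * (m1 + sum(part_masses)) * m2 / r**2

print(newton_force, double_counted_force)
```

Even counting just these three “parts”, the force comes out about 17% too large (90 kg of attracting mass instead of 77 kg), so some answer to the worry is needed.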

The first two kinds of nihilists have easy answers to the problem. The microscopic nihilist says that only particles have mass as only particles exist. The macroscopic one says that I am all there is here—the head, arms, etc. don’t exist. The standard common-sense view has a slightly more complicated answer available: gravitational forces only take into account non-inherited mass. But the parthood nihilist can give a variant of this: it’s a law of nature that only fundamental particles produce gravitational forces.

There is a fourth kind of view. This fourth kind of view is no longer a mereological nihilism, but mereological causal reductivism. On the fourth kind of view, for x to be a part of y just is for x to be identical with y or for x to be a proper part of y. And for x to be a proper part of y just is for a certain causal relation to hold between x’s properties and y’s properties.

Spelling out the details of this causal relation is difficult. Roughly, it just says that all of x’s properties and relations cause corresponding properties and relations of y. Thus, x’s being properly partly located in Pittsburgh causes y to be properly partly located in Pittsburgh, while x’s being wholly located in Pittsburgh causes y to be at least partly located in Pittsburgh; x’s being green on its left half causes y’s being green in the left half of the locational property that x causes y to have; and so on.

As I said, it’s difficult to spell out the details of this causal relation. But it is no more difficult than the common-sense parts theorist’s difficulty in spelling out the details of property inheritance. Wherever the common-sense parts theorist says that there is a part-to-whole inheritance between properties, our reductionist requires a causal relation.

The reductionism changes the order of explanation. Suppose my hand is the only green part of me and it gets amputated. According to the common-sense parts theorist, I am no longer partly green because the green hand has stopped being a part of me. According to the reductionist, on the other hand, the hand’s no longer contributing to my greenness makes it no longer a part of me.

The reductionist and parthood nihilist, however, have an extra explanatory burden. Why do all these causal relations cease together? Why is it that when my right hand stops causing me to be partially green, my right hand also stops causing me to have five right fingers? The common-sense parts theorist has a nice story: when the part stops being a part, all the relevant grounding relations stop because a portion of the ground is the fact that the part is a part.

But there is also a causal solution. The common-sense parts theorist has to give a story as to when it is that certain kinds of causal interaction—say, a surgeon using a scalpel—cause a part to stop being a part. For each such kind of causal interaction, the reductionist and parthood nihilist can say that there is a cessation of all the causal relations that the common-sense parts theorist would say go with inheritance.

All in all, I think the reductionist has a simpler fundamental ideology than the standard common-sense inheritance view: the reductionist can reduce parthood to patterns of causation. Her theory is overall not significantly more complicated than the common-sense inheritance theory, but it is more complicated than either microscopic or macroscopic nihilism. But she gets to keep a lot more of common sense than the nihilists do. In fact, maybe she gets to keep all of common sense, except for pretty theoretical claims about the direction of explanation, etc.

The parthood nihilist has most of the advantages of reductionism, but there is some common-sense stuff that she denies—she denies that my arm is a part of me, etc. Overall parthood nihilism is not significantly simpler than reductionism, I think, because the parthood nihilist’s account of how all the relevant causal relations cease together will include all the complications that the reduction includes. So I think reductionism is superior to parthood nihilism.

But I still like macroscopic nihilism more than reductionism.

Beatific vision and scepticism

One way to think of the beatific vision is as a conscious experience whose quale is God himself. Not a representation of God, but the infinite and simple God himself. Such an experience would have a striking epistemological feature. Ordinary veridical experiences are subject to sceptical worries because the qualia involved in them can occur in non-veridical experiences, or at least can have close facsimiles occurring in non-veridical experiences. But while everything is similar to God, the similarity is always infinitely remote. Moreover, there is a deep qualitative difference between God in the beatific vision and other qualia. No other quale is a person or even a substance.

Thus, someone who has the beatific vision is in the position of having an experience that is infinitely different from all other experiences, veridical or not. This, I think, rules out at least one kind of sceptical worry, and hence the beatific vision is also a fulfillment of the Cartesian quest for certainty—though that is far from being the most important feature of the beatific vision.

Tuesday, August 22, 2017

Aquinas and God

It just occurred to me, while grading a comprehensive exam question on Aquinas, how deeply Jewish Aquinas’s approach to God is. In the structure of the Summa Theologiae, the primary attribute of God, the one on which the derivation of all the others depends, is God’s oneness or simplicity.

Is knowledge of very important things very valuable?

It seems right to say that knowledge, as such, is very valuable when the matter at hand is of great personal importance to one. For instance, it seems intuitively right that it is very valuable to know whether the people one loves are alive.

Suppose Bob, Alice’s beloved husband, was in an area where a disaster happened. Carl read a list of survivors, and told Alice that her husband was one of the survivors. But five minutes later Carl realized that he confused Alice with someone else, and that it wasn’t Alice’s husband’s name that he saw on the list. Carl is terrified that he will have to tell Alice that her husband wasn’t on the list. He goes back to the list and, to his great relief, finds that Bob is on the list as well.

Alice correctly believes that her husband survived the disaster. She does not know that her husband survived, though she thinks she knows. She is Gettiered.

If knowing that one’s beloved husband has survived a disaster is very valuable, Carl would have a quite strong reason to go back to Alice and tell her: “I just checked the list again very carefully, and indeed your husband is on it.” (It would be ill-advised, perhaps, for Carl to say to Alice that he had made the mistake the first time, because if he told her that, she would start worrying that he has made a mistake this time, too.) For, Carl’s telling this to Alice would turn her Gettiered belief into knowledge.

But if Carl has any reason to talk to Alice about this again, the reason is not a very strong one. Hence, even in cases which are of extreme personal importance, knowledge as such is not very valuable.

I conclude that knowledge as such is of little if any intrinsic value. Truth and justification, of course, can have great intrinsic value.

Objection: Carl doesn’t have to talk to Alice to turn her true belief into knowledge. For he would have informed Alice had he not found Alice’s husband on the list. Thus, on certain externalist views where knowledge depends on the right counterfactuals, Carl’s second check of the list is sufficient to turn Alice’s true belief into knowledge, even without Carl talking to Alice.

Response: Maybe, but the case need not be told that way. Perhaps if Alice’s husband were not on the list, Carl wouldn’t have had the guts to tell Alice. Or perhaps he would have waited twenty-four hours to check that Alice’s husband doesn’t appear on an updated list.

Monday, August 21, 2017

Searching for the best theory

Let’s say that I want to find the maximum value of some function over some domain.

Here’s one naive way to do it:

Algorithm 1: I pick a starting point in the domain at random, place an imaginary particle there and then gradually move the particle in the direction where the function increases, until I can’t find a way to improve the value of the function.

This naive way can easily get me stuck in a “local maximum”: a peak from which all movements go down. In the example graph, most starting points will get one stuck at local maxima.
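Algorithm 1 can be sketched in a few lines of Python. The two-peaked function here is made up for illustration (it is not the blog’s example graph): there is a global maximum at x = 3 and a lower local maximum at x = -3, and a particle started at x = -2 climbs to the local peak and gets stuck there.

```python
import math

def f(x):
    # Made-up objective: global maximum at x = 3 (value ~1),
    # lower local maximum at x = -3 (value ~0.6)
    return math.exp(-(x - 3)**2) + 0.6 * math.exp(-(x + 3)**2)

def hill_climb(x, step=0.01, tries=2000):
    """Algorithm 1: move the particle wherever f increases, until stuck."""
    for _ in range(tries):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # a local maximum: both neighbors are lower
    return x

print(hill_climb(-2.0))  # ends near -3: stuck at the local maximum
print(hill_climb(2.0))   # ends near 3: finds the global maximum
```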

Let’s say I have a hundred processor cores available, however. Then here’s another simple thing I could do:

Algorithm 2: I choose a hundred starting points in the domain at random, and then have each core track one particle as it tries to move towards higher values of the function, until it can move no more. Once all the particles are stuck, we survey them all and choose the one which found the highest value. This is pretty naive, too, but we have a much better chance of getting to the true maximum of the function.

But now suppose I have this optimization idea:

Algorithm 3: I follow Algorithm 2, except at each time step, I check which of the 100 particles is at the highest value point, and then move the other 99 particles to that location.

The highest value point found is intuitively the most promising place, after all. Why not concentrate one’s efforts there?

But Algorithm 3 is, of course, a bad idea. For now all 100 particles will be moving in lock-step, and will all arrive at the same point. We lose much of the independent exploration benefit of Algorithm 2. We might as well have one core.
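The contrast between Algorithms 2 and 3 can be sketched as follows, again with a made-up two-peaked function. The herding step is simplified to its limiting behaviour: once every particle is repeatedly moved to the best-looking point, all 100 trajectories collapse into a single climb from the best-looking start.

```python
import math, random

def f(x):
    # Two peaks: a global maximum at x = 3, a lower local one at x = -3
    return math.exp(-(x - 3)**2) + 0.6 * math.exp(-(x + 3)**2)

def climb(x, step=0.01, tries=2000):
    """Greedy hill climbing (Algorithm 1), used as the per-particle step."""
    for _ in range(tries):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break
    return x

random.seed(0)
starts = [random.uniform(-6, 6) for _ in range(100)]

# Algorithm 2: independent climbs; report the best endpoint found.
ends = [climb(x) for x in starts]
print(sorted({round(e) for e in ends}))  # both peaks get explored
print(max(f(e) for e in ends))           # best value found

# Algorithm 3 (limiting case): all particles collapse onto the
# best-looking start and then climb identically.
herded_ends = [climb(max(starts, key=f))] * 100
print(sorted({round(e) for e in herded_ends}))  # a single endpoint
```

In the independent run, both basins get explored, which is the point of Algorithm 2; in the herded run, the whole population stakes everything on a single basin.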

But now notice how often in our epistemic lives, especially philosophical ones, we seem to be living by something like Algorithm 3. We are trying to find the best theory. And in journals, conferences, blogs and conversations, we try to convince others that the theory we’re currently holding to is the best one. This is as if each core was trying to convince the other 99 to explore the location that it was exploring. If the core succeeded, the effect would be like Algorithm 3 (or worse). Forcing convergence—even by intellectually honest means—seems to be harmful to the social epistemic enterprise.

Now, it is true that in Algorithm 2, there is a place for convergence: once all the cores have found their local maxima, then we have the overall answer, namely the best of these local maxima. If we all had indeed found our local maxima, i.e., if we all had fully refined our individual theories to the point that nothing nearby was better, it would make sense to have a conference and choose the best of all of the options. But in fact most of us are still pretty far from even the locally best theory, and it seems unlikely that we will achieve it in this life.

Should we then all work independently, not sharing results lest we produce premature convergence? No. For one, the task of finding the locally optimal theory is one that we probably can’t achieve alone. We are dealing with functions whose values at the search point cannot be evaluated by our own efforts, and where even exploring the local area needs the help of others. And so we need cooperation. What we need is groups exploring different regions of the space of theories. And in fact we have this: we have the Aristotelians looking for the best theory in the vicinity of Aristotle’s, we have the Humeans, etc.

Except that each group is also trying to convince the others. Is it wrong to do so?

Well, one complicating factor is that philosophy is not just an isolated intellectual pursuit. It has here-and-now consequences for how to live our lives beyond philosophy. This is most obvious in ethics (including political philosophy), epistemology and philosophy of religion. In Algorithm 3, 99 of the cores may well be exploring less promising areas of the search space, but it’s no harm to a core to be exploring such an area. But it can be a serious harm to a person to have false ethical, epistemological or religious beliefs. So even if it were better for our social intellectual pursuits that all the factions be doing their searching independently, we may well have reasons of charity to try to convince others—but primarily where this has ethical, epistemological or religious import (and often it does, even if the issue is outside of these formal areas).

Furthermore, we can benefit from criticism by people following other paradigms than ours. Such criticism may move us to switch to their paradigm. But it can benefit us even if it does not do that, by helping us find the optimal theory in our local region.

And, in any case, we philosophers are stubborn, and this stubbornness prevents convergence. This stubbornness may be individually harmful, by keeping us in less promising areas of the search space, but beneficial to the larger social epistemic practice by preventing premature convergence as in Algorithm 3.

Thus, stubbornness can be useful. But it needs to be humble. And that’s really, really hard.

A theological argument for four-dimensionalism

One of the main philosophical objections to dualist survivalism, the view that after death and prior to the resurrection we continue existing as disembodied souls, is the argument that I am now distinct from my soul and cannot come to be identical with my soul, as that would violate the transitivity of identity: my present self (namely, I) would be identical to my future self, the future self would be identical to my future soul, my future soul would still be identical to my present soul, and so my present self would be identical with my present soul.

(This, of course, won’t bother dualists who think they are presently identical with souls, but is a problem for dualists who think that souls are proper parts of them. And the latter is the better view, since I can see myself in the mirror but I cannot see my soul in the mirror.)

It’s worth noting that this provides some evidence for four-dimensionalism, because (a) we have philosophical and theological evidence for dualist survivalism, while (b) the four-dimensionalist has an easy way out of the above argument. For the four-dimensionalist can deny that my future self is ever identical with my future soul, even given dualist survivalism. My future self, like my present self, is a four-dimensional temporally extended entity. Indeed, the future self and the present self are the same four-dimensional entity, namely I. My future soul, like my present soul, is a temporally extended entity (four-dimensional if souls have spatial extension; otherwise, one-dimensional), which is a proper part of me. And, again, my future soul and my present soul are the same temporally extended entity. At no future time is my future self identical with my future soul even given dualist survivalism. At most, it will be the case that some future temporal slices of me are identical with some future temporal slices of my soul.

Thursday, August 17, 2017

Yet another infinite lottery machine

In a number of posts over the past several years, I’ve explored various ways to make a countably infinite fair lottery machine (assuming causal finitism is false), typically using supertasks in some way.

Here’s another, slightly simplified from a construction in Norton. Suppose we toss a countably infinite number of fair coins to make an array with infinitely many infinite rows that could look like this:

HTHTHHHHHHHTTT...
THTHTHTHTHHHHH...
HHHHHTHTHTHTHT...
...

Make sure that nobody looks at the coins after they are tossed. Here’s something that could happen: each row of the array contains one and only one tails. This is unlikely (probability zero; Norton originally said it's nonmeasurable, but that was a mistake, and we're coauthoring a correction to his paper) but possible. Have a robot scan the array—a supertask will be needed—to verify whether this unlikely event has happened. If not, we have failed to make the machine. But if yes, our array will look relevantly like:

HHTHHHHHHHHHHH...
HHHHHTHHHHHHHH...
HHTHHHHHHHHHHH...
...

Continue making sure nobody looks at the coins. Put a robot at the beginning of the first row. Now, you have a countably infinite fair lottery machine that you can use over and over. To use it, just tell the robot to scan the row it’s at, announce the position of the lone tails, and move to the beginning of the next row. Applied to the above array, you will get the sequence of results 3,6,3,….
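The construction can be modelled on a finite scale. This is only a sketch, not the real thing: with finitely many coins per row, the conditioning event (exactly one tails per row) is merely unlikely rather than probability zero, and the lottery is over finitely many positions. Each row is re-tossed until it contains exactly one tails, and the robot announces the position of that tails:

```python
import random

random.seed(1)
WIDTH = 8  # finite stand-in for the infinite rows

def flip_row():
    """Toss one row of fair coins ('H'/'T')."""
    return [random.choice("HT") for _ in range(WIDTH)]

def lone_tails_row():
    """Re-toss until the row has exactly one tails (rejection sampling:
    the finite analogue of conditioning on the unlikely event)."""
    while True:
        row = flip_row()
        if row.count("T") == 1:
            return row

def announce(row):
    """The robot scans the row and announces the lone tails' position
    (1-based, as in the 3, 6, 3, ... example above)."""
    return row.index("T") + 1

draws = [announce(lone_tails_row()) for _ in range(5)]
print(draws)  # five fair draws over positions 1..8
```

Conditional on a row having exactly one tails, its position is uniform over the eight spots, which is why the finite toy machine is a fair lottery.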

Of course, it’s very unlikely that we will succeed in making the machine (the probability is zero). But we might. And once we do, we can run as many paradoxes of infinity as we like. And we might even find ourselves lucky enough to be in a universe where some natural random process has already generated such a lucky array, in which case we don’t even have to flip the coins.

Once we have the machine, we can have lots of fun with it. For instance, it seems antecedently really unlikely that the first hundred times you run the machine, the numbers you get will be in increasing order. But no matter how many numbers you've pulled from the machine, you are all but certain that the next number will be bigger than any of them.

Wednesday, August 16, 2017

Consent and euthanasia

I once gave an argument against euthanasia where the controversial center of the argument could be summarized as follows:

  1. Euthanasia would at most be permissible in cases of valid consent and great suffering.

  2. Great suffering is an external threat that removes valid consent.

  3. So, euthanasia is never permissible.

But the officer case in my recent post about promises and duress suggests that (2) may be mistaken. In that case, I am an officer captured by an enemy officer. I have knowledge that imperils the other officer’s mission. The officer lets me live, however, on the condition that I promise to stay put for 24 hours, an offer I accept. My promise to stay put seems valid, even though it was made in order to avoid great harm (namely, death). It is difficult to see exactly why my promise is valid, but I argue that the enemy officer is not threatening me in order to elicit a promise from me, but rather I am in dangerous circumstances that I can only get out of by making the promise, a promise that is nonetheless valid, much as the promise to pay a merchant for a drink is valid even if one is dying of thirst.

Now, if a doctor were to torture me in order to get me to consent to being killed by her, any death-welcoming words from me would not constitute valid consent, just as promises elicited by threats made precisely to elicit them are invalid. But euthanasia is not like that: the suffering isn’t even caused by the doctor. It doesn’t seem right to speak of the patient’s suffering as a threat in the sense of “threat” that always invalidates promises and consent.

I could, of course, be mistaken about the officer case. Maybe the promise to stay put under the circumstances really is invalid. If so, then (2) could still be true, and the argument against euthanasia stays.

But suppose I am right about the officer case, and suppose that (2) is false. Can the argument be salvaged? (Of course, even if it can’t, I still think euthanasia is wrong. It is wrong to kill the innocent, regardless of consequences or consent. But that’s a different line of thought.) Well, let me try.

Even if great suffering is not an external threat that removes valid consent, great suffering makes one less than fully responsible for actions made to escape that suffering (we shouldn’t call the person who betrayed her friends under torture a traitor). Now, how fully responsible one needs to be in order for one’s consent to be valid depends on how momentous the potential adverse consequences of the decision are. For instance, if I consent to a painkiller that has little in the way of side-effects, I don’t need to have much responsibility in order for my consent to be valid. On the other hand, suppose that the only way out of suffering would be a pill whose owner is only willing to sell it in exchange for twenty years of servitude. I doubt that one’s suffering-elicited consent to twenty years of servitude is valid. Compare how the Catholic Church grants annulments for marriages when responsibility is significantly reduced. Some of the circumstances where annulments are granted are ones where the agent would have sufficient responsibility in order to make valid promises that are less momentous than marriage vows, and this seems right. In fact, in the officer case, it seems that if the promise I made were more momentous than just staying put for 24 hours, it might not be valid. But it is hard to get more momentous a decision than a decision whether to be killed. So the amount of responsibility needed in order to make that decision is much higher than in the case of more ordinary decisions. And it is very plausible that great suffering (or fear of such) excludes that responsibility, or at the very least that it should make the doctor not have sufficient confidence that valid consent has been given.

If this is right, then we can replace (2) with:

  2. Great suffering (or fear thereof) removes valid consent to decisions as momentous as the decision to die.

And the argument still works.

Monday, August 14, 2017

Difficult questions about promises and duress

It is widely accepted that you cannot force someone to make a valid promise. If a robber after finding that I have no valuables with me puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void.

But suppose I am a cavalry officer captured by an enemy officer. The enemy officer is in a hurry to complete a mission, and it is crucial to his military ends that I not ride straight back to my headquarters and report what I saw him doing. He does not, however, have the time to tie me up, and hence he prepares to kill me. I yell: “I give you my word of honor as an officer that I will stay in this location for 24 hours.” He trusts me and rides on his way. (The setting for this is more than a hundred years ago.)

However, if promises made under duress are invalid, then the enemy officer should not trust me. One can only trust someone to do something when in some way a good feature of the person impels them to do that thing. (I can predict that a thief will steal my money if I leave it unprotected, but I don’t trust the thief to do that.) But there is no virtue in keeping void promises, since such promises do not generate moral reasons. In fact, if the promise is void, then I might even have a moral duty to ride back and report what I have seen. One shouldn’t trust someone to do something contrary to moral duty.

Perhaps, though, there is a relevant difference between the case of an officer giving parole to another, and the case of the robber. The enemy officer is not compelling me to make the promise. It’s my own idea to make the promise. Of course, if I don’t make the promise, I will die. But that fact doesn’t make for promise-canceling duress. Say, I am dying of thirst, and the only drink available is the diet gingerale that a greedy merchant is selling and which she would never give away for free. So I say: “I promise to pay you back tomorrow as I don’t have any cash with me.” I have made the promise in order to save my life. If the merchant gives me the gingerale, the promise is surely valid, and I must pay the merchant back tomorrow.

Is the relevant difference, perhaps, that I originate the idea of the promise in the officer case, but not in the robber case? But in the merchant case, I would be no less obligated to pay the merchant back if we had a little dialogue: “Could you give me a drink, as I’m dying of thirst and I don’t have any cash?” – “Only if you promise to pay me back tomorrow.”

Likewise, in the officer case, it really shouldn’t matter who originates the idea. Imagine that it never occurred to me to make the promise, but a bystander suggests it. Surely that doesn’t affect the binding force of the promise. But suppose that the bystander makes the suggestion in a language I don’t understand, and I ask the enemy officer what the bystander says, and he says: “The bystander suggests you give your word of honor as an officer to stay put for 24 hours.” Surely it also makes no moral difference that the enemy officer acts as an interpreter, and hence is the proximate origin of the idea. Would it make a difference if there were no helpful bystander and the enemy officer said of his own accord: “In these circumstances, officers often make promises on their honor to stay put”? I don’t think so.

I think that there is still a difference between the robber case and that of the enemy officer who helpfully suggests that one make the promise. But I have a really hard time pinning down the difference. Note that the enemy officer might be engaged in an unjust war, much as the robber is engaged in unjust robbery. So neither has a moral right to demand things of me.

There is a subtle difference between the robber and officer cases. The robber is threatening your life in order to get you to make the promise. The promise is something that the robber is pursuing as the means to her end, namely the obtaining of jewelry. My being killed will not achieve the robber’s purpose at all. If the robber knew that I wouldn’t make the promise, she wouldn’t kill me, at least as far as the ends involved in the promise (namely, the obtaining of my valuables) go. But the enemy officer’s end, namely the safety of his mission, would be even more effectively achieved by killing me. The enemy officer’s suggestion that I make my promise is a mercy. The robber’s suggestion that I make my promise isn’t a mercy.

Does this matter? Maybe it does, and for at least three reasons. First, the robber is threatening my life primarily in order to force a promise. The enemy officer isn’t threatening my life primarily in order to force a promise: the threat would be there even if I were unable to make promises (or were untrustworthy, etc.). So there is a sense in which the robber is more fully forcing a promise out of me.

Second, it is good for human beings to have a practice of giving and keeping promises in the officer types of circumstances, since such a practice saves lives. But a practice of giving and keeping promises in the robber types of circumstances is harmful, since such a practice only encourages robbers to force promises out of people. Perhaps the fact that one kind of practice is beneficial and the other is harmful is evidence that the one kind of practice is normative for human beings and the other is not. (This will likely be the case given natural law, divine command, rule-utilitarianism, and maybe some other moral theories.)

Third, the case of the officer is much more like the case of the merchant. There is a circumstance in both cases that threatens my life independently of any considerations of promises—dehydration and an enemy officer whom I’ve seen on his secret mission. In both cases, it turns out that the making of a promise can get me out of these circumstances, but the circumstances weren’t engineered in order to get me to make the promise. But the case of the robber is very different from that of the merchant. (Interesting test case: the merchant drained the oases in the desert so as to sell drinks to dehydrated travelers. This seems to me to be rather closer to the robber case, but I am not completely sure.)

Maybe, though, I’m wrong about the robber case. I have to say that I am uncomfortable with voidly promising the robber that I will get the valuables when I don’t expect to do so—there seems to be a lie involved, and lying is wrong even to save one’s life. Or at least a kind of dishonesty. But this suggests that if I were planning on bringing the valuables, I would be acting more honestly in saying it. And that makes the situation resemble a valid promise. Maybe not, though. Maybe it’s wrong to say “I will bring the valuables” when one isn’t planning on doing so, but once one says it, one has no obligation to bring them. I don’t know. (This is related to this sort of a case. Suppose I don’t expect that there will be any yellow car parked on your street tonight, but I assert dishonestly in the morning that there will be a yellow car parked on your street in the evening. In the early afternoon, I am filled with contrition for my dishonesty to you. Normally, I should try to undo the effect of dishonesty by coming clean to the person I was dishonest to. But suppose I cannot get in touch with you. However, what I can do is go to the car rental place, rent a yellow car and park it on your street. Do I have any moral reason to do so? I don’t know. Not in general, I think. But if you were depending on the presence of the yellow car—maybe you made a large bet about it with a neighbor—then maybe I should do it.)