Wednesday, September 20, 2017

Probabilistic Counterexampler

Every so often someone asks me whether some piece of probabilistic reasoning works. For instance, today I got a query from a grad student about whether

  1. P(A|C)>P(A|B) implies P(A|B ∨ C)>P(A|B).

Of course, I could think about it each time somebody asks me something. But why think when a computer can solve the problem by brute force?

So, last spring I wrote a quick and dirty Python program that looks for counterexamples to questions like that simply by considering situations with three dice, and iterating over all the possible combinations of subsets A, B and C of the state space (with some reduction due to symmetries).

The program is still quick and dirty, but at least the premises and conclusions are no longer hardcoded. You can get it here.

For instance, for the query above, you can run:

python probab-reasoning.py "P(a,c)>P(a,b)" "P(a,b|c)>P(a,b)" 

(The vertical bars are disjunction, not conditional probability. Conditional probability uses commas.) The result is:

a={1}, b={1, 2}, c={1}
a={1}, b={1, 2, 3}, c={1}
a={1}, b={1, 2, 3}, c={1, 2}
a={1}, b={1, 2, 3}, c={1, 3}
a={1}, b={1, 2, 3}, c={1, 4}
...

So, lots of counterexamples. On the other hand, you can do this:

python probab-reasoning.py "P(a)*P(b)==P(a&b)" "P(b)>0" "P(a,b)==P(a)" 

and it will tell you no counterexamples were found. Of course, that doesn’t prove that the result is true, but in this case it is.

The general operation is that you install Python and use a command line to run:

python probab-reasoning.py premise1 premise2 ... conclusion

You can use the variables a, b and c, and the operations & (conjunction), | (disjunction) and ~ (negation) between the events. You can use conditional probability P(a,b) and unconditional probability P(a). You can use standard arithmetical and comparison operators on probabilities. Make sure that you use Python’s operators. For instance, equality is ==, not =. You should also use Python’s boolean operations when you are not working with events: e.g., “P(a)==1 and P(b)==0.5”.

Any premise or conclusion that requires conditionalization on a probability zero event to evaluate automatically counts as false.

You can use up to five single-letter variables (other than P), and you can also specify the number of sides the die has prior to listing the premises. E.g.:

python probab-reasoning.py 8 "P(a)*P(b)==P(a&b)" "P(b)>0" "P(a,b)==P(a)" 
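For the curious, here is a minimal sketch of the brute-force idea behind the program, with the query from the top of this post hardcoded. This is not the actual program (which parses the premises and conclusion from the command line and is smarter about symmetries), and for simplicity it treats the state space as the six equally likely faces of a single die rather than three dice:

from itertools import product

SIDES = 6
SPACE = frozenset(range(1, SIDES + 1))

def prob(event, given=None):
    # Probability of an event (a subset of the equally likely faces),
    # optionally conditional on another event. Returns None when the
    # conditioning event has probability zero.
    if given is None:
        return len(event) / SIDES
    if not given:
        return None
    return len(event & given) / len(given)

def subsets(space):
    # Generate every subset of the state space.
    elems = sorted(space)
    for bits in product([0, 1], repeat=len(elems)):
        yield frozenset(x for x, keep in zip(elems, bits) if keep)

def premise(a, b, c):
    # P(a|c) > P(a|b); counts as false if a conditional probability is undefined.
    pac, pab = prob(a, c), prob(a, b)
    return pac is not None and pab is not None and pac > pab

def conclusion(a, b, c):
    # P(a | b or c) > P(a|b); same convention about undefined probabilities.
    pabc, pab = prob(a, b | c), prob(a, b)
    return pabc is not None and pab is not None and pabc > pab

# Print every assignment of events on which the premise holds but the conclusion fails.
for a, b, c in product(subsets(SPACE), repeat=3):
    if premise(a, b, c) and not conclusion(a, b, c):
        print("a=%s, b=%s, c=%s" % (sorted(a), sorted(b), sorted(c)))

Each line this prints is a counterexample of the same sort as the ones listed above.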

Monday, September 18, 2017

Two ways of being vicious

Many of the times when Hitler made a wrong decision, his character thereby deteriorated and he became more vicious. Let’s imagine that Hitler was a decent young man at age 19. Now imagine Schmitler, who lived a life externally just like Hitler’s, but on Twin Earth. Until age 19, Schmitler’s life was just like Hitler’s. But from then on, each time Schmitler made a wrong choice, aliens or angels or God intervened and made sure that the moral deterioration that normally follows upon wrong action never occurred. As it happens, however, Schmitler still made the same choices Hitler did, and made them with freedom and clear understanding of their wickedness.

Thus, presumably unlike Hitler, Schmitler did not morally fall, one wrong action at a time, to the point of a genocidal character. Instead, he committed a series of wrong actions, culminating in genocide, but each action was committed from the same base level of virtue and vice, the same level that both he and Hitler had at age 19. This is improbable, but in a large enough universe all sorts of improbable things will happen.

So, now, here is the oddity. Since Schmitler’s level of virtue and vice at the depth of his moral depredations was the same as at age 19, and at age 19 both he and Hitler were decent young men (or so I assume), it seems we cannot say that Schmitler was a vicious man even while he was committing genocidal atrocities. And yet Schmitler was fully responsible for these atrocities, perhaps more so than Hitler.

I want to say that Schmitler is spectacularly vicious without having much in the way of vices, indeed while having more virtue than vice (he was, I assume, a decent young man), even though that sounds like a contradiction. Schmitler is spectacularly vicious because of what he has done.

This doesn’t sound right, though. Actions are episodic. Being vicious is a state. Hitler was a vicious man while innocently walking his dog on a nice spring day in 1944, even when not doing any wrongs. And we can explain why Hitler was vicious then: he had a character with very nasty vices, even while he was not exercising the vices. But how can we say that Schmitler was vicious then?

Here’s my best answer. Even on that seemingly innocent walk, Schmitler and Hitler were both failing to repent of their evil deeds, failing to set out on the road of reconciliation with their victims. A continuing failure to repent is not something episodic, but something more like a state.

If this is right, then there are two ways of being vicious: by having vices and by being an unrepentant evildoer.

(A difficult question Robert Garcia once asked me is relevant, though: What should we say about people who have done bad things but suffered amnesia?)

Some arguments about the existence of a good theodicy

This argument is valid:

  1. If no good theodicy can be given, some virtuous people’s lives are worthless.

  2. No virtuous person’s life is worthless.

  3. So, a good theodicy can be given.

The thought behind 1 is that unless we accept the sorts of claims that theodicists make about the value of virtue or the value of existence or about an afterlife, some virtuous people live lives of such great suffering, and are so ignored or worse by others, that their lives are worthless. But once one accepts those sorts of claims, then a good theodicy can be given.

Here is an argument for 2:

  1. It would be offensive to a virtuous person that her life is worthless.

  2. The truth is not offensive to a virtuous person.

  3. So, no virtuous person’s life is worthless.

Perhaps, too, an argument similar to Kant’s arguments about God can be made. We ought to at least hope that each virtuous person’s life has value on balance. But to hope for that is to hope for something like a theodicy. So we ought to hope for something like a theodicy.

The above arguments may not be all that compelling. But at least they counter the argument in the other direction, that it is offensive to say that someone’s sufferings have a theodicy.

Here is yet another argument.

  1. That there is no good theodicy is an utterly depressing claim.

  2. One ought not advocate utterly depressing claims, without very strong moral reason.

  3. There is no very strong moral reason to advocate that there is no good theodicy.

  4. So, one ought not advocate that there is no good theodicy.

The grounds for 2 are pragmatic: utterly depressing claims tend to utterly depress people, and being utterly depressed is very bad. One needs very strong reason to do something that causes a very bad state of affairs. I suppose the main controversial thesis here is 3. Someone who thinks religion is a great evil might deny 3.

Let's not exaggerate the centrality of virtue to ethics

Virtues are important. They are useful: they internalize the moral law and allow us to make the right decision quickly, which we often need to do. They aren’t just time-savers: they shine light on the issues we deliberate over. And the development of virtue allows our freedom to include the two valuable poles that are otherwise in tension: (a) self-origination (via alternate possibilities available when we are developing virtue) and (b) reliable rightness of action. This in turn allows our development of virtue to reflect the self-origination and perfect reliability in divine freedom.

But while virtues are important, they are not essential to ethics. We can imagine beings that only ever make a single, but truly momentous, decision. They come into existence with a clear understanding of the issues involved, and they make their decision, without any habituation before or after. That decision could be a moral one, with a wrong option, a merely permissible option, and a supererogatory option. They would be somewhat like Aquinas’ angels.

We could even imagine beings that make frequent moral choices, like we do, but whose nature does not lead them to habituate in the direction of virtue or vice. Perhaps throughout his life whenever Bill decides whether to keep an onerous promise or not, there is a 90% chance that he will freely decide rightly and a 10% chance that he will freely decide wrongly, a chance he is born and dies with. A society of such beings would be rather alien in many practices. For instance, members of that society could not be held responsible for their character, but only for their choices. Punishment could still be retributive and motivational (for the chance of wrong action might go down when there are extrinsic reasons against wrongdoing). I think such beings would tend to have lower culpability for wrongdoing than we do. For typically when I do wrong as a middle-aged adult, I am doubly guilty for the wrong: (a) I am guilty for the particular wrong choice that I made, and (b) I am guilty for not having yet transformed my character to the point where that choice was not an option. (There are two reasons we hold children less responsible: first, their understanding is less developed, and, second, they haven’t had much time to grow in virtue.)

Nonetheless, while such virtue-less beings would be less responsible, and we wouldn’t want to be them or live among them, they would still have some responsibility, and moral concepts could apply to them.

Saturday, September 16, 2017

Adding a USB charging port to an elliptical machine

Last night I added a USB charging port to our elliptical machine, using a $0.70 buck converter, so that we can exercise while watching TV on a tablet even when the tablet’s battery is running low. Here are instructions.

Note, too, how the tablet is held in place with 3D printed holders. My next elliptical upgrade project will be to make it part of a USB game controller (the other part will be a Wii Nunchuk) so that one can control speed in games with one’s speed of movement.

Friday, September 15, 2017

Four-dimensionalism and caring about identity

In normal situations, diachronic psychological connections and personal identity go together. A view introduced by Parfit is that when the two come apart, what we care about are the connections and not the identity.


This view seems to me to be deeply implausible from a four-dimensional point of view. I am a four-dimensional thing. This four-dimensional thing should prudentially care about what happens to it, and only about what happens to it. The red-and-black four-dimensional thing in the diagram here (up/down represents time; one spatial dimension is omitted) should care about what happens to the red-and-black four-dimensional thing, all along its temporal trunk. This judgment seems completely unaffected by learning that the dark slice represents an episode of amnesia, and that no memories pass from the bottom half to the upper half.

Or take a case of symmetric fission, and suppose that the facts of identity are such that I am the red four-dimensional thing in the diagram on the right. Suppose both branches have full memories of what happens before the fission event. If I am the red four-dimensional thing, I should prudentially care about what happens to the red four-dimensional thing. What happens to the green thing on the right is irrelevant, even if it happens to have in it memories of the pre-split portion of me.

The same is true if the correct account of identity in fission is Parfit’s account, on which one perishes in a split. On this account, if I am the red four-dimensional person in the diagram on the left, surely I should prudentially care only about what happens to the red four-dimensional thing; if I am the green person, I should prudentially care only about what happens to the green one; and if I am the blue one, I should prudentially care only about what happens to the blue one. The fact that both the green and the blue people remember what happened to the red person neither makes the green and blue people responsible for what the red person did nor makes it prudent for the red person to care about what happens to the green and blue people.

This four-dimensional way of thinking just isn’t how the discussion is normally phrased. The discussion is normally framed in terms of us finding ourselves at some time—perhaps a time before the split in the last diagram—and wondering which future states we should care about. The usual framing is implicitly three-dimensionalist: what should I, a three-dimensional thing at this time, prudentially care about?

But there is an obvious response to my line of thought. My line of thought makes it seem like I am transtemporally caring about what happens. But that’s not right, not even if four-dimensionalism is true. Even if I am four-dimensional, my cares occur at slices. So on four-dimensionalism, the real question isn’t what I, the four-dimensional entity, should prudentially care about, but what my three-dimensional slices, existent at different times, should care about. And once put that way, the obviousness of the fact that if I am the red thing, I should care about what happens to the red thing disappears. For it is not obvious that a slice of the red thing should care only about what happens to other slices of the red thing. Indeed, it is quite compelling to think that the psychological connections between slices A and B matter more than the fact that A and B are in fact both parts of the same entity. (Compare: the psychological connections between me and you would matter more than the fact that you and I are both parts of the same nation, say.) The correct picture is the one here, where the question is whether the opaque red slice should care about the opaque green and opaque blue slices.

In fact, in this four-dimensionalist context, it’s not quite correct to put the Parfit view as “psychological connections matter more than identity”. For identity doesn’t obtain between different slices. Rather, what obtains is co-parthood, an obviously less significant relation.

However, this response seems to me to depend on a very common but wrongheaded version of four-dimensionalism. It is I that care, feel and think at different times. My slices don’t care, don’t feel and don’t think. Otherwise, there will be too many carers, feelers and thinkers. If one must have slices in the picture (and I don’t know that that is so), the slices might engage in activities that ground my caring, my feeling and my thinking. But these grounding activities are not caring, feeling or thinking. Similarly, the slices are not responsive to reasons: I am responsive to reasons. The slices might engage in activity that grounds my responsiveness to reasons, but that’s all.

So the question is what cares I prudentially should have at different times. And the answer is obvious: they should be cares about what happens to me at different times.

About the graphics: The images are generated using mikicon’s CC-by-3.0 licensed Gingerbread icon from the Noun Project, exported through this Inkscape plugin and turned into an OpenSCAD program (you will also need my tubemesh library).

Thursday, September 14, 2017

Agents, patients and natural law

Thanks to Adam Myers’ insightful comments, I’ve been thinking about how natural law ethics concerns natures in two ways: on the side of the agent qua agent and on the side of the patient qua patient.

Companionship is good for humans and bad for intelligent sharks, let’s suppose. This means that we have reasons to promote companionship among humans and to hamper companionship among intelligent sharks. That’s a difference in reasons based on a difference in the patients’ nature. Next, let’s suppose that intelligent sharks by nature have a higher degree of self-concern vs. other-concern than humans do. Then the degree to which one has an obligation to promote the very same good–say, the companionship of Socrates–will vary depending on whether one is human or a shark. That’s a difference in reasons based on a difference in the agents’ nature.

I suspect it would make natural law ethics clearer if natural lawyers were always clear on what is due to the agent’s nature and what is due to the patient’s nature, even if in fact their interest were solely in cases where the agent and patient are both human.

Consider, for instance, this plausible thesis:

  • I should typically prioritize my understanding over my fun.

Suppose the thesis is true. But now it’s really interesting to ask if this is true due to my nature qua agent or my nature qua patient. If I should prioritize my understanding over my fun solely because of my nature qua patient, then we could have this situation: Both I and an alien of some particular fun-loving sort should prioritize my understanding over my fun, but likewise both I and the alien should prioritize the alien’s fun over the alien’s understanding, since human understanding is more important than human fun, while the fun of a being like the alien is more important than the understanding of such a being. On this picture, the nature of the patient specifies which goods are more central to a patient of that nature. On the other hand, if I should prioritize my understanding over my fun solely because of my nature qua agent, then quite possibly we are in the interesting position that I should prioritize my understanding over my fun, but also that I should prioritize the alien’s understanding over the alien’s fun, while the alien should prioritize both its and my fun over its and my understanding. For me promoting understanding is a priority while for the alien promoting fun is a priority, regardless of whose understanding and fun they are.

And of course we do have actual and morally relevant cases of interaction across natures:

  • God and humans

  • Angels and humans

  • Humans and brute animals.

Wednesday, September 13, 2017

Probabilities and Boolean operations

When people question the axioms of probability, they may omit to question the assumptions that if A and B have a probability, so do A-or-B and A-and-B. (Maybe this is because in the textbooks those assumptions are often not enumerated in the neat lists of the “three Kolmogorov axioms”, but are given in a block of text in a preamble.)

First note that as long as one keeps the assumption that if A has a probability, so does not-A, then by De Morgan’s, any counterexample to conjunctions having a probability will yield a counterexample to disjunctions having a probability. So I’ll focus on conjunctions.

I’m thinking that there is reason to question these axioms, in fact two reasons. The first reason, one that I am a bit less impressed with, is that limiting frequency frequentism can easily violate these two axioms. It is easy to come up with cases where A-type events have a limiting frequency, B-type ones do, too, but (A-and-B)-type ones don’t. I’ve argued before that so much the worse for frequentism, but now I am not so sure in light of the second reason.

The second reason is cases like this. You have an event C that has no probability whatsoever–maybe it’s an event of a dart hitting a nonmeasurable set–and a fair indeterministic coin flip causally independent of C. Let H and T be the events of the coin flip being heads or tails. Then let A be the event:

  • (H and C) or (T and not C).

Here’s an argument that P(A)=1/2. Imagine a coin with erasable heads and tails images, and imagine that a trickster prior to flipping a coin is going to decide, using some procedure or other, whether to erase the heads and tails images on the coin and draw them on the other side. “Clearly” (as we philosophers say when we have no further argument!) as long as the trickster has no way of seeing the future, the trickster’s trick will not affect the probabilities of heads or tails. She can’t make the coin be any less or more likely to land heads by changing which side heads lies on. But that’s basically what’s going on in A: we are asking what the probability of heads is, with the convention that if C doesn’t happen, then we’ll have relabeled the two sides.

Another argument that P(A)=1/2 is this (due to a comment by Ian). Either C happens or it doesn’t. No matter which is the case, A has a chance 1/2 of happening.

So A has probability 1/2. But now what is the probability of A-and-H? It is the same as the probability of C-and-H, which by independence is half of the probability of C, and the latter probability is undefined. Half of something undefined is still undefined, so A-and-H has an undefined probability, even though A has a perfectly reasonable probability of 1/2.
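To spell the step out (this is just the previous paragraph made explicit): since A = (H and C) or (T and not-C), and H is incompatible with T, intersecting A with H discards the second disjunct. So A-and-H = H-and-C, and hence P(A-and-H) = P(H-and-C) = P(H)·P(C) = (1/2)·P(C) by independence, which is undefined whenever P(C) is undefined.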

A lot of this is nicely handled by interval-valued theories of probability. For we can assign to C the interval [0, 1], and assign to H the sharp probability [1/2, 1/2], and off to the races we go: A has a sharp probability as does H, but their conjunction does not. This is good motivation for interval-valued theories of probability.

Tuesday, September 12, 2017

Numerical experimentation and truth in mathematics

Is mathematics about proof or truth?

Sometimes mathematicians perform numerical experiments with computers. Goldbach’s Conjecture says that every even integer n greater than two is the sum of two primes. Numerical experiments have been performed that verified that this is true for every even integer from 4 to 4 × 10^18.
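As a toy version of such a numerical experiment (the real verifications out to 4 × 10^18 use far more sophisticated sieving, of course), here is a short Python script that checks the conjecture for even numbers up to 10,000 by exhibiting, for each one, a pair of primes summing to it:

def is_prime(n):
    # Trial-division primality test; fine for small n.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_witness(n):
    # Return a pair of primes summing to the even number n, or None if there is none.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 10001, 2):
    assert goldbach_witness(n) is not None, "counterexample at %d" % n
print("Every even number from 4 to 10000 is a sum of two primes.")

Note that the script does not merely confirm each instance: it produces the witnessing pair of primes, which is exactly the material for the proof of each individual instance discussed below.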

Let G(n) be the statement that n is the sum of two primes, and let’s restrict ourselves to talking about even n greater than two. So, we have evidence that:

  1. For an impressive sample of values of n, G(n) is true.

This gives one very good inductive evidence that:

  2. For all n, G(n) is true.

And hence:

  3. It is true that: for all n, G(n). I.e., Goldbach’s Conjecture is true.

Can we say a similar thing about provability? The numerical experiments do indeed yield a provability analogue of (1):

  4. For an impressive sample of values of n, G(n) is provable.

For if G(n) is true, then G(n) is provable. The proof would proceed by exhibiting the two primes that add up to n, checking their primeness and proving that they add up to n, all of which can be done. We can now inductively conclude the analogue of (2):

  5. For all n, G(n) is provable.

But here is something interesting. While we can swap the order of the “For all n” and the “is true” operator in (2) and obtain (3), it is logically invalid to swap the order of the “For all n” and the “is provable” operator in (5) to obtain:

  6. It is provable that: for all n, G(n). I.e., Goldbach’s Conjecture is provable.

It is quite possible to have a statement such that (a) for every individual n it is provable, but (b) it is not provable that it holds for every n. (Take a Goedel sentence g that basically says “I am not provable”. For each positive integer n, let H(n) be the statement that n isn’t the Goedel number of a proof of g. Then if g is in fact true, then for each n, H(n) is provably true, since whether n encodes a proof of g is a matter of simple formal verification, but it is not provable that for all n, H(n) is true, since then g would be provable.)

Now, it is the case that (5) is evidence for (6). For there is a decent chance that if Goldbach’s conjecture is true, then it is provable. But we really don’t have much of a handle on how big that “decent chance” is, so we lose a lot of probability when we go from the inductively verified (5) to (6).

In other words, if we take the numerical experiments to give us lots of confidence in something about Goldbach’s conjecture, then that something is truth, not provability.

Furthermore, even if we are willing to tolerate the loss of probability in going from (5) to (6), the most compelling probabilistic route from (5) to (6) seems to take a detour through truth: if G(n) is provable for each n, then Goldbach’s Conjecture is true, and if it’s true, it’s probably provable.

So the practice of numerical experimentation supports the idea that mathematics is after truth. This is reminiscent to me of some arguments for scientific realism.

Presentism and multiverses

  1. It is possible to have an island universe whose timeline has no temporal connection to our timeline.

  2. If presentism is true, it is not possible to have something that has no temporal connection to our timeline.

  3. So, presentism is not true.

Presentism and classical theism

  1. If presentism is true, then everything that exists, exists presently.

  2. Anything that exists presently is temporal.

  3. God exists.

  4. So, if presentism is true, then God is temporal.

  5. But God is not temporal.

  6. So, presentism is not true.

Some presentists will be happy to embrace the thesis that God is temporal. But what about presentist classical theists? I suppose they will have to deny (1). Maybe they can replace it with:

  7. If presentism is true, then everything temporal that exists, exists presently.

Presentism is no longer an elegant thesis about the nature of existence, though.

Maybe a better move for the presentist is to deny (2)? There is some reason to do that. God, while not being spatial, is everywhere. Similarly God is everywhen, and hence he is in the present, too. But I am not sure if being in the present is the same as existing presently.

Monday, September 11, 2017

Supertasks and empirical verification of non-measurability

I have this obsession with probability and non-measurable events—events to which a probability cannot be attached. A Bayesian might think that this obsession is silly, because non-measurable events are just too wild and crazy to come up in practice in any reasonably imaginable situation.

Of course, a lot depends on what “reasonably imaginable” means. But here is something I can imagine, though only by denying one of my favorite philosophical doctrines, causal finitism. I have a Thomson’s Lamp, i.e., a lamp with a toggle switch that can survive infinitely many togglings. I have access to it every day at the following times: 10:30, 10:45, 10:52.5, and so on. Each day, at 10:00 the lamp is off, and nobody else has access to the machine. At each time when I have access to the lamp, I can either toggle or not toggle its switch.

I now experiment with the lamp by trying out various supertasks (perhaps by programming a supertask machine), during which various combinations of toggling and not toggling happen. For instance, I observe that if I don’t ever toggle the switch, the lamp stays off. If I toggle it a finite number of times, it’s on when that number is odd and off when that number is even. I also notice the following regularities about cases where an infinite number of togglings happens:

  1. The same sequence (e.g., toggle at 10:30, don’t toggle at 10:45, toggle at 10:52.5, etc.) always produces the same result.

  2. Reversing a finite number of decisions in a sequence produces the same outcome when an even number of decisions is reversed, and the opposite outcome when an odd number of decisions is reversed.

(Of course, 1 is a special case of 2.) How fun! I conclude that 1 and 2 are always going to be true.

Now I set up a supertask machine. It will toss a fair coin just prior to each of my lamp access times, and it will toggle the switch if the coin is heads and not toggle it if it is tails.

Question: What is the probability that the lamp will be on at 11?

“Answer:” Given 1 and 2, the event that the lamp will be on at 11 is not measurable with respect to the standard (completed) product measure on a countable infinity of coin tosses. (See note 1 here.)
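Here is a rough reconstruction of why (the linked note presumably does this more carefully). Let E be the set of infinite toss sequences on which the lamp ends up on at 11, and let F_n be the collection of events determined by the first n tosses. Reversing any single toss is a probability-preserving operation, and by regularity 2 it moves a sequence from E to the complement of E and vice versa. So if E were measurable, then conditionally on any finite initial segment of tosses E would have probability 1/2; that is, P(E | F_n) = 1/2 almost surely for every n. But Lévy’s zero-one law says that P(E | F_n) converges almost surely to 1 on E and to 0 off E, not to 1/2. So E cannot be measurable with respect to the completed product measure.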

So, given supertasks (and hence the falsity of causal finitism), we could find ourselves in a position where we would have to deal with a non-measurable set.

Natural law love-first metaethics

Start with this Aristotelian thought:

  1. Everything should fulfill its nature, and every “should” fact is a norm specifying the norm of fulfilling one’s nature.

But not every “should” is a moral should. Sheep should have four legs, but a three-legged sheep is not morally defective. Here’s a hypothesis:

  2. A thing morally should A if and only if that thing has a will with an overriding norm of loving everything, and the norm that the thing should A is a specification of that norm.

On this theory, moral norms are norms for the same Aristotelian reason that all other norms are norms—all norms derive from the natures of things. But at the same time, the metaethics is a metaethics of love. What renders a norm a moral norm is its content, that it is a specification of the norm that one should love everything.

Why is it, on this theory, that I should be affable to my neighbor? Because such affability is a specification of the norm of fulfilling my nature. But that needn’t be my practical reason for the affability: rather, that is the explanation of why I should be affable (cf. this). What makes the norm of affability to my neighbor a moral norm? That I have a norm of love of everything, and that the norm of affability specifies that norm.

And we can add:

  3. A thing is a moral agent if and only if it has a will with an overriding norm of loving everything.

One could, perhaps, imagine beings that have a will with an overriding norm of self-benefit. Such beings wouldn’t be moral agents. But we are moral agents. In fact, I suspect the following is true:

  4. Loving everything is the only proper function of the human will.

Given the tight Aristotelian connection between proper function and norms:

  5. All norms on the human will are specifications of the norm of loving everything.

This metaethical theory I think is both a natural law theory and a love-first metaethics. It is a natural law theory in respect of the sources of normativity, and it is a love-first metaethics in respect of the account of moral norms. Thus it marries Aristotle with the Gospel, which is a good thing. I kind of like this theory, though I have a nagging suspicion it has problems.

Reductive accounts of matter

I’ve toyed with identifying materiality with spatiality (much as Descartes did). But here’s another very different reductive idea. Maybe to be material is to have energy. Energy on this view is a physical property, maybe a functional one and maybe a primitive one.

If this view is right, then one might have worlds where there are extended objects in space, but where there is no matter because the physics of these objects is one that doesn’t have room or need for energy.

Note that the sense of “matter” involved here is one on which fields, like the electromagnetic one, are material. I think that in the philosophical usage of “material” and “matter”, this is the right answer. If it turned out that our minds were identical with the electromagnetic fields in our brains, that would surely be a vindication of materialism rather than of dualism.

Now, here’s something I’m worrying about when I think about matter, at least after my rejection of Aristotelian matter. There seem to be multiple properties that are co-extensive with materiality in our world:

  • spatiality

  • energy

  • subjection to the laws of physics (and here there are two variants: subjection to our laws of physics, and subjection to some laws of physics or other; the latter might be circular, though, because maybe “physics” is what governs matter?).

Identifying matter with one or more of them yields a different concept of materiality, with different answers to modal questions. And now I wonder whether the question of what matter is is a substantive one or a merely verbal one. On the Aristotelian picture, it was clearly a substantive question. But apart from that picture, it’s looking more and more like a merely verbal question to me.

Non-measurable sets and intuition

Here’s an interesting reason to accept the existence of non-measurable sets (and hence of whatever weak version of the Axiom of Choice that it depends on). A basic family of mathematical results in analysis says that most measurable real-valued functions on the real line are “close to” being continuous, i.e., that they can be approximated by continuous functions in some appropriate sense. But it is intuitive to think that there “should” be real-valued functions on the real line that are not close to being continuous—there “should” be functions that are very, very messy. So, intuitively, there should be non-measurable functions, and hence non-measurable sets.
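The kind of result in question includes, for instance, Lusin’s theorem: if f is a (Lebesgue) measurable real-valued function on an interval [a, b] and ε > 0, then there is a continuous function g on [a, b] such that the set of points where f and g differ has measure less than ε. So every measurable function, however wild it looks, agrees with a continuous function outside a set of arbitrarily small measure.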