Tuesday, January 26, 2016

Conciliation and caution

I assign a credence of 0.75 to p and then find out that you assign a credence of 0.72 to it, despite the two of us having the same evidence and the same epistemic prowess. According to conciliationism, I should lower my credence and you should raise yours.

Here's an interesting case. When I assigned 0.75 to p, I reasoned as follows: my evidence prima facie supported p to a high degree, say 0.90, but I know that I could have made a mistake in my evaluation of the evidence, so to be safe I lowered my credence to 0.75. You, being my peer and hence equally intellectually humble, proceeded similarly. You evaluated the evidence at 0.87 and then lowered your credence to 0.72 to be safe. Now when I learn that your credence is 0.72, I assume you were likewise being humbly cautious. So I assume you had some higher initial evaluation, but then lowered it to be on the safe side. But now that I know that both you and I evaluated the evidence as significantly favoring p, there is no justification for as much caution. As a result, I raise my credence. And maybe you proceed similarly. And if we're both advocates of the equal weight view, thinking that we should treat each other's credences as on a par, we will both raise our credences to the same value, say 0.80. As a result, you revise in the direction conciliationism tells you to (though further than most conciliationists would allow), while I revise in the direction opposite to the one conciliationism prescribes.
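Here is a minimal numerical sketch of that reasoning; the sizes of the caution discounts (0.15 at first, 0.085 once two independent evaluations agree) are chosen purely for illustration and are not part of the argument:

    # A toy model of the reasoning above; the discount sizes are merely assumed.

    def infer_evaluation(reported_credence, assumed_discount):
        # Recover a peer's prima facie evaluation from their cautious report.
        return reported_credence + assumed_discount

    def pooled_credence(my_evaluation, peer_evaluation, reduced_discount):
        # Average the prima facie evaluations, then apply a smaller discount,
        # since agreement between two evaluators makes a gross error less likely.
        return (my_evaluation + peer_evaluation) / 2 - reduced_discount

    my_evaluation = 0.90                              # my reading of the evidence
    my_report = my_evaluation - 0.15                  # 0.75 after the caution discount
    peer_evaluation = infer_evaluation(0.72, 0.15)    # 0.87, inferred from your report

    print(pooled_credence(my_evaluation, peer_evaluation, 0.085))
    # 0.80: I move up from 0.75 and you, reasoning symmetrically, move up from 0.72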

The case appears to be a counterexample to conciliationism. Now, one might argue that I have been unfair to conciliationists. It's not uncommon in the literature to define conciliationism as simply the view that both parties need to change their credences, rather than the view that each must change in the direction of the other's credence. And in my example, both change their credences. I think this reading of conciliationism isn't fair to the motivating intuitions or to the etymology. Someone who, upon finding out about a disagreement, always changes her credence away from the other's credence is surely far from being a conciliatory person! Be that as it may, I suspect that counterexamples like the above can be tweaked to handle the weaker reading as well. For instance, I might reasonably reason as follows:

You assign a smaller credence than I do, though it's pretty close to mine. Maybe you started with an initial estimate close to but lower than mine and then, out of caution, lowered it by the same amount I did. Since your initial estimate was lower than mine, I will lower mine a little. But since it was close to mine, I don't need to be as cautious as before.
It seems easy to imagine a case like this where the two effects cancel out, and I'm left with the same credence I started with. The result is a counterexample to a conciliationism that merely says I shouldn't stay pat.
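Continuing the toy sketch from above, again with purely illustrative numbers, the two effects can indeed be made to cancel exactly:

    # Cancellation case: your slightly lower evaluation pulls my estimate down,
    # but the reduced caution discount pushes it back up by the same amount.
    peer_evaluation = infer_evaluation(0.71, 0.15)           # 0.86
    print(pooled_credence(0.90, peer_evaluation, 0.13))      # 0.75, exactly where I started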

4 comments:

Heath White said...

Maybe we need to distinguish between a person's prima facie (p.f.) credence and their ultima facie (u.f., or actual) credence, where the difference is that the ultima facie credence is arrived at by taking into account what various epistemic agents arrived at, prima facie, on the basis of the evidence. For instance, one might modify one's p.f. credence on the basis that others disagree p.f., but one should not modify one's credence on the basis that others disagree u.f., because then there will be all kinds of cycles and double-counting of evidence.

But we could apply this sort of discount to ourselves. Maybe I come up with a 0.9 credence p.f., but considering my weak epistemic record I discount myself to 0.6 or whatever. But then when I want to take your judgments into account, I should consider your p.f. credence, not your u.f. credence.

Alexander R Pruss said...

Something like that sounds right. It would be interesting to see if one could come up with a more precise account of what is going on here.

Heath White said...

Suppose you want to invest in the stock market and you buy a simple black box trading system, which takes questions of the form “Will stock XYZ go up tomorrow?” and answers “yes” or “no”. Call this system B1 and say that it has positive reliability r1, which means that Pr(XYZ rises tomorrow | B1 says XYZ rises tomorrow) = r1. Assume B1 is at least as good as chance.

Suppose further that you know nothing about predicting market prices other than what B1 tells you. You ask about XYZ and it says “yes.” I think you should have credence r1 that XYZ rises tomorrow.

Now suppose you want to improve your returns, so you buy another black box system B2 from a different company. B2 has positive reliability r2 and is also at least as good as chance. You ask both B1 and B2 what will happen to the price of XYZ tomorrow. Frustratingly, I cannot do the math to get the right answer here, but I can figure this much: if B1 and B2 both answer yes, your credence should be at least as great as max(r1,r2); if B1 and B2 both answer no, your credence should be at most as great as min(1-r1, 1-r2); while if B1 and B2 disagree, your credence should lie between the credences that each box on its own would warrant. (Some of the math depends on how correlated the errors of B1 and B2 are; I have never seen this important point addressed in the disagreement literature.)
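Under some strong extra assumptions not made above (a 50/50 prior that XYZ rises, each box correct with probability r_i whichever answer is true, and errors independent given the truth), the combination can be computed with Bayes' theorem. A minimal sketch:

    # Minimal Bayesian combination sketch. Assumes a 50/50 prior, that each box
    # is correct with probability r_i regardless of the true answer, and that
    # the boxes' errors are independent conditional on the truth.

    def combine_binary_predictions(reliabilities, answers, prior=0.5):
        # reliabilities: r_i = P(box i answers correctly), each assumed >= 0.5
        # answers: True if box i says "XYZ rises tomorrow", False otherwise
        odds = prior / (1 - prior)
        for r, says_rise in zip(reliabilities, answers):
            odds *= r / (1 - r) if says_rise else (1 - r) / r
        return odds / (1 + odds)

    # Both boxes say "yes": the result exceeds max(r1, r2).
    print(combine_binary_predictions([0.8, 0.6], [True, True]))    # ~0.857
    # The boxes disagree: the result lies between the credences each box alone
    # would warrant (here 0.8 from B1's "yes" and 0.4 from B2's "no").
    print(combine_binary_predictions([0.8, 0.6], [True, False]))   # ~0.727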

Now suppose you take an interest in predicting future stock movements by looking at (say) past movements, and you develop some better-than-chance expertise. You can call yourself B3. Now you can add your own (B3) predictions to the predictions of B1 and B2, and (if you can figure out how correlated your errors are) you can use a version of the same calculation as before to arrive at a better credence for the movement of XYZ.
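Continuing the sketch above, your own judgment simply enters as one more conditionally independent answer (the 0.55 reliability for B3 is an arbitrary illustration):

    # Suppose B1 and B2 say "rises" while you (B3, reliability 0.55) say "falls".
    print(combine_binary_predictions([0.8, 0.6, 0.55], [True, True, False]))   # ~0.83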

It is important, though, that the B3 estimate you begin with does not depend on the predictions of B1 and B2, i.e. does not use them as evidence. Otherwise their predictions will count twice: once in coming up with (the corrupted) B3, and once again when reconciling B1, B2, and (corrupted) B3 for the final credence. (For the same reason, it is important that B1 and B2 don't take each other's predictions as inputs.) Maybe what comes to the same thing is to say that B3 is only useful insofar as it makes uncorrelated errors, and taking others' predictions as inputs can only introduce correlation.

So we need to distinguish between (i) an initial estimate based on a subset of evidence that does not include others’ judgments; and (ii) the final credence that incorporates all evidence including others’ judgments.

An instance of this distinction: if B1 and B2 crash, so that all I have left is my own B3 estimate, I should not simply believe flat-out whatever I think will happen to XYZ. Rather, I should have a view about how reliable this judgment is, and then my credence should be set equal to this degree of reliability.

I think the application to the problem of disagreement is pretty clear: the conciliationist, in considering the opinions of others, should acknowledge that we need (i) their estimates formed independently of the opinions of others, and (ii) some notion of how correlated the various errors in our universe of opinions are likely to be, both with each other and with our own. I’m not sure we ever have this information, but we’d need at least an approximation of it.

Heath White said...

(cont'd)

Now, modify the model a bit. Suppose B1 and B2, instead of having binary outputs, had scalar outputs. You asked, “How much will XYZ move tomorrow?” and each answered with a percentage in the range from -100 to positive infinity. Agglomerating the outputs of B1 and B2 is now much harder. To do it right, you’d need information on how likely each was to be right, and on how their errors were correlated with each other across different sorts of answer. And you might prefer a B that was approximately right much of the time to one that was wrong less often but way off when it was. I have no idea how to do this calculation.
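For the special case of two unbiased scalar forecasts whose error variances and error correlation are known (a big "if", and the numbers below are purely illustrative), there is a standard minimum-variance way to combine them, and it makes vivid how much the correlation matters:

    # Minimum-variance combination of two unbiased forecasts, assuming their
    # error variances and the correlation between their errors are known.

    def combine_scalar_forecasts(f1, f2, var1, var2, rho):
        cov = rho * (var1 ** 0.5) * (var2 ** 0.5)
        w = (var2 - cov) / (var1 + var2 - 2 * cov)   # weight on the first forecast
        return w * f1 + (1 - w) * f2

    # Uncorrelated errors: each forecast is weighted by the other's error variance.
    print(combine_scalar_forecasts(2.0, 5.0, var1=1.0, var2=4.0, rho=0.0))   # 2.6
    # Correlated errors: here the noisier forecast adds nothing and gets zero weight.
    print(combine_scalar_forecasts(2.0, 5.0, var1=1.0, var2=4.0, rho=0.5))   # 2.0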

But turning to the actual problem of disagreement, this is the kind of problem we have to solve, because when we ask other intelligent people controversial questions, we get at least this much complexity: a binary answer plus a scalar credence. (Real questions also often have non-binary answers.) So if we ask others, in effect, “How likely is it that p?” and get a variety of answers, it will be very tough to figure out how to modify one’s own opinion correctly. Again, it may be better to be in the right neighborhood with one’s credence than to be either precisely right or very wrong; and preferences over probability distributions are not themselves part of probability theory. And never mind the difficulty of discounting for the fact that others’ views are almost certainly not independent of each other, i.e. have used each other, or common sources, as inputs. Also, their credences will be affected by their views of their own reliability, which are themselves probably influenced by others’ opinions, both about the facts and about their ability to evaluate the facts.
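Just to illustrate how much turns on the modelling choices, here are two of the simplest textbook rules for pooling several people's credences in p; neither handles the correlation or self-assessment problems just mentioned, and the equal weights are assumed only for illustration:

    # Two simple pooling rules for credences in p.

    def linear_pool(credences, weights):
        # Weighted arithmetic average of the credences.
        return sum(w * c for w, c in zip(weights, credences))

    def logarithmic_pool(credences, weights):
        # Weighted geometric averages of the credences in p and in not-p,
        # renormalized so the two sum to one.
        pro = 1.0
        con = 1.0
        for w, c in zip(weights, credences):
            pro *= c ** w
            con *= (1 - c) ** w
        return pro / (pro + con)

    credences = [0.9, 0.7, 0.4]
    weights = [1 / 3, 1 / 3, 1 / 3]
    print(linear_pool(credences, weights))        # ~0.667
    print(logarithmic_pool(credences, weights))   # ~0.71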

Writing this has made me appreciate just how hard the problem is. My main simple takeaway is the epistemic importance, and difficulty, of preserving uncorrelated patterns of error. I think this is not stressed enough in the literature, or wasn’t last time I looked. And then the idea of preferences over probability distributions strikes me as worth exploring.