Greg says,

I am sorry that my lengthy response wasn't very clear. Let me see if we can get on the same page. (Of course, it's not essential that we agree, right? Just essential that we understand the other person's arguments. I really appreciate that you're taking the time to do so with mine!)

So, there are a couple of misunderstandings here. Let's try to tackle them one by one. The first one is that I don't understand your model. At the beginning of your last message, you said, "our universe is just one out of a pool of infinitely many conceivable universes." Later, you said, "There would be no need for me to resort to a multiverse." So do you believe in the existence of multiple universes, or not?

My guess is not, because you said "conceivable". But since I don't know, let's explore both options.

Option 1: First, let's assume there is just one universe. Yes, it is one universe out of an infinity of conceivable universes, but none of these other universes is real. Since there is no explanation of how or why this particular universe is actualized over the other conceivable universes, any discussion of probability is useless anyway, normalizability problem or not. Do you agree? (I think so because at the end of your last message, you said, "the concept of probability is simply not meaningful in this context". Unless I misunderstand what you meant by that statement.) And if so, then you are stuck with an apparently finely-tuned universe with no explanation for it. The universe just IS, and that's the way it is. Pretty strange, right?

Option 2: Now let's assume there really is a real multiverse ensemble. In that case, there really is some physical mechanism that generates universes, so this real, physical mechanism has real probability distributions for how it generates all the constants and initial conditions. This is what I was saying last time, and if so, then again, you end up with either a very small probability of our universe being the way that it is, or a finely-tuned mechanism. Again, non-theists like the first (very small probability) because you can conceivably defeat that with a sufficiently large N. A finely-tuned mechanism would be problematic because then you have fine tuning at the most basic property of all of physical reality, and you can't explain that.

I really like one of the papers you pointed me to by Colyvan et al. In particular, I liked it when they said, "After all, if they [meaning the constants] could not have been different, the probability of the universe being just as we find it is 1, and no fine tuning has occurred. But what is the modality invoked here? Logical possibility? Conceptual possibility? Physical possibility? This is rarely spelled out in the usual presentations of the argument." (p. 326)

This is what I was saying, and I think you would agree. They go on to discuss the problem with using logical possibility as the modality, precisely because it runs into the normalizability problem, as you pointed out. What they are missing here is that, if there is no real mechanism that "decides" which constants to pick from, there is no point in talking about probability anyway, again, normalizability problem or not. We end up with the universe just IS.

Do you see a third option besides either these other universes are real, or they are not? Or do you see my characterization of the first option as flawed? (I think this is where more discussion will occur, but I'd like to hear what you have to say about it before I ramble about this on and on. And on and on...as I tend to do.)

Another misunderstanding I think we had is related to what I just laid out as our two Options. In particular, you quoted me as saying, "everything's equally impossible or our current value is necessary." That in a nutshell is what I was saying our two options were. But I got that from the Colyvan paper: "The fine tuning argument, on its most plausible interpretation, hence not only shows that life-permitting universes are improbable, but, arguably, that they are impossible!" (p. 327) Juxtapose that statement with, "Physical possibility (construed as consistency with the laws of physics and physical constants as we find them) however, restricts the range too much for the proponent of the fine tuning argument, leaving the actual values as the only possible ones, and hence setting the probability at 1!" (p. 329, original emphases removed)

Another misunderstanding is how you then go on to characterize the normalizability problem: "each possible universe is either equally impossible, or they all have a small nonzero probability. They can't be impossible, because then the probabilities don't add up to 1, and they can't have a nonzero probability, because then the probabilities add up to infinity." The either/or statement you lead off with is not true. (Before I go on, I do think you characterized the normalizability problem accurately, but I don't think its conditions are met in reality.) Of course there are probability distributions with an infinite domain that are normalizable. We just don't know what the correct probability distribution to use is. But again, either there is a real mechanism that generates these universes, in which case there is a real distribution so it is really normalizable; or there is not, in which case it is futile to talk about any probability distribution because there is nothing to draw from.
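
Greg's observation that a distribution with an infinite domain can still be normalizable is easy to verify numerically. Below is a minimal Python sketch (the exponential density is my choice of example, not anything taken from the discussion) that integrates λe^(−λx) over [0, ∞) and recovers a total probability mass of 1, which no uniform density on an unbounded range can do.

```python
import math

# Greg's claim, checked numerically: a density over the infinite domain
# [0, inf) can still integrate to 1, as long as it is not uniform. The
# exponential density f(x) = lam * exp(-lam * x) is the textbook example.

def exponential_density(x, lam=1.0):
    return lam * math.exp(-lam * x)

def integrate(f, a, b, steps=100_000):
    """Composite trapezoidal rule for f over [a, b]."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

# For lam = 1, integrating out to x = 50 captures all but ~1e-22 of the mass.
mass = integrate(exponential_density, 0.0, 50.0)
print(f"total probability mass: {mass:.6f}")  # 1.000000
```

Any λ > 0 works here; what cannot work is a constant density over [0, ∞), whose integral is either 0 or infinite — exactly the dichotomy the normalizability problem trades on.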

Final misunderstanding: "If I'm right that the concept of probability is simply not meaningful in this context, then this dissolves the mystery. There would be no need for me to resort to a multiverse or necessity." Yeah, totally, you may be right that the concept of probability is not meaningful. That's what I was trying to say in my previous message (and is captured in Option 1 of this message). And, in which case, you would not need to resort to a multiverse because you have already assumed there is not one. But then you are stuck with necessity, because the universe just IS.

OK, so those are the critical parts where I either misunderstood you, or where I think you misunderstood me.

Also, let me end with this: this discussion is awesome and I hope you don't get too frustrated at how long I take to respond. You keep up the good work with cordially asking questions and rebutting Christians' arguments. I know a lot of atheists (and Christians too) that just want to have their say. Maybe that's you too, but you're hiding it really well, which means that's not you.

What I should have said in all of that was simply, I think you characterized the normalizability problem correctly but I just don't think it's relevant. Because either we're dealing with a real mechanism (which then must be normalizable by virtue of its being real) or not, in which case a discussion of probability is futile. What do you think?

Take care!

PS: I just took a look back at my previous message, and I even used "Option 1" and "Option 2" in that message too. I forgot, and I guess it's just what I think so strongly that it came out twice. Shame on me, because it looks like that means I didn't actually explain anything new this time. Let me know if that's true.

[See summary page of this discussion, with links to all the posts, here.]

## Saturday, June 27, 2015

### FTA part 7: Aron asks clarifying questions (part of the Aron series)

Aron wrote:

Thanks for the response! I want to make sure I understand you, and that you understand me. My objection was this: our universe is just one out of a pool of infinitely many conceivable universes. P(FT/~G) is the probability of picking a universe like ours at random. Probabilities only make sense if they add to one (e.g., for a die, 6 × 1/6 = 1). So for the FTA to work, we need to be able to assign each possible universe a probability that allows them to add up to 1. But this is impossible because if each universe has a probability of 0, they all add up to 0. And if each universe is given a small nonzero probability, they add up to infinity. Since the probabilities can't add up to 1, it is meaningless to talk about probabilities here. The objection is that our intuitions have led us to extend the concept of "probability" far beyond the context in which it is applicable.
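
Aron's bookkeeping can be made concrete with a few lines of Python. This is only an illustration — the universe counts and the value of eps below are arbitrary — but it shows why a fair die normalizes while a "uniform" assignment over ever more universes cannot.

```python
# Aron's premise: probabilities are only coherent if they sum to 1.
# The universe counts and eps below are arbitrary illustrations.

# Finite case: a fair die's six outcomes at 1/6 each sum to 1
# (up to floating-point rounding).
die_total = sum(1 / 6 for _ in range(6))
print(die_total)

# "Uniform" over infinitely many universes, probed by truncation: give each
# of N universes the same tiny probability eps. The total N * eps grows
# without bound as N grows, so it cannot equal 1 for every N.
eps = 1e-9
totals = {n: n * eps for n in (10**9, 10**12, 10**15)}
print(totals)

# And if each universe instead gets probability 0, the total stays 0
# no matter how many universes there are.
print(sum(0 for _ in range(10**6)))  # 0
```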

The way you characterized the normalizability objection is like this: "everything's equally impossible or our current value is necessary." I'm not sure this is what I was getting at. Instead, it should say "each possible universe is either equally impossible, or they all have a small nonzero probability. They can't be impossible, because then the probabilities don't add up to 1, and they can't have a nonzero probability, because then the probabilities add up to infinity."

If I'm right that the concept of probability is simply not meaningful in this context, then this dissolves the mystery. There would be no need for me to resort to a multiverse or necessity.

In sum, I'm not quite sure exactly what your objection was to the normalizability problem.

[See summary page of this discussion, with links to all the posts, here.]


### FTA part 6: Defying intuition, the multiverse, or necessity (part of the Aron series)

Greg says,

Yes, the normalization problem does seem to come up, doesn't it? But the more I think about it, the more I think it's a cover-up. Here's what I mean. Just like the initial objection you raised about degree of fine tuning not translating into a rigorous probability, in this case this is just another layer of subtlety, but the conclusion is the same. In going deeper with this, we are essentially just pushing it back another layer.

Another way to think about it is, the first level of fine-tuning is very intuitive, and speaks easily to the common person. "Wow! Look how finely-tuned these constants are! This argues for intention in the make-up of the universe." This is the intuitive conclusion, and sometimes intuition is right.

On the other hand, sometimes intuition is wrong. For someone who wants to contest this conclusion (and please don't consider that turn of the phrase to mean I think the challenger of the FTA is disingenuous...we need to think deeply about it), there is always a way to get out of it. There's always a door to exit for the skeptic. But every time you exit the door, you end up in another room that is smaller and more difficult to exit. A smaller door is there to exit the next room, and still smaller. Pretty soon you'll need one of Alice's mushrooms to get out of the door, it's so small. How deep does the rabbit hole go?

Did that sound pompous? Sorry, I thought of that word picture last night and I really liked it. In any case, my point is that the more one plays the skeptic to deny what intuition is telling us, the harder you have to work and the more you have to deny precious bits of reality.

OK, now that I've played it up so much, do I actually have an argument? (Hee-hee, I hope so. We'll see if you like it or not.)

So let's start with the point I made last time. Either the small life-permissible interval of G is improbable, or the probability distribution from which we are drawing G must be itself finely-tuned (aka, atypical). That makes intuitive sense. It's a bit harder to understand than the basic "G must be within one part in 10^60, therefore God did it," but it's still pretty intuitive. The rebuttal to that is we have no reason to consider any particular sort of probability measure. Indeed, the normalizability problem destroys fine-tuning: either everything's equally impossible, or our current value is necessary (P = 1). What method do we have to restrict the probability distribution to some intermediate shape?
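
Greg's dilemma in the paragraph above can be put in toy numerical form. In the sketch below (the life-permitting window width and the candidate ranges are invented for illustration, not physics), a uniform distribution on [0, upper] gives a fixed window the probability window/upper, which collapses toward 0 as the allowed range grows — the "improbable" horn — while keeping the probability non-negligible would require a specially shaped distribution — the "finely-tuned" horn.

```python
# Illustrative numbers only: the life-permitting window width and the
# candidate ranges below are invented, not physics.
window = 1e-3  # hypothetical width of the life-permitting interval for G

for upper in (1.0, 1e6, 1e12, 1e60):
    prob = window / upper  # uniform distribution on [0, upper]
    print(f"range [0, {upper:g}]: P(life-permitting) = {prob:g}")

# As upper -> infinity the probability goes to 0 for ANY fixed window,
# which is why a uniform distribution on an unbounded range cannot be
# normalized, and why holding the probability at some intermediate value
# would demand a specially shaped (finely-tuned) distribution instead.
```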

So, we drop-kick intuition and need to go one level deeper. (Remember how I asked, "How deep does the rabbit hole go?" I have a feeling to get to the bottom of this conversation, we'll eventually have to discuss properly basic beliefs and brains-in-a-vat. It's a steep price to pay to be constantly skeptical of the intuitive power of the FTA.) If we really want to have a probability distribution to draw from, we need a mechanism. Here our discussion will bifurcate into two plausible solutions.

(Before I do that, can I mention an aside here? Initially, I presented the FTA as a rigorous Bayesian-type proof. Recall you challenged my ability to say P(FT | ~G) is super-low. Now I just want to recall the point that these probabilities in Bayesian arguments are *epistemic*. Meaning, they're "what are the odds of that happening?"-type probabilities. This is the reason why the Bayesian argument goes through, because most will understand the fine-tuning of the constants and conditions of the universe and of earth and intuitively agree that P(FT | ~G) is low, even if it can't be proven rigorously.)

OK, back to the bifurcation. There are now two naturalistic options (to avoid God): (1) either the universe is alone (and necessary), or (2) it is one of an ensemble of universes, commonly called the multiverse (which then itself is necessary).

Option 1: if the universe is necessary and alone, then all the constants and conditions could not have been other than what they are. In that sense, all of these probabilities would be unity. How could it have been any other way if the universe itself is necessary? But if that is the case, we again are stuck with asking why it had to have been this way. What is it about the universe and necessity that made it so that life could possibly exist? Especially when it seems like there are so many other ways it could have been that would have precluded life. Again, we are now not only stuck with asking "Why is there something rather than nothing?" (since the universe has no explanation for its existence, it would seem rather odd that it would be the necessary entity), but also with "Why is the universe the way that it is?" (since it being just the way it is permitted intelligent life to develop within it to ask these questions).

Now, one way you could answer these questions is flippantly. Dr. Krauss is a famous example of this, with his, "'Why' questions are silly." But I don't regard you as thinking that. So then why do you think there is something rather than nothing? Why do you think the universe is the way it is? Remember, without God and thus without intention, there is no explanation for these facts.

Option 2: if the universe is one of many universes in the multiverse, then plausibly this could explain how the perceived fine-tuning arose. Returning to the normalizability problem, the main issue I have with it is, if there really is a natural mechanism that "chooses" values of constants for the universe, then it cannot have the normalizability problem. This is because it must have a real probability distribution, not this hypothetical/philosophical/no-logical-restriction type distribution. So the existence of the multiverse then solves the normalizability problem: either the probability distribution is typical, and thus our universe is rare, or the universe-generating mechanism itself has a finely-tuned probability distribution to produce a bunch of universes like ours. In the second case, our universe is not rare (they're all like ours), but the fine-tuning is in the multiverse itself.

Skeptics rather like the first case: our universe is rare, but the number of universes (probabilistic resources) is so high that one such as ours is bound to have been generated randomly.
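
The "probabilistic resources" move in the previous paragraph can be sketched as a toy Monte Carlo simulation. Everything here is assumed for illustration — the exponential distribution of the constant and the life-permitting window are arbitrary choices, not physics — but it shows how an event that is rare per universe becomes near-certain across a large enough ensemble.

```python
import random

random.seed(0)

LOW, HIGH = 1.0, 1.001  # hypothetical life-permitting window for the constant

def life_permitting(g):
    return LOW <= g <= HIGH

def chance_of_at_least_one(n_universes, trials=1000):
    """Fraction of trials in which an ensemble of n_universes, each drawing
    its constant from an (assumed) exponential distribution, contains at
    least one life-permitting universe."""
    hits = 0
    for _ in range(trials):
        if any(life_permitting(random.expovariate(1.0))
               for _ in range(n_universes)):
            hits += 1
    return hits / trials

# Rare for a single universe (roughly 4 chances in 10,000 per draw)...
print(chance_of_at_least_one(1))
# ...but virtually guaranteed somewhere in an ensemble of 100,000 universes.
print(chance_of_at_least_one(100_000, trials=20))
```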

I could go on and on about the multiverse, so let me leave it with this so you can respond before I go off the deep end: I don't see Option 1 as viable. For the skeptic, the universe just can't be the necessary entity. It raises too many questions about sufficient explanation. So, Option 2, the multiverse, must be the fallback if you want to escape through the ever-shrinking skeptical door. Furthermore, the second case of Option 2 (the multiverse itself is finely-tuned) cannot be the case for the skeptic, as this would be identical in ontology to Option 1. Therefore, as I see it, the skeptic must choose the first case of Option 2: the multiverse generates widely-varying random universes, one of which is the lucky one (ours).

Is that where you think you'd go with this?

[See summary page of this discussion, with links to all the posts, here.]


### FTA part 5: Aron introduces the normalizability problem (part of the Aron series)

Aron says:

If I grant a uniform distribution for the sake of argument, then the probability of G being "just right" is the ratio: (life permitting values/possible values). As far as I know (and I'm no expert) there's nothing in modern physics that restricts the range of possible values. Robin Collins, for example, says, "The value of G, for instance, conceivably could have been any number between 0 and infinity" (http://home.messiah.edu/%7Ercollins/Fine-tuning/FINETLAY.HTM).

So we can either say that the range of physically possible values is infinite, or we can say that we simply have no idea what the range is. The second option kills the fine tuning argument, so you should prefer the first option.

Here is my problem: the axiom of normalizability requires that the probabilities of all the possibilities add up to 1. If there are infinitely many possible values, and each is given the same super small non-zero probability, this adds up to infinity. If, instead, we give each possibility a zero probability, it adds up to 0. Either way, we can't normalize the probability space, so we can't meaningfully talk about probabilities in this context. P(FT/~G) is not low; it just doesn't even make sense to ask for this number. (This is the argument made by McGrew et al. here: http://philpapers.org/rec/MCGPAT. The point was also independently made by Colyvan et al. here: http://www.colyvan.com/papers/finetuning.pdf, and by Paul Davies in "The Mind of God.")

One solution to the normalizability problem is to drop the assumption of a uniform distribution. A nonuniform distribution would allow us to normalize a space of infinite possibilities. You took this approach and said something like this: "I recognize that there are multiple possible distributions, and we don't know which distribution is correct. But since the set of life favoring distributions is just a small set of the total number of possible distributions, the probability that the actual distribution favors life permitting values is still very low."

Notice that this approach does away with the assumption of a uniform distribution over the range of possible values, but then assumes a uniform distribution over the range of possible distributions. While this proposal allows us to normalize the space of possible values, it simply recreates the normalizability problem, because now we are unable to normalize the space of possible distributions. We are faced with an infinite number of possible distributions, and you seem to be asking that we lay a uniform distribution over this infinite range. This is the normalizability problem all over again.

Another possible solution is to find a way to limit the range of possible values, but I don't think this works. You seem to have done this by focusing on possible values for G between 0 and 2. Why exactly did you restrict the range of possibilities this way?

[See summary page of this discussion, with links to all the posts, here.]

