Saturday, June 27, 2015

FTA part 8: Clarification on the multiverse vs. necessity (part of the Aron series)

Greg says,
I am sorry that my lengthy response wasn't very clear.  Let me see if we can get on the same page.  (Of course, it's not essential that we agree, right?  Just essential that we understand the other person's arguments.  I really appreciate that you're taking the time to do so with mine!)

So, there are a couple of misunderstandings here.  Let's try to tackle them one by one.  The first one is that I don't understand your model.  At the beginning of your last message, you said, "our universe is just one out of a pool of infinity conceivable universes."  Later, you said, "There would be no need for me to resort to a multiverse."  So do you believe in the existence of multiple universes, or not?

My guess is not, because you said "conceivable".  But since I don't know, let's explore both options.

Option 1: First, let's assume there is just one universe.  Yes, it is one universe out of an infinity of conceivable universes, but none of these other universes is real.  Since there is no explanation of how or why this particular universe is actualized over the other conceivable universes, any discussion of probability is useless anyway, normalizability problem or not.  Do you agree? (I think so because at the end of your last message, you said, "the concept of probability is simply not meaningful in this context".  Unless I misunderstand what you meant by that statement.)  And if so, then you are stuck with an apparently finely-tuned universe with no explanation for it.  The universe just IS, and that's the way it is.  Pretty strange, right?

Option 2: Now let's assume there really is a real multiverse ensemble.  In that case, there really is some physical mechanism that generates universes, so this real, physical mechanism has real probability distributions for how it generates all the constants and initial conditions.  This is what I was saying last time, and if so, then you again end up with either a very small probability of our universe being the way that it is, or a finely-tuned mechanism.  Again, non-theists like the first (very small probability) because you can conceivably defeat it by making N, the number of universes, large enough.  A finely-tuned mechanism would be problematic because then you have fine tuning in the most basic property of all of physical reality, and you can't explain that.

I really like one of the papers you pointed me to by Colyvan et al.  In particular, I liked it when they said, "After all, if they [meaning the constants] could not have been different, the probability of the universe being just as we find it is 1, and no fine tuning has occurred. But what is the modality invoked here? Logical possibility? Conceptual possibility? Physical possibility? This is rarely spelled out in the usual presentations of the argument." (p. 326)

This is what I was saying, and I think you would agree.  They go on to discuss the problem with using logical possibility as the modality, precisely because it runs into the normalizability problem, as you pointed out.  What they are missing here is that, if there is no real mechanism that "decides" which constants to pick from, there is no point in talking about probability anyway, again, normalizability problem or not.  We end up with the universe just IS.

Do you see a third option besides either these other universes are real, or they are not?  Or do you see my characterization of the first option as flawed?  (I think this is where more discussion will occur, but I'd like to hear what you have to say about it before I ramble about this on and on.  And on and on, as I tend to do.)

Another misunderstanding I think we had is related to what I just laid out as our two Options.  In particular, you quoted me as saying, "everything's equally impossible or our current value is necessary."  That in a nutshell is what I was saying our two options were.  But I got that from the Colyvan paper: "The fine tuning argument, on its most plausible interpretation, hence not only shows that life-permitting universes are improbable, but, arguably, that they are impossible!" (p. 327)  Juxtapose that statement with, "Physical possibility (construed as consistency with the laws of physics and physical constants as we find them) however, restricts the range too much for the proponent of the fine tuning argument, leaving the actual values as the only possible ones, and hence setting the probability at 1!" (p. 329, original emphases removed)

Another misunderstanding is how you then go on to characterize the normalizability problem: "each possible universe is either equally impossible, or they all have a small nonzero probability. They can't be impossible, because then the probabilities don't add up to 1, and they can't have a nonzero probability, because then the probabilities add up to infinity."  The either/or statement you lead off with is not true.  (Before I go on, I do think you characterized the normalizability problem accurately, but I don't think its conditions are met in reality.) Of course there are probability distributions with an infinite domain that are normalizable.  We just don't know what the correct probability distribution to use is.  But again, either there is a real mechanism that generates these universes, in which case there is a real distribution so it is really normalizable; or there is not, in which case it is futile to talk about any probability distribution because there is nothing to draw from.
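To illustrate the point that an infinite domain does not by itself rule out normalizability, here is a small Python sketch of my own (not part of the original exchange): an exponential density on [0, infinity) integrates to 1, whereas a uniform density on that same domain cannot.

```python
import math

# A density on the infinite domain [0, infinity) that IS normalizable:
# p(x) = exp(-x).  Integrate it numerically with a midpoint rule; the
# tail beyond x = 50 contributes less than exp(-50), which is negligible.
def exponential_mass(upper=50.0, steps=200_000):
    dx = upper / steps
    return sum(math.exp(-(i + 0.5) * dx) * dx for i in range(steps))

print(exponential_mass())  # ~1.0: infinite domain, total probability 1

# By contrast, a UNIFORM density on [0, infinity) has no valid height c:
# c = 0 gives total mass 0, and any c > 0 gives infinite mass.
```

The catch, as the post says, is that without a real mechanism we have no way of knowing which normalizable distribution, if any, is the right one.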

Final misunderstanding: "If I'm right that the concept of probability is simply not meaningful in this context, then this dissolves the mystery. There would be no need for me to resort to a multiverse or necessity."  Yeah, totally, you may be right that the concept of probability is not meaningful.  That's what I was trying to say in my previous message (and is captured in Option 1 of this message).  And, in which case, you would not need to resort to a multiverse because you have already assumed there is not one.  But then you are stuck with necessity, because the universe just IS.

OK, so those are the critical parts where I either misunderstood you, or where I think you misunderstood me.

Also, let me end with this: this discussion is awesome and I hope you don't get too frustrated at how long I take to respond.  You keep up the good work with cordially asking questions and rebutting Christians' arguments.  I know a lot of atheists (and Christians too) who just want to have their say.  Maybe that's you too, but you're hiding it really well, which means that's not you.

What I should have said in all of that was simply, I think you characterized the normalizability problem correctly but I just don't think it's relevant. Because either we're dealing with a real mechanism (which then must be normalizable by virtue of its being real) or not, in which case a discussion of probability is futile. What do you think?

Take care!

PS: I just took a look back at my previous message, and I even used "Option 1" and "Option 2" in that message too. I forgot, and I guess it's just something I believe so strongly that it came out twice.  Shame on me, because it looks like that means I didn't actually explain anything new this time.  Let me know if that's true.


[See summary page of this discussion, with links to all the posts, here.]

FTA part 7: Aron asks clarifying questions (part of the Aron series)

Aron wrote:
Thanks for the response!  I want to make sure I understand you, and that you understand me.  My objection was this: our universe is just one out of a pool of infinity conceivable universes.  P(FT/~G) is the probability of picking a universe like ours at random.  Probabilities only make sense if they add up to one (e.g., for a die, 6 × 1/6 = 1).  So for the FTA to work, we need to be able to assign each possible universe a probability such that they all add up to 1.  But this is impossible: if each universe has a probability of 0, they all add up to 0, and if each universe is given a small nonzero probability, they add up to infinity.  Since the probabilities can't add up to 1, it is meaningless to talk about probabilities here.  The objection is that our intuitions have led us to extend the concept of "probability" far beyond the context in which it is applicable.

The way you characterized the normalizability objection is like this: "everything's equally impossible or our current value is necessary."  I'm not sure this is what I was getting at.  Instead, it should say "each possible universe is either equally impossible, or they all have a small nonzero probability. They can't be impossible, because then the probabilities don't add up to 1, and they can't have a nonzero probability, because then the probabilities add up to infinity."

If I'm right that the concept of probability is simply not meaningful in this context, then this dissolves the mystery.  There would be no need for me to resort to a multiverse or necessity.

In sum, I'm not quite sure exactly what your objection was to the normalizability problem.

[See summary page of this discussion, with links to all the posts, here.]

FTA part 6: Defying intuition, the multiverse, or necessity (part of the Aron series)

Greg says,
Yes, the normalization problem does seem to come up, doesn't it?  But the more I think about it, the more I think it's a cover-up.  Here's what I mean.  Just like the initial objection you raised about degree of fine tuning not translating into a rigorous probability, in this case this is just another layer of subtlety, but the conclusion is the same.  In going deeper with this, we are essentially just pushing it back another layer.

Another way to think about it is, the first level of fine-tuning is very intuitive, and speaks easily to the common person.  "Wow!  Look how finely-tuned these constants are!  This argues for intention in the make-up of the universe."  This is the intuitive conclusion, and sometimes intuition is right.

On the other hand, sometimes intuition is wrong.  For someone who wants to contest this conclusion (and please don't consider that turn of the phrase to mean I think the challenger of the FTA is disingenuous...we need to think deeply about it), there is always a way to get out of it.  There's always a door to exit for the skeptic.  But every time you exit the door, you end up in another room that is smaller and more difficult to exit.  A smaller door is there to exit the next room, and still smaller.  Pretty soon you'll need one of Alice's mushrooms to get out of the door, it's so small.  How deep does the rabbit hole go?

Did that sound pompous?  Sorry, I thought of that word picture last night and I really liked it.  In any case, my point is that the more one plays the skeptic to deny what intuition is telling us, the harder one has to work and the more of reality one has to deny.

OK, now that I've played it up so much, do I actually have an argument?  (Hee-hee, I hope so.  We'll see if you like it or not.)

So let's start with the point I made last time.  Either the small life-permitting interval of G is improbable, or the probability distribution from which we are drawing G must itself be finely-tuned (i.e., atypical).  That makes intuitive sense.  It's a bit harder to understand than the basic "G must be within one part in 10^60, therefore God did it," but it's still pretty intuitive.  The rebuttal to that is that we have no reason to favor any particular sort of probability measure.  Indeed, the normalizability problem destroys fine-tuning: either everything's equally impossible, or our current value is necessary (P = 1).  What method do we have to restrict the probability distribution to some intermediate shape?

So, we drop-kick intuition and need to go one level deeper.  (Remember how I asked, "How deep does the rabbit hole go?"  I have a feeling that, to get to the bottom of this conversation, we'll eventually have to discuss properly basic beliefs and brains-in-a-vat.  It's a steep price to pay to be constantly skeptical of the intuitive power of the FTA.)  If we really want to have a probability distribution to draw from, we need a mechanism.  Here our discussion will bifurcate into two plausible solutions.

(Before I do that, can I mention an aside here?  Initially, I presented the FTA as a rigorous Bayesian-type proof.  Recall you challenged my ability to say P(FT | ~G) is super-low.  Now I just want to recall the point that these probabilities in Bayesian arguments are *epistemic*.  Meaning, they're "what are the odds of that happening?"-type probabilities.  This is the reason why the Bayesian argument goes through, because most will understand the fine-tuning of the constants and conditions of the universe and of earth and intuitively agree that P(FT | ~G) is low, even if it can't be proven rigorously.)

OK, back to the bifurcation.  There are now two naturalistic options (to avoid God): (1) either the universe is alone (and necessary), or (2) it is one of an ensemble of universes, commonly called the multiverse (which then itself is necessary).

Option 1: if the universe is necessary and alone, then all the constants and conditions could not have been other than what they are.  In that sense, all of these probabilities would be unity.  How could it have been any other way if the universe itself is necessary?  But if that is the case, we are again stuck with asking why it had to have been this way.  What is it about the universe and necessity that made it so that life could possibly exist?  Especially when it seems like there are so many other ways it could have been that would have precluded life.  Again, we are now stuck not only with asking "Why is there something rather than nothing?" (since the universe has no explanation for its existence, it would seem rather odd that it would be the necessary entity), but also with "Why is the universe the way that it is?" (since its being just the way it is permitted intelligent life to develop within it to ask these questions).

Now, one way you could answer these questions is flippantly.  Dr. Krauss is a famous example of this, with his, "'Why' questions are silly."  But I don't regard you as thinking that.  So then why do you think there is something rather than nothing?  Why do you think the universe is the way it is?  Remember, without God and thus without intention, there is no explanation for these facts.

Option 2: if the universe is one of many universes in the multiverse, then plausibly this could explain how the perceived fine-tuning arose.  Returning to the normalizability problem, the main issue I have with it is, if there really is a natural mechanism that "chooses" values of constants for the universe, then it cannot have the normalizability problem.  This is because it must have a real probability distribution, not this hypothetical/philosophical/no-logical-restriction type distribution.  So the existence of the multiverse then solves the normalizability problem: either the probability distribution is typical, and thus our universe is rare, or the universe-generating mechanism itself has a finely-tuned probability distribution to produce a bunch of universes like ours.  In the second case, our universe is not rare (they're all like ours), but the fine-tuning is in the multiverse itself.

Skeptics rather like the first case: our universe is rare, but the number of universes (probabilistic resources) is so high that one such as ours is bound to have been generated randomly.

I could go on and on about the multiverse, so let me leave it with this so you can respond before I go off the deep end: I don't see Option 1 as viable.  For the skeptic, the universe just can't be the necessary entity; it raises too many questions about sufficient explanation.  So Option 2, the multiverse, must be the fallback if you want to escape through the ever-shrinking skeptical door.  Furthermore, the second case of Option 2 (the multiverse itself is finely-tuned) cannot be the case for the skeptic, as this would be identical in ontology to Option 1.  Therefore, as I see it, the skeptic must choose the first case of Option 2: the multiverse generates widely-varying random universes, one of which is the lucky one (ours).

Is that where you think you'd go with this?

[See summary page of this discussion, with links to all the posts, here.]

FTA part 5: Aron introduces the normalizability problem (part of the Aron series)

Aron says:
If I grant a uniform distribution for the sake of argument, then the probability of G being "just right" is the ratio (life-permitting values)/(possible values).  As far as I know (and I'm no expert), there's nothing in modern physics that restricts the range of possible values.  Robin Collins, for example, says, "The value of G, for instance, conceivably could have been any number between 0 and infinity."

So we can either say that the range of physically possible values is infinite, or we can say that we simply have no idea what the range is.  The second option kills the fine tuning argument, so you should prefer the first option.

Here is my problem: the axiom of normalizability requires that the probabilities of all the possibilities add up to 1.  If there are infinitely many possible values, and each is given the same super small non-zero probability, this adds up to infinity.  If, instead, we give each possibility a zero probability, it adds up to 0.  Either way, we can't normalize the probability space, so we can't meaningfully talk about probabilities in this context.  P(FT/~G) is not low; it just doesn't even make sense to ask for this number.  (This is the argument made by McGrew et al.; the point was also made independently by Colyvan et al., and by Paul Davies in "The Mind of God.")
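The dilemma can be made concrete with a toy calculation (my own sketch, not from the original posts): assign the same probability p to each of N possible values and watch what happens to the total.

```python
# Toy version of the normalizability dilemma: give each of N possible
# values the same probability p and see whether the total can equal 1.
def total_mass(p, n):
    return p * n

# If each possibility gets probability 0, the total is 0, not 1:
print(total_mass(0.0, 10**9))        # 0.0

# If each gets any fixed epsilon > 0, the total grows without bound:
print(total_mass(2.0**-40, 2**50))   # 1024.0, and it diverges as N grows

# Only a non-uniform assignment (e.g., p_k = 2^-(k+1)) can sum to 1
# over an unbounded set of possibilities:
print(sum(2.0**-(k + 1) for k in range(60)))  # ~1.0
```

The uniform ("indifferent") assignment is exactly the one that fails, which is why dropping the principle of indifference is one escape route, as discussed below.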

One solution to the normalizability problem is to drop the assumption of a uniform distribution.  A nonuniform distribution would allow us to normalize a space of infinite possibilities.  You took this approach and said something like this:  "I recognize that there are multiple possible distributions, and we don't know which distribution is correct.  But since the set of life favoring distributions is just a small set of the total number of possible distributions, the probability that the actual distribution favors life permitting values is still very low."

Notice that this approach does away with the assumption of a uniform distribution over the range of possible values, but then assumes a uniform distribution over the range of possible distributions.  While this proposal allows us to normalize the space of possible values, it simply recreates the normalizability problem, because now we are unable to normalize the space of possible distributions.  We are faced with an infinite number of possible distributions, and you seem to be asking that we lay a uniform distribution over this infinite range.  This is the normalizability problem all over again.

Another possible solution is to find a way to limit the range of possible values, but I don't think this works.  You seem to have done this by focusing on possible values for G between 0 and 2.  Why exactly did you restrict the range of possibilities this way?

[See summary page of this discussion, with links to all the posts, here.]

Wednesday, April 1, 2015

FTA part 4: Why fine-tuning means the universe is improbable (part of the Aron series)

Greg says:
OK, great, all that is good to know for me. Hopefully, it will help us avoid talking past each other.

I'll go ahead now and engage with your points from your previous post. First, I am not sure if we want to go down the path of discussing Spinoza. I don’t have much interest in it, and you didn’t seem to push it too hard, so we’ll put that out of mind, unless you want to bring it back up at some point.

But perhaps what I could say about it is to give you a bible verse that is germane to the topic of P(FT | G):

Isaiah 45:18 - he who created the heavens,
he is God;
he who fashioned and made the earth,
he founded it;
he did not create it to be empty,
but formed it to be inhabited

OK, on to the topic of P(FT | ~G). In vernacular, what is the probability that the finely-tuned aspects of the universe would arise naturalistically? This is the term in the fine-tuning argument that typically takes on values like 10^-60 or 10^-120, etc. It’s the term that I called “epsilon.”

In particular, you said that I needed to be careful with that term, and you are absolutely right. Like you said, if you really want to think deeply about this, it’s not really accurate to just take the 10^-60 number and say that’s the probability. Again, you are 100% correct. So why do so many people (including myself) do that? To be honest, I think most people simply don’t realize the subtlety. For me, it’s just so much easier to communicate the idea that way, and in the end, if you want to go deeper, the conclusion is the same anyway, because you are just pushing back the fine tuning one step. But the route is indeed more subtle.

BTW, please read the following as if I were discussing an exciting topic that I like to think about and on which I am interested in hearing your feedback, rather than some guns-blazing attack on the non-theistic worldview. Because the former is the way I mean it, rather than the latter.

To make matters concrete, let’s unpack the discussion by focusing on the example of big G, which is the constant that is often said to be finely-tuned to one part in 10^60. It is not necessarily the best example to take, since it has its flaws, but it’s certainly an easy one to discuss. As well, anything I say here can be easily generalized to discussion of other constants.

Now, without loss of generality (and for the sake of discussion), we are free to pick units such that G = 1, so that the life-permitting range for values of G is 1 - 10^-60 < G < 1 + 10^-60. This is a narrow range for sure, but it does not necessarily translate into a low probability, because that depends on the probability distribution from which we are randomly drawing values of G.

(And of course, we are now ranging into philosophical/metaphysical speculation...we have no known mechanism by which we may suppose the existence of a probability distribution, nor one from which a value of G could be “drawn”, but in the end I don’t think it matters. I think the argument makes intuitive sense and will apply to just about any theoretical/hypothetical mechanism that one could come up with.)

Let’s imagine for a second that the probability distribution from which we are drawing the value of G is uniform from zero to two. In that case, p(G) = 0.5 uniformly on that interval. Then P(1 - 10^-60 < G < 1 + 10^-60) does indeed equal 10^-60. In this “special” case, the range of life-permitting values really does equal the probability of getting a value within that range.

But what if the probability distribution were a normal distribution with mu = 1 and sigma = 10^-61? In other words, a really, really tight distribution around the desired value of G = 1. In that case, we are *virtually guaranteed* to have a value of G within the life permitting range.

However…(do you see where I’m going with this?)...the only way you may legitimately assume we have such a “special,” atypical probability distribution for G is if you admit there is fine-tuning in the probability distribution itself. How in the world would one, apart from an intelligent creator with a purpose in mind, possibly justify having a probability distribution that forces this otherwise seemingly serendipitous, life-permitting value of G? (And that’s just one finely-tuned parameter.)
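The arithmetic behind these two cases can be checked numerically. This is my own sketch using the post's numbers (uniform on [0, 2] versus a normal with mu = 1 and sigma = 10^-61); the only wrinkle is that the calculation has to be done in standardized units, since 1 + 10^-60 is not representable in double precision:

```python
import math

half_width = 1e-60  # life-permitting half-width around G = 1

# Case 1: uniform distribution on [0, 2], so the density is 0.5.
p_uniform = 2 * half_width * 0.5
print(p_uniform)  # 1e-60

# Case 2: normal distribution with mu = 1 and sigma = 1e-61.
# Work in standardized z-units, because 1 + 1e-60 rounds to exactly
# 1.0 in double precision: P(|G - 1| < w) = erf(z / sqrt(2)),
# where z = w / sigma.
sigma = 1e-61
z = half_width / sigma               # 10 standard deviations
p_normal = math.erf(z / math.sqrt(2))
print(p_normal)  # ~1.0: a life-permitting draw is virtually guaranteed
```

So the same narrow interval is astronomically improbable under one distribution and a near-certainty under the other, which is the whole point: the probability depends on the distribution, not just the width of the interval.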

In other words, in my estimation, by correctly pointing out that the narrow fine-tuning range does not equate directly to probability, you escape the fine-tuning at that level, only to encounter it at a deeper level. With the fine-tuning argument, not only do you have to answer the age-old question of, “Why is there something rather than nothing?”, but also, “Why is the something (that is, rather than nothing) the way that it is?”

[See summary page of this discussion, with links to all the posts, here.]

Tuesday, March 17, 2015

FTA part 3: Aron answers my general questions (part of the Aron series)

Hi Greg!

1. Not a believer. I'm interested because I have friends who are and it's just generally interesting.

2. Kalam or fine tuning. They have the most intuitive appeal, and they implicate lots of philosophical issues.

3. Problem of evil or potential inconsistencies in the definition of theism.

4. Not a scientism-ist. These sorts of questions are inherently philosophical questions, not scientific ones.

[See summary page of this discussion, with links to all the posts, here.]

Saturday, January 31, 2015

FTA part 2: General questions for Aron (part of the Aron Series)

Aron, how are you doing? I’m glad to hear from you again! I hope you continue to think deeply about these questions.

To be honest, I didn't realize that I was going against what Larry believed about using epistemic probabilities and such. At any rate, I am glad we came to some agreement about how TAG could possibly be used in an argument, even if we still don’t agree on whether certainty in TAG renders evidence for God useless.

Hey, but now that we've come to some sort of conclusion, it sounds like you want to switch gears and go more in-depth into the fine tuning? Would that be correct? I’m interested in doing that. I find the fine tuning argument fascinating, and it is definitely one of the arguments that first opened the door to my skeptical way of thinking to allow me to entertain the possibility that God exists.

But before we get too in-depth with this argument, and I see that you have put forth a couple challenges to it, I was wondering if you’d answer a couple of questions? The reason why is I don’t know you very well. (Maybe Larry does, but I don’t!)

First, I am assuming you are not a believer in Jesus in the classical Christian sense. Is that right? Did you ever at one point consider yourself to be a Christian?

Assuming your answer to the first question is, “No, I am not a believer,” what is your interest in Ratio Christi?

What is your favorite argument in favor of the existence of God?

What is your favorite argument against the existence of God?

A lot of atheists these days take a very hard scientistic stance, in which the only allowable evidence in discussions about whether God exists is evidence that can be tested scientifically. I am assuming that would not be your stance, given how into the TAG you got, but I just wanted to make sure.

Thanks a bunch for humoring me on answering these questions. The reason why I think they’re important is because knowing your background might help us avoid talking past each other. To be fair, I will give you my answers.

I am a believer, but I grew up atheist. My favorite argument in favor of the existence of God is...well, it’s hard to pin one down. I've switched back and forth over the years, but as I said, the fine tuning argument was one of the first that I heard, and it really went a long way toward convincing me to go from atheist to theist. For this reason, I am really passionate about apologetics, which is why I am passionate about Ratio Christi.

My favorite argument for atheism is the problem of evil, as well as the horrific events in the OT as evidence against Yahweh being perfectly good; those two are very tough problems for the Christian.

I am clearly not a proponent of scientism, although I am a scientist.

[See summary page of this discussion, with links to all the posts, here.]

FTA part 1: Aron challenges that the probability of fine tuning is low (part of the Aron Series)

Aron wrote:
I like your cumulative case method because it looks like you're using Jeffrey conditioning. The probability of G in light of TAG's uncertain status is P(G/TAG) × P(TAG) + P(G/~TAG) × P(~TAG). The left term would be 1 × 0.6, and the right term would be 0.4 multiplied by whatever you think P(G) is given all the other evidence. In this case, you can use TAG along with evidential arguments. That's why I think the Bayesian approach is best, because it lets you do things that you couldn't do if you were offering TAG as a proof. I don't think Larry would be open to this approach, though.
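Aron's Jeffrey-conditioning formula is easy to compute directly. A minimal sketch (the 0.3 value for P(G/~TAG) is an arbitrary placeholder for illustration, not a number from the discussion):

```python
# Jeffrey conditioning: P(G) = P(G|TAG)*P(TAG) + P(G|~TAG)*P(~TAG)
def jeffrey(p_g_given_tag, p_tag, p_g_given_not_tag):
    return p_g_given_tag * p_tag + p_g_given_not_tag * (1 - p_tag)

# Aron's numbers: P(G|TAG) = 1 and P(TAG) = 0.6.  P(G|~TAG) is whatever
# the remaining evidence supports; 0.3 here is purely illustrative.
p_g = jeffrey(1.0, 0.6, 0.3)
print(round(p_g, 6))  # 0.72
```

Notice that as long as P(TAG) > 0 and P(G/TAG) = 1, the result is bounded below by P(TAG) itself, which is why the evidential arguments can only raise, never lower, the total below that floor.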

My initial argument was offered against someone who uses TAG as a deductive argument, so my point still holds in that context. I think you offer a good way to sidestep the problem. But even with the method you offer, my main point still stands: evidential arguments can only make a contribution if you're open to the possibility that TAG is wrong.

About fine tuning. I agree for the most part. There are a few ways I could push back. For what it's worth, there's a long tradition, going back to at least Spinoza, of arguing that P(FT/G) is either 0 or near 0. (Spinoza obviously didn't talk about fine tuning per se, but he argued that God wouldn't/couldn't create a universe because he'd have no reason to do so, given his lack of wants/needs.)

Also, be careful with P(FT/~G). The fine tuning data doesn't say this number is small; it says that the life-permitting range of values is small. If someone says a constant is fine tuned to one part in 10 billion, they aren't saying that there is a 1-in-10-billion chance that the constant would have that value. Instead, they are just saying that if the value of the constant were changed by one part in 10 billion, then life couldn't exist. Claims about fine tuning are about the narrowness of the life-permitting range of values a constant could take. Additional argumentation is needed to show that it's improbable that a constant would take a value in that range. To say that P(FT/~G) is low, you need to make some philosophical assumptions, such as the adoption of the principle of indifference and the rejection of the axiom of countable additivity.

[See summary page of this discussion, with links to all the posts, here.]

Aron and the transcendental argument (part of the Aron Series)

Please see below for my discussion with Aron about the transcendental argument for God's existence (TAG).  I apologize for the abrupt beginning, but I jumped into the conversation in medias res, as it were.  And unfortunately I have no way of retrieving Aron's earliest points in the argument, including his formal argument points (1-8).

For links to the full series, see here.


Greg Reeves wrote:
As a scientist/engineer and not a philosopher, I am also not the best qualified, but with that disclaimer... If Ron is right that the possibility of having evidence against God is necessary for the design argument to succeed, then yes indeed the design argument fails. And he would be absolutely right about the first line of argumentation.

But even if we grant him that the possibility of having evidence against God is required for successful evidential arguments for God, Argument 1 is a paradox, not a contradiction, because he is leaving out crucial qualifiers in statements (1) and (8).

When he says "the design argument succeeds" he should be saying "were logic possible without God, the design argument succeeds" (a true statement).

At the end, when he says "the design argument fails" it is instead "now that we have proven that God is necessary for logic to be possible, and the design argument rests on the operation of logic, then the design argument fails (in that its conclusion cannot fail to be true, since God's existence is already certain)". This last statement does not contradict the previous one; they are different statements, and he is guilty of equivocation on the word "fails".
In other words, the design argument goes through under certain premises, and does not under others. So what? What he has shown is that if TAG is true, the design argument fails, but only because God is proven rather than uncertain; while if TAG is false, then the design argument succeeds and therefore God's existence is highly probable. In other words, he has shown that God is either a certainty or a highly probable being.

What about his second line of argumentation? Equivocation again. If he is granting TAG in the second argument, then not only is (1) true, but the laws of logic would not hold were God not to exist. Therefore, instead of (3) he should be saying "The fine tuning argument is a successful evidential argument for God if logic is possible without God (assumption)." Instead of (4) it would be "Therefore, God would exist even if logic were possible without God." Then (6) would become, "But there could be theoretical evidence against God, given the success of the fine tuning argument, were logic to be possible without God." And (7) would be a near-tautology: "Therefore, there is no being whose nature is the foundation of logic were logic possible without God." Then (8) would be "Therefore, God would not exist were logic possible without God." Finally, (9) "Therefore, logic is impossible without God." Which is where you began anyway, since he started by granting TAG.

We should all (myself included) be very careful about his hidden premises/equivocation.

What about his proof that starts out with "If there is a true statement that takes the form 'there is evidence for x', then it confirms theism"? In this proof, I think he is starting with the assumption that TAG is true, meaning the laws of logic depend on God. But this proof clearly has a problem with it...I am probably not getting my terms 100% correct, but I think Aron is conflating epistemic probability (what we think is true based on the evidence) with ontological probability (what actually IS). If we are uncertain that God exists, then based on evidence, we can epistemically put a probability on our belief (crudely). Such and such evidence favors the interpretation that God exists. Other evidence may favor atheism. But in an uncertain world you can truly have evidence that favors a proposition that is untrue. Therefore, having evidence for a proposition does not make that proposition true...all it does is make the statement "there is evidence for this proposition" true.

But if God really does exist, then atheism (here I am using it as the state of affairs in which no God exists) is ontologically untrue. No matter how much evidence you may say there is for atheism, if God does in fact exist that makes atheism untrue. So the way in which he wants "there is evidence for atheism" to mean atheism is true is just a false maneuver. So let's update his actual argument with this in mind:

1. If there is a true statement that takes the form “there is evidence for x”, then it confirms theism (because of TAG).
2. The statement “there is evidence for atheism” is a statement that takes the form “there is evidence for x”.
3. Therefore, if the statement “there is evidence for atheism” is true, then it confirms theism (which has already been proven because TAG was granted).
4. If the statement “there is evidence for atheism” is true, then it, in principle, challenges theism because it means evidence against theism exists.
5. Therefore, if the statement “there is evidence for atheism” is true, then it both confirms (100%) and challenges (makes you wonder about) theism.
6. If the truth of the statement “there is evidence for atheism” both confirms and challenges theism, then the conclusion that one may draw from the “evidence for atheism” statement (ie, that atheism is true) is necessarily false. In other words, even though there is evidence for atheism, that evidence does not go through.
7. Therefore, the statement “there is evidence for atheism” if true, may lead you to a false conclusion if you don't realize that such a logically true statement confirms theism (100%) under TAG.
8. Therefore, Premise 1 necessarily leads to the conclusion that while there can be evidence for atheism, atheism is still false.

So in summary, atheism can have evidence for it, but *if TAG is granted* then atheism is false, so evidence for atheism is incorrect. But if TAG is not granted, then you can marshal evidence all you want and we'll see which one stacks up better. Ultimately the existence of the laws of logic has a more comfortable fit in a theistic worldview than in an atheistic one, and the atheist is left with the uncomfortable task of explaining their existence.


Aron wrote:
Greg, in your first post, what exactly do you mean with this paragraph: "In other words, the design argument goes through under certain premises, and does not under others. So what? What he has shown is that if TAG is true, the design argument fails, but only because God is proven rather than uncertain; while if TAG is false, then the design argument succeeds and therefore God is a high probability. In other words, he has shown that God is either a certainty or a highly probable being." My point was to show that, under the assumptions of TAG, the design argument fails, and under the assumptions of the design argument, TAG fails. Are you agreeing with me that the success of one entails the failure of the other?

I'm talking about epistemic probability here.

When I talk about evidence, I mean that some fact makes a hypothesis more epistemically likely than it would have been without it. E is evidence if P(H/E)>P(H). This condition will be met if P(E/H)>P(E/~H).
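This condition is easy to check numerically. Here is a minimal Python sketch of Bayes' theorem (the numbers are hypothetical, chosen only for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H/E) = P(E/H)P(H) / [P(E/H)P(H) + P(E/~H)P(~H)]."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.3
# When P(E/H) > P(E/~H), the posterior rises above the prior, so E is evidence for H:
assert posterior(prior, 0.8, 0.2) > prior
# When P(E/H) < P(E/~H), the posterior falls below the prior, so E is evidence against H:
assert posterior(prior, 0.2, 0.8) < prior
```

The two assertions hold for any prior strictly between 0 and 1, which is the sense in which P(E/H)>P(E/~H) guarantees P(H/E)>P(H).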

You're absolutely right that there can be evidence against a true hypothesis (i.e., evidence that lowers its epistemic probability). For example, if I were framed for murder and the weapon was planted in my sock drawer, this would increase the epistemic probability in the detective's mind that I was guilty. This is despite the fact that, ontologically, the probability that I'm guilty is 0.

So there is no doubt that evidence can pull in different directions epistemically. Some evidence may suggest I'm the killer, and other evidence may suggest I'm innocent. However, no single piece of evidence can simultaneously confirm and challenge a hypothesis. This would mean that for that particular evidence E, P(E/H)>P(E/~H) and P(E/H)<P(E/~H), and therefore both P(H/E)>P(H) and P(H/E)<P(H). Contradiction.

So imagine some fact F raises the epistemic probability of atheism. This would mean P(F/atheism)>P(F/theism), and therefore P(atheism/F)>P(atheism). But, if we think that logic presupposes theism, then we should think every fact about the world increases the epistemic probability of theism to 100% (this is bc in order for there to be facts, laws like the law of identity and non contradiction must apply). This would mean that while F raises the epistemic probability of atheism, it must do the same for theism. Contradiction.

Thus, while it is true that we can have evidence against a true hypothesis, this is only true when that hypothesis is not the foundation of logic itself. If you want to admit the possibility of evidence against theism, you need to drop the premise that logic depends on God.


Greg Reeves wrote:
Aron, sorry that I did not know you had responded to me.  I guess I didn't get any update saying so.  Like I said originally, I am a scientist and engineer, so you have to take what I say with a grain of salt about these matters.

What I meant in the first paragraph is that under some conditions, TAG goes through, and under others DAG (design) goes through.  Meaning, if you accept TAG, and you condition your DAG argument on TAG, then you end up with an inescapable 100% probability for God, because you have already conditioned your DAG argument on the background that God exists.  So, yes I do agree with you that the success of TAG entails the failure of DAG *only if you are correct that* and *only in the sense that* you must have the possibility of some evidence contrary to an argument to make the argument successful.  But I did say up front that I do not necessarily accept that. But again, even if I did, then DAG would only fail in that sense, but would not fail in the sense that God has been proven to not exist.  It seems to be only a technicality.

But I definitely could be wrong about that, so let's explore your suggestion about probabilities.  Let me make sure I am understanding you correctly.  You are concerned that if you have a piece of evidence that increases your epistemic probability for atheism, then under TAG it also simultaneously decreases the evidence for atheism (because any logical construct, if it exists, on TAG, proves theism).  You are then worried that if you accept TAG then you get a logical contradiction.  Therefore, TAG cannot be true.

Problem is, since we are dealing with epistemic probability, you have to be *very careful* because epistemic probability can be very tricky.  You can end up sneaking all sorts of stuff in the back door.  Here is where I think you are going wrong.  I will try to parse my answer in the context of your previous paragraph that started with "So imagine some fact F...":

So imagine some fact F raises the epistemic probability of atheism if you do not accept TAG. This would mean P(F/atheism)>P(F/theism), and therefore P(atheism/F)>P(atheism). But, if we think that logic presupposes theism, in other words, if we then condition our probabilities on TAG, then we should think every fact about the world increases the epistemic probability of theism to 100% (this is bc in order for there to be facts, laws like the law of identity and non contradiction must apply). This would mean P(atheism/F & TAG) = 0, even though P(atheism/F) > P(atheism). The two statements are conditioned on different background information, so there is no contradiction.

Remember, formally the law of non-contradiction says both A and ~A cannot hold at the same time, in the same way, and *under the same circumstances*.  Conditioning on TAG completely changes your circumstances.  So we are back to what I said in my original post.  It depends on your premises.  You change your premises and then of course your conclusions can change.  If logic were possible without God (ie, you don't condition on TAG), then you could use logic to try to prove atheism.  If indeed TAG is true, and you condition on it, then any piece of evidence that you previously used in support of atheism when you did not condition on TAG is no longer in support of atheism.  If you wish, I could write out a full Bayesian analysis on this, but I get the feeling that most readers of this forum would not benefit from it.


Aron wrote:
I don't think we disagree about anything. Once you have come to accept TAG and incorporated it into your background knowledge, then any probability assessments you make will be conditioned on TAG. And you agree that if we condition on TAG, there could never be evidence against theism. But if this is the case, then there could never be evidence for theism either, so the design argument won't work. You seem to agree with this, but think it's "trivial." The reason I don't think it is trivial is that it forces people to make a choice. If you think TAG is true, then you can't think that biological complexity makes theism more likely than it would otherwise be. And if you think biological complexity lends support to theism - i.e. Pr(theism/biology)>Pr(theism) - then you can't have TAG in your background knowledge. I have seen people make cumulative cases that include both arguments, and I don't think this is an option. You can't simultaneously believe that both arguments are sound.

Greg, I wouldn't mind seeing your Bayesian analysis of Dawkins.


Greg Reeves wrote:
Aron, I am glad we aren't disagreeing then, but I do want to take you to task a little bit because I was primarily responding to how you said these things were *contradictions*.  It is not a formal contradiction because you are either conditioning on TAG or not, and that changes your outcome.

But I like how you want to be very precise about it.  If I were to be building such a cumulative case, I probably wouldn't have caught that, but now that you point it out, I'll be careful.  But I think you can still do it...let me unpack what I mean by that.

First, if one is being as precise as you are, I still argue it is "trivial", because in that case, Pr(theism | biology) >= Pr(theism) --- note here the greater than or equal to rather than the strict greater than --- if you include TAG in your background.  That is because TAG means Pr(theism) = 1.  So Pr(theism | biology) = 1.  So it is a trivial result as to whether you "add" the argument from biological design to your background knowledge.  In either case, Pr(theism) = 1.

So how in the world would you build a cumulative case with TAG included?  Well, since we're talking about epistemic probability here (which is a measure of belief rather than frequency), then you could say "Let 'A' be the event that someone believes TAG is true with a 60% probability".  Could you not then condition on "A"?  Would then TAG not be part of a cumulative case?  I think it could be.

In that case, TAG is just another part of your toolbox in the cumulative case for God.  This is where I fall because I think it's a powerful argument, but no one will be 100% convinced on the basis of this argument alone.  On a frequentist approach, either TAG is true, or not; just like either God actually does exist or not.  But in terms of epistemic/Bayesian thinking, the question is rather, how convinced are you that this argument goes through? If you are 90% certain of TAG, then both Pr(theism | A & B) >= Pr(theism | B) and Pr(theism | A) > 90%, where A = TAG is 90% probable and B = background knowledge.
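To make the arithmetic in the preceding paragraph concrete, here is a small Python sketch using the law of total probability (the 50% figure for theism-apart-from-TAG is purely hypothetical, chosen only to illustrate):

```python
# Total probability: P(theism) = P(TAG)*P(theism/TAG) + P(~TAG)*P(theism/~TAG).
# Under TAG, theism is certain, so P(theism/TAG) = 1.
p_tag = 0.9                    # 90% credence that TAG goes through
p_theism_given_not_tag = 0.5   # hypothetical: what the other arguments yield if TAG fails
p_theism = p_tag * 1.0 + (1 - p_tag) * p_theism_given_not_tag
assert p_theism > p_tag        # 0.95: strictly above the credence in TAG alone
```

Whatever nonzero value the remaining arguments assign to theism-without-TAG, the overall credence lands strictly above the credence in TAG itself, which is the sense in which TAG can sit inside a cumulative case.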

If you argue for some reason it cannot be part of a cumulative case, then we are faced with two possible choices: either you accept TAG and then arguing further about the existence of God is trivial since you accept the proof from TAG, or you reject TAG and then we go on with the rest of the cumulative case.  Either way, any other evidence/argument I mount in favor of theism is at least neutral: in the former case it's irrelevant and in the latter it strengthens the case for God.

Which camp do you fall under?  Do you accept TAG and therefore need no further convincing on the strength of that argument alone?  Or do you reject TAG and therefore are open to discussing the strength of the other myriad arguments for God's existence?

Again, if you are like me, then TAG is a powerful argument and if correct then God's existence is 100% certain.  But we don't know for sure it's correct, meaning that, in terms of epistemic probabilities, God's existence is not 100% certain. Therefore, we marshal other arguments.

Regarding the Bayesian analysis of Dawkins's statements, it is specifically about the case for the fine tuning of the universe.  The general Bayesian analysis goes like this:

P(G | FT) = P(FT | G)*P(G)/P(FT)

where G = God exists and FT = the fine tuning of the universe is instantiated.  As usual, we can split the denominator into two terms:

P(FT) = P(FT | G)*P(G) + P(FT | ~G)*(1 - P(G))

Now, the fine tuning argument says that P(FT | ~G) = epsilon (ie, small).  Usually numbers like 10^-120 or 10^-10^123 are thrown around, but the exact number is not important, just that it's small.

For the sake of simplicity, let me just say that P(FT | G) = 1 (ie, God would indeed make a universe in which advanced life is possible, and of course by necessity such a universe would be finely tuned as we observe).  We can keep this term around for precision but I think it's easier to discuss over facebook if I make this assumption.  The results are essentially the same either way unless you want to argue P(FT | G) = epsilon also, which I think would be hard to justify.

Anyway, this leaves us with:

P(G | FT) = P(G) / (P(G) + epsilon*(1 - P(G)))

So the only thing left here "unknown" is our prior probability of the existence of God.  Now, you can see right away that if the fine tuning argument is correct in that epsilon is small, then only if you have an absurdly low prior for God existing can you escape the conclusion that P(G | FT) is close to 1.  Well, that's exactly what Dawkins does.  He says, "It doesn't matter how improbable our universe is; the probability that God exists is smaller."  That's a sneaky statement, but what it means is that P(G) = 0.  The only number that could be a probability that is smaller than *any other number* that is also a probability is zero.  By definition.

Mathematically, Dawkins's statement is: for every epsilon > 0, 0 <= P(G) < epsilon.  This is *mathematically identical* to saying P(G) = 0.  But if someone's prior for God is zero, then there is no reason to have any discussion.  Dawkins is essentially saying, "I don't care what the scientific evidence for fine tuning says, I will choose to believe that God does not exist."  That is not reason or rationality, that is blind belief.  Belief, as it were, in spite of the evidence.  :-P

But let's instead say that your prior is also super-small.  Let's say it's much smaller than epsilon even.  Then you are left with:

P(G | FT) ≈ P(G)/epsilon

In other words, the most hardened skeptic, unless he is exhibiting blind faith that God does not exist (and thus P(G) = 0 for him), would objectively increase his epistemic belief that God exists by orders of magnitude once the evidence for the fine tuning of the universe were examined.  And if you have a set prior for God's existence (and not a moving target so that as more evidence comes in, you "conveniently" make your P(G) smaller), all you have to do is wait a while.  As we discover more about the universe, I predict that epsilon will get smaller and smaller.  If I am right, then eventually, epsilon will either shrink past your set P(G), or you will have to find some other reason to reject this argument.  Or else you are fooling yourself.
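The formulas above are easy to play with numerically. Here is a minimal Python sketch (epsilon and the priors are illustrative placeholders, not measured values):

```python
def posterior_g(prior_g, eps, p_ft_given_g=1.0):
    """P(G | FT) = P(FT | G)*P(G) / [P(FT | G)*P(G) + eps*(1 - P(G))]."""
    num = p_ft_given_g * prior_g
    return num / (num + eps * (1 - prior_g))

eps = 1e-120  # stand-in for the small P(FT | ~G)

# Even a deeply skeptical (but nonzero) prior is driven essentially to 1:
assert posterior_g(1e-6, eps) > 0.999999

# Dawkins's move: a prior of exactly 0 stays at 0 no matter the evidence.
assert posterior_g(0.0, eps) == 0.0

# And if the prior is far below epsilon, P(G | FT) is approximately P(G)/eps:
assert abs(posterior_g(1e-130, eps) - 1e-10) < 1e-15
```

The three assertions mirror the three cases discussed above: a modest prior overwhelmed by small epsilon, the zero prior that no evidence can move, and the super-small prior that still gets multiplied by orders of magnitude.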

Now, don't get me wrong, you can attack what I've said in a number of places.  I did gloss over what one might think P(FT | G) is.  Also, you can argue that the universe is not in fact finely-tuned.  However, in the first case, as I've said, unless you unjustifiably put P(FT | G) ~ epsilon or less, then it doesn't really matter what P(FT | G) is.  In the second case, you would be going against hard scientific data, and thus must object out of a precommitment to a non-theistic philosophy rather than objectively examining the evidence.  I think that in either case you are in a weaker position than the theist.

I can go through a similar Bayesian argument for the resurrection.  I like that one too.

Series: a discussion with Aron from Maryland

Dear all,
Last year, I began an internet discussion with a non-theist named Aron from Maryland.  At the moment, this discussion is ongoing, and will be continuously updated here, so check back often.

We started out talking about the Transcendental Argument for God's existence (TAG), and in the end agreed on how it could be used in a bigger cumulative case.

We have since transitioned to discussing the fine-tuning argument (FTA).  It's a great discussion, and I hope others will follow along.

Transcript of our discussion about TAG (we start transitioning to discussing FTA at the end):

FTA part 1: Aron challenges that the probability of fine tuning is low

A beginning to our discussion of the FTA, where Aron opens by acknowledging our general agreement about how to apply the TAG, and also suggests the probability that our universe would be the way it is (usually called "finely-tuned" in theistic circles) is not actually low:

FTA part 2: General questions for Aron

I answer with some general questions for Aron to make sure we're not talking past each other:

FTA part 3: Aron answers my general questions

Aron then answers my general questions, so I think we're ready to start the discussion in earnest.

FTA part 4: Why fine-tuning means the universe is improbable

I begin to lay out the fine-tuning argument and press why we can say the probability that the fine-tuning would occur without God is small.  Please note that Aron and I both agree that the constants of the universe are finely-tuned, in that they cannot vary by much before the universe is no longer life permitting.

FTA part 5: Aron introduces the normalizability problem

Aron responds by noting that since we are appealing to "possible universes" (which may not even exist), we have no idea what values of the constants of physics are more probable than others, so we must assume any value has equal probability to any other. However, since we have no way to restrict the ranges these values may take on, we have to allow for infinite ranges (i.e., zero to infinity).  Therefore, the resulting distribution cannot be normalized and so is not really a probability distribution.  In this statement, Aron is introducing the "non-normalizability" problem.  To strengthen his argument, Aron cites famed Christian apologist Dr. Timothy McGrew.  This is a big deal.

FTA part 6: Defying intuition, the multiverse, or necessity

I agree that the non-normalizability problem that Aron introduced in the previous post is a big deal.  Then I lay out how the intuition behind the FTA is layered, and every time a skeptic denies the intuitive conclusion that fine-tuning points to God as the most likely designer of the universe, he or she must give up some more obvious conclusion for a more skeptical one.  This is my "How deep does the rabbit hole go?" story.  But, we do need to go deeper, so I note that there are two options.  Either the multiverse exists, in which case the probabilities are indeed normalizable (because there is a real mechanism generating universes), or our universe is all there is.  In the second case (which Aron wants to go with), even if the probabilities are meaningless (since there is only one universe), we still have a situation (fine-tuning) that demands an explanation.

FTA part 7:  Aron asks clarifying questions

Aron reasserts that the non-normalizability problem, since it leads to the conclusion that probabilities are meaningless, also means that the fine-tuning argument just does not work.  He asks for more clarification and explanation on my part.

FTA part 8:  Clarification on the multiverse vs. necessity

Here I clarify what I said in my previous post.  In particular, I clarify that if Aron wants to go with a single universe, and claim that fine-tuning does not lead to small probabilities, this is the same as saying (1) the universe just IS, with no explanation for it, and none needed; and (2) the universe just IS FINELY-TUNED FOR LIFE, with no explanation for that either, and none needed.