Tuesday, February 6, 2024

Theism, Confirmation, and Axiology - A Response to Christian Idealism

Note: Credit to Sebastian Montesinos for reviewing my draft of this post with me and coming up with some examples.


Introduction



Recently, Kyle Alander, also known as Christian Idealism (CI), wrote a blogpost that explicates his novel framework for understanding theism as an explanatory hypothesis and addresses some of the proposed evidence against theism from a friend of mine's (NN) well-known blogpost in the internet philrel-o-sphere. CI's blogpost advances a sophisticated Bayesian methodology for assessing the prior probability of a hypothesis, when and how we can update auxiliary additions to the theistic hypothesis without loss, and the theoretical constraints on the theistic hypothesis—topics which NN and I have discussed with CI before. Admittedly, I am not entirely satisfied with how that discussion went: despite its nearly three-hour run time, there was a lot we were unable to cover, and as a result a lot was left inconclusive.

In this post, I will be addressing, in more detail, CI's methodology, which he tries to use to screen off disconfirming evidence against theism such as evil or hiddenness. I will not focus that much on CI's specific responses to the arguments contained in NN's post, as these responses are, for the most part, contaminated by the underlying theoretical methodology that CI employs. The main goal of this post is to critically examine CI's attempt to build a theoretical framework which establishes the folly of purported evidence against theism while at the same time maintaining theism's status as a robust and powerful explanatory project. I argue that, despite its novelty and complexity, his attempt ultimately fails.

In the first section, I will discuss CI's unorthodox Bayesian views, and argue that, contra CI, they do not imply that simplicity does not make theories more probable. I will argue on independent grounds, in sub-section 1.3, that expanded theism has a lower prior than bare theism. In the second section, I will consider confirmation holism, and CI's approach to updating with auxiliary theories in light of the data. I will argue that, granting confirmation holism, you still can't update on the data and conjoin unjustified auxiliary stories to theism without loss. In 2.3 I will consider a non-standard procedure for updating by learning an indicative conditional proposed by Lara Buchak, and ultimately conclude that the conditions aren't met for such a procedure to be used in this case. In the last section, I will argue that even if we take for granted CI's entire procedure and that he can successfully screen off disconfirming evidence, it would ultimately make theism a degenerative research project and lead to radical underdetermination and inductive skepticism.

Before reading this post, I recommend that you read CI's blogpost first, and preferably, NN's original blogpost as well (or perhaps better yet, wait for the new and improved version to be released). Worth noting as well, CI has several videos and streams related to this explanatory model he develops, the contents of which will also inform some of the objections I will be making here.


Bayesian Explanationism and Prior Probability 


1.1 Preliminary


CI's methodological approach is Bayesian through and through—not only is it Bayesian, but CI specifically assumes a particular unorthodox Bayesian methodology that even most Bayesians do not endorse. Now, it's important to bear in mind that while Bayesian epistemology is one of the most popular theories of confirmation (if not the most popular) and is widely used in philosophy of religion as well as many other fields, it is not a view which is universally endorsed. Some philosophers think the problem of old evidence and the problem of the priors are unresolved issues with Bayesian epistemology, some dispute the interpretation of probability as a 'degree of belief', etc. Notably, leading atheist philosopher of religion Graham Oppy is not a Bayesian, at least not with respect to assessing grand theories such as theism vs. naturalism, and instead opts for a much more holistic approach in which we compare the theories on the grounds of their theoretical virtues rather than calculating their conditional probabilities on disparate pieces of data. I only bring this up because those who are not sympathetic to Bayesian epistemology will not be impressed with the highly Bayesian methodology for assessing theistic explanation and evidence CI offers, let alone the specific unorthodox Bayesian views CI draws on, and thus will not be impressed by CI's attempts to refute atheological arguments within that framework. Of course, this is not a big deal, since the arguments CI discusses, which often originate with Paul Draper, are themselves situated within an (albeit more orthodox) Bayesian methodology. It is worth noting, though, that they need not be—they may be filtered through a general inductive or abductive schema where Bayes' Theorem may (or may not) happen to be the most useful heuristic, yet this need not entail a broad commitment to Bayesian normative epistemology. Speaking for myself, I am roughly agnostic as to whether Bayesian epistemology is the best account of confirmation and rational belief revision, and if I were a committed Bayesian I'd likely lean towards a more orthodox approach. From this point forward, though, I will take Bayesianism and CI's unorthodox view for granted.


CI holds to a specific view in Bayesian epistemology known as 'Explanationism'. CI describes the view as follows:
In the Explanationist model, basic probabilities are defined as the probabilities of atomic propositions conditional on propositions that are directly explanatorily prior to them. This means that our focus is on how propositions are grounded in or arise from their explanatory antecedents, providing a more contextually sensitive assessment of their likelihood.

I will be assuming this definition of basic probability: P(X|Y&Ni) is basic iff X is atomic, and Y is a conjunction of values for all parents of X in a Bayesian network that, according to Ni, includes all variables immediately explanatorily prior to X, and correctly relates all the variables it includes.

In other words: Basic probabilities are atomic propositions conditional on potential direct explanations, rather than them being the unconditional probability of complete world 

Basically, Bayesian Explanationism assigns basic probabilities (the fundamental building blocks of probability theory upon which all other probabilities depend) by looking at the explanatory relationships between events or propositions, which involves structuring events or propositions in an order of explanatory priority, that is, the degree to which they explain or are explained by other propositions or events. On this view, basic probabilities are conditional, as opposed to the traditional view on which basic probabilities are unconditional. An immediate worry about this is that conditional probabilities can only be determined once we have unconditional probabilities. We can't know P(H|E) unless we already know the prior probability of H, P(H). So, given this, how can it be right that basic probabilities are conditional? My understanding is that Explanationists will not dispute that P(H) is determined before calculating P(H|E); where they differ is in how they evaluate P(H). Whereas traditional Bayesians assess P(H) by their subjective degree of belief in H taken in isolation, usually by appealing to intrinsic facts about H such as its inherent simplicity, Explanationists will say that P(H) is determined relative to a background explanatory context, such as relations to propositions explanatorily prior to H, how well H coheres with existing knowledge, how well it fits within a broader theoretical framework, the degree to which a priori truths make H plausible, etc. This view of the structure of epistemic probabilities is holistic: rather than basic probabilities being determined in isolation, they are grounded in a network of explanatory relations.
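
To make the contrast concrete, here is a minimal sketch of the idea, with an entirely made-up toy network and made-up numbers (nothing here comes from Climenhaga or CI): basic probabilities are the conditionals attached to each variable given its explanatory parents, and the probability of a complete 'world' is derived from them rather than taken as primitive.

```python
# Toy explanatory network (assumed for illustration): Rain -> WetGround -> SlipperyPath.
# The basic probabilities are the conditionals of each variable on its explanatory parent;
# Rain is treated as a root node here purely for brevity.
p_rain = 0.3
p_wet_given_rain = {True: 0.9, False: 0.1}        # P(WetGround | Rain)
p_slippery_given_wet = {True: 0.8, False: 0.05}   # P(SlipperyPath | WetGround)

def world_probability(rain, wet, slippery):
    """Chain the basic conditionals, in order of explanatory priority, to get a world's probability."""
    p = p_rain if rain else 1 - p_rain
    p *= p_wet_given_rain[rain] if wet else 1 - p_wet_given_rain[rain]
    p *= p_slippery_given_wet[wet] if slippery else 1 - p_slippery_given_wet[wet]
    return p

# The world where it rains, the ground is wet, and the path is slippery:
print(round(world_probability(True, True, True), 3))   # 0.3 * 0.9 * 0.8 = 0.216
```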


With that said, my dispute will not be with Explanationism as a view put forward by Nevin Climenhaga; rather, my dispute here will be with the way in which CI uses Explanationism in an attempt to address the criticism that conjoining theism with Auxiliary Axiological hypotheses (AAH) lowers the overall prior probability of the theistic hypothesis. While CI does not explicitly use Explanationism to address the criticism that tacking on specific AAH lowers the prior of the theistic hypothesis in this blogpost, he does do so here and here.


1.2 Explanationism and Parsimony 


Let's take a step back. Does explanationism entail that theoretical and ontological parsimony don't matter to the epistemic probability of a theory? On any plausible understanding of Explanationism, the answer is no. As an example, consider how we would compare the theory of General Relativity taken alone (GR) with the theory of General Relativity in conjunction with the proposition that there exist invisible, undetectable fairies (GR&F). Both (GR) and (GR&F) appear to have exactly the same predictive expectations and explanatory relations: GR&F makes all the same predictions and bears the same explanatory relations to propositions about the empirical world, e.g. gravitational time-dilation, the value of light-bending due to gravitation, the precession of Mercury's orbit, etc. The only difference is that it also includes the proposition that there are invisible, undetectable fairies. While this conjunct does not add any explanatory power to GR, it also does nothing to undermine GR's explanatory power, nor is it at all inconsistent with any propositions we know to be true (it isn't a priori ruled out, or ruled out by our empirical knowledge, since the fairies are undetectable). Yet, it would be absurd to suggest that the probability of GR&F is not lower than the probability of GR alone; even CI (I would hope) would accept that GR has a higher epistemic probability than GR&F. At least part of the reason why (if not the whole of it) must be that GR&F is theoretically and ontologically unparsimonious when compared to GR. Parsimony thus appears to have indispensable value in our explanatory theorizing.


In this video, CI discusses this paper by Climenhaga which discusses whether simpler state descriptions are more probable. Once again, I will not dispute Climenhaga's work here; I will only contend that CI's use of Climenhaga's work proves too much. Climenhaga argues that views which assign state descriptions (a specification of the truth or falsehood of every atomic proposition regarding some possible state of affairs) that are more simple (more uniform, less Kolmogorov complexity, qualitative parsimony) a higher probability give us the wrong results. He provides a counter-example involving two tosses (1, 2) of a coin (C), in which there are eight state descriptions which focus on the properties of being 'heads' (H) and 'fair' (F), with 'fair' applying to the coin and 'heads' to each toss. For instance, the state description 'Fc&H1&H2' would mean the coin is fair and lands heads on both tosses. Now, all the states where the coin is fair should have equal probabilities, as a fair coin has an equal chance of landing heads or tails. Yet, the state descriptions where the tosses are assigned two different properties (which would be Fc&H1&not-H2 and Fc&not-H1&H2) are less simple, since those state descriptions are less uniform and posit different properties for the tosses as opposed to just one. But it's plainly wrong that they have a lower probability than the more simple, uniform state descriptions (Fc&H1&H2 and Fc&not-H1&not-H2).
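
For concreteness, here is a small sketch of the coin case. The fair-coin chance of 0.5 per independent toss follows the example above; the prior for fairness and the bias of an unfair coin are illustrative assumptions of mine, not figures from Climenhaga.

```python
from itertools import product

P_FAIR = 0.5                 # prior that the coin is fair (assumed for illustration)
P_HEADS_GIVEN_FAIR = 0.5
P_HEADS_GIVEN_BIASED = 0.8   # hypothetical bias of an unfair coin (assumed)

def state_probability(fair, h1, h2):
    """Probability of a full state description Fc/not-Fc & H1/not-H1 & H2/not-H2."""
    p = P_FAIR if fair else 1 - P_FAIR
    p_heads = P_HEADS_GIVEN_FAIR if fair else P_HEADS_GIVEN_BIASED
    p *= p_heads if h1 else 1 - p_heads
    p *= p_heads if h2 else 1 - p_heads
    return p

for fair, h1, h2 in product([True, False], repeat=3):
    label = f"{'Fc' if fair else 'not-Fc'} & {'H1' if h1 else 'not-H1'} & {'H2' if h2 else 'not-H2'}"
    print(f"{label:28s} {state_probability(fair, h1, h2):.4f}")

# Among the Fc states, the uniform descriptions (both heads or both tails) come out
# exactly as probable as the mixed ones: 0.125 each, regardless of their simplicity.
```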


I do not disagree with any of the above reasoning. This shows that more uniform and qualitatively parsimonious state descriptions are not inherently more probable. What this does not show is that simpler theories are not more probable, all else equal. All it establishes is that there is a limitation in using simplicity as a guide for assigning probabilities in certain inductive reasoning contexts. In the above set of state descriptions of the possible outcomes of the tosses of a fair coin, the simpler state descriptions are not more probable, because each toss of a fair coin is independent and has an equal probability of landing heads or tails. In this case, we know that equal probability should be assigned to the individual atomic propositions in the state descriptions, in particular the atomic propositions regarding the tosses (H1, ~H1, H2, ~H2). So, two points are in order here. Firstly, this doesn't say anything about the most relevant methods we use for evaluating the parsimony of a theory as opposed to a state description. These would be assessing theoretical content, i.e. the number of distinct atomic propositions a theory is committed to, and ontological parsimony, i.e. commitment to the existence of particular kinds of objects, properties, relations, etc. But notice that state descriptions that attribute the same rather than different properties to the tosses, while simpler in the sense that they are more uniform and qualitatively simple, are not more ontologically parsimonious. Both only make an ontological commitment to a fair coin and two tosses of that coin. Neither has more loaded theoretical content, as both are committed to precisely the same number of atomic propositions. Secondly, even forgoing this, given what has been said earlier with regard to the indispensable value of parsimony, a more reasonable conclusion here is that the use of parsimony in determining the probability of a state description about the outcomes of a stochastic process (like a coin toss) is a misapplication, rather than that parsimonious theories are not more probable. It is plausible that parsimony is most relevant in explanatory contexts, particularly when theories or models are being evaluated based on their intrinsic features together with their ability to explain observations, not in assigning probabilities to random outcomes. A theory that can explain the data using the fewest assumptions is considered superior.


I take the lesson here to be that while basic probabilities are not considered in isolation and are primarily determined relative to a background of explanatory relations, this does not rule out parsimony as a contributing factor to the epistemic probability of theories and propositions on explanationism, especially in cases where competing hypotheses explain phenomena equally well. An explanationist can understand parsimony as a guiding principle, a feature which characterizes good explanations in science and in metaphysics, and an integral part of a hypothesis's explanatory efficiency within a network.


1.3 Yes, Conjoining Theism with an Axiology Lowers the Prior


Since theism conjoined with an AAH is clearly less parsimonious, this would lower its prior probability. However, to fully drive the point home, I will now provide a positive independent case for why P(T&A) < P(T) using plausible assumptions.


Assumption 1, 1>P(T)>0 & 1>P(A)>0: Our first assumption will be that theism (T) and the axiological auxiliary story we conjoin to theism (A) both have probabilities of less than 1 and greater than 0. Whatever prior probabilities you assign to T and A, presumably you aren't maximally certain of their truth. OK, some people probably assign a prior probability of 1 to T, but I don't have anything to say to those people. CI and most reasonable theists do not assign a prior probability of 1 to T. Since the A that is used needs to be specific in order to screen off the disconfirming evidence of evil and badness, it should be even more clear that A does not have a prior probability of 1. The assumption that they are greater than 0 is more straightforward. If either of them were 0, the probability of the conjunction would be 0, and the atheist's job is much easier. Note that I am also not talking about the probability of A conditional on E and T; I'm leaving open that that may be 1.


Assumption 2, probabilistic independence: Our second assumption will be that the probability of T and the probability of A are independent of each other. We can say that two atomic propositions X and Y are probabilistically independent iff P(X|Y) = P(X) and P(Y|X) = P(Y). In this case it would be P(T|A) = P(T) and P(A|T) = P(A). This assumption might be more disputable. But let's suppose A is an axiology which says that horrific suffering such as a baby being tortured is justified if it leads to virtue cultivation (it doesn't matter what the specific content of the axiology is). Conditional on the truth of A, is theism really more probable, that is, if you just learned A and nothing else? This would only be true if learning the truth of the negation, ~A, lowered the degree of confidence you have in the truth of theism. But it is highly unclear why learning the truth or falsity of A should increase or decrease your confidence in T, which is just (according to CI) the proposition that there is an omnipotent being that acts on the basis of axiological reasons. The same thing is true of A conditional on the truth of T: suppose you learned theism is true; would that increase or lower your confidence in the proposition expressed by A? If so, there should be some way to probabilistically support the truth or falsity of A using only the truth of T, but there does not seem to be any way to do that. Note that it may be the case that taking into account some data E increases the probability of A given T, but this is not what is being asked. What's being asked is what the conditional probability of A on T alone is, absent any considerations of empirical data. If the probability of A conditional on T alone is equal to the probability of A alone, then A and T are, in the relevant sense, probabilistically independent.


From these two assumptions it follows straightforwardly that P(T&A) is lower than P(T).

Definition of Conditional Probability: P(T|A) = P(T&A)/P(A)

Given assumption 2, 
P(T|A) = P(T) 

So, P(T) = P(T&A)/P(A)

To figure out P(T&A) we need to get rid of the division by P(A). We do this by
multiplying both sides of the equation by P(A). 

Thus, we have P(T) x P(A) = P(T&A)

Given assumption 1, P(T) and P(A) are both less than 1 and greater than 0. 

So, P(T&A) is lower than P(T). Even if P(T) and P(A) are both, say, 0.99,
0.99 x 0.99 = 0.9801 < 0.99.
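
A quick numeric sketch of the same derivation, with illustrative priors that are my own assumptions rather than anything CI or I would insist on:

```python
p_T = 0.7   # prior for bare theism (illustrative)
p_A = 0.4   # prior for the auxiliary axiological hypothesis (illustrative)

# Under probabilistic independence (Assumption 2), P(T&A) = P(T) * P(A).
p_T_and_A = p_T * p_A
print(round(p_T_and_A, 2))   # 0.28, strictly lower than P(T) = 0.7
assert p_T_and_A < p_T       # holds whenever 0 < P(A) < 1 (Assumption 1)
```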

The upshot here is that we have a positive reason to think the conjunctive hypothesis of theism conjoined with an AAH does in fact have a lower prior probability than bare theism, both on the grounds of intrinsic parsimony and on the grounds of independent plausible assumptions, and explanationism in no way undermines this. To what degree theism conjoined with an AAH has a lower prior is not clear, and it may be the case that the conjunction has more explanatory power overall and is more probable on those grounds. I will focus on that later.

Confirmation Holism and Bayesian Updating

2.1 Confirmation Holism and the Role of Auxiliary Hypotheses

The next topic of interest is CI's view that, given a view called confirmation holism, he can explain purported disconfirming evidence E against theism T in light of a background axiological theory. Though, in some cases, especially when the E in question is horrific evil, a mere axiology (an assumption about value) won't be enough. More accurately, he must use the conjunction of an axiology A and a particular theodical narrative Ni regarding the afterlife or the future states of the universe to 'screen off' E. Now, he can supposedly do this by 1) updating on E to learn that, conditional on T, (A & Ni) must be true, and 2) using the truth of (A & Ni) on T to then screen off the disconfirmatory power of E on T. CI develops this approach in particular in his sections on theism and confirmation and the structure of Bayesian learning. However, his underlying motivation for this approach is his confirmation holism. So, it will first be necessary to understand what confirmation holism is before delving into what CI tries to glean from it.


Confirmation Holism is a view in the philosophy of science that a hypothesis is not tested in isolation, but only in conjunction with a set of background theories or beliefs. In essence, individual hypotheses or beliefs aren't tested or confirmed, only entire theoretical frameworks and webs of beliefs are. The view is motivated by the Duhem-Quine Thesis. 


Duhem-Quine-Thesis: A set of hypotheses H within a theoretical framework F, when subjected to an empirical test E, cannot have a single hypothesis h within H isolated for verification or falsification by E.


Consider, as an example, the theory of General Relativity (GR). Suppose we conduct an experiment to test the prediction of gravitational time dilation (GT). According to GR, time should pass at different rates in regions of different gravitational potential; for instance, clocks closer to a massive object should tick slower than those further away. Now, imagine that in our experiment we do not observe the expected gravitational time dilation. This does not entail that GR is false; it could be that one of our background auxiliary assumptions, A1, A2, ..., An, is false, such as, say, our theoretical interpretation of the observation, or the accuracy and precision of our time-measuring instruments. It is not the case that, if GR, then we observe GT. Rather, if GR & A1 & A2 & ... & An, then we observe GT. So, if we fail to observe GT, then either not-GR or not-A1, or not-A2, ..., or not-An, and it is a pragmatic choice which assumption(s) we would give up.
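
Here is a toy rendering of that point, with two stand-in auxiliaries of my own invention: the prediction GT follows only from the whole conjunction, so failing to observe GT rules out the conjunction rather than GR itself.

```python
from itertools import product

def predicts_GT(gr, a1, a2):
    """GT is expected only if GR and both auxiliary assumptions hold."""
    return gr and a1 and a2

# Which truth-value assignments to (GR, A1, A2) survive a failure to observe GT?
survivors = [combo for combo in product([True, False], repeat=3)
             if not predicts_GT(*combo)]
print(len(survivors))                        # 7 of the 8 assignments survive
print(any(gr for gr, _, _ in survivors))     # True: GR itself may still be true
```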


Confirmation Holism is a somewhat controversial view. However, once again, I will not dispute confirmation holism here (indeed, I am sympathetic to the view myself); I will only dispute the way in which CI uses it and the implications he thinks it has for theism and background axiological assumptions.


In discussing confirmation holism and its relation to the topic of Theism and auxiliaries, CI states the following:


Therefore, Theistic confirmation & disconfirmation is only possible through conjoining theism with some background axiological auxiliary hypothesis. This applies to literally all abductive and Bayesian arguments for and against theism that use any notion of epistemic probability.


This should clear away any objections in the following form: you are just assuming an axiology that when conjoined with theism matches the data but you need independent justification besides the evidence for why this axiology is true. The answer to these objections is that confirmation is holistic. When you conjoin a hypothesis with an auxiliary hypothesis you are testing both hypotheses and thus you don’t need to independently confirm the auxiliary hypothesis independently of the main hypothesis because it is only the conjunction of two where you make predictions and thus can gather evidence. And the evidence you gather confirms both holistically.


Theism makes predictions relative to higher-order axiological theories since theism predicts value. So this implies that anytime we make a claim about value and disvalue we include background assumptions about it. So any axiology we use to conjoin with theism is what is going to drive its predictive power.


There are multiple issues with this excerpt from CI, not the least of which is that it is plainly incorrect about what confirmation holism entails. Recall that confirmation holism is simply the view that hypotheses are not tested in isolation but only relative to a set of background beliefs. So in this context, all confirmation holism implies is that the theistic hypothesis can only be tested relative to a set of background assumptions. It doesn't say anything about whether those assumptions ought to be independently justified, and it certainly doesn't imply that one has carte blanche to freely update one's assumptions by conditionalizing on the data without serious consequences. Indeed, granting confirmation holism, it remains highly implausible that you can simply hold to any axiological or narrative assumptions in the background without independent justification. Indeed, I will now prove this by showing the Pandora's box of troublesome results such an assumption opens up:


-As we've already seen, parsimony is an indispensable theoretical virtue. Yet, if you conjoin a set of auxiliary additions to the theistic hypothesis without independent justification simply to explain the data, then, especially since that same data can't then be used as evidence for the hypothesis (that would be double-counting data, which is blatantly cheating and circular), the theory would only become less parsimonious, and thus less intrinsically probable.


-It appears to imply that one has free rein to hold particularly ridiculous or counterintuitive axiologies or theodical narratives. For example, to explain the data of rape occurring in the world conditional on theism, we could simply update to an axiology which tells us that, in fact, rape and permitting rape to happen are tremendous moral goods. Or we could have an axiology that says our neural configurations and the functional states of our sensory receptors in response to pain stimuli are of tremendous aesthetic beauty, such that it outweighs any pain experience we might have. In fairness to CI, he does state later that one of the constraints on conjoining theism to an axiology A is that A must not have an intrinsically low prior probability. However, it is opaque how we would evaluate the intrinsic probability of some A on CI's view. After all, neither of these axiologies as such is in tension with his 'ultimate goodness principle' as far as I can tell. Insofar as we can evaluate the intrinsic probabilities of axiologies, the specific conjunctions of axiologies and theodical narratives (A&Ni) CI must use to screen off evidence against theism themselves appear to have low intrinsic probabilities. This is because there are many epistemically possible contrasting narratives and axiologies, and CI gives us no good antecedent reason to prefer the specific conjunction CI uses over them (indeed, this particular conjunction is only justified at all for the purposes of accommodating our observations).


-This leads to the theistic hypothesis being empirically untestable. For any observation O, we simply update on O and adjust our background axiological assumptions by adding in A1, A2 or An or incorporating some narrative Ni, so that no O could probabilistically count against theism T, since we will always be able to redo our predictions and tack on some A1 or A2 or An, or some Ni, or both. If there is no O that could lower the probability of T, it is a consequence of Bayesian reasoning that there is no O which could raise the probability of T as well. I will be discussing this in more depth in the last section. 


- Two can play at this game. Embodied conscious agents (C) is taken to be evidence for theism: P(C|T) is high while P(C|N) is low. However, as a naturalist I simply update using an auxiliary assumption (A*), which could be that the initial state of the universe is causally constituted so as to entail the eventual production of embodied conscious agents, or it could even be that there are brutely metaphysically necessary conscious aliens; then P(C|N&A*) = 1. It may be objected that A* has a low conditional probability on N. But for one, P(A*|N) should not be lower than P(A*); it's just the case that A* is intrinsically very improbable. For two, given what CI says, naturalism and theism can only be tested in conjunction with background theories, and these background theories don't need to be justified independently of the evidence (his words, not mine). A* just so happens to be the background auxiliary assumption the naturalist is making use of to explain our observations, and, while intrinsically very improbable, it is not particularly in tension with their naturalism. So, what's the problem supposed to be on CI's view? Finally, as said earlier, conjunctions such as (A&Ni), which need to be specific enough to screen off atheological disconfirming evidence, also have a low prior probability. So, there is total parity between CI's approach and my own. CI might appeal to his ultimate goodness principle (UGP) as a difference-maker, but I will argue in a later section that UGP is insufficient. Further, I can easily create analogous constraints for naturalism (such as what I implied earlier, that the auxiliary must not be in particular tension with one's naturalist commitments) such that the point still goes through.


- It implies there's no penalty for overfitting, as adding multiple auxiliary hypotheses to a main hypothesis would be justified as long as the resultant conjunctive hypothesis aligns with the available data holistically. Overfitting occurs when a hypothesis or model is too closely tailored to a specific set of data, to the point where it loses its generalizability and predictive accuracy. The model or hypothesis will often, as a result, fit the noise in the data rather than capture an actual trend. I grant that confirmation is holistic, but it's crucial that each component – the main hypothesis and the auxiliary ones – independently contributes to the explanatory power and predictive accuracy of the overall model, in particular when confronted with new data. If auxiliary hypotheses are added solely so that the particular already observed data fits the hypothesis, which is exactly the case for the way in which CI conjoins auxiliaries to theism, that's a theoretical defect. A toy illustration of overfitting is sketched below.
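
The sketch below is a standard toy case, not anything specific to CI's framework: a straight line and a high-degree polynomial are fit to noisy samples of a linear trend, and the flexible model wins on the data it was fit to while (typically) losing on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.2, size=x_train.size)   # noisy samples of y = 2x
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(0, 0.2, size=x_test.size)      # fresh data from the same trend

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

# The degree-9 fit drives the training error toward zero by fitting the noise,
# but typically generalizes worse than the straight line on the unseen test points.
```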


In sum, I am happy to accept that the theistic hypothesis is confirmed not in isolation but only in conjunction with a set of auxiliary assumptions situated within a theoretical framework. In light of these considerations, however, this is with the caveat that those auxiliary assumptions are justified and held independently of the data they purport to explain, not simply tacked on to explain said data. This is so as to be more economical, to avoid over-fitting, to leave room for genuine empirical testability, and to avoid absurd and overly permissive implications. Unfortunately, though, this is contrary to CI's entire approach in which he HAS to update on the data, and tailor the axiological assumptions of the theistic hypothesis to fit the data and screen off its disconfirmatory power.  Now, it might be objected that the problem cuts both ways, as naturalists could only mount evidential arguments against theism in light of background auxiliary assumptions about value as well. For instance, the problem of evil, the anthropic argument, and the argument from divine hiddenness all make axiological assumptions. So, the problem of independently justified auxiliary hypotheses is a problem for everyone.


With regards to the anthropic argument and the argument from evil, the only axiological assumptions that are made are basic moral judgements, such as 'horrendous suffering is prima facie evil', 'barbarism is prima facie bad', or 'experiencing the deepest forms of empathy, love and virtue is prima facie good'. The reason these assumptions typically don't require justification is that theists, and indeed pretty much anyone with ordinary moral intuitions, already agree with them. They are already granted at the start of inquiry, simply part of the background for both theists and naturalists. We might even say they just follow from our shared concept of what 'good' is and what 'evil' is. Certainly, such judgements are held to be plausible, obvious even, independently of considerations of empirical data. The same cannot be said for the specific axiologies and theodical narratives CI wants to use. With respect to divine hiddenness, it's true that the argument from divine hiddenness, as most famously articulated by J. L. Schellenberg, does make assumptions about divine love, assumptions which a theist is free to contest. However, nobody claims that the central premise about divine love in the argument from divine hiddenness does not require independent justification. In fact, Schellenberg motivates his assertions regarding divine love. Here's a thoughtful back and forth between a theist and an atheist about whether the notion of divine love at play in the hiddenness argument is correct. So, in our actual practice of philrel, when the axiological assumptions are contested, we do attempt to independently justify them, and so this is, if anything, more evidence that there is something wrong with CI's claim.

2.2 The Cost of Updating with Additional Axiological Auxiliaries

Even if we were to forgo the points in the previous section, which are somewhat indirect attacks on the strategy CI uses, a more direct problem with CI's appeal to Confirmation Holism here is that Confirmation Holism, in particular the Duhem-Quine Thesis, is most often formulated deductively, as is the case with how I formulated it. That is, there is no observation O which entails the truth or falsity of some hypothesis H in isolation. When confronted with some O, it is only H together with some auxiliaries A1, A2, ..., An that entails ~O, so we will always be able to adjust our auxiliary assumptions rather than take out the main hypothesis. However, none of this implies that there cannot be some O which lowers the probability of H. In fact, we have reason to believe that is false. Any hypothesis H is logically equivalent to a disjunction of more specific sub-hypotheses H1 ∨ H2 ∨ ... ∨ Hn. What O might show is that a particular disjunct (or set of disjuncts) of H is false, such as H1. H1 would be H conjoined with some set of auxiliaries A1, A2, such that H&A1&A2 entails ~O. So, O would entail that H1 is false. Now, assuming the probability of H1 does not equal 0, ~H1 would decrease the probability of H. This is because P(~H1|H) < P(~H1|~H), which is 1, since ~H entails ~H1. It's a theorem of Bayesian probability calculus that if P(A|B) < P(A|~B) then P(B|A) < P(B). Thus, P(H|~H1) < P(H). Since O entails ~H1, and ~H1 lowers the probability of H, and we can assume O doesn't entail or increase the probability of H2 or any other disjunct of H, it follows that O would lower the probability of H. This is true regardless of whether confirmation holism is assumed.
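
A toy numeric check of this, with made-up priors over three sub-hypotheses of H and its negation:

```python
# H is the disjunction H1 v H2 v H3; the four cells are mutually exclusive and sum to 1.
p = {"H1": 0.15, "H2": 0.20, "H3": 0.25, "not-H": 0.40}
p_H = p["H1"] + p["H2"] + p["H3"]                      # 0.60

# Conditionalize on ~H1: drop H1's probability mass and renormalize what remains.
remaining = {k: v for k, v in p.items() if k != "H1"}
p_H_given_not_H1 = (remaining["H2"] + remaining["H3"]) / sum(remaining.values())

print(round(p_H, 2), round(p_H_given_not_H1, 3))   # 0.6 vs. 0.529: P(H | ~H1) < P(H)
```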


Let's apply this to the case of theism and auxiliary axiologies and narratives, particularly in the context of the problem of evil. Let T be, in the broadest sense, the theistic hypothesis, a disjunction of the ways in which theism might be true, with the caveat that we are not counting very implausible versions of theism which everyone would agree have a probability of pretty much 0. Let E stand for any horrific evil we observe, such as the Holocaust or baby cancer. Let k be our background information, a set of what both theists and naturalists would agree are known background facts, in particular those facts which are required for E to be observed, such as the existence of the universe and of embodied conscious agents. We include this to avoid the charge (which CI likes to make) that we are 'conveniently ignoring' the background facts that are required, in the first place, for observing E, some of which are supposedly evidence for theism. Let A be some axiology which states that all evils we observe, even those which appear overwhelmingly bad and involve intense, horrendous suffering, are in fact defeasible and their permission can be morally justified, particularly if E is incorporated into a greater good whole G, such that G would not be as valuable without E. We might also add the condition that the sufferer(s) must endorse the life they lived and recognize the value E contributed to their life. Finally, let Ni be a theodical narrative wherein E is in fact ultimately defeated in such a way that it would be morally justified as per the way A allows. Typically, this will invoke the idea of an afterlife and a process of saint-making, but the specific details of the narrative are not that important for our purposes, so we will leave Ni non-committal.


Now, what we want to know is P(T&k|E). That is, when we learn E, how should this change our credence in T, and if it disconfirms T, to what degree? We do this while respecting confirmation holism and acknowledging that E cannot falsify T on its own; rather, it can only restrict the probability space in which T is true. To figure out P(T&k|E), one approach would be to figure out what our restricted probability space looks like after we update on E, by removing those versions of T which are shown to be false in light of E. We will say that these are the versions of theism under which the conjunction A&Ni is false. Of course, I harbor no illusions that A&Ni is the only addition that could do the job; there could be other propositions which, when added to T, save it from falsification by E, such as the 'rape is good' and 'pain is beautiful' axiologies I mentioned before, but those are going to have a joint probability of basically 0. It's safe to say that we can assume that P(A&Ni|T&E) is basically 1, and I think CI would agree. So, the new probability space for T when we update on E is that all versions of T are false except those on which A&Ni is true, that is, those where E (and any evil like E) is morally justified and integrated into a greater good whole.


Notice that the situation is actually even worse for the theist than the above example in which we simply take out a specific disjunct H1 of a hypothesis H. In this case, we are eliminating all versions of theism where the conjunction A&Ni is false from our outcome space, not just a specific disjunct of T. Perhaps if we were just eliminating a particular disjunct of T, T1, a theist might plausibly argue that T1 was not a very plausible version of T to begin with, so it isn't too much of a cost. However, since we are talking about a case where all versions of T are eliminated except the ones involving A&Ni, this line of reasoning becomes far more dubious. At the very least, if P(A&Ni|T) isn't 1, that is, if A&Ni is not entailed by the bare content of the theistic hypothesis, and A&Ni does not itself have a probability of 1, then, since learning E restricts the probability space of T to those versions where A&Ni is true, the probability of T is lowered. In fact, I will argue further that neither A nor Ni has a probability of 1.


First let's consider P(A). A is false on certain strict deontological theories of normative ethics. Even if E is integrated into a greater good whole, or contributes to the betterment of the individual's life on the whole, it could still be unjustified, for it may entail using an agent or set of agents as a mere means to an end. Even if it leads to the agent's overall benefit in the long run, they aren't given a chance to consent to being put through the suffering. Were they able to consent, it could be that even if they knew it'd be ultimately better for them or would contribute to a greater good whole, they may, using their rationality, decline to be put through horrendous suffering. On some deontological views, it would then be immoral and thus unjustified to put them through the suffering. Now, you may not find such views plausible (avid readers of this blog might know that I myself am a deontologist, and as such I am sympathetic to this view), but it is surely at least epistemically conceivable that such a view is correct, so P(A) is less than 1.

What about P(Ni)? This is more straightforward: Ni is just a narrative under which E is in fact justified, and this narrative is not motivated on independent grounds. There are many conceivable narratives wherein it is not the case that E is defeated, and we are given no antecedent reason to think that the particular narrative in which E is defeated is a priori more probable than countervailing narratives in which E is not defeated. As such, it is even more clear that P(Ni) is less than 1. Conditionalizing on T alone does not change our individual assessments of Ni or A; it is only when we conjoin T with E that we can derive the entailment of A&Ni. If the theist wants to try to argue that A&Ni is very probable on, or a priori entailed by, T, they are free to do so; I'll just note that as far as I'm aware no remotely compelling argument in that regard has been forthcoming. Perhaps one could argue that P(Ni|T&A) > P(Ni), since the truth of T&A restricts the number of possibilities in which Ni is false (such as any possibilities involving horrific evils). This reasoning is suspect, though, since T&A should also restrict the number of possibilities in which Ni is true (namely, those possibilities where Ni is true and T or A is false). In any case, what really matters is P(A&Ni|T). Since P(A&Ni|T) is plainly not 1, being forced to restrict the probability space of T to include only those versions of theism in which A&Ni is true in light of the evidence decreases the probability of T.



2.3 Updating without loss?

This brings us to considering how we would update and repartition the probability space of the theistic hypothesis in light of new data. I suspect that at this juncture, CI would try to appeal to a view of updating under which he can supposedly update theism by conjoining it with A&Ni without loss, where he can have his cake and eat it too, so to speak. Immediately, this is implausible in the extreme given what I have argued in section 2.2. When we learn that a specific version or disjunct of a hypothesis H is false, given that that version of the hypothesis, H1, does not have a probability of 0, it will always follow that ~H1 is some evidence against H. CI will want to say that when we update on E we learn that the probability space of T should have been completely taken up by the extended version of T (T&A&Ni). In other words, the entire weight of the probability of T should be reassigned to T&A&Ni. But if the total outcome space of T should have just been only those versions of T conjoined with A&Ni, then either A&Ni must have a probability of 1 (which I've already given reasons to think is false) or A&Ni must be entailed by T a priori, not merely modulo the data E. Yet, showing an a priori entailment of A&Ni from the bare content of T alone is precisely what CI does not do. So what gives?


I believe what CI will have in mind is an appeal to a non-standard procedure of Bayesian updating, which can be found in Buchak 2014 and Dougherty 2014. On this procedure, when we learn that a specific version H1 of a general hypothesis H is false, we can then repartition the probability space of H away from H1 while preserving the same ratio of probability H originally had with its negation ~H when we renormalize. However, it's important to note that Buchak never claims that one has carte blanche to use this repartitioning procedure in any circumstance. She recognizes that it only applies to special cases, although there isn't any consensus on what those cases are. It's certainly far from clear that this repartitioning of the probability space can be done by illicitly updating on the data as CI wants to do, and as Dougherty seems to suggest. Nevertheless, let us examine the cases in which we can use this non-standard updating procedure. Notably, Buchak and Dougherty suggest that we can do this when we learn and update on the truth of a conditional. I will examine two of these cases, the first from Buchak, the second from Dougherty. I will argue that in both cases there are special reasons why we can update without loss, reasons which are totally absent in the case of theism and the requisite auxiliaries that are tacked on.


Here is the first case, from Buchak:

For example, you assign equal probability to the hypothesis that your friend is in town (A) and the hypothesis that he is out of town (A̅). There are five coffee shops in town, three Pete’s and two Starbucks, and knowing nothing else, you assign equal probability to his being at each (with AB representing his being in town at a Pete’s and AB̅ his being in town at a Starbucks). You then learn that he hates Starbucks, so if he’s in town, he won’t be there – therefore, you can rule out AB̅. Intuitively, though, learning this fact shouldn’t make you think it more likely that he is out of town. Whereas the first procedure captured updating by conditionalization, this procedure captures updating on an indicative conditional without lowering the probability of the antecedent.

To clarify, this is a case where you know that if your friend (let's call him John) is in town he will be at a coffee shop. So the initial outcome space for John being in town is John being at one of the three Pete's or one of the two Starbucks, and you initially distribute the probabilities uniformly across the different coffee shops he might be at. Then you learn he hates Starbucks, so you learn the truth of the conditional 'if John is in town, he isn't at Starbucks'. This effectively rules out the versions of the 'John is in town' hypothesis wherein John is at one of the two Starbucks, but learning the truth of this indicative conditional should not thereby lower the probability of the 'John is in town' hypothesis. I'll note that this is a little dubious: if you learned that 'John hates Starbucks', this plausibly may increase your credence in 'John hates coffee shops in general' and thus decrease your credence in 'John is in town'. Nevertheless, we can safely leave such worries aside. The key disanalogy is as follows:


In the case of the 'John is in town' hypothesis, when we learn that 'John hates Starbucks' we are learning about the content of the hypothesis, namely, we are learning about the desires John has, something which we could have learned independently of considerations of the relevant data if only we had known more about John. In other words, we are learning what the bare content of the theory entails: namely, that since John is a man who wouldn't go to Starbucks because he hates it, if John were at a coffee shop, he wouldn't be at Starbucks. But suppose instead that we did not learn about the entailments of the hypothesis by learning more about John. Instead, we simply check the two Starbucks in town and find that John is at neither of them. How should this affect your credence in the 'John is in town' hypothesis? Well, it seems like if you knew nothing about John's desires you should distribute the probability of him being at the two Starbucks evenly with him being at the three Pete's. So, if you then learn that John is not at the two Starbucks, this will decrease your overall credence in John being in town via standard Bayesian updating. Notice also that it would be an illicit move to use the data of failing to observe John at both Starbucks to update and formulate the auxiliary assumption 'John hates Starbucks' without independent justification, for the express purpose of saving the 'John is in town' hypothesis. However, such a move is precisely the move CI (and others like Trent Dougherty) make when updating theism.
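
Here is a small sketch of how the two procedures come apart in the coffee-shop case. The 50/50 prior and the uniform spread over the five shops follow Buchak's setup; everything else is just bookkeeping.

```python
p_in_town, p_out_of_town = 0.5, 0.5
shops = {"petes_1": 0.1, "petes_2": 0.1, "petes_3": 0.1,
         "starbucks_1": 0.1, "starbucks_2": 0.1}      # sums to p_in_town

# (1) Buchak-style update on the conditional 'if in town, not at a Starbucks':
# zero out the Starbucks cells and repartition their mass within 'in town',
# leaving the probability of the antecedent untouched.
petes_mass = sum(v for k, v in shops.items() if k.startswith("petes"))
buchak_in_town = {k: p_in_town * v / petes_mass
                  for k, v in shops.items() if k.startswith("petes")}
print(round(sum(buchak_in_town.values()), 3))   # 0.5 -- 'John is in town' keeps its probability

# (2) Standard conditionalization on the observation 'he is at neither Starbucks':
# that observation is certain if he is out of town or at a Pete's, so renormalize.
p_evidence = p_out_of_town + petes_mass          # 0.5 + 0.3 = 0.8
print(round(petes_mass / p_evidence, 3))         # 0.375 -- credence that he is in town drops
```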


Next we'll look at a case put forward by Dougherty:

One such situation is when the reasons for believing the hypothesis pertain to the generic features of the hypothesis independently from the species. For example, suppose, as is quite plausible, that the evidence for common descent of humans and chimps (that they share a common ancestor) is consistent with both Darwinian gradualism and punctuationism. Now suppose biologists discovered it was simply impossible for the mutations postulated by punctuationism to occur, so that punctuationism was effectively ruled out. This should not decrease our confidence in common descent at all.

This case immediately looks more relevant than the previous case, since here we are updating on the data to eliminate a specific version of a hypothesis without loss. However, it remains disanalogous. When we find evidence disproving a specific variant of Darwinian evolution, like punctuationism, plausibly this evidence won't be relevant to viable competing non-evolutionary hypotheses. The only relevant competing hypotheses on the table would be gradualism and punctuationism, so evidence against punctuationism would in turn just be evidence for gradualism. For instance, evidence pertaining to the rate of fossil accumulation will be applicable specifically to Darwinian gradualism and punctuated equilibrium, not to an entirely different, rival theory of life's origins. If it were the case that such evidence were entailed by a viable third hypothesis about life's origins AND it ruled out a specific version of Darwinian common descent such as punctuationism, this would indeed lower the probability of common descent, in line with standard Bayesian updating. The important point here is that Darwinian common descent has extraordinary evidence in its favor, and as a theory of life's origins and development, it is quite literally the only game in town. Thus, this is a case where, when we discover evidence against a specific sub-hypothesis, we'd have very good reason to repartition the probability space within common descent when we renormalize, rather than to reallocate the remaining probability elsewhere. There wouldn't be anywhere else for it to go. Perhaps the theist will want to say that theism also has no viable competitors. If the theist wants to die on that hill, I feel that I have already done my job as an atheist, but that'd be a discussion for another day.


There is another relevant disanalogy. This is a case where the data as such only counts against a specific version of the theory, not the general theory. Ex hypothesi, the evidence in question is against punctuationism specifically, which is a specific version of evolutionary theory. In the case of theism, however, the evidence E counts against the general version of theism, T, such that you have to update to a more specific, theoretically loaded version of T conjoined with A&Ni. In other words, in the common descent case you simply take out certain assumptions that are tied to a specific version of the general hypothesis, since that specific version of the hypothesis is what the evidence counts against. In the case of T, in response to E, one is forced to relegate the outcome space to a more specific version of T conjoined with specific propositions that are not made likely on the bare content of T. As such, we have much more reason to believe that updating theism in the way required to screen off E would lead to a decrease in probability than we do for the case of common descent. I'll note that if we were to discover evidence that appears to count against the general theory of common descent, and as a result we are forced to move to a specific version of evolutionary theory conjoined with specific propositions not entailed by or made probable on bare evolutionary theory, I would have the intuition that this lowers the overall prior of evolutionary theory, though the effect would be negligible in light of the other disanalogy I proffered.


I could go on addressing more proposed examples, but I feel no need, as they will fall prey to similar disanalogies. All in all, I must conclude that the procedure CI employs in attempting to 'screen off' disconfirmatory evidence is not as easy and straightforward as he supposes. He is, of course, free to update on the evidence and make use of whatever axiological and narrative assumptions he pleases without independent justification, but not without severe costs. To the extent that his approach to 'refuting' proposed evidence against theism relies on such a procedure, there are both rebutting and undercutting defeaters to said approach.

  

Theoretical Constraints and Empirical Content

In the last two sections I have directly argued against CI's strategy, arguing that even granting him his methodological assumptions, such as confirmation holism and explanationism, this in no way leads to his desired conclusions that theism conjoined with auxiliary stories does not have a lower prior probability, or that he can update theism in light of the data without loss. I've further given independent grounds for thinking both of those claims are false. In this final section, I will argue that even if his methodological assumptions and the entailments he believes they have are granted, the fruits of such an approach would be no less disastrous for natural theology than they are taken to be for atheology. In particular, I will argue that they imply that theism cannot be tested, and thus cannot be taken to be even a viable explanatory hypothesis, let alone anything resembling a progressive research project. Further, if such an approach could be taken seriously in full generality, it would seem to imply an unacceptable degree of inductive skepticism and underdetermination of all theories. CI proposes a theoretical constraint on theistic predictions such that they can allegedly be falsified. However, I will argue that such a constraint is manifestly insufficient to avoid these issues.

3.1 Empirical Testability


My goal in this sub-section and the next will be to motivate the following argument.

1. If the theistic hypothesis is empirically untestable, then the theistic hypothesis is not a viable explanatory hypothesis

2. If CI's procedure for 'screening-off' disconfirming evidence against the theistic hypothesis is endorsed, then the theistic hypothesis is empirically untestable.

3. Thus, if CI's procedure for 'screening-off' disconfirming evidence against the theistic hypothesis is endorsed, then the theistic hypothesis is not a viable explanatory hypothesis.

The argument as formulated is valid; it is a hypothetical syllogism. The conclusion (3.) can only be denied on pain of denying one of the premises, (1.) or (2.). I will dedicate this sub-section to justifying premise (1.), which I take to be the less controversial premise. Thus, I will argue that empirical testability is a necessary condition for a viable explanatory hypothesis. Before we get started though, it will be useful to clarify what is meant by the terms 'empirical testability' and 'viable explanatory hypothesis'. What I mean by 'viable explanatory hypothesis' is just a hypothesis that purports to explain some observable state of affairs and that can, even in principle, be a good or theoretically virtuous explanation, which minimally includes being able to be confirmed by the phenomena it purports to explain. Here I will understand empirical testability as:

Empirical Testability: A theory or hypothesis H is empirically testable iff there is some epistemically possible observation O1, such that O1 disconfirms H and there is some epistemically possible observation O2, such that O2 confirms H.

Here, by an epistemically possible observation, I mean one that is possibly observed relative to the epistemic situation of a human observer like us. In other words, there are no 'a priori truths', modal facts, or limitations inherent to the human condition, among other things, that prevent us from having such an observation. Notice, then, that this is a rather modest understanding of empirical testability. It's not even required that we actually have such confirming or disconfirming observations in order for a hypothesis to be empirically testable; all that's required is that it is epistemically possible that we have such observations. Another important point is that I'm not assuming a strict Popperian solution to the demarcation problem. When I say a viable explanatory hypothesis must be able to be disconfirmed, I don't mean that a necessary condition is that the hypothesis must be falsifiable in Karl Popper's sense, that is, that there must be some possible observation which deductively entails that the hypothesis is false. All that is required is that there are epistemically possible observations which count against the hypothesis, either via entailment or probabilistically.

So, why should we believe that viable explanatory hypotheses must be empirically testable? That is, why must it be the case that a hypothesis must be both empirically confirmable and disconfirmable in order to be viable? The reason is straightforward: if a hypothesis H is to be empirically confirmable, it must also be empirically disconfirmable. This is because, for H to be empirically confirmable, it must make specific predictions about what kind of observable evidence should be more likely if H is true. But if H is such that no epistemically possible observable state of affairs O can count against it at all (i.e., disconfirm it), then this would mean that, for every O, P(O∣H) = P(O∣~H). In other words, all observable evidence we could have would be equally likely whether H is true or not. This makes H utterly uninformative with respect to any O you could have–the likelihood does not change regardless of what evidence is observed.

Consider also that any evidential argument for theism which can be formulated via Bayes' Theorem, such as the fine-tuning argument, requires that theism be empirically testable. We start with some observation, in this case a life-permitting universe (L), and we use it to test the theistic hypothesis T. For L to confirm T, it must be the case that P(T∣L) > P(T). But if the probability of T given L is higher than the prior probability of T, then the probability of T given not-L must be lower, to ensure that the overall probability remains consistent with the base rate of T in line with the law of total probability. So, P(T∣~L) would have to be lower than P(T). L can only raise the probability of T if not-L lowers the probability of T. The same general principle applies to pretty much any probabilistic argument in natural theology.
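Spelling this out (again in my own notation, and assuming 0 < P(L) < 1):

```latex
\begin{align*}
P(T) &= P(T \mid L)\,P(L) + P(T \mid \lnot L)\,P(\lnot L) && \text{(law of total probability)}\\
\intertext{Since $P(T)$ is a weighted average of $P(T \mid L)$ and $P(T \mid \lnot L)$, if $P(T \mid L) > P(T)$ then necessarily}
P(T \mid \lnot L) &< P(T).
\end{align*}
```

A life-permitting universe can confirm theism only if a life-preventing universe would have disconfirmed it.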

Another supplementary motivation for why hypotheses must be testable would be that it aligns with scientific practice, as scientists routinely design experiments to test their hypotheses. Scientific research also undergoes heavy peer-review prior to publication, wherein hypotheses are critically evaluated for testability and reproducibility. Hypotheses that aren't empirically testable in the sense described do not survive this process and are generally considered pseudo-scientific.

One might be tempted to argue against the idea that a viable explanatory hypothesis must be empirically testable by providing counter-examples. For instance, suppose I have 6 water bottles and take away 2 and am left with 4; call this observation O*. At least part of what explains this, it may be alleged, is that it is an 'a priori necessary truth' of mathematics that '6 - 2 = 4'; call this X. But X is not empirically disconfirmable; indeed, that's precisely what makes it a necessary truth! However, while this is a case of an irrefutable necessary truth that plays a part in explaining some observations, it isn't what I would call a hypothesis. It would not be correct to say that O* confirms X; indeed, the epistemic probability of X is 1 regardless of any observations we may have. X, among other a priori necessary truths, would simply be a necessary condition for the possibility of any experience or observations we could have, or a modal constraint on the states of affairs which could exist and thus be observed. So the explanation offered in this sense wouldn't be one that postulates an entity or a theoretical posit to explain certain empirical data and that can be confirmed by said data; rather, it would be a statement of the conditions our experiences must conform to for them to be intelligible experiences of some phenomena. So it isn't an explanatory hypothesis, at least in the sense that's of interest to me.

Another possible counter-example might appeal to the theoretical sciences, in particular theories in theoretical physics such as string theory or multiverse theories. String theory (ST) posits the existence of additional spatial dimensions beyond the familiar three. These extra dimensions are hypothesized to be compactified, or curled up, at scales too small to be observable with current technology. Further, ST often deals with phenomena that are expected to occur at extremely high energy scales, particularly the Planck scale, which is many orders of magnitude higher than what scientists can achieve with current particle accelerators. So, it may be alleged, ST is a valid scientific theory which is not empirically testable, and is therefore a counterexample to my claim.

How might I respond? First off, theories like ST are highly speculative and probably should not be definitively endorsed (especially by laymen), one reason being precisely that they haven't been empirically confirmed. Second, it's important to note that ST is mainly motivated by its ability to unify the forces described by the Standard Model of particle physics, as well as general relativity (GR) and quantum mechanics (QM), under a coherent and parsimonious theoretical framework, not by its ability to explain certain observations unique to it, or by being made likely given some observations we've made. Thus, what it mainly has going for it are theoretical virtues such as unification, parsimony, and elegance. So, once again, while I do not question the validity of ST as a scientific theory, it's not exactly what I have in mind when I designate something an 'explanatory hypothesis'. Now, while it's conceivable that the theistic hypothesis may be motivated strictly on the basis of theoretical virtues, insofar as the theistic hypothesis is considered as an explanatory hypothesis (which is what CI treats it as, and which forms the basis for most arguments in natural theology), empirical testability is a necessary feature. Third, the claim that ST is empirically untestable in the sense I defined is misleading. At best, we might say it is untestable given current technological limitations. However, it is not implausible that advances in technology or novel experimental designs in the future will allow for empirical testing of its predictions. The history of science is replete with examples where theoretical predictions were confirmed much later, when technology caught up, such as the discovery of the Higgs boson. Disconfirmation of ST could occur if, for instance, it is observed that quantum superposition behaves inconsistently with ST under conditions of high energies or strong gravitational fields. Additionally, it's not even clear that ST is currently untestable. As already stated, ST aims to reconcile GR and QM, and its plausibility could be undercut if foundational aspects of these theories were disproven or if key predictions, like those related to quantum entanglement, were disconfirmed by experiments. Other theoretical physics models will also be subject to at least epistemically possible empirical challenges.

One might also appeal to metaphysical hypotheses such as panpsychism or platonism as counterexamples, but once again these fall prey to the same disanalogies. Either they are empirically testable; or they are not motivated by an appeal to some observable state of affairs but by an appeal to theoretical virtues, such as parsimony or the fact that they solve certain philosophical issues, such as the explanatory gap or the problem of universals, in which case they aren't explanatory hypotheses; or they are a necessary or 'a priori' feature of possible experiences and observations, in which case they still aren't explanatory hypotheses.

One last potential challenge might be to appeal to confirmation holism. The idea would be that the claim that hypotheses must be empirically confirmable or disconfirmable neglects the crucial fact that hypotheses are not tested in isolation but only relative to background auxiliary assumptions. This is mistaken, however, and equates the more modest explication of empirical testability I provided with the Popperian falsificationism I explicitly want to avoid. I'm happy to grant that an observation cannot show that a hypothesis in isolation is false; the observation only shows that the conjunction of the hypothesis and the auxiliary assumptions which together entail the negation of the observation must be false, i.e., that either the hypothesis or at least one of the auxiliaries is false. I can even grant that the evidential effect of any observable evidence would be the same for the auxiliary assumptions and the main hypothesis (that is, neither has a greater increase or decrease in probability relative to the other). All that is required is that some epistemically possible observable state of affairs raises (confirms) or lowers (disconfirms) the probability of the hypothesis, not that it decisively falsifies the hypothesis, or even that it disconfirms it to a greater degree than its auxiliaries. Perhaps this objection is based on a much more radical holism, such that there can't be any observable evidence which probabilistically disconfirms a hypothesis. Such a view, however, is certainly untenable and undermines the very notion of scientific confirmation, for which empirical testing of hypotheses is crucial.

So, in light of the above considerations, we have a very good reason to think viable explanatory hypotheses must be empirically testable and no good reason to think this is not the case. In other words, (1.) is true.

3.2 CI's Theism is Empirically Untestable

Next I will justify (2.), and while it is likely the more controversial premise, I will spend less time on it as it is more straightforward to defend. To justify (2.) we must take a look at the methodology CI employs for approaching theism and potential disconfirmation. In general, as stated in section 2.1, the approach has two steps: 1) updating on some observable data to 'learn' that, conditional on theism, some auxiliary must be true, and 2) using the truth of the auxiliary story on theism to then screen off the disconfirmatory power of the observable data on theism. We are supposed to accept (and I will for the purposes of this section) that, in using this procedure, he can remove any evidential tension that disconfirming evidence such as evil creates for theism. Now, I've already argued in section 2.1 that if it were the case that we could always use this method without restriction, this would make the theistic hypothesis (or any hypothesis really) empirically untestable and vacuous. This is because, for any epistemically possible observation O that initially counts against the theistic hypothesis T, if we have carte blanche to always redo our predictions using CI's methodology no matter what, one can simply tack on some auxiliary story (and I will stress that, according to CI, said story need not even be independently justified) such that, when it is conjoined to T, O becomes completely expected on T, thus neutralizing O as disconfirming evidence. If we can always do this procedure without any loss in probability at all, then T will forever remain insulated from empirical disconfirmation no matter what observations we may have, and would thereby be empirically untestable.

This is especially a problem given how flexible T is, as it posits a being, God, for which the only relevant things we know are that a) He can perform any possible action (or actualize any logically actualizable state of affairs) and b) He acts on the basis of axiological reasons. So, without any knowledge about the reasons God is motivated by, as far as our epistemic situation is concerned we would have no idea what to expect from T. T would be equally consistent with any possible state of affairs. So, it's crucial that the empirical content of T is constrained by moral goodness. Yet, if CI is right, then we have free rein to update our assumptions about value, or to tack on narratives whenever some observation looks like something we wouldn't expect a being motivated by axiological reasons to permit, no matter how unmotivated said assumptions and narratives are. Well, in that case, it's highly unclear what the one constraint we have for determining the empirical content of the theistic hypothesis is even doing. Indeed, it might as well not be there at all as far as evaluating the predictions and empirical content of theism goes.

CI will likely argue that this is too fast, for there are constraints on when you can and can't add an auxiliary axiological story to theism, constraints which also serve as a falsification principle, namely the Ultimate Goodness Principle (UGP). He explicates the principle as follows:

The Ultimate Goodness Principle stipulates that a state of affairs (S) can be instantiated by God if and only if (S) permanently bears the axiological status of being good on the whole at some time (t). This principle serves as a crucial constraint on theistic actions and predictions, dictating that any divine action must ultimately align with goodness.

Falsification Criterion: This principle functions as a meta-principle for falsification within theistic explanations. If evidence emerges that contradicts this principle, it would directly challenge the validity of theism, irrespective of auxiliary axiological hypotheses.

Theistic Predictions and Limits: Theistic hypotheses, when conjoined with various axiological theories, must align with the Ultimate Goodness Principle. Predictions, evidence, or value theories that conflict with this principle are considered evidence against theism and thus the P(T|Ai) & P(E|T&Ai) will be decreased substantially to a degree that would favor ~T. 

Let us leave aside concerns such as the fact that the notion of some state of affairs S permanently bearing the axiological status of being good on the whole at a time t is not entirely clear. Does this constraint now leave room for empirical disconfirmation? No. Recall that empirical testability and disconfirmation require that there be some epistemically possible observable state of affairs which probabilistically counts against a hypothesis. This is crucial because our notion of confirmation for a hypothesis is bound by our conceptual repertoires, our observations, and our epistemic practices, which are, of course, inherently limited. Whether or not some state of affairs S is permanently good on the whole is not something which is observable from the perspective or epistemic situation of a human. All observations we may have take place in a limited temporal framework; they take place over some interval from time t1 to t2. But no matter how arbitrarily large said time interval is, or what the nature of our observation is, even if our observation of S is, say, a thousand years of pure anguish and squalor, this is perfectly consistent with S being good on the whole for reasons which are inaccessible to us, or due to certain outcomes which, unbeknownst to us, will occur outside the given time frame. The epistemically inaccessible nature of future states, and of goods we may not know about or even be able to comprehend, entails that we simply can't observe some state of affairs being permanently good on the whole, or otherwise. So, insofar as this is the only constraint imposed on the theistic hypothesis for empirical disconfirmation, it is both fundamentally useless to us as epistemically limited agents and does nothing to address the problem of empirical untestability.

I'll note that it is curious that CI somewhat acknowledges, and even seems to endorse, the unsavory implication I'm charging him with. When explaining the protological fallacy he says the following:
If theism predicts with a probability of one that things will have a positive trajectory toward their ultimate good then the only evidence against theism would be evidence that entails that the trajectory is toward the bad. That seems very difficult to do because there is no evidence that can show why bad states of affairs will *inevitably* lead to “ultimate badness” since there can always be a theodical story on why bad states of affairs actually will lead towards some ultimate good end within an S-World. This makes the probability of an ultimate badness (a violation of the UGP and therefore a significant data point against theism) completely inscrutable since we are epistemically cut off from knowing what some things state on the whole is like.

So, CI agrees that 'ultimate badness', or any violation of the UGP, is the only evidence we could have against theism, and that the probability of an ultimate badness is inscrutable. But for one, if any evidence that could disconfirm theism is inscrutable, it simply follows that any evidence which could confirm theism is also inscrutable! If P(T∣E) is inscrutable, then P(T∣~E) is inscrutable, for any E! For two, it's hard to see how this position is meaningfully distinct from skeptical theism, which, on my understanding, is a view CI explicitly denies on the grounds that it undermines the predictive expectations of theism, since skeptical theism undermines our ability to reason about divine psychology and thus about what to expect from theism. Again, it would seem that CI wants to eat his cake and have it too. Unfortunately, he just can't do that.

In any case, I've now motivated both (1.) and (2.), from which it jointly follows that (3.) if CI's procedure for 'screening-off' disconfirming evidence against the theistic hypothesis is endorsed, then the theistic hypothesis is not a viable explanatory hypothesis. This is a severely damaging result for natural theology, as pretty much all evidential arguments, including fine-tuning, moral knowledge, and psychophysical harmony, which rely on specific observable data confirming theism, are undercut. It is an especially bad result for CI himself, as my understanding is that CI's main project was to build a powerful theistic explanatory model which must be taken seriously. However, in his attempts to screen off disconfirming evidence against theism, it looks like such a project cannot even get off the ground.

3.3 Underdetermination and Inductive Skepticism

Lastly, I will argue that if the kind of general schema CI proposes for evaluating and testing hypotheses, where the only constraint is the UGP, were acceptable, this would appear to lead to radical underdetermination and inductive skepticism. I take CI's general schema to be something like the following: 'it is permissible to freely conjoin any auxiliary hypotheses to a main hypothesis to screen off any potentially disconfirming data, in the sense that you can do this without any loss, as long as the evidence follows the constraint that it is not permanently falsifying and it is possible that the hypothesis will be vindicated in the future'. This is a direct analogue of the more specific schema CI uses for theism, which is 'it is permissible to freely conjoin any theodical narratives to theism to screen off atheological evidence, in the sense that you can do this without any loss, as long as the evidence follows the constraint that it is not permanently unjustified and may eventually be good on the whole'. The general schema, it seems to me, is just a generalized version of the specific schema that applies to theism. The structure of the procedure is identical, and it seems to me there are no relevant differences in its content. For instance, a 'theodical narrative' is simply a type of auxiliary story designed so as to make it the case that some data is not in tension with theism when conjoined with it. So, it is highly unclear to me what relevant methodological difference there is between this and conjoining any auxiliary story to a hypothesis for the purposes of screening off potentially disconfirming data. Further, some state of affairs being permanently falsifying and unable to be vindicated is the same as some state of affairs being permanently unjustified and not good on the whole in the context of theism. This is because theism entails that there are no unjustified evils: on CI's view, some evil (or any state of affairs) can obtain on theism only if it eventually bears the status of being good on the whole. Thus, if CI's suggestions are sound, we should conclude that this general schema is true in addition to CI's specific schema.

However, the general schema implies radical underdetermination of all theories, since it tells us that, as long as the evidence in question isn't permanently falsifying, we can tack on an auxiliary assumption such that the data becomes unsurprising on the hypothesis when conjoined with it, and we don't lose out on probability. For virtually any hypothesis, even the most implausible and ridiculous theories, it's not the case that the evidence against it entails that the hypothesis is false, and it is almost never ruled out that there might be some discoveries in the future that undercut the overwhelming evidential case against it. Let's say the hypothesis is that the earth is 4000 years old. The evidence against this hypothesis appears overwhelming, ranging from radiometric dating, sedimentary layers and rock formations, and the fossil record to many astronomical observations. It is as falsified as a scientific claim could get. However, this evidence doesn't permanently falsify the hypothesis that the earth is 4000 years old; there might be some fact F that, if learned at some point in the future, undercuts the overwhelming evidential force of these observations. So, we simply conjoin the 4000-year-old earth hypothesis with the auxiliary assumption that F obtains, say, that some supernatural deceiver planted all this evidence to trick us into believing that the earth is more than 4000 years old. Suddenly, all the data becomes unsurprising on the 4000-year-old earth hypothesis with the auxiliary assumption; indeed, it's precisely what we'd expect!

Or suppose the hypothesis is that 'smoking does not cause cancer'. Again, prima facie, the evidential case against this proposition appears insuperable. The proposition that smoking causes cancer is a widely accepted conclusion in the medical community, supported by a vast array of epidemiological and biological research. However, again, it's possible that we could discover something in the future such that the case is undercut. So, we simply introduce the auxiliary hypothesis that there is a worldwide conspiracy by health organizations and governments. According to this hypothesis, these entities are fabricating all the scientific data linking smoking to cancer. They are doing this for some undisclosed reason, perhaps to control tobacco sales, or as part of a more extensive health misinformation campaign. Voila, suddenly all the data is totally unsurprising on the hypothesis that smoking does not cause cancer!

Since the evidence is equally consistent with both hypotheses, the only things we'd be able to appeal to are extra-evidential considerations. We might charge that the young-earth and smoking-is-perfectly-healthy hypotheses, together with the unjustified auxiliary assumptions tacked on to them, are unparsimonious and clearly ad hoc in a way that is theoretically vicious. But if the general schema is right, we are always free to employ this procedure and add whatever auxiliary assumptions we'd like, as long as the data in question isn't permanently falsifying, and we lose no probability as a result. Additionally, CI's own view appears to be that modesty does not matter to the intrinsic probability of hypotheses. Once again I want to stress that the general structure of this procedure is the same as in the case of the theistic hypothesis T. We see some seemingly disconfirming evidence E. We update with a narrative/axiological assumption A. We do not need to show that A is independently plausible. A is designed in such a way that E becomes completely unsurprising on T&A. Finally, there is absolutely no penalty for this, and our prior probability is not lowered whatsoever.
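To make the structure concrete, here is a minimal toy Bayes calculation (the code and numbers are my own illustration, not anything CI commits to) showing how an auxiliary that inherits the original prior 'for free' neutralizes even overwhelming disconfirming evidence:

```python
# Toy illustration of the "no-loss" auxiliary move, with made-up numbers.

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Young-earth hypothesis H; evidence E = radiometric dates, fossil record, etc.
prior_h = 0.01
print(posterior(prior_h, p_e_given_h=1e-6, p_e_given_not_h=0.9))
# -> roughly 1e-8: on its own, E massively disconfirms H.

# Conjoin the deceiver auxiliary A, designed so that P(E | H & A) = 1.
# If, as the general schema allows, H&A simply inherits H's prior with no
# penalty, the very same evidence no longer tells against the hypothesis.
print(posterior(prior_h, p_e_given_h=1.0, p_e_given_not_h=0.9))
# -> about 0.011: the disconfirmation has vanished, and the "no loss"
# assumption about the prior is doing all of the work.
```

The only thing blocking this move in ordinary practice is that conjoining the deceiver story should cost the hypothesis prior probability; the general schema says it costs nothing.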

We have now seen that CI's principles generate a problem of underdetermination, due to the implications of the general schema. For similar reasons, we also run into general inductive skepticism. Notice that all the evidence we have for the old-earth and smoking-causes-cancer hypotheses is inductive evidence. No matter how strong the inductive evidence (which, for the examples I used, is very strong), it may always be the case that there are some future facts inaccessible to us, such that, were the complete picture available to us, the hypothesis would not be as supported as we may have supposed. Indeed, much of the proposed evidence against theism is formed through standard inductive inferences. One of the leading proponents of the evidential problem of evil, Paul Draper, argues inductively that prima facie badness such as horrendous suffering is more likely to be ultima facie bad than ultima facie good. This sort of reasoning, in its structure, is just a standard inductive appeal. We have no independently plausible reason to think that there will be some mysterious justifying good whole with which the evil is conjoined, and, ceteris paribus, we expect the future to resemble the past and present, in which we continually fail to observe justifying reasons for the horrendous evil and similar horrendous evils. Indeed, empirically, most horrific evils we observe seem to lead to the destruction of people's characters and to worse consequences down the line, strengthening the induction. However, CI wants to say that the fact that future states are inaccessible to us means we can never truly know that some state of affairs is ultima facie bad. Whether some state of affairs is ultimately bad, that is, whether the evil it involves is not ultimately justified, is, according to CI, completely inscrutable to us. But were we to take seriously the claim that the epistemic possibility that the future does not resemble the past provides a defeater for the supposition that some prima facie evil is more likely ultimately unjustified, would this not serve as a defeater for virtually all of our ordinary inductive inferences? For example, I think the sun will rise tomorrow, and the next day, and the next day, and I have this expectation for every given day unless I am given overriding evidence to the contrary. Why? Because I make the inductive inference that the future will resemble the past: I experience the sun rising every day, so I expect it to rise on every given day in the future (until maybe the sun dies in billions of years or something). But if the inaccessibility of the future makes the future resembling the past inscrutable to us, then it looks like my inductive inference has a defeater.

Of course, CI might assert that this sort of procedure can only be used to insulate theism from disconfirmation, not generally, but unless a relevant asymmetry is provided, such a claim looks entirely arbitrary and unmotivated. CI must either show that there is a symmetry breaker, and that the case of prima facie evils disconfirming theism is a special case where our ordinary inductive inferences aren't licensed, or he must add some constraints to the procedure that is used. But as far as I can tell, what few theoretical constraints he does provide are nowhere near enough. Perhaps there is a relevant symmetry breaker; I'll leave it open to the audience whether CI's position entails radical underdetermination and inductive skepticism. I'll simply note that I do not see one and that, once again, the implications of CI's approach seem to me not meaningfully different from skeptical theism.