Ted Poston

Books

Edited Books

Edited Journals

Articles

  • Forthcoming. Hyperintensional Evidence and Bayesian coherence [Abstract]
  • Bayesian approaches to rationality require that a person’s degrees of belief be coherent. Among other implications, coherence requires that a person have the same degree of belief in any two logically equivalent propositions. However, a person can have evidence for a claim without having evidence for every proposition logically equivalent to it. This paper explores this conflict and argues that a person may be perfectly rational in virtue of responding to their evidence, even if their credences are not coherent. The paper also challenges the idea that it is always better to have more coherent credences, highlighting the fundamental role that evidence plays in rational belief. (Asian Journal of Philosophy).
  • 2024. Evidence and Explanation (with Kevin McCain) [Abstract]
  • (The Routledge Handbook of the Philosophy of Evidence).
  • 2023. Critical Review of Fitting Things Together: Coherence and the demands of structural rationality. [Abstract]
  • (Philosophical Quarterly).
  • 2021. Coherence and Confirmation: The epistemic limitations of the impossibility theorems [Abstract]
  • It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer (1997), Olsson (2002, 2005), and Bovens and Hartmann (2003) prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation. (Kriterion - Journal of Philosophy).
  • 2021. Explanatory coherence and the impossibility of confirmation by coherence [Abstract]
  • The coherence of independent reports provides a strong reason to believe that the reports are true. This plausible claim has come under attack from recent results in Bayesian epistemology. Huemer (1997), Olsson (2002, 2005), and Bovens and Hartmann (2003) prove that, under certain probabilistic conditions, coherence cannot increase the probability of the target claim. These results are taken to demonstrate that epistemic coherentism is untenable. To date no one has investigated how these Bayesian results bear on different conceptions of coherence. In this paper, I investigate these Bayesian results by using Paul Thagard’s ECHO model of explanatory coherence (Thagard (2000)). Thagard’s ECHO model provides a natural representation of the evidential significance of multiple independent reports. The ECHO model, in contrast to the Bayesian models, captures the power of coherence in a witness scenario. The conditions that Bayesian models found to be impossible, ECHO models naturally accommodate. This demonstrates that there are different formal tools for representing coherence. I close with a discussion of the differences between the Bayesian model and the ECHO model. (Philosophy of Science).
  • 2020. The Intrinsic Probability of Grand Explanatory Theories [Abstract]
  • This paper articulates a way to ground a relatively high prior probability for grand explanatory theories apart from an appeal to simplicity. I explore the possibility of enumerating the space of plausible grand theories of the universe by using the explanatory properties of possible views to limit the number of plausible theories. I motivate this alternative grounding by showing that Swinburne’s appeal to simplicity is problematic along several dimensions. I then argue that there are three plausible grand views (theism, atheism, and axiarchism) which satisfy explanatory requirements for plausibility. Other possible views lack the explanatory virtue of these three theories. Consequently, this explanatory grounding provides a way of securing a non-trivial prior probability for theism, atheism, and axiarchism. An important upshot of my approach is that a modest amount of empirical evidence can bear significantly on the posterior probability of grand theories of the universe. (Faith & Philosophy).
  • 2020. Beliefs are justified by coherence. (with Kevin McCain) [Abstract]
  • (Problems in Epistemology and Metaphysics Bloomsbury. ed. Steven B. Cowan).
  • 2020. Experience Alone is Not Enough: Reply to Howard-Snyder. (with Kevin McCain) [Abstract]
  • (Problems in Epistemology and Metaphysics Bloomsbury. ed. Steven B. Cowan).
  • 2019. Dispelling the disjunction objection to explanatory inference (with Kevin McCain) [Abstract]
  • Although inference to the best explanation (IBE) is ubiquitous in science and our everyday lives, there are numerous objections to the viability of IBE. Many of these objections have been thoroughly discussed; however, at least one objection has not received adequate treatment. We term this objection the “Disjunction Objection”. This objection challenges IBE on the grounds that even if H is the best explanation, it could be that the disjunction of its rivals is more likely to be true. As a result, IBE appears to license accepting a hypothesis that is more likely than not to be false. Despite initial appearances, we argue that the Disjunction Objection fails to impugn IBE. (Philosophers' Imprint).
  • 2019. How do medical researchers make causal inferences? (with Olaf Dammann and Paul Thagard) [Abstract]
  • Bradford Hill (1965) highlighted nine aspects of the complex evidential situation a medical researcher faces when determining whether a causal relation exists between a disease and various conditions associated with it. These aspects are widely cited in the literature on epidemiological inference as justifying an inference to a causal claim, but the epistemological basis of the Hill aspects is not understood. We offer an explanatory coherentist interpretation, explicated by Thagard's ECHO model of explanatory coherence. The ECHO model captures the complexity of epidemiological inference and provides a tractable model for inferring disease causation. We apply this model to three cases: the inference of a causal connection between the Zika virus and birth defects, the classic inference that smoking causes cancer, and John Snow’s inference about the cause of cholera. (What is Scientific Knowledge? An Introduction to Contemporary Epistemology of Science Routledge. eds. Kevin McCain and Kostas Kampourakis).
  • 2018. Two Strategies for Explaining Away Skepticism. (with Kevin McCain) [Abstract]
  • One prominent response to philosophical skepticism argues that skepticism is a failed explanatory hypothesis. Yet there are two significantly different explanationist responses, one stemming from a traditional Cartesian epistemology and the other coming from epistemological naturalism. These different explanationist views reveal divergent philosophical methodologies. A Cartesian picture aims for a vindication of many common beliefs based on a neutral ground. A naturalist view aims for a complete, stable, and coherent view of the world. In this paper we describe these competing views and highlight their respective strengths and weaknesses. (The Mystery of Skepticism Brill. eds. McCain and Poston).
  • 2018. The Evidential Impact of Explanatory Considerations (with Kevin McCain) [Abstract]
  • Explanationism is an attractive family of theories of epistemic justification. In broad terms, explanationism is the idea that what a person is justified in believing depends on their explanatory position. At its core, explanationists hold that the fact that p would explain q if p were true is itself evidence that p is true. In slogan form: explanatoriness is evidentially relevant. Despite the plausibility of explanationism, not all of the recent interest in it has been complimentary. Recently, William Roche and Elliott Sober (2013 & 2014) have argued that "explanatoriness is evidentially irrelevant" (2013: 659). R&S's argument against the evidential relevance of explanatory considerations begins with what they call the "Screening-Off Thesis" (SOT): Let H be some hypothesis, O be some observation, and E be the proposition that H would explain O if H and O were true. Then O screens-off E from H: Pr(H|O & E) = Pr(H|O). (2014: 193) R&S contend that SOT is true if and only if "explanatoriness is evidentially irrelevant". We refer to this conditional claim as (IRRELEVANCE). In a recent article (2014), we argued that R&S overlook an important dimension of evidential support when making their case that IRRELEVANCE is true, viz., the resilience of a probability function. Resilience is essentially how volatile a probability function is with respect to new evidence; a probability function with low volatility is more resilient than a function with high volatility. We maintained that IRRELEVANCE is false because there are clear cases where explanatory considerations increase the resilience of a probability function. Additionally, we argued that there are numerous cases where SOT fails to hold. R&S (2014) argue that we were mistaken on both counts. The arguments R&S offer are not persuasive, but they do significantly clarify the disagreement.
We pick up on this improved dialectical situation to further defend our position that explanatoriness is evidentially relevant. The upshot of our discussion is that both SOT and IRRELEVANCE are false because explanatory considerations may be captured in logical and mathematical relations encoded in a Pr-function. Thus, both inference to the best explanation and explanationism are safe from attack. (Best Explanations: New Essays on Inference to the Best Explanation Oxford University Press. eds. McCain and Poston)
  • 2018. Religious Conscience and the Private Market [Abstract]
  • When should an appeal to religious conscience exempt a business owner from providing a private market service? Recent cases in state and federal courts evince the need for a philosophical treatment of the success conditions for religious conscience exemptions. In this paper I assume that the conscience objection is focused on a law requiring the performance of a private market service. I begin with some conceptual clarifications, proceed to examine cases, and argue for an account. I propose that a religious conscience exemption ought to be granted if the act concerns a sacred matter according to an epistemically live religious tradition, a tradition that is believed by a group of persons and whose falsity is not common knowledge. On the view I argue for, an appeal to common knowledge is crucial for fairly balancing the competing rights of religious autonomy and anti-discrimination. (Religious Exemptions Oxford University Press. ed. Vallier and Weber).
  • 2018. The Argument from so many arguments. [Abstract]
  • My goal in this paper is to offer a Bayesian model of strength of evidence in cases in which there are multiple items of independent evidence. I will use this Bayesian model to evaluate the strength of evidence for theism if, as Plantinga claims, there are two dozen or so arguments for theism. Formal models are justified by their clarity, precision, and usefulness, even though they involve abstractions that do not perfectly fit the phenomena. Many of Plantinga’s arguments are metaphysical arguments, involving premises which are necessarily true, if true at all. Applying a Bayesian account of strength of evidence in this case involves reformulating some of the arguments, but, even if a Bayesian shoe doesn’t fit perfectly into a Leibnizian foot, Bayesian footwear is much more suitable to certain types of terrain, especially when the landscape requires encompassing the overall effect of multiple vistas. I believe that the Bayesian model I offer has significant utility in assessing strength of evidence in cases of multiple items of evidence. The model turns questions of the overall strength of multiple arguments into a simple summation problem and it provides a clear framework for raising more philosophical questions about the argument. I hope that this paper provides a model for many fruitful conversations about how to aggregate multiple items of evidence. (Two dozen (or so) arguments for God. Oxford University Press. Eds. Trent Dougherty and Jerry Walls)
  • 2016. Will there be skeptics in heaven? [Abstract]
  • I begin with a puzzle that arises from reflection on two things that are not normally put together: the nature of Christian hope, particularly the vision of a renewed creation, and global skepticism. The puzzle relates to the fact that if arguments for global skepticism work now then they work equally well in heaven. My goal is to present the puzzle and then propose a resolution. I begin by discussing the nature of the Christian conception of heaven and then I develop an argument for global skepticism. I continue to fill out the puzzle before finally turning to examine a resolution of the puzzle. (Paradise Understood: New Philosophical Essays about Heaven. Oxford University Press. Eds. Ryan Byerly and Eric Silverman)
  • 2016. Know how to transmit knowledge? [Abstract]
  • Intellectualism about knowledge-how is the view that practical knowledge is a species of propositional knowledge. I argue that this view is undermined by a difference in properties between knowledge-how and both knowledge-that and knowledge-wh. More specifically, I argue that both knowledge-that and knowledge-wh are easily transmitted via testimony while knowledge-how is not easily transmitted by testimony. This points to a crucial difference in states of knowledge. I also consider Jason Stanley's attempt to subsume knowledge-how under an account of de se knowledge. I argue that there are crucial differences between de se knowledge and knowledge-how. Thus, this paper advances both the discussion of intellectualism and the literature on the nature of de se knowledge. (Nous 50:4, 865-878)
  • 2016. Acquaintance and Skepticism about the Past. [Abstract]
  • I consider the problem of skepticism about the past within Richard Fumerton's acquaintance theory of non-inferential justification. Acts of acquaintance occur only within the specious present, that temporal duration in which (intuitively) memory plays no role. But if our data for justification is limited to the specious present then the options for avoiding a far-reaching skepticism are quite limited. I consider Fumerton's responses to skepticism about the past and argue that his acquaintance theory is not able to stave off skepticism about the past. Furthermore, I argue that the bounds of skepticism about the past overflow to the specious present by limiting the kind of content that is available within the all too short present moment. Finally, I defend an epistemic conservative response to this stark skeptical problem by arguing that it is a theoretically economical account of our justification for beliefs about the past. (Traditional Epistemic Internalism Oxford University Press. Eds. Michael Bergmann and Brett Coppenger)
  • 2016. Coherence and conservation: A response to Gardiner [Abstract]
  • I am grateful for Gardiner’s excellent and challenging comments on my book. I cannot hope to adequately address all of the objections she raises. Instead I will discuss the reasons I think epistemic conservatism is required for a plausible coherentist view and then I will discuss the core idea behind conservatism. My hope is that within the context of a properly formulated and motivated conservatism some of the most pressing concerns Gardiner raises will appear less troubling than they initially do. (Syndicate Philosophy (forthcoming))
  • 2016. Belief, evidence, and knowledge: A response to Cling [Abstract]
  • I thank Andy Cling for these careful and insightful comments. Cling effectively summarizes many of the motivations and arguments I give for my explanationist view. He argues that on several important dimensions my view does not live up to its promises. In particular, he charges that the version of explanatory coherentism I defend is not a form of evidentialism and, moreover, it is a kind of foundationalist skepticism. In the following I aim to answer these important claims. (Syndicate Philosophy (forthcoming))
  • 2016. Making conservatism great again: A reply to Schnee [Abstract]
  • I appreciate Ian Schnee’s forceful criticisms of my attempt to explicate a plausible version of epistemic conservatism. As each commentator has pointed out, epistemic conservatism plays a pivotal role for my coherentist theory and so deserves careful attention. I argue that a belief’s justification is a matter of its fit in an explanatory coherent system that beats relevant competitors. Moreover, I argue that a belief’s justification is always relative to a set of background beliefs. I contend that unless background beliefs have some level of justification simply in virtue of being held then skepticism follows. The key is to formulate a plausible version of conservatism that does not do violence to our firm judgments about the role of evidence in justification. I’ve argued that the core conservative claim is a coherence condition on a subject’s mental life. Unless a subject has a special reason to change her views, she has a right to continue to maintain those views. Or, as I put it, if a subject believes p in the state of empty evidence she has a right to continue to believe p. This epistemic right is not indefeasible. As I mentioned in my responses to other commentators, belief is teleologically ordered to knowledge. Consequently, such a coherence condition on a subject’s mental states is not the be-all and end-all of epistemology. I honestly think epistemic conservatism, properly understood, makes good epistemology. Schnee disagrees. Let us therefore reason about our differences. (Syndicate Philosophy (forthcoming))
  • 2016. Rival explanatory paradigms and justification: A response to Dabay [Abstract]
  • Thomas Dabay provides a thoughtful and interesting perspective on my explanationist view. He focuses on the alternative systems objection to coherentism and argues that this is particularly problematic given my views about epistemic conservatism. Traditionally, the alternative systems objection targets coherentist views of justification because typical coherentist views hold that the justification of any belief is entirely a matter of its internal relations to other beliefs. The objection continues by observing that lots of different sets of beliefs, like a good work of fiction, bear virtuous internal relations among their members. But, presumably, to be epistemically justified in a belief requires more than its being embedded in a coherent work of fiction. Epistemic justification requires more than internal relations among beliefs in one’s doxastic system. (Syndicate Philosophy (forthcoming))
  • 2014. Why explanatoriness is evidentially relevant (with Kevin McCain) [Abstract]
  • William Roche and Elliott Sober argue that explanatoriness is evidentially irrelevant. This conclusion is surprising since it conflicts with a plausible assumption---the fact that a hypothesis best explains a given set of data is evidence that the hypothesis is true. We argue that Roche and Sober’s screening-off argument fails to account for a key aspect of evidential strength: the weight of a body of evidence. The weight of a body of evidence affects the resiliency of probabilities in the light of new evidence. Thus, Roche and Sober are mistaken. Explanatoriness is evidentially relevant. (Thought)
  • 2014. Finite Reasons without Foundations [Abstract]
  • In this paper I develop a theory of reasons that has strong similarities to Peter Klein's infinitism. The view I develop, Framework Reasons, upholds Klein's principles of avoiding arbitrariness (PAA) and avoiding circularity (PAC) without requiring an infinite regress of reasons. A view of reasons that holds that the ‘reason for’ relation is constrained by PAA and PAC can avoid an infinite regress if the ‘reason for’ relation is contextual. Moreover, such a view of reasons can maintain that skepticism is false by maintaining that there is more to epistemic justification than what can be expressed in any reasoning session. One crucial argument for Framework Reasons is that justification depends on a background of plausibility considerations. In the final section, I apply this view of reasons to Michael Bergmann's argument that any non-skeptical epistemology must embrace epistemic circularity. (Metaphilosophy Special Issues: On the Regress Problem)
  • 2014. Social Evil [Abstract]
  • Social evil is any pain or suffering brought about by game-theoretic interactions of many individuals. This paper introduces and discusses the problem of social evil. I begin by focusing on social evil brought about by game-theoretic interactions of rational moral individuals. The problem social evil poses for theism is distinct from problems posed by natural and moral evils. Social evil is not a natural evil because it is brought about by the choices of individuals. But social evil is not a form of moral evil because each individual actor does not misuse his free will. Traditional defenses for natural and moral evil fall short in addressing the problem of social evil. The final section of this paper discusses social evil and virtue. I begin by arguing that social evil can arise even when individual virtue is lacking. Next, I explore the possibility of an Edwardsian defense of social evil that stresses the high demands of true virtue. In this context, I argue that social evil may arise even when all the participants are truly virtuous. The conclusion of this paper is that social evil is problematic and provides new ground for exploring the conceptual resources of theism. (Oxford Studies in the Philosophy of Religion (forthcoming)).
  • 2014. Skeptical Theism within reason. [Abstract]
  • The evidential argument from evil moves from inscrutable evils to gratuitous evils, from evils we cannot scrutinize a God-justifying reason for permitting to there being no such reason. Skeptical theism challenges this move by claiming that our inability to scrutinize a God-justifying reason does not provide good evidence that there is no reason. The core motivation for skeptical theism is that the cognitive and moral distance between a perfect being and creatures like us is so great that we shouldn’t expect to grasp all the relevant considerations pertaining to a God-justifying reason. My goal in this paper is to defend skeptical theism within a context that allows for an inverse probability argument for theism. These arguments are crucial for an evidentialist approach to the justification of theism. I aim to show that there is a natural way of motivating a skeptical theist position that does not undermine our knowledge of some values. (Skeptical Theism: New Essays, OUP, edited by Trent Dougherty and Justin McBrayer (forthcoming))
  • 2014. Direct Phenomenal Beliefs, Cognitive Significance, and the Specious Present [Abstract]
  • David Chalmers (2010) argues for an acquaintance theory of the justification of direct phenomenal beliefs. A central part of this defense is the claim that direct phenomenal beliefs are cognitively significant. I argue against this. Direct phenomenal beliefs are justified within the specious present, and yet the resources available within the present 'now' are so impoverished that they barely constrain the content of a direct phenomenal belief. I argue that Chalmers's account does not have the resources for explaining how direct phenomenal beliefs support the inference from 'thisE is R' to 'that was R'. (Philosophical Studies (forthcoming))
  • 2013. Is foundational a priori justification indispensable? [Abstract]
  • Laurence BonJour's (1985) coherence theory of empirical knowledge relies heavily on a traditional foundationalist theory of a priori knowledge. He argues that a foundationalist, rationalist theory of a priori justification is indispensable for a coherence theory. BonJour (1998) continues this theme, arguing that a traditional account of a priori justification is indispensable for the justification of putative a priori truths, the justification of any non-observational belief, and the justification of reasoning itself. While BonJour's indispensability arguments have received some critical discussion (Gendler 2001; Harman 2001; Beebe 2008), no one has investigated the indispensability arguments from a coherentist perspective. This perspective offers a fruitful take on BonJour's arguments because he does not appreciate the depth of the coherentist alternative to the traditional empiricist-rationalist debate. This is surprising on account of BonJour's previous defense of coherentism. Two significant conclusions emerge: first, BonJour's indispensability arguments beg central questions against an explanationist form of coherentism; second, BonJour's original defense of coherentism took on board certain assumptions that inevitably led to the demise of his form of coherentism. The positive conclusion of this paper is that explanatory coherentism is more coherent than BonJour's indispensability arguments assume and more coherent than BonJour's earlier coherentist epistemology. (Episteme (2013) 10:3, 317-331)
  • 2013. BonJour and the myth of the given. [Abstract]
  • The Sellarsian dilemma is a powerful argument against internalistic foundationalist views that aim to end the regress of reasons in experiential states. Laurence BonJour once defended the soundness of this dilemma as part of a larger argument for epistemic coherentism. BonJour has now renounced his earlier conclusions about the dilemma and has offered an account of internalistic foundationalism aimed, in part, at showing the errors of his former ways. I contend that BonJour's early concerns about the Sellarsian dilemma are correct, and that his latest position does not adequately handle the dilemma. I focus my attention on BonJour's claim that a nonconceptual experiential state can provide a subject with a reason to believe some proposition. It is crucial for the viability of internalistic foundationalism to evaluate whether this claim is true. I argue it is false. The requirement that the states that provide justification give reasons to a subject conflicts with the idea that these states are nonconceptual. In the final section I consider David Chalmers's attempt to defend a view closely similar to BonJour's. Chalmers's useful theory of phenomenal concepts provides a helpful framework for identifying a crucial problem with attempts to end the regress of reasons in pure experiential states. (Res Philosophica (2013) 90:2, 185-201)
  • 2012. Is there an 'I' in epistemology? [Abstract]
  • Epistemic conservatism is the thesis that the mere holding of a belief confers some positive epistemic status on its content. Conservatism is widely criticized on the grounds that it conflicts with the main goal in epistemology to believe truths and disbelieve falsehoods. In this paper I argue for conservatism and defend it from objections. First, I argue that the objection to conservatism from the truth goal in epistemology fails. Second, I develop and defend an argument for conservatism from the perspectival character of the truth goal. Finally, I examine several forceful challenges to conservatism and argue that these challenges are unsuccessful. The first challenge is that conservatism implies the propriety of assertions like ‘I believe p and this is part of my justification for it’. The second challenge argues that conservatism wrongly implies that the identity of an epistemic agent is relevant to the main goal of believing truths and disbelieving falsehoods. The last two challenges I consider are the ‘extra boost’ objection and the conversion objection. Each of these objections helps to clarify the nature of the conservative thesis. The upshot of the paper is that conservatism is an important and viable epistemological thesis. (Dialectica (2012) 66:4, 517-541).
  • 2012. Introduction: Epistemic Coherentism.
  • 2012. Basic Reasons and First Philosophy. [Abstract]
  • This paper develops and defends a coherentist account of reasons. I develop three core ideas for this defense: a distinction between basic reasons and noninferential justification, the plausibility of the neglected argument against first philosophy, and an emergent account of reasons. These three ideas form the backbone for a credible coherentist view of reasons. I work toward this account by formulating and explaining the basic reasons dilemma. This dilemma reveals a wavering attitude that coherentists have had toward basic reasons. More importantly, the basic reasons dilemma focuses our attention on the central problems that afflict coherentist views of basic beliefs. By reflecting on the basic reasons dilemma, I formulate three desiderata that any viable coherentist account of basic beliefs must satisfy. I argue that the account on offer satisfies these desiderata. (The Southern Journal of Philosophy (2012) 50(1): 75-93).
  • 2012. Functionalism about Truth and the Metaphysics of Reduction (with Michael Horton). [Abstract]
  • Functionalism about truth is the view that truth is an explanatorily significant but multiply-realizable property. According to this view the properties that realize truth vary from domain to domain, but the property of truth is a single, higher-order, domain insensitive property. We argue that this view faces a challenge similar to the one that Jaegwon Kim laid out for the multiple realization thesis. The challenge is that the higher-order property of truth is equivalent to an explanatorily idle disjunction of its realization bases. This consequence undermines the alethic functionalists’ non-deflationary ambitions. A plausible response to Kim’s argument fails to carry over to alethic functionalism on account of significant differences between alethic functionalism and psychological functionalism. Lynch’s revised view in his book Truth as One and Many (2009) fails to answer our challenge. The upshot is that, while mental functionalism may survive Kim’s argument, it mortally wounds functionalism about truth. (Acta Analytica (2012) 27: 13-27).
  • 2011. Explanationist Plasticity & The Problem of the Criterion. [Abstract]
  • This paper develops an explanationist treatment of the problem of the criterion. Explanationism is the view that all justified reasoning is justified in virtue of the explanatory virtues: simplicity, fruitfulness, testability, scope, and conservativeness. A crucial part of the explanationist framework is achieving wide reflective equilibrium. I argue that explanationism offers a plausible solution to the problem of the criterion. Furthermore, I argue that a key feature of explanationism is the plasticity of epistemic judgments and epistemic methods. The explanationist does not offer any fixed judgments or methods to guide epistemic conduct; even the explanatory virtues themselves are subject to change. This feature of explanationism gives it an advantage over non-explanationist views that offer fixed epistemic judgments and epistemic methods. The final section of this paper responds to objections to explanationism. (Philosophical Papers (2011) 40(3): 395-419).
  • 2010. Similarity & Acquaintance: A dilemma. [Abstract]
  • There is an interesting and instructive problem with Richard Fumerton’s acquaintance theory of noninferential justification. Fumerton’s explicit account requires acquaintance with the truth-maker of one’s belief and yet he admits that one can have noninferential justification when one is not acquainted with the truth-maker of one’s belief but instead acquainted with a very similar truth-maker. On the face of it this problem calls for clarification. However, there are skeptical issues lurking in the background. This paper explores these issues by developing a dilemma for an acquaintance theory. (Philosophical Studies (2010) 147: 369-378).
  • 2010. Skeptics without borders (with Kevin Meeker) [Abstract]
  • Timothy Williamson’s anti-luminosity argument has received considerable attention. Escaping unnoticed, though, is a strikingly similar argument from David Hume. This paper highlights some of the arresting parallels between Williamson’s reasoning and Hume’s that will allow us to appreciate more deeply the plausibility of Williamson’s reasoning and to understand how, following Hume, we can extend this reasoning to undermine the “luminosity” of simple necessary truths. More broadly the parallels help us to identify a common skeptical predicament underlying both arguments, which we shall call “the quarantine problem”. The quarantine problem expresses a deep skepticism about achieving any exalted epistemic state. Further, the perspective gained by the quarantine problem allows us to easily categorize existing responses to Williamson’s anti-luminosity argument and to observe the deficiencies of those responses. In sum, the quarantine problem reveals the deeply fallibilistic nature of whatever knowledge we may possess. (American Philosophical Quarterly (2010) 47(3): 223-237).
  • 2009. Know how to be Gettiered? [Abstract]
  • Jason Stanley and Timothy Williamson’s influential article "Knowing How" argues that knowledge-how is a species of knowledge-that. One objection to their view is that knowledge-how is significantly different from knowledge-that because Gettier cases afflict the latter but not the former. Stanley and Williamson argue that this objection fails. Their response, however, is not adequate. Moreover, I sketch a plausible argument that knowledge-how is not susceptible to Gettier cases. This suggests a significant distinction between knowledge-that and knowledge-how. (Philosophy and Phenomenological Research (2009) LXXIX(3): 743-747).
  • 2008. Hell, Vagueness, and Justice: A Reply to Sider. (with Trent Dougherty) [Abstract]
  • Ted Sider’s paper “Hell and Vagueness” challenges a certain conception of Hell by arguing that it is inconsistent with God’s justice. Sider’s inconsistency argument works only when supplemented by additional premises. Key to Sider’s case is a premise that the properties upon which eternal destinies supervene are “a smear,” i.e., they are distributed continuously among individuals in the world. We question this premise and provide reasons to doubt it. The doubts come from two sources. The first is based on evidential considerations borrowed from skeptical theism. A related but separate consideration is that supposing it would be an insurmountable problem for God to make just (and therefore non-arbitrary) distinctions in a morally smeared world, God thereby has sufficient motivation not to actualize such worlds. Yet God also clearly has motivation only to actualize some member of the subset of non-smeared worlds which don’t appear non-smeared. For if it were obvious who was morally fit for Heaven and who wasn’t, a new arena of great injustice would be opened up. The result is that if there is a God, then he has the motivation and the ability to actualize from just that set of worlds which are not smeared but which are indiscernible from smeared worlds. (Faith & Philosophy (2008) 25(3): 322-328).
  • 2008. A User’s Guide to Design Arguments. (with Trent Dougherty) [Abstract]
  • We argue that there is a tension between two types of design arguments: the fine-tuning argument (FTA) and the biological design argument (BDA). The tension arises because the strength of each argument is inversely proportional to the value of a certain currently unknown probability. Since the value of that probability is currently unknown, we investigate the properties of the FTA and BDA on different hypothetical values of this probability. If our central claim is correct this suggests three results: (1) It is not very plausible that a cumulative case for theism include both the FTA and the BDA (with one possible qualification); (2) Self-organization scenarios do not threaten theism but in fact provide the materials for a good FTA; (3) A plausible design argument of one sort or another (either FTA or BDA) will be available for a wide variety of values of the key probability. (Religious Studies (2008) 44: 99-110).
  • 2008. Foundationalism.
  • 2007. Acquaintance and the Problem of the Speckled Hen. [Abstract]
  • This paper responds to Ernest Sosa’s recent criticism of Richard Fumerton’s acquaintance theory. Sosa argues that Fumerton’s account of non-inferential justification falls prey to the problem of the speckled hen. I argue that Sosa’s criticisms are both illuminating and interesting but that Fumerton’s theory can escape the problem of the speckled hen. More generally, the paper shows that an internalist account of non-inferential justification can survive the powerful objections of the Sellarsian dilemma and the problem of the speckled hen. (Philosophical Studies (2007) 132: 331-346).
  • 2007. Divine Hiddenness and the Nature of Belief. (with Trent Dougherty) [Abstract]
  • In this paper we argue that attention to the intricacies relating to belief illustrate crucial difficulties with Schellenberg’s hiddenness argument. This issue has been only tangentially discussed in the literature to date. Yet we judge this aspect of Schellenberg’s argument deeply significant. We claim that focus on the nature of belief manifests a central flaw in the hiddenness argument. Additionally, attention to doxastic subtleties provides important lessons about the nature of faith. (Religious Studies (2007) 44: 183-198).
  • 2007. Foundational Evidentialism and the Problem of Scatter. [Abstract]
  • This paper addresses the scatter problem for foundational evidentialism. Reflection on the scatter problem uncovers significant epistemological lessons. The scatter problem is evaluated in connection with Ernest Sosa’s use of the problem as an argument against foundational evidentialism. Sosa’s strategy is to consider a strong intuition in favor of internalism—the new evil demon problem, and then illustrate how a foundational evidentialist account of the new evil demon problem succumbs to the scatter problem. The goal in this paper is to evaluate the force of the scatter problem. The main argument of the paper is that the scatter problem has mixed success. On the one hand, scatter undermines objectual evidentialism, an evidentialist theory that formulates principles of basic perceptual justification in terms of the objects (or properties) of perceptual states. On the other hand, the problem of scatter does not undermine content evidentialism, an evidentialist view that formulates its epistemic principles in terms of the assertive content of perceptual states. The significance of the scatter problem, especially in concert with the new evil demon problem, is that it provides an argument for content evidentialism. (Abstracta (2007) 3(2): 89-106).
  • 2007. Internalism and Externalism in Epistemology.

Book reviews


Ted Poston is Professor of Philosophy and Inaugural Director of the McCollough Institute for Pre-Medical Scholars at the University of Alabama. He follows Sellars's synoptic conception of philosophy: "The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term." His main research focus is in epistemology.

Epistemologists focus on two broad questions: What (if anything) do we know? How do we know it? Understanding answers to these questions is vital for a properly functioning democratic society.

In 2009, Ted began the yearly Orange Beach Epistemology Workshop. This workshop brings leading epistemologists together to discuss current research trends on the beautiful white-sand beaches of the Gulf Coast.

The pandemic brought significant changes all around. Ted began a YouTube channel, Ted Talks Philosophy, to keep philosophical discussion alive.

Teaching