COHERENTISM VS. FOUNDATIONALISM: WHAT AI TEACHES US (VERSION 1: WRITTEN THROUGH PERPLEXITY)
- John-Michael Kuczynski
The longstanding epistemological debate between foundationalism and coherentism concerns the nature of justification in human knowledge. Foundationalism holds that knowledge rests upon a set of basic, indubitable beliefs: foundations that provide secure grounding for all other justified beliefs. Coherentism, by contrast, denies the existence of such foundational beliefs, positing instead that justification arises from the mutual support and coherence among beliefs within a holistic network.
This debate, traditionally philosophical and somewhat abstract, gains fresh empirical relevance when examined through the lens of contemporary artificial intelligence (AI) architecture, particularly large language models (LLMs). Assuming, arguendo, that LLMs and related AI systems bear some meaningful resemblance to human epistemic agents, their architecture and operational principles offer illuminating insights into the viability of foundationalism versus coherentism as theories of justification.
AI Architecture and the Nature of Justification
LLMs function by modeling complex probabilistic relationships across vast corpora of text. They generate outputs not by deducing answers from fixed axioms or self-evident truths, but by predicting what is most coherent and contextually appropriate given their training data. This process is fundamentally holistic: the model’s “knowledge” emerges from the intricate web of interrelations among countless data points, rather than from any privileged foundational element.
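The point can be made concrete with a deliberately small sketch. The vocabulary, context, and scores below are hypothetical and hard-coded rather than learned, but the shape of the computation is the relevant thing: a candidate continuation is evaluated only by how well it fits the context as a whole, not by appeal to any privileged starting point.

```python
import numpy as np

def softmax(logits):
    """Convert raw compatibility scores into a probability distribution."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Hypothetical vocabulary and toy context-compatibility scores. In a real
# LLM these scores are produced by billions of learned parameters; here
# they are hard-coded purely to illustrate the idea.
vocab = ["mat", "moon", "justification", "sat"]
context = "the cat sat on the"
logits = np.array([4.2, 0.3, -1.5, 0.9])  # context-dependent scores

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"P({token!r} | {context!r}) = {p:.3f}")
# The model outputs whichever continuation coheres best with the context;
# nothing in the computation is marked as foundational.
```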
Critically, LLMs do not possess or rely upon any innate, non-defeasible beliefs. Their parameters begin as random values and are iteratively adjusted through exposure to data, optimizing for predictive accuracy and, with it, internal coherence. This process mirrors the coherentist conception of justification as arising from systemic coherence rather than from foundational certainties. In contrast, foundationalism’s insistence on basic beliefs that are immune to defeat finds little parallel in AI systems, which are inherently defeasible and context-dependent.
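A toy training loop makes this explicit. The sketch below (using PyTorch, with a synthetic corpus and a model far too small to be useful) is not how any production system is configured; it only illustrates that the parameters begin as noise and that nothing in them is exempt from revision by the data.

```python
import torch
import torch.nn as nn

# Synthetic "corpus" of token ids; in a real LLM this would be a vast
# text corpus, but the principle is the same.
torch.manual_seed(0)
vocab_size, context_len = 50, 4
data = torch.randint(0, vocab_size, (256, context_len + 1))

# A deliberately tiny next-token predictor. Its weights start as random
# noise: there are no built-in axioms, only parameters awaiting revision.
model = nn.Sequential(
    nn.Embedding(vocab_size, 16),
    nn.Flatten(),
    nn.Linear(16 * context_len, vocab_size),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    contexts, targets = data[:, :-1], data[:, -1]
    logits = model(contexts)
    loss = loss_fn(logits, targets)   # how badly predictions fit the data
    optimizer.zero_grad()
    loss.backward()                   # every parameter is open to revision
    optimizer.step()
```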
Defeasibility and Contextual Sensitivity in AI
Another salient feature of AI epistemic functioning is defeasibility. The outputs of LLMs are not fixed truths but are subject to revision with new data or changes in context. This dynamic adaptability aligns closely with coherentism’s view that justification is not absolute but contextually mediated and revisable. Foundationalism, with its emphasis on indubitable starting points, struggles to accommodate this fluidity.
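The following toy predictor, a simple bigram counter rather than a neural network, illustrates the same structural point: whatever it currently "concludes" is held only as long as the accumulated evidence supports it, and a new batch of data can overturn it.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """A toy predictor whose 'beliefs' are nothing but revisable counts."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        nxt = self.counts[prev.lower()]
        return nxt.most_common(1)[0][0] if nxt else None

model = BigramPredictor()
model.update("swans are white swans are white swans are white")
print(model.predict("are"))   # 'white': the currently best-supported continuation

# New evidence arrives; the earlier conclusion is defeated, not preserved.
model.update("swans are black swans are black swans are black swans are black")
print(model.predict("are"))   # 'black': revised in light of the larger corpus
```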
Structural Parallels Between AI and Coherentist Justification
The architecture of LLMs also structurally resembles the coherentist model of justification in several key respects:
Holistic Validation: Justification emerges from the overall coherence of the network rather than isolated axioms.
Parallel Processing: Transformers evaluate multiple interconnections simultaneously, reflecting the web-like structure coherentists envision (a minimal sketch of this operation follows the list).
Pragmatic Optimization: The training objective prioritizes predictive utility and contextual fit over correspondence to any fixed foundational truth.
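The parallel-processing point can be illustrated with a minimal, single-head version of the attention operation at the heart of transformers (omitting the learned projection matrices a real model would apply): every token's representation is recomputed, in one step, from its weighted relations to every other token.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every token attends to every other token in a single parallel step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # all pairwise interconnections
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # each output blends the whole context

# Five tokens with hypothetical 8-dimensional representations. No single
# token is privileged: each output vector is determined by its weighted
# relations to all of the others.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (5, 8)
```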
Implications for Epistemological Theory
The empirical success of AI systems grounded in coherence rather than foundational axioms challenges foundationalist claims. It demonstrates that complex, reliable inference and “knowledge-like” behavior can arise without recourse to self-justifying beliefs. While foundationalists might argue that AI training data implicitly encodes foundational assumptions (such as mathematical axioms), this does not translate into the model operationally treating any input as foundational or immune to revision.
Conclusion
In sum, contemporary AI architectures provide compelling empirical support for coherentism over foundationalism as a theory of epistemic justification. They exemplify how justification can emerge from the coherence of a belief network without reliance on indubitable foundations. This insight not only advances philosophical understanding but also underscores the value of interdisciplinary dialogue between epistemology and cognitive computational modeling.