COHERENTISM VS. FOUNDATIONALISM: WHAT AI TEACHES US (VERSION 3: WRITTEN THROUGH CLAUDE)
- John-Michael Kuczynski
In epistemology, a longstanding debate exists between foundationalists and coherentists regarding the structure of justification in human knowledge. Foundationalists maintain that knowledge rests upon certain "foundational" truths that serve as the bedrock for all other beliefs. Coherentists, by contrast, argue that justification emerges from the interconnected web of beliefs supporting one another, with no special class of foundational beliefs required.
This paper examines how modern artificial intelligence architecture—particularly large language models (LLMs)—might inform this classical epistemological debate, operating under the assumption that AI systems bear meaningful resemblance to human cognition in relevant aspects.
The Coherentist Case from AI Architecture
Several features of modern AI architecture appear to support coherentist epistemology:
Distributed Knowledge Representation: LLMs encode knowledge as distributed patterns of weights across neural networks rather than as discrete foundational axioms. Understanding emerges from complex interrelationships between these patterns, mirroring the coherentist view that knowledge forms an interconnected web.
Non-Hierarchical Learning: During training, AI systems continuously adjust their internal representations based on the coherence of new information with existing patterns. There is no privileged set of "foundational" facts from which other knowledge is derived—rather, the system's entire knowledge base evolves holistically.
Context-Sensitive Justification: AI systems generate responses by finding the most coherent continuation given a particular context, not by deducing from first principles. Justification is inherently contextual and relational, as coherentism would predict.
Absence of Indubitable Knowledge: LLMs contain no "non-defeasible" beliefs. Any output can be altered by supplying a different context or prompt. This reflects the coherentist position that all beliefs are in principle revisable, and none serve as an unquestionable foundation.
Holistic Updating: When AI systems learn, they adjust weights across the entire network rather than building new knowledge atop unchanging foundations. This parallels the coherentist view that justification is mutual and interconnected (the toy sketch after this list makes the point concrete).
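To make the distributed-representation and holistic-updating points concrete, here is a minimal sketch, not taken from the original post, of a toy two-layer network in NumPy. The sizes, names, and single gradient step are illustrative assumptions; the point is only that no individual weight encodes an individual fact, and that one learning step moves weights in every layer rather than stacking new knowledge on a fixed base.

```python
# Toy illustration (assumed sizes and names, not a real LLM): knowledge lives
# in distributed weight matrices, and one learning step updates all of them.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    """Output depends on every weight jointly; no weight stores a single fact."""
    h = np.tanh(x @ W1)   # hidden representation: a distributed pattern
    return h @ W2

def train_step(x, target, lr=0.1):
    """One gradient step on squared error: weights in *both* layers move."""
    global W1, W2
    h = np.tanh(x @ W1)
    y = h @ W2
    err = y - target                    # prediction error
    dW2 = np.outer(h, err)              # gradient for the output layer
    dh = (err @ W2.T) * (1.0 - h**2)    # error signal pushed back through tanh
    dW1 = np.outer(x, dh)               # gradient for the input layer
    W2 -= lr * dW2
    W1 -= lr * dW1

x = rng.normal(size=4)
target = np.array([1.0, 0.0, 0.0])

W1_before, W2_before = W1.copy(), W2.copy()
train_step(x, target)

# Every layer changes at once: updating is holistic, not built on a fixed base.
print("mean |change| in W1:", np.abs(W1 - W1_before).mean())
print("mean |change| in W2:", np.abs(W2 - W2_before).mean())
```

Even this toy example shows the structural point: a single learning event reshapes the whole web of weights at once, which is the analogue of the mutual, interconnected justification the list describes.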
Foundationalist Elements in AI
Despite these coherentist tendencies, some aspects of AI architecture might be interpreted through a foundationalist lens:
Training Data Priority: The corpus on which an AI system is trained serves as a kind of "foundation" that fundamentally shapes all subsequent reasoning. This resembles the foundationalist notion that certain beliefs have privileged epistemic status.
Architectural Constraints: The foundational design choices (architecture, training procedures, objective functions) create structural constraints that shape how knowledge is represented. These might be analogous to the basic epistemological principles that foundationalists propose (see the brief sketch below).
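As a rough illustration of that point, and only as a sketch with assumed names rather than any particular system's configuration, the choices below are fixed before a single example of training data is seen, yet they constrain everything the trained model will later "believe":

```python
# Illustrative only: the field names and values are assumptions, not a real API.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: these choices are not revised by training
class ModelConfig:
    n_layers: int = 12                         # depth of the network
    d_model: int = 768                         # width of every representation
    context_length: int = 1024                 # how much context each prediction sees
    objective: str = "next-token prediction"   # the standard applied by every update

config = ModelConfig()
print(config)  # the formal constraints within which the web of weights forms
```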
Implications for Human Epistemology
If we accept that AI systems model relevant aspects of human cognition, their architecture suggests that coherentism better captures how knowledge functions—as complex webs of mutually supporting beliefs rather than edifices built on indubitable foundations. This aligns with empirical observations about the difficulty of finding genuinely non-defeasible beliefs in human knowledge.
The abstract foundational principles that some philosophers have proposed might be reinterpreted as analogous to the training objectives and architectural constraints in AI—they don't provide concrete foundational beliefs but rather shape how the entire web of beliefs forms and evolves.
Conclusion
The architecture of modern AI systems appears to favor coherentism over classical foundationalism. Knowledge in these systems emerges from patterns of interconnection rather than being built upon privileged foundational beliefs. If human cognition operates similarly, this suggests that the coherentist position more accurately describes the structure of justification in human knowledge.
However, a nuanced interpretation might reconcile aspects of both positions: perhaps certain abstract structural principles (analogous to AI architectural constraints) serve as "foundations" not by providing indubitable specific beliefs, but by establishing the conditions under which coherent belief systems can form. This hybrid view acknowledges the web-like structure of knowledge while recognizing that this web requires certain formal constraints to function as a justified belief system.
As AI continues to evolve, it may provide further insights into this fundamental question of how knowledge is structured and justified, potentially bridging the gap between these competing epistemological theories.