COHERENTISM VS. FOUNDATIONALISM: WHAT AI TEACHES US (VERSION 2: WRITTEN THROUGH CHATGPT)
- John-Michael Kuczynski
In epistemology, the debate between foundationalism and coherentism centers on the nature of justification: whether knowledge must rest on basic, self-evident truths (foundationalism) or whether it is justified by how well beliefs cohere within a system of beliefs, without requiring a foundational bedrock (coherentism). This philosophical discussion can be illuminated by considering how AI systems, such as large language models (LLMs), operate. Assuming that AI architectures bear some resemblance to human cognition, the structure and functioning of AI systems seem to offer more support for coherentism than foundationalism.
Foundationalism and AI
Foundationalism asserts that human knowledge is built upon a set of self-evident truths or basic beliefs that are independent of other beliefs. These foundational beliefs serve as the bedrock upon which all other knowledge is constructed. In an AI context, one might draw a parallel to the underlying mathematical models and algorithms that govern the system’s operation. For example, machine learning algorithms rely on a set of foundational principles—such as optimization techniques and loss functions—that dictate how the model learns from data.
However, although AI systems do rest on such principles, the parallel with epistemological foundationalism is weak. AI models are not built on "self-evident" truths; they are grounded in data and in algorithmic functions whose justification comes from predictive power rather than intrinsic truth. In this respect, AI does not reflect the foundationalist picture of knowledge resting on indubitable beliefs: the constructs that let the system process information are working tools, vindicated by their results, not foundational truths in the epistemological sense.
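The point about predictive power can be put the same way: given two candidate models, the one retained is simply the one that predicts held-out data better. (The data and the two hypotheses below are invented for illustration.)

```python
# Justification by predictive power: between two invented candidate
# models, keep whichever errs least on data neither has seen.

held_out = [(4.0, 7.8), (5.0, 10.1)]   # unseen test cases

def model_a(x):
    return 2.0 * x        # hypothesis A (fits the pattern y ~ 2x)

def model_b(x):
    return x + 1.0        # hypothesis B (a rival guess)

def held_out_error(model):
    """Mean squared error on unseen data: the only 'justification' consulted."""
    return sum((model(x) - y) ** 2 for x, y in held_out) / len(held_out)

# Neither hypothesis is treated as intrinsically true; A is preferred
# simply because it predicts better.
best = min((model_a, model_b), key=held_out_error)
print(best.__name__, held_out_error(best))
```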
Coherentism and AI
In contrast, coherentism holds that beliefs are justified not by foundational truths but by how well they fit within an interconnected system of beliefs. This is closer to the way AI systems, particularly LLMs, operate. Rather than invoking fundamental truths, an LLM generates outputs by drawing on a vast network of learned associations, and the standing of any given output derives from its alignment with the rest of that learned information, much as, on the coherentist view, a belief is justified by its consistency with the other beliefs in the system.
For instance, when a large language model generates a response, it doesn't invoke foundational truths. Instead, it uses relationships among data points learned during training to produce outputs that fit the broader context. This mirrors the coherentist picture of justification: a statement is warranted by its coherence with the rest of the system, not by being grounded in self-evident, foundational truths.
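A toy sketch of what "fitting the broader context" amounts to mechanically (the candidate words and their scores below are invented; a real LLM computes such scores over an enormous vocabulary): the model turns context-conditioned scores into probabilities and favors whichever continuation coheres best with what came before.

```python
import math

# Toy sketch of next-token selection (illustrative numbers, not a real LLM).
# No foundational truth is consulted; only learned associations with the
# surrounding context.

context = "the cat sat on the"
candidates = {"mat": 3.2, "moon": 0.5, "theorem": -1.0}  # hypothetical logits

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(candidates)
next_word = max(probs, key=probs.get)   # "mat": coheres best with context
print(probs, next_word)
```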
AI as a Model for Epistemology
Given that AI systems such as LLMs operate on patterns and internal coherence rather than foundational truths, they appear to support a coherentist view of justification. An AI model has no access to "indubitable" truths in the human sense; it operates within a framework of probabilistic relationships, and its internal coherence, grounded in a vast network of interconnected data, determines the justification for its responses.
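One way to make "internal coherence determines justification" vivid is to score a word sequence by how probable its transitions are under a learned model, so that sequences whose parts hang together score higher. Here is a toy bigram version, using an invented miniature corpus (a real LLM does something far richer, but the same in spirit):

```python
import math
from collections import Counter

# Toy coherence score: average log-probability of a word sequence under
# a bigram model built from a tiny invented corpus.

corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])
vocab_size = len(set(corpus))

def coherence(sentence):
    """Higher = the sequence fits the learned pattern of transitions better."""
    words = sentence.split()
    score = 0.0
    for a, b in zip(words, words[1:]):
        # Add-one smoothing: unseen transitions are unlikely, not impossible.
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        score += math.log(p)
    return score / max(len(words) - 1, 1)

print(coherence("the cat sat on the mat"))   # relatively high: it coheres
print(coherence("mat the on sat cat the"))   # relatively low: same words, no fit
```

Note that the two test sentences contain exactly the same words; only their fit with the learned pattern differs, which is the coherentist point.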
Furthermore, AI systems evolve as they are exposed to new data, continuously adjusting their internal structures toward greater coherence. This ongoing adjustment reinforces the idea that knowledge, whether in humans or machines, can be justified not by foundational certainties but by how well the system of beliefs hangs together: as an LLM refines its responses, its warrant grows out of the patterns and relationships it accumulates over time.
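That ongoing adjustment can also be sketched: as new observations arrive, the model's internal statistics are revised, and with them its judgments of what fits. (The stream of words below is, again, purely illustrative.)

```python
from collections import Counter

# Sketch of continual adjustment: the model's internal statistics are
# revised with each new observation, so "what fits" is always provisional.

counts = Counter()
total = 0

def observe(word):
    """Incorporate one new piece of data into the model's statistics."""
    global total
    counts[word] += 1
    total += 1

def fit(word):
    """Current estimate of how well a word fits what has been seen so far."""
    return counts[word] / total if total else 0.0

for w in ["rain", "rain", "sun", "rain"]:   # hypothetical stream of data
    observe(w)
    print(w, "->", fit("rain"))   # the estimate shifts as evidence accrues
```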
Conclusion
AI architectures, particularly those used in large language models, align more closely with the coherentist view of justification than with the foundationalist one. The justification for an AI system's outputs comes not from a set of self-evident, foundational truths but from how well those outputs fit into an internally coherent system of learned information. AI therefore provides a compelling argument for coherentism, suggesting that justification is less a matter of discovering indubitable foundations than of achieving internal coherence within a system of beliefs.