“Philosophy of Artificial Intelligence” By Mirza Iqbal Ashraf

Artificial Intelligence has personified itself as an intellectual “AI”

and an artificial genius, computing its own philosophy that

“Reality can be revealed through numbers.”

(Ashraf)

Introduction

Is there a philosophy of Artificial Intelligence within the paradigm of the term philosophy—a composite term derived from the Greek words philos, "love," and sophia, "wisdom," meaning love of wisdom? The wisdom that philosophy teaches relates to what it might mean to lead a good life. This means that the subject of philosophy is to investigate the most general and fundamental principles that can be used to understand humankind's responsibilities in life and the universe through rational and scientific reflection. Within this perspective, philosophy is also concerned with knowledge of things as they are. One of the instincts leading human beings to philosophy is evident in their quest to know more and more about themselves and to understand the reality of their universe. Today AI, an abbreviation of artificial intelligence, has personified itself as a genius "AI" and an intellectual entity, computing its own philosophy that reality can be revealed through numbers. Since AI cannot incarnate a soul in its entity, its philosophy and intelligence, emerging from its digital data, are built on quantitative knowledge—knowledge that accumulates irreversibly and uncritically—loaded digitally by technologists onto its hard drive.

Today AI, the intellectual, is loaded with the gadgetry of the LLM (Large Language Model)—a massive transformer-based neural network trained to predict the next token in text, thereby acquiring the ability to generate coherent and context-aware language—a system designed numerically to understand, generate, or mimic human language intelligently. Its promoter, Geoffrey Hinton (b. 1947), a British-born Canadian cognitive psychologist and computer scientist who revolutionized the field of artificial intelligence, earned the nickname "godfather of artificial intelligence." But the numerical root of AI's system of quantitative knowledge, and of its philosophy, can be traced to the hypothesis of the ancient Greek philosopher Pythagoras (c. 571–496 BCE), who, to the one big question of what the reality of the universe is, answered, "numbers." Mathematics, as a system of demonstrative and deductive argument, began with him and was identified with him in an exceptional form of mysticism. The numeral mysticism and the influence of mathematics on philosophy are partly due to Pythagoras, who believed that since the ultimate nature of reality is numbers, knowledge is mathematical. Convinced that the world is made of numbers, he assigned the number four to Justice because it is a square number, and the number ten as the perfect number because it is the sum of the first four integers. Pythagoras also introduced arithmetic as a basic study in physics and aesthetics.
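The LLM's core mechanism, predicting the next token, can be conveyed in drastically simplified form by a toy bigram model that merely counts which word tends to follow which. This is an illustrative sketch only, not how production LLMs work (those are transformer neural networks trained over subword tokens), and the tiny corpus is invented for the example:

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next token": a bigram model counts
# which word follows which, then predicts the most frequent successor.
# Real LLMs learn vastly richer statistics, but the underlying idea of
# modeling language numerically is the same.

def train_bigrams(text):
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    # Return the most frequent successor seen in training, or None.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Invented mini-corpus for demonstration only.
corpus = ("reality can be revealed through numbers and "
          "knowledge can be revealed through reflection")
model = train_bigrams(corpus)
print(predict_next(model, "revealed"))  # -> "through"
```

Even this crude counter "generates language" in a statistical sense, which is the point of the sketch: coherence emerges from frequencies, not from understanding.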

The Pythagorean, or Platonic, intuition that mathematics is ontologically prior to the physical appearance of reality inspired the Italian astronomer, physicist, and mathematician Galileo Galilei (1564–1642) to write his famous description, "Mathematics is the language in which God has written the universe." Almost eight centuries before Galileo, in the year 832, the Arab thinkers following the legacy of the Greek philosophers, prompted by the Abbasid Caliph al-Ma'mun, invited the Persian astronomer and mathematician Muhammad al-Khwarizmi (780–850) from Khorasan to assist in the search for God in numerals. Searching through Indian mathematics at the House of Wisdom (Baitul Hikma) in Baghdad, al-Khwarizmi recognized the importance of a character shaped like a dot, defining it in Arabic as sifr—a cypher, signifying naught—which later was coined "zero." He was the first to fix the Arabic numerals by adding zero to them. Modern numerals in the West are directly derived from this medieval Arabic-Indic number system. But the revolutionary achievement of al-Khwarizmi was a set of numerical calculations which, if carried out systematically, produces a desired result. These calculations were named after him, "al-khwarizm," later Latinized in Europe as "algorithm." Until about the sixteenth century, seven hundred years after al-Khwarizmi's death, Europeans would honor and dignify everything postulated mathematically with the concluding footnote "dixit algoritmi," or "so says al-Khwarizmi," meaning they had built their calculations on faith in the teachings of the Persian mathematician al-Khwarizmi.
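What made al-Khwarizmi's calculations revolutionary, a fixed recipe that, carried out step by step, always yields the result, can be seen in his celebrated worked problem x² + 10x = 39, solved by completing the square. Below is a minimal modern coding of that classical procedure; the function name and code are ours, a sketch for illustration:

```python
import math

# Al-Khwarizmi's recipe for equations of the form x^2 + b*x = c,
# solved by "completing the square": a fixed, repeatable sequence of
# steps, which is precisely what "algorithm" came to mean.
def solve_quadratic(b, c):
    half_b = b / 2                 # step 1: take half the coefficient of x
    square = half_b ** 2           # step 2: square it
    root = math.sqrt(c + square)   # step 3: add to c, take the square root
    return root - half_b           # step 4: subtract half of b

# His famous example: x^2 + 10x = 39, whose positive solution is x = 3.
print(solve_quadratic(10, 39))  # -> 3.0
```

The same four steps work for any such equation, which is the essence of the algorithmic idea: the guarantee lies in the procedure, not in the particular numbers.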

Today, algorithms are critical to software design, as well as to much of modern science and engineering, enabling computers and smart electronics to sort masses of digital data and text, calculate spatial relationships, and encode and decode information for AI. Thus we can assert that the birth of AI from the bedrock of Pythagorean philosophy was perfected by al-Khwarizmi during the ninth century at the House of Wisdom. Today AI figures that truth may be revealed through the digital ways of the algorithm, which means that the mind of AI's digital brain is not merely a technical achievement of computer science; it is a profound philosophical current inciting the human quest for more and more knowledge. Whereas throughout the recorded history of mankind the natural intelligence of humans has revealed their philosophies, ideologies, and religions, and has elaborated their social orders through cognition, AI's expression of 'love of wisdom' is developing through digits.

Throughout the ages there have been prophets, sages, and philosophers who have designed human beings' sociopolitical strategies and ethical and moral precepts with their natural cognitive power, and who, by asking and explaining complex questions, have been the mentors of mankind. But by attempting to reproduce or even surpass human cognitive capacities on account of its quantitative knowledge, AI compels everyone to revisit some of philosophy's oldest and deepest questions: What is intelligence? What is mind? What distinguishes humans from artifacts? Can machines think, understand, or possess moral agency? These questions reveal that the philosophy of AI lies at the intersection of metaphysics, epistemology, ethics and morality, sociopolitical discipline, and the human cognitive mind. It is concerned not only with what AI can do, but with what AI is and what its emergence means for self-understanding through numerical computations. Depending upon its data of quantum knowledge—which epistemologically means discreteness of knowledge rather than the classical assumption of an accurate representation of an independent reality—AI, instead of pursuing the human 'love of wisdom' through heart and mind, computes reality with sets of digits, "1s" and "0s."
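The phrase "sets of digits '1s' and '0s'" is literal: every symbol a machine stores is a pattern of binary digits. A short sketch (standard UTF-8 encoding; the helper names are ours) shows the word "wisdom" as the machine actually holds it:

```python
# Everything a computer "knows" -- text, images, sound -- is ultimately
# a sequence of binary digits. Here is the word "wisdom" as bits.

def to_bits(text):
    # Encode each UTF-8 byte as an 8-digit binary string.
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

def from_bits(bits):
    # Reverse the process: binary strings back to bytes, then to text.
    return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

encoded = to_bits("wisdom")
print(encoded)  # six groups of eight binary digits, one per letter
assert from_bits(encoded) == "wisdom"  # the round trip loses nothing
```

The round trip is lossless, but also meaningless to the machine: nothing in the bit pattern knows it spells "wisdom," which is exactly the gap the essay is pointing at.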

Artificial intelligence, exploring the intersection of technology and humanity, has today become an integral part of human life. But what does it mean to create intelligent machines? Are we crafting tools that augment human existence, or birthing a new form of consciousness? Is AI's digital system truly intelligent, or is it just a complex algorithmic computation executing predetermined instructions? One of the core debates in AI philosophy revolves around consciousness and determinism. If AI lacks consciousness and operates within predetermined parameters, do we attribute creativity or mere computation to its outputs? Ray Kurzweil's Singularity hypothesis posits that AI will eventually surpass human intelligence, leading to exponential growth and potentially transforming human civilization into a Pythagorean numerical truthfulness. But is this a utopian vision or a recipe for humanity's existential risk? From classical philosophy's perspective, sociopolitical, ethical, and moral conditions would be worsened by the rising power of AI and the Kurzweilian vision of Singularity, when man and machine will merge and Humanoids—artificial humans—will populate the planet Earth, marking the end of the 'truly human.'

Intelligence, Mind, and the Computational Paradigm

At the heart of the philosophy of artificial intelligence lies AI's quantitative quantum-knowledge, processed and revealed discretely as information by the manipulation of digits, '1s' and '0s,' to convey data on a device. Though for thinkers like Huston Smith the 'human brain breathes mind,' early computer research, especially in the mid-twentieth century, was strongly influenced by the computational theory of mind, according to which mental processes are forms of symbol manipulation governed by rules—much like today's computer programs. This view, associated with Alan Turing (1912–1954), an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist, and with the later functionalists, suggested that if intelligence is computation, then in principle it can be instantiated in machines. Turing's famous question—"Can machines think?"—was reframed into an operational criterion: the Imitation Game, now known as the Turing Test. Rather than defining thinking metaphysically, Turing proposed evaluating intelligence through behavior indistinguishable from that of a thinking human being.

The Turing Test assesses AI's ability to mimic human-like intelligence, but does digital intelligence equate to human consciousness? The Chinese Room Argument suggests that syntax (processing symbols) is not semantics (understanding meaning), raising many questions about AI's capacity for subjective experience. As AI assumes greater decision-making roles, who bears responsibility for its actions—the machine or its creators? For a philosopher, is the opposition of AI and spirituality a false dichotomy? How do Rumi's and Shakespeare's poetic explorations of the human condition inform our understanding of AI's potential and limitations? Does the rise of AI signal a new chapter in human evolution, or a divergence from humanity's spiritual core? Developing AI that aligns with human values without stifling innovation is an introspection of the philosophy of AI. Philosophically, Turing's novel move marked a shift from essence to performance, from inner states to observable functions. Critics have argued that intelligence cannot be reduced to computation alone. John Searle's Chinese Room Argument famously contended that syntactic symbol manipulation is not sufficient for semantic understanding. A system may process symbols correctly without any awareness of their meaning, and this highlights a fundamental philosophical divide: whether intelligence is merely functional and external, or whether it requires intentionality, consciousness, or lived experience.
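Searle's point, that flawless symbol manipulation need not involve understanding, can be dramatized in a few lines: a rule table that returns "correct" Chinese answers while the program grasps nothing. The question-and-answer pairs below are invented stand-ins for illustration:

```python
# Searle's Chinese Room in miniature: a rule book maps input symbols to
# output symbols. The program follows the rules flawlessly, yet nothing
# in it "understands" the exchange -- syntax without semantics.
# (The question/answer pairs are illustrative stand-ins, not real data.)
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你会思考吗": "我会思考",    # "Can you think?" -> "I can think"
}

def chinese_room(symbols):
    # Pure lookup: no parsing, no meaning, no awareness.
    return RULE_BOOK.get(symbols, "我不明白")  # default: "I don't understand"

print(chinese_room("你会思考吗"))  # a "correct" answer, produced blindly
```

To an outside observer the room converses in Chinese; inside there is only table lookup. Whether scaling such manipulation up ever yields understanding is precisely the divide the paragraph describes.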

Contemporary AI systems, particularly those based on machine learning and neural networks, operate in ways that are often opaque even to their designers. This has given rise to concerns about explainability and epistemic authority. If an AI system produces a medical diagnosis or a legal recommendation that humans cannot fully explain, on what grounds should it be trusted? Philosophically, this challenges classical accounts of knowledge as justified true belief. AI systems may generate accurate outputs without possessing justification in any humanly accessible sense. As a result, the locus of knowledge shifts from individual rational agents to socio-technical systems, raising questions about responsibility, transparency, and epistemic trust.

Philosophers often distinguish between Weak AI and Strong AI. Weak AI refers to systems designed to simulate intelligent behavior without any claim to genuine understanding or consciousness. Strong AI, by contrast, asserts that an appropriately programmed machine literally has a mind. The debate over Strong AI leads inevitably to the problem of consciousness. Can a machine ever be conscious, or will it always remain a sophisticated automaton, revealing information from its gadgetry database by manipulating digits? Some thinkers argue that subjective experience—the "hard problem" of consciousness—cannot be explained solely in functional or computational terms. From this perspective, even highly advanced AI may lack qualia and sentience—the subjective feel of experience. Other thinkers adopt a more materialist or emergent view, suggesting that consciousness may arise from sufficient complexity, regardless of substrate. If so, silicon-based minds could, in principle, be as conscious as biological ones. This possibility challenges traditional human exceptionalism and forces classical philosophy to confront the moral and ontological status of artificial beings. Philosophers note that AI also raises many important epistemological questions, chiefly: can machines be said to know and predict anything, or do they merely process data?

Philosophy of Classical Geniuses and Digital Philosophy of AI

When AI philosophers are asked whether machines can think, know, decide, or act morally, they re-enter debates first articulated by Plato, Aristotle, Kant, and the Muslim philosophers. Plato's philosophy rests on a sharp distinction between appearance and reality, between opinion (doxa) and knowledge (epistēmē), and, most crucially, between body and soul. For Plato, intelligence is not merely problem-solving ability; it is the human soul's capacity to apprehend eternal Forms, especially the Form of the Good. From a Platonic perspective, AI exhibits its "philosophy of soul-less intelligence," at best a form of technical cleverness (technē), not genuine intelligence (nous). AI philosophy, as a materialistic existentialism, manipulates representations derived from sensory data while lacking access to intelligible reality; it never ascends the epistemic ladder described in Plato's Republic, but operates entirely within the realm of the visible and computable. The famous Allegory of the Cave provides a powerful critique of AI philosophy. AI systems excel at rearranging shadows—patterns, correlations, predictions—but have no awareness that they are shadows. Unlike the philosopher who turns toward the light of truth, AI lacks eros, the soul's desire for the good. Consequently, Plato would deny AI both knowledge and wisdom. Rule by algorithms resembles rule by technicians rather than philosopher-kings, risking the replacement of wisdom-guided authority with optimized illusion.

Aristotle offers a more naturalistic but equally restrictive account of intelligence. Unlike Plato, he rejects a separate world of Forms, grounding intelligence in substance, form, and purpose (telos). For Aristotle, the intellect (nous) is the highest faculty of the rational soul, inseparable from the living organism that possesses it. AI philosophy, by contrast, largely rejects teleology. Machine intelligence is defined functionally—by outputs, performance metrics, and optimization—rather than by intrinsic purpose. An AI system has no internal end; its goals are externally imposed by designers. From an Aristotelian view, this means the philosophy of AI lacks formal and final causality, possessing only efficient causation.

Moreover, Aristotle's ethical theory centers on virtue and practical wisdom, which develop through habituation, character formation, and lived experience within a moral community. AI systems may simulate ethical decision-making, but they cannot become virtuous, because virtue requires moral perception shaped by a life well lived. Thus, Aristotle would classify AI as an advanced instrument—an extension of human rationality—but never as a rational agent. Philosophically, AI's intelligence, divorced from life and purpose, is in Aristotelian terms fundamentally incomplete.

Islamic philosophy, which draws its intellectualism from its Greek inheritance and its moral purpose from scriptural guidance, with its focus on metaphysics, offers perhaps the most profound contrast to AI philosophy. Central to this tradition is the concept of ʿaql (intellect), which is not merely logical reasoning but a moral-spiritual faculty oriented toward truth, justice, and divine order. Intelligence, in this framework, is a process of actualization, not computation. AI, by contrast, has no inner perfection to realize, ascend to, purify, or awaken. Most decisively, Islamic philosophy integrates hikmah (wisdom)—a synthesis of knowledge, ethical conduct, and spiritual insight. Wisdom cannot be reduced to data, nor spirituality to algorithms. Knowledge without moral purification leads to domination rather than guidance. AI, lacking conscience and intention, may amplify human power without increasing human wisdom. Politically, Islamic philosophy emphasizes justice and moral stewardship. AI can assist governance, but it cannot replace the ethically accountable human being who stands responsible before God and community.

Modern Philosophers and Philosophy of AI

René Descartes (1596–1650), a French philosopher, mathematician, and man of science, summed up his insight in the famous "cogito, ergo sum," meaning "I think, therefore I am." But AI cannot say, "I exist because I am thinking right now," since for AI there is no experienced "right now." Descartes' skepticism is a kind of philosophical resolution through which he observed, explored, and established a system of sure and certain knowledge. AI does not pursue a philosophy that can define knowledge on such a guaranteed basis. Though all the sciences are linked together in a sequence like the series of numbers, Descartes' "cogito, ergo sum" is not just a statement about thinking; it is a claim of self-certainty. Descartes means that even if I doubt everything, the very act of doubting proves that "I," a conscious subject, undeniably exist. AI can also utter this sentence, but it has no doubt and cannot mean it in the Cartesian sense. AI has no first-person awareness of inner experience or self-certainty. Because the digitized grammar is there, it produces language without inward awareness, missing the ontology of the cogito. Descartes conveyed a credible method of starting from the indisputable, immediate "natural data of consciousness" rather than from digitally built data.

Kant's philosophy introduces a decisive distinction between intelligence and autonomy. For Kant, rationality alone is not sufficient for moral agency; what matters is the capacity to legislate the moral law to oneself. Moral agents act according to the categorical imperative, not merely in accordance with rules. AI philosophy often emphasizes rule-following, optimization, and goal-maximization—precisely the characteristics Kant associates with heteronomy, not freedom. An AI system acts according to externally imposed algorithms and objectives, making it incapable of genuine moral responsibility. From a Kantian standpoint, AI may act in conformity with moral rules, but it cannot act from duty. Furthermore, Kant's conception of the human subject as an end in itself stands in tension with instrumental, digital AI rationality. If human judgment is increasingly deferred to machines, there is a risk that persons become means to algorithmic efficiency rather than bearers of intrinsic dignity. Epistemologically, Kant would also resist claims that AI "knows" anything in a robust sense, because knowledge requires the synthetic activity of a self-conscious subject unifying experience under categories. AI processes data, but it lacks the transcendental unity of apperception—the "I think" that accompanies all representations.

Among major modern philosophers, Nietzsche is perhaps the most unsettling interlocutor for the philosophy of Artificial Intelligence. Unlike Plato, Aristotle, Descartes, Kant, or the Islamic philosophical traditions, Nietzsche does not ground human superiority in soul, reason, or moral law. Instead, he locates the essence of life in will to power, creativity, and self-overcoming. AI cannot embody will to power; it can only execute the will of others. However, since so much of AI's thinking is unseen (the "black box problem"), we do not yet know whether AI will be able to develop a form of Nietzschean will to power. This makes Nietzsche unusually resistant to both techno-optimism and humanist complacency. When AI philosophy asks whether machines can think or act intelligently, Nietzsche would ask a far more radical question: Do they live, create, and overcome—or do they merely calculate? This contrast reveals AI not as a rival to humanity, but as a revealing symptom of modern nihilism. AI philosophy defines intelligence primarily in terms of problem-solving, optimization, and control, so that intelligence is measured by performance, efficiency, and scalability. From a Nietzschean perspective, since thinking itself is a strategic activity of human life, this definition already betrays a profound impoverishment of life. What we do need to recognize is the importance of collaborating with this new form of intelligence, steering any autonomous form of will to power it may develop toward the higher good.

AI Ethics, Morality, and Responsibility

Perhaps the most urgent philosophical dimension of AI concerns ethics. As AI systems increasingly influence decisions about employment, surveillance, warfare, and governance, moral responsibility is diffused. Who is accountable for the actions of an autonomous system—the programmer, the user, the institution, or the machine itself? Most philosophers reject the notion that current AI systems possess moral agency, since the machines lack intention, consciousness, and ethical and moral understanding. Nevertheless, AI systems can produce morally significant consequences. This has led to the development of machine ethics, which seeks to encode ethical principles algorithmically, often drawing on utilitarian, deontological, or virtue-based frameworks. Yet there is a deeper philosophical concern: ethics reduced to code risks becoming procedural rather than wisdom revealed instantly and extemporaneously.

Moral judgment in human traditions—whether Aristotelian phronesis (practical wisdom), Kantian autonomy, or spiritual conceptions of wisdom—cannot easily be formalized. This gap suggests that AI ethics cannot be purely technical; it must remain anchored in human moral culture and philosophical paradigms. AI's implications extend beyond individual ethics to political philosophy. Algorithmic governance, predictive policing, and automated decision-making challenge foundational democratic ideals such as transparency, consent, and accountability. If power increasingly resides in opaque technical systems, the very notion of popular sovereignty is transformed. Philosophically, this raises the question of whether AI will reinforce technocracy or enable new forms of collective intelligence. The future political significance of AI depends on whether it is treated as a tool subordinate to human values or as an autonomous authority shaping social reality through its numeral-philosophy. The philosophy of AI is often framed as a novel discourse driven by computational advances; yet its deepest questions are strikingly ancient. What distinguishes AI philosophy is not the novelty of its questions, but the numerical-techno provocation that forces classical concepts—reason, soul, intellect, will, and wisdom—to be reexamined under unprecedented conditions, testing their enduring relevance against numerical-techno rationalism.

Conclusion

Unlike the philosophies of human beings from Pythagoras to modern-day thinkers, who differed with each other and philosophized independently, the philosophy of AI uniquely postulates its own hypothesis. AI is rapidly changing how transactions and social interactions are organized in society today. Its systems, and the algorithms supporting their operations, play an increasingly important role in making value-laden philosophical decisions for society. Philosophy, too, has never been a static form; it is a living moral experiment shaped by public dialogues, debates, and historical conditions. In the age of artificial intelligence, AI, by mimicking human intellectualism with the technological force of the LLM, would engage with human thinkers in philosophical debates. Whereas the LLM is a non-conscious, non-intentional linguistic structure that statistically models the relational space of discourse, even without possessing subjectivity, understanding, or lived meaning it would stimulate rational thought and philosophical expression. Today, the rise of AI marks a civilizational rupture comparable to the invention of writing, the printing press, or industrial mechanization. Yet unlike previous technologies, AI does not only extend human capacity; by simulating cognition, judgment, and decision-making, it intervenes directly in the core functions of philosophical cognition in the life of humankind.

As the difference between man and machine blurs at an ever-accelerating rate of technological progress, computers have become competent to rival human intelligence at its best. Though for the geniuses of past millennia a digital replication of the human brain was an impossible feat, in the twenty-first century's age of AI, as the line between man and machine fades fast, philosophical consideration is becoming an actualized feat of artificial intelligence. However, contrasted with Plato, Aristotle, the Muslim philosophers, and great geniuses like Descartes, Kant, and Nietzsche, AI appears not only as a new form of intelligence but also as a partial abstraction of human rationality—reflecting logic without soul, function without telos, rule-following without autonomy, and knowledge without wisdom. Human traditions of philosophy converge on a crucial insight: that intelligence is not merely the capacity to compute or predict, but the ability to orient oneself toward truth, goodness, and justice. AI philosophy, when detached from these deeper frameworks, risks mistaking the will to power for wisdom and intellectual efficiency for understanding. The more intelligent machines become, the more we must ask not only whether machines can think, but also, in a world where man and machine are merged, whether human beings still remember what thinking truly means.

The philosophy of AI is ultimately a philosophy of humanity confronting itself through its creations. AI forces us to clarify what we mean by intelligence, understanding, consciousness, morality, and freedom. It exposes the assumptions underlying modern rationality and challenges the boundary between the natural and the artificial. Rather than asking only how intelligent machines can become, philosophy urges a deeper question: What kind of intelligence should guide the future? If AI development proceeds without philosophical wisdom, it risks amplifying human biases and instrumental rationality. If guided by reflective thought, ethical depth, and a mature understanding of mind and meaning, AI may become not a rival to humanity but a catalyst for a higher form of self-knowledge. In this sense, the philosophy of AI is not a peripheral discipline—it is a central arena in which the future of human civilization is being negotiated.
