What does it mean for a machine to think? This question has captivated humanity for centuries, evolving from philosophical musings to scientific inquiry and technological breakthroughs. The dream of creating intelligent systems is not new; its roots can be traced to myths like the Golem[1] or the mechanical automata of ancient Greece. But it wasn’t until the last few centuries that thinkers and innovators began to frame this idea as a tangible possibility. In this evolving[2] post I aim to outline how philosophers, mathematicians, and technologists—from René Descartes to Susan Schneider—have shaped our understanding of artificial intelligence (AI). By tracing their ideas in chronological order, we’ll see how each thinker expanded or redirected the conversation, moving us closer to a world where machines and intelligence intersect.
René Descartes (1596–1650): Thinking as the Mark of Humanity
René Descartes, often considered the father of modern philosophy, laid the foundation for debates about intelligence with his famous statement, Cogito, ergo sum ("I think, therefore I am"). For Descartes, thinking was the defining characteristic of existence and intelligence, setting humans apart from machines or animals. He proposed a dualistic view of the world, separating the mind from the physical body. This raised a pivotal question: could a purely physical entity, like a machine, ever replicate the mental processes of a human? While Descartes doubted this possibility, his work planted the seeds for later thinkers to challenge his assumptions. His emphasis on thought as the essence of intelligence framed the problem that many, including Ada Lovelace and Alan Turing, would later address.
Ada Lovelace (1815–1852): The First Visionary of Machine Potential
Nearly two centuries later, Ada Lovelace took Descartes’ skepticism and reframed it. Known as the world’s first computer programmer, Lovelace worked on Charles Babbage’s Analytical Engine[3] and wrote what is considered the first algorithm intended for a machine. While Descartes viewed machines as incapable of thought, Lovelace saw them as tools to amplify human creativity. She speculated on their potential to manipulate symbols, compose music, and even assist with complex reasoning. However, Lovelace famously argued that machines could only perform tasks explicitly programmed into them and could not "originate" ideas. Her vision, while limited by her time, opened the door to viewing computation as a creative medium and laid the groundwork for Turing’s more expansive theories about machine learning and intelligence.
Alan Turing (1912–1954): The Father of Machine Intelligence
Alan Turing transformed the philosophical musings of Descartes and the practical insights of Lovelace into a comprehensive framework for understanding machine intelligence. In his seminal 1936 paper[4], Turing introduced the concept of the Universal Turing Machine, a theoretical device capable of performing any computation given the right instructions. This work not only revolutionized mathematics and computer science but also provided a blueprint for understanding intelligence itself. In 1950, Turing posed the provocative question, Can machines think? He introduced the Turing Test as a way to evaluate machine intelligence, arguing that if a machine could convincingly imitate human responses, it should be considered intelligent. Turing also challenged Lovelace’s assertion that machines could not originate ideas, suggesting that learning and evolution could enable machines to exceed their initial programming. His work laid the foundation for AI as both a philosophical and practical endeavor.
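Turing's idea is concrete enough to sketch in a few lines of code. Below is a minimal, illustrative simulator (the rule format and the bit-flipping example machine are my own invention, not taken from Turing's paper); the point is that the transition table, not the hardware, is the "program", and one simulator can run any such table.

```python
def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 (step right) or -1 (step left).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, "_")  # "_" stands for a blank cell
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A tiny example machine: invert every bit, halt at the first blank.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", +1),
}

print(run_turing_machine("1011", invert))  # -> 0100
```

Feeding the same simulator a different rule table yields a different machine; that one fixed mechanism can run any such table is the essence of universality.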
Hilary Putnam (1926–2016): Functionalism and the Philosophy of Mind
Hilary Putnam was a leading philosopher who made significant contributions to the philosophy of mind and its implications for artificial intelligence. His theory of functionalism became a cornerstone in understanding how mental states could be replicated in machines. Putnam argued that mental states, such as beliefs, desires, and thoughts, should not be defined by the physical medium in which they exist but by their functional roles—how they interact with other states and inputs to produce behavior.
Functionalism provided a framework for AI researchers to design systems that emulate human cognitive processes without requiring the exact biological structures of the human brain. Putnam’s ideas built on Alan Turing’s notion of computational universality[5] and were further explored in debates about the limits of machine intelligence. His emphasis on the causal relationships between inputs, internal processes, and outputs aligned closely with modern machine learning paradigms, which prioritize the functionality of algorithms over their physical substrates. Putnam’s ideas bridged philosophy and computation, enriching both fields and influencing later thinkers like Dennett and Brooks.
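The multiple-realizability point at the heart of functionalism can be made with a toy sketch (my own, loosely in the spirit of Putnam's well-known soda-machine example, not his formalism): two "machines" with entirely different internals realize the same functional organization, and on the functionalist view that shared input-output role is what matters.

```python
# Two physically different "realizations" of the same functional organization:
# a toy soda machine that dispenses after receiving 10 cents in nickels.

def machine_table(coins):
    """Realization 1: an explicit state-transition table."""
    table = {("S0", 5): ("S5", None), ("S5", 5): ("S0", "soda")}
    state, outputs = "S0", []
    for coin in coins:
        state, out = table[(state, coin)]
        if out:
            outputs.append(out)
    return outputs

def machine_counter(coins):
    """Realization 2: a running total. Different innards, same function."""
    total, outputs = 0, []
    for coin in coins:
        total += coin
        if total >= 10:
            outputs.append("soda")
            total -= 10
    return outputs

coins = [5, 5, 5, 5]
assert machine_table(coins) == machine_counter(coins) == ["soda", "soda"]
```

Nothing about transition tables or counters, silicon or neurons, figures in the functional description; only the pattern of inputs, internal state changes, and outputs does.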
John McCarthy (1927–2011): The Birth of Artificial Intelligence
Building on Turing’s groundwork, McCarthy took the next logical step: defining AI as a distinct academic discipline. In 1956, McCarthy organized the Dartmouth Conference, where the term "artificial intelligence" was coined. He envisioned AI as the study of machines capable of tasks that required human intelligence, such as reasoning, learning, and problem-solving. McCarthy’s work formalized the field, moving it from theoretical inquiry to practical experimentation. He also developed early programming languages like Lisp, which became instrumental in AI research. By framing AI as a technical challenge, McCarthy expanded the conversation beyond whether machines could think to how we could make them do so. His contributions provided the scaffolding for Marvin Minsky and others to explore broader applications of intelligence.
Marvin Minsky (1927–2016): AI as the Ultimate Interdisciplinary Challenge
Marvin Minsky, a contemporary of McCarthy, took a more interdisciplinary approach to AI. While McCarthy focused on logic and symbolic systems, Minsky emphasized the need to integrate multiple methods, from neural networks to cognitive modeling. Co-founding MIT’s AI lab, Minsky argued that understanding human intelligence required simulating it in machines. He saw AI as a tool to uncover the secrets of perception, creativity, and reasoning. Minsky’s work pushed the boundaries of what AI could be, highlighting the need for diverse approaches to intelligence. His advocacy for neural networks, despite initial setbacks, laid the groundwork for later breakthroughs in deep learning. Minsky’s expansive vision inspired Herbert Simon to bring psychological insights into AI, further broadening the field.
Herbert Simon (1916–2001): The Science of Decision-Making
Herbert Simon bridged the gap between psychology and AI, focusing on how humans solve problems and make decisions. While Minsky explored the mechanics of intelligence, Simon sought to emulate human decision-making processes in machines. His theories of bounded rationality—how people make decisions with limited information and resources—inspired early expert systems and heuristic algorithms. Simon’s work emphasized practical problem-solving, showing how AI could operate within constraints to approximate human-like reasoning. His contributions extended AI’s reach into economics, management, and social sciences, setting the stage for Rodney Brooks’ focus on real-world applications.
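Simon coined the term "satisficing" for this style of bounded-rational decision-making: accept the first option that clears an aspiration level instead of searching exhaustively for the optimum. The example below is a hypothetical sketch of mine, not code from Simon's work.

```python
def satisfice(options, score, threshold):
    """Return the first option whose score meets the aspiration level,
    examining as few options as possible (Simon's 'satisficing')."""
    examined = 0
    for option in options:
        examined += 1
        if score(option) >= threshold:
            return option, examined
    # Fallback: if nothing satisfices, take the best of what was seen.
    return max(options, key=score), examined

# Hypothetical job-offer search: scores are utilities, 7 is "good enough".
offers = [("A", 4), ("B", 8), ("C", 9), ("D", 6)]
choice, looked_at = satisfice(offers, score=lambda o: o[1], threshold=7)
print(choice, looked_at)  # -> ('B', 8) 2
```

Note that the satisficer stops after two options and settles for B, even though C scores higher; under limited time and information, "good enough, found quickly" beats "best, found late".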
John Searle (1932–): The Chinese Room and AI Consciousness
John Searle introduced a critical perspective on AI with his Chinese Room Argument[6] in 1980. Searle argued that even if a machine could convincingly simulate understanding, it would not truly "understand." His thought experiment contrasted symbolic manipulation (syntax) with genuine comprehension (semantics), challenging the notion that computational processes equate to consciousness. Searle’s work expanded the AI debate by questioning the depth of intelligence achievable through machines. While building on earlier frameworks like Turing’s behavioral test, he highlighted a potential limitation: that AI might mimic intelligence without achieving the subjective awareness humans experience. His ideas significantly shaped philosophical and ethical considerations in AI, influencing thinkers like Nick Bostrom on the complexities of machine consciousness.
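The argument is easy to caricature in code: a program that produces fluent replies by pure table lookup, with no representation of meaning anywhere. The rulebook below is a hypothetical toy of my own, not Searle's formulation, but it captures the syntax-without-semantics worry.

```python
# A caricature of Searle's rulebook: pure symbol manipulation, no semantics.
# The strings happen to be Chinese; the program need not "know" that.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def room(message):
    """Slip a message under the door; get a syntactically apt reply back."""
    return RULEBOOK.get(message, "请再说一遍。")  # default: "Please repeat."

print(room("你好吗？"))  # a convincing reply, yet nothing here "understands"
```

Searle's claim is that scaling this table up, however far, changes the quantity of symbol shuffling but never produces comprehension; his critics dispute exactly that step.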
Daniel Dennett (1942–2024): The Philosophy of Mind and Intentionality
Daniel Dennett was a prominent voice in the philosophy of mind, contributing significantly to how we think about artificial intelligence and its implications for understanding consciousness. Dennett’s work focused on intentionality—the quality of thoughts and mental states being “about” something—and the idea that intelligence and consciousness might emerge from complex systems, rather than being intrinsic to any specific mechanism. Dennett challenged dualistic ideas inherited from thinkers like Descartes, suggesting that consciousness is not a mystical property but a product of evolution and computation. His intentional stance, which treats intelligent systems as if they have beliefs and desires, has influenced how we design AI systems. Dennett’s work bridged philosophical questions with practical AI concerns, emphasizing that intelligence may not require consciousness at all.
Rodney Brooks (1954–): Embodied AI
Rodney Brooks shifted the AI conversation from abstract reasoning to real-world interaction. Critiquing traditional AI’s focus on symbolic processing, Brooks advocated for embodied AI—machines that learn and adapt by interacting with their environment. This approach, inspired by biology, emphasized that intelligence is not just about reasoning but also about perception, movement, and adaptation. Brooks’ work in robotics demonstrated that complex behavior could emerge from simple interactions with the world, a concept that later influenced deep learning pioneers like Yann LeCun. Brooks’ ideas helped AI transition from purely theoretical models to practical, adaptable systems.
Yann LeCun (1960–): The Deep Learning Revolution
Yann LeCun operationalized many of the ideas championed by Brooks, developing convolutional neural networks (CNNs)[7] that enabled machines to learn directly from data. This approach revolutionized fields like image recognition and natural language processing, making AI more practical and widespread. LeCun’s work built on decades of neural network research, much of it inspired by Turing and Minsky. By demonstrating the power of scalable, data-driven systems, LeCun moved AI closer to fulfilling the vision of thinkers like McCarthy and Simon while addressing real-world challenges.
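The core operation of a CNN can be shown without any framework: slide a small filter across the input and sum the elementwise products at each position. The sketch below uses a fixed, hand-written edge filter for clarity; in a real CNN the filter weights are learned from data, and many such filters are stacked into layers.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the core op of a CNN layer)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A fixed vertical-edge detector; in a CNN these weights would be learned.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # -> [[0, 2, 0], [0, 2, 0]]
```

The output peaks exactly where the 0-to-1 boundary sits, which is why stacks of such learned filters turn out to be so effective at recognizing visual structure.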
Ray Kurzweil (1948–): The Prophet of AI and the Singularity
Ray Kurzweil is one of the most influential futurists and technologists in the field of AI, known for his optimistic predictions about the future of intelligence and his concept of the Singularity—the point at which artificial intelligence surpasses human intelligence, leading to rapid, transformative changes in society. Kurzweil's work spans not only speculative philosophy but also practical technological advancements, such as his contributions to speech recognition and optical character recognition (OCR).
Building on the work of pioneers like Turing and McCarthy, Kurzweil emphasizes the exponential growth of technology, as described in Moore’s Law[8], and applies this principle to AI. In his books, such as The Age of Spiritual Machines and The Singularity Is Near, he predicts that by 2045, AI will achieve a level of intelligence that surpasses human capability in all domains. He argues that this transformation will bring unparalleled benefits, such as eradicating disease, enhancing human cognition, and even achieving immortality through brain-computer interfaces and digital consciousness.
Kurzweil’s optimism and his detailed roadmap for the future have influenced a generation of technologists, ethicists, and AI researchers. However, his views are not without controversy. Critics argue that his predictions about timelines and the potential benefits of AI can be overly speculative. Nevertheless, Kurzweil’s ideas force us to consider not just the practical applications of AI, but its ultimate implications for humanity’s evolution. His vision connects deeply with the ethical considerations raised by thinkers like Nick Bostrom and the philosophical questions of Dennett and Searle.
Nick Bostrom (1973–): Ethics and the Future of AI
While earlier thinkers focused on how to build intelligent machines, Nick Bostrom turned the conversation toward the risks and responsibilities of doing so. In his book Superintelligence, Bostrom explored the potential for AI to surpass human control, emphasizing the need to align AI’s goals with human values. His work revisits Turing’s questions about machine thought and expands them to consider the existential implications of advanced AI. Bostrom’s focus on ethics has shaped contemporary debates about AI safety, ensuring that the conversation about intelligence remains grounded in humanity’s broader needs.
Susan Schneider (1972–): AI and the Nature of Consciousness
Susan Schneider has emerged as a leading thinker in the philosophical and ethical implications of artificial intelligence, particularly concerning consciousness. As the director of the Center for the Future Mind, Schneider’s work focuses on the potential for AI to attain consciousness and the profound questions this raises for humanity. In her book Artificial You: AI and the Future of Your Mind, she explores the intersection of AI, neuroscience, and philosophy, asking whether future AI systems could genuinely experience consciousness or whether they will remain highly sophisticated, but ultimately non-sentient, tools.
Schneider builds on the work of Searle, Dennett, and Bostrom, engaging deeply with the concept of machine consciousness and its ethical ramifications. She emphasizes that consciousness is not merely about intelligent behavior but about subjective experience—a "what-it-is-like" quality that has thus far remained uniquely human. Schneider’s approach is rooted in empirical philosophy, drawing on neuroscience and cognitive science to argue that creating conscious machines would involve fundamentally understanding the nature of consciousness itself—a challenge that remains elusive.
Schneider also raises concerns about "mind uploading," a concept championed by futurists like Kurzweil. She critiques the idea that transferring one’s mind into a digital substrate could preserve personal identity, suggesting that such a process might create a copy rather than a continuation of the self. Her work compels us to think about the ethical and philosophical consequences of AI-driven transformations of human identity, making her a critical voice in contemporary AI discourse.
So far…
From Descartes’ philosophical musings to Schneider’s exploration of consciousness, our understanding of AI has evolved through centuries of thought and innovation. Each thinker built on the insights of their predecessors, expanding the boundaries of what machines can do and what it means to think. As we move into an era where AI plays an increasingly central role in society, these ideas remind us of the importance of balancing ambition with reflection, ensuring that the pursuit of intelligence serves both progress and humanity.
Let me know if I missed anyone.
1. A golem is an animated anthropomorphic being in Jewish folklore, created entirely from inanimate matter.
2. I expect to come back to this post as the field evolves, but also in response to your suggestions; did I miss anyone?
3. The Analytical Engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage’s Difference Engine, a design for a simpler mechanical calculator.
4. https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
In On Computable Numbers, with an Application to the Entscheidungsproblem, Turing introduced the concept of the Turing Machine, a theoretical device capable of performing any algorithmic computation. He defined "computable numbers" as those that can be calculated through finite steps and demonstrated that not all numbers are computable, outlining the boundaries of algorithmic computation.
Turing applied this framework to the Entscheidungsproblem, a mathematical question posed by David Hilbert, proving that no general algorithm exists to determine the provability of all mathematical statements. This groundbreaking result highlighted the fundamental limits of computation and formal systems.
Turing's work laid the foundation for modern computer science, establishing the principles of algorithmic computation, undecidability, and the theoretical underpinnings of digital computing. It remains one of the most influential contributions to mathematics, logic, and the development of artificial intelligence.
5. Computational universality refers to the ability of a computational system to simulate any other computational process, given the appropriate program and resources. A system is said to be "universal" if it can perform any computation that can be described algorithmically.
6. The Chinese Room has become one of the best-known thought experiments in recent philosophy. Searle imagines himself alone in a room, following a computer program for responding to Chinese characters slipped under the door. He understands no Chinese, yet by manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, leading those outside to suppose, mistakenly, that there is a Chinese speaker in the room.
7. A convolutional neural network (CNN) is a class of neural network that learns small filters applied repeatedly across an input such as an image, making it especially effective for visual recognition tasks.
8. Moore’s Law is the observation that the number of transistors in an integrated circuit (IC) roughly doubles every two years, with a minimal increase in cost. It is an empirical relationship and a projection of a historical trend, rather than a law of physics.