
The Philosophical Puzzle of Rational Artificial Intelligence: An MIT…


Artificial intelligence (AI) is evolving rapidly, but one fundamental question remains: to what extent can an artificial system be rational? A new MIT course, AI and Rationality, doesn’t seek to answer this question outright. Instead, it invites students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral in AI decision-making, especially when influenced by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.

The Intersection of Computer Science and Philosophy

This inquiry is rooted in a deep relationship between computer science and philosophy, disciplines that have long collaborated in formalizing what it means to form rational beliefs, learn from experience, and make rational decisions in pursuit of one’s goals. “You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT. Kaelbling holds an undergraduate degree in philosophy from Stanford University; computer science wasn’t available as a major at the time. The intersection is evident in the work of Alan Turing, who was both a computer scientist and a philosopher.

Brian Hedden, a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS), teaches the class with Kaelbling. He emphasizes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”

AI and Rationality: A New MIT Course

In fall 2025, Kaelbling and Hedden offered AI and Rationality for the first time as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch programs blending computing with other disciplines. With more than two dozen students registered, AI and Rationality is one of two Common Ground classes grounded in philosophy, alongside Ethics of Computing. Where Ethics of Computing addresses concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the contested definition of rationality itself through several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires to these systems.

Exploring Rationality in AI

Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy. “It’s important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”

Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent. Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”

The Rapid Evolution of AI

The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.

Diverse Student Body Engages with Philosophical Questions

So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study. Through the semester’s readings and discussions, students grappled with competing definitions of rationality and with how those definitions push back against assumptions in their own fields.

On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “I was surprised by how much the philosophical questions resonated with the technical aspects of AI. It made me realize that the two are not so different after all.” This sentiment is echoed by many students, who find that the philosophical inquiries not only deepen their understanding of AI but also provide a fresh perspective on their technical work.

The Nature of Rational Agency

One key component of the course is the nature of rational agency. Students consider what it means for an AI system to act rationally, probing how beliefs, desires, and intentions might be ascribed to such a system. For example, if a system is engineered to optimize a certain goal, in what sense can it be said to desire that goal? Answering requires fluency in both AI and philosophy.
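The decision-theoretic reading of rational agency discussed above — beliefs as probabilities over world states, desires as utilities over outcomes, rational action as maximizing expected utility — can be sketched in a few lines. This is an illustrative example only, not course material; the umbrella scenario, payoff numbers, and function names are invented for the sake of the sketch.

```python
# Illustrative sketch of decision-theoretic rational agency:
# "beliefs" are probabilities over states, "desires" are utilities
# over outcomes, and a rational agent maximizes expected utility.

def expected_utility(action, beliefs, utility):
    """Utility of each outcome, weighted by the believed probability of its state."""
    return sum(p * utility(action, state) for state, p in beliefs.items())

def rational_choice(actions, beliefs, utility):
    """Pick the action with the highest expected utility under the agent's beliefs."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Hypothetical scenario: carry an umbrella given a 30% belief in rain.
beliefs = {"rain": 0.3, "dry": 0.7}
payoff = {("umbrella", "rain"): 5, ("umbrella", "dry"): -1,
          ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 2}

choice = rational_choice(["umbrella", "no_umbrella"],
                         beliefs, lambda a, s: payoff[(a, s)])
# choice == "umbrella": expected utility 0.8 versus -1.6
```

Note that the philosophical puzzle the course raises survives the formalization: calling the dictionary above the agent’s “beliefs” is exactly the kind of ascription the class asks students to scrutinize.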

The Concept of a Fully Autonomous and Intelligent Agent

Another critical component is the concept of a fully autonomous and intelligent agent. Students are challenged to articulate what genuine autonomy requires: the capacity to make decisions without human intervention, and to learn and adapt from experience. If a system is designed to drive a car, for instance, what would it take to ensure it decides safely even in situations its designers never anticipated? The question cuts across the technical machinery of AI and the philosophy of autonomy.
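One classical way to sharpen “makes safe decisions in all situations” is to swap expected-utility maximization for a worst-case (maximin) rule: rank each action by its worst possible outcome rather than its average one. The sketch below is illustrative, not from the course; the driving states and payoff values are hypothetical.

```python
# Illustrative sketch of a worst-case (maximin) decision rule,
# one classical formalization of safety under uncertainty: the agent
# ranks each action by its worst outcome instead of its expected one.

def worst_case(action, states, utility):
    """The lowest utility this action can yield across all possible states."""
    return min(utility(action, s) for s in states)

def maximin_choice(actions, states, utility):
    """Choose the action whose worst case is least bad."""
    return max(actions, key=lambda a: worst_case(a, states, utility))

# Hypothetical driving scenario: proceeding is usually fine but can be
# catastrophic; braking is never great but never disastrous.
states = ["clear_road", "pedestrian"]
payoff = {("proceed", "clear_road"): 10, ("proceed", "pedestrian"): -1000,
          ("brake", "clear_road"): 0, ("brake", "pedestrian"): 0}

safe = maximin_choice(["proceed", "brake"], states,
                      lambda a, s: payoff[(a, s)])
# safe == "brake"
```

Whether maximin, expected utility, or something else is the *right* standard of rational caution is precisely the kind of disputed question the course puts on the table.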

Ascribing Beliefs and Desires to AI Systems

A third component of the course is the ascription of beliefs and desires to AI systems: how, if at all, can we attribute mental states to systems that are not biological entities? If a system is built to recognize faces, can we say it holds a belief about who appears in a particular image? Here, too, the technical and philosophical questions are inseparable.
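One common — and philosophically contested — move is to ascribe a “belief” to a system whenever its output confidence crosses a threshold. The sketch below is purely illustrative: the recognizer scores, names, and threshold are invented, and the point is that the threshold is a choice *we* impose, not something the system itself supplies.

```python
# Illustrative sketch: ascribing "beliefs" to a classifier by thresholding
# its confidence scores. The threshold is an external, contestable choice.

def ascribed_beliefs(scores, threshold=0.9):
    """Treat each identity whose confidence meets the threshold as a 'belief'."""
    return {name for name, p in scores.items() if p >= threshold}

# Hypothetical face-recognizer output over candidate identities.
scores = {"Alice": 0.97, "Bob": 0.55, "Carol": 0.02}
beliefs = ascribed_beliefs(scores)   # {"Alice"}
```

Lower the threshold and the system suddenly “believes” Bob is in the image too — which illustrates why the course treats belief ascription as a genuine philosophical problem rather than a settled engineering convention.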

The Broader Implications of Rational AI

The course also explores the broader implications of rational AI. Students weigh the ethical stakes of building systems that can act rationally: What are the potential benefits and risks of such systems? How can they be deployed responsibly?

The Future of AI and Rationality

The future of AI and rationality remains contested. Some experts argue that building rational AI systems is primarily a matter of technical feasibility; others hold that it turns on unresolved ethical and philosophical questions. By pairing computer scientists with philosophers, AI and Rationality gives students a rare opportunity to examine both positions in depth.

FAQ: Common Questions About Rational AI

What is the AI and Rationality course at MIT?

AI and Rationality is a new MIT course, co-taught by faculty in computer science and philosophy, that examines the philosophical questions raised by rational AI. It is offered through the Common Ground for Computing Education, an MIT Schwarzman College of Computing initiative that develops courses blending computing with other disciplines.

Who are the instructors of the AI and Rationality course?

The course is taught by Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, and Brian Hedden, a professor in the Department of Linguistics and Philosophy. Both bring backgrounds spanning computer science and philosophy.

What are the key components of the AI and Rationality course?

The key components are the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires to AI systems, explored through readings and discussion.

What are the broader implications of rational AI?

They include ethical considerations such as the potential benefits and risks of rational AI systems, along with questions about how such systems can be used responsibly.

What is the future of AI and rationality?

That future remains debated: some experts see rational AI chiefly as a question of technical feasibility, others as one of ethics and philosophy. The course equips students to examine both views in depth.

Conclusion

AI and Rationality pairs computer science with philosophy to probe what rationality could mean for artificial systems. By juxtaposing the two disciplines’ perspectives and challenging students’ assumptions, the course offers a fresh angle on where AI is headed. As the technology evolves, the questions it raises will only grow more pressing, and the habits of mind the course cultivates will be invaluable in shaping AI’s future.


“The limits of my language mean the limits of my world.” — Ludwig Wittgenstein

The quote above, from Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, encapsulates the spirit of the course. By exploring the philosophical questions surrounding rational AI, students are pushed to expand their understanding of both AI and philosophy. The course is not just about learning about AI; it is about learning how to think critically about AI — and how AI can help us think critically about ourselves. In an era when AI is transforming so much of our lives, that is a rare opportunity.
