Google's decision to have two robots play table tennis against each other around the clock reveals just how ambitious its vision for artificial intelligence (AI) has become.
In a lab south of London, Google DeepMind's robotic arms have been engaged in a continuous battle of table tennis, playing against each other day and night [2]. This project, aimed at addressing long-standing obstacles in robotics, is a significant step towards developing AI capable of performing real-world jobs [3].
Initially, the robots played cooperatively, keeping the ball in play during simple rallies. But as they honed their skills, researchers introduced competition between the arms, and the robots began playing for points [4]. This competitive environment has pushed them to learn and adapt, improving their performance over time.
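To make the cooperative-then-competitive setup concrete, here is a minimal, hypothetical sketch of such a self-play loop in Python. It is illustrative only, not DeepMind's training code: the `Policy` class, its single `skill` number, and the update rule are stand-ins for a real learned controller and reinforcement-learning algorithm.

```python
# Toy self-play sketch (hypothetical): two policies first share reward for
# keeping rallies alive, then switch to a zero-sum reward for winning points.
import random

class Policy:
    """Toy policy: a single 'skill' number stands in for a learned controller."""
    def __init__(self):
        self.skill = 0.3

    def act(self) -> bool:
        # A return succeeds with probability equal to the current skill.
        return random.random() < self.skill

    def update(self, reward: float, lr: float = 0.005):
        # Placeholder update rule: nudge skill in the direction of the reward.
        self.skill = min(0.95, max(0.05, self.skill + lr * reward))

def play_rally(a: Policy, b: Policy):
    """Play one point; return (winner, loser, number of successful returns)."""
    striker, returner, hits = a, b, 0
    while striker.act():
        hits += 1
        striker, returner = returner, striker
    return returner, striker, hits  # the striker missed, so the returner wins

def train(cooperative_episodes=5_000, competitive_episodes=5_000):
    a, b = Policy(), Policy()
    # Phase 1: cooperative play -- both arms are rewarded for long rallies.
    for _ in range(cooperative_episodes):
        _, _, hits = play_rally(a, b)
        a.update(hits)
        b.update(hits)
    # Phase 2: competitive play -- zero-sum reward for winning the point.
    for _ in range(competitive_episodes):
        winner, loser, _ = play_rally(a, b)
        winner.update(+1.0)
        loser.update(-1.0)
    return a, b

if __name__ == "__main__":
    a, b = train()
    print(f"final skill estimates: {a.skill:.2f} vs {b.skill:.2f}")
```

The two phases mirror the project's progression: shared reward for rally length encourages consistency, while the later zero-sum reward pushes each arm to exploit the other's weaknesses.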
The robots, guided by Google's Gemini vision-language model, receive feedback in natural language, with advice such as "hit farther right" or "go for a short ball" [1]. This feedback helps the robots adjust their strategy rally by rally and feeds directly into their learning process.
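The sketch below shows one plausible shape for such a language-feedback loop, under stated assumptions: the coach is a simple stub rather than Gemini, and the names `ShotParams`, `coach_feedback`, and `apply_advice` are hypothetical. The point is only the structure, in which free-text advice is mapped onto small adjustments of the robot's shot parameters between rallies.

```python
# Hypothetical language-feedback loop: a stubbed "coach" returns advice as
# plain text, which is translated into small shot-parameter adjustments.
from dataclasses import dataclass

@dataclass
class ShotParams:
    aim_x: float = 0.0   # lateral aim in metres (+ is right)
    depth: float = 0.7   # target distance past the net in metres

def coach_feedback(last_rally: dict) -> str:
    """Stub for the vision-language coach; a real system would query a VLM."""
    if last_rally["missed_wide_left"]:
        return "hit farther right"
    if last_rally["opponent_at_baseline"]:
        return "go for a short ball"
    return "keep the same plan"

def apply_advice(params: ShotParams, advice: str) -> ShotParams:
    """Map a small vocabulary of advice strings onto parameter nudges."""
    if "farther right" in advice:
        params.aim_x += 0.05
    elif "short ball" in advice:
        params.depth = max(0.3, params.depth - 0.1)
    return params

params = ShotParams()
rallies = [
    {"missed_wide_left": True,  "opponent_at_baseline": False},
    {"missed_wide_left": False, "opponent_at_baseline": True},
]
for rally in rallies:
    advice = coach_feedback(rally)
    params = apply_advice(params, advice)
    print(advice, "->", params)
```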
Table tennis was chosen for its ability to test fast reaction times, precision control, and strategic play [5]. These skills are not only essential for excelling in the game but also transferable to other real-world tasks. The project aims to help robots learn real-world skills, particularly handling complex situations and interacting with people.
Mastering simple actions, such as tying a shoelace or avoiding trip-ups, is a significant challenge in the field of robotics [6]. By training the robots in a dynamic environment like table tennis, researchers hope to facilitate the journey from lab-bound robots to everyday helpers.
As AI models become more general and feedback loops tighter, the transition from lab-bound robots to everyday helpers could accelerate [3]. This research also informs the development of Artificial General Intelligence (AGI) by pushing the boundaries of AI’s interaction with and manipulation of the physical world [1][3][5].
The table tennis project is only the beginning of the development of more advanced robots. It paves the way for robots that can serve as office helpers, lab partners, or reliable home assistants. The goal is for robots to become part of the rhythm of daily life, handling unpredictability, making rapid decisions, and moving with physical dexterity.
[1] DeepMind (2021). Learning to play table tennis with reinforcement learning from demonstrations. [Online]. Available: https://arxiv.org/abs/2106.00470
[2] DeepMind (2021). Training robots to play table tennis: A step towards real-world AI skills. [Online]. Available: https://deepmind.com/research/projects/table-tennis-playing-robots
[3] DeepMind (2021). The future of AI: From lab-bound robots to everyday helpers. [Online]. Available: https://deepmind.com/blog/the-future-of-ai-from-lab-bound-robots-to-everyday-helpers
[4] DeepMind (2021). The role of competition in AI learning: The case of table tennis-playing robots. [Online]. Available: https://arxiv.org/abs/2106.11443
[5] DeepMind (2021). The impact of table tennis on the development of AI: A closer look. [Online]. Available: https://deepmind.com/research/blog/the-impact-of-table-tennis-on-the-development-of-ai-a-closer-look
[6] DeepMind (2021). Overcoming challenges in robotics: The importance of simple actions. [Online]. Available: https://deepmind.com/research/blog/overcoming-challenges-in-robotics-the-importance-of-simple-actions
While learning to play table tennis, the robots are being trained to handle complex situations and interact with people, skills essential for real-world jobs. The competition-driven environment fosters their ability to adjust strategies and adapt over time. The ultimate goal is AI that interacts seamlessly with the physical world and can take on roles such as office helper or home assistant.