The Animal-AI Olympics prize pool has tripled to $30,000 (cash + travel + compute) and the environment v0.1 has been released. Download it here and find out more information about the competition here!
Each week I will post a development blog about a different aspect of designing, building or running the competition. This week, Lucy Cheke, Marta Halina, and I explain the motivations for the competition itself, and take a look at why we think it’s a good idea to translate animal cognition tasks for AI.
People have been trying to understand cognition in animals for over 100 years. During this time they have developed a range of tests to examine and measure cognitive abilities in a thorough, detailed manner. Some of these tasks have been used thousands of times over a range of species and are very well understood, described and characterised. The tasks range from very simple, for example testing a basic understanding of distance, to quite complicated, for example investigating causal understanding of water displacement. In each case, the idea is to develop tests that will show just how the animal brain understands, interprets, and reasons about the world.
As AI continues to develop, conquering more and more of the tasks we have set for it, it is natural to ask if it can also conquer the kinds of tasks we set for animals. Furthermore, as we develop increasingly behaviourally robust and complex systems, it will become important to test not just what AI systems are capable of, but how they manage it. We will want to measure which cognitive abilities each type of system is most capable of, and feed this back into future development. The Animal-AI Olympics begins this process early. Instead of waiting for AI systems to catch up to animals so that they can be tested, we translate some of the simpler tasks so that research can begin heading towards this goal now, and get there faster.
Due to the large difference between animals and AI systems, the translation is not trivial. Animals are able to interact with their environments and capable of behaviours that we don’t fully understand. AI systems are good at solving the problems they are designed for, but they don’t (currently) function and interact with the world in interesting ways independently of these problems. We can’t go out into the world, encounter an AI system in the wild, and then expect it to perform intelligently when we present it with a novel situation or problem.
Fortunately, despite the differences, many tests in animal cognition are well suited for translation to AI. When a test in animal cognition is used across multiple species, it needs to be designed to abstract away from the exact physical properties of each animal. There is no point testing a mouse on its ability to fly through coloured hoops, or a fish on its ability to climb a tree. This leads to many tests that focus on a common set of abstract cognitive traits rather than on species-specific abilities.
Reinforcement learning, the AI framework our competition is built around, has its origins in learning theory: the behaviourist movement that drove early explorations of animal cognition, and which today underpins much of what we understand about how learning works in biological systems.
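To make the connection concrete, below is a minimal tabular Q-learning sketch, the textbook formalisation of trial-and-error learning. The corridor environment, constants, and function names are our own toy illustration, not part of the competition codebase:

```python
import random

N_STATES = 5          # corridor cells 0..4; food (terminal state) at cell 4
ACTIONS = [-1, +1]    # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn, purely by trial and error, that walking right leads to food."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action index]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate towards
            # the observed reward plus the discounted best future value
            q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
            state = next_state
    return q
```

After training, the greedy policy moves right in every cell: reward alone, with no built-in knowledge of the task, is enough to shape the behaviour, which is exactly the kind of learning the behaviourists studied in animals.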
Some people believe that complex cognition, the "higher" functions that characterise the intelligence of humans and our close relatives such as chimpanzees, cannot be thought of as an extension of reinforcement learning, but requires a whole new type of intelligence. The Animal-AI Olympics includes a range of tasks, from some that should be unequivocally solvable through reinforcement learning to others that have been strongly argued to require "higher cognition". The debate as to whether it is even possible to draw such a distinction is ongoing, and we hope the results of the competition will make a contribution here.
The tasks are designed to help us understand whether (and which) learning algorithms are able to go beyond basic learning and into what might be called "general intelligence". We also hope that the results will provide interesting data points for animal cognition research and give us better insight into the type of learning that might underpin biological intelligence.
Animal cognition tests are commonly designed to probe a particular cognitive capacity, and therefore need to minimise the possibility that they can be solved without the capacity in question. Crucially, participants are not given the opportunity to learn to solve the specific test problems through trial and error: trial-and-error learning is not what is being tested for, and so constitutes an experimental confound.
Similarly, the Animal-AI Olympics needs to present AI systems with tasks that are unknown to them immediately prior to testing. This is why we have not released the exact details of the tests themselves. Instead, we provide the environment that the tests will be built in, and ask participants to enter an agent with robust food retrieval behaviour. Participants can configure the environment however they like to create the best learning situations for their agent.
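In outline, an entry interacts with the environment through an observe–act–reward loop. The sketch below uses a toy stand-in environment (our hypothetical `FoodEnv`, with made-up method names in the style of a Gym interface); it is not the competition’s actual API:

```python
class FoodEnv:
    """Toy stand-in environment: the agent must move right until it reaches food."""

    def __init__(self, size=4):
        self.size = size

    def reset(self):
        self.pos = 0
        return self.pos                      # observation

    def step(self, action):                  # action: 0 = stay, 1 = move right
        self.pos = min(self.pos + action, self.size)
        done = self.pos == self.size         # episode ends when food is reached
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def run_episode(env, policy, max_steps=50):
    """Run one episode, feeding observations to the policy and summing reward."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

def greedy(obs):
    # A hand-coded "food retrieval" policy for this toy world: always move right.
    return 1
```

In the competition, participants would replace the hand-coded policy with a learned one, trained on whatever environment configurations they choose, and it would then be evaluated on the hidden tests.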
There is a danger that the secrecy of the actual tests leaves researchers unsure of how to proceed. Therefore, next week we will give more detail about the tests in the competition and discuss the challenges of designing an AI competition around hidden tests.
Find more information at the competition website.