Editor’s note, Sept 2022: some incomplete lecture notes from GECCO 2021.

GECCO’21 Site

The Genetic and Evolutionary Computation Conference (GECCO), one of the main conferences in its subfield, concluded recently. This marks the second year that the conference has been held completely virtually. As a result, many of the talks are available as VODs, which makes them easy to catch up on, especially since the conference ran on European time (sessions starting at approximately 1:00 AM local time). 134 papers and 136 posters were accepted, along with the usual battery of tutorials, workshops, and guest speakers.

Reverse-engineering Core Common Sense with the Tools of Probabilistic Programs, Game-style Simulation Engines, and Inductive Program Synthesis

In this talk, Josh Tenenbaum presents a few approaches, inspired by developmental psychology, to help bridge the gap between current narrow AI and the goal of AGI. As with Jeff Clune’s AI-Generating Algorithms and François Chollet’s On the Measure of Intelligence, many researchers are starting to look for paths to general intelligence due to the brittleness and diminishing returns of current methods. To underscore the point, the lack of progress in self-driving cars was used as an example of ML’s shortcomings.

Their focus has been on understanding and emulating the physics intuition of young humans, roughly in the age range of 6 months to 1 year. Even at an early age, humans can understand and predict computationally complex physics problems, such as whether a tower of wooden blocks will fall. To emulate this internal understanding of physics, they turn to fast approximate physics engines like the ones used in games. With this, they built a probabilistic physics engine that makes human-like intuitive predictions. They extended the work by wrapping the engine in a larger system that solves a series of physical problems involving getting a ball to a desired location. In these studies, the system mimicked human intuition in both its physics predictions and its solutions to physics puzzles. What I found most interesting was that the system often made the same mistakes that human intuition makes.
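
The core trick, as I understand it, is to run a cheap, game-style simulator many times under perceptual noise and read the prediction off the sample statistics. Below is a minimal sketch of that sampling idea, not the actual engine from the talk: the `tower_falls` stability check, the noise level, and the example tower are all stand-ins of my own.

```python
import random

def tower_falls(block_offsets):
    """Crude stand-in for a physics engine: the tower 'falls' if the center
    of mass of the blocks above some level overhangs that block's edge."""
    half_width = 0.5  # blocks are 1 unit wide
    for i in range(len(block_offsets) - 1):
        above = block_offsets[i + 1:]
        com = sum(above) / len(above)  # center of mass of blocks above level i
        if abs(com - block_offsets[i]) > half_width:
            return True
    return False

def p_fall(observed_offsets, noise=0.15, samples=1000):
    """Estimate the probability the tower falls, marginalizing over
    perceptual noise: resample block positions and simulate each sample."""
    falls = 0
    for _ in range(samples):
        noisy = [x + random.gauss(0.0, noise) for x in observed_offsets]
        falls += tower_falls(noisy)
    return falls / samples

# A slightly staggered three-block tower: the model outputs a graded
# probability of falling, much like a human's uncertain intuition.
print(p_fall([0.0, 0.3, 0.55]))
```

Because the judgment is a noisy Monte Carlo estimate rather than an exact computation, a model like this can reproduce graded, sometimes wrong, human-like answers on borderline towers.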

The second segment of the talk discussed integrating learning into their systems. This integration is still a work in progress, which is also why the researchers were interested in reaching the GECCO community. Like Chollet, they believe that knowledge and intelligence can be embedded in the space of algorithms, and that adding a learning component brings us one step closer to emulating the human model of common sense and understanding. To wrap up, they showed DreamCoder, work reminiscent of primitive granularity control, which learns to solve problems in different domains by building up libraries of specialized functions and assembling them into higher-level solutions.
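
To make the library-building idea concrete, here is a toy sketch of that kind of loop under heavy simplifications of my own: integer-valued primitives, brute-force enumeration, and a crude “most frequent adjacent pair” compression step standing in for DreamCoder’s actual Bayesian program-library induction. The primitives and tasks are invented examples.

```python
from itertools import product

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Apply a sequence of library functions left to right."""
    for name in program:
        x = LIBRARY[name](x)
    return x

def solve(task, max_depth=4):
    """Brute-force search for a composition of library functions fitting
    all (input, output) pairs in the task."""
    for depth in range(1, max_depth + 1):
        for program in product(LIBRARY, repeat=depth):
            if all(run(program, i) == o for i, o in task):
                return program
    return None

def compress(solutions):
    """Name the most frequent adjacent pair of calls as a new primitive,
    so later searches reach the same behavior at a shallower depth."""
    pairs = {}
    for prog in solutions:
        for a, b in zip(prog, prog[1:]):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
    if pairs:
        (a, b), count = max(pairs.items(), key=lambda kv: kv[1])
        if count > 1:
            f, g = LIBRARY[a], LIBRARY[b]
            LIBRARY[f"{a}>{b}"] = lambda x, f=f, g=g: g(f(x))

LIBRARY = dict(PRIMITIVES)
tasks = [
    [(1, 4), (2, 6)],  # 2(x + 1): solved by ("inc", "double")
    [(1, 5), (3, 9)],  # 2(x + 1) + 1: solved by ("inc", "double", "inc")
]

for _ in range(2):  # alternate solving ("wake") and compressing ("sleep")
    solutions = [p for t in tasks if (p := solve(t))]
    compress(solutions)

print(sorted(LIBRARY))  # now includes the learned "inc>double" combinator
print(solve([(2, 6)]))  # reachable at depth 1 using the new primitive
```

The shared fragment `inc` followed by `double` appears in both solutions, so it gets promoted into the library, and subsequent tasks that need it become solvable with a shorter search, which is the essence of building higher-level solutions from specialized functions.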

Why AI is Harder Than We Think

Like the previous talk, this one focused on bridging the gap between narrow and general AI. However, rather than presenting a single research direction, the focus was on common fallacies held by AI researchers and on open challenges in the field that may lead toward general AI. In another parallel to the previous presentation, the lack of progress in self-driving cars was used to underscore the main points. In addition, the presenter compared the optimism of today’s researchers to that of researchers in the late 20th century, right before the first AI winter. On the other hand, while some prominent AI researchers are optimistic, expecting general AI to be realized within the next 10-20 years, according to a prominent survey that was making the rounds not too long ago, most researchers expect AGI to be 50+ years away.

To start out, four fallacies were presented:

  • Narrow AI is necessarily a pathway to general AI
  • Tasks that are easy for humans are not necessarily easy to computerize, and vice versa
  • Wishful mnemonics are particularly dangerous, even for experts
  • Intelligence may not be located just in the brain

Of the four points presented, wishful mnemonics stands out the most to me. Even among experts, we use terms like “learning”, “goals”, and “behaviors” to describe how AI algorithms operate, which can give outsiders a very skewed perception of the field, to say nothing of how AI solutions are marketed in industry. It is a tough problem to fix, though, because the existing vocabulary is convenient. Of all the concerns raised, this gap between expectation and reality, created by the framing and language of the field, seems the most likely to cause disillusionment and the next AI winter. ML has been revolutionary for many tasks, such as computer vision, but we must be careful not to conflate narrow AI with AGI or biological intelligence.

Following the fallacies, five open problems were laid out as key tasks on the path to general AI:

  • Few-shot learning
  • Generalization
  • Abstraction and analogy
  • Transparency and robustness
  • Understanding and common sense