Machines Are Learning, Too—But Not Like Us

Despite amazing progress in our understanding of the brain and how it works, we have yet to develop a rigorous scientific understanding of how we translate experience into knowledge—in other words, how we learn. And even as we continue to unravel the mysteries of our own cognition, we are beginning to teach machines how to learn, searching for ways to enable robots and other intelligent systems to begin to translate experience into knowledge that can guide their future actions.

Recent advances in the field of deep learning, an approach to machine learning inspired by the human brain, have catalyzed an explosion of innovation. The potential for artificial intelligence to improve our lives is growing with our ability to tap big data and provide seemingly boundless amounts of information to machines. For all the excitement, however, it is time to sound a note of caution. As we have seen in the fatal accidents caused by self-driving cars, we are at an early stage in which machines are beginning to learn faster than we are able to guide the learning process and rigorously assess its outcome.

One lesson at this precarious moment is this: To realize the promise offered by artificial intelligence, we must mitigate risks. And key to doing that is advancing the ability of machines to learn at the same pace as our ability to teach.

Computer programmers have long taught machines by developing precise instructions, mapping inputs to specific outputs. When a program can turn these inputs into the desired outputs repeatedly and reliably, voilà, the developer has successfully automated a physical or cognitive task—often one that was previously performed by a person. For example, many of us are able to do our taxes online thanks to these kinds of traditional computer programs. More sophisticated examples would be the operating system in a personal computer or software systems in spacecraft that can autonomously detect and address technical issues.
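
To make the contrast concrete, here is a rough sketch of that traditional approach in Python (the tax brackets and rates are invented for illustration, not drawn from any real tax code):

    # Traditional programming: a person writes every rule explicitly.
    # The brackets and rates below are hypothetical, for illustration only.
    def tax_owed(taxable_income: float) -> float:
        """Map an input (income) to an output (tax) using hand-written rules."""
        if taxable_income <= 10_000:
            return taxable_income * 0.10
        if taxable_income <= 40_000:
            return 1_000 + (taxable_income - 10_000) * 0.20
        return 7_000 + (taxable_income - 40_000) * 0.30

    # Given the same input, the program produces the same output, every time.
    print(tax_owed(25_000))   # 4000.0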

Despite tremendous advances in machine learning, our most powerful machines still rely heavily on information—the inputs—that humans provide. Consider the computer vision systems needed for self-driving cars and high-end security systems. These systems must be able to differentiate objects in images and video. Yet computer scientists don’t know how to write the very precise instructions necessary for mapping the color values of hundreds of thousands of pixels in a video feed to the labels of the objects in the video—at least not repeatedly and reliably. Machine learning makes this possible when a programmer uses a particular kind of algorithm—basically, a set of instructions for the computer—that lets the machine effectively program itself to perform the task. Yet it still requires human insight to make the choices about data selection and other factors that affect the outcome of the learning.
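
As a sketch of that division of labor (the data here are random stand-ins for real labeled images, and scikit-learn's logistic regression stands in for a modern vision model):

    # Machine learning: instead of writing rules, a person supplies examples
    # and a learning algorithm; the machine chooses its own parameters.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    pixels = rng.random((200, 32 * 32 * 3))   # 200 tiny RGB images, flattened
    labels = rng.integers(0, 2, size=200)     # 0 = "no vehicle", 1 = "vehicle"

    # The algorithm fits its parameters to map pixel values to labels.
    classifier = LogisticRegression(max_iter=1000).fit(pixels, labels)
    print(classifier.predict(pixels[:5]))

    # The human choices remain: which images to collect, how to label them,
    # which algorithm and settings to use. Those choices shape what is learned.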

As machines are learning to learn, we are just beginning to understand how to teach them. We must ask ourselves these questions: Under what conditions can a machine outperform a person at a particular task and by how much? If a machine learning algorithm can be thought of as a student, how do we best design that algorithm’s learning curriculum? How do we enable it to take experience gained from a training dataset and apply it to the complexities of the real world? And to continue our education analogy, what final exams should that system have to pass before it can be trusted to perform a task—such as driving a car or performing surgery—where the cost of mistakes may be measured in people’s livelihoods or even human lives?

Teaching a machine requires us to specify the outcomes we desire with extreme accuracy while avoiding the temptation to underestimate the human influence inherent in the process. Let’s go back to our computer vision system example: Specifying outcomes begins with a developer providing millions of photos along with the correct labels of the objects they portray. The algorithms that will learn to classify the contents of these images may contain thousands or even millions of parameters. The machine’s “learning” occurs as these parameters are selected through a process of optimization usually involving extensive trial and error on the part of both the human and the machine. Through this process, people are shaping the way that machines interpret data and how they perceive the world. Machine learning developers encode numerous biases in the algorithms they develop—some explicit and intentional, many more implicit and often unintentional. Without any common sense of their own, machines can’t tell the difference.
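
A bare-bones sketch of that optimization process, with random numbers standing in for labeled photos and a simple logistic model standing in for a deep network:

    # "Learning" as parameter optimization: nudge the parameters until the
    # model's outputs match the human-provided labels.
    import numpy as np

    rng = np.random.default_rng(0)
    features = rng.random((500, 100))      # 500 labeled examples, 100 features each
    labels = rng.integers(0, 2, size=500)  # labels supplied by people

    weights = np.zeros(100)                # the parameters the machine will learn
    learning_rate = 0.1

    for step in range(1000):
        # Forward pass: the model's current guess for every example.
        predictions = 1.0 / (1.0 + np.exp(-features @ weights))
        # The gradient of the loss says how to adjust each parameter.
        gradient = features.T @ (predictions - labels) / len(labels)
        weights -= learning_rate * gradient

    # Which data, which labels, which model, which learning rate: all of these
    # are human decisions that shape what the machine ends up learning.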


Let’s make this concrete with our computer vision example: Suppose we’d like to develop an algorithm that can recognize vehicles on the streets of Chicago. If we venture out on a sunny day to collect photos of sedans, motorcycles, bicycles, and other vehicles, that dataset will be biased toward the conditions under which the photos were collected. Those include the angle of the sun when each vehicle was recorded, the position of the camera relative to the vehicle, and the extent to which the vehicle is partially blocked by other objects in the scene. By themselves, biases are not inherently good or bad—the question is whether specific biases are appropriate for the problem at hand. In our example, the bias may be acceptable in Chicago, but if we want to use this algorithm in a rural setting where the vehicles could be different—say, farming equipment or tractors on the road—then it could be harmful. Getting more photographs under more varied conditions—that is, creating a larger and more diverse dataset—certainly would help, but data alone will never fully address the limitations of our current technology.
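
One practical response is to record the collection conditions alongside each photo and audit them before training. A minimal sketch (the metadata fields and values are hypothetical):

    # Auditing a dataset for collection bias before training on it.
    from collections import Counter

    photos = [
        {"label": "sedan",      "weather": "sunny", "setting": "urban"},
        {"label": "bicycle",    "weather": "sunny", "setting": "urban"},
        {"label": "motorcycle", "weather": "sunny", "setting": "urban"},
        # ... thousands more records in a real dataset
    ]

    # A dataset that is almost entirely sunny and urban will bias whatever is
    # learned from it toward those conditions.
    print(Counter(p["weather"] for p in photos))
    print(Counter(p["setting"] for p in photos))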

These limitations mean that algorithms can also “learn” spurious correlations that result from an unlucky coincidence in the data or, even worse, from intentional tampering. For example, if all the sedans we record in Chicago happen to be red, our algorithm may learn to rely on color as a defining feature of a sedan. Consider now how this concern could translate to an algorithm that determines eligibility for a home loan. Without appropriate controls in place, correlations learned from data could reinforce deep-seated inequities in our society. When our human values—and societal ethics—are at stake, it is essential to strike the right balance between machine learning and human teaching.
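
Here is a toy illustration of how easily that can happen (the data are invented; a real system would work from images rather than a single color feature):

    # A spurious correlation: every sedan in the training data happens to be red,
    # so color alone separates the classes perfectly -- on this data.
    from sklearn.tree import DecisionTreeClassifier

    colors = [[0], [0], [0], [0], [1], [1], [1], [1]]   # 0 = red, 1 = any other color
    labels = ["sedan"] * 4 + ["other"] * 4

    model = DecisionTreeClassifier().fit(colors, labels)
    print(model.score(colors, labels))   # 1.0 here, yet a blue sedan would be misclassified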

Of course, we know that undesirable human values and tendencies exist. At the criminal level or even the national strategic level, there are opportunities to exploit algorithms or create vulnerabilities in them for nefarious outcomes. A bad actor could add an image of a skull-and-crossbones onto the side of half of the sedans photographed for the Chicago database and mislabel these doctored cars as fire hydrants. This would poison the algorithm and create the opportunity for a vehicle with a skull-and-crossbones sticker to vanish in plain sight. The machine itself would neither find this transformation strange nor alert us if the fake hydrant began driving down the street. Even as modern machine learning propels incredible innovation, we’re still learning about these inherent limitations.
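
In data-poisoning terms, the attack amounts to flipping labels on examples that carry the attacker's trigger. A schematic sketch (the dataset records and the has_sticker field are hypothetical):

    # Label poisoning: mislabel training examples that carry a visual trigger.
    def poison(dataset):
        poisoned = []
        for example in dataset:
            tampered = dict(example)
            if example["label"] == "sedan" and example["has_sticker"]:
                tampered["label"] = "fire hydrant"   # the attacker's false label
            poisoned.append(tampered)
        return poisoned

    clean = [
        {"image": "car_001.jpg", "label": "sedan", "has_sticker": True},
        {"image": "car_002.jpg", "label": "sedan", "has_sticker": False},
    ]
    print(poison(clean))

    # A model trained on the poisoned set learns "sticker means fire hydrant" and
    # will treat the attacker's car as street furniture rather than a vehicle.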

In fact, as artificial intelligence continues to advance, there are growing concerns that we are creating increasingly powerful machines that may learn to pursue the goals we give them in undesirable or even catastrophic ways. Artificial intelligence luminary Stuart Russell muses about a superintelligent system that learns to stop climate change by reducing the number of people on the planet, because science has identified human activity as a central cause of a warming Earth. Although artificial intelligence this powerful—if it’s even possible—is a long way off, such doomsday scenarios underscore the importance of establishing a theoretical foundation for correctly specifying the outcomes—the goals—we hope machines will learn to accomplish.


Researchers are beginning to integrate language, geometry, physics, biology, and other foundations of human knowledge into machine learning algorithms. If this research pays off, it could simultaneously make machines smarter and more compatible with human intelligence. Connecting pattern recognition with descriptive language and geometric concepts, for example, could allow our computer vision algorithm not only to recognize a vehicle, but also to describe it. It could relay that a passing truck is “made of metal” and “carrying two passengers,” and flag unexpected details like “missing a tire” or “driving backward.” Such a system could more easily explain its decisions and actions and make correcting its mistakes as intuitive as holding a conversation. It might also be harder to fool through adversarial attacks.
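
In code, the shift is from a bare label to a structured, inspectable description. A sketch of what such an output might look like (the fields and values are hypothetical):

    # Pairing recognition with descriptive, inspectable output.
    from dataclasses import dataclass, field

    @dataclass
    class VehicleReport:
        category: str                                    # e.g., "truck"
        attributes: list = field(default_factory=list)   # e.g., "made of metal"
        anomalies: list = field(default_factory=list)    # e.g., "missing a tire"

    report = VehicleReport(
        category="truck",
        attributes=["made of metal", "carrying two passengers"],
        anomalies=["driving backward"],
    )
    print(report)   # a decision a person can read, question, and correct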

However, some researchers think that encoding knowledge about the world into a machine learning algorithm is a doomed enterprise. They argue that creating more capable algorithms instead requires more closely mimicking the structure of the human brain. Today’s deep learning algorithms contain layers of artificial neurons, with interconnections somewhat analogous to synapses in the brain, and can have millions of parameters. Our brains, by comparison, have on the order of 100 billion neurons with 100 trillion synaptic connections. If we one day succeed in developing an algorithm as complex as the human brain, its inner workings may be nearly impossible to understand. Could we ever reliably teach, test, and ultimately trust such a system?

Creating a machine that achieves or surpasses humanlike intelligence remains well beyond the limits of our current technology. In the near term, we must carefully consider the roles and responsibilities of people throughout the life cycle of an intelligent system—from design and development through testing and deployment.

If we one day reach the point where teaching machines to perform complex tasks such as understanding a scene is no longer necessary, it may be that all we then have left to teach them are our values and long-term goals for humanity. A task that important is one we should never automate.

The Takeaway

To realize the promise offered by artificial intelligence, we must mitigate risks and consider the roles and responsibilities of people in its development.

Ashley J. Llorens is chief of the Intelligent Systems Center at the Johns Hopkins University Applied Physics Laboratory.
