One of my first jobs was working as a midnight janitor. I pushed a mop around a concert venue with a handful of other men, cleaning up the aftermath of country music concerts. Forty years earlier, the middle-aged men mopping alongside me would have had better-paying jobs in the coal mines or factories that had since been taken over by automated labor. What was left for them was pushing a mop after midnight. I thought of them recently when I first observed the use of robot janitors at an airport in England. The boxy little machines use laser scanners and ultrasonic detectors to navigate while cleaning the floors. When the robot encounters a human obstacle, it says in a proper English accent, “Excuse me, I am cleaning,” and then navigates around the person.
The last wave of labor substitution from automation and robotics came in jobs that were often dangerous, dirty, and dreary and involved little personal interaction. Initially the affected jobs were in industrial spaces like ports, factories, mines, and mills. Now, as with the janitorial crew, the move is to nonindustrial spaces such as restaurants and hotels. Jobs in the service sector that were largely safe from loss during the last stage of globalization will be at risk because advances in robotics have accelerated in recent years. With breakthroughs in the field itself (as well as advances in information management, computing, and high-end engineering), tasks once thought to be the exclusive domain of humans—those that require personalized skills, situational awareness, spatial reasoning and dexterity, contextual understanding, and human judgment—are opening up to robots.
Two key developments dovetailed to make this possible: improvements in modeling belief space and the uplink of robots to the cloud. “Belief space” refers to a mathematical framework that allows us to model a given environment statistically and develop probabilistic outcomes. It is basically the application of algorithms to make sense of new or messy contexts. For robots, modeling belief space opens the way for greater situational awareness. It has led to breakthroughs in areas such as grasping, once a difficult robot task. Until recently, belief space was far too complex to sufficiently compute, a task made all the more difficult by the limited sets of robot experience available to analyze. But advances in data analytics have combined with exponentially greater sets of experiential robot data to enable programmers to develop robots that can now intelligently interact with their environment.
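The core of belief-space reasoning can be made concrete with a toy example. The sketch below, with an invented five-cell corridor and made-up sensor probabilities, shows the simplest form of the idea: a robot holds a probability distribution over where it might be and uses Bayes' rule to sharpen that belief after a noisy sensor reading. Real systems use far richer models (POMDPs, particle filters), but the shape of the computation is the same.

```python
# A minimal belief update: the robot starts fully uncertain about which of
# five corridor cells it occupies, then folds in one noisy sensor reading.
# The map and probabilities are hypothetical, chosen only for illustration.

def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def sense(belief, world, reading, p_hit=0.9, p_miss=0.1):
    # Bayes' rule: weight each cell's prior by how likely the sensor
    # reading would be if the robot were actually in that cell.
    weighted = [
        b * (p_hit if world[i] == reading else p_miss)
        for i, b in enumerate(belief)
    ]
    return normalize(weighted)

world = ["door", "wall", "door", "wall", "wall"]  # map of the corridor
belief = [0.2] * 5                                # uniform prior: no idea

belief = sense(belief, world, "door")             # sensor reports a door
print([round(b, 3) for b in belief])              # → [0.429, 0.048, 0.429, 0.048, 0.048]
```

After one reading, the belief concentrates on the two door cells; each further reading or movement narrows it more, which is what makes once-hard tasks like grasping tractable.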
The recent exponential growth of robot data is due largely to the development of cloud robotics, a term coined by Google researcher James Kuffner. Linked to the cloud, robots can access vast troves of data and shared experience to enhance the understanding of their belief space. Before being hooked up to the cloud, robots had access to very limited data—either their own experience or that of a narrow cluster of robots. They were stand-alone pieces of electronics with capabilities that were limited to the hardware and software inside their units. But by becoming a networked device, constantly connected to the cloud, each robot can now incorporate the experiences of every other robot of its kind, “learning” at an accelerating rate.
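The pooling of experience that cloud robotics enables can be sketched in a few lines. The class below is a hypothetical cloud-side store, not any real platform's API: each robot uploads the outcome of its trials, and any robot in the fleet, even one that has never attempted the task, can query the aggregate.

```python
# Sketch of fleet learning: stand-alone robots learn only from their own
# trials, while cloud-connected robots pool every trial into one shared
# record. The "grasp" trial data here is invented for illustration.

from collections import defaultdict

class SharedExperience:
    """Cloud-side store aggregating trial outcomes from an entire fleet."""

    def __init__(self):
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def record(self, obj, succeeded):
        # Any robot uploads the result of one grasp attempt.
        self.attempts[obj] += 1
        self.successes[obj] += int(succeeded)

    def success_rate(self, obj):
        if self.attempts[obj] == 0:
            return None  # no fleet data for this object yet
        return self.successes[obj] / self.attempts[obj]

cloud = SharedExperience()
# Robot A tries a mug twice, Robot B once; all three results are uploaded.
cloud.record("mug", True)
cloud.record("mug", False)
cloud.record("mug", True)
# Robot C has never seen a mug but can draw on the fleet's experience.
print(cloud.success_rate("mug"))  # → 0.6666666666666666
```

The point is the asymmetry: three robots contributed data, but every robot in the fleet benefits, which is why networked robots "learn" at an accelerating rate.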
Imagine the kind of quantum leap that human culture would undertake if we were all suddenly given a direct link to the knowledge and experience of everyone else on the planet—if, when we made a decision, we were drawing not from just our own limited experience and expertise but from those of billions of other people. Big data has enabled this quantum leap for the cognitive development of robots.
Another major development in robotics arrived through the material sciences, which have allowed robots to be constructed of new materials. Robots no longer have to be housed in the aluminum bodies of armor that characterized C-3PO or R2-D2. Today’s robots can have bodies made of silicone, or even spider silk, that are eerily natural looking. Highly flexible components—such as air muscles (which distribute power through tubes holding highly concentrated pressurized air), electroactive polymers (which change a robot’s size and shape when stimulated by an electric field), and ferrofluids (basically magnetic fluids that facilitate more humanlike movement)—have created robots that you might not even recognize as being artificial, almost like the Arnold Schwarzenegger cyborg in “The Terminator.” An imitation caterpillar robot designed by researchers at Tufts University to perform tasks as varied as finding land mines and diagnosing diseases is even biodegradable—just like us.
Robots are now being built both bigger and smaller than ever before. Nanorobots, still in the early phases of development, promise a future in which autonomous machines at the scale of 10⁻⁹ meters (far, far smaller than a grain of sand) can diagnose and treat human diseases at the cellular level. On the other end of the spectrum, the world’s largest walking robot is a German-made fire-breathing dragon that stands 51 feet, weighs 11 tons, and is filled with 80 liters of fake blood for the staging of a folk play.
Indeed, the term “robot” was coined in a 1920 play, “R.U.R. (Rossum’s Universal Robots),” by the Czech science fiction writer Karel Čapek. But the name has deeper roots. “Robot” derives from two Czech words, robota (“obligatory work”) and robotník (“serf”), to describe, in Čapek’s conception, a new class of “artificial people” that would be created to serve humans. Although robots are doing certain things that humans could never do, their main use continues to be work that humans have been doing occupationally for centuries.
The next generation of robots will be mass-produced at declining costs that will make them increasingly competitive with even the lowest-wage workers, such as my co-workers on the janitorial crew. They will dramatically affect employment patterns as well as broader economic, political, and social trends. An example can be seen with Foxconn, the Taiwanese company that manufactures iPhones along with many other gadgets developed by companies such as Apple, Microsoft, and Samsung. Its largest factory complex, in the Shenzhen manufacturing zone near Hong Kong, employs workers in 15 separate factories. The company has announced plans to purchase 1 million robots over three years to supplement its workforce of 1 million.
Right now, the robots are slated to take over routine jobs such as painting, welding, and basic assembly. In May 2016, Foxconn laid off 60,000 employees in one day and announced that they would be replaced by robots. The company hopes to have the first fully automated plant in operation in the next five to 10 years.
Market forces are at least partly behind these developments. Over the past 10 years, Foxconn was able to amass such a large workforce because labor in China was so cheap. But wages in China have risen along with its overall economic growth—wages for manufacturing jobs have soared between fivefold and ninefold in the past decade—making it increasingly expensive to maintain a large Chinese labor force.
Boiled down to economic terms, the choice between employing humans versus buying and operating robots involves a trade-off in terms of expenditures. Human labor involves very little “capex,” or capital expenditures—upfront payments for buildings, machinery, and equipment—but high “opex,” or operational expenditures, the day-to-day costs such as salary and employee benefits. Robots come with a diametrically opposed cost structure: Their upfront capital costs are high, but their operating costs are minor—robots don’t get a salary. As the capex of robots continues to go down, the opex of humans becomes comparatively more expensive and therefore less attractive for employers.
In industrialized countries, what we have witnessed in terms of manufacturing job loss is repeating itself across the economy. During the recent recession, 1 in 12 people working in sales in the United States was laid off. Two Oxford University professors who examined more than 700 detailed occupational types published a study making the case that over half of U.S. jobs could be at risk of computerization in the next two decades. Forty-seven percent of American jobs are at high risk for robot takeover, and 19 percent face a medium level of risk. Those with jobs that are hard to automate—lawyers, for example—may be safe for now, but those with more easily automated white-collar jobs, such as paralegals, are at high risk. In the greatest peril is the 60 percent of the U.S. workforce whose main job function is to aggregate and apply information.
So what to do about this? For starters, we must ensure that the outputs of our education systems map to the inputs of those fields where there will be human job growth. A report by the World Economic Forum estimates that the next wave of labor automation will eliminate 7.1 million jobs while producing 2.1 million new jobs. There is no feeling good about a net loss of 5 million jobs, but the savvier stakeholders will focus on developing skills that either cannot be replicated by artificial intelligence or are part of the actual advancement of these technologies. An irony is that in a world growing more suffused with computer code and artificial intelligence, those things that make us most human become increasingly important in the workforce: emotional intelligence, creativity, critical thinking, communications, and teaching.
The most resilient people in the workplace will be those with interdisciplinary skills, a combination of technical and scientific skills alongside attributes we associate with the humanities. The distance between the humanities and more technical skills needs to narrow.
As technology continues to advance, robots will kill many jobs. They will also create and preserve others, and they will create immense value as well—although as we have seen time and again, this value won’t be shared evenly. Overall, robots can be a boon, freeing up humans to do more productive things, but only so long as humans create the systems to adapt their workforces, economies, and societies to the inevitable disruption. The dangers to societies that don’t handle these transitions properly are clear.