Augmentation: The Possibility and Promise of a More Collaborative Future of Work Alongside AI (Audio Transcript)
Gabriella Chiarenza: Is that robot really coming for your job? Or is it coming to make your job better? It turns out that our choices now about artificial intelligence and human work may make all the difference in how AI unfolds in a wide range of workplaces in the future. Let’s dive in. I’m Gabriella Chiarenza, and this is Invested, from the Boston Fed.
Every now and then, a technology comes along that changes everything. These are known as general purpose technologies, or GPTs, because they change many aspects of our lives in different ways. They spur new innovations that flow from them, and make us more productive and prosperous. But the impacts of GPTs take time to be seen, and they don’t flow equally to every segment of society, especially without complementary investments that help a GPT’s benefits reach a broader range of communities. We’ve seen this play out in earlier eras of industrial revolution, such as when electricity or railroads fueled decades of enormous economic and societal change. And while it took several decades for artificial intelligence to get off the ground in a real way, those who know this technology best now see it as becoming a GPT. So, how can we be best prepared for the arrival of a GPT like AI today? What can we do to help the benefits AI might generate stream more quickly and effectively to more people across industries and incomes? The answer may lie in a thoughtful approach to a process called augmentation.
In this issue of Invested, we sit down with several experts to set the record straight on a few confusing aspects of artificial intelligence as a potential new force in the workplace. We’ll get to that nagging question of whether a robot really is coming to a workspace near you any time soon—and why that might actually be a good thing. We’ll hear why teaching AI to understand us and learning to understand AI can lead to powerful super-teams able to do more than either human or machine could do alone—but it will take some adjustment. And we’ll learn more about what we need to think about now if we want to draw the greatest social benefits out of AI at work.
So, what’s the deal with the specter of doom over new technologies and work anyway? Aren’t they good for the economy? Don’t we want to encourage innovation? Absolutely. But it turns out that the impact a new technology can have is not just about the capabilities of the technology itself. It’s arguably just as much about why and how we adopt it into our workplaces, and what our goals are when we do.
These kinds of advancements in technology and choices for how, where, and why the technology is adopted have played out in both positive and negative ways in different periods of industrial revolution throughout our history. The capabilities of new technologies certainly play a role in whether they replace human workers on a given task. A new technology may just be “so-so”—just cheap enough to take over a task that humans were previously doing, but without delivering much of a productivity gain; or it may be “brilliant”—that is, a technology that allows human workers to take on new tasks and contributes to greater productivity overall and further innovation. Some technologies have the potential to go either way.
The many choices humans make throughout the development, adoption, and implementation of the technology play possibly an even greater role. Is the technology being designed to help a company save labor costs and time or to allow the company to enhance the work of their human employees? Are business owners incentivized to adopt the technology to automate or to augment? Are existing employees adaptable enough and their interests and skill sets diverse enough to take on the new tasks the technology enables? Or will new hires come in to fill those roles? Can the production process be adjusted to make the use of the new technology worth the investment? And how long will this all take, anyway?
With AI, these questions are hard to answer right now, because AI hasn’t entered a broad enough range of workplaces to a significant enough degree. Still, after decades of false starts, practical applications of AI are beginning to emerge. But early signs are troubling, and suggest we may be treating a brilliant technology as a so-so one already. Daron Acemoglu is an economist and professor at MIT who studies the impact of technology on the economy and the labor market, and he’s been looking at automation and AI closely in this regard as AI-based practical technologies take their first steps into the American workplace.
Daron Acemoglu: Part of the reason why our productivity performance may have been so dismal is because we’re not using the technology well enough, and that means perhaps we’re not milking it well enough in places where we’re using it, and also we’re missing out on its potential uses in places where it should be used. So think about it this way: if you look at the US economy between say, 1890 and 1920—the heart of the rapid mechanization of agriculture—and during that period imagine that we have all the mechanization of agriculture as we did, but nothing else. The retail sector remains the same, the manufacturing sector remains the same, wholesale doesn’t get reorganized. You know, the American economy would have missed a huge productivity boom. So it’s really those sectors that, together with agriculture, really powered that growth. So if there are applications of AI technology into other sectors where we’re not actually doing it at the moment, then it’s a bit like missing out on the growth that’s going to come and complement the mechanization of agriculture.
Gabriella Chiarenza: But it doesn’t have to continue this way. AI’s capabilities make it an ideal technology for augmentation—complementing and enhancing the abilities of humans rather than replacing humans altogether. Early evidence suggests using AI in this way rather than for straight automation and replacement is more efficient and productive.
Daron Acemoglu: Automation until now has been a force towards greater inequality. During the period in which automation was counterbalanced by other things, wages were growing for pretty much every demographic group. When automation ran ahead, wage inequality started increasing. The effects of AI on inequality, I think they have to be seen. But the general principle remains that if we are able to use the AI technology platform in a broader way, in many more dimensions that increase productivity, that will help the broad cross-section of society.
Gabriella Chiarenza: And this collaborative human-machine team approach opens up some pretty incredible possibilities given AI’s strengths. By learning from our most experienced workers, for example, it can help reduce the burden on overloaded staff in fields like healthcare, where there’s a huge amount of data to draw answers from but decisions based on that data often have to be made both extremely well and extremely fast. Julie Shah, a roboticist and professor who runs MIT’s Interactive Robotics Group, told me about a project her lab took on recently in this vein, developing a system to model expert nurses and support aspects of their decision-making on a busy hospital labor and delivery floor.
Julie Shah: The nurse manager is actually doing the job of an air-traffic controller. So they’re deciding which patients go to which rooms, which nurses are assigned to which patients. They control aspects of the O.R. schedule, many other decisions. And so when you actually work out the math, the job a nurse manager does is actually computationally more complex than that of a real air traffic controller. And the nurse manager is doing it without any decision support. This is a hard, hard job, and it’s one you train through apprenticeship for years to do—and there are differences whether a novice does it versus these very experienced nurse managers. And so the goal of our project was to be able to develop a system that could codify that implicit knowledge that a nurse is employing when making these decisions, to potentially serve as a training tool for other novice nurses. So rather than the sort of like, one-to-one, trainer-trainee apprenticeship model, which is the way it’s done now, is there a way we can use computation to codify the insights that this nurse manager learns through years and years of experience and then use that to accelerate the training of a novice? And then your next question might be, well, if it’s learning to codify the knowledge of this expert nurse, why can’t it just make the decisions? And the truth of the matter is, so many of those decisions are just ambiguous—they’re just ambiguous—they require so much more knowledge than we can encode in a machine today or even capture in real time for the machine today. They involve assessments and judgements and interpersonal aspects—information gathered through communication. As with a lot of automation, this isn’t about designing a system that can fully replace a human, but some aspects can potentially be offloaded to automation. So there is a pathway where you take some of the decisions that are the easiest decisions—not the hardest!—the easiest decisions and you provide some decision support. So even if the system just makes some suggestions, that reduces the cognitive load of the nurse and frees up cognitive capacity to focus on those much harder situations.
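To make the “easiest decisions first” idea concrete, here is a deliberately simplified sketch of the kind of rule such a decision-support tool might apply: suggest a labor-and-delivery room only when the constraints leave one clear option, and defer everything else to the nurse manager. Every function name, field, and data value below is hypothetical and is not drawn from the MIT system.

```python
# Hypothetical sketch of "decision support for the easiest decisions only."
# All names, fields, and rules are invented for illustration.

def suggest_room(patient_acuity, rooms):
    """Return a suggested room name, or None to defer to the nurse manager."""
    # Keep only rooms that are free and equipped for this patient's acuity level.
    candidates = [r for r in rooms if r["free"] and r["max_acuity"] >= patient_acuity]
    if len(candidates) == 1:
        return candidates[0]["name"]  # easy case: only one option fits
    return None                       # ambiguous (or impossible): the human decides

rooms = [
    {"name": "LD-1", "free": True,  "max_acuity": 2},
    {"name": "LD-2", "free": False, "max_acuity": 5},
    {"name": "LD-3", "free": True,  "max_acuity": 5},
]
print(suggest_room(patient_acuity=4, rooms=rooms))  # "LD-3": one clear fit, so suggest it
print(suggest_room(patient_acuity=1, rooms=rooms))  # None: two rooms fit, so defer to the expert
```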
Gabriella Chiarenza: By learning to work alongside human teammates safely and in carefully planned concert with those particular humans’ movements, AI can also help us become more efficient. One version of this approach uses robots alongside human workers to finely tune a production process so that literally no time is wasted.
Julie Shah: Most of us think that maybe the automotive industry is the most—I mean, it actually is the most successful industry in terms of their use of robotics. But actually only half of that build process of a car is done by robots, and the other half, the final assembly, is still done by a huge area of people physically building up and installing the cabling and the car seats and all these dexterous and difficult tasks. And so we’re still a long way from a robot being able to do much of that work. But this is an application where every second matters. And so, on the automotive line, we work on developing robotic assistants that are basically surgical assistants to the human associate. So they’re gathering the right instruments and tools for that build process and just sort of handing it over like a surgical assistant to a surgeon in the operating room. And just doing that, you can imagine one robot zipping back and forth and just providing the right material—the dash, the meter, the navigation unit to go into the dashboard at just the right time—across a few associates you save that walking time back and forth to that cart, and that’s not half a second, that adds up. Those sort of non-value-added tasks really add up to provide a really positive business case for human-robot collaboration in that setting.
Gabriella Chiarenza: Another example of this kind of collaborative technology is under development in the offices of Realtime Robotics, a start-up in Boston’s Fort Point neighborhood. There, director of robotics engineering Sean Murray showed the Invested team a demonstration of a robotic arm connected to sensors and cameras.
Sean Murray: Realtime Robotics provides motion planning solutions. Motion planning is the task of figuring out how you can move a robotic system from a starting position to a goal position, without hitting anything in between.
Gabriella Chiarenza: Sounds simple, right? But watching Realtime’s robotic arm deftly move an object—in this case, an electrical wall socket—from one table to another without hitting Murray’s hand as it darts in front of the robot in different places, you realize just how much the robot needs to account for, and from how many angles. Drawing on data gathered by cameras and sensors, the arm moves around Murray’s hand carefully, or stops entirely to recalibrate around the obstacle. And the grabber at the end of the arm is able to find the socket it needs to move even when Murray changes where the object is, or where the tray is that the arm needs to drop the object into. There are many potential applications of a technology like this that can work with and around humans without injuring them.
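For a concrete sense of what “without hitting anything in between” requires, here is a minimal, hypothetical sketch of the basic collision test inside a motion planner: sample points along a candidate straight-line move and check each against the obstacles the cameras and sensors report. Real planners, including Realtime’s, evaluate enormous numbers of candidate motions rather than a single segment; the function, coordinates, and obstacle model below are invented for illustration.

```python
# Hypothetical sketch: is a straight-line move from start to goal clear of
# every sensed obstacle? Obstacles are modeled as simple spheres.
import math

def segment_clear(start, goal, obstacles, steps=100):
    """Sample points along the start->goal segment and test each against
    spherical obstacles given as (center, radius) pairs."""
    for i in range(steps + 1):
        t = i / steps
        point = tuple(s + t * (g - s) for s, g in zip(start, goal))
        for center, radius in obstacles:
            if math.dist(point, center) <= radius:
                return False  # the move would hit this obstacle; stop or replan
    return True

# A hand detected by the cameras, modeled as a 10 cm sphere in the workspace:
obstacles = [((0.5, 0.0, 0.3), 0.10)]
print(segment_clear((0.0, 0.0, 0.3), (1.0, 0.0, 0.3), obstacles))  # False: path is blocked
print(segment_clear((0.0, 0.0, 0.6), (1.0, 0.0, 0.6), obstacles))  # True: path is clear
```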
By drawing out patterns in data, AI technology could also help teachers and social services professionals to put together more personalized and effective plans for their students and clients to improve their outcomes—and boost demand for more teachers and social service professionals as a result. Daron Acemoglu explains one way this might work.
Daron Acemoglu: The standard teaching organization is a teacher, perhaps with an aide, takes a particular topic and lectures and explains it to a whole classroom of students. A lot of evidence says, well, not everybody in that classroom follows it—some would like to go faster, some would like to go slower, and it’s not something that you can easily solve by creating tracking—you know, ‘these are the smart guys, these are the not so smart guys’—because some topic, one of them will find it challenging, another topic, another one will. So this is what people in the education sector, in education research, sometimes call ‘learning styles.’ So different people will have different learning styles depending on the topic. So what that says is that there could be, potentially, major gains if we can cater to individual needs and determine what those needs are depending on the topic with real-time data collection and flexible adjustment of the teaching style. That’s an ideal AI problem—that’s like a custom-made AI problem. You collect data, you immediately say, ‘Okay, this is the prime area where you’re having difficulty—instead of A, we should do B for you.’ But also, it’s a very different AI than pattern recognition or facial recognition because it’s not actually replacing but complementing labor. If you’re going to do that, you actually don’t get rid of your teachers—you need to hire five times as many teachers, because the teachers need to deliver the different styles catered to different types of students. So therefore it’s, par excellence, the new task that’s going to boost the demand for teaching services.
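As a rough illustration of the loop Acemoglu describes (collect data, spot the topic where a student is struggling, and adjust the approach for that topic), here is a toy sketch. The scores, threshold, and teaching styles are invented; a real adaptive-learning system would be far more sophisticated.

```python
# Hypothetical sketch of per-topic adjustment: flag topics where a student is
# struggling and suggest a different teaching approach for each. The threshold,
# styles, and scores are invented for illustration.

def recommend_adjustments(scores, threshold=0.7):
    """scores: {topic: fraction of recent exercises answered correctly}.
    Returns {topic: suggested alternative approach} for topics below threshold."""
    styles = ("worked examples", "hands-on practice")
    return {topic: styles[0] if score < 0.5 else styles[1]
            for topic, score in scores.items() if score < threshold}

student = {"fractions": 0.45, "decimals": 0.65, "geometry": 0.90}
print(recommend_adjustments(student))
# {'fractions': 'worked examples', 'decimals': 'hands-on practice'}
```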
Gabriella Chiarenza: These are just a few of the ways that AI, treated as a brilliant technology, can help us evolve our work and use more of our brains in an improved work environment with less stress, more opportunities to innovate, and a more direct path to our personal best outcomes.
But we’ll only be able to take workplace AI down the path of augmentation if we choose to develop it that way. So, what’s challenging about that? First, the technology itself is still under development—as you may have noticed from the previous examples, these applications so far exist only in a limited number of controlled environments, in studies or testing situations, or not at all. And it can still struggle to understand and work well with us, particularly in physically unpredictable and demanding environments. Effective designs for augmentation require painstaking development through modeling and testing alongside skilled humans, especially if the intelligent machine teammate is a robot that’s navigating the same physical space as humans. And using the technology means maintaining and periodically upgrading expensive and complicated machines and software.
Second, just as the machines must learn how to work with us, we must learn how to work with them, which may call for a significant shift in how we think about, explain, and reuse our own diverse skills so we can keep up with rapid change in technology and processes. And third, further developing AI along an augmentation-focused route will likely depend on a major investment of public, business, and political commitment—and feedback, money, and time focused on augmentation approaches rather than narrowly-focused replacement automation.
So let’s start with the first challenge: technological capabilities. If you’ve been fretting over the ominous doomsday scenario that a literal robot is coming for your job sometime soon, rest easy—for the moment, anyway. I asked the roboticists to clarify what exactly is still difficult for robots to do, and they were pretty upfront about the limitations right now. Even AI-based virtual assistants struggle to keep up, and they don’t have to navigate physical space. Human hands and eyes, our dexterity and flexibility, our superior ability to read and understand one another and adjust in the face of a changing situation—all of these are still human advantages and probably will be for some time. Sean Murray put it bluntly.
Sean Murray: The more I learn about the robots that are currently deployed, the more I realize how far away we are from the hype of robots taking away everyone’s job. The robots deployed today are very simple machines doing very simple tasks. Almost every task that involves any amount of fine dexterity or manipulation is being done by a human, and I don’t see that changing in the near future.
Gabriella Chiarenza: Even as some robots gain expanded capabilities, they are so expensive and complicated to reprogram for even small changes in their use that it’s usually not cost-effective to replace humans in that way. Stefanie Tellex, a professor at Brown University who directs Brown’s Humans to Robots Lab and also consults with Realtime Robotics, explains.
Stefanie Tellex: Many industrial robots today are programmed by humans to go to individual waypoints—exactly this waypoint, exactly that waypoint. They’re blind, they can’t see. They can’t see if there’s an obstacle. They might be given information about where an object is, but often the object they’re supposed to pick up is just arranged to be in just the right spot. That means that with industrial robots today, what’s automated is expensive to set up, and so it only makes sense to do when you’re making a lot of a thing. When you’re not making a lot of a thing, you need human hands and human eyes—not because it’s not technically possible to automate; often it is—it’s because it’s too expensive to do all that programming. And when you add up the cost of that, and you’re doing a run and you’re making 10,000 of a thing, and then you’re going to make 10,000 of a different thing and you have to do all that programming again, it doesn’t pay off.
Gabriella Chiarenza: Developing systems—intelligent physical robots or AI software—that interact successfully with people across a range of tasks or requests is even more challenging. Communication and the ability to develop an effective working relationship with a colleague are major hurdles for machines.
Stefanie Tellex: It’s still really hard to sustain a conversation, to have the robot—or Alexa, but the robot’s even harder, because it’s grounded in the physical world. You’ve got to be able to see stuff, you’ve got to be able to agree on stuff like, “This is a pen, and I need this pen now, so can you hand it to me please?” “Oh, which pen?” And handle that uncertainty and have all these disambiguating dialogs—and in the longer term, have a relationship with the person, between the person and the robot. All of that’s very hard for robots.
Gabriella Chiarenza: The work Tellex, Murray, Shah, and others are doing to take robots in this interactive direction opens the door to many of the exciting augmentation applications we just heard about. But it’s a much more complicated set-up and maintenance operation than automation, as Sean Murray notes.
Sean Murray: I think the major challenge in going from the way people currently do automation to what we’re trying to do is that there’s an unfortunate need to just increase the complexity of the systems you’re deploying. Because the systems that people use now are extremely dumb but they’re also very simple. Right now, the systems are executing the same motions over and over again, which is very brittle—it breaks very easily—but it’s very simple, so there’s only one thing to maintain—you have to maintain the robot and its controller. So what we’re doing, then, in motion planning, it really requires you have a set of cameras giving you accurate data, and so one of the downsides of that is you have to maintain this additional equipment.
Gabriella Chiarenza: When you add a new AI-based, augmented robot to the team, there’s a lot more to think about than there usually is with a straightforward industrial robot that automates one repeated task. The upside for workers is, of course, they aren’t being replaced by technology, but rather enhanced by it, which may make their jobs safer, more interesting, and more efficient. But only once they get used to working alongside the technology and taking care of and updating its many complex pieces.
Sean Murray: Most of the pushback we’ve gotten from companies looking to use the technology—and this is not from managers, this is pushback from employees—is not that they’re worried that it’s going to displace people. It’s that they’re worried that it’s not going to work and will be a pain to maintain. Because most of these companies have been kind of burned before by vendors who come in and promise that their new product is going to make their jobs much easier and be reliable, and they install it and try to work with it for six months, and then they give up and throw it away because it actually made everyone’s lives more difficult. So most of the feedback we’ve gotten from technicians and people on the factory floor, who maybe are the ones in other situations who are worried about being displaced, is that they’re more worried we’re just going to make their jobs harder and not easier. So the first hurdle that we’re trying to overcome is convincing them that this is going to make their lives easier rather than harder.
Gabriella Chiarenza: To prevent this, companies like Realtime that are trying to make useful collaborative robots are drawing on the tech savvy of their early customers whose employees are, as Murray puts it, used to dealing with “bleeding-edge technology.” As they further develop their products, they seek regular feedback from workers on the frontlines who are using their systems. Augmentation technologies require this kind of collaborative approach between engineers and their clients, not just in terms of using the technology, but in designing how it will work in each unique environment. As Julie Shah explains, this process also prevents unforeseen negative consequences for human workers of a technology intended to help them.
Julie Shah: Deploying a robot on an assembly line to take away the walking time of the person to and from the pick cart to get the next piece—when we started working on this concept and working to develop the AI for it, it makes sense from a time point of view—this should be nothing but helpful. This is a task that the robot can do, and we’re putting the person in the particular work that we need a person to do. And someone on the line at the company we were working with pointed out, ‘Well, you know, one unanticipated consequence is you’re reducing the variation in the type of work that person is doing, and actually, we rotate people through jobs for ergonomics reasons. It’s really bad to be doing exactly the same motion a whole day, never mind day after day, and by deploying this robot, you’ve actually now taken out some of that walking time, which is undesirable from a productivity point of view but is desirable for variability in the movements of the person.’ And so this is an example of when you introduce a technology, it has second-order, third-order effects that you need to consider—so now the person’s work actually also needs to be redesigned for the fact that you’ve introduced this technology. They need to be rotated in a different way, work needs to be distributed in a different way, to get back that quality of variation in motions.
Gabriella Chiarenza: There will be other adjustments on the human end of a transition to an AI-infused workplace, and some of them will indeed involve job loss. Even in a scenario where the US embraces augmentation to enhance human workers rather than leaning more heavily on automated systems that replace humans, certain workers may be vulnerable. If, for example, an augmentation technology simplifies the work for some employees—say, staff who can now use an automated system to process travel or sales invoices—it may displace someone in a more specialized job: the person who used to process those invoices. So what does that worker do? This brings us to the second challenge of augmentation: preparing ourselves to work alongside ever-smarter and more capable technology, without falling behind. Morgan Frank is a post-doctoral associate at MIT whose research focuses on AI and the future of work. He’s interested in how workers, companies, and localities can prepare for a future in which technology is developing in unpredictable directions at an equally unpredictable pace.
Morgan Frank: I do think that AI is a tool that will provide benefits across the board, but the key is to adopt these technologies without leaving folks behind in the other dimensions as well. I think we need to take some steps to make transitions a little less painful.
Gabriella Chiarenza: I asked him, how can you ready yourself for jobs that may be created even as your current job disappears, if you don’t know what those jobs of the future might be? Should we all just be learning how to code? Frank doesn’t think so. The answer, he says, is to learn to be resilient and very, very good at rebranding and marketing yourself based on the skills you have—maybe picking up a few new skills along the way.
Morgan Frank: Understanding how technology will impact labor is fundamentally difficult—and it’s not just about the computers of today or whatever is the technology of the future. It will just always be a hard problem to predict beforehand. And so instead, maybe what we should be focused on is what makes a worker or a workforce in a city or a company economically resilient to changes—from technology or from offshoring or from anything, not just technological change and automation. And again here, I think that looking at skills is an important part of the puzzle. You can see, for example, how important it is that a worker has the right skills and communicates that they have the right skills when they seek to fill employment opportunities—so therefore you can see that skills are sort of fundamental to the process. So better understanding what creates career mobility and opportunity and economic resilience could be a pathway forward that skips over this really difficult step of predicting the specific exposure to technology.
Gabriella Chiarenza: So, in the past, if the office clerk processing those sales invoices was looking for a new job, that person might have just searched for other clerk positions elsewhere—a lateral move into the same kind of role. But if technology is eliminating more and more of those clerk jobs, is that the best move? The research Frank and others are doing around new technology’s impacts on work suggests that going forward, thinking of yourself as a person with a diverse set of skills—and both looking for jobs and marketing yourself as a job candidate with that perspective—could be more successful long-term than assuming you are only qualified for jobs in the same occupational category you’ve always worked in. The trouble is, that’s not how we’ve traditionally done things in the US.
Morgan Frank: I think we need some better tools for doing this. So I’ve focused a bit more at the urban policy level than I have at the firm and HR level, but I think some of the ideas apply equally in both scenarios. And when we look at retraining programs that policymakers are designing in cities, they can often feel a little naïve. So an example might be that you see that demand for software developers is on the rise, and you think the demand for cab drivers in your city will go down because of self-driving cars, an exciting new technology. So you think, “Ah, no problem—we’ll take these drivers, teach them to program, and it’ll be great.” And we’re now seeing facts on the ground that suggest this strategy doesn’t really work in the long run, and that even if you teach drivers to program, that doesn’t include all the complementary skills that are required to be a software developer and to leverage those programming skills. So for example, you need good numeracy skills, often, to be an effective programmer. So what I’m hoping is that if we can provide a more detailed map of the skills that drivers have and the skills that are required for employment opportunities that are on the rise, then maybe we can identify employment opportunities for these workers that are somehow more nearby given the skills they already have, and so they have a better chance at making this transition.
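One way to picture the “more detailed map” Frank has in mind is as an overlap score between the skills a worker already has and the skills each growing occupation requires, so that the most “nearby” transitions rise to the top. The sketch below is purely illustrative; the occupations, skill lists, and scoring rule are invented, not taken from Frank’s research.

```python
# Hypothetical sketch of a skills-overlap map for finding "nearby" occupations.
# Occupations, skill lists, and the scoring rule are invented for illustration.

def nearby_jobs(worker_skills, occupations):
    """occupations: {job title: set of required skills}.
    Returns (job, coverage) pairs sorted from most to least 'nearby'."""
    worker_skills = set(worker_skills)
    scored = [(job, len(worker_skills & required) / len(required))
              for job, required in occupations.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

driver = {"navigation", "customer service", "scheduling", "vehicle operation"}
openings = {
    "dispatcher":           {"scheduling", "customer service", "communication"},
    "software developer":   {"programming", "numeracy", "debugging"},
    "delivery coordinator": {"scheduling", "navigation", "record keeping"},
}
for job, coverage in nearby_jobs(driver, openings):
    print(f"{job}: {coverage:.0%} of required skills already held")
```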
Gabriella Chiarenza: Thinking about how skills can be repurposed across a team in any workplace of any industry is also a good way for companies to invest in their people and improve the resiliency and efficiency of their firm in advance of unpredictable technological impacts. If you are the office clerk displaced by technology that takes over the repetitive task of processing invoices, but you have a knack for communication when that typical process doesn’t work or you have such familiarity with the system that you know how the process could be improved or expanded in some way, could you be given another opportunity in another role with your company that would draw on those skills instead? Rather than letting some people go, hiring new people, and/or sending existing staff off to training for a skill that could also be automated eventually, why not focus on strategically repurposing the talented staff you already have in different ways?
Morgan Frank: I’ve been thinking about this from the perspective of HR, and how they can prepare their workforce to adapt to things like investment in new technology or to changes in the organizational structure—just any major change to the nature of work within that company. And I believe that if these HR folks have better tools for understanding the abilities of their workers, and also what skills and abilities make their workers adaptable, and make it so they can learn other new things easily, then they should benefit if they can give their workers those skills and those tools—so that when, say, you make an investment in some new technology that disrupts the way things are done in the workplace, then you don’t have to go through this costly firing and rehiring and onboarding process, but you can more likely take the workers you already have and adapt them to the new needs of your workforce.
Gabriella Chiarenza: Of course, adjusting workflows, repurposing skills, and otherwise emphasizing creative resiliency in a given workplace are much more likely to happen if companies are incentivized to prioritize this kind of augmentation approach. For some companies, it may be logistically challenging or financially impossible to do so otherwise. And when that’s the case, and competition is fierce, automation and replacement of human workers may be a temptation that’s hard to resist, even if it ends up backfiring in the long run because it’s not that much more productive.
The choices we make at a societal level around how we want to use AI are becoming very important—which is part of why it seems odd that we really aren’t talking much about augmentation at all, since it might be one of the best ways to effectively harness AI for the public good and broader economic stability and growth. These are the arguments experts like Daron Acemoglu and Julie Shah are making in favor of a bigger conversation around augmentation.
Daron Acemoglu: I think once you sign up to this perspective that there are different types of technologies and they have different implications for shared prosperity, then technological platforms such as AI—which can be developed in many, many, many different ways—raise a lot of questions about whether we’re going in the right direction. I think the ethical questions are not just about facial recognition, surveillance, privacy, but they’re also about how we’re developing AI and whether it’s going to help society move along a more cohesive, shared prosperity-based path.
Julie Shah: My view on this is that we all have a lot of choice in how we invest and how we drive the direction of this technology. I don’t think there’s any sort of predetermined path that we’re going to follow here. If roboticists and AI researchers are investing our efforts in replacing people, sure, we’ll probably make progress towards doing that. But if we frame our objective differently—so, not about replacing human labor, supplanting the human role, but say we frame our metrics around productivity, or human-centered metrics possibly like safety or ergonomics, then we ultimately develop a different technology for it. So there’s great potential benefit, but we have to choose to develop the technology in that way.
Gabriella Chiarenza: So how do we get to yes on augmentation? Well, it likely starts with changing the public conversation away from the doomsday headlines about AI and work that have been fairly common in recent news. As we noted earlier, it’s really not the case that AI will completely replace us or give us all paid extended vacations for life any time soon. Instead, working people need to have some serious conversations among themselves and with business leaders, policymakers, and the engineers and roboticists designing the technology. Balanced discussions now about what we want AI to do in our offices and factories in the near future will help ensure more people benefit from the technology, that innovation that prioritizes augmentation is supported and properly resourced, and that incentives around adopting collaborative AI alongside human workers are in place and effective.
Daron Acemoglu: The direction of AI, how it’s going to develop, how it’s going to shape our society—very little input from the citizenry at large is going into this. And part of the reason is because that conversation is not taking place, and when people talk in the media about automation, robots, AI, it’s a bifurcated, extreme, and uninformed discussion: either robots are going to come and take everything, or everything is going to be hunky dory and we’re going to be incredibly rich. So that really prevents a good discussion, and therefore the democratic process is not having any input into this. I think if we bring the public to the level of understanding and knowledge and information to be part of this conversation, I think we would be the beneficiaries of that.
Gabriella Chiarenza: Those developing AI also need the help of workers with expertise in all kinds of tasks to better design AI software and robots that best augment us. Workers and companies will need to be open to experimenting with these new technologies and looking at them as opportunities to grow and enhance their abilities rather than as a threat.
Julie Shah: Things like safe use of technology, equitable impacts of technology, if we’re not thinking about this in developing the technology then we don’t have a foundation to build on. But these are questions that involve multiple perspectives and can’t be answered by a roboticist or an AI researcher alone. Which means we need to be having these discussions in a multidisciplinary way when we’re developing the technology, but it also requires a whole-society effort to use these technologies in safe and equitable and transparent and privacy-preserving ways—ways that align with our values. In terms of people being able to make the best use of the technology, the best way to do this is to give the person on the manufacturing floor that has that deep domain knowledge, that knows what’s easier and what’s harder—we need to give them puzzle pieces for their own capability, right? But it’s not a scalable model for researchers to embed in every job and learn the details at that level to learn the best use of the technology, which means we need to make these technologies flexible enough for the domain experts and the end users to tailor them and use them as they practice with them, as they learn the capabilities and what they can do—and they will think of things that we can’t do in the lab.
Gabriella Chiarenza: Of course, asking workers to welcome AI into their workplaces with open arms will be much easier if those workers know they aren’t going to be displaced. Government, philanthropic, and industry support and choices around augmentation will also be crucial in this regard, as they have been in the past.
Daron Acemoglu: Things look very different and more balanced in the 40 years after World War II, and less balanced—much more automation, much less reinstatement, fewer new tasks—today. Why? Why might that be? I think there are broadly two possible answers. The first one is, perhaps the technology of innovation has changed—perhaps it has become much easier to do automation and we’ve run out of ideas for new tasks and new technologies complementing labor. It’s a possibility. I don’t think so, because there are just so many flexibilities and so many opportunities that there are with AI. But the second possibility is that our innovation possibilities haven’t changed, but we have changed the incentives so that we are moving somewhere else along that frontier. And there could be three reasons for that. One is because government support for different types of investment in innovation has changed. Second, business incentives for automation versus other things, especially labor, have changed. And third, the ecosystem of businesses—their priorities, their focus—has changed.
Gabriella Chiarenza: Many public sector research and development programs that helped scientists discover technologies we rely on today have dwindled or disappeared, and if we want to prioritize augmentation approaches to AI, we may need new versions of such innovation incubators. Experts we spoke with noted that there is a real danger that if there are not neutral innovation spaces for engineers and roboticists, the direction of AI could easily be determined by the large technology companies—and that augmentation is not likely to be their priority without public pressure in that direction.
Time is also a crucial factor. Because the development of AI has happened largely in labs and private industry spaces, the public has had little exposure to the realities of its capabilities. By the time it is more publicly visible and ready for more regular workplace use, it may be too late for working people and even business leadership to have important conversations about how it will be adopted. I asked Acemoglu if he thinks there is still time at this point to pivot toward prioritizing augmentation approaches to AI.
Daron Acemoglu: I don’t know. That’s a great question, and if we had this conversation three or four years ago, I would have said, look, we need to act fast. Three or four years and things have gone not in a good but probably in a bad direction. But I don’t think it’s too late. I think there is time for course correction, but the later we leave things, the harder it is.
Gabriella Chiarenza: The experts we spoke with don’t want us to be complacent and assume everything will work out—but they also don’t want us to panic or assume we can’t help determine who will benefit from AI. Like all industrial transitions of the past, we can expect some twists and stresses as AI joins our workplace teams. But better public understanding of the technology—what it can do and what it will rely on us to do—is a crucial first step toward more equitable outcomes. I asked Stefanie Tellex for her take on what we should be doing right now as a society around AI and work.
Stefanie Tellex: I don’t exactly want people to be optimistic. I think I want people to be aware of the costs and benefits of what this technology is doing. I don’t think we should close our eyes and say, “la la la, robots are going to be awesome.” I think we should open our eyes and say, “these are the ways robots are going to be good: they’re going to increase productivity. Potentially they will increase safety. They’re going to increase the wealth of society—automation generally is going to increase the wealth, the gross national product. These are the ways they’re bad: they’re going to remove some jobs. They’re going to disrupt things. They’re going to introduce new risks that we should try to understand.” And we as a society, I think it’s a conversation we need to have—how are we going to adopt this technology?
Gabriella Chiarenza: And innovative technology developers like Julie Shah want us to know that those working on augmentation approaches to AI and collaborative robots see skilled humans from all walks of life and industries as the expert source code they want AI to learn from and adapt to. They don’t see us as replaceable, and they want to be sure we start working together to develop the technology that will best benefit more humans and supercharge our strengths. They may not know yet what conversations with the public about AI’s direction will look like or how to kick off such talks, but they are eager to have them and to hear what we think.
Julie Shah: Through doing this work, I have the incredible experience to work with folks on the factory floor who are just outstanding. They bring excellence to what they do in ways that might be underappreciated. I work with fighter pilots, I work with nurses and doctors, and you can’t underestimate how incredible people are. But we’re fallible in very predictable ways, and certain things are hard for us. So for me, this is about helping enhance human capability and wellbeing more generally, but I do see that to make that possible, it’s a choice that we have to make. It’s a choice that we have to make when we invest in developing the research and it’s a choice from a policy perspective. It’s a choice from a particular company’s perspective. And so I see nothing but possibility, but I also see it requires a lot of us to work together to make it happen.
Gabriella Chiarenza: And that’s a pretty promising path to a better future of work alongside intelligent machines.
Thank you for listening to this audio feature of Invested. If you like what you heard and think we should do more audio features in future issues, have ideas for future topics, or have comments for us on this episode, please use the Feedback Form on the Invested webpage to let us know.
The viewpoints shared in this audio program are not necessarily those of the Federal Reserve Bank of Boston or the Federal Reserve System. This audio program is copyright 2019 by the Federal Reserve Bank of Boston. All rights reserved.