Who will save us from a future without work?
With AI expected to alter or eliminate nearly 40% of global jobs, the risks of mass unemployment and economic disruption loom large. But there are ways to reshape our economy so that workers benefit instead.
By Alexandra Samuel | Contributor
Consider a world in which work is endlessly meaningful and creative, free of rote drudgery or backbreaking labor. At April's TED conference in Vancouver, Canada, Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory, held up this possibility. "When AI moves into the physical world, the opportunities for benefits and for breakthroughs [are] extraordinary," Rus promised, underlining her vision with images of robots carrying groceries and delivering packages.
But what happens to the human parcel carrier when the robot steps in? We may have a few years before robotics and AI bring that particular question to our collective doorstep, but AI is already reshaping work. Long before robots are able to put UPS drivers and grocery stockers out of work, AI-enabled changes in employment could prove massively destabilizing: The IMF has estimated that AI may eliminate or change 40% of jobs worldwide, and as much as 60% of jobs in advanced economies. That doesn’t mean that 40% of jobs are going to disappear, but it does mean we are in for an extended period of turbulence and transition that will affect a great many people.
Amidst all the excitement (and some hand-wringing) about AI's long-term possibilities, I wanted to know how AI leaders think about these more certain and near-term risks: the elimination of many jobs, the pain of economic restructuring and the possible rise in overall unemployment. How worried should we be, and how soon?
“Hey Siri, do I have to worry about AI driving mass unrest?”
In his book “A World Without Work,” economist Daniel Susskind points out that a dramatic rise in German unemployment (to 24%) was part of what brought Hitler to power. I put the question of whether AI’s impact on the economy might lead to civil unrest to Tom Gruber, an attendee at the Vancouver TED conference who co-founded the company that created Siri before it was acquired by Apple. Gruber, who now advocates for “humanistic AI” as a speaker and impact adviser, isn't too concerned about the current wave of generative AI chatbots displacing high-skilled employees—yet.
"We can talk to these [chat]bots, but we should not be trusting their wisdom," he says. "They're like a 22-year-old fresh out of college telling you an opinion they've acquired after three years of drinking and talking. They're just not going to really solve business problems with expertise."
Even with those limitations, Gruber notes, there’s plenty that bots can already do better than humans. He gives the example of high-end marketing work: While humans are still much better at ideation, when it comes to generating marketing assets like mockups, the AIs “totally kick butt on the humans.” The sheer volume of work these chatbots can generate at high speed, he says, “is going to put downward pressure on wages.”
Under our current labor conditions it's hard for employees to fight that kind of pressure. "There's the ubiquity of AI everywhere, always, listening to everything always," Nita Farahany, a Duke professor and author of “The Battle for Your Brain,” tells me after the conference. The sheer volume of data that AI consumes, according to Farahany, means it’s only a matter of time until it becomes capable of replacing more people—and we’re embracing AI at a pace that leaves little room for addressing that human impact.
Planning for a new world of work
There is still time to plan for workforce reskilling and reorganization, however, so that displaced workers aren't simply dropped from the labor market. Governments might play a role in creating room for a more careful transition: California’s state legislature recently passed a bill that would have prohibited the government from outsourcing work to call centers that use AI to replace human workers, but Governor Gavin Newsom vetoed it.
Another possibility is for employers themselves to take responsibility for finding new jobs or tasks for employees whose work is automated. When IKEA announced it would use AI to take over the work of its call center operators, for example, it retrained the displaced operators as interior design advisors.
That's the kind of approach that’s championed by Chet Kapoor, the CEO of DataStax, a database company that powers AI applications. Earlier this year DataStax published a white paper on how AI could turn into a win-win for both employees and employers. Kapoor argues that rather than using AI to lay off workers, smart employers may even increase headcount as AI makes each worker more productive.
Take the case of programming talent. AI has already proven so effective at coding that some industry leaders—like Matt Garman, the CEO of Amazon Web Services—are predicting that AI will take over all the work of actually writing code. But Kapoor says that at least for the next decade, employers have more to gain by expanding their coding teams to take advantage of generative AI.
"It doesn't matter whether it's a tech company or a non-tech company," he says. "There's not a single company that doesn't have a backlog of apps that they want to get done. Let's go and build those apps."
Kapoor acknowledges that some companies will use AI to cut headcount and costs. But he argues that other employers recognize generative AI as an opportunity to accelerate or innovate. If AI leads to some job displacement, those employers can work with programmers to reskill and redeploy talent. In this version of the future, the expanding opportunities of AI solve the problem of job displacement. As AI increases productivity, we can do more, make more and sell more, without any need to shrink the workforce at all.
Sharing the gains from productivity
What happens if markets can't scale as fast as productivity? What happens if we can make more (and do it faster and better), but there's just not enough demand to match all that supply?
Consider a second option: Use the productivity gains from AI to reduce working hours without reducing total compensation. After all, we now have an economy in which employees are often exhausted or burned out, and in which many people work second jobs to make ends meet. If we pay people based on output rather than hours, then AI-enabled efficiency could make it feasible to keep salaries constant while reducing hours worked. In this scenario, AI would effectively raise hourly wages.
It may sound like a radical idea, but it's one that has already proven successful. In her TED talk on "good jobs," MIT professor Zeynep Ton pointed to the success of the bulk retailer Sam's Club, which boosted productivity, reduced turnover and drove membership growth, all by increasing hourly pay. Paying people more per hour translates into employee engagement, which translates into better performance, so using the productivity gains from AI to boost pay could yield better results from human labor, too. Providing people with predictable, manageable hours—as opposed to burnout-level schedules or second jobs—is what makes work sustainable and satisfying.
Will employers be game to take that path? Will they even feel able to prioritize employee well-being over profits in a competitive economy? That's where the government might step in. Government policies could prevent employers from shedding talent as they introduce AI, require them to offer retraining opportunities, or even place limits on the introduction of AI in the workplace, so that employees can't have it foisted upon them without their active agreement.
Farahany, the Duke professor, points to the role of government regulation in the energy sector as an example of how effective policy can change the direction of an industry. "We're at the tipping point on electric vehicles," she notes. "And that didn't happen magically. It happened with governments making it more costly to continue down the path [of reliance on oil], but also putting in place a lot of different incentives to grow an ecosystem of alternative energy possibilities."
The same kind of policy intervention, she says, could encourage companies to take a more ethical approach to workplace AI, one that allows employees to help determine whether and how their data is used to boost productivity.
"We need laws that make it costly to continue operating that way, and incentives that create the alternative ecosystem as well," Farahany says. "It's some forward-looking industry that's willing to experiment with different ways of doing [AI] that don't violate their fiduciary duty to shareholders."
A new role for government
If policymakers can't or won't force employers to address the risks of AI-enabled job displacement, governments could still mitigate its impacts by footing the bill for retraining or by providing income replacement. That could look like an updated version of the training programs that once aimed to shift displaced manufacturing workers into knowledge work, or like the sort of universal basic income programs that have already found some success in trials in Finland, Alaska and elsewhere.
Brando Benifei, the Italian Member of European Parliament who served as co-rapporteur on the EU AI Act, has been one of the most prominent proponents of policy engagement with AI’s near-term impacts. He argues that government intervention is crucial to ensure that the benefits of AI are not concentrated in the hands of a few. "The risk is that without the right labor policy or tax policy, the advantages of AI will only increase a few people's revenues," Benifei told Politico in an interview this year. "We don't need to make some multi-billionaires even richer. What we need is to make sure that the advantages and the wealth created by AI are equally distributed."
A stake for employees
One emerging option for ensuring the benefits of workplace automation are shared: Give employees fractional ownership of their own training data. If an employee's work product is used to train AI, whether for the employer or for the AI platforms the employer uses, that employee may well be contributing to their own obsolescence—but they could also get a stake in the asset they are creating with that contribution. Indeed, giving employees an ownership stake in their digital doubles may be the best way to align their economic interests with employers' desire to see them participate in training the AI that can generate newfound efficiencies.
You can see the seeds of this model in the agreement that came out of the 2023 strike by the Screen Actors Guild. It stipulates that performers get residuals for their digital replicas, just like they'd get residuals if they performed in a movie or show that enjoyed repeat viewing. And just this September, the state of California passed a law that prohibits the creation of digital replicas unless a performer has had union or legal representation, and has given their consent.
But you don't have to be a Hollywood star to create a digital replica: We're all creating replicas of ourselves, all the time, as we feed our documents or thoughts into the AI we work with, and teach it how we think, write, speak or draw.
What's at stake is whether these replicas replace today's employees, or are owned, to some degree, by them. If employers won't commit to retraining employees and governments can't or won't step in, owning a stake in the AI that replaces them may be the best that today's employees can do.
“I feel myself drawn to having to talk more about the societal impact, even though I'm a technologist,” Kapoor says. “The societal impact is something that we must talk more openly about, and this time we should do it more up front than we did in the past.”
Dr. Alexandra Samuel is an author and speaker on the digital workplace. She is the co-author of "Remote, Inc.: How to Thrive at Work...Wherever You Are."