Some say AI will create many more jobs than it destroys. For a well-reasoned argument, see this World Economic Forum piece posted in 2020 by Mohamed Kande and Murat Sonmez.
Look for a big spike in jobs with titles like "data analysts and scientists," "AI and machine-learning specialists," and "Big Data specialists," they say.
Among jobs that will most rapidly vanish: data entry and accounting clerks, administrative secretaries, and assembly and factory workers.
But their list looks incomplete. How about bartenders? Consider the Bionic Bar introduced by Royal Caribbean: two Italian-designed robot bartenders that can mix two drinks per minute and, drawing on 30 spirits and 21 mixers, make up to 1,000 drinks daily.
How are humans going to compete with that?
And what about jobs for journalists? Harnessing the power of Natural Language Generation through Wordsmith, the Associated Press is now producing 4,400 earnings reports per quarter, 15 times the number human reporters used to produce manually. The quality of text generated by GPT-3 is so high that “it can be difficult to determine whether or not it was written by a human,” Wikipedia reports.
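Systems like Wordsmith are, broadly speaking, template-driven: structured earnings data fills slots in pre-written sentence patterns, with simple logic choosing the wording. Here is a minimal sketch in Python of that general approach; the template, field names, and figures are hypothetical, not AP's actual system:

```python
# A toy data-to-text generator in the spirit of template-based NLG.
# All names and numbers below are invented for illustration.
EARNINGS_TEMPLATE = (
    "{company} reported {quarter} earnings of ${eps:.2f} per share, "
    "{direction} analysts' estimate of ${estimate:.2f}. Revenue "
    "{rev_verb} to ${revenue:,.0f} million."
)

def generate_report(data: dict) -> str:
    # Simple conditional logic picks the verbs; richer systems
    # branch over hundreds of such choices.
    direction = "beating" if data["eps"] > data["estimate"] else "missing"
    rev_verb = "rose" if data["revenue"] > data["prior_revenue"] else "fell"
    return EARNINGS_TEMPLATE.format(direction=direction, rev_verb=rev_verb, **data)

print(generate_report({
    "company": "Acme Corp", "quarter": "Q1",
    "eps": 1.42, "estimate": 1.30,
    "revenue": 512, "prior_revenue": 498,
}))
```

Production pipelines layer many conditional variations on top of this so the output doesn't read as boilerplate, which is part of why such machine-written stories can pass for human prose.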
There's much about AI that we love. The convenience. The efficiency. The voice recognition. The weather forecasts. The automated navigation. The ability to connect online as we never could before. It's all very liberating in an astonishingly high-tech way that’s both promising and frightening.
On February 20, we took the conversation we're hosting with AI and Faith on AI and the Human into a collaborative worship service that was well received by fellow members of the Unitarian Universalist Congregation of Saratoga Springs and the Unitarian Universalist Fellowship of Boca Raton.
Now we're scheduling a series of conversations to follow our January session with Levi Checketts, a Catholic social ethicist at Hong Kong Baptist University who wrote his Ph.D. thesis on consciousness uploading.
Among those we plan to host when our calendars align is Kevin LaGrandeur. Emeritus professor of English at New York Institute of Technology and co-editor, with James Hughes, of Surviving the Machine Age: Intelligent Technology and the Transformation of Human Work, he astutely describes the "reverse mimesis" at play in the dynamic between humans and AI.
"We may be painting ourselves into a corner a with our intelligent technology," LaGrandeur warns. "To survive the powerful products of our own ingenuity designed to imitate us, we’ll be forced to make ourselves more like them."
To compete with our own intelligent inventions, "we're in a situation where we must incorporate into ourselves elements of those very inventions or risk losing jobs and perhaps even existential viability."
On February 1, LaGrandeur, a fellow of the Institute for Ethics and Emerging Technologies, kicked off a series of conversations that IEET is hosting on the Future of Work.
Excerpted below are a few of the main points he makes.
'REVERSE MIMESIS' DEFINED
Emerging technology is "changing our very concept of what humans are and what place they occupy in the hierarchy of things," says LaGrandeur.
"As a species, we have moved from creating intelligent tools in our own functional image to having those tools execute human functions so well that they're forcing us to remake ourselves in their machine image."
"This has been rightfully celebrated by some people because the de-centering of humans opens up space for a larger consideration of other species and of the Earth itself as every bit as special as humans. But this change also presents the possibility of our own denigration if we end up becoming more cyborgic, which is already happening."
"We're not likely to see a Terminator-style rebellion of AI against its human makers. Instead, we'll see an increasing displacement of humans in jobs, logistical and social operations and an increasing regulation of humans by AI," LaGrandeur predicts.
AT WORK: HEGEL'S 'MASTER/SLAVE' DIALECTIC
LaGrandeur sees Hegel's "master/slave dialectic" as the keenest description of what's happening in all of this.
"Hagel posited that masters attain their positions because they have the superior skills and power to dominate others and so can force them to do work. But if these servants are relied upon too much, the masters becomes soft and inferior at doing the very tasks that made them dominant in the first place. That allows the increasingly knowledgeable and skilled servants to surpass the abilities of the masters and become dominant in a dialectical reversal."
So it is, LaGrandeur notes, that at least a dozen Chinese firms are now using "emotional-surveillance systems" to constantly monitor their employees' brains in search of signs of rage, anxiety, and depression, according to a report in the South China Morning Post. Lightweight sensors embedded in workers’ hats or helmets wirelessly transmit the wearer’s brainwave data to a computer, where AI algorithms scan the data for outliers.
"Instead of humans using machines to do work, the machines use humans to do work," LaGrandeur notes.
BECOMING MORE 'MACHINIC'
Corporations are now using AI applications that control even our access to jobs, and our actions once we get those jobs: AI is screening job applicants and even interviewing candidates.
"The problem with these kinds of tools is that they make the process more machinic and data-oriented and less nuanced," says LaGrandeur. "They don't screen for crucial soft criteria, such as personal qualities that make employees easy to work with. These tools reduce us to data points, but they promise to save companies lots of money."
Elon Musk, Ray Kurzweil, Hans Moravec and Rodney Brooks all think that our only way of avoiding obsolescence is to make ourselves more like our creations in this process of reverse mimesis.
“Because we made AI in our image and it started superseding us in its very imitation of our abilities, they say, we now need to embrace the next wave in the dialectic. We have to imitate AI in order to stop it from dominating us.”
“In practical terms,” LaGrandeur continues, “this would mean that we need to incorporate into ourselves more technology, a cyborg-ization of ourselves in a process of merging with AI, making ourselves a more post-human species for survival in order to stave off irrelevance at best or obsolescence at worst.”
As you may recall, we opened our series with a close look at what Elon Musk calls his “Fitbit for the skull” and his vision of a whole-brain interface with AI.
“Musk’s ultimate goal is to make us super-humanly smart and enable us to communicate more efficiently via a sort of digital telepathy,” says LaGrandeur. “He sees it as a way for humans to compete with AI, which he thinks will surpass us very soon and put us at an existential disadvantage.”
TIGHTENING THE REGULATORY REINS
"I don't mean to militate against technological progress," LaGrandeur says. "I think we should encourage innovation, but we also need to be judicious and careful about how we do it. The only way to avoid being subsumed by our own clever technology in our increasingly post-human world is to tighten the reins with careful regulation of what we create and how it's disseminated."
LaGrandeur suggests establishing a "broad statement of ethical standards that scientists and researchers would agree to in order to try to ensure those who create AI were operating along the same ethical lines."
The Montreal Declaration, published recently under the leadership of computer scientist Yoshua Bengio, offers a good start.
As another model, he points to Allen Buchanan's proposal to establish a global institute for justice in innovation that, in licensing companies to distribute emerging technology to be embedded in human beings, would ensure that such innovations are used and distributed fairly.
A third approach, proposed by Nick Bostrom, would focus on regulating and licensing access to key materials required to manufacture new technologies to be embedded in humans, a step that might discourage rogue operators from entering the market.
"Whatever route we choose, we need to have some regulation," says LaGrandeur. "There's much possible good in our smart technology, but we need to be careful to keep these applications humane, equitable and non-threatening."
PLANNING OUR QUESTIONS
It may be a while before LaGrandeur is able to fit an AI and the Human conversation into his schedule. That’s fine, as it will give us more time to prepare by studying his work. We can already think of ten questions we’d like to ask him, and more will surely come to us.