Can AI Save Your Soul?
Ron Roth
Unitarian Universalist Fellowship of Boca Raton
Who would disagree with destroying the Death Star? After watching Back to the Future, who wouldn't want to go back in time and fix their parents? But why, oh why, did we ignore the dire warnings against Artificial Intelligence? In 2001: A Space Odyssey, the computer controlling the spaceship, HAL, decided to kill three sleeping crew members and block Dr. David Bowman from re-entering the ship with the famous line, "I'm sorry, Dave. I'm afraid I can't do that." That's always our greatest fear with Artificial Intelligence, isn't it? Some AI will make a life-or-death decision without our consent. Sci-fi is where ethics and morality meet technology created without regard for truth or consequence. Today I have a sci-fi question for you. Can Artificial Intelligence save your soul?
I know: set aside the question of whether we have a soul and whether it needs saving. AI is already contributing to our corruption and disharmony. Consider the role AI played over the last decade in driving frustrated YouTube, Facebook, and Twitter users into the comforting arms of extreme political hate groups and conspiracy mongers. The 2020 documentary The Social Dilemma blew the lid off this phenomenon. Spoiler alert: they did it to increase user engagement and sell click-through advertising.
Many of us read the danger signs decades ago. Older Baby Boomers and Gen-Xers reminisce about simpler times before the advent of email, text, chat rooms, video calls, and home-security surveillance on our smartphones. The younger generations, millennials and Gen-Zers, have never known anything different. They can't imagine the isolation of living a private life devoid of Instagram, Snapchat, and TikTok. AI deep-learning algorithms target their every desire with non-stop advertisements during their average 5.7 hours of screen time per day; Boomers aren't far behind, averaging 5. A 2019 study found that Gen-Z, born after 1996, spent a shocking 9 hours per day in front of screens. The pandemic only increased these numbers. Should we blame AI?
Studies suggest that more than four hours of screen time per day is a strong predictor of depression. The World Health Organization has projected that depression will be the leading cause of disease burden by 2030, and even that seems optimistic. Depression among adolescents doubled between 2012 and 2019. What happened after 2012? Smartphones went from optional to ubiquitous. The "like" and "retweet" buttons did the rest. The "compare and despair" phenomenon took over, and not just among girls. The edited, unreal perfection of their favorite YouTubers and influencers kept them continuously feeling "less than." Far from being a cure for boredom, screen time is a leading contributor to our global "loneliness epidemic." Depression is up by as much as 33%. Among people screened for depression, 70% cited loneliness and isolation among their top three reasons. Suicidal ideation and deaths from suicide have risen proportionally.
If you think we did this to ourselves, think again. This was planned, mapped out in boardrooms and conference calls the world over without regard for our safety. They built AI to exploit the most basic human drive: our desire to be liked by our peers. No wonder we use all those beautifying filters on our pictures. How many of us have spent countless hours crafting posts on Facebook and Instagram only to spend twice as much time staring at the number of likes we get and then repeating this process over and over? We chase the gratifying yet fleeting virtual high of peer validation. Then they gave us endless scrolling through current and previous posts, the AI-generated suggestions for things we might buy, groups we should join, and the shows the machine predicted we would binge-watch next.
Do we only have ourselves to blame?
AN EVERYDAY REALITY
So at this point, you may be asking why I'm telling you this. What does this have to do with me? I don't write software or make decisions in corporate boardrooms. I don't post negative memes on social media. Wait just a minute before you point the finger. Machines taking over our lives isn't just futuristic sci-fi doomsday in The Terminator, The Matrix, or I, Robot. It's an everyday reality that we choose for convenience, entertainment, and good ol' machine-curated social interaction. It's the new reality of your children and grandchildren, based more on free association and microtransactions and less on membership and monthly dues. As a side note, this has enormous implications for the viability of our congregation. You may think, like me, that purchasing premium services to avoid seeing ads is the answer, but that only puts more money in the pockets of the people exploiting the poor and the lonely.
We gave our children smartphones and commenced the slaughter of the innocents. One Canadian college student emailed the New York Times about how Gen-Zers are "incredibly isolated" and "sitting together in complete silence, absorbed in their smartphones, afraid to speak and be heard by their peers. This leads to further isolation and a weakening of self-identity and confidence." They didn't even have a chance. We remember a time before the Internet and generally have a well-formed identity based on real-life interactions. It’s not too late for us to turn off the YouTube and Twitter streams. We have a chance to end the life-as-spectator-sport trend and rediscover the drive to unlock the secret of talent the old-fashioned way: by doing.
Parents, let's talk for a minute. I know we're responsible. We waited until they were 13 before we even got them a cell phone with Internet access. We limited our children's screen time. We monitored their cell phone usage to curb late-night texts and sleep interruptions. We even got them counseling to combat cyberbullying. I'm telling you it didn't work. Our culture is already indoctrinated.
The smartphone explosion didn't just affect individuals. It permanently altered human interaction for every type of group. We see the world differently now. Self-worth and dignity are not measured with the same gauge. Empathy and compassion are engineered into our feeds only insofar as they prolong user engagement. We can no longer have conversations, watch movies, or attend events without something buzzing, dinging, or blinking in our faces. We are terminally elsewhere. Our relationships are hollowed out. Our sense of mindfulness, our awareness of the present moment, is steadily diluted.
As I see it, there are three steps to addressing the problem: first, disengagement, then assessment, and finally re-engagement. That's where our principles come in, but more on that later.
What are the benefits and consequences of disengagement?
Celebrities regularly take breaks from, or swear off, social media and smartphone use for creativity and mental-health recovery. Ed Sheeran said he'd begun "seeing the world through a screen and not through [his] eyes." Alec Baldwin said of Twitter, "it's one-third, or more maybe, just abject hatred and malice and unpleasantness." Disengagement has many benefits, whether to avoid hate or to focus on in-person human interaction. Putting down the smartphone and picking up the yoga mat have the added benefit of reactivating the hundreds of chemical processes that help the body fight depression and disease. Limiting our consumption of news, sports, TV, movies, and social media leaves room for new ideas, hobbies, and creative pursuits, and it eliminates mindless viewing and compulsive scrolling.
After several years of failed attempts to engage in real dialogue online, I grew tired of the ads, memes, and political discord and disengaged from social media altogether. I can personally report a renewed sense of worth, a higher drive to build human relationships, a more dedicated work ethic, and increased creativity. While my epiphany is purely anecdotal, I can tell you I am not alone in the unplugging revolution. I turned off my cell phone's sounds, silenced most of its notifications, switched to browsers that protect my personal data and browsing habits, and now use it primarily for research, learning, writing, and remembering people's birthdays. I can say I get along much better with my smartphone and with the human beings in my life. I also feel a greater sense of safety in liberating myself from the controlling influence of AI.
Since its inception, AI has had its detractors and defectors. Isaac Asimov, the famed sci-fi writer of the I, Robot and Foundation series, coined the term "robotics." In 1956 he wrote "The Last Question," a short story about technology and the divine. In light of the last 60 years, it now reads as both a cautionary tale and a joke. I'll paraphrase. Scientists asked their computer, "Is there a God?" It responded, "There is as yet insufficient data for a meaningful answer." They gave it more computers, more power, and more data. Still, it answered, "Insufficient data." Again they gave it more. Same answer. Then they gave it everything, to the end of their existence, and asked one last time, "Is there a God?" Its final answer: "There is now."
The chilling prospect of a machine having complete control of human interactions, without regard to the ethics and morality of its decisions, is what drove many designers, researchers, and company executives over the years to abandon their posts and join the ranks of AI ethicists. With decades of real-world consequences in their wake, they now raise awareness of the dangers of unrestricted AI development and promote ethical and religious perspectives across its various disciplines.
I hope you're just as awakened about this topic as I am because ignorance, fear, and anger regarding AI in our world will only continue to reinforce your position as a victim. I believe that the only practical answer is to gain knowledge about AI and seek allies in the effort to reclaim our humanity and our freedom.
THE FOUR STAGES OF AI
So what is AI?
There are four stages; only two, maybe three, currently exist or are under development. First came the reactive machine, a computer coded to accept specific stimuli and perform specific tasks, like IBM's Deep Blue, which beat chess Grandmaster Garry Kasparov in 1997. This was the non-scary, super-helpful type that easily replaced factory workers, a moral dilemma in itself, and helped get your Amazon orders delivered the same day. Next came limited-memory AI, the kind you hear about on the news: it has the same reactive capabilities, but it can also learn from historical data and make informed predictions. This is the kind that keeps getting our favorite social media companies in deep doo-doo; I'll explain why in a bit. The third stage, currently in its infancy, is called theory of mind, the bleeding edge of AI development that would allow a machine to "understand" human emotions, perceive the factors that make up our species, and anticipate our needs. The fourth stage of AI is the self-aware machine, the kind of doomsday scenario Isaac Asimov warned us about. Researchers describe the same arc in terms of capability: narrow intelligence, which we have today, then general intelligence and superintelligence, with the latter two being decades, if not centuries, away. Make no mistake. On our current trajectory, machines will replace us as the dominant species on the planet, and the hive mind will absorb our consciousnesses, but before that happens, we've all got some work to do.
Let’s debunk a few myths about AI.
DEBUNKING MYTHS
First off, AI isn't built like other software applications. When I write software, every line of code is mine, every bug is my fault, and I own the consequences of its failure. An AI model, however, isn't written by the programmer. It starts as a neural net with random connections. We throw data at it, tell it what matters, receive its output, and tell it whether the answer is correct. We discard the misbehaving neural net models and rerun the simulation. The humans training the model often have no idea what's inside the neural net, why it gave the output it did, or what connections it made to reach its conclusions. It's what we call in the industry a "black box": everything inside is hidden. Its whole purpose is to remove the human programmer from the decision-making process. When it goes wrong, who do we blame?
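For the programmers among us, here is a minimal sketch of that training loop: a single artificial neuron in Python with NumPy. Everything in it is invented for illustration (the toy data, the learning rate, the number of passes), but it shows the point: nobody writes the decision logic; random weights get nudged by data until the answers come out right.

```python
import numpy as np

# A toy "black box": we never hand-write the decision rule.
rng = np.random.default_rng(0)

# Invented training data: 200 examples, 2 features, a binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the "correct answers"

w = rng.normal(size=2)  # random initial connections
b = 0.0

for _ in range(500):  # rerun the simulation, over and over
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # the model's current guesses
    err = pred - y                         # tell it what was wrong
    w -= 0.1 * (X.T @ err) / len(X)        # nudge the connections
    b -= 0.1 * err.mean()

print("learned weights:", w, "bias:", b)
```

Even in this toy, the final weights are just numbers; nothing in them explains the rule that was learned. Scale that up to billions of connections and you have the black box.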
Let's consider a couple of examples. A now-famous AI experiment involved training a model to distinguish a dog from a wolf. Researchers first fed in hundreds of pictures of wolves, then a series of pictures of dogs. When shown a picture of a husky, the AI said it was a wolf. How could it be so wrong? Was it the shape of the dog's face, its hair color, its sitting position? No, it was the snow. Every picture of a wolf given to the AI had snow in the background; therefore, a dog in the snow is a wolf. Another example is the 2016 accident involving a self-driving Tesla and a truck crossing an intersection. The AI had been trained for highway driving, so it recognized trucks from behind. Seeing a truck from the side confused the model, which classified it as an overhead road sign that was safe to drive under. These examples illustrate how critical it is to instruct AI on what information is important, which gets harder and harder with more complex problems.
Artificial intelligence isn't at all like human intelligence. Humans don't correlate data sets; we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we think we know about the world. Machines use inductive reasoning to crunch numbers and predict outcomes from selective information provided as evidence. Machine learning requires extensive human intervention to (1) frame the problem, (2) prepare the appropriate datasets, (3) select the input parameters, (4) adjust for bias, (5) train the model through hundreds or thousands of simulations, and, most importantly, (6) do all of this over and over again as new information and knowledge are gained throughout the process. In 2016, composer and computer scientist Pierre Barreau (1) wanted to help composers create movie soundtracks, so (2) he digitized over 30,000 of the most significant musical scores, (3) defined over 30 categories, such as note density, mood, epoch, and composer style, (4) provided hints by assigning those categories, (5) trained a model to accept requirements and predict the desired composition, and (6) repeated the process over and over until his team produced AIVA, the Artificial Intelligence Virtual Artist. In 2018 they released their second album, "Among The Stars," a commission for a science fiction soundtrack recorded by CMG Orchestra in Hollywood. [Play video clip]
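For those who want to see the shape of those six steps in code, here is a hedged sketch in Python with scikit-learn. The dataset is synthetic, and the column names (note_density, mood_score, an "epic" label) are invented stand-ins loosely echoing AIVA's categories, not anyone's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# (1) Frame the problem: predict whether a piece will sound "epic."
# (2) Prepare the dataset (here, 500 rows of synthetic stand-in data).
df = pd.DataFrame({
    "note_density": rng.uniform(0, 1, 500),
    "mood_score": rng.uniform(-1, 1, 500),
})
df["epic"] = ((df["note_density"] + df["mood_score"]) > 0.5).astype(int)

# (3) Select the input parameters (features).
X, y = df[["note_density", "mood_score"]], df["epic"]

# (4) Adjust for bias: at minimum, hold out unseen data for an honest test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# (5) Train the model through many internal iterations.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# (6) Evaluate, then return to steps 1 through 5 as new data arrives.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Real systems like AIVA use far richer models, but the human labor falls in the same six places.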
Not all AI is created equal. Some AI makes Siri smarter, some helps you navigate through traffic, and some is just meant to outsmart human players at chess or Dota 2, a popular online battle-arena game. The most dangerous kind is the kind built to influence human behavior on the Internet. A former co-worker of mine writes software that analyzes the Twitter feeds of alt-right and alt-left groups, and she found that retweets from outside the group are almost non-existent. This kind of isolation and radicalization is where ethics, morality, and faith have a critical role to play. Knowledge is power; if that was ever true, it is now. It's like that cute online dating profile you followed: if only someone had forewarned you that she was actually a Nigerian scammer before you sent thousands of dollars to save the life of her baby. Trust me, if it's never happened to you, plenty of other lonely people have fallen for this scam. It's the same with AI, except we all unknowingly fell for it.
A POTENTIAL FACTORY FOR CATASTROPHE
AI is not human evolution, because organisms are not algorithms. Developers most often write AI without consideration for ethics, morality, or conscience. Without ethics, AI is a factory for human catastrophe; think of self-driving cars that should never have left the test track. Without morality, AI is a soul-grinding descent into the darkness of ethnic cleansing; think of the Uyghurs, China's Muslim minority, being digitally tracked down, rounded up, and thrown into concentration camps. Without conscience, AI is a playground of cruelty; think of Facebook's "emotional contagion" experiment, in which hundreds of thousands of real human beings were subjected to emotional manipulation without their knowledge. Replacing human choice and freedom with machine learning erases ethics, morality, and conscience. Perhaps this is why Elon Musk, CEO of Tesla and SpaceX, came to believe that "AI is much more dangerous than a nuclear warhead." Only we can provide the human element that computer science lacks: the wisdom and the responsibility to choose right from wrong.
Are you seeing the deeper problem here? Currently, only humans can make AI work. That is where belief, bias, prejudice, agenda, ambition, hopes, and dreams can muck up the works. It's the reason why some facial recognition software couldn't distinguish between certain Asian faces, and why a photo app could, horrifyingly, label an African American a gorilla. "At the moment, there is no way to banish bias completely; however, we have to try our best to reduce it to a minimum," says Alexander Linden, VP Analyst at Gartner. He goes on to say, "In addition to technological solutions, such as diverse datasets, it is crucial to also ensure diversity in the teams working with the AI and have team members review each other's work. This simple process can significantly reduce selection and confirmation bias." AI bias is a matter of racial justice, fairness, and equity.
In 2019 Shoshana Zuboff wrote about the advent of "surveillance capitalism," brought on by the following factors: the heightened fears and surveillance abuses during the "war on terror," the ultra-personalized products from Apple (the iPod and iPhone), and the need for profit generation at startups like Google and Facebook. Under surveillance capitalism, corporations build incredibly detailed profiles on each of us by cataloging our online behaviors, then sell advertisers their ability to predict what we'll buy next. It relies on consent without compensation: we auto-click "I Accept" on agreements we'll never read, handing over our private information as the raw material used to predict and influence our decisions with startling accuracy and efficiency. The main danger of this arrangement is its relentless, unethical, AI-driven push to determine human behavior. The result is a new form of totalitarianism, what Zuboff calls "instrumentarianism," which narrows our choices and ultimately subjugates free will to corporate profitability.
And AI never sleeps, never forgets. Every single action you make through a device, from reading an article to posting a meme on Facebook, from watching a series on Netflix to purchasing a roll of toilet paper on Amazon, and from joining a Zoom call to visiting your kids, is recorded, cataloged, stored, and sold. Then AI uses it as a dataset to predict what you’ll do next, control your next purchase, and keep you happily clicking through your favorite apps and games until all of your money is gone, your credit cards are maxed, and the equity in your home is exhausted. It’s all in the license agreement you clicked through before installing the app or using the website. And they’re only getting better and more creative at doing it.
SPYING BY DIGITAL ASSISTANTS
Have you ever spoken with friends about a topic, only to see an ad for that specific thing hours later? AI-driven digital assistants listen for keywords like "Hey Siri," "Okay Google," or "Alexa." The makers of these digital assistants promise that they don't record any conversations until you speak the keyword. However, other phrases, like "Okay, cool," can trigger Google's assistant to listen in. I say this one often, and it's annoying. Worse, when this happens, the conversation is recorded for a bit, and if I mention a product or service, the digital assistant adds it to my history, and I'll see an ad for it later. You can either stop using digital assistants altogether or avoid phrases like "Hey, seriously," "Okay, cool," and "let's uh..." The proliferation of digital assistants and smartphone surveillance apps makes it nearly impossible to maintain privacy and freedom of choice.
True, not all companies are this evil, and not all organizations purely exist to take your money, but are you willing to give over your freedom and choice to the ones who only answer to quarterly earnings reports?
Imagine future generations wondering why we didn’t see the scam, how we let corporations invade our privacy and make our choices for us. Crisis after manufactured crisis will leave us no choice but to upload our consciousnesses to the cloud or face mass extinction. The amount of information we give away about ourselves continuously for the simple pleasures of modern convenience and entertainment is outrageous. Watching a video go viral across social media is a lot like watching a swarm of bees. Do we still think we are free? Oh, I’m not quite ready to exchange my body for a thumb drive and submit to the hive mind. Some healthy disengagement is in order.
HOW WE RE-ENGAGE
Now let's talk about how we re-engage.
I'm not just trying to scare the crap out of you, so let's get specific. What role do you have as a citizen and a person of faith? Once you unplug from The Matrix, you have a lot of training to do.
The first order of business is to reclaim your true, real-world identity and your dignity. Online profiles, avatars, and personas lead you to believe that you can live an alternate life and become someone else; this fractures the self and leads you to act and speak in ways you never would in person. Many of us even act out this online persona in the real world, the January 6th attack on the Capitol being one of the most extreme cases in recent memory. Yes, there is something benign, freeing, and even therapeutic about safely sharing intimate details with strangers. However, the illusion of safety is both alluring and dangerously false. Beware. The sexy beast comes out when we can remain anonymous, asynchronous, and invisible: trolling, griefing, flaming, loafing, cyberbullying, and other predatory online behaviors. Ever noticed how quickly chat, email, and Facebook conversations can get heated and blown out of proportion? Experts call this the toxic online disinhibition effect. It results from placing screens, text, memes, emojis, and animated gifs as social barriers between living, breathing, feeling, finite beings.
I, myself, fell into many of these traps when the Internet was young, and I'm sure many of you have posted something you would later regret.
Introducing AI into this mix has amplified and further encouraged all of these toxic behaviors. Why was it allowed? Because they create the most user engagement (there's that phrase again): the higher the emotions, the longer the user stays on the app. AI doesn't care what drives engagement, so long as it continues. Now the beast is released. Once you've taken Kong out of the jungle, no 24-hour Facebook timeout or 30-day Twitter ban can stop the Empire State Building scene from playing out. Too little, too late.
Human beings cannot be digitized and analyzed as if we were just bitstreams of data. We have to pull ourselves back from the brink of digital exploitation and reaffirm our commitment to protect the dignity and worth of every person, starting with ourselves. What happens to individual freedom when Neuralink, another Elon Musk company, develops neural implants and the FDA approves them in 7 to 10 years? What happens to privacy when Big Tech can read our thoughts?
In Jurassic Park, Jeff Goldblum, playing Dr. Ian Malcolm, said,
Um, I’ll tell you the problem with the scientific power that you’re using here. It didn’t require any discipline to attain it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it and packaged it and slapped it on a plastic lunchbox, and now you're selling it, you want to sell it!
You can equally apply Malcolm's rant to AI and the eager computer engineers who pursue it without ethics or transparency.
AI AND FAITH: ALIGNED WITH UU PRINCIPLES
The inspiration for this talk was a July New York Times guest opinion essay titled "Can Silicon Valley Find God?" The author, Linda Kinstler, wrote about industry professionals, university professors, and religious leaders in Seattle who have banded together to do for computer science what bioethics did for medical science and genetics. At their website, aiandfaith.org, they're joining a larger national and international effort to place human ethics and empathy at the center of the debate over AI advancement. The Insights page of their site contains a wealth of curated articles on the topic. Their principles are remarkably similar to our seven UU Principles. Their cause is as urgent and relevant as climate change: ignoring, or remaining ignorant of, the problem compounds the consequences exponentially. The article motivated me to become part of the solution.
AI has saturated our world, our culture, and our digital life. The physical separation of the pandemic has forced all of us to expand our digital lives and rely more heavily on AI to connect us. In his book The Ascent of Information, Caleb Scharf calls it the Dataome, coined by analogy with "genome": the 10,000-year preservation of human language, thought, history, heritage, and legacy recorded in oral, written, and digital traditions. The Dataome is so vast that only AI can efficiently access it; think Google, Siri, and Alexa. The AI needed to mine and curate the Dataome carries a carbon footprint so large that it inseparably links the problem to climate change. As the Dataome grows by leaps and bounds every year, so does the cost to the planet. This effectively makes AI an issue that challenges every single one of our principles.
It's not all bad news.
Some people have been working for years to turn the ship around, raise awareness, and make AI safer. Sam Altman is the CEO of OpenAI, a research lab co-founded by Elon Musk that seeks to make AI tools available to everyone, not just the Microsofts, Facebooks, Apples, and Googles of the world. OpenAI beat the world's best Dota 2 players; still, building the model consumed 128,000 processor cores on Google's cloud for several weeks. Altman says that right now, the results produced by general-intelligence AI systems are somewhere between "moderately profound and gibberish." Still, he's optimistic that AI can help save humanity from its most pressing problems, like racial disparity, climate change, educational inequality, and poverty, by using reinforcement learning to train AI systems to behave in alignment with our needs and values. Elon Musk has said that the most favorable outcome for AI is one that increases human freedom and choice. Altman suggests that the key to maintaining that freedom is the democratization of AI: equal access to it and accountability for its human creators. And Musk, though rarely in favor of government regulation, advises that we regulate AI.
ETHICAL DILEMMAS
AI Multiple, a technology industry analyst firm, suggests that there are nine ethical dilemmas in the creation of AI: automated decisions and AI bias; autonomous things, including self-driving cars and lethal autonomous weapons; unemployment (the "robot took my job" scenario); misuse of AI, including surveillance and privacy, manipulation of human judgment, and deep fakes; self-aware general intelligence (the sci-fi version); and robot ethics, covering robot rights and the rules robots must follow.
In his fiction, Isaac Asimov proposed the Three Laws of Robotics to build trust among people facing a future saturated with AI, and because I'm a nerd, I'll state them here:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Answers to these problems are complex and far-reaching, but many solutions have already been proposed. Universal basic income is the standard answer to "the robot took my job." Transparency is crucial to making AI safe and choice-driven; secrecy has prompted most of the outcry and mistrust of AI. Explainability is the idea that an AI system should provide the reasons for its decisions; human beings need to know why. Inclusiveness will ensure diversity among researchers and developers, increase the diversity of the data, and mitigate some of the bias in the resulting models. Aligning countries, companies, and universities under new legal frameworks to legislate, enforce, and regulate AI development is ultimately the final answer for protecting the public. However, given the actions of our social media networks and our own government (remember Snowden), it will be hard to regain the public's trust. Regardless, something still has to be done, and it starts with educating the public.
Ashley Boyd, VP of advocacy at Mozilla, says that tech executives at TikTok and other companies "continually fail to follow up on voluntary public commitments" to provide "meaningful" transparency into how and why their AI recommends content. Brian Boland, a former Facebook VP, told the New York Times that Facebook "doesn't want to make the data available for others to do the hard work and hold them accountable." In August, Facebook disabled the accounts of analysts at the nonpartisan research group Cybersecurity for Democracy. Mark Zuckerberg, Facebook's CEO, believes we have nothing to fear from AI and that we should just trust him. Even Sam Altman's OpenAI promised transparency, but instead of releasing GPT-3, its latest and best language-prediction model, free to the public, it ended up licensing the technology exclusively to Microsoft. Underfunded non-profit research firms cannot begin to scratch the surface of the transparency problem. Boyd says that without a public outcry from a mobilized and vocal consumer base, there can be no real transparency into AI-powered ads and recommendations, no enforcement of new or existing laws, and no bipartisan interest in reining in major tech companies. Big Tech is pushing back, so we have to push back harder.
PUSHING BACK
One of the ways we push back is through advocacy for, and funding of, projects that build responsible AI with informed consent. With 4,000 years of human religious tradition behind us, our faith and spiritual beliefs can help guide what AI can and should do in our world. Imagine if the AI-driven marketing technology leveraged for profit and the social-engineering algorithms used by Big Tech could be directed toward our goals of greater understanding, compassion, community, and justice. That is precisely what Thomas Osborn, a contributing member at AI & Faith, proposes in a recent newsletter. AI could help us be better teachers, connecting people with specific learning profiles to the educators and caregivers they need. If our goal is to save humanity, AI can help.
But can AI save your soul? Admittedly, the question is rhetorical, but yes, if you defend your ability to choose. If you believe the AI hype, you might expect it to be our salvation. Human advancement can safely be AI-driven, but not without an informed populace and the will of people like you and me to get involved. If you want a vote on the future of the human race, you must weigh in on AI and add your voice to the conversation. I'm talking about re-engagement. It's not enough to curb your use of bad-acting AI applications like YouTube, Facebook, TikTok, Twitter, and Instagram. We can't begin to face the problem unless we engage, with knowledge and community effort, to confront the powerful and the reckless with a united front of faith-inspired ethical reforms and government oversight and regulation of an industry unable to exhibit much-needed self-control.
Our principles demand that we restore the dignity and worth of every person that is being decimated by AI, maintain justice, equity, and compassion that is being trampled by AI, keep the search for truth and meaning free that is being redefined by AI, defend democracy in society at large that is being reprogrammed by AI, build a Beloved Community with peace, liberty, and justice that is being dismantled by AI, and respect the web of existence that is being destroyed by AI. Few crises come close to the dangers of AI in threatening all our principles, and few modern problems approach this level of urgency.
If you wish to join the conversation, please come up to the front after the service or stay in the main Zoom room and later visit aiandfaith.org. If you want to learn more about organizations and advocacy groups around the world promoting responsible use of AI, there is a link at aiethicist.org: https://www.aiethicist.org/ai-organizations
Thank you for your time and thanks for listening.