A Service Inspired by 'The Most Human Human'

Many thanks to Rev. Suzanne Rude for forwarding the main elements of this AI-themed service she hosted on February 19, 2023, at the Unitarian Universalist Church of Concord, NH.


OPENING WORDS

Rev. Suzanne Rude

Good morning. I'm Reverend Suzanne Rude. I am here this morning to discuss this month's theme, the Path of Love, in the context of Artificial Intelligence. Today's service is entitled "The Most Human Human."

Artificial Intelligence refers to the replication of human intelligence by machines that are programmed to think and act like humans. The term Artificial Intelligence is often shortened to simply AI.

My interest in AI started around the year 2019, when I read a book entitled The Most Human Human by Brian Christian. It was subtitled What Talking with Computers Teaches Us about What It Means to Be Alive. I am in no way a specialist in the field of AI. In order to reflect on this subject, I read many books, listened to many podcasts, and even audited a University of California course on the subject entitled "Big Data, AI and Ethics." A selection of the books, articles, and other sources that I reviewed in preparing today's service is listed in the resource link (*selected sources listed at the end of this document).

I did all this study not only because it interested me and was fun, but also because of my strong belief that faith communities should address the ethical issues facing us in the present day and in the future. So, even though I myself did not grow up with a smartphone and a virtual assistant, and feel quite humble in my technical knowledge, AI is our future, and we should all be concerned about our future because we will be living there for the rest of our lives!

One current-day example of AI is a conversational bot, also called a chatbot. It is a computer program used to simulate human conversation, often over the internet. Conversational bots use natural language processing and machine learning to generate human-like responses to human prompts.

This morning, we will have opening words that have been generated by a particular chatbot called ChatGPT. ChatGPT is an AI model that uses deep learning techniques to produce conversational text responses. Deep learning is a computing approach loosely fashioned after the way human brains work.

Before hearing from ChatGPT, I must note two caveats. ChatGPT allows users to share its output, as long as users note (first) that this output was not generated by a human, and (second) that this chatbot is still being tested and modified. So, with those qualifications noted: here are today's opening words from the non-human ChatGPT on the subject of UUs, Artificial Intelligence, and the Path of Love.

My prompt to ChatGPT: What should a Unitarian Universalist know about Artificial Intelligence (AI) and love?

ChatGPT: 

“As a Unitarian Universalist, one may believe that love is a fundamental guiding principle that should inform all ethical decisions, including those related to AI. In considering AI and love, a Unitarian Universalist might ask:

"How can AI be developed and used in ways that promote love and compassion for all beings and not just for the benefit of a select few?

"What can be done to ensure that AI systems are designed and implemented in ways that reflect the inherent worth and dignity of all people, and avoid perpetuating biases and discrimination?

"How can AI be used to support and amplify love-based initiatives and efforts that address social and environmental justice issues?

"By approaching AI through a lens of love, Unitarian Universalists can work to ensure that AI is developed and used in a way that aligns with their values and supports a more loving and just world.”


ChatGPT has some interesting ideas, some of which I will indeed address this morning.

Again, welcome to today’s service entitled “The Most Human Human.” Thank you very much to ChatGPT for its contribution to today’s service. With no offense meant to this conversational bot, I will note that the remainder of this worship service today will be generated entirely by humans.


READING

In the book Buddha Is as Buddha Does, Lama Surya Das relays a "Birth Tale of the Buddha" from the Buddha's earlier life as a bodhisattva, long before his incarnation as the great Buddha. In this particular tale, the Buddha-to-be is a dull, gray parrot. One day a fire breaks out in his forest and he sees that many animals are trapped and plants are being destroyed. He resolves to do what he can to save the forest. He flies down to the river, scoops up water onto his wings, flies back over the center of the fire, and shakes the drops from his wing tips onto the flames. He does this over and over again, despite the fact that there appears to be little effect on the forest fire and his wings are slowly getting singed.

As this is happening, well above him in the clouds, the gods look down with laughter and mockery on the parrot's feeble efforts to douse the forest fire... however, one god is moved by the parrot's actions. This god changes itself into a giant eagle and uses its own tears of compassion to drench the flames and thereby save the forest. What's more, some of the eagle's tears fall on the parrot and change his feathers from their dull gray to a whole rainbow of colors.


THE MOST HUMAN HUMAN

The book that got me interested in the ethical issues surrounding AI was The Most Human Human by Brian Christian. It was specifically about the Turing test. Let me explain.

In the early 1950s, the British codebreaker and father of artificial intelligence, Alan Turing, predicted that computers would become so successful at imitating human intelligence that humans would someday "speak of machines thinking." He posed his famous Turing test by poetically asking whether a computer could ever "enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought [or] do something really new."

In practice, the Turing test was designed to be fairly simple: a computer could be said to "think" if a human questioner could not distinguish the computer from a human being by means of a written conversation.

Many of you have probably participated in a type of Turing test, whether you realize it or not, when you have proved to a computer that you are not a robot. For example, when you type out a CAPTCHA from one of those images with squiggly, distorted letters to convince a computer that you are a human, you are taking a Turing test. The letters in CAPTCHA stand for "Completely Automated Public Turing test to tell Computers and Humans Apart." CAPTCHA!

Artificial intelligence benefits humanity in many areas, including medicine and science. Robots can work in environments that are dangerous for humans. They can work around the clock. They can aid those with disabilities by changing words to sound or sounds to words. AI systems can figure out how proteins fold and what the James Webb Space Telescope is "seeing." They can improve agricultural output by studying local weather patterns. I am happy to report that IBM Watson, which won TV's "Jeopardy!" against the best human players, is now being used to diagnose lung cancer. So, I am not here this morning to disparage the important contributions of AI in many fields; rather, I am here to reflect on how our humanity is challenged in many ways when machines are speaking to us or when we are speaking to machines. I am here to speak mainly of humans and our relationship to AI.

So, let’s talk about humans. One aspect of the human condition is that we have certain weaknesses. Many of these weaknesses arise as a by-product of evolution whereby certain traits improve our species’ ability to survive. These traits can confuse us humans when we interact with machines.

For example, for perhaps 200,000 years of evolution, we have had the experience that human speech comes from humans. Therefore, if we hear human speech or read what appears to be human-generated conversational speech, we intrinsically identify it as "human" and attribute to it a human nature. However, in reality, there is no subjective or conscious human nature in the machine. AI conversational systems such as chatbots simply look for patterns of words that appear together. So, an AI system might report that lemons are sour because these two words often appear together, but the machine has never actually tasted a lemon and furthermore has no true understanding of sourness. The machine is an impersonator that is very, very good at predicting which words usually go together in the context of the prompt it is given.
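To make that concrete, here is a minimal sketch in Python of word-pattern prediction built from nothing but co-occurrence counts. This is not how a modern chatbot works internally (those use neural networks trained on vast amounts of text), and the tiny corpus is invented purely for illustration, but it shows the core idea: the program "knows" that "sour" tends to follow "lemons are" only because the words appear together, never because it has tasted anything.

```python
from collections import Counter, defaultdict

# An invented toy corpus, purely for illustration.
corpus = "lemons are sour . lemons are yellow . cream is sweet .".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("lemons"))  # -> "are"
print(predict_next("are"))     # -> "sour": a pattern, not a taste
```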

Our human willingness and susceptibility to attribute human nature to a machine also cause us to expect accuracy from AI output when we might actually be getting misinformation. This misinformation can be due to poor inputs but can also be due to mistakes in coding algorithms. The public demonstration of the science bot Galactica lasted only three days in 2022. Although the bot had been trained with 48 million examples of real science articles and scientific sources, Galactica in its public demonstration cited research papers that did not exist and was perfectly willing to produce an "authoritative" scientific article on the history of bears in space, a history which we humans know does not exist. This kind of output, which looks authoritative and convincing but is not, is sometimes called an AI hallucination. Maybe future generations of chatbots will be able to avoid these hallucinations, but not yet.

We also cannot underestimate the ability of some to exploit our human weaknesses more directly to facilitate the use and overuse of machines by humans. Evil by Design is a book by Chris Nodder. The book's website touts, "Now you too can leverage human fallibility to create powerful persuasive interfaces that people will love to use." The book teaches interface design techniques based on the seven deadly sins. No kidding.

Humans are also driven by fear and a special attention to novel things. These again are survival instincts honed over millennia. These instincts cause us to pay attention to such things as traffic accidents over other more mundane events. But in modern times, these traits can lead to digital addiction when machines are coded to show us the equivalent of one traffic accident after another through compelling words and novel videos which are designed to trigger our evolutionary responses and grab and hold our attention.

Numerous studies show that digital addiction can lead to loneliness, depression, anxiety and sleep disruptions. This is especially true for teens whose brains are still developing and for whom digital interactions may be plentiful.

Another aspect of humans is that we are diverse, and sometimes we humans are biased and prejudiced about these differences, be they gender, ethnicity, disability, or religious beliefs. When an AI system is trained, it uses what's known as Big Data, that is, all the available sources of digital data pertinent to the goals of the system: social media, books, articles, etc. When these sources are the inputs, the AI output often mirrors back the prejudices, biases, and sometimes outright hatred found in our society.

In 2016, Microsoft had to shut down its AI chatbot called Tay after just sixteen hours of public demonstration when the bot started tweeting sexist, anti-Semitic, and racist messages in response to human prompts. An AI system used to judge an international beauty contest that same year was found to favor the lighter-skinned contestants due to its training inputs. A credit-card algorithm used in 2019 gave significantly better credit limits to men, even in cases of married couples whose finances were completely shared. Similar negative biases have been found on racial attributes in mortgage-approval AI systems and against women in AI systems that review job applications. In my college course I learned that one Big Data AI system in the US correlated the given names Harry, Katie, Jonathan, Nancy, and Emily with the words "freedom," "love," "heaven," and "gentle," while it correlated the given names Jerome, Ebony, Jasmine, Latisha, and Tia with "abuse," "sickness," "death," "poverty," and "filth." This is a problem in our society, which AI has not created, but which is being mirrored back to us.
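As a minimal, hypothetical sketch of how such associations can be surfaced, consider the Python fragment below, which simply counts how often each name co-occurs with "pleasant" versus "unpleasant" words. The two-sentence corpus and word lists are invented solely to show the mechanics; real audits use enormous corpora and proper statistical tests.

```python
# Invented word lists and corpus, purely for illustration.
pleasant = {"love", "gentle", "freedom"}
unpleasant = {"poverty", "sickness", "abuse"}

corpus = [
    "emily showed love and was gentle",
    "latisha knew poverty and sickness",
]

def association(name):
    """Pleasant-minus-unpleasant co-occurrence count for `name`."""
    score = 0
    for sentence in corpus:
        words = set(sentence.split())
        if name in words:
            score += len(words & pleasant) - len(words & unpleasant)
    return score

print(association("emily"))    # positive: paired with pleasant words
print(association("latisha"))  # negative: the corpus's bias shows up
```

A system trained on such a corpus inherits exactly these skewed associations, which is why the bias of the inputs appears in the output.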

And this problem might not be obvious in the output at first glance. One needs to look for these biases, test for them, and then code and program around them. In undertaking this task, it would be helpful if the programmers and coders themselves reflected the diversity of humans, which currently they do not. AI programmers and coders are, for example, overwhelmingly white and male.

There are groups and organizations that are seeking to change that. I will mention a few:

  • Girls Who Code

  • Women in Machine Learning

  • Black in AI

  • Lesbians Who Tech

  • Trans Code

  • Black Girls Code

  • Diversity AI

Finally, humans have values. In the world of AI, it is not clear whose job it is to instill values into AI systems, or whose values those should be. How do we code for kindness and know that AI is following a path of love? An AI system needs rules and/or an enormous number of examples to function. A focus on values may or may not be a part of that process.

Take the so-called Trolley Problem, a well-known ethical dilemma. It goes like this: You are in a runaway trolley that is about to strike and kill five people. You can avoid this accident by pulling a lever and sending the trolley to another track, where, unfortunately, one person will be struck and killed.

One possible approach to this dilemma is to consider the ethical principle of utility, which holds that the moral action is the one that maximizes overall well-being. According to this principle, the right thing to do is to pull the lever, since doing so saves five lives at the cost of one.

So let's suppose, for purposes of discussion, that you tell an AI system that using the principle of utility is the correct approach when faced with similar circumstances. Now the AI system that has been so coded confronts a new problem. A healthy person walks into a hospital in which five patients each need a different organ transplant, or they'll die. Will the AI system recommend that the one person be killed to save the five patients who are about to die, since that is how it has been coded? Is this the result you actually want as a human? Morals and values are complicated, especially when humans are asked to set them out in black and white when in reality we are still grappling with them.
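Here is a minimal, hypothetical sketch in Python of what such a coded-in rule might look like, and of how literally a machine follows it. The scenarios and the rule are illustrative assumptions, not any real system's design.

```python
# A naive "principle of utility" coded as a rule: pick the option
# that minimizes total deaths, with no other values represented.

def utilitarian_choice(options):
    """Return the option with the fewest deaths."""
    return min(options, key=lambda option: option["deaths"])

trolley = [
    {"action": "do nothing", "deaths": 5},
    {"action": "pull the lever", "deaths": 1},
]
hospital = [
    {"action": "do nothing", "deaths": 5},
    {"action": "take the healthy visitor's organs", "deaths": 1},
]

print(utilitarian_choice(trolley)["action"])   # "pull the lever"
print(utilitarian_choice(hospital)["action"])  # the same rule now
# recommends sacrificing the visitor: the coded rule captured a
# number, not the values we actually hold.
```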

As regards human interactions with machines, just a few more examples out of many: should AI robots that are trained in empathetic responses be allowed to serve in nursing homes as machine companions to residents? Should they be allowed to offer psychological counseling? Should they be allowed to recommend which prisoners get parole? Should they be allowed to be the only available customer service "representatives" offered in a company's phone tree? And what about the right to privacy in all this? These are real questions based on actual real-world experiences and testing. How do we as humans with values want to answer these questions?

As I mentioned, in preparing for this service, I took a course offered by the University of California entitled "Big Data, AI, and Ethics." The professor discussed what options we actually have to make any changes at all to the ethical development of AI in the face of huge, rich tech companies and tech leaders with resources and wealth that surpass those of the majority of countries in the world. I learned that there are no easy answers. I came away feeling like the little gray parrot from the story of the Buddha when faced with such overwhelming odds. But it is the nature of Buddhist belief, and the belief in non-attachment to outcomes, that sometimes we are called to be the parrot and put out the forest fires we observe to the best of our ability. Maybe the gods above will be inclined to help, and perhaps not.

In the 1950s, Turing asked whether a computer would ever "enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought [or] do something really new." What will be the place of humans in a world in which machines are slowly working their way through this list?

On the last day of the course, when all was said and done, the professor noted that in the age of AI what we can do as individuals, first and foremost, is to practice mindfulness, prayer, and meditation so that we can learn how our own human minds work: our frailties, our strengths, and to what we give our attention. These are spiritual practices of old that can help us deal with our AI future and be the most human humans we can be.

Amen. Let it be so.


*Selected Sources from Resource Link:


https://www.themarginalian.org/2013/04/01/turing-jack-copeland/

The Age of AI and Our Human Future
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocker © 2021

An Outsider’s Guide to Humans: What Science Taught Me About What We Do and Who We Are
Camilla Pang © 2020

“Big Data, Artificial Intelligence, and Ethics”
University of California, Davis Professor Martin Hilbert and other University of California faculty (via Coursera.org)

The Most Human Human: What Talking with Computers Teaches Us about What It Means to Be Alive 
Brian Christian © 2011

Reclaiming Conversation: The Power of Talk in a Digital Age
Sherry Turkle © 2015

The New Digital Age: Transforming Nations, Businesses, and Our Lives 
Eric Schmidt and Jared Cohen © 2013, 2014

Technomoral Virtue and the Tools of Our Time

Ray Kurzweil's 'Singularity' from a UU Perspective