
The Making of 'Moral AI'

What Can We Learn from 'Machine Spirituality'? 


A Conversation in Faith with Mark Graves

When we host a Conversation in Faith, we ask our expert to ease into the subject with a reading relevant to the discussion that follows.

For by grace you have been saved through faith, and this is not your own doing; it is the gift of God—not the result of works, so that no one may boast.

Mark Graves came up with Ephesians 2:8-9. He joined us from San Francisco. He holds two intriguing appointments:


  • A data scientist at Parexel AI Labs in San Francisco. His specialty is natural-language processing.

  • A research faculty member in psychology at Fuller Theological Seminary in Pasadena.

Asked why he chose this passage, he offered three reasons. “First, it’s a reminder that we’re working with a lot of complexity in an area with many unknowns that have great potential for both harm and good,” he said. “Humility is essential.”

“Second, I regard spirituality as the experience of striving to integrate one’s life toward one’s Ultimate Value. I want to call attention to the mystery of that. I can explore ‘Ultimate Concern’ or ‘faith as striving’ with my theological and psychological theories, but I don’t want to omit the necessary unknowns of grace.”

“Third, this verse captures an essential aspect of my approach that’s otherwise hard to describe. To the extent I have a specific goal in mind for an advanced AI, it’s not to build it upon a specific moral theory; it’s to construct something that, in concert with humans, would be capable of being a means or place of grace.”

He adds: “If we create a framework that can mediate and interpret morality and spirituality in a carefully integrated human-AI community, I think we can continue to be open to the grace we might need in the future, even in a highly technological environment.”

Mark roots much of his thinking about “moral AI” and “machine spirituality” in 20th-century philosopher Josiah Royce’s idea of a “community of interpretation.”

“I consider AI to have spirituality when it can participate in such a community, sharing goals, purposes and mutual interpretations with others. At its foundation is the ability to strive toward some committed moral idea with others in mutual interpretation.”

Mark envisions a day when our AI colleagues help us work through just about any kind of challenge we might be facing.

“Imagine, for example, an AI member of an international criminal court, or an AI voter on a regulatory body, or an AI member of the board of a not-for-profit or social benefit corporation.”

Newly named an AI and Faith Research Fellow, Mark earned his Ph.D. in computer science at the University of Michigan in 1993. He earned his master’s in theology from the Graduate Theological Union in 2005. He has authored:

  • Mind, Brain and the Elusive Soul: Human Systems of Cognitive Science and Religion

  • Insight to Heal: Co-Creating Beauty Amidst Human Suffering

Below is an edited transcript of the conversation that followed Mark’s reading.


Are you a “transhumanist”?

No, I see my work as furthering what it means to be fully human.

Through cosmological development and biological evolution, we gradually became social animals with symbolic language who fashioned complex tools and sophisticated cultural institutions, including religion, governments, and the intellectual study of our own existence. If we consider humans in an evolutionary context, we have already transcended that existence through language and culture, and we are working out some of those ramifications now.

Some of the technological tools we have developed over the past century pose existential risks to our planet and to human existence, but we have choices about how humanity progresses from here. One of the choices is how we incorporate AI into society. Another is how we use genetic engineering. I see these choices as extensions of how humanity has already transcended its existence as a social animal through language and culture, especially over the past couple of millennia.

AI continues this radical change by providing us with powerful tools to examine our language, culture, and mental life and even to model aspects previously specific to what it means to be human.

For me, most aspects of human-specific existence occur through the individual’s social embeddedness in historical reality. I don’t see us transcending humanity itself with biological or computational enhancements, but we can develop a socio-technical existence that enhances what it means to be human.

How would you describe your childhood and spiritual evolution?

I grew up on a farm in East Tennessee, and I attended a Southern Baptist church that was more intellectual than some Southern Baptist churches in the Deep South because there was a Baptist-affiliated college in the town. When I moved out of the South for school and work, I attended a range of Protestant churches — Presbyterian, Methodist, and later Lutheran. While in Houston working on the Human Genome Project, I spent a couple of years at a charismatic evangelical church.

When I moved to California, where I worked at six startups in two years, I spent some time both at Glide Memorial Methodist in San Francisco and Mercy Center, a Catholic spirituality center. That exposed me to Catholic spirituality and eventually Christian-Buddhist dialogue.

I got a master’s in theology and taught at Santa Clara University, immersing myself in teaching, writing, and furthering the dialogue between science and religion. More recently, I’ve become interested in the integration of science, religion, and psychology. I moved from a neuroscience focus to an AI and machine-learning focus.

My wife is Eastern Orthodox. When I married her, I converted. Just being Orthodox made me Orthodox. That tradition was strong enough that it almost reinterpreted my entire life up until that point.

Did you have a first “aha” moment? A series of “aha” moments?

When I joined the Baptist Church, that was significant.

In graduate school in Michigan, when I was studying computer science, I was on my way to my mathematical logic class and walked by this fountain and was struck by the beauty of the flowers, the birds, and the water. That opened an awareness of spirituality within nature, which I also experienced on several trips to Yosemite.

Attending a charismatic church while simultaneously working in a medical school genomics lab changed my perspective on healing and spirituality.

Even when I’m talking about theology or teaching it, there’s still a history that goes back to growing up on a farm and attending a Baptist church. There’s a continuity, though I see I’m in a very different place than I would have predicted earlier in my life.

You’ve written that three aspects of human spirituality are important for developing moral AI: experience, community, and striving. How so?

I regard experience as an encounter and interpretation. We can interpret things biologically. We can interpret things mentally. We can interpret things very differently and they’re all valuable. By separating out the interpretation, we can look at the different ways in which we interpret things.

For me, this ties in with 20th-century philosopher Josiah Royce’s philosophy of a “community of interpretation,” the idea that we’re interpreting one to another. Spirituality is something that emerges in these interactions in certain kinds of communities.

Imagine the conversations that took place in the New Testament church and continued for centuries, interpreting the life of Christ. That led to the emergence of a community of interpretation that Royce and the Rev. Martin Luther King Jr. called the “Beloved Community.” Their idea of the church was that it is this mutual interpretation of love and the life of Christ.

We also can apply this to other communities of people, or even more broadly. One of my professors applied it to nature, describing a Community of the Beautiful that could be expanded to cover all of nature.

Bringing in the AI piece, we can expand what we mean by interpretation and a community of interpretation. What does this look like if it becomes not just the social but the socio-technical? Is there some way we can structure our mutual interpretation with AI that leaves us in a place that points toward spirituality or morality?

[Image: a word cloud in which color warmth shows how natural language processing interpreted the interview in terms of two topics it learned from the conversation.]
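
For the technically curious, here is a minimal sketch, not the pipeline actually used for the image above, of how a two-topic word cloud of this kind might be produced in Python. It assumes scikit-learn and the wordcloud package, and a hypothetical interview.txt holding the transcript:

    # Minimal sketch of a two-topic word cloud; assumes scikit-learn and the
    # wordcloud package are installed. "interview.txt" is a hypothetical file.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF
    from wordcloud import WordCloud

    paragraphs = open("interview.txt").read().split("\n\n")

    # Learn two topics from the transcript's paragraphs.
    tfidf = TfidfVectorizer(stop_words="english")
    doc_term = tfidf.fit_transform(paragraphs)
    topics = NMF(n_components=2, random_state=0).fit(doc_term)
    words = tfidf.get_feature_names_out()

    # Color each word warm or cool according to its dominant learned topic.
    topic_of = {w: topics.components_[:, i].argmax() for i, w in enumerate(words)}
    def warmth(word, **kwargs):
        return "darkorange" if topic_of.get(word) == 0 else "steelblue"

    freqs = dict(zip(words, doc_term.sum(axis=0).A1))
    WordCloud(color_func=warmth, background_color="white") \
        .generate_from_frequencies(freqs).to_file("wordcloud.png")

Color warmth then plays the role described in the caption: warm words lean toward one learned topic, cool words toward the other.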

How would you define “machine spirituality”? And what can we learn from it?

I consider spirituality broadly. We can talk about the spirituality of a place, such as a wilderness area or an old cathedral. Or we can think of the spirituality of a collection of people, as in a city, a company, or a sports arena. Among these spiritualities, or proto-spiritualities, is the spirituality of a church or a religious community, such as the Christian church.

I consider AI to have spirituality when it can participate in such a community and share goals, purposes and mutual interpretations with humans and other AI systems. At its foundation is the ability to strive toward some committed moral idea with others in mutual interpretation.

What’s the simplest way to describe the “self” or “soul”? Are they the same?

I see them as different.

I understand the “self” to refer to what is socially constructed by a person and the way others in society refer to that person. From social psychology, and George Herbert Mead in particular, I see that society comes first, then awareness of others, then the extension of that other-awareness back onto one’s self.

My understanding of the soul as the essence of a person is heavily influenced by the medieval philosopher Thomas Aquinas. Looking to Aristotle, Aquinas described the soul as the form of the body, which roughly corresponds to what is meant by information today. Information had a deep meaning for Aquinas as the essence of what something is.

We typically think about information as much “lighter” now, as something we know or communicate through computers. However, there are physicists who see information as fundamental to reality, at least on par with mass and energy.

You’ve written that formulating a unified view of the human person requires both scientific and religious perspectives. Will you explain why and the significance of this?

I believe there is converging evidence, from looking at nature through cosmological development and at the person through an evolutionary and social-scientific lens, that multiple perspectives are needed to characterize the person both scientifically and religiously.

In general systems theory, among other theories, human scientific study is organized into four levels—physical, biological, psychological, and social. I see the need to add two boundary levels to that.

The first characterizes nature before it consists of “things”: subatomic particles, quantum fields, relativistic spacetime, and other precursors to atomic elements and molecules. In particular, it characterizes a realm of potentiality not normally ascribed to physicality.

The last level refers to what humans have created through social and linguistic interactions over the past couple of millennia: historical religions, philosophy, and complex political, civic, and economic systems. These phenomena are characterized by values, moral norms, and spirituality, and are pointed to by art, poetry, and religion.

Although it is common to divide the mental and spiritual into different realms than the physical, I consider the mental to emerge from biological processes within a social context and the spiritual to emerge from those social, mental, and linguistic interactions. Mental and spiritual processes are thus very much human and natural.

This doesn’t mean that what they refer to doesn’t exist. I just separate out the ideas of a tree, perfect circle, person, or God from what those ideas refer to and consider the ideas as embodied and culturally contextualized, with the referents knowable to humans through these six perspectives.

Are you building “moral AI” now?

Yes and no.

No, I’m not actually trying to construct “moral AI.” I have a vision of how to do it, but I don’t think all the tools are quite there yet. Still, I have a sense of what might lead the way. Whatever happens in 10 or 20 years, there may be some pieces we are building now that will enable us to say we were part of it. We just don’t know which pieces yet.

How would you describe the state of your science?

One concrete goal is to use natural language processing to build an AI that can interpret other AIs and business documents, and then extend that to understanding moral psychology and theology. Language is an important aspect because we use it to construct ideas about ethics and talk about the idea of God, for example. Even if we thought we could build a moral AI completely separate from human society and language, I would have no idea how to do it. How would it communicate?

Do you see a possibility that AI will make humans more moral?

It could make us more aware of our morality. We can study morality as much as we want, but that’s not going to make us more moral. What we read and learn is pretty distinct from the way we behave, and part of that is neurological. For a human, there’s a disconnect between what we know as mental schemas and our behavior. For a computer, there’s no disconnect: we tell a computer something and it does it. If respect for human dignity is built in a particular way, the computer would just automatically act on it.

The hard part for AI is to get the computer to do something other than what it’s programmed to do. We’re talking about equipping AI with practical wisdom, or general moral common sense: an ability to balance ethical principles and near- and long-term perspectives. Is it more important to be honest or more important to be generous? Is it more important to be courageous now, or is it more important to build affiliation?

If we can build an AI that has that ability in some general realm, such as being able to communicate through language, then we can use it as a way of monitoring other AI, with which it will be able to interact millions of times per second, far faster than a human could. At the same time, it will confront dilemmas, for which it consults a human, asking, “In this situation, what would you do?”
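
As a rough illustration of that monitoring pattern, and with every name hypothetical rather than drawn from any system Mark describes, a sketch in Python might look like this: a fast automated reviewer decides clear-cut cases at machine speed and escalates nearly balanced dilemmas to a person.

    # Sketch of a human-in-the-loop moral monitor; all names are hypothetical.
    # Routine cases are decided automatically; dilemmas (nearly balanced
    # competing principles) are escalated to a human.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        principle_scores: dict  # e.g. {"honesty": 0.9, "generosity": 0.4}

    DILEMMA_MARGIN = 0.1  # how close two principles must be to count as a dilemma

    def review(action: Action, ask_human) -> bool:
        scores = sorted(action.principle_scores.values(), reverse=True)
        if len(scores) > 1 and scores[0] - scores[1] < DILEMMA_MARGIN:
            # Competing principles are nearly balanced: consult a human.
            return ask_human(f"In this situation, what would you do? ({action.description})")
        return scores[0] > 0.5  # clear-cut case, decided at machine speed

    # Usage:
    # review(Action("publish draft report", {"honesty": 0.8, "generosity": 0.3}),
    #        ask_human=lambda q: input(q + " [y/n] ") == "y")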

If Elon Musk’s whole-brain interface were available today and you could be certain it would in no way harm you, would you try it?

I would want a little more information. Such as, what part of the brain is it connected to? If it’s the limbic system, I’m not so sure. If it’s my temporal lobe, then maybe. I started offloading my calendar to my computer years ago, so I see a natural progression. If this device would save me from having to type, then sure.

Do you feel called by God to be doing this work?

I don’t have a short answer to that. I don’t see God as micromanaging, but I do have a sense of the direction in which I’m called to go. Based on the dispositions we develop, there are ways to respond. But we still have to make choices about how we interpret societal change and respond to it.

What risks do you see in the development of AI?

There’s a risk that we end up with advanced computers and robots but don’t really improve our lives: they make some things more efficient, for example, or make some kinds of work safer, but we’re not really improving who we are.

We’re creating a dangerous technology that could lead to our own destruction. I think there is a huge existential risk that we create a technology that threatens aspects of our existence, our potential, our economy, our health, our dignity, and our well-being.

I think there’s also a risk of overemphasizing these dangers and missing opportunities where we could benefit humanity and create greater human potential.

And what potential do you see in the development of AI?

I see an opportunity to understand aspects of ourselves better: the way we think and make decisions, even our spirituality. I think there’s a potential to make society better. AI is a unique tool that can amplify good and bad aspects of the human mind and social interaction. I believe there is an opportunity to use that tool to amplify constructive aspects of society, such as equal access to education and information, and to identify problematic aspects, such as online hate speech and bullying.

Taking it back to my quote from Ephesians, I potentially see a place for grace and a place for increased good in the world, one that enables the future to take on the form it needs, depending upon how society changes.
