A seminar on Artificial Intelligence (AI) was held at Santa Clara University (Silicon Valley, California) from April 3-5, 2019, sponsored by the China Forum for Civilizational Dialogue (an institution born from the joint commitment of La Civiltà Cattolica and Georgetown University) and the Pontifical Council for Culture. The event was hosted by the Tech & the Human Spirit Initiative at Santa Clara.
The meeting brought together, in addition to the two authors of these reflections, 11 other participants, scholars from China, the United States and Europe, to examine how the great changes underway are posing challenges to the Christian and Confucian traditions, as well as to other religious and secular traditions.[1]
The enormous progress made in the last 10 years in the field of AI marks a historical discontinuity. China and the West have just begun to address the implications. In the long term, the AI revolution could redefine several fundamental philosophical questions: If machines surpass humans in intelligence, what will become of human uniqueness, dignity and freedom? Will computers become “aware” and “creative”? And if they do, how will our conception of culture be transformed?
Experience, technology and traditions of thought
For a long time now, our social and cultural experience of the world has been mediated by technology. But the current era marks a qualitative shift. In public spaces, in cars, offices and homes, people are increasingly relying on their mobile phones, tablets, laptops, headsets and earphones, immersing themselves in worlds mediated by technology. Digital assistants provide us with answers, travel directions, and recommendations regarding restaurants and films. In doing so, they constantly draw on a continually updated understanding of our desires and needs. And virtual reality technology will immerse us more and more in artificial worlds. What implications does the current technological revolution have for our conception of the human being, for how we understand and refer to ourselves and others?
Catholic moral theology, centered on the dignity of the human person, can contribute to answering this question. In his 2015 encyclical Laudato Si’ (LS), Pope Francis warned that technology can prevent us from “making direct contact with the pain, the fears and the joys of others and the complexity of their experiences,” and it risks increasing a “harmful sense of isolation” (LS 47).
The main Chinese philosophical and religious traditions, namely Confucianism, Taoism and Buddhism, in turn bring a range of philosophical, ethical and theological resources to the issue. They differ from Christianity, and also from each other, on fundamental themes, including the existence and nature of the divine, the soul, and immortality, but converge on the importance of reflecting on oneself and on a moral life that places itself at the service of the community, however differently understood. They reflect on interpersonal relationships and the meaning of life.
All these traditions take their cue from the world of human beings in flesh and blood and from the centrality of physical, affective and interpersonal interaction, as a starting point for reflection on the good life. And it could not be otherwise, given their historical origins in the distant past.
Therefore, the issue under discussion is how they can and should respond to the growth of a new and technologically mediated world, in which three related dimensions emerge:
1. The development of long-distance communication, from the telephone to radio, television, Internet and social media, has significantly increased the amount of time and energy that human beings devote to interacting with people who are physically distant and even personally unknown.
2. The rapid emergence of AI, which has become more evident with digital assistants, is expanding the time and energy that humans devote to interacting with machines. And the latter are increasingly autonomous in their ability to understand our desires and needs, to respond to them, and even to shape and condition them.
3. The advent of “immersive” virtual reality, which is still in its infancy, will allow individuals to spend more time in non-physical spaces. This growing realism of virtual worlds, driven by the advances of AI and the power of digital processing, will give rise to innovations in commerce, entertainment, education, tourism and other sectors.
The interaction between this new emerging world and the established traditions – that is, the Christian, Confucian, Buddhist and Taoist ones – was the starting point of our seminar in Santa Clara. Contemporary reflections on the ethics of AI often take as their starting point a materialistic and Enlightenment position, developing from the idea of free, autonomous, rational individuals. Through this prism, the key question is whether AI and associated technologies will increase human wellbeing in the sense of freedom, prosperity and peace. From these premises, debates emerge on issues including data privacy, facial recognition, driverless cars and autonomous weapons.
These are necessary and important discussions, but in Santa Clara we did not intend to resume them. Instead, we tried to reflect on the ongoing technological revolution and its implications with regard to fundamental and central issues for ancient religious and philosophical traditions.
The key questions for discussion in the presentations and conversations were the following:
a. What dangers and opportunities does the advent of AI and associated technologies entail for the human person and for interpersonal relations?
b. Will technological advances undermine the human capacity for self-reflection and free will or, on the contrary, could they even improve that capacity from a practical point of view?
c. Are technological advances a threat or, on the contrary, a support for the ability of human beings to establish deep and lasting links with others, in the family, at work and in society in general?
AI and the human person: three key points
It is not our aim to explain in full here the reflections that emerged from the short reports and the broad discussions. However, it is possible to gather some key points.
Certainly the rapid progress in AI has raised critical questions about the future of technology and its implications for humanity. Both in China and in the West, we are at a time of uncertainty and discernment at many levels.
A first level that has emerged is linked to the political and social developments of recent years, which have served to dispel a naive optimism about the ability of digital technologies to make the world a better place.
A second level is linked to the problem of human-machine coexistence, which has arisen while we are still immersed in the conflict and integration between Eastern and Western civilizations. In this case, the key issue is not the relationship between the human person and nature or society, but rather the relationship between humanity and our creations.
A third problem is related to “creativity.” The possibility that we humans can create another intelligent being is more worrying for many Westerners, explicitly or implicitly influenced by the concept of creation, while most East Asians do not have the idea of creation in their tradition.
A series of non-Enlightenment approaches
It is interesting to note that no contribution adopted what we might call a “secular approach” to the subject. In fact, most debates on the ethics of AI start from the Enlightenment ideal of the free, rational and self-determining individual. In this perspective, AI is often considered ethical insofar as it increases human autonomy and capacity, respects consent and privacy, and avoids prejudice.
All the contributions at the seminar instead adopted a more complex approach to the human person. Those which were based on the social doctrine of the Church shared the Enlightenment emphasis on the freedom and dignity of the person, but placed it in the context of the broader moral and social duties founded on the exercise of reason – the natural law – and in revelation. The ethics of AI, from this point of view, go far beyond the issues of freedom and responsibility to address the implications of technology in cultivating solidarity and justice in human communities.
The focus on human relations is very pronounced in Confucianism and Taoism. From a Confucian perspective, the isolated individual is never the starting point for ethical reflection: men and women are born within essential networks of mutual commitments in the family and in society. A central question is the impact of AI on the most immediate human relationships, leaving aside the dimension of universal love and also that of a transcendent God.
In Taoism, too, the emphasis is on engaging and dynamic relationships, and the natural order in which human beings are actively involved. To the extent that AI “could be redirected to facilitate the Tao with respect to oneself, freedom and human dignity, the common good, and the cosmos, Taoists would look at it with approval,” said one speaker.
From a Buddhist perspective, technologies that advance people on the path of “awakening” are to be welcomed, but they do not always have clear implications for the human person and interpersonal relationships. As one expert said, “Buddhism denies the existence of a fixed reason: no process of transformation follows fixed procedures; everything changes with destiny. Life, both in reincarnation and in liberation, has infinite diversities.”
Drawing on different traditions, including Marxism, the conversation converged on a holistic approach, attentive to the intellectual, emotional, physical, and sometimes spiritual dimensions of the person. As one participant suggested, “human beings and nature should be understood, and deserve to be understood, in various ways, including those that seem vague and mysterious, but ethical. When AI researchers do their best to reduce nature and humans to digital data, we hope they will continue to show respect for the different ways of understanding the world.”
Technology: neutral tool or part of life?
In the debate, there was a clear tension between two ways of thinking about technology and AI.
A first perspective conceives technology as a neutral tool from a moral point of view: an instrument that can be used for good or evil. One of the participants, for example, said that the dangers to which AI exposes individuals “do not come from the technologies themselves, but from the human propensity to misuse and abuse them.” And, from a Taoist point of view, it was noted that “technology can be misused, going against and distancing us from our own nature.”
A second approach conceives technology as an irreducible part of the world of human life, which conditions the understanding of ourselves and of free will. As another participant suggested, “technologies do not simply exist as passive objects in the world.” In the interaction with AI and other technologies “we ourselves are reconfigured: our bodies, thoughts, feelings, desires and abilities.” In the same vein, it was also said that technology is “a way of living in the world and of organizing it. It is not, therefore, a separate context, but is becoming increasingly integrated, connected with the environment of everyday life.”
These points of view are not incompatible. The first draws attention to human responsibility in responding to the challenges posed by technology, while the second reminds us that the question is not “if” technology should be accepted, but “how.” One participant said: “I do not believe that the human capacity for self-reflection and free will has been set aside; instead, I believe that we ourselves must pay more attention and use more rigor in the exercise and development of that capacity.”
The importance of cultivating oneself
The interventions agreed on the importance of cultivating oneself. Moral life takes place over time according to the way people adopt and adapt virtues to changing circumstances, developing their character and personality during the process. It is not just an intellectual dynamic: there is a strong emotional and practical component.
One of the participants observed that for the great Confucian philosopher Mencius (370 B.C. – 289 B.C.) human beings distinguish themselves from animals because they cultivate compassion, “they learn to become human.” The increasingly pervasive immersive technologies pose a threat to this moral education. It was noted, for example, that “through games and social media, AI also turns into a weapon aimed at our people, from an early age, regarding the cultivation of virtues such as patience, self-control, and intellectual and moral discernment.” Perhaps there are ways in which virtual worlds could become a formative ground for virtues.
As another participant observed, “the ‘fictitious world’ of virtual reality could become a ‘practical world,’” where we learn to become better people through accelerated moral learning. And “many video games bring children and young people into contact with others from different backgrounds.”
There was mention of positive possibilities, including that of “living as an avatar in the body of someone very different from us, or taking a virtual ‘immersive journey’ to a poor country.” There was even the suggestion of “experiencing a ‘virtual reality’ (VR) account of the life of Jesus.”
A fundamental problem concerning education – in particular moral education imparted through the Internet, personalized AI guardians or immersive experiences in VR – is the consequent reduction of direct human contact. In this regard, it was noted that “for students, the unforgettable teachers are those who are remembered because they have had an influence in their lives, with whom they established a personal bond, and who contributed in some way to their transformation as persons, beyond the specific knowledge and skills transmitted.” From a Confucian perspective came a warning lest AI-assisted teaching “undermine the personal links between teacher and student that are the basis of a life-long learning commitment.”
The fracture of social space
The increased human-machine interaction that perhaps reaches its most evident level in mobile phones – increasingly ubiquitous on the street as well as at tables – also strains personal connections outside the educational context. The Internet and social media have long been acclaimed as spaces for greater connectivity, useful for broadening personal horizons.
Recently, however, observers have complained about human degradation and the progressive transformation of the public sphere into hostile worlds, divided into groups that all think alike. This trend has become more pronounced with the use of AI to categorize information and people in accordance with their established preferences. One of the participants said: “For digital networks to reach their potential in promoting human solidarity, the art of dialogue must be recovered. When people listen to the ‘other’ and allow this voice to overcome their defensive barriers, they are willing to grow in understanding.”
The fracture of public space poses a particular problem for Christianity, since Jesus commanded his followers to love the stranger and to help the poor in their own environment and throughout the world. In particular, “Catholic social thought rooted its distrust of these new technologies in a preference for human development, for which encounter and accompaniment inside and outside the home are essential.”
Care for the elderly
Care for the elderly was identified as one of the areas in which AI can mark a step forward, given the loneliness of many elderly people and the scarcity of competent care workers. From a Confucian point of view, the reliance on robots for family care is particularly problematic. There are those who have argued that AI applications in this area should be limited, stating the concern: “If a nice robot takes loving care of elderly parents without there being a real need, then it relieves adult children of all the obligations of care. And elderly parents are deceived by the thought that the robot cares sincerely for their wellbeing, to the point that they rely more on robots than on their children.”
This emphasis on reciprocity focuses on the interaction between person and person, even if there is a situation of unidirectional dependence, as is the case with care for the elderly. A Chinese perspective offered this: “To be truly human it is not enough to interact with someone or something that is completely dedicated to their needs and desires. In turn, one must think of the other as a unique human person, not just as a means of satisfying those needs and desires. Resources can be replaced with a different person or perhaps a machine, but people are not replaceable.” A similar observation on the importance of mutual vulnerability in human relations was made from a Christian perspective.
The physical element
The example of care for the elderly, which must be holistic to be effective, leads to the centrality of the body. Human beings meet each other, basically, as physical beings. But it is not foreseeable that the robots of the future will be able to replicate this experience because, as was said, “human bodies are made of flesh which is not the case with machines.”
In this regard, interactive technologies such as VR and computer games, which engage our mind and imagination to the exclusion of our bodies, risk distorting our relationships with ourselves and others: “The body is not separated from the mind, and in the Confucian conception of personal education it is as important, or perhaps more important, than the mind.”
A look to the future
All the contributions rejected the temptation to embrace dystopian or utopian visions of the future. The seminar participants recognized that the choices we make now, individually and collectively, will be able to orient AI in a direction that can support or that can harm the human person.
It is worth noting that AI and other advanced technologies cannot solve all the ills of society and do not replace the philosophical and religious reflection that is at the heart of great traditions in both China and the West. They will not lead to immortality; they will not put an end to suffering; they will not solve the mysteries of life. As was observed, “Buddhist efforts to perfect human beings are based on the spiritual level,” not on the technological level. Taoism clearly focuses on free participation in the flow of nature. The face-to-face relationships of mutual dependence and reciprocity that are at the heart of the Confucian tradition will not disappear, whatever the future path of AI. And Christianity will continue to insist on God’s transcendent power over life and death. It was recalled that “the technological person is the same spiritual person.”
Although advanced technology does not invalidate the great traditions and their precepts for humanity, it will continue to have far-reaching effects on the human person and human wellbeing. Increased human-machine interaction will attenuate the centrality of interpersonal relationships, questioning our established notions of what it means to be human. Indeed, it was pointed out that “the more AI-human interactions replace inter-human interactions in our daily lives, the less opportunity we will have to experience our humanity and participate in mutual growth. As much as the functional assistance of AI may be desirable, if we do not recognize its limits, we will endanger our own humanity.”
Whether we like it or not, the technological imperative embedded in our economic and political systems, in the East as well as in the West, and in our widespread consumer culture, will drive the continued growth of AI-driven applications in communications, entertainment, medicine, transportation and other sectors. Precisely because this is an inevitable fact, we must not lose sight of the potentially positive effects of the AI revolution on health, education and productivity.
In an imaginable future we might be able to “realize our creativity by inventing and producing works for the pleasure of doing so. We will be motivated to work because we love our work, not because we will need to do it to earn a living.” Opening up to optimism, one of the participants stated that “the ultimate goal of AI should be to build machines that can mimic human intelligence, in order to allow human beings to be closer to the Tao.”
But we can go even further in our thinking in a positive direction, starting from a reflection by the Confucian scholar Wang Yangming (1472-1529), for whom “the great person looks at Heaven, the Earth and the myriad things as one body, the world as one family and the State as one person.” For Wang, this principle of the single body is an ontologically fundamental relationship between all things in the cosmos, which includes nature, human beings and their creations.
From this point of view, it is hoped “that AI with increased capabilities can work with human intelligence to make the planet better and safer.” However, it should also be borne in mind that some are not convinced that we can rule out the possibility that machines will acquire self-awareness and be able to choose their own path. The issue, of course, is very sensitive and generates debate.
Are we close to the singularity?
Already in the last century there was talk of a “technological singularity,” meaning by this expression a moment in the development of a civilization at which technological progress outstrips the ability of human beings to understand and foresee it. It was imagined that artificial intelligence might become superior to human intelligence, along with all the technological advances supposed to follow from such an event.
In the seminar it was agreed that any moment of singularity is still far away. In the meantime, however, we see areas of life – family, friendship, and sexuality, as well as education, care, and solidarity – where the growth of AI and the ubiquity of technology pose problems for our established notions of the human person and interpersonal relationships. We could agree on the importance of certain boundaries, as one participant said: “We can set limits on what we want, and set limits on our means of achieving it. AI needs limits; our ambition needs boundaries. We are called to humility.”
What to limit, and how, is a complex cultural, social, and ultimately political issue. Given the cultural, national and ideological diversity in our world, it will not admit a uniform response, at least for the near future. As one of the participants noted: “social norms widely approved in specific communities should guide the task of integrating ethics into artificial intelligence, and in different contexts there may be morally legitimate variants of those social norms.” Of course, the questions remain open as to what is the appropriate scale of such a variation and what is the possibility of global perspectives and global solutions.
* * *
It is not possible to communicate here all the richness of the debate that characterized the seminar, but through this summary we can see its significance and the thematic breadth. The growing ability of computers to understand and manipulate data, words and images is a real breakthrough. The applications of AI in the fields of communications, transport, robotics, medicine, economics, law and other sectors are raising new questions about the concept that we have of ourselves as persons and our actions and responsibilities according to religious and secular philosophical traditions both in China and in the West.
[1] Here is the list of seminar participants: Bai Tongdong (Fudan University), Thomas Banchoff (Georgetown University), Brian Green (Santa Clara University), Daniel Bell (Shandong University), Julie Rubio (Santa Clara University), Li Silong (Peking University), Bishop Paul Tighe (Pontifical Council for Culture), Peng Guoxiang (Zhejiang University), Fr. Antonio Spadaro (La Civiltà Cattolica), Robin Wang (Loyola Marymount University), Shannon Vallor (Santa Clara University), Tan Sor Hoon (Singapore Management University), Wang Pei (Fudan University). Fr. Dorian Llywelyn, director of the Ignatian Center for Jesuit Education at Santa Clara, hosted the seminar.