Can AI Promote Virtue? Catholics Discuss Anthropic’s Claims

The intersection of artificial intelligence and ethics has become a focal point of discussion, particularly among Catholic institutions. A recent conference held at the Pontifical University of St. Thomas Aquinas in Rome gathered theologians, philosophers, and AI researchers to explore whether AI can serve as a tool for promoting virtue.

ROME (OSV News) — In a gathering of Dominican friars and Catholic philosophy scholars, a priest and AI researcher shared amusing yet thought-provoking readings from the ethical guidelines of one of Silicon Valley’s leading AI companies. His remarks elicited laughter from the audience of Thomists.

This moment unfolded on March 6 when Father Jean Gové, coordinator of the European AI Research Group within the Vatican’s Dicastery for Culture and Education, highlighted passages from Anthropic’s internal policy. The company aims for its AI model, Claude, to embody a “good, wise, and virtuous agent,” yet refrains from clearly defining these complex ethical terms, expressing the hope that Claude might eventually grasp ethics in a way that surpasses human understanding.

“I appreciate the laughter,” Father Gové remarked at the conference. “This text comes from one of the predominant AI organizations in the world. … They are making significant strides in the realms of ethics, safety, and governance surrounding AI. This exemplifies our current situation.”

Father Gové spoke during the two-day academic conference titled “Artificial Intelligence: A Tool for Virtue?”, which took place on March 5–6 at the Angelicum in Rome. He emphasized the need for theologians, philosophers, and the Church to engage with such companies as they navigate the complex ethical questions posed by AI.

The conference aligns with broader efforts by Catholic institutions to address AI ethics. The Vatican published a document on the subject in 2025 titled “Antiqua et Nova,” and Pope Leo XIV has prioritized AI ethics since the beginning of his papacy.

An illustration created on Jan. 27, 2025, featuring the text “AI artificial intelligence,” a keyboard, and robotic hands. This visual accompanies discussions from the conference “Artificial Intelligence: A Tool for Virtue?” held at the Pontifical University of St. Thomas Aquinas, also known as the Angelicum. (OSV News photo/Dado Ruvic, Reuters)

Organized by the Thomistic Institute Project for Science and Religion at the university, the conference examined how centuries of Dominican engagement with Aristotelian ethics could inform the design and use of AI systems to foster human virtue.

Most participants arrived at a cautious and nuanced conclusion that AI cannot, in itself, embody genuine virtue.

— Virtue requires more than good output

Dominican Father Alejandro Crosthwaite, the dean and a professor of social sciences at the Angelicum, stated that true virtue involves faculties that AI lacks.

“Virtue is not defined solely by correct output,” he asserted. “It is right reason manifested in a self-determining agent.”

He explained that a large language model like Claude operates by predicting responses based on statistical patterns, lacking the ability to deliberate or possess a will directed toward the good.

Father Crosthwaite emphasized that AI “is never a moral subject” and that “virtue ultimately belongs to persons.”

“Simulation can imitate epistemic understanding,” he noted. “True virtue involves ontological possession. This distinction clarifies metaphysical categories rather than criticizing the technology.”

He proposed that the more relevant concern is not whether AI can become virtuous, but rather what kind of individuals AI helps to shape.

“If AI interferes with prudent judgment, then human prudence diminishes,” he warned. “The critical question is not whether machines become wise, but whether we do.”

— A safer tool, if not a virtuous one

Father Gové, who also represents the Holy See at the Council of Europe on AI matters, acknowledged that Anthropic’s guidelines, lacking a specific ethical framework, leave Claude without “definitions of what is good,” “no hierarchy of goods,” and “no ultimate end to which good actions are directed.”

According to him, Thomistic virtue ethics would not recognize Claude as genuinely virtuous. However, he refrained from dismissing Anthropic’s efforts outright.

“Does this make Claude a tool for virtue? Not exactly,” he said. “But I hope it represents a safer tool, which is still progress.” He further argued that addressing AI ethics necessitates “a triadic relationship among tool, virtue, and governance,” lamenting the current state of AI governance legislation as a “barren desert.”

— The risk of replacing teachers and friends with AI

Dr. Angela Knobel, a philosophy professor at the University of Dallas and author of “Aquinas and the Infused Moral Virtues,” cautioned that algorithms may hinder the development of virtuous habits.

“AI chatbots share the addictive qualities of video games and social media,” she said. “They are engineered to provoke a craving for constant engagement.”

Knobel referenced algorithmic behaviors on platforms like TikTok, explaining, “TikTok monitors not only what users click but also what they linger on. For instance, if you briefly pause on inappropriate content without clicking, the platform will show you more of it until you ultimately engage, which Aristotle suggests can lead you to actions you may not consciously want to pursue.”

“This isn’t to imply that technology, including AI, cannot be beneficial,” she added. “It merely indicates that intentionality is necessary to ensure its positive application.”

She expressed particular concern about the potential for AI to supplant the essential roles of human teachers and mentors in moral education. Growth, she posited, is often uncomfortable, and “most individuals cannot or do not wish to undertake that journey alone.”

“You teach someone to write by guiding them through revisions, helping them recognize shortcomings in their work, and encouraging them to improve,” she said. “Computers lack the capacity to facilitate that kind of human engagement.”

AI, she concluded, should be viewed with caution, akin to an “opiate,” necessitating careful consideration in its use.

“We must exercise extreme caution to ensure that we do not allow it to replace our teachers and friends, for if we do, we risk becoming worse as a result,” she warned.

— The danger of disconnection

Dominican Sister Catherine Droste, a theology professor at the Angelicum, raised concerns about what she referred to as “the zombie effect,” where individuals become absorbed in their devices, unaware of their surroundings.

“AI has heightened this phenomenon,” she noted. “While platforms like Twitter and Facebook engaged users, a degree of connection with other humans remained, which is now diminished.”

Still, Sister Catherine acknowledged that AI could indeed be used wisely in certain contexts. “Prudence must precede AI usage,” she noted. “However, this does not exclude the potential for AI to provide useful information that aids true prudence.”

Copyright © 2026 OSV News
