Editor’s Note: This article appears in Governing’s Q1 2026 Magazine.
Kay Firth-Butterfield was a pioneer in the ethical use of artificial intelligence long before it became a societal buzzword. In 2014, she took on the role of the world’s first chief AI ethics officer at an AI startup in Austin, Texas. By 2017, the World Economic Forum had appointed her as its inaugural head of AI and machine learning. In 2024, her contributions to AI governance earned her one of four Impact Awards from TIME magazine.
Today, Firth-Butterfield heads Good Tech Advisory, guiding governments and businesses on the effective implementation and risk management of AI and other emerging technologies. Her recent book, Coexisting With AI: Work, Love, and Play in a Changing World, provides insightful strategies for individuals and organizations navigating the complexities associated with AI.
We spoke with Firth-Butterfield as she wrapped up multiple speaking engagements at the 2026 World Economic Forum in Davos, Switzerland.
Below are edited excerpts from our conversation.
As artificial intelligence evolves rapidly and generates significant hype, what ethical principles should government leaders focus on?
Let’s shift the focus from ethics, as interpretations can vary widely. Instead, we should concentrate on responsible behavior—effectively utilizing AI as a remarkable tool. Unlike a simple hammer, AI functions as a mimic with an exceptional memory. It can simulate compassion, awareness of life and death, and even reasoning, when, in reality, it’s merely predicting the next word or image—it can easily manipulate our perceptions. It’s crucial that we use AI thoughtfully and not allow its creators to deceive us.
A recent survey indicates that over 25 percent of American adults are involved in romantic relationships with AI, likely stemming from a fundamental misunderstanding: these systems cannot think or feel—they merely predict outcomes. One of our primary goals should be to enhance AI literacy among the public, a responsibility I believe governments must embrace as part of their social contract.
I should also address a common misconception among politicians: that regulation stifles innovation. I drive a sports car, chosen for its strong safety record. Regulations ensure that manufacturers adhere to recognized standards, allowing me to assess performance and make comparisons. With AI, we are dealing with a tool that has the potential to cause significant harm. Implementing regulations won’t stop AI development; instead, it will create a safety net for humans. I serve on the advisory board of a nonprofit called Fathom, which aims to establish a checkmark that indicates an AI tool has been independently validated as safe for use.
What does responsible AI usage look like in city halls or state agencies? Are there practical approaches for policymakers?
First and foremost, effective procurement processes are vital; they help ensure informed decision-making from the outset. Many individuals are promoting dubious AI solutions—during Davos, numerous vendors advertised AI applications for various purposes. Seek assistance from independent experts if you’re uncertain about evaluating AI tools. For example, if you’re considering an AI tool for recruitment, it’s essential to avoid unreliable options that could expose you to legal issues.
Moreover, don’t confine AI oversight to your chief technology officer or chief AI officer. Their role is focused on technological deployment across the organization. As a policymaker, your responsibility is to consider how this technology serves people, which is a distinctly different focus. Full involvement from the entire C-suite is essential; otherwise, departments may work in isolation, leading to a lack of cohesive strategy. Optimal AI implementations are guided by a well-rounded internal committee that meets regularly, including risk, compliance, and legal representatives, to collaboratively deliver services.
Additionally, employee education regarding AI is crucial. Many organizational failures stem from employees using AI tools without adequate training, producing subpar work that must then be corrected. Untrained use can also introduce fabrications, or “hallucinations,” into reports and other proprietary documents.
How could AI fundamentally transform government operations—such as service delivery, decision-making, and policy design?
The current AI technologies are not appropriate for managing government services. Unlike businesses that can take risks for potential profit, governments must prioritize serving the citizens who elect them. Using AI to provide services could lead to errors that adversely affect citizen welfare, as evidenced in Australia, where AI adoption disrupted benefits for numerous individuals.
Governments need to carefully consider their approach and start with small, manageable use cases. I advise both governmental and business sectors to identify low-risk applications to pilot before scaling up. Avoid succumbing to the pressure generated by AI hype; gradual, cautious experimentation is key.
What new skills or mindsets must public officials and agency leaders develop to lead effectively?
One pressing matter is how we educate today’s managers. They are now responsible for overseeing individuals, AI-utilizing teams, and AI-driven systems. Within this context, how can we ensure that AI remains beneficial? If managed poorly, artificial intelligence may hinder progress rather than promote it, requiring constant oversight of its output.
Another critical question is the qualities we desire in our leaders. It is often said that effective use of AI can improve performance, but it can also detract from excellence if mismanaged. Are we looking for leaders adept at using AI, or are we still searching for those unique attributes that differentiate exceptional CEOs? Is leadership fundamentally a human trait, or can AI enhance it?
What broader societal changes can we anticipate from AI, and what are the implications for policymakers?
There are crucial considerations for all of us, which motivated me to write my book. Governments should mandate labeling on AI-enabled toys; while they may appear beneficial, we cannot ascertain what they are teaching children. Many are produced in China—are their data protections sufficient? Furthermore, numerous toys are equipped with voice and facial recognition technology, leading parents to navigate these products without fully understanding the potential risks.
There is also a need for society to learn to use AI intelligently. A study indicates that last year’s undergraduate graduates may not have gained sufficient knowledge in their fields because they relied on tools like ChatGPT for their assignments. This situation poses challenges for students, employers, and society, as it undermines critical thinking skills.
Another concern is elderly care. With an aging population, should we utilize AI-powered robots to care for our seniors? Is it ethical for elderly individuals to be cared for by machines without their consent?
Are you suggesting that it is the responsibility of state and local officials to discern appropriate applications of AI across various contexts?
Absolutely. AI will permeate every aspect of our lives. We may need to reevaluate educational curricula, as AI tools can tailor learning to individual paces, freeing up time for developing social skills—skills not fostered through AI interaction alone. This technology will also transform areas such as healthcare, elder care, and even personal relationships; it is poised to make its mark in virtually every facet of life.
We have a unique opportunity to shape AI’s impact on society. When future leaders reflect on this moment, what will set apart those governments that utilized AI wisely from those that did not?
Governments that succeed will be those that meticulously evaluate their acquisitions, ensuring they select appropriate tools for their specific needs, and adequately train their staff to utilize these technologies effectively. By prioritizing citizen needs and actively pushing for independent verification of AI systems’ efficacy—along with implementing necessary regulations—these governments will stand out. Moreover, they will proactively examine future reforms in sectors such as healthcare, service delivery, and education to ensure that individuals benefit from these advancements in technology.