A few years ago, I had the opportunity to speak with Dario Amodei on my show. During our 2024 conversation, I raised concerns about the concentration of power in the hands of a private CEO if the technology he was describing were realized. Dario agreed, acknowledging the risks: “While it’s acceptable at this stage, the ultimate concentration of such power in private hands feels inherently undemocratic.” He suggested that reaching that level of power might necessitate nationalization. I countered that if we ever got to such a point, the desire for nationalization might be absent. Fast forward to today: we find ourselves at that very juncture, yet events are unfolding in an unexpected manner.
The government once threatened to invoke the Defense Production Act to nationalize Anthropic, though it ultimately did not. Instead, the strategy appears to be one of undermining Anthropic, punishing the company and making a cautionary example of it so that others do not pose similar threats. As the automation of our society increases, the governance of these advanced models becomes a pressing concern.
The current political climate in the United States complicates matters further. The administrations that cycle through our political landscape are dramatically different from one another, making it difficult for any AI model to align effectively with both sides. This “alignment problem” extends not just to users or companies, but critically to governments as well. The complexities are daunting, and part of the challenge lies in our struggle to articulate these issues with the clarity they deserve.
One framework that resonates with me, particularly as an American, is the First Amendment. This principle underscores the reality that diverse models will align with various philosophies and government preferences, leading to inevitable conflicts and adversarial interactions. In such a landscape, we find ourselves confronting fundamental questions of political philosophy. As a proponent of classical liberalism, I advocate that private entities, and not governments, should define alignment.
It’s crucial to recognize that this idea is disconcerting for many. We’re discussing models that act almost independently; we’ve relinquished some control. Even before we reached this stage, numerous voices from within the Trump administration and its circle, including figures like Elon Musk and Katie Miller, depicted Anthropic as a radical entity intent on harming America. Trump labeled Anthropic a “radical left woke company” and called its members “left-wing nut jobs.” Emil Michael called Dario a “liar” with a “God complex.” Elon Musk, who heads a rival AI firm, has relentlessly targeted Anthropic on social media, framing it as a threat to his political and business interests.
One way to understand the heightened concern over supply chain risks is through the lens of political ideology. While not all members of the administration may feel this way, some clearly perceive that the success of certain AI systems could pose a long-term political risk to their agenda, seeing Anthropic’s differing political position as a justification for undermining it.
I suspect many of those involved don’t fully comprehend the broader dynamics at play. My ongoing point is that their understanding may not match the complexities we’re discussing. If the threat to annihilate Anthropic is carried out, it essentially becomes a form of political assassination. That is why invoking the First Amendment is paramount to me; it reinforces the stark principles at stake. It’s precisely why I composed a lengthy essay that will undoubtedly alienate some on the right, because these issues truly matter.