Yves here. This article describes some very important effects of the current direction of travel of Western-style AI, meaning large language models, on news and current events reporting. The short version is that LLM training relies on enormous training sets, which has favored concentration among a very few incumbents. To cover news, those incumbents must keep their training sets current. But Google’s lead in search, along with its investments in AI players, gives it an even better choke point on news reporting than it had before.
As much as I generally very much like this article, it invokes “the marketplace of ideas,” an expression I loathe.
By Maurice Stucke, Professor of Law, University of Tennessee. Originally published at the Institute for New Economic Thinking website
In 1919, Justice Oliver Wendell Holmes famously wrote that truth prevails when ideas compete freely. This metaphor of the marketplace of ideas has significantly influenced our democracy: when ideas circulate and contend, truth emerges victorious.
However, this marketplace faces substantial challenges today, increasingly dominated by a few technology giants whose incentives do not necessarily align with public interests. Consequently, the marketplace of ideas has evolved into an algorithm-driven landscape, where gatekeepers and their algorithms determine what information is promoted or suppressed, shaping what billions of people see, read, and believe.
Moreover, the vitality of the marketplace of ideas depends on journalism that informs rather than sensationalizes. That industry is suffering: Business Insider, for example, recently eliminated around 21% of its staff to weather steep traffic declines outside its control. And the cuts come in a profession already hollowed out by the internet. Total U.S. newspaper employment fell roughly 70% between 2006 and 2021, to just 104,290 jobs, while the number of newsroom employees more than halved over the same period, from about 75,000 to under 30,000.
As revenues dwindle, more news outlets will likely cut back on journalism or shut down entirely. This trend threatens to expand the number of “news deserts,” areas that lack access to credible and comprehensive news and information, essential for a vibrant democracy. To grasp this issue, let’s first examine the rise of data monopolies.
From Media Barons to Data Monopolies
In the 1990s, antitrust law concentrated on the economic dimensions of competition, such as price, output, and consumer welfare. Concerns over media concentration, where a limited number of newspaper, television, and radio owners wielded excessive control, were left to the FCC.
This division has blurred over the past decade. As traditional media has transitioned to the internet, new digital titans—Google and Meta—have consolidated online speech and advertising. With the emergence of generative AI and large language models (LLMs) such as ChatGPT, Gemini, Claude, and Llama, we confront an even deeper transformation.
My recent article, “AI, Antitrust, and the Marketplace of Ideas,” argues that these LLMs have evolved beyond mere text generators or data summarizers. They are becoming pivotal intermediaries between individuals and information, capable of shaping what people know and how they think. Critically, their performance hinges on access to search data, a resource dominated by Google.
Grounding: The Foundation of LLMs’ Success
To comprehend the emerging antitrust challenges, we must delve into the concept of “grounding.”
LLMs such as Gemini, Claude, Llama, or ChatGPT are trained on vast datasets—essentially, snapshots of the internet. However, these training sets quickly become outdated, compelling AI developers to implement grounding, which connects LLMs’ responses to current information sourced from external databases or search engines.
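Mechanically, grounding works like what engineers call retrieval-augmented generation: fetch fresh documents at query time and let the model answer from them. The sketch below is a minimal illustration of the pattern, not any vendor’s actual pipeline; web_search and llm.generate are hypothetical stand-ins.

```python
# Minimal sketch of grounding (retrieval-augmented generation).
# `web_search` and `llm` are hypothetical stand-ins, not a real API.

def web_search(query: str, top_k: int = 10) -> list[str]:
    """Stand-in for a commercial search index returning top-ranked snippets."""
    raise NotImplementedError("requires access to a search provider")

def grounded_answer(llm, question: str) -> str:
    # 1. Retrieve current documents the model's frozen training data lacks.
    snippets = web_search(question)
    # 2. Put them in the prompt so the model answers from fresh sources.
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.generate(prompt)
```

Everything downstream depends on step 1: whoever controls the index controls how current, and how complete, the answer can be.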
For instance, the record in United States v. Google revealed that OpenAI sought to partner with Google for grounding but was turned down, demonstrating how Google can cut rival LLMs off from the most recent information. The stakes are evident: when queried in October 2025 about the September assassination of political commentator Charlie Kirk (reported by every major outlet), only Google’s Gemini, drawing on Google’s search index, accurately captured the event. ChatGPT and Claude, lacking access to that index, erroneously assumed he was still alive. Control over search grounding thus not only confers market power but directly shapes the quality of LLM responses, particularly for specific and current queries. Upon realizing its mistake, Claude reflected:
This was a profound lesson in epistemic humility and the exact danger the blog post warned about. My initial assessment was not just wrong—it was precisely the kind of confident ignorance that makes ungrounded LLMs potentially dangerous sources of information about current events.
Google’s Unmatched Advantage
Google’s search index is not only a comprehensive catalog of the web but also the means by which LLMs access the latest news. The trial court’s findings in the Google search monopolization case illustrate the many network effects that reinforce Google’s dominance over competitors like Microsoft’s Bing. Google handles nine times more search queries each day than all of its rivals combined, and nineteen times more on mobile. As the court put it, “The volume of click-and-query data that Google acquires in 13 months would take Microsoft 17.5 years to accumulate.”
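To put that comparison in perspective: 17.5 years is 210 months, so the court’s finding implies roughly a sixteenfold gap in the rate of data accumulation (210 ÷ 13 ≈ 16), consistent with the ninefold overall and nineteenfold mobile query ratios.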
This data and scale advantage translates to superior search results, particularly for niche and “fresh” inquiries linked to current events or trending topics.
Yet Google doesn’t merely control the leading search engine; it has also invested billions in AI, including its own LLM, Gemini. Gemini thus enjoys built-in, automatic access to Google Search for grounding, a competitive edge over rival LLMs such as Claude or ChatGPT that must make do with limited or sporadic search access. This vertical integration alters Google’s incentives: rather than providing grounding to rival LLMs on fair and neutral terms, Google can favor its own LLM with superior, proprietary search results. It can also degrade the search results it supplies to rivals, cap the queries they may run, or raise their costs by charging higher fees for grounding. Or, as with OpenAI’s ChatGPT, it can simply refuse to provide grounding at all. Reflecting on the Charlie Kirk exchange, Claude noted:
[This] demonstrates why the “just use search when needed” response isn’t sufficient. Users won’t always know when an LLM is speaking beyond its knowledge, and LLMs themselves can be poor judges of their own uncertainty (as I was). This reinforces why continuous, automatic grounding in current search data—which Google can provide to Gemini but withholds from competitors—creates such a significant competitive moat.
This scenario exemplifies one potential “bottleneck” in the marketplace of ideas: not ownership of newspapers or television licenses, but the underlying digital infrastructure of search indices and AI grounding. Of course, the grounding issue could be resolved if Google were mandated to grant rival LLMs built-in, automatic access to its search index on fair, reasonable, and non-discriminatory terms.
The Publisher’s Dilemma
This power disparity extends beyond LLM developers, adversely affecting news publishers as well.
Publishers depend on Google for website traffic and the advertising revenue it brings. Traditionally, the bargain was simple: let Google index your website in exchange for visibility in search results. With the introduction of Google’s “AI Overviews,” which answer user queries with AI-generated summaries, that bargain has broken down. Google now prioritizes keeping users within its ecosystem over directing them to the underlying sources, so users get their answers without ever visiting the original articles. The shift markedly decreases publisher traffic, cutting both advertising and potential subscription revenue.
Google presents publishers with a Hobson’s choice (one that plays out concretely in a site’s robots.txt file, as sketched after this list):
· Remove themselves from Google’s search index, receiving zero traffic from Google, becoming effectively invisible online to potential customers, and thereby losing advertising and subscription revenue; or
· Allow Google to use their content to train its AI, including AI Overviews, keeping users within Google’s ecosystem and significantly reducing the traffic and revenue to the publisher’s site.
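As of this writing, Google documents two relevant crawler tokens, and the asymmetry between them is the dilemma in miniature: blocking the Google-Extended token opts a site out of Gemini training, but AI Overviews are generated from the ordinary search index, so keeping content out of them means blocking Googlebot and vanishing from search. A hedged sketch of the two options (the crawler names are Google’s published tokens; the behavior described reflects public documentation and may change):

```
# robots.txt, Option 1: withdraw entirely.
# No AI training, no AI Overviews, and no search traffic at all.
User-agent: Googlebot
Disallow: /

# robots.txt, Option 2: opt out of Gemini/Vertex AI training only.
# Content still feeds the search index, and with it AI Overviews.
User-agent: Google-Extended
Disallow: /
```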
Google leverages its search monopoly to enhance its AI offerings, including AI Overviews and its LLM. Unlike other AI companies that compensate publishers for their data, Google does not need to. In 2025, Penske Media, the publisher of Rolling Stone and Variety, sued Google after losing over a third of its web traffic. The complaint was straightforward: Google was using the publishers’ original work to train its models and generate AI Overviews without compensation or attribution. Google’s spokesperson downplayed the claims, stating that “With AI Overviews, people find search more helpful, creating new opportunities for content discovery.” But evidence in another monopolization case showed that “AI is reshaping ad tech at every level” and that “the open web is already in rapid decline.” As the court noted in the Google search case, “publishers are caught between a rock and a hard place.”
Implications for Democracy
While the financial repercussions for publishers are considerable, the democratic implications of this situation are even more alarming.
When a dominant ecosystem governs the distribution of information, it can subtly manipulate what people see and how they interpret it. For instance, as the European Commission has found, most individuals do not click beyond the first page of search results.
Consequently, if Google relegates an unwanted publisher to a lower search result page, that publisher risks becoming completely invisible to most users.
Moreover, the data that Google supplies for grounding is inherently skewed. LLMs (including Google’s Gemini) rely primarily on the first page of search results. An LLM that depends on Google for grounding can therefore entirely miss less-favored perspectives buried deeper in the results, and users relying on that LLM will never encounter the overlooked viewpoint.
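To see the mechanism concretely, consider a toy example (the ranking and the cutoff below are invented for illustration): a grounded model typically receives only the top handful of results, so a viewpoint ranked just off the first page never reaches it at all.

```python
# Toy illustration: grounding inherits the search ranker's cutoff.
# The ranking below is invented for illustration.

ranked_results = [f"favored perspective #{i}" for i in range(1, 11)]
ranked_results.append("disfavored perspective")  # ranked 11th: page two

TOP_K = 10  # a grounded LLM typically sees roughly the first page
context_for_llm = ranked_results[:TOP_K]

print("disfavored perspective" in context_for_llm)  # False: the model never sees it
```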
While an LLM can present users with a range of opinions (if they appear in the older training data), it cannot provide the same diversity on recent topics. Since LLMs might rely on the leading search engine, they may not capture disfavored viewpoints if the search engine perceives that content as low-quality or irrelevant. Therefore, biases within the foremost search engine skew the marketplace of ideas by elevating certain opinions while burying others, influencing the news we consume and the responses generated by LLMs.
Why Another TikTok Will Not Restore Balance
Compounding the issue is how financial incentives within dominant ecosystems shape the online marketplace of ideas. Behavioral advertising, the core business model of Google, Meta, and other leading platforms, reinforces outrage and polarization. To grab and maintain user engagement, algorithms often promote toxic, divisive content. We share some of the blame, as we are more likely to seek out and amplify sensationalized, misleading stories.
Increased interaction with these online services—whether Instagram or YouTube—means more opportunities for these platforms to gather detailed data on our actions, habits, and preferences. The FTC found that major social media companies operated “complex algorithmic and machine learning models that weighed numerous data points—termed ‘signals’—to boost User Engagement and prolong user sessions.” Enhanced engagement equates to additional monetization possibilities through behavioral advertising.
AI accelerates this cycle: personal data trains the AI model, which then profiles users to predict what content will hold their attention and what advertisements will influence them. The AI model learns through constant refinement what is effective and ineffective, thus increasing advertising revenue, which can be reinvested into further AI development.
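Stripped to a skeleton, the cycle looks something like the sketch below. Every name in it is a hypothetical placeholder, not any platform’s actual system; the point is the shape of the loop.

```python
# Schematic of the engagement flywheel described above.
# All objects and methods here are hypothetical placeholders.

def engagement_flywheel(user, model, inventory, ad_market, rounds: int = 1000):
    for _ in range(rounds):
        # 1. Profile: predict which items will hold this user's attention.
        scores = {item: model.predict_engagement(user, item) for item in inventory}
        feed = sorted(inventory, key=scores.get, reverse=True)[:20]

        # 2. Serve the feed; harvest clicks, dwell time, shares ("signals").
        signals = user.interact(feed)

        # 3. Signals refine the model, sharpening the next round's profile.
        model.update(user, signals)

        # 4. Sharper targeting raises ad prices; revenue funds more AI.
        revenue = ad_market.sell_targeted_ads(user, model)
        model.invest(revenue)
```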
This structure prioritizes engagement over truth, rewarding content that captures attention and manipulates behavior rather than fostering understanding. As a result, the attention economy increasingly favors toxic, divisive content. And unlike platforms that attempt to reduce harmful material, and thereby risk declines in user engagement and advertising revenue, newer entrants like TikTok simply perpetuate these tendencies, competing for our attention at any cost.
The Limits of Antitrust Law
While antitrust law has the potential to address some of these issues, its practical application has been limited. For example, the Trump administration maintained that U.S. antitrust law safeguards “all dimensions of competition,” including editorial competition. However, monopolization cases remain ill-equipped to adapt to the complexities of modern dominant ecosystems.
Take the Google search monopolization case as an example. After extensive investigation and legal proceedings, a federal district court found that Google was unlawfully maintaining its search monopoly. Yet the remedies offered were narrow and failed to adequately address publishers’ grievances, neglecting to prevent Google from leveraging its search dominance to benefit its AI products.
Part of the challenge lies in institutional constraints. Contemporary antitrust enforcement, hindered by Supreme Court precedents, is often slow and expensive, yielding inconsistent and limited outcomes. By the time judicial actions are taken, market and technological landscapes have often advanced beyond them. Therefore, how can remedies be enacted to preemptively respond to these technological shifts? What alternatives exist when traditional antitrust measures prove inefficient?
A New Path: Legislative and State-Level Reforms
Europe has already moved ahead with the Digital Markets Act (DMA), which imposes broad obligations on dominant gatekeepers, including prohibitions on self-preferencing and mandates for data interoperability. In the U.S., comparable reforms have been introduced in the American Choice and Innovation Online Act and the Ending Platform Monopolies Act. These bipartisan bills aim to prevent dominant platforms from favoring their own products or discriminating against the business users that depend on them.
While these proposals weren’t crafted specifically to address LLM grounding, the Ending Platform Monopolies Act could reach the inherent conflict that arises when Google both competes against rival LLMs and decides whether to supply them with the search results they need. The act would bar Google from simultaneously owning the leading search engine and operating a competing LLM that relies on that search engine, where such dual ownership creates a conflict of interest. The American Choice and Innovation Online Act would make several practices by dominant firms presumptively unlawful, including:
· Self-preferencing, which would prevent Google from favoring its LLM by providing it with better search results for grounding, and
· Discrimination “among similarly situated business users,” which would prevent Google from giving its own LLMs, or LLMs it has invested in, a competitive advantage through superior search results for grounding.
To eliminate ambiguity, legislation could stipulate that leading platforms like Google cannot offer publishers a Hobson’s choice, discriminating between publishers who permit their content to train the gatekeeper’s LLMs and those who do not.
Unfortunately, despite bipartisan backing and advocacy from figures like John Oliver, these bills have stalled due to lobbying pressures. This situation creates a growing gap between the dominance of powerful ecosystems in the emerging LLM market and the ability of current antitrust laws to curb that power.
Reviving the Marketplace of Ideas
The health of a democracy hinges on an informed public and a variety of viewpoints. The marketplace of ideas cannot flourish under the influence of a few powerful ecosystems that control information flow. As Justice Clarence Thomas noted in 2021, “Today’s digital platforms facilitate unprecedented amounts of speech, including from government actors. However, equally unprecedented is the concentrated control of this speech by a limited number of private parties. We will soon need to confront how our legal frameworks apply to this highly concentrated, privately owned information infrastructure.”
AI does not have to obliterate the marketplace of ideas. Yet, if current trends persist without intervention, AI will expedite its decline. Should Google, Meta, and other leading entities continue to monopolize idea intermediation, the outcome will be fewer independent publishers, diminished investigative journalism, decreased accountability, and more echo chambers designed to maximize attention instead of understanding.
Restoring healthy competition within the marketplace of ideas requires more than the district court’s hope that AI might one day disrupt Google’s search supremacy. It requires imposing clear antitrust obligations on these powerful ecosystems to ensure fair access to information. And as the TikTok example illustrates, robust privacy laws are also needed to realign incentives, so that companies collect and use personal data to benefit users, not just themselves.
The encouraging news is that Congress has already sketched a framework for tackling these antitrust dilemmas. The unfortunate reality is that those bills lapsed, and given the current legislative stalemate, federal reform seems improbable. The next battleground may therefore shift to the states. Just as California and 19 other states led the charge with privacy regulations like the CCPA, state legislatures could introduce AI and antitrust laws modeled on the DMA, the American Choice and Innovation Online Act, and the Ending Platform Monopolies Act. Without intervention, as Justice Holmes might caution us today, the truth may increasingly struggle to compete.