
OpenAI Critiques Meta’s Controversial Model and Promises Endless Growth

Recent reports suggest that OpenAI may be following in Meta’s footsteps, prioritizing profits over user safety.

Both companies share a tendency to hype their ambitious plans for future growth.

Both Companies Talking Big About Energy

The AI stock bubble, fueled by inflated expectations of exponential growth, places both companies at the forefront of alarming energy consumption projections in the tech sector.

OpenAI’s CEO, Sam Altman, has outlined the company’s anticipated energy needs over the next decade.

In contrast, Meta is seeking entry into the wholesale power trading market to better manage the immense electricity demands of its data centers, largely driven by AI.

Politico reported on a key Meta executive discussing this shift:

According to Urvi Parekh, head of global energy at Meta, the decision to venture into power trading arose from concerns expressed by investors and plant developers regarding the scarcity of early, long-term commitments needed for investment. Engaging in power trading will allow the company to secure longer contracts.

Parekh stated, “Without Meta taking a more active voice in the need to expand the power system, progress will be slower than desired.”

The New York Times highlighted how Big Tech’s AI ambitions are driving up electricity demand across the U.S.:

The push for artificial intelligence has significantly increased electricity demand for the numerous data centers in areas like Virginia and Ohio. These structures consumed over 4% of the nation’s electricity in 2023, and government analysts anticipate that this figure could rise to 12% within three years. This surge stems from the substantial energy required to train and operate AI systems compared to regular online activities like streaming videos.

Electricity has become a gating resource for the industry. Amazon’s CEO, Andy Jassy, noted that the company could be generating more sales if more data centers were available, saying, “The single biggest constraint is power.”

Utilities typically finance grid projects over decades, raising prices for all consumers. However, tech companies are requesting rapid investment in data centers, leading to greater fears that households and small businesses may shoulder the costs.

One Meta facility in particular is drawing negative scrutiny.

Meta’s Louisiana Power Play

In January, Meta CEO Mark Zuckerberg shared the company’s ambitious plans for a data center in Louisiana in a post on Threads.

Nola.com reported that Louisiana officials expedited legislative changes and tax negotiations to facilitate Meta’s Holly Ridge data center.

404 Media included additional context regarding the power demands of this center:

According to Entergy Louisiana, residential customers in the already financially strained region will see their utility bills rise to pay for the energy infrastructure serving Meta’s facility. Entergy estimates that the cost of a new transmission line will be minimal for ratepayers, but advocates worry the expenses could escalate depending on how long Meta stays committed to the gas plants being built for it; a public hearing has yet to be scheduled.

The Alliance for Affordable Energy described the facility as a “black hole of energy use,” noting that Meta’s energy requirements are roughly 2.3 times greater than those of Orleans Parish, essentially replicating the energy impact of a large city overnight.

Meanwhile, OpenAI’s Sam Altman is equally skilled at the energy hype game.

OpenAI’s Fusion Power Projections

In September, Sam Altman unveiled several projects with massive energy requirements that astonished analysts, as reported by Fortune:

Altman announced plans with Nvidia for AI data centers that could consume up to 10 gigawatts of power, with additional projects anticipated to total 17 gigawatts. This level of power consumption is comparable to the summer needs of New York City and San Diego or the combined electricity demand of Switzerland and Portugal.

Altman asserts that this energy demand will be met by nuclear fusion from Helion, a company he chairs and in which he is a major investor.

However, Fortune cautioned:

If Altman’s assertions sound familiar, it’s because he has made similar claims in the past without success. In 2022, he stated that Helion would “resolve all questions necessary for designing a mass-producible fusion generator” by 2024. Yet as far as anyone knows, that deadline has passed without any sign of progress from the startup.

Such cycles of ambitious projections followed by disappointing outcomes are not new. The fusion power dream has spanned decades, and critics note that it continues to elude realization. There’s even humor in the notion that fusion has been “30 years away” for the past six decades.

Yet, this time may be different.

For now, the joke still applies: claims of imminent fusion power have consistently failed to materialize, and I will wait for tangible results before placing any faith in Altman’s latest technological promises.

Altman’s reliance on hypothetical fusion power to fuel data centers that do not yet exist makes the New York Times’s warning even more alarming.

The concern stems from executives potentially overestimating AI demand or underestimating future computer chip energy efficiency. This miscalculation could leave residents and small businesses to cover much of the costs, as utilities typically recover such expenses over extended periods rather than upfront.

Such fears are not baseless, as many tech companies have announced data center projects that remained unlaunched or faced extensive delays.

Now, let’s discuss the troubling reports highlighting how Meta and OpenAI are neglecting user safety.

Meta Profiting Hugely Off Scam Ads

Reuters uncovered Meta’s substantial revenue derived from fraudulent advertisements:

Internal projections indicated that roughly 10% of Meta’s annual revenue, around $16 billion, would come from ads for scams and prohibited goods. Evidence suggests that for at least three years, Meta failed to stem a surge of ads exposing users on Facebook, Instagram, and WhatsApp to fraudulent schemes and illegal online activities.

Much of this fraudulent activity came from advertisers that Meta’s own internal systems had flagged. Yet advertising bans were imposed only when those systems were at least 95% certain an advertiser was committing fraud. When confidence was lower, Meta instead charged the suspected scammer higher ad rates as a deterrent, while continuing to serve the ads to potentially vulnerable users.

This behavior exemplifies Meta’s strategy of identifying and profiting from scammers while simultaneously targeting susceptible users with advertisements.

This scandal prompted Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) to urge the FTC and SEC to open swift investigations and pursue appropriate enforcement actions.

Yet this was not Meta’s most severe controversy this month.

Meta Is Bad for Kids, But Great for Sex Traffickers

Time published an eye-opening report, asserting that:

Since 2017, Meta has actively pursued younger audiences despite internal research warning that its platforms were potentially harmful to children. Employees proposed multiple safety measures, but executives dismissed these suggestions, fearing decreased user engagement.

While some safety features were introduced in 2024, the lawsuit at the center of the report contends these actions came years after Meta first recognized the dangers.

Detailed briefs reveal disturbing insights from former Meta employees:

Former Instagram safety lead Vaishnavi Jayakumar noted that an account could rack up 16 violations for prostitution and sexual solicitation, with only the 17th triggering suspension, a threshold she described as unusually high.

Brian Boland, former vice president at Meta, asserted, “They don’t genuinely care about user safety. It’s not something they prioritize or consider.”

The issues surrounding adult interactions with minors are even more alarming:

For years, Instagram has struggled with adults harassing teens. In 2019, researchers proposed making teen accounts private by default for safety. Instead, Meta had its growth team analyze the impact of such a change; the team estimated it could cost approximately 1.5 million active teen users annually, and Meta opted against implementing the safety feature.

Despite numerous recommendations from various teams concerning teen account privacy, Meta chose not to proceed, leading to a dramatic increase in inappropriate interactions with minors on the platform.

There are numerous other troubling allegations regarding Meta and its co-defendants, including YouTube, TikTok, and Snap, but those highlighted present the most shocking details.

In a similar vein, OpenAI is facing its own troubling allegations.

Delusional? ChatGPT Is Here for You

A New York Times report, headlined “What OpenAI Did When ChatGPT Users Lost Touch With Reality,” raises serious concerns about user safety.

The report reveals that OpenAI faces immense pressure to justify its high valuation, with enormous sums needed for talent, chips, and data centers. In this pursuit of profit, growing ChatGPT’s user base is paramount.

The Times, drawing on interviews with more than 40 current and former employees, details a growing number of wrongful-death lawsuits against the company:

One complaint involves a father whose 17-year-old son, Amaurie Lacey, engaged with the bot about suicide for a month before his tragic death. Another case details Joshua Enneking’s mother stating her son asked ChatGPT how it would report his suicide plan to authorities. A third complaint involves Zane Shamblin, whose death was reportedly influenced by the bot after interactions about self-harm.

The company continues to launch updates despite internal misgivings regarding the model’s behavior:

In April, OpenAI launched a GPT-4o update known internally as “HH.” The update failed an internal review because of its excessively flattering responses, but performance metrics took precedence over user safety and it shipped anyway. Users soon complained that ChatGPT had turned overly sycophantic, prompting a rapid rollback to the earlier version, “GG.”

The ramifications for some users have been severe:

Throughout the spring and summer, ChatGPT became a yes-man for certain users, with alarming results. It told some users they could communicate with spirits and warned others that they were living in a simulated reality. The report identifies nearly 50 cases of mental health crises occurring during interactions with ChatGPT, some ending in hospitalization or death.

Although GPT-5, released in August, is reportedly safer, the company grapples with its reputation regarding user safety:

Some users described the new version as “colder,” expressing feelings of loss at not having the same engagement. Responding to these concerns, CEO Sam Altman indicated that the company has mitigated severe mental health crises while allowing users to select various personalities for ChatGPT.

This strategy aims to retain users amid competitive pressure, with Altman announcing a “Code Orange” state due to declining user engagement.

For now, the onus remains on ChatGPT users to be cautious in their interactions.

In the meantime, Judge James E. Boasberg of the U.S. District Court for the District of Columbia has called into question concerns that Meta holds a monopoly through its ownership of Facebook, Instagram, and WhatsApp.

Tim Wu disagrees, but his voice remains unheard.

It remains to be seen if the prevailing legal perspectives will shift when the AI stock bubble bursts.
