Good morning. It’s Friday, December 26th.
On this day in tech history: In 1995, self-organizing maps (SOMs) were gaining recognition in anomaly detection and clustering research. Pioneered by Teuvo Kohonen, SOMs provide a topology-preserving way to map high-dimensional data onto a 2D grid, shaping early work in unsupervised feature learning and pattern recognition; the same ideas later informed how deep embeddings are visualized.
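For readers who haven’t met SOMs before, the core training loop is simple enough to sketch in a few lines. Below is a minimal, illustrative NumPy version (not Kohonen’s original implementation); the grid size, learning-rate schedule, and the train_som helper are arbitrary choices made for this example. Each sample pulls its best-matching grid cell, plus a shrinking Gaussian neighborhood around it, toward the data, which is what preserves topology on the 2D grid.

```python
# Minimal self-organizing map sketch (NumPy only); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Fit a 2D SOM to `data` (n_samples x n_features); return the weight grid."""
    h, w = grid
    n_features = data.shape[1]
    weights = rng.random((h, w, n_features))  # one prototype vector per grid cell
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: the cell whose prototype is closest to the sample.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dists.argmin(), dists.shape)

            # Decay the learning rate and neighborhood radius over training.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3

            # Gaussian neighborhood pulls cells near the BMU toward the sample,
            # so neighboring cells end up representing similar inputs.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

# Example: map 3D points (e.g. RGB colors) onto a 10x10 grid.
som = train_som(rng.random((500, 3)))
print(som.shape)  # (10, 10, 3)
```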
- Nvidia invests $20B to secure future of inference, countering Google’s GPU challenge
- OpenAI trialing ads within ChatGPT responses
- Nano Banana Flash on the way, plus Gmail address flexibility
- 5 new AI tools
- Latest AI research papers
You read. We listen. We welcome your thoughts; simply reply to this email.
How could AI help your business run smoother?
AI doesn’t need to be everywhere; it needs to be where it influences decisions. Data Rush helps businesses turn chaotic data, reports, and disconnected systems into intuitive, AI-driven analytics dashboards and automated workflows. If you’re wrestling with data analysis or have urgent questions that need answers, we can help identify the right solution and build a custom system that delivers insights fast.
Today’s trending AI news stories
Nvidia invests $20B to secure future of inference, countering Google’s GPU challenge
In its most significant deal to date, Nvidia is putting up roughly $20 billion in cash for a non-exclusive license to Groq’s inference technology, effectively folding the startup’s intellectual property, designs, and key talent into Nvidia. Although structured as a licensing agreement to mitigate regulatory scrutiny, the deal also brings Groq founder Jonathan Ross, the original creator of Google’s TPU, and other senior engineers under Nvidia’s umbrella.
Groq’s LPUs (Language Processing Units) focus on ultra-low-latency inference rather than training large models. The company claims its technology runs 10 times faster than traditional GPUs while consuming far less energy, making it a serious contender in the rapidly growing inference market, where Google’s TPUs, Apple’s chips, and efforts from Anthropic, OpenAI, and Meta are all vying for position.
Nvidia’s aim is to neutralize an emerging competitor founded by former Google TPU developers, integrating their highly efficient technology into its “AI factory” infrastructure. Read more.
OpenAI trialing ads inside ChatGPT responses
According to The Information, OpenAI is experimenting with ad formats that would place sponsored content directly inside ChatGPT responses. Options under consideration include AI-generated replies featuring paid recommendations, contextual ads alongside chat outputs, and sponsored links that surface when users ask for more details.
Some concepts could leverage ChatGPT’s memory function to personalize advertisements based on previous interactions, raising both technical possibilities and concerns about user trust. OpenAI states it is seeking monetization strategies that do not compromise user confidence, acknowledging that AI-generated responses shaped by private chat history might venture into troubling territory, as highlighted by CEO Sam Altman in past remarks.
Meanwhile, Sam Altman is taking a longer view of AI-driven disruption. He predicts that within a decade, college graduates will be landing exciting, high-paying jobs in fields like space exploration that make today’s career options look tame, and he says he envies younger generations entering a workforce shaped by AI and pioneering ventures such as space missions. Read more.
Nano Banana Flash on the horizon, plus your Gmail freedom
Google is poised to unveil Nano Banana 2 Flash, a streamlined version of Nano Banana Pro (internally referred to as Ketchup). The new variant, codenamed Mayo, aims to deliver near-Pro-grade image quality at a lower cost and much higher speed, making it well suited to high-volume creators scaling output without running up a big bill. Early leaks suggest the results can compete with top-tier models, with tight integration into Gemini for heavier inference workloads.
Five years after AlphaFold 2 tackled protein folding and transformed biology, DeepMind VP Pushmeet Kohli discussed recent strides. AlphaFold 3 is now capable of analyzing DNA, RNA, and small molecules through diffusion models, incorporating strict checks to prevent inaccuracies. A key advancement is the introduction of an “AI co-scientist,” a Gemini-powered multi-agent system that generates hypotheses, assesses options, and reduces research times from months to mere hours. Kohli’s vision is to simulate entire human cells for accelerated biological breakthroughs.
On a personal note, there’s new freedom for users. Google is rolling out an option to change your @gmail.com address without losing any data: emails, Drive files, and subscriptions all stay intact. The old address lives on as an alias (mail keeps flowing, sign-ins keep working). The rollout is starting in India (as noted in Hindi-language documentation), limited to one change per year and three changes over an account’s lifetime. Users will no longer be stuck with an outdated username from their teenage years, and the change brings personal accounts in line with Workspace’s flexibility. Read more.
- AI solves Zelda’s color-switching puzzle with six-move foresight
- Alphabet-backed Motive Technologies plans IPO as AI-focused software enters Wall Street
- Opting out of training doesn’t imply privacy: your interactions still aid AI learning processes
- China demonstrates how a single voice command can commandeer humanoid robots
- Google Health AI launches MedASR, a medical speech-to-text model designed for clinical efficiency
- Deep learning dramatically accelerates quantum chemistry processes
- Dwarkesh Patel: AGI isn’t imminent, and this is why the current hype overlooks the real bottleneck
- BYD showcases an EV charging 250 miles in just five minutes, alleviating “charging anxiety”
- Satellites are set to become the fastest AI servers, as researchers merge 6G with orbital computing
- The pursuit of a singular AI is over: 2025 rewards smarter stacks and task-specific models
- Despite failing to meet expectations, humanoids continue to advance in meaningful ways
- People are increasingly obtaining their news from AI, influencing their perspectives
- 2025 was the year AI astounded everyone, even those who anticipated it
- Exploring why the operating room is primed for AI integration
- The AI boom is being financed with trillions in corporate bonds
- The world’s first 800V immersion-cooled backup battery targets megawatt AI rigs at CES 2026
5 new AI-powered tools from around the web
arXiv provides a free online repository for researchers to share pre-publication works.
Your feedback is essential. Please reply to this email and share how we can enhance this newsletter’s value.
Are you interested in connecting with smart readers like yourself? To become a sponsor for the AI Breakfast newsletter, just reply to this email or DM us on 𝕏!