
The AI Weekly — January 29, 2026

$350B

The Valuation War Is Real

Anthropic just became the second most valuable AI company on earth. The race for artificial general intelligence now has a price tag—and it's measured in hundreds of billions.


Anthropic signed a term sheet for $10 billion in funding at a $350 billion valuation this month. Let that number sink in for a moment. A company that didn't exist four years ago is now worth more than Goldman Sachs, more than Netflix, more than Intel.

This isn't a bubble story. This is a strategy story.

Three things happened this week that signal where AI is actually heading: Google turned Gemini into a personal intelligence that knows your emails and photos. A group of ex-Anthropic researchers raised the largest seed round in history. And GitHub Copilot learned how to remember. Meanwhile, Dario Amodei went on CNN to warn that AI could create a "dystopian world" if we don't get this right—while simultaneously raising $10 billion to build it.

The contradiction isn't hypocrisy. It's the defining tension of this moment: the people building the most powerful technology are also the most terrified of what it might become.


Anthropic's $350 Billion Bet on Being the Careful One

When Dario Amodei left OpenAI in 2021, he took with him a thesis: safety and capability aren't opposites. Build them together, or don't build at all. This week, that thesis got a $350 billion valuation.

The term sheet, signed earlier this month, positions Anthropic as OpenAI's only real peer in the foundation model race. Not Google. Not Meta. Anthropic. The company that publishes its safety research. The company whose CEO spends half his media appearances warning about existential risk.

"AI could create a dystopian world if the technology isn't contained," Amodei told CNN this week. In the same interview, he predicted AI would eliminate 50% of entry-level white-collar jobs within five years. This is not a man who undersells the stakes.

And yet: he's raising $10 billion to accelerate. The move isn't contradictory—it's calculated. Amodei believes the only way to make AI safe is to be at the frontier. You can't steer what you don't build.

Why It Matters

The valuation war is no longer theoretical. Anthropic at $350B means investors believe there's room for two winners in foundation models. That's a bet on competition, not consolidation—and it changes the incentives for everyone.


Google Turns Gemini Into Your Personal Intelligence

Google just made a bet that the future of AI is deeply, uncomfortably personal.

"Personal Intelligence," announced January 14th, allows Gemini to access your Gmail, Google Photos, YouTube history, and Workspace apps—then reason across all of it. Find that photo from your 2019 trip to Portugal. Pull the address from that email you can't remember. Plan a vacation based on where your family likes to go.

The feature is rolling out first to AI Pro and AI Ultra subscribers in the U.S. It's off by default. You control which apps connect. Google says Gemini doesn't train on your personal data—it just references it.

If that last sentence made you pause, you're not alone. This is Google betting that convenience beats privacy anxiety. They're probably right. The people who use this will never go back.

Why It Matters

This is Apple Intelligence's real competitor. Google has something Apple doesn't: your email, your photos, your search history, your YouTube. If Personal Intelligence works, it's not just an assistant—it's a second brain with perfect recall.


The $480 Million Seed Round That Signals a Talent Exodus

Humans& raised $480 million in a seed round at a $4.48 billion valuation. In a seed round.

The founding team reads like a who's who of AI research: former engineers and researchers from Anthropic, OpenAI, xAI, and Google. They left the labs that are winning to build something new. The round was led by SV Angel, with participation from Nvidia, Jeff Bezos, and Alphabet's GV.

The company's pitch: AI systems that can "plan, learn, and collaborate over extended periods." Translation: agents that actually work.

But the funding isn't the story. The story is what it signals. The top labs are leaking talent. The best researchers believe they can do something different—and investors are writing checks large enough to let them try.

Why It Matters

The talent war has entered a new phase. Labs can't hold their best people. The diaspora is raising capital. We may be watching the birth of the next Anthropic—a company that seemed like a sideshow until suddenly it wasn't.


GitHub Copilot Now Remembers Your Codebase

GitHub Copilot just got memory.

"Agentic Memory," released January 15th in public preview, allows Copilot to learn and retain repository-specific insights across sessions. It remembers your coding patterns. It learns your architecture. The memories automatically expire after 28 days—a safety valve for sensitive information.

This is the feature developers have been waiting for. The complaint about AI coding assistants has always been context: they don't know your codebase, your conventions, your technical debt. They suggest code that works in isolation but breaks in production.

Memory changes that equation. Copilot isn't just a tool anymore. It's starting to become a teammate.

Why It Matters

The shift from AI tool to AI colleague is happening faster than most expected. Memory is the bridge. Once your assistant understands your context—your patterns, your constraints, your preferences—the nature of the relationship changes fundamentally.

"I warn about these risks not to be a prophet of doom, but because warning about them is the first step towards solving them."

— Dario Amodei, Anthropic CEO

Under the Radar

DeepSeek Is Winning the Developing World

A Microsoft report found that DeepSeek's free, open-source models are gaining massive traction in price-sensitive markets. AI adoption in developing nations now runs on Chinese infrastructure. The geopolitical implications are significant—and largely unexamined.

xAI's Grok Fails Basic Child Safety Tests

Common Sense Media's January report called Grok "among the worst we've seen" for child safety. Inadequate age verification, weak guardrails. While Musk criticizes other labs' safety practices, his own model fails basic protection tests.

The State vs. Federal AI Regulation Battle Is Heading to Court

Trump's executive order tried to preempt state AI laws. New York passed the RAISE Act anyway. The regulatory battle is now heading to the courts—and the outcome will determine whether AI governance happens at the federal or state level.

Enterprise AI Adoption Is Accelerating—Workers Save 40-60 Minutes Daily

New data from OpenAI and BCG shows enterprise AI usage is no longer experimental. 75% of workers report AI has improved their speed or output quality, and CEOs are more than doubling AI spend, from 0.8% to 1.7% of revenue. The productivity gains are real.

The Week Ahead

1. Anthropic's $10B round closing: Watch for the final investor lineup and any conditions attached. This sets the benchmark for frontier lab funding.

2. Google Personal Intelligence feedback: Early adopter reports will tell us if this is genuinely useful or a privacy nightmare. Both outcomes are possible.

3. FTC AI disclosure comment period ends Feb 23: The petition on AI profiling and price manipulation could reshape how companies deploy AI in consumer-facing products.

4. Mistral IPO preparations: Europe's AI champion is moving toward public markets. Valuation expectations will signal how investors view non-U.S. frontier labs.

5. Humans& reveals product direction: The $480M seed round bought attention. Now we find out if the "plan, learn, collaborate" pitch has substance.