Hey Operators,

This weekend, Anthropic accidentally revealed its most powerful AI model to the public — and the company's own safety documents suggest it might be too dangerous to deploy. That story is dominating the industry, but it's not the only bombshell: NVIDIA's Jensen Huang declared AGI "achieved" on Lex Fridman's podcast, and Fortune's Monday analysis unpacking the debate reads like a Rorschach test for the entire field.

Operation Check

Nasdaq 100 remains under pressure, down ~9% from late-2025 highs amid "AI fatigue." Major names including NVIDIA (-2.1%), Meta (-3.9%), and Microsoft (-2.5%) are all in the red as Wall Street pivots to demanding proof of monetization over promise.

Bitcoin holds near $66,500 today, down ~19% year-over-year, consolidating above the $65K floor as rising Treasury yields and macro uncertainty keep risk appetite contained across digital assets.

March 2026 has already produced more $100M+ AI funding rounds than any comparable month in venture history, with AI science startup Periodic Labs (former OpenAI and DeepMind researchers) in talks to raise at a ~$7B valuation.

Operation Dive

Anthropic's Secret Model Leaked — and the Safety Warnings Are Alarming

Security researchers discovered nearly 3,000 unpublished documents in an unsecured Anthropic database, revealing "Claude Mythos" (codename: Capybara) — a new model tier positioned above Opus. Anthropic confirmed the model represents "a step change" in capabilities, but leaked internal drafts warn it could dramatically lower the cost and skill required to carry out large-scale cyberattacks, with offensive cyber capabilities that "far outpace" defensive efforts. Cybersecurity stocks dropped 3–7% on the news. The Insight: When a frontier lab's own safety documentation reads like a threat briefing, the question isn't just whether to release the model — it's whether the regulatory infrastructure exists to handle what comes after.

Jensen Huang Says 'We've Achieved AGI' — No One Can Agree On What That Means

On Lex Fridman's podcast this weekend, NVIDIA CEO Jensen Huang declared "I think we've achieved AGI," immediately triggering the industry's most predictable argument. Fortune's Monday deep-dive dissects Huang's unconventional metric (whether AI can build a $1B business) against a landscape of competing definitions from DeepMind, the Turing Institute, and cognitive scientists, each drawing the line differently. The Insight: When the world's most powerful chipmaker declares AGI achieved, the real consequence isn't philosophical; it's that billions in capital and policy attention will follow the claim, regardless of who's right.

Operators in Focus

OpenAI has officially shut down Sora

The AI video generator touted as OpenAI's most ambitious product since ChatGPT. Launched with heavy backing and hype, including Disney's involvement, Sora was envisioned as a consumer-facing platform for AI-generated video content. However, it struggled to gain traction, was criticized as "a social network without people," and faced mounting technical and legal challenges around copyright and creative adoption. The closure comes just months after launch, as OpenAI pivots toward enterprise tools ahead of its IPO. The Insight: This marks a reality check for AI video, highlighting technical, legal, and adoption hurdles that make "prompt-to-film" far less imminent than the hype suggested.

Sarvam AI has unveiled Chanakya

A new vertical dedicated to critical, high-security AI applications in India. The initiative is designed for sectors where confidentiality, resilience, and trust are paramount—such as defense, government, and sensitive enterprise workloads. Chanakya emphasizes secure infrastructure, compliance, and controlled deployment environments, setting Sarvam apart from global peers who focus on general-purpose AI. The launch aligns with India’s broader digital sovereignty agenda, positioning Sarvam as a key player in secure AI adoption at a national level. The Insight: By explicitly branding around security-first AI, Sarvam positions itself as a trusted partner for defense, government, and sensitive enterprise workloads—staking India’s claim in the secure AI race.

Operator's Spotlight Read

A Tennessee Grandmother Lost Her Home, Car, and Dog Because of AI Facial Recognition

Angela Lipps, 50, spent more than five months in jail after Clearview AI's system matched her face to a bank fraud suspect in North Dakota — a state she says she had never visited. Bank records eventually cleared her, but by then she had lost her home, her vehicle, and her dog. Charges were dismissed on Christmas Eve. Fargo police have since banned Clearview and acknowledged "a few errors," stopping short of a direct apology. CNN broke the story on Sunday; Tom's Hardware's analysis shows this pattern — misidentification, prolonged detention, delayed dismissal — is becoming a documented repeat phenomenon, not an edge case. The Insight: AI identification errors compound across institutions before any human intervenes — which is exactly why accountability, not just accuracy, needs to be a mandatory deployment standard before facial recognition is used in criminal justice contexts.

Operator Industry Radar

  • Meta suffers two landmark courtroom defeats in 24 hours, with juries ruling its apps addictive and harmful to youth. YouTube was also found negligent in one case, breaking the long-standing legal shield for social platforms.

  • France’s Mistral secures €770M (~$830M) in debt financing to build AI data centres across Europe, aiming to scale compute capacity and assert European digital sovereignty in the AI race.

  • The decade-long feud between Sam Altman and Dario Amodei continues to shape the trajectories of OpenAI and Anthropic. Their rivalry highlights two competing philosophies of AI, rapid consumer adoption versus safety-first alignment, influencing funding flows, regulatory debates, and the pace of global AI deployment.

Keep reading