The Viral AI Warning — What the Data Actually Says
An essay about AI and jobs crossed 80 million views in days. Here is what the claims, the counter-arguments, and the published research actually say — without the noise.
On 9 February 2026, Matt Shumer, founder and CEO of OthersideAI (the company behind the HyperWrite AI writing platform), published an essay titled “Something Big Is Happening” on his personal website, then shared it on X the following day. Within days of going live, the post accumulated more than 80 million views and over 100,000 likes on X.
Shumer, who has spent six years building AI products and investing in the sector, wrote the piece as a direct message to friends and family outside the technology industry. He compared the moment to February 2020, when early reports of a new virus drew little mainstream attention before upending daily life within weeks. “I think we’re in the ‘this seems overblown’ phase of something much, much bigger than Covid,” he wrote.
His core claim: AI crossed a threshold where he could describe a software product in plain English, walk away, and return hours later to find it built — tested, functional, and requiring no corrections. “I am no longer needed for the actual technical work of my job,” he stated in the essay. Cognitive scientist and author Gary Marcus published a point-by-point response, calling the post “weaponized hype” that selectively cited data while omitting well-documented failure modes. Both views are examined below — using only the primary published data available.
Six Claims, Checked Against the Primary Data
Shumer’s essay made several specific claims about AI capability. Here is each one, cross-referenced against published research, benchmark data, and first-hand accounts — split into what is contested, what is verified, and what warrants caution.
METR’s Task Time Horizon — How the Curve Changed
This chart tracks the longest software task an AI model could complete correctly at least 50% of the time, measured in human-equivalent hours. The data reflects METR’s published retrospective curve from their March 2025 paper and the February 2026 GPT-5.2 update.
Source: METR — Measuring AI Ability to Complete Long Tasks (Mar 2025) · Epoch AI METR Time Horizons · METR official GPT-5.2 announcement, Feb 4 2026.
Note: Early data points (2022–2024) are derived from METR’s retrospective modelling published March 2025, not real-time tracking.
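The metric behind the chart can be illustrated with a toy computation. METR fits a logistic curve to task success versus log task length and reads off the length at which predicted success falls to 50%. The sketch below is a minimal illustration of that idea only, not METR’s actual methodology or code; the function name, the fitting procedure, and the sample outcomes are all invented for this example:

```python
import math

def fit_time_horizon(task_minutes, successes, steps=20000, lr=0.1):
    """Fit P(success) = sigmoid(a + b * (log2(minutes) - mean)) by plain
    gradient descent, then return the task length where P(success) = 0.5."""
    xs = [math.log2(m) for m in task_minutes]
    xm = sum(xs) / len(xs)          # center features for better conditioning
    xs = [x - xm for x in xs]
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, successes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (p - y) / n       # cross-entropy gradient w.r.t. intercept
            gb += (p - y) * x / n   # ... and w.r.t. slope
        a -= lr * ga
        b -= lr * gb
    # sigmoid crosses 0.5 where a + b*(x - xm) = 0, i.e. x = xm - a/b
    return 2.0 ** (xm - a / b)

# Hypothetical task outcomes: mostly succeeds on short tasks, fails on long ones.
minutes = [1, 2, 4, 8, 15, 30, 60, 120, 240, 480]
outcome = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0]
print(f"50% time horizon: ~{fit_time_horizon(minutes, outcome):.0f} minutes")
```

A trend line like the one in the chart comes from repeating a fit of this kind for each model release and plotting the resulting horizons against release dates.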
What Current AI Can and Cannot Do Reliably
These figures are drawn from METR’s published benchmark data and the July 2025 developer productivity RCT. They represent the state of AI coding capability as of early 2026 — not projections.
In Their Own Words
The following quotes are taken verbatim from Shumer’s published essay, Kelsey Piper’s published article at The Argument, and the anonymous developer statement shared by Gary Marcus on his Substack.
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing.”
“Something happened a couple of months ago where you can truly give it a description and let it go and — sometimes! — will come out with the right answer. … Generally, the closer these systems are to appearing right, the more dangerous they become because people become increasingly at ease just trusting them when they shouldn’t.”
“At one point, it deleted every single one of the phoneme files of each English sound pronounced absolutely correctly, which I had personally emailed an English teacher to secure permission to use, and replaced them with AI-generated sounds which were all subtly wrong.”
From Flat Curve to Viral Debate — a Timeline
The events that led to, and followed, Shumer’s essay — in chronological order, sourced from METR’s published data and contemporaneous reporting.
The Coverage, in Summary
Shumer’s essay “Something Big Is Happening,” published 9 February 2026 on shumer.dev and shared to X on 10 February, was covered across multiple outlets after accumulating more than 80 million views on the platform. The essay, Gary Marcus’s Substack response, METR’s published benchmarks — including the task-time horizon paper and the developer productivity RCT — and first-hand accounts from developers were discussed in the context of AI coding capability, reliability, and its impact on knowledge work.
The METR data for GPT-5.2, Kelsey Piper’s first-hand account of Claude Code, and the 19% slowdown figure from METR’s July 2025 RCT were among the primary data points examined. Shumer appeared on CNBC on 13 February 2026 and addressed the reaction to the piece.