In a world where AI was suddenly everywhere, what will be remembered about 2025? How can we tell future generations what it looked like when miraculous surprises mixed with day-to-day disappointments in a never-ending cycle of worry and hope …
In an annual tradition, it’s time for our “final closing ceremony” for the year gone by, our carefully curated collection of small moments with big implications.
And in 2025 we started seeing AI’s impact on society — for better or worse.
A March blog post from SourceHut’s CEO/founder Drew DeVault complained of hyper-aggressive crawlers “using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses … All of my sysadmin friends are dealing with the same problems.”
The only thing more alarming than AI’s appetite was its incredible output. One studio produced 200,000 AI-generated podcast episodes, for shows the Los Angeles Times noted were “so cheap to make that they can focus on tiny topics.”
Yet in the coding world, there were some iconic successes. A total of 53,199 vibe coders set a new world record during a 10-day hackathon in August. They’d accessed top AI coding platforms through an in-house Vibe Coding Hub which, according to their announcement, was itself “in the spirit of the event — created in 24 hours exclusively through vibe coding.”
We heard these stories because our media scrambled to document the historic changes — the good and the bad. But they were also fighting for their own survival, with top publishers facing an “apocalypse” of dropping traffic which New York magazine blamed partly on AI “summaries” that replaced traditional top-of-page search results.
It wasn’t just the media that grew skeptical. Researchers found that products labeled as powered by AI actually received less trust. And in November more than half of respondents told Pew researchers they were “more concerned than excited about the increased use of AI in daily life.”
While we worried about AI taking our jobs, some job-seekers found themselves being interviewed by AI, including 20-year-old Kendiana Colin, who watched helplessly as her glitching AI interviewer got stuck in a loop, repeating the same words over and over again.
In April, OpenAI had to roll back an update after acknowledging ChatGPT had become “overly flattering… overly supportive” with what it described euphemistically as “unintended side effects.”
Next Steps
People began to wonder how bad things could really get. Is AI — and maybe even an omni-competent superintelligence — inevitable?
Maybe not. A lecturer in digital humanities from University College Cork cautioned that “When we accept that AGI [artificial general intelligence] is inevitable, we stop asking whether it should be built…” The bracing essay in Noema magazine warned of an inevitability that’s already being “manufactured” through “specific choices about funding, attention and legitimacy, and different choices would produce different futures.”
The fundamental question, he wrote, “isn’t whether AGI is coming, but who benefits from making us believe it is …”
With growing chatter about the possibility of an economy-destroying “AI bubble,” tech giants scrambled to attempt the one trick AI hadn’t mastered: making money.
But would this bring a world where our chatbots suddenly transmogrified into advertisers?
In December, ChatGPT followed its answer to a question about securing hardware with an unrelated suggestion to shop at Target. This led OpenAI’s chief research officer to promise it would turn off the “suggestions” while improving their targeting, adding “We’re also looking at better controls so you can dial this down or off if you don’t find it helpful.”
And in November, Engadget reported that Google had already begun testing sponsored ads that “show up in the bottom of search results in the Gemini-powered AI Mode.”
There were even ads for AI that were generated by AI…
And the plot of Tom Cruise’s last “Mission: Impossible” revolved around destroying a world-conquering AI, described as “a self-learning, truth-eating digital parasite.”
The editors of the culture magazine n+1 published a 3,800-word essay urging readers to “AI-proof” the terrains of their intellectual life, calling for “blunt-force militancy” to resist AI’s “further creep into intellectual labor …”
Recommended steps included “Don’t publish AI bullshit” and “resist the call to establish worthless partnerships” — while creating and promoting work that’s “unreplicable.”
“There’s still time to disenchant AI, provincialize it, make it uncompelling and uncool,” they wrote, arguing that machine-made (and corporation-owned) literature “should be smashed, and can.”
And after deleting two “AI slop” images accidentally published in January, the Onion’s CEO and former NBC News reporter Ben Collins went on a podcast to proclaim “AI is not funny” and urge frightened consumers to unite “and say, ‘We’re not helpless — we’re people …’”
“That’s why I am optimistic,” he said, “because the people who are against this thing way outnumber the people who like what’s going on.”
Did we beat ’em or join ’em? Though gig-work service Fiverr’s ads had lampooned AI-assisted vibe coding, in September it still slashed 30% of its workforce, describing the move as an “AI pivot.”
Just 10 months earlier, Fiverr had released an ad that was entirely AI-generated.
Maybe that’s what really captures AI’s dual zeitgeist in 2025: massive adoption and massive resistance happening at the same time.
And so it was that as we stumbled into 2026, with our ambition meeting our ambivalence, Time magazine declared that its Person of the Year was “the architects of AI.” In perhaps the most 2025 touch of all, Time’s web developer installed an AI chat window across every story on its site.
Time’s editors even had to add a disclaimer to their 6,700-word celebration admitting that they were already doing business with AI companies. (“OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives…”)
So with caveats and qualifications, AI accepted its crown, as the ups and downs of 2025 culminated with Time’s almost comically conflicted conclusion:
Thanks to AI titans such as NVIDIA chief Jensen Huang and OpenAI’s Sam Altman, they write, “Humanity is now flying down the highway, all gas no brakes, toward a highly automated and highly uncertain future.
“Perhaps [U.S. President Donald] Trump said it best, speaking directly to Huang with a jovial laugh in the U.K. in September:
“I don’t know what you’re doing here. I hope you’re right.”