Dressed as Disruption: Everyone's Wearing It This Quarter, in the Wake of Dorsey's Block Cuts
On performative transformation and the AI narratives running ahead of the work
From the series Operating Conditions: on the strategic landscape leaders are navigating. When Jack Dorsey announced Block would cut 4,000 employees and credited AI efficiency, investors loved it. This piece asks the question that didn't make the headlines: how do you tell the difference between a real transformation and a well-dressed one?
Earlier this year, Jack Dorsey announced that Block would cut 4,000 employees, roughly 40% of its workforce. His explanation was simple: AI tools now allow fewer people to do more. Investors loved it. The stock jumped 24%.
The framing was clean. The narrative was legible. It had all the structural markers of a bold, forward-looking strategic move. We’ve seen this pattern before.
I wrote recently about “meaning-shaped” outputs: content that feels like it deserves your attention but on closer inspection has no actual relationship to the thing it claims to be about. LinkedIn comments that look thoughtful but say nothing. Reports with all the hallmarks of rigor but no genuine analysis underneath.
What’s happening in corporate America right now is meaning-shaped transformation. Companies are producing the shape of AI-driven change: the layoff announcements, the efficiency narratives, the investor-friendly framing. But without necessarily having done the underlying work that would make those narratives true. (Or perhaps, betting they won’t have to put in that work.)
Even Block workers whose jobs heavily involve AI tools are skeptical that current tools can replace workers at this scale. “We’re just not there yet,” says John, a current employee whose role involves helping other staff use AI. — via The Guardian
The Numbers Behind the Narrative
Block’s story is instructive not because it’s unique, but because it’s so legible from multiple angles at once.
One angle: Block tripled its headcount between 2019 and 2022, from under 4,000 employees to over 12,000. Its stock had dropped roughly 40% since early 2025. It had been under sustained pressure to cut costs. An analyst at Financial Technology Partners told Bloomberg this was “more about the business being bloated for so long than it is about AI.”
Another angle: Dorsey says Block built an internal AI tool called Goose, and that recent model improvements convinced him the company could operate effectively at half its size. He claims the company is targeting $2 million in gross profit per employee, up from $500,000 historically. That’s a real number attached to a real thesis about productivity.
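That thesis can be sanity-checked with back-of-envelope arithmetic using only the figures quoted here (the 4,000 cuts, the “roughly 40%” share, and the $500,000 and $2 million per-employee figures). A minimal sketch; the headcounts and gross-profit totals below are derived estimates for illustration, not company filings:

```python
# Back-of-envelope check on the productivity target, using only the figures
# quoted in the piece (derived estimates, not company filings).

cut = 4_000                     # announced layoffs
cut_share = 0.40                # "roughly 40% of its workforce"

headcount_before = cut / cut_share        # implies ~10,000 employees
headcount_after = headcount_before - cut  # implies ~6,000 remain

gp_per_head_old = 500_000       # historical gross profit per employee
gp_per_head_target = 2_000_000  # Dorsey's stated target

# Gross profit implied by each per-employee figure:
gp_old = headcount_before * gp_per_head_old       # ~$5.0B
gp_target = headcount_after * gp_per_head_target  # ~$12.0B

# Holding gross profit flat, the cuts alone only lift the ratio to:
gp_per_head_cuts_only = gp_old / headcount_after  # ~$833K per employee

print(f"implied growth needed: {gp_target / gp_old:.1f}x")
# prints: implied growth needed: 2.4x
```

In other words, the headcount reduction alone moves the ratio from $500K to roughly $833K per employee; reaching $2 million requires gross profit itself to grow about 2.4x, which is the part the efficiency narrative leaves unstated.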
Both of these things can be true at once. That’s exactly what makes “AI washing” so slippery.
“Everyone that I know that’s still there has a ton of dread because they just realized their workload has quadrupled or 10xed and AI is not going to fix it” — ‘Oliver’, via The Guardian
In the weeks after the announcement, the picture got murkier. Laid-off and current employees began speaking publicly, with many describing the cuts as Dorsey “posturing for the market”, a way of winning back investor confidence after heavy investments in cryptocurrency, according to reporting in The Guardian. Worth noting in that context: Block spent $68.1 million on a company-wide event in September 2025 — featuring Jay-Z, Anderson .Paak, and 8,000 employees flown to Oakland — five months before cutting nearly half the workforce. That sequencing tells its own story.
The term “AI washing” has been circulating for months, and the evidence base is growing. A Forrester report from January found that many companies announcing AI-related layoffs don’t have mature AI systems ready to replace the roles they’re cutting. Goldman Sachs economists estimate AI is currently eliminating only 5,000 to 10,000 jobs per month across all U.S. sectors, a number that makes attributing Block’s 4,000 cuts to the technology alone look implausible.
Perhaps most telling: when New York State gave employers the option to cite “technological innovation or automation” in their legally required layoff notices, not a single one of the 160 companies filing (including several that publicly blamed AI) checked the box.
In Dorsey’s Own Words
In a subsequent interview with WIRED, Dorsey was given the chance to answer the AI-washing charge directly. Asked point-blank whether he was AI-washing the layoffs, he didn’t say no. He said: “The most important thing for me and the company is that we stay well ahead of the technology trends that are impacting us.” That’s a pivot, not a rebuttal.
What he offered instead was a genuine and genuinely ambitious vision of what he’s building toward. He said something really shifted in December in the sophistication of AI tools, specifically naming Anthropic’s Opus 4.6 and OpenAI’s Codex 5.3 as the trigger, and that it “presented an option to dramatically change how any company is structured.” He described the goal not as a leaner version of Block but something more radical: “I want the company itself to feel like a mini AGI.” No management hierarchy. An intelligence layer on top that employees and customers alike can query, build on, customize.
I want the company itself to feel like a mini AGI. - Jack Dorsey
Take him at his word and it’s a serious idea. The kind of organizational redesign he’s describing is genuinely new thinking, not just efficiency-speak. The question is whether what Block actually did is that, or a precursor to it, or something else wearing its clothing.
He predicts that “every company that’s not building itself as intelligence is going to face something existential, and it’s going to happen over the next year or two.” Maybe. But betting that your prediction will prove true doesn’t retroactively make the method of getting there coherent. The vision and the execution are two different things, and the gap between them is exactly where AI washing operates. And yet: what is the actual downside of losing such a bet?
The Incentive Structure Is the Story
Why would companies frame layoffs as AI-driven when the reality is more complicated? Because it works.
Molly Kinder at the Brookings Institution puts it plainly: blaming AI is “a very investor-friendly message,” far more palatable than admitting the business was overstaffed or struggling. Block’s stock surging 24% on the announcement is the proof of concept. Markets reward the AI transformation narrative. They punish admissions of past mismanagement.
Bloomberg's opinion desk framed it well: this is a familiar Silicon Valley pattern of reframing mistakes or inconvenient decisions as vision. "Move fast and break things." A weak governance structure becomes protecting the founder's vision. A cost-cutting decision becomes leading the industry into the future. The substance changes; the move is always the same.
Deutsche Bank analysts have predicted that “AI redundancy washing will be a significant feature of 2026.” Nearly 60% of U.S. hiring managers surveyed by Resume.org said they emphasize AI’s role in job cuts because it’s perceived more favorably than citing financial constraints.
This is not a conspiracy; it is an incentive structure. And incentive structures produce predictable behavior.
Wharton’s Peter Cappelli summarized the dynamic with characteristic directness: “The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping.”
The Data Gap: Investment vs. Return
Step back from the layoff headlines and look at the broader picture, and something stark comes into view.
PwC’s 2026 Global CEO Survey of 4,454 executives across 95 countries found that 56% have seen neither revenue growth nor cost savings from their AI investments. Only 12%, one in eight, reported achieving both. Separately, Deloitte found that 84% of companies have not redesigned roles around AI capabilities, even though 36% expect significant job automation within a year.
Read those numbers together: the vast majority of organizations haven’t restructured their work around AI, haven’t seen financial returns from AI, and yet a growing number are attributing layoffs to AI.
That’s the gap. That’s where AI washing lives.
It’s the same gap the meaning-shaped piece approached from a different direction. Organizations are producing more outputs than ever — more reports, more strategic plans, more transformation narratives — while the quality of their actual decision-making lags behind. The volume of plausible-looking material starts to mask the absence of genuine understanding underneath.
What’s Actually Happening
None of this is to say AI isn’t changing work. It clearly is, in some contexts, for some roles, in ways that are genuinely significant. Anthropic’s Dario Amodei has written at length about the potential for what he calls “unusually painful” disruption. An MIT + Oak Ridge National Laboratory study found AI can already perform work equivalent to 11.7% of the U.S. labor market¹. These aren’t trivial findings.
But there’s a meaningful difference between a company that is actually transforming how work gets done (redesigning roles, investing in retraining, evolving its operational model) and one that is using “AI” as a more acceptable frame for cost-cutting decisions that were coming regardless. The first is a real management challenge that deserves serious attention. The second is a category error being actively incentivized by capital markets.
The inability to tell the difference between those two things is itself the problem. Not just for workers trying to understand their job security, but for leaders trying to make sound technology investments, investors trying to allocate capital wisely, and policymakers trying to respond appropriately. Orientation and discernment, in other words. That’s what’s actually scarce here.
The Same Bottleneck
The scarce resource is the capacity to perceive clearly what’s actually happening, to distinguish between the shape of change and change itself.
When Dorsey tells the market that AI justifies cutting 40% of his workforce, the question isn’t whether AI is powerful. It is. The question is whether this specific claim, at this specific company, reflects genuine organizational transformation or a narrative optimized for investor reception. Or, something else.²
Answering that question requires someone who understands both the technology and the business well enough to actually evaluate it. That judgment doesn’t come from reading the press release.³
Discernment, not information, is the scarce resource.
The Broader Moment of 2026
This is all unfolding in a week when the Anthropic-Pentagon standoff reminded us that even the companies building AI are still figuring out the terrain — and that the relationship between AI capabilities, institutional power, and public accountability is being negotiated in real time, sometimes (usually) clumsily. The map-makers don’t have complete maps either.
Meanwhile, employee anxiety about AI and job loss has jumped from 28% in 2024 to 40% in 2026, according to Mercer’s global survey of 12,000 people worldwide. The IMF’s managing director says AI is “hitting the labor market like a tsunami.” Deutsche Bank warns most of the wave is manufactured narrative.
We are in a moment where the story about AI’s impact is running well ahead of the reality of AI’s impact, and the gap between the two is generating real consequences for real people. Four thousand of them at Block found out this week which side of that gap they were on.
That gap is a dis/orientation problem. And disorientation problems don’t get solved by producing more meaning-shaped discourse about them. They get solved by doing the slower, less marketable work of actually understanding the terrain.
The trouble is that doing the work is being outpaced by the need to look the part.
This is part of the AI Moment (or Bubble) that we are still glimpsing, hoping to fathom⁴, let alone make meaningful choices about. The pressure to perform certainty is felt clearly. The tension between the urgency of performance and the lack of means to form coherence will be a defining challenge (or perhaps simply a burden) through the rest of this year.
Of course, the fallout isn’t falling evenly. Those writing the layoff press releases and the people named in them are not experiencing the same year. That asymmetry⁵ of who absorbs the gap between AI’s promise and its current reality is worth its own examination. Some economists would call it a K-shaped economy. I think that framing deserves a closer look, and I will center on it in forthcoming posts, in addition to unpacking Big Tech bets from the perspective of founders, investors, and those potentially buying their new products.
https://iceberg.mit.edu “creating a digital twin for the U.S. labor market”: This will be an interesting study to follow.
One thing I’m working on writing, alongside some of the work on the Futures Landscape at JOPRO, is articulating how different segments of the economy face different trends, and different degrees of punishment or opportunity for breaking from them, relative to the actual technological change taking place. This is the landscape we’ve left young people to enter as a career baptism by fire, and the one decision makers must navigate while under unprecedented pressure to offer coherent messages about the actions they take. “AI” is an easy narrative component, because it is everywhere in perceived presence, and it provides great cover for the why of decision making.
There is an opportunism to the rest of this year that I would advise paying special attention to. Particularly within the USA, there is a unique Wild-West quality to the economy, to AI regulation (or the lack thereof), and to political turmoil in general. As always, the relative “Haves” and the “Have-Nots” face very different opportunity costs (and other game-theoretic concepts), but this year is different.
“I’ve been looking for a job, but it’s hard to find. Down here it’s just winners and losers. And don’t get caught on the wrong side of that line” (Atlantic City, by Bruce Springsteen)


