When a power outage in San Francisco last night caused Waymo’s autonomous cars to stall, snarling traffic, clogging streets, and blocking emergency vehicles, it felt to me like a parable of big tech: a predictable failure.
The tech industry loves to release breakthrough products, but consistently refuses to confront how fragile these systems are when the world behaves in ways engineers did not anticipate or chose to ignore.
Waymo’s halted vehicles were an example of how modern tech is built on narrow assumptions and deployed at scale without serious planning for unintended consequences. The system worked perfectly right up until it didn’t. Then it failed in a way that affected thousands of people who had no say in the experiment. And this was just a power failure across a third of the city. Imagine an earthquake in which stalled vehicles prevented mass evacuation and blocked every emergency route.
And this is just one instance of a pattern.
Facebook did not intend to destabilize democracies or supercharge misinformation. But it optimized relentlessly for engagement, ignored warnings from internal researchers, and shipped features without fully understanding how they would be exploited. When harm became undeniable, executives framed it as an unfortunate side effect rather than a design failure.
The AI industry is following the same script. Models are released quickly, trained on vast amounts of unvetted data, and dropped into the world with disclaimers instead of safeguards. Companies promise they will fix the problems later. Bias, hallucinations, deepfakes, and job displacement are treated as edge cases rather than core design challenges.
In each case, the failure is one of imagination.
The industry is excellent at optimizing for best-case scenarios. It is terrible at stress-testing worst-case ones. What happens when the power goes out? What happens when bad actors intervene? What happens when systems interact with messy human behavior instead of clean data?
These questions are not unanswerable, but asking them slows deployment, delays revenue, and complicates the story investors want to hear. So they are minimized or postponed until after the product is already embedded in daily life.
Once a product is released, accountability becomes fuzzy. Who is responsible when an algorithm makes the wrong call? The engineer? The company? The user? The regulator? In the absence of clear ownership, nothing changes.
The tech industry prides itself on moving fast and breaking things. What it keeps breaking is trust. Trust that systems will work when conditions are not ideal. Trust that companies have thought through how their products might be misused. Trust that someone, somewhere, asked the uncomfortable questions before shipping.
Designing for unintended consequences is not optional when your products shape cities, elections, and economies. It is the job. Until the industry treats it that way, these failures will keep happening. And the rest of us will keep paying the price.