Why Are We Taking These AI CEOs So Seriously?

I’ve spent more than five decades designing, building, and writing about consumer technology. In that time, I’ve watched countless waves of hype promising to reshape our lives. Some actually did — the personal computer, the smartphone, the internet itself. Many promised but never came close — the flying car, the home robot, the single-prick blood-testing machine, the Segway. Which is why I’m increasingly troubled by how easily we’re all buying into the promises coming from today’s leaders of the AI industry.

I certainly recognize the benefits of AI and use it daily, especially for search and research. It is a great tool. AI search is much more effective than existing search engines because its model is far less influenced by the search-optimization tricks companies use to game Google’s rankings. For research it’s invaluable: it can summarize, dissect, and identify trends. While it does hallucinate on occasion, it saves time and increases our productivity.

AI software draws on information taken from multiple sources — books, journals, conference papers, financial statements, newspapers, the Internet, and millions of others — and it uses that information to answer our queries. Its answers are based on what it finds after searching through its databases, weighing how often facts appear, the order of the words, and algorithms that attempt to separate what is true from what is not.

But, in spite of the promises, AI has not yet taken the next big step of inventing new content, creating new medicines, or producing new inventions through logical reasoning. Logical reasoning and human-like intelligence have been predicted, but they are still missing from AI. Currently AI is just a very effective search and playback engine. It can format its reply in almost any way you’d like, using charts and graphs, essays in the words of others, and grammatically corrected papers. What it does is based entirely on what it finds within its database, not on intelligent human-like thinking. But you would never know that from listening to its leaders.

These leaders — including Sam Altman of OpenAI, Dario Amodei of Anthropic, and Satya Nadella of Microsoft — are not the modern-day equivalents of Steve Jobs or Bill Gates. They come across more like sales executives who’ve mastered the art of fundraising, publicity, and market speculation, while offering surprisingly little of substance. They have not invented any of this, nor do they show the depth of understanding that, say, Steve Jobs displayed when he spoke about the iPhone.

In a recent podcast interview, Sam Altman described how AI will deliver “crazy new social experiences” and “virtual employees,” and “actually discover new science.” Pressed on how that will happen, he offered word salad about their latest model being like a good PhD, with no explanation beyond mentioning that a scientist somewhere was impressed. He never answered the question. This is the CEO of a company worth hundreds of billions — mostly because investors believe his crazy predictions.

It fits a pattern I’ve seen throughout tech: visionary claims backed by charismatic leaders and glossy presentations to the press, all designed to attract massive amounts of capital from VCs. The difference this time is scale — it’s beyond anything we’ve ever seen. OpenAI is funded by Microsoft and SoftBank, and companies like Oracle are spending tens of billions on Nvidia chips in Texas data centers for capabilities that don’t exist. Facebook is building huge data centers around the country purely in anticipation. It all rests on the assumption that AI will radically transform every industry.

But where’s the evidence? A Salesforce study showed so-called “AI agents” — the software designed to handle complex tasks for us — fail on even moderately complicated work. Apple’s own research just demonstrated that these systems don’t actually reason. They predict text based on large patterns. That’s not intelligence; it’s high-speed pattern matching.

Despite this, the industry’s top executives continue to receive glowing profiles in the press, with little critical scrutiny. Few reporters seem willing to ask the obvious questions: Where’s the human logic? Where are the examples of real thinking? No company has yet been able to demonstrate human reasoning in its AI products.

In fact, to check my facts, I asked ChatGPT, “Has any company been able to demonstrate human reasoning in their AI products?”

It answered, “No — no company has yet been able to credibly demonstrate that their AI products can perform human-like reasoning… Large language models (LLMs) like OpenAI’s GPT-4, Anthropic’s Claude, Google’s Gemini, and Meta’s LLaMA are statistical pattern predictors. They generate text by predicting the next most likely word based on huge datasets, not by understanding or reasoning in a human sense.”

Meanwhile, the costs are staggering. The capital required to keep these companies afloat would only make sense if AI revenues were projected to dwarf those of the smartphone and enterprise software markets combined — not just someday, but soon. Are we seeing one of the greatest misrepresentations of technology ever?

I’ve covered many genuinely transformative technologies. What’s striking here is how little accountability these AI leaders face from the public and the press. Their main achievements so far have been raising enormous sums of money and generating huge amounts of media attention. They speak confidently about discovering new physics or inventing virtual scientists, but when pressed, offer only anecdotes and hand-waving. Promises made three years ago that human reasoning was just months away have not been fulfilled. I see a huge disconnect.

We should treat them with much more skepticism. These CEOs should be questioned like other leaders making multi-trillion-dollar promises: How exactly does your technology work? Why should anyone believe your financial projections? When will we see evidence of the human intelligence you keep forecasting?

Don’t be fooled into thinking this is just too complicated to understand. Believe your eyes. The tech industry thrives on making complex things seem too arcane for outsiders to grasp. But it’s not. If you’ve read this far, you can understand it.

The tech press should stop treating every product demo as the dawn of a new epoch. And yes, the press is equally as responsible as the CEOs. Reporters seem awed by the future of AI, but there’s been little pushback, investigation, or critical questioning. They treat these CEOs with an idolatry they rarely extend to other public figures. We shouldn’t let the AI companies coast by on glossy promises and PR releases. The burden of proof should be on them to show why this is more than another Silicon Valley bubble.

As I’ve often written on BakerOnTech, technology changes the world when it delivers genuine value. Until AI’s leaders can clearly demonstrate that value, we’d be wise to maintain healthy skepticism.