Artificial Intelligence – two years later

When ChatGPT was announced nearly two years ago, it created some of the most dramatic reactions in tech in recent years. We were told it was bigger than the invention of the Internet, it would eliminate millions of jobs, and it might even be the end of mankind!

Now, two years later, we can see how it was overhyped by “experts” who seem to make the same mistake with every technical innovation that comes along. They think in a binary way: something revolutionary immediately displaces an older product or solution. But the world doesn’t work that way. Change is incremental; it doesn’t happen all at once, first because it’s dampened by the slow adoption of new technology, and second because unexpected issues always arise and slow development. A decade after it was predicted we’d all be in self-driving cars, Teslas are still crashing into emergency vehicles parked alongside the road.

We are now in that middle area where AI is failing to deliver what was predicted, yet it’s not something to be ignored. Like most discoveries, we are in the early stages of an industry that has given us lots to look at but little to remember.

Two years ago I was eager to try ChatGPT. I asked it questions, used it to create job descriptions, and asked it to create itineraries for a vacation. I even tried to have it write a column for me. It was fun to use, a great novelty, but the novelty wore off pretty rapidly, especially when it hallucinated. Even that word is new to us, created by the industry as a substitute for saying it just lies.

The best I could say for AI was that it saved me a lot of time searching, writing, and organizing. Instead of creating a job description from scratch or memory, the software came back with something much more quickly, often well organized in a list that I was able to edit for my purposes. But I discovered I still needed to have some familiarity with the subject because the results could not always be trusted.

Soon after ChatGPT, we got AI products with the ability to create graphics or photos just by describing what we wanted, such as a zebra riding on a horse. What was the result? Really ugly images that made us chuckle, but were of limited use.

Over this past year, businesses began rushing out their AI products and things got worse. Using AI instead of a human for online chat tasks such as customer service has been terrible. Amazon uses it to provide summaries of reviews, but they are often poorly written. And the few times publications used AI to create articles, the results were mediocre or embarrassing.

I was always skeptical of AI delivering what was promised, and I’m even more so now. We are getting more quantity and less quality, and we still cannot fully trust it to deliver something of much value. AI allows more mediocre content to be delivered in less time by more people.

But this is only temporary. AI will get better over time and may meet many of the wild claims. But like most things in tech, take all the predictions with a healthy dose of skepticism.

As Intelligencer noted, “Exciting, fast-changing tools with enormous theoretical potential are being used, in the real world, right now, to produce near-infinite quantities of bad-to-not-very-good stuff. In part, it’s a disconnect between forward-looking narratives and hype and lagging actual capabilities; it also illustrates a gap between how people imagine they might use knowledge-work automation and how it actually gets used. More than either of these things, though, it’s an example of the difference between the impressive and empowering feeling of using new AI tools and the far more common experience of having AI tools used on you: between generating previously impossible quantities of passable emails, documents, and imagery and being on the receiving end of all that new production.”