How AI Can Be Used—and Abused: The Cautionary Tale of Hertz’s Automated Damage Billing

Hertz tries AI

Artificial intelligence has become the business world’s favorite new tool, promising cost savings, faster decisions, and even new revenue streams. But the story unfolding at the car rental company Hertz offers a cautionary look at how quickly AI can veer from helpful assistant to harmful enforcer—and at the troubling ways that shift impacts its customers.

According to a report from View From The Wing, Hertz has unleashed AI-driven systems to detect vehicle damage and automatically bill customers. The results have been eye-opening: the company’s damage claims have skyrocketed, reportedly to five times their level before the technology was implemented. Far from catching serious damage or fraud, many of these claims involve minor dings and scratches—or even pre-existing flaws that renters say they never caused.

This is a clear example of how AI can be wielded for both efficiency and exploitation. On one hand, it makes sense for a company with thousands of cars constantly cycling through customers to use machine learning and high-resolution image processing. An AI system can scan returned vehicles, flag scratches that weren’t noted at checkout, and accelerate the damage assessment process. What once might have taken a human employee several minutes of inspection can now be done in seconds. In theory, it should reduce disputes by providing consistent, data-driven evaluations.

But in practice, AI has given Hertz an extremely aggressive new tool—one that doesn’t weigh fairness or customer goodwill. According to customers quoted in multiple articles, charges frequently appear after the rental is complete, with little opportunity to contest the findings. Many are surprised to receive emails or bills for hundreds of dollars weeks after returning a vehicle, often accompanied by automated photos of scuffs that could easily be normal wear and tear. Worse, customers say attempts to reach human agents to dispute these charges can turn into a bureaucratic nightmare. Hertz can now simply blame the AI, absolving itself of responsibility.

Why is this happening? Quite simply, AI makes it cheap and easy to spot the tiniest cosmetic flaws. Combined with a corporate incentive to maximize revenue from damage fees, the result is a kind of digital over-policing. An automated system doesn’t grasp the difference between a superficial scuff and meaningful damage. It doesn’t account for longstanding rental industry practice, in which minor blemishes were typically ignored or absorbed as a cost of doing business. Nor does it weigh whether alienating loyal customers is worth the extra revenue. One could imagine Hertz approaching the end of a bad quarter and simply dialing up the AI’s sensitivity to boost revenue.
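To see how a single sensitivity knob can turn trivial scuffs into billable claims, here is a minimal, hypothetical sketch. It is not Hertz’s actual system—the function name, the pixel-change threshold, and the sensitivity values are all illustrative assumptions—but it shows the mechanism: the same cosmetic blemish is ignored at one threshold and flagged at another.

```python
# Hypothetical sketch of threshold-based damage flagging.
# All names and numbers are illustrative, not any vendor's real system.

def flag_damage(checkout_img, return_img, sensitivity=0.05):
    """Flag 'damage' when the fraction of pixels that changed between
    checkout and return exceeds the sensitivity threshold.

    Images here are simple grayscale grids of 0-255 values.
    """
    changed = sum(
        1
        for before_row, after_row in zip(checkout_img, return_img)
        for b, a in zip(before_row, after_row)
        if abs(b - a) > 20  # per-pixel change threshold (assumed)
    )
    total = sum(len(row) for row in checkout_img)
    return changed / total > sensitivity

# A tiny scuff: 2 of 100 pixels changed (2% of the image).
before = [[100] * 10 for _ in range(10)]
after = [row[:] for row in before]
after[0][0] = 200
after[0][1] = 200

# At a 5% threshold the scuff is ignored; dial sensitivity down to 1%
# and the very same scuff becomes a billable claim.
print(flag_damage(before, after, sensitivity=0.05))  # False
print(flag_damage(before, after, sensitivity=0.01))  # True
```

The point of the sketch is that nothing about the car changed between the two calls—only a configuration value did, which is exactly why human oversight of such thresholds matters.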

This is the darker side of AI adoption. It’s not that the algorithms are inherently unethical—after all, accurately detecting scratches is technologically impressive. The issue lies in how companies choose to deploy these tools. Instead of enhancing customer trust through genuinely fair damage assessments, the technology appears to be used to pad the bottom line. What might be pitched as a more “objective” system ends up a blunt instrument, designed to maximize claims regardless of the customer experience.

More concerning is how quickly this approach is spreading. The same article notes that other rental companies are exploring similar AI-driven inspections. As with many business technologies, once one major player normalizes a practice, competitors may feel compelled to adopt it just to keep up. Soon, an entire industry might routinely subject travelers to automated micro-inspections and retroactive billing, fundamentally changing the norms of car rentals.

Humans are still needed

This case also highlights a critical point for companies and consumers alike: AI doesn’t remove the need for human oversight. In fact, it demands more. Algorithms reflect the goals set by their corporate masters. If a company programs its AI to flag and monetize every scratch, that’s exactly what it will do. Without safeguards, appeals processes, or standards that distinguish meaningful damage from ordinary wear, AI becomes a tool for exploitation rather than improvement. Furthermore, a company can’t simply roll out a new technology without bringing its customers along and explaining why it makes sense.

AI can still be a force for good. The same technologies that scan for dents could be used to protect customers, such as by automatically documenting a vehicle’s condition at pickup to shield renters from false claims later. With transparent policies, easy access to photos, and fair thresholds for what constitutes billable damage, AI could build trust instead of eroding it.

Ultimately, the Hertz example serves as a powerful reminder that while AI is transforming industries, it’s not inherently ethical or customer-friendly. Those qualities depend entirely on how humans design, deploy, and regulate it. If left unchecked, the temptation to leverage AI for short-term gains at the expense of fairness will be too great for many companies to resist. And it’s the consumer who will pay the price.

Hertz misjudged new technology before, and it may be doing so again. A number of years ago it added Teslas to its fleet, and the move backfired, costing the CEO his job. Apparently enamored with the new technology, the company never anticipated how renters would rebel when they were unfamiliar with the unusual controls and had to stop at charging stations on the way back.

Once again, Hertz seems to be jumping on a new technology without fully understanding how it can bite back and alienate its customers.