
AI says NO!


Imagine you’re a woman, exhausted at 11:30 PM, leaving a dance after a great night with friends. All you want is to get home safely. You open your Uber app, hoping to catch a ride, only to be met with a chilling message – “Access denied.” Suddenly, a fun evening turns into a potentially dangerous situation.

Perplexed and frightened, you desperately try reaching out to customer support, but there’s no immediate help available. After much effort, you manage to find an email address… but how does that help you get home? Despite the odds, you send an email anyway.

Luckily, a friend leaving the club hears about your predicament and offers you a ride home. Disaster avoided.

This isn’t a fictional tale; it happened to a friend, whom we’ll call Jo. Thankfully, Jo made it home safely, albeit shaken, and resolved to share her experience to prevent others from facing a similar ordeal.

How did this happen?

At the core of this issue is the clash between technological efficiency and human accountability. While automated systems promise swift operations, they also raise concerns about transparency and fairness. Without clear explanations for bans, individuals like Jo are left in the dark, grappling with the consequences of decisions made by algorithms hidden from scrutiny. It’s not about human obstinacy – nobody knows why the ban was enacted. Absolutely nobody.

While companies may argue that automated bans are necessary for platform integrity and safety, mechanisms must be in place to ensure accountability and recourse for affected individuals. Transparency should be paramount, with explicit guidelines on ban criteria and avenues for appeal.

The email response from Uber failed to elucidate the ban’s rationale – even Uber’s staff were clueless. And they couldn’t (or wouldn’t) reverse the ban. With the cause unknown, resolution was impossible.

Jo had been a loyal customer for years, with no changes to Uber’s policies or her agreement with them. Yet, possibly due to alterations in Uber’s AI algorithms, she was denied service on a whim. Does Jo have a right to continue using the service? What data informed the ban? Was it accurate or legally obtained? It could’ve been a case of mistaken identity… who would know?

“Computer says No!” was David Walliams’s famous catchphrase from Little Britain in 2004. Now, “AI says No!” rings truer, yet even AI’s creators are clueless about the reasons. It’s no laughing matter when real people like Jo face genuine danger due to AI’s decisions. What was once humorous becomes alarming when AI dictates individual outcomes to anyone, at any time, without notice.

While suddenly losing access to Uber might seem trivial, imagine if it were the bank calling in your mortgage, or an AI determining your eligibility for life-saving medical treatment.

It could be you next… you in mortal danger because “AI says No!”
