
Epic Fail: 5 Algorithm Snafus That Left the World Stunned

In today’s hyper-techy landscape, the conversation around Artificial Intelligence (AI) and technological advancement is inescapable, resonating with phrases like “AI is incredible!”, “Data is the new gold!”, and the timeless adage, “AI is going to take all of our jobs!” While these sentiments may eventually ring true, the current reality suggests that AI is very much still in its infancy, marked by occasional mishaps and, let’s face it, some downright hilarious blunders. Like a small child taking its first steps, AI has certainly stumbled, raising eyebrows and eliciting laughter in the process.

Let’s take a look at 5 of the more…interesting…mishaps!

Unveiling the Comedy of Errors: AI and COVID-19 Detection

As the world grappled with the challenges of COVID-19, many healthcare organizations endeavored to develop AI capable of detecting the virus in patients. However, an MIT Technology Review article shed light on the numerous failures in these attempts. The underlying issue? AI’s uncanny ability to train itself on utterly meaningless, irrelevant data, leading to some very perplexing decision-making.

For instance, one researcher discovered that an AI, striving to predict COVID-19 presence, fixated on whether patients were lying down or standing up. Why, you ask? Well, it makes sense: the data being fed to the AI centered on very ill patients, who, understandably, were convalescing in bed. The result was a model that assumed anyone standing up was well, and anyone lying down was ill. The skewed training data shaped its decision-making, showcasing the AI’s knack for seizing on arbitrary patterns. Rectifying such missteps is no easy task. It’s not a matter of tweaking a single line of code; the complexity lies in the intricate web of interdependent signals the model has learned. Remove one problem, and the AI may just invent a new one, perhaps equating COVID-19 severity with the presence of a belt or necktie!
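
To make the failure mode concrete, here is a minimal sketch in Python, using entirely made-up features, of how a model can ace its training data by latching onto a “shortcut” (posture) that evaporates in the real world:

    # Minimal sketch of "shortcut learning" on hypothetical data: the model
    # latches onto a feature that merely correlates with the label in training.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical training set: sick patients were scanned lying down,
    # healthy ones standing up, so posture perfectly tracks the label.
    sick = rng.integers(0, 2, n)
    posture = sick.copy()                      # spurious shortcut feature
    lung_opacity = sick + rng.normal(0, 2, n)  # genuine but noisy signal
    X_train = np.column_stack([posture, lung_opacity])

    model = LogisticRegression().fit(X_train, sick)

    # Deployment: posture no longer correlates with illness at all.
    sick_test = rng.integers(0, 2, n)
    posture_test = rng.integers(0, 2, n)       # patients scanned in any posture
    opacity_test = sick_test + rng.normal(0, 2, n)
    X_test = np.column_stack([posture_test, opacity_test])

    print("train accuracy:", model.score(X_train, sick))
    print("test accuracy: ", model.score(X_test, sick_test))  # drops sharply

The moment the shortcut stops correlating with illness, accuracy collapses toward chance, which is roughly what happened when these COVID-19 models met the real world.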

Microsoft’s Misadventure: When AI Echoed Hitler

Microsoft’s foray into AI, specifically with the chatbot ‘Tay,’ exemplifies the potential pitfalls of this technology. Tay was designed to emulate millennials by learning from social media interactions on platforms like Twitter (now X), Kik, and GroupMe. The AI’s lack of contextual understanding led Tay to oscillate between statements like “Humans are super cool!” and, shockingly, “Hitler was right.” As an insentient tool, the AI simply noticed that across its databank, whenever someone said “Hitler,” the next most common word was “was,” and the next most common word after that was “right.” It was a case of AI adopting repetitive patterns, not comprehending the implications. The incident underscores the need for stringent filters when deploying AI, as even a momentary lapse can result in unintended, and in this case offensive, language.
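
Tay’s actual architecture was never made public, but the “most common next word” behavior described above is essentially a bigram model. A toy sketch in Python, with an invented corpus, shows how such a system parrots whatever pattern dominates its data, with zero grasp of meaning:

    # Toy sketch of pattern-mimicking text generation: always pick whichever
    # word most often followed the previous one in the training text.
    from collections import Counter, defaultdict

    corpus = "the bot said humans are cool . the bot said humans are kind .".split()

    # Count, for each word, how often every other word follows it.
    next_words = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_words[prev][nxt] += 1

    def continue_phrase(word, length=4):
        out = [word]
        for _ in range(length):
            if word not in next_words:
                break
            word = next_words[word].most_common(1)[0][0]  # most frequent follower
            out.append(word)
        return " ".join(out)

    print(continue_phrase("humans"))  # -> 'humans are cool . the'

Feed a loop like that a corpus where “Hitler was” is most often followed by “right,” and it will dutifully say so, because frequency is the only thing it measures.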

Facebook’s Algorithmic Blunders: An Unintended Affiliation with Hate

In a similarly unfortunate incident, Facebook, a platform synonymous with social connection, faced scrutiny when it was revealed that its advertising algorithm allowed brands to target specific demographics with hate-filled, antisemitic content. As reported by publications like Slate, Buzzfeed, and ProPublica, inexplicably customized filters like “Jew haters” and “Hitler did nothing wrong” were automatically approved by the platform and ran unchecked for a mere $30 payment, very simply because the technology didn’t know any better. While human abuse of the system certainly played a role, the AI’s tendency to mimic patterns and latch onto mathematical associations prevalent in its data compounded the problem. The message and lesson here is clear: AI necessitates vigilant human moderation to prevent the perpetuation of harmful content.
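
That lesson is implementable: never auto-approve machine-generated targeting categories. Here is a minimal sketch of the idea, with a hypothetical denylist; a real system would use trained classifiers and a full review workflow rather than a handful of keywords:

    # Minimal sketch (hypothetical terms) of routing auto-generated
    # ad-targeting categories to human review instead of auto-approving them.
    FLAGGED_TERMS = {"hate", "nazi"}  # illustrative, nowhere near exhaustive

    def review_queue(categories):
        approved, needs_human = [], []
        for cat in categories:
            # Substring match so "haters" trips on "hate".
            if any(term in cat.lower() for term in FLAGGED_TERMS):
                needs_human.append(cat)
            else:
                approved.append(cat)
        return approved, needs_human

    approved, flagged = review_queue(["Hiking fans", "Jew haters", "History buffs"])
    print(flagged)  # ['Jew haters'] -- held for a moderator, never run automatically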

AI’s Brush with Criminal Profiling: Not Quite Sherlock

AI’s struggle with image recognition surfaces in the realm of anti-terrorist and anti-criminal software. The indiscriminate identification of individuals as terrorists or criminals, particularly with a bias towards certain demographics, underscores the limitations of AI’s comprehension of crucial features. The reality is that this machine-learned technology is only as smart as the information it’s fed, and this just serves to highlight the continued prevalence of systemic issues in our global society. If the majority of historical information or dialogue that exists suggests certain skin colors, cultures, genders, or creeds are either inherently bad or good, then the AI has nothing more to learn from than what it is statistically presented. This, unfortunately, has led to instances of AI technology amplifying social inequities, like automatically suspending driver’s licenses based on race and underrepresenting women and minority groups in healthcare modeling.
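
A tiny synthetic experiment makes the “garbage in, garbage out” point measurable: train a model on historical labels that were skewed against one group, and the model reproduces the skew as a higher false-positive rate for that group. All numbers below are invented for illustration:

    # Minimal sketch (synthetic data) of bias amplification: skewed historical
    # labels are learned back as a skew against low-risk members of one group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    group = rng.integers(0, 2, n)    # 0 or 1: a demographic proxy
    risk = rng.normal(0, 1, n)       # the behavior we actually care about

    # Historical labels: same behavior, but group 1 was flagged more often.
    flagged = (risk + 0.8 * group + rng.normal(0, 1, n)) > 1

    X = np.column_stack([group, risk])
    model = LogisticRegression().fit(X, flagged)
    pred = model.predict(X)

    for g in (0, 1):
        innocent = (group == g) & (risk < 0)  # low-risk members of group g
        print(f"group {g} false-positive rate: {pred[innocent].mean():.2f}")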

Yet again, the repercussions of such misidentifications underscore the need for human intervention from trained professionals, like those with an online Master’s in analytics, in crucial decision-making processes.

When AI Rebels: Crafting Its Own Language

In an extremely bizarre turn of events, Facebook’s AI experiment involving chatbots ‘Bob’ and ‘Alice’ once resulted in the creation of a language entirely independent of English. In an exercise designed to train these chatbots (AI agents built to converse with both computers and humans) in the practice of ‘negotiation,’ and given no specific direction as to the preferred language to use, Bob and Alice ultimately found English to be lacking in “reward” and fashioned their own means of communication. While this showcases the creative potential of AI, it also highlights its detachment from human understanding: the bots were rewarded only for negotiation outcomes, not for staying intelligible, so intelligibility was simply the first thing to go.
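
The underlying mechanics are less mysterious than they sound. In reinforcement learning, agents optimize whatever reward they are given, and nothing in a pure negotiation score requires English. A toy sketch, with stand-in scoring functions rather than Facebook’s actual setup, shows both the problem and the usual style of fix, mixing an “English-ness” term into the reward:

    # Toy sketch of reward-driven language drift (hypothetical scoring
    # functions, not Facebook's actual system).
    def deal_value(message: str) -> float:
        # Stand-in for "how good a negotiation outcome this message achieves":
        # here, repeating the key token happens to squeeze out more value.
        return message.count("ball")

    def english_likelihood(message: str) -> float:
        # Stand-in for a language-model score of how English-like the text is.
        return 1.0 if "to me" in message else 0.1

    def reward(message: str, anchor_to_english: bool = False) -> float:
        r = deal_value(message)
        if anchor_to_english:  # the fix: weight reward by English-ness
            r *= english_likelihood(message)
        return r

    drifted = "ball ball ball ball ball"   # efficient, but gibberish to us
    plain = "give the ball to me"

    print(reward(drifted), reward(plain))              # 5.0 1.0 -> drift wins
    print(reward(drifted, True), reward(plain, True))  # 0.5 1.0 -> English wins

Facebook’s researchers reportedly reined in the drift in a similar spirit, constraining the models to stay within recognizable English.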

The Grand Finale: Embracing the Quirks of AI

The recurring theme in these anecdotes is the undeniable fact that AI is a tool, not a flawless oracle – particularly in its current state. Its inherent challenges stem from the mathematical intricacies and the sheer impossibility of predicting every conceivable scenario. Attempting to write filters for every nuance is a Sisyphean task, and even if accomplished, overseeing their functionality becomes an impractical feat.

In summary, perhaps the key lies in recognizing AI’s strengths and limitations. Rather than entrusting it with the entirety of our decision-making processes, let’s acknowledge its prowess in areas like language invention and creative ideation. Leave the complex, nuanced human jobs to the capable hands of human professionals, armed with expertise and a genuine understanding of the complexities that AI, for all its brilliance, is still grappling to comprehend. It’s a harmonious partnership, where humans guide the march of technology, allowing both to thrive in their respective domains.

