Five Main AI Failures You Should Know About
Are you curious about what could go wrong with AI projects?
If you’ve heard about some of the artificial intelligence trends of 2023 and are thinking about incorporating AI into your workflow, you may be wary of AI project failures.
These have left countless companies facing huge losses and a compromised workflow.
Unfortunately, the majority of AI initiatives today fail. A Pactera study found that 85% of all AI projects end up not meeting their objectives.
This means that for every ten AI projects your company tries out, chances are that only one or two will end up successful.
It is therefore important to know about the biggest AI failures of our time so that you can avoid common project pitfalls.
In this article, I will be taking you through the main artificial intelligence failures you should know about, and possibly learn from to achieve success with your own project.
Let’s get started.
1. Amazon Recruiting Software Developed Bias
If you’ve worked in a recruitment department for some time, you’ll agree that human bias is a huge problem.
Unconscious bias in hiring processes is especially rampant. This is in line with a Yale University study in which participants subconsciously favored hiring men over women and even offered them $4,000 more in annual salary.
It was therefore understandable why Amazon turned to AI-powered recruiting software to streamline the hiring process.
However, Amazon’s hiring software failed as it developed gender bias.
The eCommerce giant consequently joined the list of AI failure examples after trying in vain to tap into the latest digital transformation trends.
The system was meant to ease the work of evaluating countless resumes. It filtered applications via natural language processing and machine learning to offer a handful of qualified candidates for review.
The algorithm had been trained to evaluate candidates on experience.
Because the job positions were mostly in a male-dominated tech field, where men tended to have more experience than women, the machine learning software developed a preference for male candidates.
So why did Amazon’s recruiting AI go wrong?
Artificial intelligence failures like this simply trace back to insufficient training data, with 41% of companies today attesting to inconsistent data across their workflows.
The data sources in this Dun & Bradstreet report ranged from employment management systems to CRMs.
If you have some experience with AI models, you may know that these algorithms are a reflection of the data they feed on.
In Amazon’s case, the model considered a 10-year work history from the company’s employee management system, which contained limited data about female workers and their performance in the industry.
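To see how this happens mechanically, consider a minimal sketch in which a classifier trained on historically imbalanced hiring records learns to score otherwise identical candidates differently by gender. Everything here is invented for illustration; it is not Amazon’s model or data.

```python
# Minimal, hypothetical sketch: a classifier trained on imbalanced
# historical hiring data reproduces the bias. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.choice([0, 1], size=n, p=[0.2, 0.8])        # 1 = male, 0 = female
experience = rng.normal(5 + 2 * gender, 2)               # men logged more recorded experience
# Historical hiring decisions correlated with gender, not only skill.
hired = (experience + 3 * gender + rng.normal(0, 1, n)) > 7

X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates identical in experience, differing only in gender:
print(model.predict_proba([[6, 1]])[0, 1])   # male candidate scores higher
print(model.predict_proba([[6, 0]])[0, 1])   # female candidate scores lower
```

Because gender correlated with past hiring decisions, the model treats it as a predictive feature. Note that simply dropping the explicit gender column is rarely enough, since proxy features (certain keywords, schools, or clubs) can carry the same signal.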
2. Microsoft Tay Chatbot Became Offensive
Are you thinking about implementing an intelligent assistant?
If so, you’re not alone because chatbots are all around us today.
Often, you may consider intelligent virtual assistants to ease the burden of customer support and cater to round-the-clock demands.
However, it’s worth noting that 73% of respondents in a DigitasLBi survey said that they’d never use the same bot again if they had a bad experience with it, which leads us to the next AI failure story.
Microsoft unveiled the Tay Chatbot in March 2016 to handle its social media engagement on Twitter.
Unfortunately, this AI-driven assistant became one of the biggest AI project failures of all time due to bad design.
Trained on data sets that mimicked the personality of a teenage girl, the chatbot soon started dishing out inflammatory remarks to users. It was taken offline as a result, and the company was forced to issue an apology to contain the bad publicity.
Tay, like many other chatbots, worked on a corpus of data, from which it was able to map responses to questions it received.
For more on building reliable and professional chatbots for businesses without coding expertise, these AI courses are worth your consideration.
The problem came in because Microsoft designed the software to learn from actual conversations it had with humans online.
As a result of this machine learning flaw, the chatbot also picked up profanities and used them regularly in its responses.
All in all, such artificial intelligence failures trace back to the failure to blacklist derogatory words and to the absence of a strict, curated corpus for supervised learning.
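One of the simplest safeguards implied here is to filter every outgoing reply against a blocklist before it is posted, rather than letting the bot learn unfiltered from users. A minimal sketch; the blocklist terms and function names are placeholders, not Microsoft’s actual design:

```python
# Hypothetical sketch: screen a chatbot reply against a blocklist
# before sending it, instead of echoing unsupervised user input.
BLOCKLIST = {"badword1", "badword2"}  # placeholders; real lists are curated and large

def safe_reply(generate, prompt, fallback="Let's talk about something else."):
    reply = generate(prompt)                 # `generate` is any text-generation callable
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return fallback if words & BLOCKLIST else reply
```

Word lists alone are easy to evade with misspellings or spacing, so production systems typically layer a toxicity classifier on top; the Tay incident shows that even this baseline was missing.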
3. Uber Self-Driving Car Caused a Fatality
Autonomous vehicles (AVs) are viewed as a solution to many of our travel problems.
At the top of the list are car accidents, which you may believe self-driving cars could help reduce by eliminating fatigue and the effects of poor driving conditions.
With human error playing a part in 94% of all major accidents, going by research from the NHTSA, autonomous vehicles are often lauded as the solution.
Yet, certain examples of AI failures like Uber’s self-driving car accidents provide us with evidence that autonomous vehicles still pose huge risks.
On March 18, 2018, an Uber self-driving vehicle struck and killed a pedestrian who was pushing a bicycle across the road in Tempe, Arizona.
However, this is not an isolated case. A University of Michigan study found that autonomous vehicles are involved in more than twice as many crashes per mile driven as conventional cars.
A typical autonomous vehicle uses computer vision technology, paired with proximity and other sensors, to detect and avoid objects.
In Uber’s case, there was a delay between when the object was detected and when the system took corrective action. As a result, the car slowed and braked only after the crash.
Such artificial intelligence failures trace back to overlooking system loopholes.
Uber’s engineers had not programmed the system to respond to split-second collision threats, a choice made to avoid the frequent hard braking that would degrade the passenger experience.
In other words, the transport-on-demand giant traded away safety margins in favor of the user experience.
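To make the trade-off concrete, here is a toy calculation showing how a brake-suppression window eats into the time available to react. All figures are invented and are not Uber’s actual parameters:

```python
# Hypothetical sketch: how an action-suppression window shrinks the
# time budget for emergency braking. All numbers are invented.
def reaction_margin_s(distance_m: float, closing_speed_mps: float,
                      suppression_s: float = 1.0) -> float:
    """Seconds left to brake once the suppression window has elapsed."""
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision - suppression_s

# Object detected 25 m ahead while closing at 17 m/s (~61 km/h):
# ~1.47 s to impact, so a 1 s suppression window leaves only ~0.47 s to act.
print(round(reaction_margin_s(25.0, 17.0), 2))
```

Tuning such a window for ride comfort quietly converts a solvable detection into an unavoidable collision, which is why safety margins and comfort heuristics have to be evaluated together.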
4. IBM Watson for Oncology Misdiagnosed Patients
Did you know cancer is among the leading causes of death worldwide?
Research by the American Cancer Society found that there were over 1.8 million new cancer cases in 2020. This overwhelming cancer rate is among the reasons why artificial intelligence continues to be trialed in the healthcare sector.
A while ago, IBM Watson, IBM’s AI-powered question-answering system, was leveraged to improve oncology services at the Memorial Sloan Kettering Cancer Center.
It didn’t work out as hoped, resulting in one of the biggest AI failures in history, after a $62 million investment went down the drain.
It’s easy to narrow IBM Watson’s problems down to misdiagnosis and poor data quality, which is behind most artificial intelligence failures today.
Instead of using real patient data, the development team trained the system on hypothetical treatment cases drawn from a small fraction of the available data sets. As a result, the algorithm learned from the limited perspective of a few specialists, which shaped the oncology recommendations it produced.
IBM Watson therefore provided cancer treatment suggestions that were sometimes unsafe and incorrect. The project was quickly shut down, leaving the technology giant counting its losses.
Had the company relied on wider data sets and real-life medical variables, they may have had better success with their project.
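The underlying pitfall generalizes: a model tuned only on clean, hypothetical cases can look flawless in-house and still fail on messy real-world records. A small invented demonstration of that gap (none of this is IBM’s code or data):

```python
# Hypothetical sketch: training on narrow "textbook" cases inflates
# in-house accuracy; performance drops on broader real-world data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

X_textbook = rng.normal(0, 0.3, (300, 4))                     # clean, narrow cases
y_textbook = (X_textbook[:, 0] > 0).astype(int)

X_real = rng.normal(0, 1.5, (300, 4))                         # noisier, wider cases
y_real = (X_real[:, 0] + 0.5 * X_real[:, 1] > 0).astype(int)  # a second factor matters

model = DecisionTreeClassifier(random_state=0).fit(X_textbook, y_textbook)
print("textbook accuracy:  ", accuracy_score(y_textbook, model.predict(X_textbook)))
print("real-world accuracy:", accuracy_score(y_real, model.predict(X_real)))
```

The in-house score is perfect, while accuracy on the broader distribution drops, because the real-world labels depend on a factor the textbook cases never varied.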
5. ICT’s Intelligent Camera Operator Tracked Inaccurately
Sports offer tremendous potential for the use of AI-powered technology.
One area where AI-driven software has been trialed is camera operation. Here, ball-tracking technology is leveraged to ease the manual tracking work of human camera operators.
Aside from streamlining HD streaming services for sports teams, AI-driven ball-tracking technology can also help such service providers reduce the cost of airing live matches.
Look no further than Inverness Caledonian Thistle (ICT) FC for one of the latest examples of AI failures in sports.
ICT announced its intelligent camera operator for a soccer match in October 2020.
However, things didn’t go according to plan, as the system repeatedly mistook the linesman’s bald head for the ball.
Using convolutional neural networks for object tracking, the autonomous camera operator was initially able to identify and follow the ball.
However, because of the visual similarity between the two objects, and because the camera angle placed the linesman within the expected playing area (i.e., inside the field of play), the system kept mistaking one for the other.
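A common guard against this failure mode is to gate each detection with simple plausibility checks on size, confidence, and frame-to-frame motion before the camera follows it. Again, this is a hypothetical sketch with invented thresholds, not ICT’s actual system:

```python
# Hypothetical sketch: reject implausible "ball" detections before
# steering the camera toward them. All thresholds are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    x: float            # pixel coordinates of the detection centre
    y: float
    radius_px: float    # apparent radius on screen
    confidence: float   # CNN detection score

def plausible_ball(det: Detection, last: Optional[Detection],
                   max_jump_px: float = 120.0) -> bool:
    if not 4.0 <= det.radius_px <= 30.0:   # outside the expected on-screen ball size
        return False
    if det.confidence < 0.6:               # low-confidence CNN output
        return False
    if last and abs(det.x - last.x) + abs(det.y - last.y) > max_jump_px:
        return False                       # balls don't teleport between frames
    return True
```

Checks like these would not fix the classifier itself, but they narrow the window in which a bald head at just the right distance can hijack the camera.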
If you were on the receiving end of the live stream, you may not have appreciated the day’s match coverage, just like many other upset fans after the match.
At the end of the day, artificial intelligence failures of this kind trace back to a failure to account for edge cases.
Focusing solely on the perfect-scenario use case is a common reason AI projects run into problems later on.
Conclusion
It’s time to face the truth.
AI technology is subject to the GIGO principle: garbage in, garbage out. Poor data produces poor output.
Many AI project failures come down to a lack of quality data, which produces an unreliable model that sooner or later delivers unreliable results.
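In practice, "quality data" starts with mechanical checks run before any training. A minimal sketch of the kind of report worth generating; the column names here are illustrative:

```python
# Hypothetical sketch: basic pre-training data-quality checks.
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_share_by_column": df.isna().mean().round(3).to_dict(),
        "label_balance": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

# Illustrative toy data: a missing value and a skewed label distribution
# would both show up in the report before any model is trained.
df = pd.DataFrame({"experience": [3, 5, None, 7], "hired": [1, 1, 1, 0]})
print(data_quality_report(df, "hired"))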
Therefore, you may want to try out these data science courses to better extract and clean data for successful AI projects.
In other cases, you can trace the biggest AI failures to the lack of a product-driven approach to solving business problems.
In other words, you may be creating AI-powered solutions in search of a problem, as opposed to the other way around.
Consequently, the key lessons from these artificial intelligence failures boil down to three factors: quality data, proper evaluation of business goals, and good exception handling.
See more at onlinecourseing.com
Comments
VOWG
The machines will only function based on their programming. As was stated at the end of the article, GIGO will be a problem.
Tom
I think it will be virtually impossible to take “man” out of AI, and therefore it will be mostly useless and untrustworthy.
Howdy
Microsoft and AI? Since there isn’t much intelligence at M$, no need to wonder why the artificial kind is needed.
Their chatbot messed up because everything they come up with (after others did it first) is substandard, basically beta software. As if you wouldn’t incorporate a list of profanities that the bot should never use in chat? Wow, typical M$. Useless.
“Many AI project failures are down to the lack of quality data”
What is “quality data”? For one, these bots lack intuition: the ability to just ‘know’ when to change approach, for example when a user has a problem logging in. I used a customer service chatbot once when my login wouldn’t work, yet the stupid thing just kept asking the same questions. A human would realise and take a different approach.
If I see this ChatGPT crap, I go elsewhere.
Koen Vogel
AI is ascendant, so we need to be realistic and plan for a future where it takes an increasingly large role. The two main decision-making approaches involve either statistics (model-based methods) or neural learning (machine intelligence). Either approach is subject to failure: sometimes the models are wrong, and sometimes machine learning takes the system down a wrong path. The key (for us) is to remain awake and vote with our feet: if a product is faulty, go elsewhere.