Introduction
Artificial intelligence (AI) is an approach to making a computer, software, or a computer-controlled robot think intelligently, as the human mind does. It is achieved by examining the human brain's patterns and by assessing cognitive processes, and the results of these studies are used to create intelligent systems and software. The core idea of AI is that human intelligence can be defined precisely enough for a machine to mimic it, carrying out activities from the simplest tasks to the most complex ones. This paves the way for human capacities to be undertaken by systems and software efficiently, effectively, and at a reduced cost.
However, many AI projects have failed to achieve the desired outcomes, and the overall failure rate remains high. This research paper provides a detailed worldwide timeline of artificial intelligence projects that were attempted and failed and the threats they have caused. In addition, the paper compares similar projects and discusses how these projects could have succeeded.
AI Projects That Have Been Unsuccessful and Their Threats
AI is transforming industries and the way companies operate across many use cases. Nonetheless, creating and successfully executing an AI project for business development poses substantial challenges that slow AI adoption in companies. It has been estimated that 85% to 92% of AI projects may fail and deliver faulty results through 2022, and 70 percent of firms report little or no impact from AI (Dilmegani, 2021). The following are examples of AI projects that have failed and posed threats to users, organizations, and the public.
IBM Watson for Oncology Project
This is a well-known example of AI project failure, in which IBM partnered with The University of Texas MD Anderson Cancer Center to create IBM Watson for Oncology to enhance cancer care. Internal IBM reports revealed that Watson often provided faulty cancer treatment advice, for example, recommending a drug that could worsen bleeding for a patient already suffering from severe bleeding (Bhattacharya, 2021). Watson's training data consisted largely of a small number of hypothetical cancer cases rather than actual patient data. The University of Texas System administration reported that the project cost amounted to $62 million spent on developing the AI system (Dilmegani, 2021). The project aimed to help in the fight against cancer but failed to deliver.
Moreover, the project outcome was unsatisfactory to healthcare facilities and patients. According to a physician at Jupiter Hospital in Florida, the product was a failure; the doctor claimed that it was bought mainly for marketing purposes. According to patients and medical experts, Watson advised doctors to give cancer patients with severe bleeding a drug that could worsen the bleeding, and many cases of erroneous and risky therapy recommendations were reported by medical practitioners and clients. This posed a serious threat to the well-being of cancer patients and to hospitals' reputations, since faulty AI recommendations in treatment procedures might harm or even kill patients.
Amazon's AI Recruiting Tool
Amazon's AI recruitment tool discriminated against women, marking another example of a failed project. The tool was trained on a dataset consisting mostly of resumes from male applicants, and it learned to treat female applicants as less acceptable and less preferable. Therefore, Amazon opted to shut down its experimental AI project after the company realized it showed bias against women (Dilmegani, 2021).
The tool was intended to identify top talent in the industry. Amazon's recruiting system taught itself that male applicants were preferable and penalized resumes that contained the word "women". In addition, gender discrimination was not the only problem: issues with the data behind the model's decisions meant that unqualified applicants were regularly recommended for all types of jobs (Dastin, 2018). The system posed a threat to the company's reputation because of its discrimination against women and its recommendation of unsuitable candidates, a kind of bias that routine audits such as the sketch below can surface.
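As a hedged illustration (not a description of Amazon's actual system), bias of this kind can be surfaced with a simple disparate-impact audit of a screening model's outputs. The column names, sample data, and four-fifths threshold in the sketch below are hypothetical assumptions.

```python
# Minimal sketch of a disparate-impact audit for a resume-screening model.
# Column names ("gender", "shortlisted"), sample data, and the 0.8 cutoff
# (the common "four-fifths rule") are illustrative assumptions only.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of selection rates between the least- and most-favored groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical shortlisting decisions produced by a trained screening model.
results = pd.DataFrame({
    "gender":      ["male", "male", "male", "female", "female", "female"],
    "shortlisted": [1, 1, 0, 0, 1, 0],
})

ratio = disparate_impact(results, "gender", "shortlisted")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Warning: possible adverse impact against one group.")
```

A routine check of this sort, run before deployment, could have flagged the gender disparity that ultimately led Amazon to abandon the tool.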
Facial Recognition Tools
AI researchers discovered that commercial facial recognition systems, for example, those of Microsoft, Amazon, Facebook, and IBM, performed well on light-skinned men and poorly on dark-skinned women. The systems were believed to propagate racial and gender discrimination, leading to their failure. For example, Amazon found itself facing a face recognition problem of its own. Amazon's system was meant to recognize criminals based on their facial images; however, when it was tested on a batch of photos of members of Congress, it was found to be inaccurate and racially biased (Dilmegani, 2021).
It was noted that close to 40 percent of the system's erroneous matches in the experiment were of people of color, although people of color comprised only about 20 percent of Congress (Bhattacharya, 2021). Hence, relying on such a system to gauge whether an individual is an offender would pose a threat to society, as innocent people may be wrongly identified; a simple per-group error audit, sketched below, can expose this kind of disparity.
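As a hypothetical sketch (not any vendor's actual evaluation code), error rates can be broken down by demographic subgroup so that disparities like the one above become visible. The subgroup labels and records below are made up for illustration.

```python
# Minimal sketch of a per-subgroup error audit for a face recognition system.
# Subgroup labels and records are hypothetical; the point is that an overall
# accuracy figure can hide large error-rate gaps between demographic groups.
from collections import defaultdict

# Each record: (subgroup, predicted_match, true_match)
predictions = [
    ("light-skinned male", True, True),
    ("light-skinned male", False, False),
    ("dark-skinned female", True, False),   # false match
    ("dark-skinned female", True, False),   # false match
    ("dark-skinned female", False, False),
]

stats = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, predicted, actual in predictions:
    stats[group][1] += 1
    if predicted != actual:
        stats[group][0] += 1

for group, (errors, total) in stats.items():
    print(f"{group}: error rate {errors / total:.0%} ({errors}/{total})")
```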
Self-driving Cars
Current AI projects on self-driving vehicles have also failed in safety-critical ways, for example by being misled into driving in the opposite lane. Researchers have shown that they can trick a Tesla car's AI system into driving in the opposite lane by placing small stickers on the road (Dilmegani, 2021). Hence, the system poses a safety threat to those using it. A driverless car must process its environment and make decision calls using perception and decision-making technology, so stickers on the road may misguide the vehicle into the reverse lane and lead to crashes. These emerging safety threats have mounted pressure from regulators on self-driving car companies to report where humans may need to take over from robotic drivers for safety (Nast, 2019).
Furthermore, in 2018 a self-driving Uber car struck and killed a pedestrian. The investigation reports showed that Uber's organizational structure was not developed to catch safety faults, and its software contained glaring safety gaps. Investigators noted that the AI system for Uber's self-driving cars was explicitly designed not to respond to an oncoming crash for the space of one second. Perhaps because the software kept detecting "ghosts" (false obstacles), the engineers appeared more worried about the car braking hard without reason than about it failing to react when it was about to hit somebody. The car was also not designed to identify pedestrians outside of crosswalks (Nast, 2019). Such faulty AI projects pose a threat to pedestrians and to passengers in Uber's self-driving cars.
Reasons Some AI Projects Fail and Possible Solutions for Success
Inferior Data Quality
In every AI project, data is a key resource for ensuring success. Enterprises should establish a data governance approach to assure the quality, availability, security, and integrity of the data they will use in their project. Operating with outdated, inadequate, or biased data may contribute to project failure, garbage-in-garbage-out situations, and wasted company resources (Dastin, 2018).
The performance of AI systems and software deployed during the pandemic to respond to the coronavirus is a good example of the significance of data quality in AI projects. Hundreds of AI tools and systems were developed to forecast patients' risk or diagnose COVID-19 from data such as medical images, yet reviewers found that none of them was appropriate for clinical application (Dilmegani, 2021). This poor data quality resulted in misdiagnoses by AI systems, posing a threat to the public.
Further, most of the issues were associated with data quality problems such as mislabeling and unknown sources. A majority of the models used a dataset of children's healthy chest scans as examples of non-COVID cases; as a result, the AI learned to spot children rather than COVID-19 cases. In certain cases, the AI used the text fonts that particular healthcare facilities applied to label the scans as a predictor of COVID-19 risk. In other instances, the models used chest scans taken while some patients were standing up and others were lying down; because a patient lying down is more likely to be seriously ill, the AI inferred patients' risk from their position rather than from the scans themselves (Dilmegani, 2021).
Consequently, the solution is that, before embarking on an AI project, organizations need to ensure they have relevant and sufficient data from reliable sources that reflect their business operations, carry accurate labels, and are appropriate for the AI system or software being deployed (Zhang et al., 2021). Otherwise, AI systems and software may generate faulty results and may be risky if used for decision-making. A simple sketch of such pre-training data checks follows.
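The following is a minimal sketch, under assumed column names ("label", "source") and thresholds, of the kind of basic data checks an organization might run before training; it is not taken from any of the cited projects.

```python
# Minimal sketch of pre-training dataset sanity checks.
# Column names ("label", "source") and thresholds are hypothetical assumptions.
import pandas as pd

def basic_data_checks(df: pd.DataFrame, label_col: str = "label") -> list:
    """Return human-readable warnings about common data quality issues."""
    warnings = []
    if df[label_col].isna().any():
        warnings.append("Some records are missing labels.")
    if df.duplicated().any():
        warnings.append("Dataset contains duplicate records.")
    shares = df[label_col].value_counts(normalize=True)
    if shares.max() > 0.9:
        warnings.append(f"Severe class imbalance: '{shares.idxmax()}' covers {shares.max():.0%} of the data.")
    if "source" in df.columns and df["source"].nunique() == 1:
        warnings.append("All records come from a single source; results may not generalize.")
    return warnings

# Hypothetical training data for a diagnostic model.
data = pd.DataFrame({
    "label":  ["covid", "non-covid", "non-covid", "non-covid", None],
    "source": ["hospital_a"] * 5,
})
for warning in basic_data_checks(data):
    print("WARNING:", warning)
```

Checks like these address exactly the failure modes described above: missing or wrong labels, duplicated records, and data drawn from a single, unrepresentative source.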
Unclear Business Goals
AI is an influential technology, yet executing it without a well-defined business problem and clear business objectives is insufficient to attain success. Firms should begin by defining the business problem and determining whether AI tools and techniques would help address it, instead of starting from a solution in search of an undefined business issue (Kahn, 2022). Further, estimating the potential benefits and costs of AI projects is difficult because AI models are tailored to the specific business problem at hand, which means the results will not be the same for every use case.
In addition, creating an AI project and training or building an AI model is experimental and can require a long trial-and-error process. Therefore, a well-defined business objective offers a clear indication of whether AI is an appropriate system or software, or whether alternative approaches or tools would better address the issue at hand (Bhattacharya, 2021). This may save firms from the unnecessary costs of investing in unsuccessful AI projects.
Lack of Collaboration Between Teams
A data science team that works in isolation on an AI project is not a recipe for success. Developing a successful AI project requires collaboration between data engineers, data scientists, designers, IT experts, and business professionals (Dilmegani, 2021). Developing a collaborative technical environment helps organizations ensure that the AI project's output can be well integrated into their overall technology architecture. In addition, teams can share experiences, develop best practices, and deploy AI solutions at scale (Kahn, 2022), which improves standardization in the AI development process.
To resolve the lack of collaboration, there are sets of practices referred to as MLOps and DataOps. They help bridge the gap between the various teams in an organization and operationalize AI software or systems at scale (Zhang et al., 2021). Further, developing a federated AI Center of Excellence in which data scientists from various business units work together can enhance collaboration.
Scarcity of Talent
The most pressing issue hindering the adoption of AI in companies is the lack of skilled personnel in data science. Establishing a talented data science group can be expensive and time-consuming because of the skill shortage, and without a team with the right business domain skills and training, firms should not expect to achieve much from their AI initiatives (Kahn, 2022). Hence, to resolve the issue, companies should weigh the benefits and costs of building in-house data science teams against those of outsourcing; initially, outsourcing AI system operations may be a more cost-effective option for executing AI software or systems (Haller, 2022). Organizations should invest in training and building their human resources to implement AI in-house, or engage outside experts to guide them through the process.
Conclusion
AI presents noble ideas aimed at improving and changing people's lives through the invention of new products and services. However, if not well implemented it can lead to various problems, as an estimated 85% to 92% of AI projects are forecast to fail; for example, IBM Watson's project failure affected patients and physicians, and the money invested in it was wasted. This calls for those developing AI systems and software to build them with clear objectives, quality data, and the right skills, while enhancing collaboration between teams, for success.
References
Bhattacharya, S. (2021). Top 10 Massive Failures of Artificial Intelligence Till Date. Web.
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Web.
Dilmegani, C. (2021). 4 reasons for artificial intelligence (AI) project failure. Web.
Haller, K. (2022). Structuring and delivering AI projects. Managing AI in the Enterprise, 9(2), 23-60. Web.
Kahn, J. (2022). Want your A.I. project to succeed? Don't hand it to a data scientist. Web.
Nast, C. (2019). The failure of Uber's self-driving car, Polestar's debut, and more car news this week. Web.
Zhang, H., Zhang, X., & Song, M. (2021). Deploying AI for new product development success. Research-Technology Management, 64(5), 50-57. Web.