Artificial Intelligence (AI) is transforming the modern age. Using techniques like neural networks and machine learning, AI gives machines a capacity for intelligent behavior. Healthcare, telecoms, management, and finance all employ AI in many of their business processes. A study by MarketsandMarkets projects that the AI market will grow to $190 billion by 2025. As reported by Innovation Enterprise, Gartner predicts chatbots will power 85% of customer service by 2020. But amid the hype, the public tends to ignore the limitations of AI – yes, they exist!
AI has no doubt brought immense changes to healthcare through its image recognition and data categorization capabilities. These have strengthened the patient-doctor relationship and made medical treatments faster and more manageable. Virtual nurse assistants can remind patients when to take their medication. Surgeons use AI to employ smaller tools and make more precise incisions. However, none of these significant advancements has come without human support. AI and humans work as a balanced team in delivering healthcare diagnoses; AI alone cannot work such wonders.
AI cannot write software. The American computer architect Frederick Brooks argues in The Mythical Man-Month that, despite the advances AI has brought, it lacks the human faculty of understanding, which makes it incapable of writing software. He explains that writing software requires a deep comprehension of the real world and the ability to translate those intricacies into rules. Bug detection is key to delivering useful software; while AI can detect patterns that suggest the presence of a bug, it cannot truly find bugs. In 2019, Armando Solar-Lezama and Josh Tenenbaum launched SketchAdapt, which uses pattern recognition to write the familiar parts of a program.
SketchAdapt is capable of writing short programs. The researchers do not exaggerate the role of AI, making it clear that the system is intended to complement programmers, not replace them. AI is useful in software testing for tasks like prioritizing tests, automating them, generating test cases, and determining test outcomes. With the rise of DevOps and continuous delivery, AI is also valuable for real-time risk assessment during the software delivery lifecycle. However, expecting AI to detect malware and write code on its own would overestimate its current capabilities.
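To make the test-prioritization idea concrete, here is a minimal sketch of risk-based test ordering: score each test by its recent failure rate and by how much the code it covers has changed, then run the riskiest tests first. The test names, fields, and weights are hypothetical illustrations, not any specific tool's approach.

```python
# Toy risk-based test prioritization. All data and weights are
# hypothetical; real tools learn these signals from project history.

tests = [
    {"name": "test_login",    "fail_rate": 0.10, "churn": 40},
    {"name": "test_checkout", "fail_rate": 0.30, "churn": 15},
    {"name": "test_search",   "fail_rate": 0.02, "churn": 5},
]

def risk(test):
    # Weighted score: recent failures count more than code churn.
    # The 0.7 / 0.3 weights are arbitrary choices for this sketch.
    return 0.7 * test["fail_rate"] + 0.3 * (test["churn"] / 100)

# Run the highest-risk tests first.
ordered = sorted(tests, key=risk, reverse=True)
print([t["name"] for t in ordered])
# → ['test_checkout', 'test_login', 'test_search']
```

Even this crude scoring shows the pattern: the machine ranks by signals humans chose; it does not decide what "risky" means.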
AI cannot do creative writing. While AI has generated content, it cannot create without guidelines. Natural language generation (NLG) is a software process that automatically creates content from data, and businesses use it to produce data reports, messaging, and portfolios. NLG can turn out thousands more documents than human writers can. However, all of these documents are data-driven and devoid of the spontaneous creativity humans are capable of. Writers craft stories with nuanced emotions that machines do not have; fear, joy, love, and anger are among the emotions that make storytelling compelling.
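The data-driven nature of NLG can be illustrated with a minimal template-filling sketch: structured records go in, formulaic prose comes out. The records and the template are hypothetical examples, not the output of any real NLG product.

```python
# Minimal sketch of template-based natural language generation:
# every sentence is determined entirely by the input data.
# The sales records and template below are hypothetical.

records = [
    {"region": "North", "revenue": 1.2, "change": 8},
    {"region": "South", "revenue": 0.9, "change": -3},
]

TEMPLATE = ("{region} revenue was ${revenue}M, "
            "{direction} {pct}% from last quarter.")

def generate_report(record):
    direction = "up" if record["change"] >= 0 else "down"
    return TEMPLATE.format(region=record["region"],
                           revenue=record["revenue"],
                           direction=direction,
                           pct=abs(record["change"]))

for r in records:
    print(generate_report(r))
```

The output scales effortlessly to thousands of records, but nothing in it goes beyond what the template and the data already contain.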
AI cannot exercise free will. AI can make choices based on the rules of its program, and these rules are deterministic: the resulting behavior is fully determined by the initial inputs. With free will, every decision can be made in countless ways with countless outcomes. In computing, there are only two states – do or do not. For AI to have free will, infinitely many states would have to be possible, something that has not been achieved to date. AI cannot question its existence as humans do, nor can it explain its decisions as humans do. Such questions, tied to philosophy and free will, are beyond AI's reach.
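Determinism is easy to see in code. In this minimal sketch (the rules are hypothetical, standing in for any rule-based agent), the same inputs always produce the same "choice" – there is no room for the program to decide otherwise.

```python
# A deterministic rule-based "decision": output is fully determined
# by the inputs. The thresholds and actions are hypothetical.

def choose_action(battery_pct, task_queue_len):
    # Fixed rules, checked in a fixed order.
    if battery_pct < 20:
        return "recharge"
    if task_queue_len > 0:
        return "work"
    return "idle"

# Call it twice with identical inputs: the "choice" never varies.
print(choose_action(15, 3))  # → recharge
print(choose_action(15, 3))  # → recharge
```

However sophisticated the rules become, the mapping from inputs to outputs stays fixed; that is what distinguishes rule-following from free choice.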
AI cannot create safe and moral self-driving cars. Automotive companies are racing to build self-driving vehicles; a BI Intelligence report states that there will be 10 million self-driving cars on the road by 2020. But they all come with human overseers, and without this supervision, safety on the streets would not be possible. While some argue that human error causes most accidents and that AI will therefore bring greater road safety, AI cannot make moral decisions.
For example, if a choice must be made between saving a car's passengers or pedestrians, ethical preferences vary. Since even the most moral human being is not adequately prepared to make decisions in crash scenarios, a programmed car cannot hold a strong moral position. Moreover, because information from self-driving cars is sent to a central computer that uses AI to analyze it and make decisions, any unauthorized tampering can cause security breaches. Hackers have already compromised autonomous vehicles. AI is not equipped to create entirely safe cars.
AI cannot invent. AI can follow rules; it cannot create from scratch as humans can. Humans can invent scientific tools, compose songs, and prove mathematical theorems. These innovations are genuinely original, unlike anything produced by AI. AI uses past observations to learn a general model or pattern, which it can then use to make predictions about similar future occurrences. AI cannot think outside the box the way humans do.
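The "learn a model from past observations, then predict similar cases" loop can be sketched in a few lines with ordinary least-squares regression. The data are hypothetical (hours studied versus exam score); the point is that the model only interpolates patterns already present in its observations.

```python
# Learning a pattern from past observations: fit a straight line
# (ordinary least squares) and predict a similar, unseen input.
# The data are hypothetical toy values.

xs = [1, 2, 3, 4, 5]        # hours studied (past observations)
ys = [52, 55, 61, 64, 70]   # exam scores

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(hours):
    # Prediction is just the learned pattern extended to a new input.
    return intercept + slope * hours

print(round(predict(6), 1))  # → 73.9
```

The prediction for 6 hours is entirely dictated by the five points the model has seen; nothing in the procedure can produce an idea that is not already latent in the data.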
AI can recognize objects in images, translate languages, speak, navigate maps, predict crop yields, support disease diagnoses through visual data analysis, verify user identity, prepare documents, make lending decisions in financial management, and perform scores of related tasks; but it cannot do everything. Most importantly, as the examples above show, AI works best in collaboration with humans. We should be realistic about the scope of AI even as we get excited about its prospects.