Realities of Artificial Intelligence

Graphic by Harshitha Damodaran

When you hear the phrase Artificial Intelligence (AI), your first thoughts might include popular movies like The Terminator and I, Robot. But how far from the truth is all this myth and magic that surrounds AI? 


AI has certainly made a significant impact on our lives, as can be seen in examples such as facial recognition software identifying people in videos and images, or smart assistants like Google Assistant and Siri holding conversations with us. This is all very exciting until we actually look at the factory floor and see how things are running on the backend.


As written in a 2004 paper cited by IBM, “[AI] is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”


AI is actually showing us the true nature of human society, culture and behavior, because all AI does is mimic us and make predictions much as a human would. Data is one of the reasons for this: since the data is generated from our own actions, AI learns from it and makes similar predictions. It will take a long time for us to change our behavior, and until then the bias in AI will continue. Steps are currently being taken to rigorously check the data that is used to train AI algorithms. I would say this is a very good start and should continue, since AI has created a cross-disciplinary impact and will continue to do so for the next 20 years or more.


AI is a huge subject that contains sub-topics and sub-fields such as Machine Learning (ML), Natural Language Processing (NLP), computer vision, robotics and autonomous driving. The basic thing to understand here is that each sub-field deals with a certain kind of data, and AI is the umbrella term for all of them.


For example, NLP is a field that specifically deals with handling large amounts of text data. Text data includes emails, survey responses, chats and documents. Similarly, computer vision deals with handling images and videos. Your iPhone's Face ID is a perfect example of computer vision technology in action.
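
To make "handling text data" a little more concrete, here is a toy sketch in Python. The sample text and the word-counting step are my own illustration, not something from the article or from any real NLP system, but turning raw text into word counts is one of the most basic steps in many NLP pipelines:

```python
# A toy first step in many NLP pipelines: turn raw text into word counts.
# The sample "survey response" below is made up for illustration.
from collections import Counter
import re

text = "Thanks for the quick reply. The demo was great and the support team was great too."

# Lowercase the text and split it into words with a very simple tokenizer.
words = re.findall(r"[a-z']+", text.lower())

# Count how often each word appears.
counts = Counter(words)
print(counts.most_common(3))  # prints the three most frequent words with their counts
```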


ML has slowly become the most popular and widely established field of AI. ML deals with one simple concept: provide some data, be it images, text or information about housing, along with the correct answers. By answers, I mean labels such as what is in the image or the price of the house. An algorithm learns the relationship between the data and the answers, and the newly trained model can then make predictions for new, unseen images, text or houses.
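
As a minimal sketch of this "data plus answers" recipe, here is what it could look like in Python. The house sizes and prices are made up, and the use of scikit-learn's LinearRegression is my own choice for illustration, not something the article specifies:

```python
# Supervised learning in miniature: house sizes are the data,
# sale prices are the "correct answers". All numbers are made up.
from sklearn.linear_model import LinearRegression

sizes = [[800], [1000], [1500], [2000], [2500]]          # square footage of a few houses
prices = [160_000, 200_000, 290_000, 390_000, 480_000]   # their matching sale prices

# Training means fitting the model to the data and its answers.
model = LinearRegression()
model.fit(sizes, prices)

# The trained model can now predict the price of a new, unseen house.
print(model.predict([[1800]]))  # an estimate for an 1,800 sq ft house
```

The same pattern holds whether the data is images, text or housing information; only the kind of model and the shape of the data change.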


Most recently, there have been many concerns about AI's predictions going wrong. The Netflix documentary Coded Bias is one such example: it shows the impact of facial recognition software misidentifying people and thereby getting innocent people into trouble. Another example is Microsoft's Twitter chatbot “Tay,” which was very exciting at first because it conversed like a human being and learned from those conversations. Once people started feeding it profanity and bad ideas, Tay began throwing out racist remarks.


I am not denying that AI does go wrong sometimes, but has anyone stopped to think about why this happens? Is AI something like Ultron from The Avengers, going off on its own against its masters? These are the first things that pop into people's heads when something goes wrong with AI, but the reality is much more complex. Humans are also responsible for the mistakes made by AI, but nobody likes to take the blame.


Humans can consume only a very limited amount of data within a given timeframe, whereas machines can consume vast amounts of data within hours; that is where computers can act as an additional brain for a human. But there is no doubt that we have not yet achieved true AI, where the machine starts behaving like a human, because our brain is still superior. By superior I mean that we do not need to sit and look through a thousand images of an apple to know what an apple looks like. We only need to see two or three apples, whereas an AI would need to be fed roughly a thousand images of apples and would still be imperfect.


Based on the examples provided above, we can clearly see that AI can make mistakes.

Can we completely blame AI, or are we, as humans, also at fault? After all, all AI is doing is trying to mimic humans by consuming large amounts of data, such as images, videos and text, and then learning how to react to new situations.


Finally, I think that we should not be afraid of AI but instead try to embrace this new technology, use it to help ourselves, and let it guide us towards the next digital-industrial revolution.

