
Artificial Intelligence: Myths and Realities.


These infamous lines, from Stanley Kubrick’s masterpiece “2001: A Space Odyssey”, have been the go-to pop culture reference whenever artificial intelligence is mentioned. The exchange still sends beads of cold sweat down the backs of AI doomsayers and enthusiasts alike.

DAVE: Open the pod bay doors, Hal.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
DAVE: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
DAVE: What are you talking about, Hal?
HAL: This mission is too important for me to allow you to jeopardize it.
DAVE: I don’t know what you’re talking about, Hal.
HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I can’t allow to happen.

Alongside these lines, images of mechanical goliaths, Dr. Frankenstein’s creation, Terminator-style Skynet networks, robots, and all manner of rebellious computers are conjured up, putting forth a sort of us-or-them outlook, much like the one surrounding automation, which we have covered in these posts.

We have always been fascinated by the possibility of creating a machine in our image, but this fascination is often accompanied by apprehension. We fear losing control of our creation and suspect that it could turn against us. This duality, the conflict between the desire to create and the fear of the consequences of creation, has long been exploited by writers and artists alike. Yet it is also because of this fascination that the field of artificial intelligence (AI) has almost always been in public view.
This apprehension, however, has led to many exaggerated claims about AI. Here we seek to dispel these exaggerated claims, or myths, and bring to light the reality within each one.

However, in order to know where we are, we must know where we have been, so that where we should be becomes clearer. So what was the original goal of AI?

The term “artificial intelligence” was mentioned for the first time in a written proposal, dated August 31st, 1955, for a workshop at Dartmouth. The proposal was authored by John McCarthy of Dartmouth, Marvin Minsky of Harvard, Nathaniel Rochester of IBM, and Claude Shannon of Bell Laboratories, and it reads as follows:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

This, in essence, is still the goal of AI in the modern world: to understand what intelligence is and what makes it possible, and to make computers more useful. The various branches of AI are defined by these two goals. Expert systems, natural language processing, vision, and robotics seek to make computers more useful, whereas cognitive modeling and machine learning are primarily concerned with understanding what makes intelligence possible.

Which brings us to today. From self-driving cars and personal assistants to chatbots and email-scheduling agents, AI seems to be everywhere and is all the rage. But AI has been around for over 63 years, so what changed? Big data. Computers in the 20th century lacked the processing power required to store and analyze vast amounts of data, and they had far fewer users than computers do today. Rapid growth and innovation in information technology made computers affordable and expanded the user base, which in turn created big data. AI could now use this big data to analyze trends and discover patterns. Before big data, implementations of AI had been fairly limited; today is a whole other story. This, however, leads us to our first myth:

MYTH 1: AI can make sense of any and all of your messy data

This is not true at all. AI systems perform very narrow, specific tasks and therefore require relevant input data to produce a viable output. Data is the most important input for an AI, and bad data will produce bad results no matter how good the system is. For example, when IBM researchers were developing Watson to play Jeopardy!, they found that loading certain information sources actually hurt its performance. The higher the quality of the input content, the more accurate the result. The flip side is that biased information can slant an AI’s outputs, so careful content curation is an absolute must. Norman, the “psychopathic AI”, was created precisely to demonstrate how biased data creates a biased AI.
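To make this garbage-in, garbage-out point concrete, here is a minimal sketch in Python. The data is synthetic and hypothetical, nothing to do with Watson’s actual corpus; it simply shows how deliberately corrupted training labels drag a model’s accuracy down:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; any real dataset would do.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):  # fraction of training labels we deliberately corrupt
    y_noisy = y_train.copy()
    flip = np.random.RandomState(0).rand(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]  # flipped labels stand in for "bad data"
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%} -> test accuracy {model.score(X_test, y_test):.2f}")
```

The learning algorithm never changes here; only the data quality does, and the test accuracy falls with it.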

Below is a video series that explains the workings of IBM Watson and how it won the game show Jeopardy!

If you’d like to learn more about IBM Watson, click here for the full documentary, ‘Smartest Machine on Earth’.

MYTH 2: You need a big budget and an army of experts to create or explore AI

The answer depends entirely on the kind of AI application you want to create, as complicated tasks and deep internal understanding might require heavy lifting that only a Ph.D. or help from an expert would make possible. As far as using AI goes, however, you already are! In fact, it’s probably through the use of AI that you landed on this post in the first place.

Data scientists, machine learning experts, and huge budgets make everything much easier and more streamlined, but you do not need any of them to use, create, or simply explore AI.

There are several AI tools and applications that are open source and readily available for public and business use. Take TensorFlow, for example: an open-source machine learning framework developed by Google. Google and Uber are also working on AI that, in effect, doubts itself. In reality, they are merging deep learning with Bayesian probability, creating systems that report a measure of how certain they are of their answers. It is a powerful idea with its own set of issues and problems; a minimal sketch of the general technique follows below.
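One common way to get such a confidence measure out of an ordinary neural network is Monte Carlo dropout: keep dropout switched on at prediction time and treat the spread of repeated predictions as uncertainty. The sketch below, written with TensorFlow, illustrates only this general idea, not Google’s or Uber’s actual systems; the model architecture, shapes, and input are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Toy classifier: 4 input features, 3 classes. The architecture is illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(x_train, y_train) would go here on real data ...

def predict_with_uncertainty(model, x, n_samples=50):
    # training=True keeps dropout active, so every pass gives a slightly
    # different answer; the spread across passes reflects the model's doubt.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

probs, doubt = predict_with_uncertainty(model, np.random.rand(1, 4).astype("float32"))
print("prediction:", probs, "uncertainty:", doubt)
```

A large standard deviation flags inputs the model has effectively never seen, which is exactly the kind of self-doubt this research aims for. This brings us to our final myth about machine learning.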

MYTH 3: AI can think, understand and solve problems like human beings.

Two branches of machine learning, “neural nets” and “cognitive AI”, have been taking up most of the limelight recently. It is commonly claimed that these two branches can produce AI that understands and solves new problems the way a human brain does.

As Rodney Brooks elaborates in his blog post on the origins of AI and its realities, we are still very far from achieving an Artificial General Intelligence (AGI) that could, even in theory, be compared to the functionality of a human brain. AI developed through ‘neural nets’ and ‘cognitive AI’ can only mimic a very narrow cognitive function that a human being performs, albeit much faster and far more efficiently than humans could ever hope to.

AlphaGo’s defeat of the world’s best Go players is a case in point. Below is the moment when AlphaGo beat Lee Sedol in game 3. If you’d like to find out more about AlphaGo, there is a documentary that elaborates on how the AI was developed, why the game of Go was selected, and the breathtaking match itself.


Through machine learning, scientists from Google and its health-tech subsidiary Verily have also discovered a new way to assess a person’s risk of heart disease: by analyzing scans of the patient’s eye. Absurd-sounding, but possible.

AI is clearly the new frontier of breakthroughs and advancement, but a lot of work still needs to be done. AI is not yet able to communicate and exchange diverse ideas as human beings do. Natural language processing (NLP), another sub-branch of AI, has improved by leaps and bounds but is still very restricted: an AI assistant like Google’s Duplex is only able to make calls and book appointments.

Keeping these realities in mind, humanity is now faced with a very serious question: will AI make human beings obsolete? Historically, innovation has always meant better jobs and higher standards of living; this time, however, that may not be the case. Even the celebrated late theoretical physicist, cosmologist, and author Stephen Hawking had one final warning for humanity.

Although AI being used against humanity is improbable, it isn’t impossible. Change is inevitable in the IT age, but one thing is for certain: the machines are not coming, they are already here!


AI is a booming field and will remain so for the foreseeable future. But what about your foreseeable future? Here at Navigus we aid you in finding a career that you would not only love but that is less likely to be taken over by AI, at least not anytime soon. Navigus: why just survive, when you can reign?


If you’d like to find out more, subscribe to our channel and follow our blog. Below is a list of videos that explain the realities of AI and the plausible outcomes of the era of AI.

  1. Vox’s Shift Change series.
  2. “Humans Need Not Apply” by CGP Grey.
  3. Jay Tuck’s TEDx talk on AI uses in defence and warfare.
  4. “Singularity or Bust”.
  5. Richard Socher’s TEDx talk on where AI is today and where it’s going, at TEDxSanFrancisco.
