Mohammad Raza

Artificial Intelligence: What is it really?

AI is becoming ubiquitous. It’s everywhere, from education, finance, and agriculture to medicine, music, manufacturing, and transportation, and it touches almost any industry you can think of. But how did we get here? And what exactly is AI?

What is AI?

Techopedia defines artificial intelligence as “a branch of computer science that aims to create intelligent machines.” Other definitions are similar but add another component: the simulation of human intelligence. This concept of machines that could reason like a human was a core principle of AI when the term was first coined back in 1956.

A Brief History of AI

American computer scientist John McCarthy is often called the founding father of artificial intelligence and coined the term in his proposal for the Dartmouth Conference of 1956, the first artificial intelligence conference. McCarthy believed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Although John McCarthy coined the term, the idea that machines might someday be able to think like a human predates his conference on the matter. The famous and revered scientist Alan Turing published a paper in 1950 posing the question: can machines think? It seems that as soon as people invented machines to do our work, the question of whether we could design them to simulate human thought arose along with it.

From this point until around 1974, AI was thrust into its first hype cycle and it flourished. During this period computers were becoming more powerful, faster, and more widespread, creating more opportunities to explore AI algorithms. It was an exciting time for AI, and government agencies such as the Defense Advanced Research Projects Agency (DARPA) started funding AI research.

The ambitions for AI applications at this time were extremely high and, in hindsight, not grounded in the reality of the technological capabilities of the time. In 1970, Marvin Minsky, cognitive scientist and co-founder of MIT’s AI laboratory, said: “from three to eight years we will have a machine with the general intelligence of an average human being.” This, of course, proved to be entirely unrealistic due to the limitations of computers at the time, and the excitement around AI began to taper off during the mid-1970s.

The hype for AI started up again in the 1980s, when the Japanese government heavily invested in AI expert systems by funding improvements to computer processing and logic programming. Much like the cycle before it, the ambitions set out by the project were too high and most of the goals were not realized, which led to AI falling off the map again.

AI came back stronger in the 1990s and 2000s and is still thriving today. In fact, since 2014, news coverage of AI has skyrocketed to new levels and the coverage is increasingly positive rather than focused on AI paranoia. The AI Index 2018 Annual Report found that AI coverage from 2016 to 2018 was 1.5 times more positive than it had been in previous years.

AI has been able to develop so rapidly in the last few decades because of increased computing power and the huge availability of labeled data. Labeled data plays an important role in supervised machine learning, in which the machine uses already-labeled training data to learn more about its goal, giving it a head start compared to unsupervised learning, where the AI needs to make sense of raw data on its own. For example, an AI designed to detect flowers will be better able to map out the criteria of what makes a flower if it already has a database of labeled flowers to work from.
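
To make that concrete, here is a minimal sketch of supervised learning on labeled flower data in Python. The bundled Iris dataset, the nearest-neighbors model, and the train/test split are illustrative choices, not anything the paragraph above prescribes:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Labeled data: each row holds a flower's measurements, each label its species.
X, y = load_iris(return_X_y=True)

# Hold some flowers back to check how well the learned criteria generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The classifier "maps out" what makes each species by comparing new flowers
# to the labeled examples it was trained on.
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)

print("Accuracy on unseen flowers:", model.score(X_test, y_test))
```

With labels to learn from, the model reaches high accuracy on flowers it has never seen; an unsupervised approach would first have to discover the groupings in the raw measurements by itself.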

We are now in the age of AI implementation. AI research has led to some groundbreaking discoveries in how AI can work and how it can be applied, and computer scientists all over the world are now taking that research and implementing it in ways we couldn’t envision decades ago. That isn’t to say that no AI research is happening today; it is. But the focus of modern AI is on how we can implement what we have learned so far, now that we finally have the processing power to realize the technology.

Different Branches of AI

Machine Learning (ML)

McKinsey & Co. defines machine learning as a process “based on algorithms that can learn from data without relying on rules-based programming.” Machine learning essentially trains a machine to learn, so that it can become more accurate at predicting outcomes and can process huge amounts of data without being explicitly programmed.
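
As a toy sketch of “learning from data without rules-based programming,” the snippet below fits a simple relationship from examples instead of hard-coding it. The study-hours numbers are invented purely for illustration:

```python
import numpy as np

# Rules-based programming would hard-code a formula; here the relationship is
# learned from example data instead.
hours_studied = np.array([1, 2, 3, 4, 5, 6], dtype=float)
exam_score = np.array([52, 58, 65, 71, 74, 82], dtype=float)

# Fit a straight line (slope and intercept) that best explains the examples.
slope, intercept = np.polyfit(hours_studied, exam_score, deg=1)

# The learned relationship can now predict outcomes for inputs it never saw.
print("Predicted score after 7 hours:", round(slope * 7 + intercept, 1))
```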

Natural Language Processing (NLP)

This branch of AI helps computers interpret and manipulate human languages, particularly as they are spoken. NLP has widespread use in communication. For example, you’ve probably been using NLP on your phone for a few years now through the next-word suggestion feature that most smartphone keyboards have. The children’s toy Barbie can now listen and respond to a child by sending the recorded speech to its servers, analyzing it, and creating a response, all in under a second.
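
As a toy illustration of the next-word suggestion idea, the sketch below counts which word tends to follow which (a simple bigram model). The tiny sample corpus is made up, and real keyboards use far more sophisticated models:

```python
from collections import Counter, defaultdict

# A tiny sample corpus; a phone keyboard learns from vastly more text than this.
corpus = "i love machine learning i love natural language processing i enjoy learning"

# Count which word tends to follow each word (a bigram model).
words = corpus.split()
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def suggest(word):
    """Suggest the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("i"))        # -> "love"
print(suggest("machine"))  # -> "learning"
```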

Expert Systems

Expert systems aim to simulate the decision-making process of a human expert. These systems are designed to be highly responsive, reliable, and understandable when assisting human decision making.
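
A minimal sketch of the if-then, rule-based idea behind expert systems; the symptom rules and facts below are invented purely for illustration:

```python
# Each rule pairs a set of required facts with a conclusion, mimicking the way
# an expert system encodes a specialist's "if-then" knowledge.
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def infer(observed_facts):
    """Return every conclusion whose conditions all appear in the observed facts."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= observed_facts]

print(infer({"fever", "cough", "headache"}))  # -> ['possible flu']
```

Real expert systems chain many such rules together and can explain which ones fired, which is part of what makes them understandable when assisting human decision making.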

Vision and Speech

Vision AI systems can analyze raw visual inputs and understand the data. For example, eBay now has a feature where you can use an image to search for similar products on its marketplace. Speech AI is concerned with text-to-speech applications, which read digital text out loud in a way that sounds natural.
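
Image search of that kind usually boils down to comparing feature vectors extracted from images. The rough sketch below assumes the vectors have already been produced by some vision model; the catalog items and numbers are made up for illustration:

```python
import numpy as np

# Pretend feature vectors for catalog images, e.g. produced by a vision model.
catalog = {
    "red sneakers":       np.array([0.9, 0.1, 0.2]),
    "blue backpack":      np.array([0.1, 0.9, 0.3]),
    "green water bottle": np.array([0.2, 0.3, 0.9]),
}

def most_similar(query_vector):
    """Return the catalog item whose vector points in the most similar direction."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(catalog, key=lambda name: cosine(query_vector, catalog[name]))

# A query image whose features resemble the red sneakers.
print(most_similar(np.array([0.8, 0.2, 0.25])))  # -> "red sneakers"
```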

Robotics

It used to be the case that robots were simply programmed to carry out a specific and repetitive set of movements, but with AI we can now allow robots to perform more diverse and complex tasks. They are intelligent robots.
