The simple 5-minute explanation of Artificial Intelligence and Machine Learning

What are Artificial Intelligence and Machine Learning?

Artificial Intelligence is a technology where a computer aims to perform tasks that otherwise a human would need to do. What is considered to be Artificial Intelligence changes over time – what was considered AI in 1980 is no longer considered AI today (watch the video for Dan Faggella’s brilliant explanation).
Machine learning is a subset of AI where humans provide a computer with vast amounts of information and give it a task to perform. The machine automatically learns and improves its performance in completing the task from experience, without being explicitly programmed, even finding new and better ways to complete the task that a human would not have thought of or predicted.
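That idea – give the machine labeled examples and let it work out the distinctions itself – can be sketched in a few lines of code. This is a minimal, illustrative toy: a nearest-centroid classifier with invented feature values and the "fraud / no fraud" labels Dan uses later in the interview. It is not how production fraud detection works; it just shows learning from labeled data rather than hand-written rules.

```python
# A toy "learn from labeled examples" sketch: a nearest-centroid classifier.
# All numbers and labels below are invented for illustration.

def train(examples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Labeled examples: (feature vector, label). We never write down the
# rule separating the classes -- the machine derives it from the data.
training_data = [
    ([1.0, 1.2], "no fraud"), ([0.9, 1.0], "no fraud"),
    ([5.0, 4.8], "fraud"),    ([5.2, 5.1], "fraud"),
]

model = train(training_data)
print(predict(model, [4.9, 5.0]))  # classify a new, unseen transaction
```

Notice that nowhere did we tell the machine what makes a transaction fraudulent; we only said "these are, these aren't", exactly as described in the interview below.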

Video Transcript: The simple 5-minute explanation of Artificial Intelligence and Machine Learning

Jason Barnard: Hi, Daniel, lovely to meet you. Dave Davis introduced me to you and told me that you know loads about machine learning and AI.

Dan Faggella: That’s right. Or at least that’s what Dave told ya! He’s a nice guy for leading me on that.

Jason Barnard: He said you were the best person he knew for answering the questions: what is the difference between AI and ML, and why do people say AI when they mean ML? And that’s the crux of the question. We’ve got maybe five minutes. Can you put them in two boxes so it’s really clear for me?

Dan Faggella: I’m happy to! And actually, oftentimes the conflation of the two really isn’t all that consequential. Sometimes it is, sometimes it’s not, but let’s go ahead and talk about it. So, artificial intelligence, generally speaking, can be seen as a broad umbrella within the even broader umbrella of computer science.

A really rough definition of AI is simply: instances of a machine doing things that otherwise a human would be needed to do. It might just involve a very complicated “if / then” type scenario. So back in the eighties – and to some degree this spills over into the early nineties – there was a phase of expert systems, where developers would speak to expert human beings about a given topic.

For example, how to store certain items in a logistics or warehouse environment, or how to categorize molecules in a life sciences environment… whatever the case may be. Then a machine would figure out all the relationships and scenarios of different inputs – figure out where to categorize the molecules, or what to do with them, or whatever was needed.

And there are a number of other instances – early chess-playing algorithms, for example.

Jason Barnard: I understand that Deep Blue, which beat Garry Kasparov – I used to play lots of chess, so I know a little about that – wasn’t so much AI as brute-force calculation. Am I wrong?

Dan Faggella: Yeah. It is just that things aren’t called AI after we know how they work and we don’t think they’re cool anymore. Nobody at the time – or rather, very few people at the time – would have said that it was not Artificial Intelligence.

I mean, even with GPT-3 – for people who haven’t heard of it, just Google “GPT hyphen three” – even with amazing NLP breakthroughs of that kind, you’re going to have some cohort of people, even now in the modern day, who say “that’s not even Machine Learning, that’s not even AI”, or something like that.

So people will discourage the use of the term once it’s understood. That’s an infamous thing about Artificial Intelligence: as soon as we understand it, or as soon as it’s par for the course in terms of regular applications in business or consumer life, we just don’t refer to it that way.

But roughly speaking – really rough box, we’ve only got five minutes – we get a bunch of human inputs about “if / then” scenarios and about how the world works. We sort of build that Pachinko machine, and then we can drop new items in the top, and it can do important things with those items, like a human would. That’s because we have structured that Pachinko machine very well, so each item drops into the right bucket and the machine makes the right decision.

Jason Barnard: So Artificial Intelligence is basically teaching the machine to do something that otherwise we would have to do. It’s the machine replacing the human being.

Dan Faggella: Yeah, roughly speaking. The old approach, “back in the day AI”, would have been pretty thoroughly preprogrammed to the hilt, more or less. There are some exceptions – certainly not every move Deep Blue ever made had been made by human beings previously – but there was some set of rules that would allow it to act in a way that appeared intelligent.

Roughly speaking, the transition to what we’re doing now with Machine Learning is that we’re essentially taking real instances in the world. So let’s just say we’re talking about diagnosing lung cancer. We’re taking oodles and oodles of labeled images of lung cancer and not lung cancer. Medical imagery.

And we’re just feeding it raw into a system, and we’re not necessarily telling the machine per se: “Here are all the rules. If it’s this big or this size, then you have to label it as lung cancer.” We’re just saying, “These are. These aren’t. Now you figure out the distinctions. You figure out the subtle, nuanced patterns that ultimately make one thing cancerous and one thing not. Is one transaction fraudulent, or likely to be fraudulent, and another transaction not?”

Just oodles of data at the front end. All it knows is basic labels: yes cancer, no cancer; yes fraud, no fraud. This is a very rough example for you.

And then it’s going to coax out a bunch of subtle distinctions that humans never pre-programmed. It may figure out something about lung cancer in terms of location of the growth, or in terms of some particular pattern of how to look for lung cancer within people of different ages, for example. 

And here, human beings have not directly programmed those distinctions. The hard part is that it’s often tough to reel out those rules – in other words, to ask the machine, “why did you make that distinction?”

Often asking that question is rather challenging, but the benefit is that we can now get a subtler discernment from these machines – the machine is the one that is ultimately filtering reality. Instead of us doing all the pre-filtering, it ends up coming up with the distinctions itself. That can be a black box, but it opens up a lot of capability.

Jason Barnard: So the machine is pushing what we can actually do much further because it’s got more power. We’re giving it the information, we’re giving it the basic examples, and then it’s pushing things further, finding clearer distinctions and creating – in the example of SEO – better results.

And that was the whole aim of the machine learning team at Google, who were trying to build an algorithm that worked better than the human-programmed one. They couldn’t do it, and then in 2015 they suddenly managed to make that step forward with Machine Learning. Is that a fair assessment?

Dan Faggella: Yeah. So you want to just go ahead and talk about where this applies to search here?

Jason Barnard: Oh no! I was just bringing it all into search, saying “in search we’re talking about machine learning, not artificial intelligence”. Is that fair?

Dan Faggella: I’d say so. My guess is “yes”. I don’t know every algorithm in the back-end of Google or Bing. Suffice it to say, Google could be said to be the best known example of machine learning that all of us use on a super regular basis. And we take it almost wholeheartedly for granted. So we can go a little bit into how ML is likely used in these systems.

Jason Barnard: I suggest we do that in the next interview. This interview is going to be short – it is meant to be a simple introduction, a quick discussion. There is going to be a longer interview on Search Engine Journal that you’ll be able to watch (watch that here).

I would like to ask just one last thing on this. Google uses Machine Learning an awful lot, but are they actually any good at it compared to other big companies?

Dan Faggella: Without a doubt, Google is in the top three or four firms globally in terms of ML talent and in terms of raw Machine Learning capability. Now, are there other moonshot companies, around self-driving cars etcetera, who do it better? And are they going to break even or not? I mean, I don’t know – my crystal ball hasn’t been working super well 😉

But I can tell you that in terms of raw talent and in terms of putting Machine Learning to use with consumers, very few companies are even remotely close to Google’s level. Google are undeniably at the top of the heap. 

Jason Barnard: Brilliant stuff. Thank you very much, Dan. That was absolutely brilliant.

As I said, we’re going to do another interview and you will be able to watch that right after this one (watch that here). Please do watch if you’re interested in digging down more into how ML applies in search, according to Dan – we’re going to speculate a little bit about that, which should be a lot of fun. Thank you.

Additional reading

Dave Davis wrote a great article – read that here >>

By Jason BARNARD

Jason Barnard has over 2 decades of experience in digital marketing.

He currently teaches Brand SERP optimisation to students at Kalicube.pro and writes regularly for leading marketing publications such as Search Engine Journal, SEMrush, OnCrawl and Searchmetrics, as well as appearing regularly on digital marketing webinars and speaking at major conferences around the world, such as BrightonSEO, PubCon, SMX London and YoastCon.