Information Age features BoldIQ on AI
True AI doesn’t exist yet…it’s augmented intelligence
With all the hype surrounding artificial intelligence, it is easy to forget that true AI doesn’t exist yet. Instead, companies are leveraging augmented intelligence. By Nick Ismail, Information Age
Artificial intelligence is easily one of the most popular buzzwords this year, even sparking heated debates between prominent tech moguls Musk and Zuckerberg on the future of AI.
However, one major oversight persists – true artificial intelligence does not exist and will not exist for at least another decade, according to BoldIQ CEO Roei Ganzarski.
Instead, he suggests, tech companies claiming to do ‘AI’ actually provide only augmented intelligence.
While many companies claim to provide “AI-driven” solutions, in reality they’re leveraging machine learning techniques at best, developing what Ganzarski refers to as augmented intelligence.
In fact, IBM – which arguably created the first modern AI technology with Watson – agrees with this definition, and believes today’s technologies are more data-driven than ever but are not yet advanced enough to think for themselves, which is how true AI is defined.
While HBO’s Westworld gives us a glimpse at a future where human-like, self-thinking androids are the norm, people will have to wait several years while the technology evolves from interpreting predetermined scenarios based on a library of data into an intelligent bot that formulates its own ideas based on morals and rationale – essentially ‘thinking’ for itself.
In an interview with Information Age, Ganzarski discussed how he thinks this gap between augmented intelligence and artificial intelligence will be bridged, how long that will take, and what the future of AI holds.
There’s currently a lot of hype around artificial intelligence in nearly every industry, but you claim it doesn’t yet exist. If we aren’t experiencing true AI, what are we seeing instead?
Artificial intelligence is easily one of the most popular buzzwords this year, but in my opinion, it does not yet exist and will not exist for some time.
Instead, tech companies claiming to do ‘AI’ are actually providing what I would define as augmented intelligence – very sophisticated, fast decision processing or decision supporting software based on real-time scenarios.
However, even these split-second, computer-driven decisions are based on highly evolved algorithms that were programmed into the software by a human.
These “AI-driven” solutions are indeed leveraging advanced technologies, but I would contend that this is not yet true artificial intelligence.
To understand my opinion, we should look at how AI is defined.
In the simplest of language, AI is a computer (software, robot, call it whatever you will) that has the ability to do things only a human can do, and use the same level of logic and reasoning that a human would. This last part is the key.
Some could argue that robots that simply do what humans do already exist – robots that can build cars or even do advanced computations – and might even perform as well as, if not better, faster, and more consistently than, humans.
However, they currently do this following a set of orders, not using reasoning or logic of their own making. Even those using machine learning tell the computer what to make of data that is compiled and how to “learn”.
In fact, as far back as the 1956 Dartmouth Artificial Intelligence (AI) Conference, J. McCarthy defined the study of the new field of AI as follows: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
Let’s look at a few examples where human reasoning is critical in decision making and has yet to be imitated by a computer:
• From the simple: Planes from any given airline should be given 45 minutes on the ground between flights to unload and load baggage and passengers. However, imagine if a certain flight had to shave off a single minute to avoid projected delays. Is planning for 44 minutes okay? How about adding a minute instead? Will the machine be able to break the ‘rule’ of 45 minutes? Is it taking into account the staff on shift that day and their willingness to “hustle” to get things done if asked?
The machine, since it may not be programmed to account for such specific scenarios, will likely not have all the information that a human shift manager might know or take into consideration.
• To the more complex: A driverless car is faced with hitting a wall, risking its passengers’ lives, or hitting a group of pedestrians crossing the street. We can program the software powering the car to react a certain way, but ultimately the software has no capability to consider the moral issues (e.g. who is riding in my car, who are the people on the street, how old are they, what is the best-case scenario) that any driver would consider if ever faced with the situation.
With these examples in mind, we can conclude that we have yet to reach a reality where computers possess rational reasoning abilities and can be deemed artificial intelligence.
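Both examples reduce to the same limitation: the machine applies rules a human wrote and cannot reason outside them. A minimal sketch in Python (hypothetical code, not any vendor’s actual software) makes the point concrete:

```python
# Hypothetical sketches of rule-bound decisions: the machine can only
# apply what a human programmed in; it cannot bend a rule or weigh morals.

# Example 1: the 45-minute turnaround rule. The scheduler rejects 44
# minutes unconditionally; a human shift manager might approve it after
# weighing staffing and the crew's willingness to hustle.
MIN_TURNAROUND_MINUTES = 45  # rule supplied by a human planner

def can_schedule(gap_minutes: int) -> bool:
    return gap_minutes >= MIN_TURNAROUND_MINUTES

# Example 2: a driverless car's pre-programmed collision response. At
# runtime it performs a table lookup; no moral deliberation occurs,
# because who the passengers or pedestrians are was never encoded.
COLLISION_POLICY = {
    ("wall", "pedestrians"): "swerve_toward_wall",  # a programmer's choice, frozen in code
}

def react(obstacles: tuple) -> str:
    return COLLISION_POLICY.get(obstacles, "emergency_brake")

print(can_schedule(45))                # True
print(can_schedule(44))                # False
print(react(("wall", "pedestrians")))  # swerve_toward_wall
```

In both cases the “intelligence” is frozen at programming time; nothing in the software can originate the judgement a human would bring to an unanticipated situation.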
Can you share a few examples of how augmented intelligence is being used today?
Augmented intelligence is being utilised in nearly every industry, from marketing technology to self-driving cars. Nearly every time you read about ‘AI’ capabilities in the news, it’s actually augmented intelligence.
Think about the software that decides which phone operator is best to answer an incoming call at a call centre in order to increase the likelihood of a positive call outcome, or the program that decides which traffic lights to change, when and in what order, to decrease the likelihood of a traffic jam. Both of these are augmented intelligence.
Even the software that tailors which advertisement pops up when you’re browsing websites, to increase the likelihood of you making a purchase, is powered by augmented intelligence. This technology is integrated into almost every aspect of our daily lives, whether we recognise it or not.
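The call-routing example above is exactly the kind of “fast decision-supporting software” described earlier: it optimises within criteria a human supplied. A hypothetical sketch (illustrative only, not any real call-centre product):

```python
# Score each available operator against an incoming call and pick the
# best match. The scoring weights are chosen by humans; the software
# optimises within them but does not reason beyond them.

operators = [
    {"name": "Ana",  "skill_match": 0.9, "avg_handle_time_min": 6},
    {"name": "Ben",  "skill_match": 0.7, "avg_handle_time_min": 4},
    {"name": "Cara", "skill_match": 0.5, "avg_handle_time_min": 3},
]

def score(op: dict) -> float:
    # Human-chosen trade-off: favour skill match, penalise slow handling.
    return op["skill_match"] - 0.05 * op["avg_handle_time_min"]

best = max(operators, key=score)
print(best["name"])  # Ana
```

The decision is sophisticated and fast, but the trade-off itself (how much skill match outweighs handling time) was set by a person, which is what keeps it in the augmented- rather than artificial-intelligence category.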
How will companies eventually bridge the gap from augmented intelligence to artificial intelligence? How long will it take?
As I stated above, true artificial intelligence will not exist until technology begins to think for itself, but that is only the baseline. As humans, our decisions and our knowledge of what is right versus wrong, logical versus not, or even worth it or not, are guided by three characteristics: morals, ethics, and logic – perhaps the combination can be defined as human reasoning.
While logic at a surface level can be programmed into a machine (i.e. if this happens, this is the appropriate response), it is also deeply rooted in morals and ethics which are learned and instilled in humans throughout their lifetime, and throughout generations. Until that threshold can be met, we will be living in the era of advanced machine learning and augmented intelligence.
Are there any real-world applications we should look forward to?
This has a simple answer. If and when AI is achieved, there will not be an industry or market it is not applied to.
Is a Westworld-style AI takeover a possibility in the near future?
While HBO’s Westworld gives us a glimpse at a future where human-like, self-thinking androids are the norm, the truth is we are far from this reality and will not begin to see a world like this for decades – if it even happens at all.
Speculation about artificial bots that can ‘think’ for themselves and live amongst us is what many people imagine the future of AI will look like, but we’ll likely first see these technologies implemented in applications that enhance our everyday lives, like autonomous vehicles, customer service, and much more.
However, when we do start to see true artificial intelligence take form, we need to have regulations set in place.
In my opinion, if true AI is created – meaning computers that can in fact think and behave like humans do – then why would we not expect them to behave like humans? For example: aid, work, achieve… and yes, argue, fight, be violent, and kill. If this is the reasoning that propels our policies, then it becomes much simpler to decide what policies should be put in place.