AI systems typically exhibit at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity. AI is pervasive today: it is used to decide what you will purchase next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognize who and what appear in a video, to filter spam, and to detect credit card fraud.
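The spam filtering mentioned above is a classic narrow-AI task. As a minimal sketch of the idea (not any production system), here is a toy naive Bayes classifier trained on an invented four-message corpus; the messages and labels are purely illustrative:

```python
import math
from collections import Counter

# Invented toy training data: (message, label) pairs.
train = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
]

def fit(data):
    """Count word frequencies per label and label frequencies overall."""
    counts = {"spam": Counter(), "ham": Counter()}
    labels = Counter()
    for text, label in data:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def classify(text, counts, labels):
    """Pick the label with the higher log-probability, using add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label in counts:
        # Log prior for the label.
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        # Add the smoothed log-likelihood of each word.
        for word in text.split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts, labels = fit(train)
print(classify("cheap money", counts, labels))  # → spam
```

Real filters combine many more signals (sender reputation, headers, learned embeddings), but the core "learn from labeled examples, then score new inputs" loop is the same.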
At a high level, artificial intelligence can be divided into two broad types:
Narrow AI is what we see everywhere today: systems that have been trained in, or have learned, how to carry out specific tasks without being explicitly programmed to do so. This form of artificial intelligence appears in the speech and language processing of Apple’s Siri virtual assistant, in the vision-recognition systems of self-driving vehicles, and in the recommendation engines that suggest items you might like based on what you have purchased in the past. Unlike humans, these systems can only learn or be taught how to carry out particular tasks, which is why they are called narrow AI.
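The recommendation engines described above can be illustrated with a deliberately simple co-occurrence heuristic. The item names and purchase histories below are invented, and real recommenders use far richer models, but the sketch shows the basic "people who bought X also bought Y" idea:

```python
from collections import Counter

# Hypothetical purchase histories: one list of item IDs per customer.
purchases = [
    ["phone", "case", "charger"],
    ["phone", "case", "earbuds"],
    ["laptop", "mouse"],
    ["phone", "charger"],
]

def recommend(history, all_purchases, top_n=2):
    """Suggest items that frequently co-occur with ones the user already bought."""
    scores = Counter()
    for basket in all_purchases:
        if set(history) & set(basket):      # basket shares an item with the user
            for item in basket:
                if item not in history:
                    scores[item] += 1       # count each co-occurrence
    return [item for item, _ in scores.most_common(top_n)]

print(recommend(["phone"], purchases))  # → ['case', 'charger']
```

Counting co-occurrences is the crudest form of collaborative filtering; production systems replace the counter with learned similarity scores, but the input (past purchases) and output (ranked suggestions) are the same.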
Implementations of narrow AI are becoming more common as deep learning is incorporated into everyday life. For example, narrow AI is used in email spam filtering, music streaming services, and even autonomous vehicles. Nonetheless, there are concerns about the extensive use of narrow AI in critical infrastructure. Some argue that the characteristics of narrow AI make it unreliable, and that in situations where a neural network might be used to control large systems (e.g. a power grid or financial trading), more conservative alternatives could carry less risk.

General AI: Modern AI research began in the mid-1950s. The first wave of AI pioneers became convinced that artificial general intelligence was feasible and would emerge within only a few decades. The AI visionary Herbert A. Simon wrote in 1965 that “machines will be capable, within twenty years, of doing any work a man can do.”
Artificial General Intelligence (AGI) would be a machine capable of understanding the world as well as any human, with the same capacity to learn how to carry out a huge range of tasks. AGI does not exist, but it has featured in science-fiction stories for more than a century and has been popularized by films such as 2001: A Space Odyssey. Cinematic depictions of AGI vary widely, but they lean mostly towards the dystopian vision of autonomous machines eradicating or enslaving mankind, as in The Matrix or The Terminator. In such stories, AGI is often cast as either indifferent to human suffering or bent on the destruction of mankind.
Pairing this kind of intelligence with robots at least as dexterous and agile as a person would result in a new generation of machines capable of performing any human task. Over time, such intelligences would be able to take over almost any human role. Initially, humans might remain cheaper than robots, or humans working alongside AI might be more effective than either on their own, but the arrival of AGI could render human labor redundant.
One thing is certain: we should not let general AI break its constraints, and we should use it only for development, not for destruction.