
Is Artificial Intelligence Going to Take Over the World?

July 6, 2020 6:05 am



By: Simon Polak, Chief Scientist at viisights

 “The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown”

― H.P. Lovecraft, Supernatural Horror in Literature.

In recent years, as AI has become so popular (maybe too popular) and started to change our lives in every possible way, many prominent voices have warned about the dangers of AI.

Tesla and SpaceX CEO Elon Musk said that “AI is far more dangerous than nukes” and that AI represents “vastly more risk than North Korea.” Stephen Hawking said that AI could be the “worst event in the history of our civilization.” Musk and Hawking, along with other prominent researchers from MIT, Google and other institutions, signed a letter urging research not only into the benefits of AI but also into its impact on society, including dealing with the impact of AI on employment and ensuring the ethical behavior of autonomous machines and weapons.

In my opinion, while the dangers do exist, they are a bit exaggerated. Let me tell you my reasons.

Two types of AI: Strong and Weak

First of all, when talking about AI we have to distinguish between two types of AI: strong (or general) AI and weak (or applicative) AI. The former, in simple words, is a machine that is able to perform all the cognitive tasks a human can and to integrate them to solve real-life problems. The latter is a machine that is able to perform one very specific task usually associated with humans, such as playing chess, recognizing faces, holding a conversation, or making medical diagnoses.

Weak AI is Great!

Weak AI is already here, and I am sure everyone has heard of it: it is helping us diagnose cancer, drive cars, secure our computers and much more. Here at viisights, we develop a unique video understanding technology that enables automatic recognition of violent, abnormal, or predefined behavior of interest in video streams coming from the millions of cameras already installed in our cities. This AI technology makes our cities smarter and safer.
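
For readers curious what such a system looks like in code, here is a minimal sketch of generic video action recognition. It is emphatically not viisights’ technology, just a publicly available pretrained 3D convolutional network (torchvision’s r3d_18, trained on the Kinetics-400 dataset) classifying a short clip; it assumes a recent torchvision (0.13 or later):

```python
# A generic illustration of video action recognition with a public,
# pretrained 3D-CNN (NOT viisights' system): torchvision's r3d_18,
# trained on the Kinetics-400 action-classification dataset.
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights="DEFAULT")  # assumes torchvision >= 0.13
model.eval()

# Video models take a tensor of shape (batch, channels, frames, height, width).
# Random pixels stand in here for 16 real frames from a camera stream.
clip = torch.rand(1, 3, 16, 112, 112)

with torch.no_grad():
    logits = model(clip)           # one score per Kinetics-400 action class
    probs = logits.softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"predicted class {top_class.item()} with probability {top_prob.item():.3f}")
```

In a real deployment the clip would come from decoding a live camera stream, and the set of recognized classes would be tailored to behaviors of interest rather than Kinetics’ generic actions.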

Weak AI is just a smart tool, and it is only as dangerous as the person using it. Like every other advanced technology, it comes with great benefits and dangers, and any argument about the benefits and dangers of advanced technology applies here: some jobs and even professions will be lost and new ones will appear, there will be misuses, there will be military applications, and our lives will be changed by this technology. But, as with most other advanced technologies developed and adopted over the last couple of hundred years, the benefits outweigh the dangers, and the world is more enjoyable with it.


Is strong AI dangerous?

The dangers of strong AI are, of course, much greater and much more speculative, since it does not exist (and most probably will not exist in the near future).

Arguments for “dangerous AI” usually fall into one of two categories: either we will create a maniacal, misanthropic AI whose first (or second) act after birth will be to destroy humanity, or we will create a good-natured AI which, as a byproduct of solving some problem (for example, finding a cure for cancer), arrives at the conclusion that the best solution is to kill everybody (no humans, no cancer). If you analyze these two arguments carefully, both of them sound a bit childish.

The main question we need to ask is: “How will strong AI be created?”

Scenario 1

We will create strong AI by chance, for example, by connecting a lot of computers/processors. This scenario is indeed dangerous, since there would be no understanding of how such a machine works or what its goals are, and anything powerful and unknown is dangerous. But the chances that a random connection of many computers will become intelligent are very low. One reason is that experiments with very deep neural networks show that beyond a certain depth there is no advantage (and actually a disadvantage) to deeper networks. Another is that the human brain 100 thousand years ago was approximately the same size as our brains today, yet humans “became smart” only 70 thousand years ago, after some minor changes to the brain’s architecture. Thus, architecture is more important than size, and the chance of correctly connecting ~100 billion neurons (as in a human brain) by mere chance is close to zero.
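
As a toy illustration of the first reason (this is not the experiments alluded to above, just an assumed minimal setup), the following sketch shows how the training signal reaching the first layer of a plain, non-residual network collapses as depth grows:

```python
# A toy sketch of the vanishing-gradient effect in plain (non-residual)
# networks: the gradient reaching the first layer shrinks as depth grows,
# which is one reason naively stacking more layers stops helping.
import torch
import torch.nn as nn

def first_layer_grad_norm(depth, width=64):
    torch.manual_seed(0)
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.Tanh()]
    net = nn.Sequential(*layers)

    x = torch.randn(8, width)
    net(x).sum().backward()          # backprop a dummy scalar loss
    return net[0].weight.grad.norm().item()

for depth in (2, 8, 32, 64):
    print(f"depth {depth:3d}: first-layer gradient norm = "
          f"{first_layer_grad_norm(depth):.2e}")
```

With default initialization the printed gradient norm drops by many orders of magnitude between depth 2 and depth 64, which is the practical sense in which “deeper” stops paying off without deliberate architectural changes such as residual connections.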

Scenario 2

We will create strong AI by design, i.e. we will understand how it works and will know how to influence its goals. In this case it is hard to imagine that a group of scientists would be smart enough to create such a complex machine, yet at the same time so shortsighted as to forget to constrain its behavior so that it is not dangerous to humans. Safety for humans will surely be a major part of the design of strong AI, and, since it is relatively easy to implement, it will clearly not be omitted.

So what is ahead of us?

In my opinion, the most probable case is that strong AI will be created gradually (as everything in very complex technology is): first AI with the intelligence of a mouse, then a monkey, then an ape, then a human. AI with the intelligence of a mouse cannot make itself super-intelligent (for example, by learning from data on the Internet) because it lacks some core cognitive capabilities, just as a mouse cannot become more intelligent if you put it in front of a computer with an Internet connection.

So there will be enough time for experimenting, for designing good behavioral constraints, and for slowing down the process if AI’s behavior becomes dangerous. For now, we should benefit from what weak AI can provide and enjoy it.
