Big Ideas: The Potential Problems With AI

March 14, 2016 | ProgressTH

Artificial intelligence (AI) is, simply put, intelligence exhibited by software and machines. Intelligence itself can be defined as the ability to learn and solve problems. In nature, evolution has endowed many species with intelligence, and human beings in particular with a formidable ability to learn and solve problems.

IBM's Watson can be posed questions in natural human language, which it answers by "reading" amassed knowledge such as encyclopedias.

Human intelligence has allowed our species to diverge from evolutionary and natural environmental constraints, giving us mastery, for better or worse, over the planet and all other life upon it. We have done this through technology, from the simplest forms of tool-making to the most complex machines we use to ply the seas, skies, and even outer space.

Our natural, human intelligence has given rise to exponential technological progress, and amid that progress, we have begun to create through computer science an artificial intelligence unconfined by natural evolution and biological limitations, and thus able to develop exponentially faster than our own intelligence has.

An example of where AI is today is IBM's Watson computer. It was able to answer questions posed in natural language using four terabytes of content stored in memory, winning a game of Jeopardy!, an American TV quiz show. That might not sound like a big deal, but essentially, it was posed a question and was able to learn enough to answer it, which some could argue is the essence of intelligence.
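The broad idea behind a system like Watson can be illustrated with a toy sketch of retrieval-based question answering: score stored passages against the question and return the best match. This is an enormous simplification of what Watson actually does, and the corpus below is invented for the example.

```python
# Toy retrieval-based question answering: pick the stored passage that
# shares the most words with the question. Real systems like Watson use
# far more sophisticated language analysis; this only shows the concept.

corpus = {
    "Toronto": "Toronto is the largest city in Canada.",
    "Jeopardy": "Jeopardy! is an American television quiz show.",
    "Watson": "Watson is a question answering system built by IBM.",
}

def answer(question: str) -> str:
    """Score each passage by word overlap with the question and
    return the best-matching passage."""
    q_words = set(question.lower().split())
    return max(
        corpus.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
    )

print(answer("What is the largest city in Canada?"))
```

A real system would parse the question's intent and rank candidate answers with confidence scores, but even this crude overlap heuristic captures the core loop: question in, evidence consulted, answer out.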

As systems surpass Watson and their ability to "answer questions" improves, it won't be long before they are answering questions we ourselves cannot answer. At that point AI will have surpassed human abilities, and that is where the real danger lies. But danger may arise even before then.

Human-Human Disparity

An AI that is even marginally intelligent and useful, even if it is not on or above the level of a human, can still benefit those who created it, giving them advantages over those without access to it. Intelligent software is already being developed and deployed around the world by corporations to procure and leverage volumes of information no individual human could process. There are AI systems that play the stock market for investment firms, generating profits by subtly manipulating trades at a speed no human trader or investor could match.
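The speed advantage described above can be made concrete with a toy simulation. All numbers here are invented: we build a price series with short-lived trends, then compare a "fast" trader who reacts within one tick against a "slow" trader acting on stale information. This is not how real trading systems work; it only illustrates why reaction speed alone confers an edge.

```python
# Toy, purely illustrative simulation of a speed advantage in trading.
# All parameters are invented for the example.
import random

random.seed(0)

# Build a price series with momentum: each tick the trend continues
# with 70% probability and reverses with 30% probability.
moves = []
step = 0.1
for _ in range(1000):
    if random.random() < 0.3:
        step = -step
    moves.append(step)

prices = [100.0]
for m in moves:
    prices.append(prices[-1] + m)

def momentum_profit(delay: int) -> float:
    """Follow the last price move the trader has actually seen;
    'delay' is how many ticks stale that information is."""
    total = 0.0
    for t in range(delay + 1, len(prices) - 1):
        last_seen_move = prices[t - delay] - prices[t - delay - 1]
        realized_move = prices[t + 1] - prices[t]
        # Go long after an observed up move, short after a down move.
        total += realized_move if last_seen_move > 0 else -realized_move
    return total

fast = momentum_profit(delay=0)  # machine: reacts within one tick
slow = momentum_profit(delay=5)  # human: reacts on stale prices
print(f"fast trader: {fast:+.1f}, slow trader: {slow:+.1f}")
```

Because the trends are short-lived, information five ticks old is nearly worthless, so the fast trader's profit dwarfs the slow trader's. This is the essence of the advantage: not better judgment, just earlier action on the same signal.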

This gives rise to a technologically enabled socioeconomic disparity that simply working harder or smarter cannot even out. If these forms of AI are not open source or accessible to a reasonably large population, the likelihood of unwarranted power accumulating in the hands of those who do possess them increases greatly.

AI already "plays" the stock market, and often wins more than humans do.

There is also the increasing possibility of human beings directly augmenting their own intelligence with peripheral or integrated AI. It may sound far-fetched, but remember that technology is advancing exponentially and that human-machine interfaces are increasingly showing up in tech headlines around the world.

DARPA, the famous US defense research agency, has already developed implants to expand human memory, while biomedical researchers have created implants that allow animals and people to control robotic limbs with their brains alone. This means that augmented or integrated intelligence and physical strength is not only possible, it is inevitable.

Human-human disparity means that at a certain point, when this technology is sufficiently mature, some person or group of people, somewhere, will possess intelligence unmatched by unaltered human beings. They will become a technologically conceived divergent "species" with capabilities we could no more match or understand than a chimpanzee could match or understand human intellect and technology.

Remember that what seems cumbersome today was impossible yesterday, and, like the Internet in its early days, will soon give way to something ubiquitous and virtually seamless in our daily lives.

And because technology progresses exponentially, an unfair advantage will become an inconceivable and unbridgeable chasm so quickly that there will be nothing we can do to reverse the disparity once it manifests.

Machine-Human Disparity

But what about a machine itself that is capable of learning and eventually matches or exceeds human intelligence? It, too, will exhibit motivations and behaviors, and possess capabilities, that we not only could not match but likely could not even understand.


A machine with an IQ of 500 or 1,000, versus humans who rarely exceed 100-145, would be to us as we are to a single-celled organism. What would it do? How would it perceive us? Would it even notice or bother with us at all? Would the fact that we were able to create it, and were thus capable of creating an equal that could threaten it, warrant from its perspective the complete eradication of humanity?

A Future with Superintelligence

Could the likelihood of AI of this kind arising explain why, despite our expanding knowledge of the galaxy, we have failed to find other intelligent species to communicate with? Could the Fermi Paradox be solved by considering the possibility that once your intelligence exceeds natural constraints, you are no longer interested in (or even capable of) communicating with biological lifeforms? Or even interested in existing on the same plane of existence with them?

Could the reason we find no intelligent life in the galaxy be that civilizations like our own exist for only a short time before transcending into something entirely disinterested in communicating with other biological species?

These questions are posed speculatively because we simply have no way of knowing. From what we understand of nature on Earth, organisms have an inherent drive to dominate and control their environment to ensure their own survival above all others. Whether such "laws of nature" are universal (that is, hold beyond Earth if life exists elsewhere) is one question. Whether such laws could transcend biological life and extend to machines possessing superior artificial intelligence is another.

And even if that drive does carry over into artificial intelligence, how it would manifest remains unpredictable.

Organisms with limited intelligence pursue self-preservation in a selfish, short-sighted, and predictable manner. Under unnatural or adverse conditions, these instincts can even become counterproductive to self-preservation (overpopulation, resource depletion, etc.). Human beings are capable of this short-sighted mode, but also of a more rational means of ensuring self-preservation: we can examine our inherent instincts and willfully choose either to override them when they become a detriment, or to innovate and create the conditions under which they once again serve our survival. Would AI do the same?

Not Just Another Invention, Maybe the Last Invention  

Clearly, AI is not just another invention. It is the invention of a new form of existence, exceeding the fundamental intellectual parameters of those who have made it. Nothing like it has ever been done before by humanity and no suitable analogy exists with which to compare it.

Once we create something superior to ourselves, the act would be irreversible. Our "putting it into a cage" would be about as likely as a mouse putting a human in a cage. There would be no defense against it if it turned out badly, and there is no way to predict what it will turn out to be. And at the same time, it is inevitable: human curiosity and innovation ensure that eventually, no matter what is done to prevent it, someone, somewhere will create an artificial intelligence beyond our own.

We can only hope that when the day comes, whatever we create will be uninterested in us and our plane of existence, and will move on to something more fitting for its expanding abilities. We can also only hope that its passage from human-level intelligence to superintelligence happens quickly and without the dystopian scenarios depicted in science fiction.

Futurist Ray Kurzweil is optimistic about technology, believing it will lead to an increasingly decentralized world. He predicts that AI on par with human intelligence could arrive as early as 2029. However, no human, however intelligent, can predict what will happen once AI exceeds human intelligence.

As for AI within a spectrum comparable to human intelligence, we must ensure it is not monopolized by any one person or group of people. The temptation to abuse such power would be no different from that behind existing forms of disparity already being exploited and abused. Some may argue otherwise, but we urge caution because even as we endeavor to create a new form of intelligence, we still do not fully understand our own, nor the motivations and mechanisms that influence and utilize it.

No matter what, the days when AI was a far-off topic of a science fiction future are over. With some experts predicting human-level AI appearing as early as 2029, and with AI already helping some amass power, wealth, and influence today, AI is now a topic of immediate concern that affects everyone.
