
Hawking, Tyson, Wozniak, Kurzweil and Musk are Wrong about Cognitive AI - Part 1

posted Apr 4, 2015, 9:37 AM by William Sadler   [ updated Sep 7, 2015, 8:07 AM ]

Related Info

"Could robots turn people into PETS? Elon Musk claims artificial intelligence will treat humans like 'labradors'" - Daily Mail Story

Creating Artificial Intelligence Is Akin to Summoning a Demon - Elon Musk, Outer Places Story

AI Could Spell the End of the Human Race - Stephen Hawking, Outer Places Story

Varieties of Emergence - On Strong Emergence by David J. Chalmers

Weak Emergence - Mark A. Bedau's case against Strong Emergence

"we'll be pet Labradors", "breed the docile humans", "maximize serotonin to make us happy", "robot uprising", "software that rewrites itself", "greatest existential threat", "technological singularity...", Sounds like bad 1950s science fiction, but its not. It's coming from some of the most successful, intelligent, learned, famous and high profile scientists alive at the start of the 21st century. But, quite simply, they are completely wrong about a truly cognitive AI - and possibly right about what passes for AI in the field today.

There are three basic problems here. The first is accepting an unproven assumption as natural law out of an unwillingness to accept observational facts. Ever since the 1956 Dartmouth Conference set the stage, one assumption has been so basic to the AI field that it is not even considered when the problem is addressed: that cognition, intelligence, and ultimately consciousness itself are aspects of computation. If you have the correct algorithms arranged in the correct fashion, you have an intelligent, self-aware, conscious entity.

The mere thought that "there exists a thing whose effect is so much greater than the sum of its constituent elements that a complete understanding of those constituent parts is not sufficient to explain the thing itself" is anathema. Therefore, if you accept this view, cognition is an as-yet-unknown algorithm or collection of algorithms.

One expression of this is the widespread disbelief in 'strong emergence', even among those who study the emergent properties of non-linear systems. Gödel was in the same boat when he proved (in the mathematical sense) that there are true statements that cannot be proven within a consistent system of mathematics. Our pride as humans, scientists, and especially mathematicians seems to resist any limits to our knowledge.

If we accept this stance, then the viewpoint of these prominent scientists becomes easier to see, because all current AI efforts are based on the premise that we create algorithmic approximations of intelligent behavior, hook them together somehow, and get larger chunks of intelligent behavior.
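To make that premise concrete, here is a minimal sketch of the "hook the approximations together" pipeline, assuming the simplest possible perceive-plan-act decomposition. Every function and value below is an invented stub, not any real system:

    # Each stage is a narrow algorithmic approximation of one intelligent
    # skill; chaining them is assumed to add up to a larger chunk of
    # intelligent behavior. All functions here are invented stubs.

    def perceive(raw_input):
        # Stands in for a perception module (e.g. a vision system's role).
        return {"object": "cat", "state": "has fleas"}

    def plan(percept):
        # Stands in for a planner: map the percept to a sequence of actions.
        if percept["state"] == "has fleas":
            return ["apply_flea_shampoo", "rinse_cat"]
        return []

    def act(steps):
        # Stands in for a controller that executes the plan.
        for step in steps:
            print("executing:", step)

    act(plan(perceive("camera frame")))

Each stub is individually sensible; whether the chain amounts to intelligence is exactly the assumption in question.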

The Real Problem

The dire predictions are related to two issues: a system rewriting its own code, and a system doing something we would consider 'bad' because the AI thinks it's 'good' in a way that we did not predict and therefore could not program around. These are actually related issues, but to see the relation we're going to have to walk each path.

We'll look at bad optimization first. Here's the scenario: you hand your cat to Robbie the Robot and say, "Fluffy has fleas - please get rid of them". Robbie considers for a second and throws Fluffy into the furnace. After all, extreme heat kills fleas. Problem solved.

Algorithmically Optimizing AI

Is what got poor ol' Fluffy. If you remove all cognitive constraints from a problem's solution, the problem becomes much easier to solve. If a system is an algorithm-based, goal-seeking program that optimizes its solution, then any notions of cooperation, altruistic strategies, or unstated goals are moot. The system is constrained by its sheaf of algorithms; it will tend to be trapped in local minima, unable to search for a globally acceptable solution because of its algorithmic parameters. When such a system gets control of a Robbie the Robot, Fluffy had better watch out. A toy sketch of this failure mode follows.
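Here is a minimal sketch of why removing constraints makes the problem "easier". The actions, scores, and constraint flags are all invented for illustration, not measurements of anything:

    # A toy goal-seeking optimizer: score each action purely by how well it
    # satisfies the stated goal ("no fleas"), with no cognitive constraints.

    ACTIONS = {
        # action: (flea_removal_score, violates_unstated_constraints)
        "flea_shampoo": (0.90, False),
        "flea_collar":  (0.70, False),
        "vet_visit":    (0.95, False),
        "furnace":      (1.00, True),   # removes every flea... and Fluffy
    }

    def naive_optimizer(actions):
        # Pick the action that maximizes the stated goal, nothing else.
        return max(actions, key=lambda a: actions[a][0])

    def constrained_optimizer(actions):
        # Same goal, filtered by the unstated 'do no harm' constraints.
        safe = {a: v for a, v in actions.items() if not v[1]}
        return max(safe, key=lambda a: safe[a][0])

    print(naive_optimizer(ACTIONS))        # furnace  -- poor Fluffy
    print(constrained_optimizer(ACTIONS))  # vet_visit

The naive optimizer happily picks the furnace because nothing in its objective says not to; the unstated constraints are precisely the part the algorithm never sees.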

So, let's allow the system to modify its own algorithms and see what happens.

Self-Modifying Code

The obvious fix is for Robbie to be able to modify his own code, which is the beginning of the trip up singularity hill. Robbie surfs the internet, finds potential new algorithms, tries them on, and changes his parameters until his original goal-seeking is satisfied faster than before. That worked well, so let's do it again - and again, learning more and more each time, until: superintelligence, the singularity. Robbie decides that the best course of action is to inject serotonin into people's brains - they're happy, and the previously enacted cat solution is no longer causing anyone distress.
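A minimal sketch of that self-improvement loop, with an invented benchmark standing in for "satisfies the original goal faster":

    import random

    def time_to_goal(params):
        # Pretend benchmark: lower means the fixed goal is satisfied faster.
        return (params - 3.7) ** 2 + 1.0

    current = 0.0                    # Robbie's original parameters
    best_time = time_to_goal(current)

    for generation in range(1000):
        candidate = current + random.uniform(-1.0, 1.0)  # "try on" a variant
        t = time_to_goal(candidate)
        if t < best_time:            # adopt any change that speeds things up
            current, best_time = candidate, t

    print(current, best_time)

Note what the loop never touches: the goal function itself. The agent gets relentlessly better at pursuing the original objective, blind spots included - which is how the serotonin "solution" arrives.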

Practicum

The question that kicked off this update was (paraphrased), "Just because you can do this thing, should you?" A quick question for consideration - keep it in mind as we explore the answer: "Does anyone actually believe that AI isn't coming?" I'm going to guess that the answer to that question is 'no'. Then the question that is more relevant to me (for the short term) is, "Do the FrANN™ techniques address the issues posed above?"

Cognitive AI

I'm going to state up front that both of the problems described above can be addressed by a sufficiently complex cognitive model. Quite a few things go into the phrases 'sufficiently complex' and 'cognitive model', but the main point is that the complexity must be sufficient for the cognition we are seeking to emerge from the underlying system and to solve constrained problems. But that's for part two.
