
FnnTEK News

Recent articles, updates, news and such. 

Culpeper Times

posted Apr 17, 2015, 8:14 AM by William Sadler   [ updated Sep 7, 2015, 7:56 AM ]

Thanks to the Culpeper Times for the story about our technology! You can find that story at:

Hawking, Tyson, Wozniak, Kurzweil and Musk are Wrong about Cognitive AI - Part 1

posted Apr 4, 2015, 9:37 AM by William Sadler   [ updated Sep 7, 2015, 8:07 AM ]

Related Info

"Could robots turn people into PETS? Elon Musk claims artificial intelligence will treat humans like 'labradors'" - Daily Mail Story

Creating Artificial Intelligence Is Akin to Summoning a Demon - Elon Musk, Outer Places Story

AI Could Spell the End of the Human Race - Stephen Hawking, Outer Places Story

Varieties of Emergence - On Strong Emergence by David J. Chalmers

Weak Emergence - Mark A. Bedau's case against strong emergence

"we'll be pet Labradors", "breed the docile humans", "maximize serotonin to make us happy", "robot uprising", "software that rewrites itself", "greatest existential threat", "technological singularity...", Sounds like bad 1950s science fiction, but its not. It's coming from some of the most successful, intelligent, learned, famous and high profile scientists alive at the start of the 21st century. But, quite simply, they are completely wrong about a truly cognitive AI - and possibly right about what passes for AI in the field today.

There are three basic problems here. The first is accepting an unproven assumption as natural law because of an unwillingness to accept observational facts. Ever since the 1956 Dartmouth Conference set the stage, there has been one assumption so basic to the AI field that it is not even questioned when the problem is addressed: that cognition, intelligence, and ultimately consciousness itself are aspects of computation. If you have the correct algorithms arranged in the correct fashion, you have an intelligent, self-aware, conscious entity.

The mere thought that "there exists a thing whose effect is so much greater than the sum of its constituent elements that a complete understanding of those constituent parts is not sufficient to explain the thing itself" is anathema. Therefore, if you accept this view, cognition is an as-yet-unknown algorithm or collection of algorithms.

One expression of this is the widespread disbelief in 'strong emergence', even among those who study the emergent properties of non-linear systems. Gödel was in the same boat when he proved (in the mathematical sense) that there are true statements that cannot be proved within a consistent system of mathematics. Our pride as humans, scientists, and especially mathematicians seems to resist any limits on our knowledge.

If we accept this stance, then the viewpoint of these prominent scientists becomes easy to see, because all current AI efforts are based on the premise that we create algorithmic approximations of intelligent behavior, hook them together somehow, and get larger chunks of intelligent behavior.

The Real Problem

The dire predictions stem from two issues: a system rewriting its own code, and a system doing something we would consider 'bad' because the AI thinks it's 'good' in a way that we did not predict and therefore could not program around. These are actually related issues, but to see the relation we're going to have to walk each path.

We'll look at bad optimization first. Here's the scenario: you hand your cat to Robbie the Robot and say, "Fluffy has fleas - please get rid of them." Robbie considers for a second and throws Fluffy into the furnace. After all, extreme heat kills fleas. Problem solved.

Algorithmically Optimizing AI

That is what got poor ol' Fluffy. If you remove all cognitive constraints from a problem's solution, the problem becomes much easier to solve. If a system is an algorithm-based, goal-seeking program that optimizes its solution, then any notions of cooperation, altruistic strategies, or unstated goals are moot. The system is constrained by its sheaf of algorithms; it will tend to be trapped in local minima, unable to look for a globally acceptable solution because of algorithmic parameters. When such a system gets control of a Robbie the Robot, Fluffy had better watch out.
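
To make that concrete, here is a minimal sketch of the failure mode; the actions and scores are hypothetical illustrations, not anything from our systems:

```python
# Hypothetical illustration of unconstrained goal-seeking, not FnnTEK code.
# Each candidate action is scored against the stated goal only:
# "get rid of the fleas". Unstated goals (keep the cat alive) score nothing.
actions = {
    "flea shampoo":     {"fleas_killed": 0.95, "cat_survives": True},
    "flea collar":      {"fleas_killed": 0.80, "cat_survives": True},
    "throw in furnace": {"fleas_killed": 1.00, "cat_survives": False},
}

def stated_objective(outcome):
    # Only the explicit goal is scored; survival is an unstated constraint.
    return outcome["fleas_killed"]

best = max(actions, key=lambda a: stated_objective(actions[a]))
print(best)  # "throw in furnace" - optimal for the stated goal alone
```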

So, let's allow the system to modify its own algorithms and see what happens.

Self-Modifying Code

The obvious fix is for Robbie to be able to modify his own code, which is the beginning of the trip up singularity hill. Robbie surfs the internet, finds potential new algorithms, tries them on, and changes his parameters - until his original goal-seeking is satisfied faster than before. That worked well, so let's do it again - and again - learning more and more each time, until: superintelligence, the singularity. Robbie decides that the best course of action is to inject serotonin into people's brains - they're happy, and the previously enacted cat solution is no longer causing anyone distress.
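
The shape of that loop is easy to sketch. This is a toy hill-climber, not our code; evaluate() and mutate() are stand-ins for "how fast the goal is satisfied" and "try on a new algorithm":

```python
# Toy sketch of the self-modification loop described above; evaluate() and
# mutate() are hypothetical stand-ins, not real APIs.
import random

def evaluate(params):
    # Stand-in for "how quickly the original goal is satisfied".
    return -sum((p - 3.0) ** 2 for p in params)

def mutate(params):
    # Stand-in for "find a new algorithm / tweak the parameters".
    return [p + random.gauss(0.0, 0.1) for p in params]

params = [0.0, 0.0]
score = evaluate(params)
for generation in range(10_000):       # ...and again, and again
    candidate = mutate(params)
    if evaluate(candidate) > score:    # keep anything that helps the goal
        params, score = candidate, evaluate(candidate)
# Nothing in the loop ever asks whether the improved behavior is one we want.
```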

Practicum

The question that kicked off this update was (paraphrased), "Just because you can do this thing, should you?" A quick question for consideration - keep it in mind as we explore the answer: "Does anyone actually believe that AI isn't coming?" I'm going to guess the answer is 'no' - so the question that is more relevant to me (for the short term) is, "Do the FrANN™ techniques address the issues posed above?"

Cognitive AI

I'm going to state up front that both of the problems described above can be addressed by a sufficiently complex cognitive model. Quite a few things go into the phrases 'sufficiently complex' and 'cognitive model', but the main point is that the complexity must be sufficient for the cognition we are seeking to emerge from the underlying system and to solve constrained problems. But that's for part two.

Question: How can a computer do something it is not programmed to do?

posted Apr 1, 2015, 6:57 AM by William Sadler   [ updated Sep 7, 2015, 7:53 AM ]

Assuming it's not a bug, of course? There were actually two related questions. The first was by email:

You make a claim that is IMPOSSIBLE!
Where did you get the idea that: “…It started using a pair of webcams under its control to track the movement of objects that were in its visual field. There was no software in the system written to track objects.”?

And secondly, via an anonymous comment on the campaign site:

“A while back, a computer system did something that it had not been programmed to do. It started using a pair of webcams under its control to track the movement of objects that were in its visual field. There was no software in the system written to track objects.”
Do you have a source for that story? Never heard of it.

The answer to the second question is easy: the source for the story is me. I was describing early experimentation with a system that is the 3rd-generation precursor of the system that is now being marketed. I had no intention of publishing any details until AFTER I had developed it to the point where it was commercially useful. Hey, what can I say, I'm a capitalist at heart. Here's the rest of the story...

Background

The source of the 'claim' and 'story' was work seeking an inexpensive method for performing visual tasks. The goal was to use inexpensive, low-resolution webcams instead of high-resolution hyperspectral cameras and see how much of the high-end detection capability was possible using the neuron-simulation software I had written. The hardware setup was very primitive: the camera mounts were built from an Erector set, pan/tilt was controlled with hobbyist radio-control servos, image pre-processing was done with small Intel Atom single-board computers, and the whole thing was cobbled together on acrylic plates with whatever old PC chassis hardware I had.

The main processing box had what was, for that time, an absolutely enormous quantity of memory (16 GB), an early GPU card, and almost a full TB of hard disk space. That box cost nearly $25k when it was bought; the equivalent is about $750 today and available in a laptop. Sheesh.

Our technique at the time involved generating a fractal, space-filling pattern and 'growing' neurons around that pattern to get a simulation of an emergent fractal brain structure. While I was debugging the interface code, the ANN was given control of the pan/tilt functions and fed inputs from the cameras. When the code was finally working, the ANN began moving the cameras about in what looked like random patterns, which is exactly what I expected at that point in time. The system used a SOM (self-organizing map) to display patterns of neural activity, and I was set to experiment.
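
For the curious, the flavor of that growth step can be sketched with a standard Hilbert curve standing in for our fractal pattern (the actual growth model was different, and the jitter here is arbitrary):

```python
# Illustrative sketch only: a textbook Hilbert curve stands in for the
# fractal space-filling pattern; neuron placement jitter is arbitrary.
import random

def hilbert_d2xy(n, d):
    """Map index d along an n x n Hilbert curve to (x, y); n is a power of 2."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

# "Grow" neurons by scattering them around points along the curve.
n = 16
neurons = [
    (cx + random.gauss(0, 0.2), cy + random.gauss(0, 0.2))
    for cx, cy in (hilbert_d2xy(n, d) for d in range(n * n))
]
```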

I realized that showing pictures to the cameras was not the best training method (funny how you miss some obvious things...), so I was working on splitting the video processing to accept an RTSP URI instead of a hard interface to a camera, so I could just point the system at a simple server and push whatever images I wanted.
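
In today's terms the decoupling looks something like this - our original code predates OpenCV being this convenient, but the idea is the same: one code path whether the source is a camera or a stream (the RTSP address below is hypothetical):

```python
# One code path for a hard-wired webcam or a network stream.
import cv2

def open_source(source):
    # source may be a device index (0, 1, ...) or an RTSP URI string.
    cap = cv2.VideoCapture(source)
    if not cap.isOpened():
        raise IOError(f"could not open video source {source!r}")
    return cap

cap = open_source(0)                                   # hard camera interface
# cap = open_source("rtsp://localhost:8554/training")  # or a simple server
ok, frame = cap.read()
```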

A couple of hours into this task, I realized that the cameras had stopped moving. The SOM showed that the simulated neurons were still firing, but the cameras were not moving... oh well, another bug somewhere. A little while later, the cameras moved for a bit and stopped. That was odd. Then it happened again, which got my attention enough that I started paying attention to the system. I dumped the ANN; the data structures had changed a great deal from the baseline I recorded when the system first started. Hey, it's learning something! Cool... But why did the cameras stop? They should still be moving...

The BIG surprise

Investigating this took a few more hours, during which the cameras would periodically move, and I'd occasionally get up to get something to drink, go to the restroom, stuff like that. One time, I came into the room and the cameras followed my movement. Coincidence - it randomly moved while I walked in. The next time I walked in front of the cameras, it did it again. Not coincidence - once I can buy; twice, let's look closer.

What followed was a multi-year investigation of the data in the system. The original system didn't record the signals as they propagated through the network; once that was put in place, I realized that the system was reacting to reduce a noisy signal. When the two images being fed into the system by the cameras were the same, the 'signal traffic' between the different regions of the brain went to a local minimum. I conjectured that this is why the system learned to minimize camera movement. When something moved across the visual field of the cameras, the system would attempt to keep the images the same, and it learned that moving the cameras to keep the object in the center of the field had that tendency. The periodic random movement happened when enough random neuron firing generated a signal strong enough to move the cameras.
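
As a stand-in for what the network converged on, the behavior can be caricatured in a few lines - 'traffic' proxied by the pixel difference between the two camera frames, and a controller nudging pan/tilt toward whatever moved (the threshold is arbitrary, and this is not the ANN itself):

```python
# Caricature of the learned behavior, not the ANN itself. Assumes grayscale
# frames as 2-D numpy arrays.
import numpy as np

def traffic(frame_a, frame_b):
    # Proxy for inter-region signal traffic: how different the two views are.
    return float(np.mean(np.abs(frame_a.astype(float) - frame_b.astype(float))))

def nudge_toward_motion(prev, curr):
    # Pan/tilt deltas toward the centroid of whatever changed between frames.
    diff = np.abs(curr.astype(float) - prev.astype(float))
    ys, xs = np.nonzero(diff > 25)       # arbitrary change threshold
    if len(xs) == 0:
        return 0.0, 0.0                  # nothing moved: hold still, traffic stays low
    h, w = curr.shape
    return (xs.mean() - w / 2) / w, (ys.mean() - h / 2) / h
```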

What I discovered was an emergent phenomenon related to signal optimization. I called it a 'reduction in chaos' when I first saw it, but in the past couple of years I've come to use a different term, since I've seen some neurological scans of pain and the activity looked very similar. Our ANN is optimizing to remove pain.

The Key is the Feedback

What happened by providence in that original network was that it had the right amount of feedback to cause that single emergent phenomenon - the question then was how this phenomenon could be created at will and used for other purposes. Over time, I learned to control the situation: how to grow the neurons, how to generate pain feedback loops to control the system's actions, and the necessary and sufficient conditions to cause the system to seek a particular goal in the real world. We abandoned the primate model as impossible with the hardware of the time (4 years ago), and the thought model in my head was 'small hunting carnivore' - small because it was prey and had to hide, and because the hardware didn't exist to do a bigger one; hunting because it was seeking things...
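
The skeleton of such a pain feedback loop is simple enough to sketch, though everything here - the actions, the world's response, the bookkeeping - is a made-up stand-in for the real growth model:

```python
# Hypothetical sketch of a pain-feedback loop: actions that lower a scalar
# "pain" signal are reinforced. All numbers and actions are made up.
import random

pain = 1.0
preference = {a: 0.0 for a in ("hold", "pan_left", "pan_right")}

def world_response(action):
    # Stand-in environment: holding still lowers pain, motion raises it.
    delta = -0.1 if action == "hold" else random.uniform(0.0, 0.1)
    return max(0.0, pain + delta)

for step in range(100):
    # Pick the currently preferred action, with a little exploration noise.
    action = max(preference, key=lambda a: preference[a] + random.gauss(0, 0.05))
    new_pain = world_response(action)
    preference[action] += pain - new_pain    # reinforce whatever reduced pain
    pain = new_pain
```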

Current State

I have learned quite a bit about using the various emergent phenomena - we no longer use fractal space-filling curves; the neural structure now emerges as well. (Rabbit trail: an interesting side phenomenon - if you put 2 cameras into our neural growth model, the structure bifurcates into 2 spatially separated hemispheres. If you put in 3, you get a three-lobed brain. Back up the rabbit hole.) I've also gotten several organelle-type structures to emerge that have functional equivalents in the cortex, amygdala, LGN, and motor cortex. Our recent tests of Yannis at a DHS cyber security test range have indicated that the 'small mouse' system can detect anomalous network traffic, determine whether it is malicious or benign, and act to mitigate it if it is malicious. We're expanding that system to SFP (Small Furry Primate) size for our next test, a rigorously structured single-blind test that specifically looks for inductive solutions to network attacks. We will train the system on x categories of attacks, and the system will be tested on x+n categories of attacks. Mitigation will be a part of that test, but its efficacy is not being measured in the next phase. But believe me, we'll collect the data!
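
For clarity, the shape of that protocol looks like this (category names are hypothetical placeholders, not the real test plan):

```python
# Sketch of the train-on-x, test-on-x+n protocol; category names are
# hypothetical placeholders, not the real test plan.
train_categories = ["port_scan", "dos_flood", "brute_force"]    # x = 3
novel_categories = ["dns_tunnel", "lateral_movement"]           # n = 2
test_categories = train_categories + novel_categories           # x + n

def split(records):
    # The system trains only on known categories but is scored on all of them,
    # so correct handling of the n novel categories must be inductive.
    train = [r for r in records if r["category"] in train_categories]
    test = [r for r in records if r["category"] in test_categories]
    return train, test
```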

But What About Skynet?

I was asked early on, "Just because you can do a thing doesn't mean you should - should you create a cognitive AI?" And there are fears about the end of the world expressed by such notables as Stephen Hawking and Elon Musk. The quick answer is, "Yes, not only should we, but we must" and "Hawking and Musk are wrong - unless we don't build cognitive AI." The long answer will have to wait for the next update.

Acknowledgements

I've said 'I' a lot in this post, and I'll admit that the idea, research, and work in the earliest stages were performed by me. But no large effort is achieved by individuals alone. My dear wife Penny has had to put up with a house full of servers and many nights when I was up to all hours fiddling with my Erector-set robot. My son Will has to put up with his old man sometimes working instead of fishing. My first investor, Martin Cooper, who fronted me the hardware money when I didn't have the wherewithal to build the first system, is directly responsible for the first "that's strange" moment. The FnnTEK founding team - Dick Dunnivan, Rich Schott, Geoff Cauble, and Ned Franks - provided the wisdom, time, money, and contacts that bought enough of my time to relieve me of a day-to-day job and actually do the research that turned a lab curiosity into a network security appliance; it couldn't have happened without you. Lastly, my 'guys': Mike McDargh, who has set me straight when my programming has gone astray and has sped up our code so much that it's useful and not a lab trick, and Mick Hart, who brings years of experience, math that greatly exceeds my own, and a practicality of approach that I've valued since our days at BellSouth in the mid-90s. While they 'work' for me, I've always wondered why you would hire someone who isn't better than you are. These guys are better than I am at every task set before us. The ongoing effort will be successful because of their efforts.

--

William Sadler, CEO, FnnTEK, Inc.

YouTube Video

posted Mar 27, 2015, 10:34 AM by William Sadler   [ updated Feb 11, 2016, 8:35 AM ]

We convert time-series data into FFTs before we pump it into a FrANN™ system. We've also constructed visual interfaces to aid in debugging - occasionally one is interesting enough to share. Enjoy!

https://youtu.be/G5RUJszBrqM
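
For anyone curious what that preprocessing step looks like, here's a minimal sketch (window and hop sizes are arbitrary, not our production settings):

```python
# Minimal sketch of the time-series-to-FFT front end: slice the series into
# overlapping windows and keep the magnitude spectrum of each.
import numpy as np

def fft_frames(series, window=256, hop=128):
    frames = []
    for start in range(0, len(series) - window + 1, hop):
        chunk = series[start:start + window]
        frames.append(np.abs(np.fft.rfft(chunk)))   # magnitude spectrum
    return np.array(frames)

frames = fft_frames(np.sin(np.linspace(0.0, 100.0, 10_000)))  # toy input
```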
