
Question: How can a computer do something it is not programmed to do?

posted Apr 1, 2015, 6:57 AM by William Sadler   [ updated Sep 7, 2015, 7:53 AM ]

Assuming it's not a bug, of course. There were actually two related questions; the first came by email:

You make a claim that is IMPOSSIBLE!
Where did you get the idea that: “…It started using a pair of webcams under its control to track the movement of objects that were in its visual field. There was no software in the system written to track objects.”?

And the second, via an anonymous comment on the campaign site:

“A while back, a computer system did something that it had not been programmed to do. It started using a pair of webcams under its control to track the movement of objects that were in its visual field. There was no software in the system written to track objects.”
Do you have a source for that story? Never heard of it.

The answer to the second question is easy: the source for the story is me - I was describing early experimentation with a system that is the third-generation precursor of the system now being marketed. I had no intention of publishing any details until AFTER I had developed it to the point where it was commercially useful. Hey, what can I say, I'm a capitalist at heart. Here's the rest of the story...

Background

The 'claim' and the 'story' came out of work seeking an inexpensive method for performing visual tasks. The goal was to use cheap, low-resolution webcams instead of high-resolution hyper-spectral cameras and see how much of the high-end detection capability could be recovered using the neuron simulation software I had written. The hardware setup was very primitive: the camera mounts were built from an Erector set, pan/tilt was controlled with hobbyist radio-control servos, image pre-processing was done on small Intel Atom single-board computers, and the whole thing was cobbled together on acrylic plates with whatever old PC chassis hardware I had lying around.

The main processing box had what was, for the time, an absolutely enormous quantity of memory (16GB), an early GPU card, and almost a full TB of hard disk space. That box cost nearly $25k when it was bought; the equivalent is about $750 today and available in a laptop. Sheesh.

Our technique at the time involved generating a fractal, space-filling pattern and 'growing' neurons around that pattern to get a simulation of an emergent fractal brain structure. While debugging the interface code, the ANN was given control of the pan/tilt functions and fed inputs from the cameras. When the code was finally working, the ANN began moving the cameras about in what looked like random patterns, which is exactly what I expected at that point in time. The system used a SOM (self-organizing map) to display patterns of neural activity, and I was set to experiment.
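For the curious, here's a minimal Python sketch of the flavor of thing I mean: a Hilbert curve standing in for the fractal space-filling pattern, with neurons dropped along it and wired to their near neighbors on the curve. The curve choice and the link rule are illustrative assumptions, not our actual growth model.

    def d2xy(order, d):
        """Map distance d along a Hilbert curve to (x, y) on a 2^order x 2^order grid.
        Standard Hilbert-curve decoding algorithm."""
        x = y = 0
        s, t = 1, d
        while s < (1 << order):
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                 # rotate the quadrant so the curve stays continuous
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def grow_neurons(order, link_radius=2):
        """Place one simulated neuron per curve cell and wire each to its
        neighbors along the curve - locally dense, fractal-ish connectivity."""
        n = (1 << order) ** 2
        positions = [d2xy(order, d) for d in range(n)]
        edges = [(d, d + k) for d in range(n)
                 for k in range(1, link_radius + 1) if d + k < n]
        return positions, edges

    positions, edges = grow_neurons(order=4)   # 256 neurons on a 16x16 grid
    print(len(positions), "neurons,", len(edges), "local connections")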

I realized that showing pictures to the cameras was not the best way to train the system (funny how you miss some obvious things...), so I was working on splitting the video processing to accept an RTSP URI instead of a hard-wired camera interface. That way I could just point the system at a simple server and push whatever images I wanted.
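The change itself is mundane. In generic OpenCV terms it looks roughly like the sketch below - this is not our actual pipeline code, and the URI is made up - but it shows why the split is easy: the capture object handles both a camera index and a stream URI, so the rest of the pipeline doesn't care where frames come from.

    import cv2  # pip install opencv-python

    def open_source(source):
        """Accept either an integer camera index or a stream URI -
        cv2.VideoCapture handles both."""
        cap = cv2.VideoCapture(source)
        if not cap.isOpened():
            raise RuntimeError(f"could not open video source: {source!r}")
        return cap

    # Before: hard-wired to physical hardware.
    # cap = open_source(0)
    # After: point the system at any server and push whatever images you want.
    cap = open_source("rtsp://training-server.local/feed")  # hypothetical URI

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... hand `frame` to the image pre-processing stage ...
    cap.release()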

A couple of hours into this task, I realized that the cameras had stopped moving. The SOM showed that the simulated neurons were still firing, but the cameras were not moving... oh well, another bug somewhere. A little while later, the cameras moved for a bit and stopped. That was odd. Then it happened again, which got my attention enough that I started paying attention to the system. Dump the ANN: the data structures showed a large amount of change from the baseline I recorded when the system first started. Hey, it's learning something! Cool... But why did the cameras stop? They should still be moving...
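(For the non-programmers: "dump the ANN" just means snapshot the network's state and compare it against the baseline. Something like this toy NumPy check, with the file names invented for illustration:)

    import numpy as np

    baseline = np.load("ann_state_baseline.npy")  # snapshot taken at first startup
    current  = np.load("ann_state_now.npy")       # snapshot taken just now

    delta = np.abs(current - baseline)
    changed = (delta > 1e-3).mean()               # fraction of values that moved
    print(f"mean |change| = {delta.mean():.5f}, {changed:.1%} of state changed")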

The BIG surprise

Investigating this took a few more hours, during which time the cameras would periodically move, and I'd occasionally get up to get something to drink, go to the restroom, stuff like that. One time, I came into the room and the cameras followed my movement. Coincidence - it randomly moved while I walked in. The next time I walked in front of the cameras, it did it again. Not coincidence - once I can buy; twice, let's look closer.

What followed was a multi-year investigation of the data in the system. The original system didn't record the signals as they propagated through the network; once that was put in place, I realized that the system was reacting to reduce a noisy signal. When the two images being fed into the system by the cameras were the same, the 'signal traffic' between the different regions of the brain went to a local minimum. I conjectured that this is why the system learned to minimize camera movement. When something moved across the visual field of the cameras, the system would attempt to keep the images the same, and it learned that moving the cameras to keep the object in the center of the field tended to do that. The periodic random movement happened when enough random neuron firing generated a signal strong enough to move the cameras.
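In hindsight, the behavior can be caricatured in a dozen lines of Python: treat the image difference as 'pain', hold still when the scene is still, and nudge pan/tilt toward whatever moved. The threshold and the proportional controller below are my illustrative stand-ins - the ANN evolved its own version of this, nothing was ever coded this way:

    import numpy as np

    def pain(left, right):
        """Proxy for the 'signal traffic': how different the two images are."""
        return np.abs(left.astype(float) - right.astype(float)).mean()

    def motion_centroid(prev, curr, threshold=25):
        """Centroid of pixels that changed between frames; None if the scene is still."""
        moved = np.abs(curr.astype(float) - prev.astype(float)) > threshold
        if not moved.any():
            return None
        ys, xs = np.nonzero(moved)
        return xs.mean(), ys.mean()

    def step(pan, tilt, centroid, shape, gain=0.01):
        """Nudge pan/tilt so whatever moved drifts toward the image center,
        which tends to keep the images alike and the 'pain' low."""
        if centroid is None:
            return pan, tilt            # nothing moving: holding still is optimal
        cx, cy = centroid
        h, w = shape
        return pan + gain * (cx - w / 2), tilt + gain * (cy - h / 2)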

What I discovered was an emergent phenomenon related to signal optimization. I called it a 'reduction in chaos' when I first saw it, but in the past couple of years I've come to use a different term, having since seen some neurological scans of pain - the activity looked very similar. Our ANN is optimizing to remove pain.

The Key is the Feedback

What happened by providence in that original network was that it had the right amount of feedback to cause that single emergent phenomenon - the question then was how this phenomenon could be created at will and used for other purposes. Over the course of time, I learned to control the situation: how to grow the neurons, how to generate pain feedback loops to control the system's actions, and the necessary and sufficient conditions to cause the system to seek a particular goal in the real world. Four years ago, we abandoned the primate model as impossible with current hardware, and the thought model in my head became 'small hunting carnivore' - small because it was prey and had to hide (and because the hardware didn't exist to do a bigger one), hunting because it was seeking things...
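The simplest caricature of a pain feedback loop is bandit-style trial and error - emphatically not what our neurons do, but it shows why a pain signal alone is enough to steer behavior toward a goal:

    import random

    def learn(actions, pain_of, trials=1000, eps=0.1):
        """Prefer whichever action has historically produced the least pain,
        with occasional random exploration (epsilon-greedy)."""
        avg = {a: 0.0 for a in actions}
        n   = {a: 0   for a in actions}
        for _ in range(trials):
            a = random.choice(actions) if random.random() < eps \
                else min(actions, key=avg.get)
            p = pain_of(a)                    # environment returns a pain level
            n[a] += 1
            avg[a] += (p - avg[a]) / n[a]     # running mean of observed pain
        return avg

    # Toy environment: 'hold_still' hurts least, as with the cameras.
    pains = {"hold_still": 0.1, "sweep": 0.8, "jitter": 0.5}
    print(learn(list(pains), lambda a: pains[a] + random.gauss(0, 0.05)))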

Current State

I have learned quite a bit about using the various emergent phenomena - we no longer use fractal space-filling curves; the neural structure now emerges as well. (Rabbit Trail: Interesting Side Phenomenon - if you put 2 cameras into our neural growth model, the structure bifurcates into two spatially separated hemispheres. If you put in 3, you get a three-lobed brain. Back up the rabbit hole.) I've also gotten several organelle-type structures to emerge that have functional equivalents in the Cortex, Amygdala, LGN, and Motor Cortex.

Our recent tests of Yannis at a DHS cyber security test range have indicated that the 'small mouse' system can detect anomalous network traffic, determine whether it is malicious or benign, and act to mitigate it if it is malicious. We're expanding that system to SFP (Small Furry Primate) size for our next test, a rigorously structured single-blind test looking specifically for inductive solutions to network attacks. We will train the system on x categories of attacks, and the system will be tested on x+n categories of attacks. Mitigation will be part of that test, but its efficacy is not being evaluated in the next phase. But believe me, we'll collect the data!
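The train-on-x, test-on-x+n design is plain category holdout. In sketch form - the category names and data structures here are invented for illustration; the real list belongs to the test plan:

    from dataclasses import dataclass

    @dataclass
    class Flow:
        features: list
        category: str

    # Hypothetical attack categories - stand-ins, not the DHS test plan.
    categories = ["port-scan", "dos", "sql-injection", "c2-beacon", "exfiltration"]
    flows = [Flow(features=[0.0], category=c) for c in categories]  # stand-in data

    x = 3
    train_cats = set(categories[:x])         # the system sees only these
    train_set = [f for f in flows if f.category in train_cats]
    test_set  = flows                        # includes the n unseen categories

    # Detections on the held-out categories are evidence of induction,
    # not memorization of trained signatures.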

But What About Skynet?

I was asked early on, "Just because you can do a thing doesn't mean you should - should you create a cognitive AI?", and there are fears about the end of the world expressed by such notables as Stephen Hawking and Elon Musk. The quick answer is, "Yes - not only should we, but we must," and "Hawking and Musk are wrong - unless we don't build cognitive AI." The long answer will have to wait for the next update.

Acknowledgements

I've said a lot of 'I' in this post, and I'll admit that the idea, research, and work in the earliest stages was performed by me. But no large effort is achieved by individuals alone. My dear wife Penny has had to put up with a house full of servers and many nights when I was up to all hours fiddling with my Erector set robot. My son, Will, has to put up with his old man sometimes working instead of fishing. My first investor, Martin Cooper, who fronted me the hardware $$ when I didn't have the wherewithal to build the first system, is directly responsible for the first "that's strange" moment. The FnnTEK founding team - Dick Dunnivan, Rich Schott, Geoff Cauble, and Ned Franks - provided the wisdom, time, money, and contacts that bought enough of my time to relieve me of a day-to-day job and actually do the research that turned a lab curiosity into a network security appliance; it couldn't have happened without you. Lastly, my 'guys': Mike McDargh, who has set me straight when my programming has gone astray and has sped up our code so much that it's useful and not a lab trick, and Mick Hart, who brings years of experience, math that greatly exceeds my own, and a practicality of approach that I've valued since our days at BellSouth in the mid 90s. While they 'work' for me, I've always wondered why you would hire someone who is not better than you are. These guys are better than I am at every task set before us. The ongoing effort will succeed on the strength of their efforts.

--

William Sadler, CEO, FnnTEK, Inc.
