Self Organization

To me, AI self-organization is a kind of spontaneous order that appears to emerge from random networks of simple neurons. Capturing this magic is a primary aim of AI these days, and progress, although plagued by stops and starts over the years, has always moved steadily ahead.

Reflexes, the senses and awareness can all be distilled down to a simple kind of reinforcement, or strengthening, between neurons: electric charge passing along one neuron's axon, across its synapses and into the dendrites of the next. The flow of current is directional, and a neuron in some sense behaves like a simple nonlinear switch, albeit one with thousands of independent output terminals that feed forward. This view gave rise to backpropagation, radial basis functions, PDF and a myriad of other feed-forward learning paradigms in the 80s.
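That single-neuron picture is easy to sketch in code. The snippet below is only a toy illustration, not any particular library's training routine: a sigmoid neuron whose incoming weights get strengthened toward a target output. The function names, learning rate and input pattern are all my own invention.

```python
import numpy as np

def sigmoid(x):
    """Nonlinear 'switch' used as the neuron's transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_forward(inputs, weights, bias):
    """One neuron: a weighted sum of incoming signals pushed through a nonlinearity.
    Its single activation can then fan out to many downstream neurons."""
    return sigmoid(np.dot(weights, inputs) + bias)

def strengthen(weights, bias, inputs, target, lr=0.1):
    """Crude reinforcement: nudge the weights so the output moves toward the target,
    i.e. connections carrying useful signals get stronger."""
    out = neuron_forward(inputs, weights, bias)
    grad = (target - out) * out * (1.0 - out)   # error scaled by the sigmoid's slope
    return weights + lr * grad * inputs, bias + lr * grad

# Toy usage: teach one neuron to fire on a particular input pattern.
rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
pattern = np.array([1.0, 0.0, 1.0])
for _ in range(1000):
    w, b = strengthen(w, b, pattern, target=1.0)
print(round(neuron_forward(pattern, w, b), 3))   # close to 1.0 after training
```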

A limitation was that the software systems which trained them tended to use only a few layers, usually one input, one hidden and one output layer, on the assumption that more complex networks could be factored into these simpler constructs. The deep learning systems of more recent years seem an attempt to stack such systems so that intermediate results can be learned, yielding a more capable overall system.
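To make the layering point concrete, here is another small sketch, again purely illustrative, assuming numpy, tanh units and arbitrary layer sizes of my own choosing: a 'deep' network is, structurally, nothing more than extra hidden layers threaded between the same input and output.

```python
import numpy as np

def tanh_layer(x, W, b):
    """One fully connected layer followed by a nonlinearity."""
    return np.tanh(W @ x + b)

def forward(x, layers):
    """Push an input through an arbitrary stack of layers.
    A classic 80s network passes a single hidden entry here; a deep network
    simply passes many, letting intermediate representations be learned."""
    for W, b in layers:
        x = tanh_layer(x, W, b)
    return x

rng = np.random.default_rng(1)
def layer(n_in, n_out):
    return rng.normal(scale=0.5, size=(n_out, n_in)), np.zeros(n_out)

shallow = [layer(4, 8), layer(8, 2)]                            # input -> hidden -> output
deep    = [layer(4, 8), layer(8, 8), layer(8, 8), layer(8, 2)]  # extra hidden layers

x = rng.normal(size=4)
print(forward(x, shallow))
print(forward(x, deep))
```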

Software has always had the ability to 'self modify', a fact usually discovered by accident in my experience, after a wild bit of code wrote data into a protected area from which instructions were being prefetched. Sometimes the op-codes and operands read after such stray writes produced interesting behavior, but more often than not the machine simply vectored to an interrupt, not unlike the intended effect of malware and viruses that overflow an input buffer.

DNA also seems able to self-modify its encoding, as described by 'jumping genes' that can change position on a chromosome through successive generations, thereby giving rise to different characteristics within an organism.
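A toy simulation makes the idea vivid, though it is in no way real biology: treat a chromosome as a list of genes and let one marked element excise itself and re-insert somewhere else each generation. All the names below are made up.

```python
import random

def jump(chromosome, transposon, rng):
    """Toy 'jumping gene': excise the transposable element and
    re-insert it at a random position in the chromosome."""
    genes = [g for g in chromosome if g != transposon]
    genes.insert(rng.randrange(len(genes) + 1), transposon)
    return genes

rng = random.Random(42)
chrom = ["promoter", "T", "geneA", "geneB", "geneC"]   # "T" marks the transposon
for generation in range(5):
    chrom = jump(chrom, "T", rng)
    print(generation, chrom)   # where "T" lands can change which gene it sits next to
```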

These types of systems become uber-complex when compared to a strictly deterministic set of instructions, but therein lies their great potential. Fortunately software is easy to change and can emulate some of the complexity of such systems. Given the simple statistics that govern the transfer functions of neurons, the key factors to me are the malleability and sheer size of the networks that can be emulated on computers. The limiting factors become controlling the system so it doesn't just produce mush, and feeding the output back into the input so that temporal and other recurrent properties can emerge.

There is a Copernicus moment in all of this when, after several generations of successive layers of neural networks have been produced to fit a set of learning data, you realize humans will never have the ability to glean insight into why a network of such complexity has converged in the particular fashion it has. Nor would the same set of inputs produce an identical network were the training set run through again, not unlike each person having their own unique perception of the world based on their individual experience and wiring. Self-organizing systems are incredible and incredibly complex.
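As a closing sketch of the feedback point above, here is a minimal recurrent loop, with untrained random weights and names of my own choosing, whose state at each step depends on its own previous output as well as on the new input.

```python
import numpy as np

def recurrent_step(x, h, W_in, W_rec, b):
    """One time step: the new activation depends on the fresh input *and*
    on the network's own previous output fed back in."""
    return np.tanh(W_in @ x + W_rec @ h + b)

rng = np.random.default_rng(2)
n_in, n_h = 3, 5
W_in  = rng.normal(scale=0.4, size=(n_h, n_in))
W_rec = rng.normal(scale=0.4, size=(n_h, n_h))   # the feedback path
b = np.zeros(n_h)

h = np.zeros(n_h)                                # internal state starts empty
for t in range(10):
    x = rng.normal(size=n_in)                    # a stream of inputs over time
    h = recurrent_step(x, h, W_in, W_rec, b)
print(h)   # the state now reflects the whole input history, not just the last input
```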
