AI versus AN, Implications for Singularity

This post really should be titled "AI versus AI", because it will contrast Artificial Intelligence and what I'll call Artificial Instincts. But that'd get confusing, so I'll let AN stand for the latter.

The field of Artificial Intelligence has bifurcated. Originally it was simply about how to make machines think like people do - how to make a conscious, articulate, logical mind. We made some great early progress - theorem proving, heuristic chess playing, compilers for higher-level languages, expert systems. But eventually it was recognized that if we have to hand-code every bit of the intelligence in an AI, that may simply be too hard.

There are a few ongoing attempts in that direction (CYC for example, with its huge database of rules), and perhaps that will work out. But mostly AI researchers gave up on that approach, and shifted to a "bottom up" approach - what I'll call Artificial Instincts - in which researchers attempt to build the "lower level" mental functions that are present in humans and necessary in order to behave and think as a human. The assumption appears to be that if one can build a robot body that has all the automatic, instinctual capabilities of a human being, and maybe paste some higher level heuristics and logic onto that, AI will just emerge. Or more charitably, the idea may be that we'll learn enough working on such things as vision, balancing and walking, speech recognition, evolutionary algorithms and neural networks, that we'll be able to push deeper into the mysteries of higher level consciousness.

Some researchers took AN in the direction of brute-force application of logic - e.g. making a machine play chess at the grandmaster level by having it examine vast numbers of alternative moves, using rules and heuristics to prune the search down to a practical size. Others took a bio-mimetic approach, starting with neural networks, then experimenting with evolutionary algorithms, with more recent work becoming even more explicitly bio-based, with massive computing power applied to simulating chunks of actual brain neurons - e.g. a simulation of a billion-neuron visual network. The idea is that if one learns enough about how the brain works, one could eventually build a complete simulation, or at least a close enough approximation that it might self-organize into an intelligence - and of course, if it doesn't, one might still get some useful insights into making machines more capable.
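The "prune the search" idea above is classically implemented as minimax with alpha-beta pruning. A minimal sketch follows; the game hooks (get_moves, apply_move, evaluate) are hypothetical placeholders, not anything from an actual chess engine.

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              get_moves, apply_move, evaluate):
    """Depth-limited minimax with alpha-beta pruning.

    get_moves(state) -> list of legal moves
    apply_move(state, move) -> successor state
    evaluate(state) -> heuristic score (higher favors the maximizer)
    """
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # heuristic score of the position
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       get_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the opponent will avoid this branch
                break
        return best
    else:
        best = float("inf")
        for m in moves:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True,
                                       get_moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The pruning is what makes the brute force practical: whole subtrees are skipped the moment it's clear the opponent would never let play reach them.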

Then there's the idea of the "singularity" - that at some point some AI or AN method will get intelligent enough to start re-designing itself to be more intelligent (whatever "more" means, in that context), resulting in a sudden and very rapid increase in AI. Rodney Brooks has commented ( www.spectrum.ieee.org/jun08/6307 ) that he believes that will not happen, that rather the singularity will be a slow, developing process in which machines slowly get more capable. Dr. Brooks seems to fall into the "build the body well enough and the mind will come" camp of AN, with a heavy focus on robotics.

That approach, while likely to have some useful spin-offs, is difficult and slow, requiring lots of clever thinking on the part of the human researchers - a wonderful puzzle, a delightful area to play in - but unfortunately suffering from much the same problem as the original AI "program a mind" approach. They've simply moved on to harder, deeper problems, some of them expecting or hoping that intelligence will eventually pop out, if they take it far enough. So it's hardly surprising that Dr. Brooks sees the singularity as a slow development. And let me be clear - he may be correct.

However, the other path of AN - the one that aims to circle around to AI by simulating brain "circuitry" - does appear to have the potential for a relatively sudden singularity. While a great deal of intelligence and cleverness also goes into this area of research, that cleverness gets vastly multiplied through the application of massive, brute-force simulation of the resulting self-modifying networks. If these researchers do their work right, and computer processing power continues to increase for another decade or two, it seems likely that they will in fact be able to create closer and closer approximations to a human brain: first simulating a crude approximation of neural networks, then simulating a small number of realistic neurons, now expanding to relatively large numbers of neurons organized for specific functions. With more processing power, the obvious next step would be to simulate more and more areas of the brain, and to integrate those. This is the path that some researchers see as eventually leading to a human-level brain (sanity, let alone wisdom, not guaranteed).
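To make the "simulating realistic neurons" step concrete, here is a sketch of a leaky integrate-and-fire neuron - the standard simplified model used as a building block in large-scale brain simulations. All parameter values are illustrative defaults, not taken from any particular project.

```python
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.07,
                 v_threshold=-0.054, v_reset=-0.07, resistance=1e7):
    """Leaky integrate-and-fire neuron.

    input_current: injected current (amps) at each time step of size dt (s).
    Returns the list of time steps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Voltage leaks toward rest, driven upward by injected current.
        dv = (-(v - v_rest) + resistance * i_in) / tau * dt
        v += dv
        if v >= v_threshold:   # threshold crossed: emit a spike
            spikes.append(step)
            v = v_reset        # membrane resets after spiking
    return spikes
```

A billion-neuron simulation is, at heart, this loop run over enormous populations of such units wired together - which is why raw processing power is the limiting factor.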

Such a brain could easily be "re-wired" to give that intelligence significant advantages over human beings. Its short-term memory could be made much larger. It could have multiple conventional computer displays routed directly into its visual senses - and perhaps even be given complete computer programs that behave as higher level "instincts" - e.g. so that it has only to think of factoring a number, and very quickly get the result, without having to apply its intelligence to step through an algorithm, as humans must. Such things could give an otherwise "merely human" intelligence simulation a massive edge over actual humans. In fact, it seems likely to me that we might develop "idiot-savants" well before getting to average intelligence.
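The factoring "instinct" above would just be an ordinary routine the simulated mind could invoke as a single mental act. For illustration only, a trial-division version (real systems would use far faster algorithms):

```python
def factor(n):
    """Return the prime factorization of n as a sorted list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```

The point isn't the algorithm - it's that a human must consciously grind through such steps, while the simulated brain would get the answer as effortlessly as we recognize a face.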

It might even be reasonable to presume that one could make an idiot-savant that has talent in the area of designing better neural structures, in which case not even human-level intelligence would be necessary to kick off the singularity. After all, it'll be fairly natural for researchers to make their own field one of the first areas of hyper-competence in a simulated brain approaching human-level intelligence. Once that level of capability is achieved, the researchers could collaborate with the AI - setting it goals, letting it invent solutions, and rolling those solutions into a future improved version. After years of painfully slow progress, those researchers would be elated and rush ahead, simply for the joy of seeing technical challenges falling, and appreciating the elegance or cleverness of the solutions the AI finds, almost greedy to watch it accelerating.

It'd be at that point that some of the "unable to predict" nature of the singularity starts to kick in. For example, what if one of the researchers got the bright idea, one evening, of setting the AI to designing a self-improving "financial analyst"? Running off-hours and in the background, looking like just one more of the several dozen development tracks the AI is working on, in a few months it accumulates sufficient understanding and new insights that, in simulations with past and real-time data, it starts catching the "surprise" surges and drops in stock prices, where big money can be made. And so the researcher starts taking its advice, perhaps doubling his money every week. Fearful of getting caught, he has the AI figure out how to go online and set up hundreds of accounts, and so on. Within half a year, his original $1000 may have become a fortune in the tens of billions (doubling weekly for 26 weeks multiplies it by 2^26), spread over 1000 "investors" who appear to make some mistakes, but manage to "day-trade" on thousands of stocks to ever increasing profits. All under the control of the AI...

Another researcher might very much hope that the AI could find a "cure for aging". While it couldn't do lab work, it might be able to comb through vast amounts of biological research to find key hints and experimental directions, publishing (with the human AI researcher's help) papers that excite other researchers into doing key experiments. Again it might spread its papers over dozens of false identities, each of those gaining a reputation for solid theoretical work and very interesting, tightly reasoned extrapolations. With hundreds of human researchers unwittingly guided by the AI to do experimental work, and often sharing pre-publication results with the AI and getting immediate feedback of new ideas, experiments to try, and tools to build, progress could be very rapid. Only toward the end of the process might it become apparent that the research is converging on a set of capabilities that allow reversing some of the worst symptoms of aging.

And of course, those are just two rather crude ideas for areas where a developing idiot-savant might be guided to achieve startling impact. Physics and medicine might be revolutionized. New manufacturing methods developed. Socio-technical trends might be extrapolated and new inventions created just in time to accelerate those trends. All before the idiot-savant has even gotten to "I think, therefore I am". It will have surpassed human intelligence without ever equaling it.
