Friday, April 12, 2013

The singularity that matters

If you have been following certain developments over the last half-century or so, you will have seen a pattern. The pattern relates to computing, and to the capabilities of electronic computers in particular.

The pattern began with the work of Alonzo Church and Alan Turing, who independently published the conceptual groundings for modern computing before electronic computing systems were developed. In that era, a "computer" was considered to be a person. "Computer" was a job title, like "forester" or "shop-keeper." "Computing" served as the basis for a technical career, one devoted for the most part to adding up numbers, without a machine to assist in the process; there were some mechanical devices for very big jobs. In our day, such a work description may seem odd. Nonetheless, it is an artifact of an earlier time, and much of what we do now to earn a living will surely seem strange to our own descendants.

The idea brought forth by Church and Turing, at Princeton and Cambridge respectively, was couched in the complex, arcane languages of mathematics and logic. The concept is simple, though: as long as a logical process can loop back through itself, repeating as many times as needed, it can represent the kind of reasoning and decision-making that we as humans carry out. Such repetition allows for a cycle of reasoning, error-correcting, and learning. In this way, computers could be used to model much of human behavior.
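
To make the notion of repetition concrete, here is a minimal sketch in Python (my own illustration, not anything Church or Turing actually wrote): a single loop that proposes an answer, checks its error, and corrects itself, the same propose-check-correct cycle described above.

    # Guess an answer, measure the error, correct the guess, and repeat.
    # Here the "reasoning" is refining an estimate of a square root.
    def refine_square_root(target, guess=1.0, tolerance=1e-9):
        while abs(guess * guess - target) > tolerance:  # check the error
            guess = (guess + target / guess) / 2.0      # correct the guess
        return guess

    print(refine_square_root(2.0))  # prints approximately 1.4142135...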

During and after World War II, when computing machines were brought into use, these ideas showed great promise. Indeed, they have been embraced by all sectors of society in incremental steps. Computerization of computational tasks started slowly, with the famous "glass temples" of mainframe systems owned by large organizations. Smaller but still powerful systems became available in many stages, until the world is now awash with computing devices of one kind or another, a testament to their usefulness for many things.

The expected culmination of the development mentioned earlier, the one begun by the works of Church and Turing, is referred to as the "Singularity". Computer scientists make the claim that computing devices are eventually going to take over the task of thinking, releasing us from much of this function, once they demonstrate their cognitive, or thought-producing, superiority over humans. Looking forward to the Singularity has been a tradition among many computer scientists since Turing himself mentioned the possibility of such a development. Computers are not only judged to be superior at collating, sorting, and facilitating communications on a large scale, a point that is not in dispute. Their potential, according to proponents of the eventual Singularity, is to take over even the production of new thoughts, going where we have not gone.

Whether or not the career of being a human "computer" was fulfilling, we are not going back there, to be sure. Much has been written about a shift in reasoning power from people to machines, a theme that also runs through many artistic works, movies, and literature; the Singularity is a staple of much science fiction. Similar to predictions of the end of the world, there have been many forecasts of the Singularity, of when it will come and what its eventual implications will be. Concern for and promotion of the Singularity have been the basis of much federal research and development funding, particularly in the defense arena. If the end of the world -- or at least of someone's version of the world -- is to be ushered in by computers with unbounded power, at least we can rest assured that they will be ours. Indeed, some Singularity predictions are wrapped up in catastrophic finality, the end of the world as brought about by rogue computing devices of various kinds. In such fictional accounts, machines often act contrary to the interests of their creators once they establish a level of superiority in thought and in control. By these accounts, a tragedy faces humanity to the degree that we are not ready for the Singularity.

It is interesting, of course, that government and other interests are almost frantically working to bring the Singularity about in spite of such risks.

Singularity predictions, many of which are long past due, tend to recede ever further over the horizon. While the Singularity was considered imminent, within only a year or two, from the 1950s through the 1970s, in the 1980s and beyond predictions pushed the date ever further into the future. By the end of the twentieth century, predictions had been extended to 2050 or so. In our day, it is difficult to know when the Singularity is expected, as predictions are not so often given with associated dates. The temptation for Singularity prophets to make explicit predictions is surely there, but with popular knowledge being as ubiquitous as it is in our time, it is surely more difficult to back out of predictions that clearly did not happen.

Nonetheless, the implications of the Singularity are presented as increasingly stark and frightening, even as the predicted dates extend over the horizon or disappear altogether. This is not to say that automation is not inherently beneficial, or that some aspects of intelligence as a characteristic of computing devices are not available and desirable. The problem is the idea that computers will out-think us. Proponents of artificial intelligence say that we are creating machines that are inherently, evolutionarily superior to us. As a result, we will become, relatively speaking, stupid.

As can readily be discerned, there is ample evidence that the human race does not need a Singularity to behave stupidly. Individually and collectively, we can generate more than a few irrational thoughts and counterproductive behaviors. Funding an impending Singularity would stand alongside other well-documented acts of insanity of which we are aware.

Rather than trying to build machines to out-think us, couldn't we concentrate on leveraging the power of computers to use existing knowledge in better ways? We have pretty good brains. We have stores of knowledge in various forms that lie unused, to the detriment of all of us. Why don't we work to utilize, if not maximize, the fruits of human creative output and thought? In this vein, let us consider another potential form of singularity. How about a singularity in which all of the best knowledge, supported and guided by a viable flow of data, was available for our evaluation and use? What if an idea, once documented and verified, were immediately available when it was needed?

By this, I don't mean just the knowledge that happens to be available at a particular time and place. That wouldn't be much of a singularity, now, would it? We should at least take a page from the Singularity-ists: we should think big. Why not a form of singularity where the knowledge rushes to the scene once the context of a problem or situation presents itself? In the impending "Internet of things", data will be available from many new sources. Health is an important part of this. What if you were to get a blood test, or weigh yourself, or order a meal at a restaurant, each an event with potentially important consequences? Would you want to do the right thing, the smart thing, with the results? If we are truly able to arrange a singularity of knowledge of this kind, such knowledge would also incorporate the best-tasting, most desirable options, given your condition and preferences. Now we are talking! Taste THAT ice cream (this will make sense a little later).

Is this possible? Our message is that it is. Would it be the "death" of commerce? Yes, much of it, as there is a great deal of profiteering going on. There will be substantial opportunities for purveyors of the "good stuff", however. Commerce is based on providing "goods" and services, not "bads" and services. Disease, for example, is bad; there is nothing good about it. Knowledge can and will get rid of it.

Now, of course, the question arises as to whether the Singularity that such computer scientists and other futurists are so rhapsodic about can or will occur. Computers did, after all, just do well in the Jeopardy challenge. I do not have the energy or time to take on that question right now, but I have one observation. About ten years ago I was at an artificial intelligence conference, a defense-sponsored affair, where exhibitors asked me to type a question into one of their systems. I typed, "Is the ice cream good?" Several of the people were eating ice cream at the time. The exhibitors dutifully told me that the machine could not taste the ice cream. At the time, and even now, I thought the response was more than a little condescending.

I happen to know that there are electronic taste and smell sensors on the market that are more than able to distinguish the chemical and sensory characteristics of basically anything, including ice cream. I think that is beside the point, however. I find it hard to believe that computers will be able to replace us in the thinking department as long as it is our senses and our priorities that hold sway. Can we at least work on achieving the singularity of which I write before the Singularity, if we need such a thing at all?
