Review of Kurzweil: The Singularity is Near


Ray Kurzweil: “The Singularity is Near: When Humans Transcend Biology”, 2005

Ray Kurzweil is an optimist. Actually, it’s a lot more than that: Ray Kurzweil is extravagantly, exponentially optimistic about progress over the next century or two. And he has some good reasons for optimism, extrapolating from the growth in information technologies of the past century. Reading the book raises three central questions about its arguments: do Kurzweil’s extrapolations faithfully represent reality? Do they really extend beyond information technology to our control of the material world around us? And how can he be so sure the predicted doubly exponential progress will be good for us?

New products typically follow a logistic (“S”) curve in market acceptance, accelerating slowly at first, then doubling on a regular basis, then slowing down near market saturation. A similar “S” curve seems to apply to improvements within a given technology (“paradigms” in Kurzweil’s description) such as vacuum tubes or transistors: capability of the technology grew at an exponential rate at first, then slowed as the technology reached practical limits where further development was too costly. Kurzweil remarks on this typical “S”-curve behavior for information technology paradigms, but then notes that, so far, one paradigm has shifted smoothly to another, for example from transistors to integrated circuits, to yield not just continuing exponential growth, but growth on a faster exponential than the previous paradigm. Kurzweil peppers the book with dozens of graphs intended to demonstrate various patterns of exponential growth. Kurzweil’s prediction is that, in the next decade or so, integrated circuits will be replaced by a new computing hardware paradigm with even faster growth rates, and that this is in fact a “law” that will hold true farther and farther into the future – “the Law of Accelerating Returns”.

According to Kurzweil, integrated circuits are already the fifth paradigm in the exponential growth of computing, starting with the electromechanical punch-card and related devices of the early 1900’s. The resulting curve of computing power per unit cost in the twentieth century definitely demonstrates an accelerating exponent – but it’s not clear to this reviewer that this isn’t just an artefact of comparing very different products. What did “cycles per second” really mean when keypunch operators had to do most of the work, and why should it be comparable with more recent definitions? The curve in Kurzweil’s graph is not nearly so clear without the data from before 1920 – in fact a simple exponential – 10^(0.15 × (years since 1900) − 7) cycles per second per $1000 – also provides a reasonable fit, discounting those pre-1920 values, as the following graph indicates:

The two curves start to diverge significantly after the year 2000, however, so we should very shortly see which is more correct.
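Reading the fit as 10^(0.15 × (years since 1900) − 7) cycles per second per $1000 (the 0.15 coefficient is my reconstruction of the garbled formula; it matches the milestone dates quoted later in this review), the simple exponential can be tabulated directly – a minimal sketch:

```python
def simple_fit(year):
    """Simple-exponential fit for computing power:
    10^(0.15 * (year - 1900) - 7) cycles per second per $1000."""
    return 10 ** (0.15 * (year - 1900) - 7)

# Under this fit, capability per $1000 grows tenfold roughly
# every 1/0.15 ~ 6.7 years:
for year in (1940, 1970, 2000):
    print(year, simple_fit(year))
```

For the year 2000 this gives 10^8 cycles per second per $1000; an accelerating (doubly exponential) curve pulls ahead of these values after that point, which is why the two only diverge visibly in recent data.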

A more complete justification for Kurzweil’s “Law of Accelerating Returns” is in the appendix: the root of the argument is that computational speed (per unit cost) grows with world knowledge; world knowledge accumulates with the amount of computational power available; with all else constant, those linear relationships naturally lead to an exponential growth in world knowledge, since its rate of change grows with its total size. The exponent itself grows because there is an additional factor: the computational power available comes from the product of computational speed per unit cost and the exponential growth in the world economy. It’s a reasonable justification for Kurzweil’s double exponential, but the actual growth of the exponent he calculates for the 20th century is only 1.2% per year, less than half of the observed 2.8% world economic growth for that period. So there’s already some inconsistency here, not commented on by Kurzweil.
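The appendix argument can be checked numerically. Writing W for world knowledge, taking computational speed per unit cost proportional to W and the economy growing as e^(gt), knowledge obeys dW/dt = c·W·e^(gt), whose solution ln W = (c/g)(e^(gt) − 1) is the double exponential. A sketch, with illustrative constants (g = 2.8% matching the economic growth rate quoted above; c is arbitrary):

```python
import math

def knowledge(c, g, t_end, dt=0.001):
    """Euler-integrate dW/dt = c * W * exp(g*t): knowledge grows with
    computation, and computation is (speed per cost ~ W) times
    (economy ~ e^{g t})."""
    W, t = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        W += c * W * math.exp(g * t) * dt
        t += dt
    return W

c, g = 0.01, 0.028                 # illustrative constants
W100 = knowledge(c, g, 100.0)
# Closed form: ln W = (c/g) * (exp(g*t) - 1), i.e. ln W itself
# grows exponentially -- the double exponential.
closed = math.exp((c / g) * (math.exp(g * 100.0) - 1.0))
print(W100, closed)
```

The numerical integration tracks the closed form closely, confirming that the two linear relationships plus an exponentially growing economy do yield a growing exponent – though, as noted above, the exponent Kurzweil actually fits grows much more slowly than the economy, which this simple model cannot explain.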

The reason surely is that the bulk of our knowledge isn’t particularly useful for improving the speed of computers. If Kurzweil’s full argument were correct, even diverting all computational resources to idleness rather than research (so that world knowledge remains constant) would still lead to faster and faster computers over time, a situation that seems unlikely. Moreover, some, like physicist Jonathan Huebner, have argued that, far from entering an era of exponentially greater growth in capabilities, we are actually approaching a new dark age. Huebner claims the rate of technological innovation per person per year peaked a century ago, and that the decline since then, despite high R&D and education funding, has come because developing new technology beyond what’s already been done has become more and more difficult. Kurzweil has his own list of innovations to refute this, but he does not manage to make a convincing case that his “law of accelerating returns” is in any way a necessary consequence of the way the world works; “conjecture of accelerating returns” would be a more accurate term. In comparing linear and exponential behavior in the first chapter, Kurzweil notes that “people tend to overestimate what can be achieved in the short term (because we leave out necessary details)”. There are many details Kurzweil has necessarily skipped over, but no indication that he thinks he might himself be overestimating.

Which of these visions of the future is more accurate should be clear a decade or two down the road. The main distinction between, for instance, Kurzweil’s accelerating returns and a simple continued exponential growth in computing power is the question of when, rather than if, certain milestones are reached. A “human-level” personal computer would require somewhere between 10^16 (for functional emulation) and 10^19 (for neuron-level simulation) cycles per second, for $1000. With Kurzweil’s accelerating returns, that point arrives between 2020 and 2030. If the growth continues only on a simple exponential, we have to wait until after 2050 for human-scale personal computing. Kurzweil fully intends to be alive when his brain can be scanned and uploaded to a simulation of immortality, a motive that perhaps overly encourages him to argue for the earlier date.

Kurzweil sets 2045 as the date for the Singularity itself, the point when all this computer power really transforms our capabilities. If the plain exponential law is true instead, the date for comparable
computational capabilities is pushed back to the first decade of the 22nd century. Continued doubly exponential growth in computational power would reach the ultimate computational capacity of our solar system, between 10^70 and 10^80 calculations per second, before 2120, within the natural life-span of some alive today. Growth along the ordinary exponential would not reach such astronomical scales until the 25th century. Depending on whether such vastly enhanced intelligence can find a way around the speed-of-light restriction or not, Kurzweil sees a universe filled with computation possible less than 200 years from today.
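For the plain-exponential case these milestone dates follow directly from inverting the earlier fit, 10^(0.15 × (years since 1900) − 7) cycles per second per $1000 (again, the 0.15 coefficient is my reconstruction). A quick check that the inversion reproduces the dates quoted here:

```python
def arrival_year(log10_cps):
    """Year the simple exponential 10^(0.15*(y - 1900) - 7)
    reaches 10^log10_cps cycles per second per $1000."""
    return 1900 + (log10_cps + 7) / 0.15

# Human-level hardware: 10^16 (functional) to 10^19 (neuron-level)
print(arrival_year(16), arrival_year(19))   # mid-to-late 21st century
# Solar-system limit: 10^70 to 10^80 calculations per second
print(arrival_year(70), arrival_year(80))   # 25th century
```

This gives roughly 2053–2073 for human-level hardware (“after 2050”) and 2413–2480 for the solar-system limit (“the 25th century”), matching the dates in the text.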

This vast growth in computational power is the central element on which much of the remaining speculation of the book rests; it’s an awe-inspiring story, and even if slower growth pushes back some of these dates a century or three, it is still worth understanding where augmenting human intelligence with machines may take us.
Kurzweil’s arguments for the development of real artificial intelligence in the relatively near future, given computational capabilities, seem sound enough. His commentary on the issue of subjectivity (if I get uploaded, which one is “me”?) is
one of the most lucid I have ever seen. But he wastes far too much time on Searle’s Chinese Room argument against AI; just a simple statement that the scale of complexity invalidates the comparison should have been enough.

Kurzweil identifies three related revolutions underway: in genetics, nanotechnology, and robotics (strong AI). These enable the information technology revolution to be extended to the living, material and mental worlds; many wonders are to be expected. In particular, his outline for “brain uploading” depends on nanomachines capable of penetrating the brain and recording patterns there, a rather invasive (but believable) approach. Kurzweil takes Eric Drexler’s side in the “fat fingers” argument about nanotechnology with the late Richard Smalley, but not entirely convincingly – Smalley had a point relating to the energy scales for the variety of molecular processes you would need; the real proof will be in demonstration of true molecular assemblers beyond what biology provides.

At times, Kurzweil’s book veers into millennial apocalypticism, at one point asserting “I am a Singularitarian”, describing an almost religious belief in the ability to be uploaded and live forever, and listing several articles of faith. Kurzweil acknowledges the religious element but asserts that this is different: traditional religion is primarily a rationalization of death, while the Singularity makes death a thing of the past. How will existing religions respond to such notions?

One very serious question is the possibility of threats from these new technologies – every individual will have vast power, beyond anything even nations have today. There is the “gray goo” threat from nanotechnology, as Bill Joy has articulated. Kurzweil acknowledges that, yes, there are dangers; in fact he agrees with Joy in many respects. Unfortunately we will have to keep several steps ahead, with “immune systems” deployed against the threats before they wreak havoc. The most worrisome threat is from Strong AI itself – once they supersede human intelligence, what will prevent them from overcoming any bounds we may have set against harming us? Kurzweil’s main response to this threat is that “they” will be “us”, uploaded and greatly enhanced, so it doesn’t much matter what happens to the old biological world. This is, to say the least, a little unsettling…

In addition to the copious graphical illustrations, Kurzweil adds to the text some imaginary conversations with historical, present, and future persons, including Drexler, Bill Gates, Darwin, and Freud. He seems to have obtained permission from the living for this; sometimes these conversations enlighten, but they seem oddly contrived.

Kurzweil does have a fascinating view of our potential future. Whether near or far, this book is a useful guide to how the world will change at that point where humans transcend biology.


5 thoughts on “Review of Kurzweil: The Singularity is Near”

  1. (Ooopsie! I voted for the story and it zipped out of editing mode and I can’t post an editorial comment anymore!) The first link (essay on space solar power) is busted. Also, I can’t see the graph image. Worth voting for anyway :-) terrific review apsmith!

  2. Sorry about that – apparently the site I had it on didn’t allow remote linking. I’ve relocated the image, should be good now!

  3. My rejoinder to people invoking Searle’s Chinese Room example (regarding whether or not such a machine would be intelligent) is to ask how they are sure that we’re really intelligent. I don’t mean this in an “oh, we’re stupid” way, but in a “what is intelligence, and how do we know we have it” kind of way.

    Perhaps I’ve been overly influenced by Skinner, but I’m not convinced that “intelligence” is a well-defined term. That doesn’t keep me from studying AI, but still…
