
The Singularity is Near by Ray Kurzweil

David's review

did not like it
bookshelves: embarrassed-to-own, intellectual-con-artist-at-work, utter-dreck, read-in-2009

FUTURE SCHLOCK

(If you loved "Future Shock", and "The Celestine Prophecy" changed your life, this is the book for you)

But, wait! All those 5-star reviews gotta count for something, right? Well, let's take a look.

"We will have the requisite hardware to emulate human intelligence with supercomputers by the end of this decade."

Really, Ray. How's that coming along? You've still got a year, two if we're charitable. But, even despite the spectacular vagueness of the claim, things are hardly looking good.

"For information technologies, there is a second level of exponential growth: that is, exponential growth in the rate of exponential growth".

A breathtakingly audacious claim. Without a scintilla of evidence provided to justify it. Graphs where the future has been conveniently 'filled in' according to the author's highly selective worldview do not count as evidence, and are nothing more than an embarrassment.
But then, most of the graphs in this book do not bear up under close scrutiny - their function is more cartoon-like. Even Kurzweil's more apparently reasonable claim - that of exponential growth at a constant rate - rests on a pretty selective framing of the question and interpretation of existing data.
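(To pin down what that claim would even mean: ordinary exponential growth has a constant doubling time, while "exponential growth in the rate of exponential growth" requires the doubling time itself to keep shrinking. A toy sketch of the difference, with entirely made-up numbers, purely for illustration:)

```python
# Toy illustration (not from the book): ordinary exponential growth with a
# fixed doubling time versus growth whose doubling time itself keeps
# shrinking ("exponential growth in the rate of exponential growth").
# All numbers are made up.

def exponential(t, doubling_time=2.0):
    # Constant doubling time: the value doubles every `doubling_time` years.
    return 2 ** (t / doubling_time)

def double_exponential(t, initial_doubling=2.0, shrink_rate=0.05, dt=0.01):
    # The doubling time shrinks exponentially, so the exponent accumulates
    # faster and faster (integrated numerically for simplicity).
    exponent = 0.0
    for i in range(int(t / dt)):
        current_doubling = initial_doubling * 2 ** (-shrink_rate * i * dt)
        exponent += dt / current_doubling
    return 2 ** exponent

for year in (10, 20, 30, 40):
    print(year, f"{exponential(year):.3g}", f"{double_exponential(year):.3g}")
```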

"Two machines - or one million machines - can join together to become one and then become separate again. Multiple machines can do both at the same time: become one and separate simultaneously. Humans call this falling in love, but our biological ability to do this is fleeting and unreliable."

Say what now?

From a technical standpoint, as far as biotechnology is concerned (which is the area I am most competent to judge), there's hardly a statement that Kurzweil makes that is not either laughably naive or grossly inaccurate. Assuming that, indeed, drug delivery via nanobots and the engineering of replacement tissue/organs will at some point become reality, Kurzweil's estimate of the relevant timeframe is ludicrously optimistic. A relevant example is the 20 years it took to derive clinical benefit from monoclonal antibodies -- the rate-limiting steps had little to do with computational complexity. So the notion that, in the future, completely real biological, physiological, and ethical constraints will simply melt under the blaze of increased computing power is fundamentally misguided.

From a statistical point of view, things are no great shakes either. His account of biological modeling is such a ridiculous oversimplification it defies credulity. I'd elaborate, but frankly, the whole sorry mess is just starting to irritate me.

Given the density of meaningless, unsubstantiated, and demonstrably false statements in the first few chapters, it's hard to see the point in continuing. If one actually reads carefully what he's saying, and assumes that he is assigning standard, agreed-upon meanings to the words he uses, then several possible reactions seem warranted:

* that sinking feeling that one inhabits a universe that is completely orthogonal to those who gave this a 5-star rating
* heightened skepticism and aversion to Kool-Aid
* bemusement at the gap between Kurzweil's perception of reality and one's own - in particular, the evident moral vacuum in which he "operates", as well as apparent ignorance or indifference to the lot of the vast majority of the planet's inhabitants
* wonder at the sheer monomaniacal gall of the man

Grandiose predictions of the future, the more outlandish the better, appear to have an undiminished appeal for Homo sapiens. For the life of me, I have never been able to figure out why.






191 likes


Reading Progress

January 2, 2009 – Shelved
Started Reading
January 21, 2009 – Finished Reading

Comments Showing 1-44 of 44 (44 new)


message 1: by Manny (new)

Manny I'm an AI person, and about half of me whole-heartedly supports what you're saying here. He does often come across as ludicrously optimistic, indeed quite out of touch with reality.

The other half points out that, even though AI has a terrible history of overhyping itself, the errors are often not as bad as they first appear. People in the 50s did indeed make themselves look stupid when they said that a computer would be the world's best chess player within 10 years. But if you compare them with Dreyfus, who wasted a lot of time arguing that computers would never be able to play Grandmaster-level chess, I know who I think came out looking dumbest. The AI people were off by a factor of at most 10, really nothing very serious.

So when Kurzweil says that the Singularity's going to be here by September 2014, or whatever his latest projection is, sure, he's dreaming. But I don't see why it's so obvious that it won't happen by, say, 2150. If you plot rate of technological progress over the last 50,000 years, is "exponential" really a crazy word to use?

I think his hyper-optimistic projections are largely driven by his hope that the Singularity will get here in time to make him personally immortal. He's very upfront about this. The thought of an immortal Kurzweil is indeed pretty scary.




David I'll actually buy the notion of exponential growth as far as computational ability goes, with the caveat you suggest, that the pace is nowhere near what Kurzweil suggests. That still doesn't get me to believe that the very real biological and physical constraints of the universe will suddenly become irrelevant.

Apart from my intellectual misgivings, there is, for want of a better word, a hubris about Kurzweil's supremely confident utterances that also makes me not trust him at the gut level. One imagines him in a tracksuit, dispensing Kool-Aid ....


message 3: by Manny (new)

Manny Yes, that's just it. Intellectually, his argument that the growth in computational ability is exponential doesn't seem that bad, and it would seem to imply that we will reach a singularity. But he stinks of hubris to such a degree that you can't believe a word he's saying. It's odd.


message 4: by Matthieu (last edited Jun 04, 2012 10:43PM) (new)

Matthieu Hofstadter thinks that Ray-Ray is a joke. I'd have to agree. I've seen this book floating around for years in the local bookstore (I see it in Princeton just as often... I can't escape!) and I always thought that it looked like a bunch of pretentious nonsense. Guess I was right, eh? I agree with Manny that there has been exponential growth in all forms of technology, yet I'm skeptical when it comes to putting a date on these things. I believe the limits, whether they be physical or biological, as David said, will always be in effect.

I'll take my Kool-Aid on the rocks, thanks.

I wish I had seen the keynote speech that Ray-Ray gave with Obama.


David I've read three of Hofstadter's books (GEB, Metamagical Themas, and the totally awesome 'Le Ton Beau de Marot'); he's an infinitely more interesting writer than Kurzweil (though that might be considered damning with faint praise, which is not my intention).

Usage question: in my review, I said that Kurzweil's simplistic take on biological modeling 'defies credulity'. Is that correct, or should I have written 'defies credibility'? Or maybe, since I can't figure out which seems more appropriate, reworded things altogether?


message 6: by Matthieu (last edited Jan 22, 2009 07:47PM) (new)

Matthieu Perhaps rewording things would be the best bet. Ha.


David Indeed. One wouldn't want to risk public ridicule by Steven Pinker in the pages of the New York Times, after all.


message 8: by Joshua Nomen-Mutatio (last edited Nov 07, 2009 09:15PM) (new) - added it

Joshua Nomen-Mutatio I think I'm completely with Manny on post number one. Well put, Mr. Manny.

Though I haven't actually read this book, I'm not willing to write Kurzweil off as a total kook yet. I'm pretty familiar with the audacity of his claims. I saw him in a debate once and he seemed rather formidable and level-headed to me. But he also wasn't getting fully into all the singularity stuff during it, mostly just focusing on AI.

In any case, as much as I generally respect your opinion, David, I think I still have a strange attraction to checking this book out, despite your wonderfully righteous one-star shredding.

I feel like wild speculation has its place, as long as it's treated as such, nothing more, nothing less. Audacious hypotheses have their well-earned place in science. The problem is when people conflate wild hypotheses with well-worn theories and other things more solid like that.


Joshua Nomen-Mutatio Oh, and I love Hofstadter, too. I need to read more of his work.


Joshua Nomen-Mutatio The other thing that interests me is how candid Kurzweil is about how much he hopes "The Great Migration" or whatever (brains becoming digitized, electronic immortality and all that jazz) takes place before he dies. He's desperately plotting that time line. But I still think it's sort of fun to think about some of the predictions he makes. I've really only read outlines of it though, so maybe it really is certifiably unhinged, I'm not sure.


Joshua Nomen-Mutatio If anyone is interested: go to my "review" of Hofstadter's I Am A Strange Loop and there's a link to a cool interview with him about said book on this Bay Area radio program run by these charismatic old philosophy professors out of Stanford. There's some good episodes of the show in the archives section of their website which the link takes you to.


message 12: by Joshua Nomen-Mutatio (last edited Nov 08, 2009 08:52AM) (new) - added it

Joshua Nomen-Mutatio Link re: message 11/I Am A Strange Loop:

http://www.goodreads.com/review/show/...


message 13: by David (new) - rated it 1 star

David I love that philosophy program out of Stanford. It absolutely seems like it should make for terrible radio, but week after week it turns out to be fascinating.

My favorite Hofstadter book is the one about translation, though I do like his other books as well. Thanks for the link, and the comments, MFSO.


message 14: by Joshua Nomen-Mutatio (last edited Nov 08, 2009 10:27AM) (new) - added it

Joshua Nomen-Mutatio David wrote: "I love that philosophy program out of Stanford. It absolutely seems like it should make for terrible radio, but week after week it turns out to be fascinating."

Yeah, absolutely. I went through the archives when I was pushing papers at an office job where I could listen to the radio while working. I haven't listened to any of the new ones for the last couple of months, but I think I'll go play catch up with it later today.

Did you ever listen to either of the episodes about film? They were really entertaining and interesting. Especially the one without a guest where the two hosts just discussed their favorite films and some of the neat little "philosophical ideas" within them. They somehow manage to avoid being pretentious or overly-precious or boringly self-deprecating. They definitely transcend the stereotype of grumpy, humorless college professors.


message 15: by [deleted user] (new)

Very elegant take-down.

See, maybe I'm repeating something in the thread, and I also have zero background in computers - although I do hang around with lots of programmers, so I can bullshit with actual jargon - but one of the problems with Kurzweil's stuff about the exponential growth of computing power is that he makes this really insane leap from Moore's Law. Moore's Law states that the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. First off, this isn't a law in the scientific sense; it's an observation. Lots of computing ability is tied to processing power, sure, but not all, and certainly that doesn't have anything to do with nanites getting magicked into existence. Uh, and now I have come to the end of my jargon.


message 16: by Sroek (new)

Sroek He says we have the hardware. That doesn't mean we have the funding or the teamwork required to create it.


message 17: by James (last edited May 20, 2011 04:08PM) (new) - rated it 5 stars

James Carroll ""We will have the requisite hardware to emulate human intelligence with supercomputers by the end of this decade."

"Really, Ray. How's that coming along? You've still got a year, two if we're charitable. But, even despite the spectacular vagueness of the claim, things are hardly looking good."

Some of Kurzweil's predictions may be overly optimistic, but speaking as a computer scientist, I think you are missing the boat by pointing at this one, since so far he has been right on the money. This is the prediction that has matched reality best so far.

How well are we doing? He was talking about raw computing power, and that power has indeed grown along his proposed trends. Supercomputers already exist today that can arguably match the computing power necessary to simulate the functionality of the human brain (10^14-10^16 cps). Note that he gives two numbers for a brain simulation: one is the power needed to simulate the function of the brain (10^14-10^16 cps); the other is the power needed to simulate the entire brain, every neuron and dendrite, inefficiencies and all (10^19 cps). We have hit the first number right on schedule.

See http://top500.org/list/2010/11/100: the fastest machine runs at 2,566 teraflops, which is 2.566x10^15 FLOPS. There are more than 10 calculations for every floating-point operation, so that means we now have a computer that can do at least 2.566x10^16 CPS.

What does that mean? It means that we have passed his first number exactly on schedule. He predicted that we would cross the second number, 10^19 CPS, much later, and we are still uncannily on track to do so. If anything, we are ahead of schedule. Here is the trend graph I produced for supercomputer ability: https://picasaweb.google.com/jlcarrol... Remarkably stable.

He never claimed that we would actually have a full brain simulation by this point. Far from it. The point at which he claimed we would pass the raw computational power and the point at which he claims we would actually create the simulation are quite different. He claims the latter won't happen until the late 2030s. My simple fit to the trend says that by 2035 it will cost $1,000 to buy 10^19 CPS (assuming the trend continues), which is enough to simulate the entire brain, not just its functionality. That prediction was surprisingly similar to Kurzweil's, and is still on track.

But how long before we create the right simulation to run on these super computers that we already have?

Notice this article:

http://www.dailymail.co.uk/sciencetec...

Now notice that they are predicting full brain simulation success BEFORE Kurzweil suggested that it would happen.

So if anything, we are AHEAD of his predicted schedule, at least in this one limited area.
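(For anyone who wants to check the arithmetic above, here is a quick sketch. The ">10 calculations per floating-point operation" conversion and the brain estimates are the comment's, and Kurzweil's, assumptions, and the doubling time is an illustrative guess rather than a fitted value:)

```python
# Rough sketch of the back-of-the-envelope arithmetic in the comment above.
# The calculations-per-flop factor and the brain estimates (1e14-1e16 cps
# functional, 1e19 cps whole-brain) are assumptions from the comment /
# Kurzweil; the doubling time is an illustrative guess.

import math

top_tflops_nov_2010 = 2566.0            # fastest TOP500 machine, Nov 2010 list
flops = top_tflops_nov_2010 * 1e12      # = 2.566e15 floating-point ops/sec
cps = 10 * flops                        # assumed 10 calculations per flop
print(f"estimated cps: {cps:.3e}")      # ~2.6e16, past the 1e14-1e16 band

# If supercomputer performance keeps doubling every ~1.2 years (assumed),
# when would the raw 1e19 cps whole-brain figure be crossed?
doubling_years = 1.2
years = math.log2(1e19 / cps) * doubling_years
print(f"1e19 cps crossed around {2010 + years:.0f}")  # ~2020 on these assumptions
```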


message 18: by Gendou (last edited Jun 20, 2011 02:31AM) (new) - rated it 1 star

Gendou James, the problem is that you're "assuming that the trend continues". It doesn't. Moore's law is running out. Even if we can simulate an entire human brain in silicon, it will remain forever cheaper to feed a human being. Silicon is an energy hog and isn't amenable to parallelism. Unlike Ray, I don't believe a super-intelligent computer will be able to circumvent the laws of physics to increase its computing resources, or re-program its own source code. So I don't believe in Ray's vision of a singularity.

As for his short term predictions, did you read what he wrote about the year 2010? He said we'd have a natural-language interface search engine hooked up to our eyeglasses by now! Not even close...


message 19: by James (last edited Mar 06, 2012 09:32AM) (new) - rated it 5 stars

James Carroll I admit that the trends might not continue into the future, but they have continued so far. Your comment seems to indicate that you believe that the performance improvement has already petered out, and that Moore's law is essentially dead. This assumption is not validated by the data.

It is true that transistors are not getting smaller (*EDIT: this is wrong, and I apparently wasn't thinking clearly for some reason; they are still getting smaller, just not faster in terms of clock speed, because of heat concerns... which I knew, and I'm not sure why I said this. As a side note, we should hit the quantum limits of transistor shrinking by about 2020, at which point they will stop getting any smaller, which may have been what I was thinking. But this won't necessarily stop computer performance improvements; they will just have to shift to other hardware like 3D chips, and to other domains like power consumption and parallel performance, as I discuss later.) and clock speeds are not getting faster. However, computer performance has continued to expand on an exponential trend by leveraging parallelism. How do I justify this conclusion? If you look at the top500.org results since 1993, the performance of the world's supercomputers has continued exponentially, and is now doubling approximately every 1.03 years, with absolutely no sign that things are slowing down; if anything, they are speeding up.

https://picasaweb.google.com/jlcarrol...

The raw data indicates that this touted end of computer performance improvement is not materializing.

You claimed "Even if we can simulate an entire human brain in silicon, it will remain forever cheaper to feed a human being."

This misses the fundamental idea of paradigm shifts. There is no reason to assume that future computers will use silicon. The human brain itself is a computer, and is thus an existence proof that cheaper, more energy-efficient, more powerful computation is possible, which means that in principle there is no reason to suppose we can't create a computer that works more like the human brain, with the same or better efficiencies. In fact, there is no reason to think that the human brain is optimally efficient. Therefore, there is no reason to think that we can't eventually engineer a computer that will one day be cheaper than "feeding a human" for the amount of "work" we get out of the computer.

But even without major shifts in how we build computers, your assertion that human brains will always remain cheaper simply is not true. Price-performance of our silicon computers has also been increasing exponentially: https://picasaweb.google.com/jlcarrol... Again, if anything, the exponential cost-halving rate has been increasing, not decreasing.

As has the number of computations per watt of power: https://picasaweb.google.com/jlcarrol... At this rate, our supercomputers should become as power-efficient as the human brain by around 2040. And we have not yet been focusing on power performance, something that will likely change in the future, meaning that this particular trend is likely to speed up, not slow down.

Storage price performance has also been increasing exponentially: https://picasaweb.google.com/jlcarrol...

You can argue that the exponential trends will not continue, but you can't argue that the fact that we hit the wall of clock speeds in 2004 has already slowed their increase. It has not. We have simply moved to new paradigms of performance improvements.
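(A sketch of how the power-efficiency extrapolation above works. The brain's ~10^16 cps on ~20 watts, the machine's ~8 MW draw, and the doubling time for computations per watt are all rough, illustrative assumptions, not measurements:)

```python
# Sketch of the power-efficiency extrapolation above; every number here is
# a ballpark assumption used only to show the shape of the argument.

import math

brain_cps_per_watt = 1e16 / 20           # ~5e14 cps/W (assumed brain figures)
machine_cps_per_watt = 2.6e16 / 8e6      # ~3e9 cps/W (assumed ~8 MW machine)
doubling_years = 1.7                     # assumed doubling time for cps/W

gap = brain_cps_per_watt / machine_cps_per_watt
years = math.log2(gap) * doubling_years
print(f"parity with the brain around {2011 + years:.0f}")  # ~2040 on these assumptions
```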

You also wrote: "As for his short term predictions, did you read what he wrote about the year 2010? He said we'd have a natural-language interface search engine hooked up to our eyeglasses by now! Not even close."

Apparently, you haven't been paying much attention. Watson recently demonstrated the type of NLP performance that Kurzweil suggested we would have by this point (right on time), and augmented reality glasses have already been demonstrated and work rather well (again right on time). Here's a review of the technology: http://www.technologyreview.com/compu... If you want to buy some, go here: http://www.google.com/search?rlz=1C1C.... The technology is not yet ubiquitous or cheap, but it is indeed here, right about on time. Kurzweil seems to have been rather close in his predictions, even on the ones you singled out to mock! That's a pretty impressive track record so far.


message 20: by Nani (new) - rated it 4 stars

Nani Thank you James! Also want to point out we do have natural language search engines (Siri on the iPhone for one) and yes, they will soon be linked to our eyeglasses! Here is a current link about augmented reality glasses:
http://www.nytimes.com/2012/02/23/tec...


message 21: by James (last edited Mar 06, 2012 09:40AM) (new) - rated it 5 stars

James Carroll Thanks Nani, good points. Siri (or anything like her) wasn't available just a few months ago when I wrote my reply above. It is interesting to note how time is again vindicating the predictions. His timing wasn't perfect, but it was impressively close. Some things took more time than he thought, some less.

Augmented reality glasses happened about when he said that they would, but they appear so far to be taking longer to go mainstream than he thought. This may just be because people don't really want them as much as he thought that they would. This is what Michio Kaku calls "the caveman principle", the idea that technology doesn't progress according to what we can create, but rather according to what people really want. I think that demand for augmented reality glasses will eventually rise. But it will take some time for content and ubiquitous wi-fi to catch up before there is a huge consumer demand.

In this case it appears to me that Kurzweil was exactly right in his prediction of when the technology would be available, but just a few years off in his predictions of consumer demand.

Still pretty impressive.


message 22: by David (new) - rated it 1 star

David Yeah. But the book is still rubbish.


James Carroll Well that was a convincing argument.


Chrisf It's fascinating to read through some of these comments and reflect on how far we've progressed since the book was published. In the past year alone we've seen IBM's Watson and Google's self-driving car prototype both exceed the capabilities of human experts in their own domains. IBM's Sequoia computer, scheduled for completion later this year, will be capable of 20 petaflops (twice Kurzweil's estimate of the raw computational ability of the human brain). What will the next 5 years hold? Kurzweil is dead right about one thing: humans are simply not well equipped to understand the implications of exponential change.


Richie de Almeida You took the words out of my mouth. Seriously. This comment was about to be much longer, but I realized I was just repeating what you already said.


message 26: by David (new) - rated it 1 star

David To James, in message 23. Oops, you got me there. Sorry for my glibness.

A few years on, I am less inclined to get exercised about Kurzweil's technical claims; those who argue that he is off mainly in being optimistic about the timing make that case persuasively. But the absence of any apparent moral dimension, or consideration, in his arguments still bothers me enormously. In using the term "hubris", this is what I have in mind. At Genentech, I worked on clinical protocols for many years, and the ethics of something as simple as a therapeutic intervention trial (treating patients with an untested candidate drug) are already pretty complex; in such areas as tissue engineering and prosthetic devices, let alone neuronal implants, the ethical questions multiply like rabbits. Not to mention the social engineering questions of how a civilized society should allocate its healthcare resources. Kurzweil seems disturbingly oblivious to all these issues, as if blinded by his own selfish drive for immortality.


message 27: by Peggy (new) - added it

Peggy Salvatore Actually, by 2011 we hit that target, if you count computer algorithms that can mimic neural pathways. Close enough. It is interesting to read these comments from 2009 and 2011 discounting Kurzweil, who actually has access to the kind of research that probably gave him a clue about what is possible by when.


message 28: by Lydia (new)

Lydia Kius Seriously? This is not a review. This is a tunnel-visioned hate letter.


message 29: by David (new) - rated it 1 star

David That Kool-Aid must be mighty tasty, eh, Lydia?


Nilesh Jasani Nanobots were also announced this week. He was off on timing, but otherwise, on his predictions about AI, gene splicing/editing, nanobots, a likely cure for conventional blindness in a few years, etc., he was quite ahead in his forecasting.


message 31: by Matt (new)

Matt Kurzweil is a Premillennialist Dispensationalist with the theology filed off. He's waiting for the rapture so he won't have to die, and like everyone who has ever predicted the eschaton, he predicts it is only a short time away.

Kurzweil is reasonably good at predicting technology 10-15 years out, usually because that's about how long it takes to go from workable concept to market. "This thing is in an MIT lab (or CMU, or Caltech, or wherever) and it's going to probably be a big thing once commercialized..." is not a terribly big prediction.

But we are still waiting on that voice-activated typewriter he promised to build by 1995, and people are still texting rather than whispering into their phones. The problem Ray has is that the more complex the problem, and the less well he understands it, the more he's prone to hand-wave away those complexities. The further away his predictions get, the more ironically optimistic they actually are. He's usually no more than 5 to 15 years wrong about his near-term predictions, but the further out he goes, the more you get the sense that (to use his favorite word) he's "exponentially" wrong.

One thing I'm not at all sold on is that the overall pace of technological change is exponential. If you look at individual technologies, what you tend to find is an initial period of rapid improvement and growth, but Kurzweil has his curves backwards. Instead of exponentially accelerating, new technologies begin with seemingly asymptotic velocity, and then rates of improvement exponentially decay until at some point they flat-line as they hit hard limits set by the physical laws of the universe and the practical costs of the technology. Then you get some arbitrarily long period of experimentation before a replacement technology arrives and opens up a new period of accelerated growth.

This alternate reading of Moore's Law (which, let's be serious, is not a law) explains the data better than Kurzweil's own reading. Moore himself revised the expected rate of change downward twice, from his initial estimate of doubling every year to doubling every two years. If you look at the rate of doubling in transistor density and cost, it has actually been decaying steadily over the observed period, with the curve getting flatter and flatter. Soon it will double every 5 years, and then every 10, and then perhaps not at all.
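(A toy contrast between the two readings, purely illustrative: the same ten doublings take ten time units if the doubling period is constant, but over a hundred if each doubling takes half again as long as the last.)

```python
# Toy contrast between the two readings argued above: a constant doubling
# period (Kurzweil's reading) versus a doubling period that stretches with
# each cycle (the decaying reading, which flattens into an S-curve).
# All numbers are illustrative.

def constant_doubling(n_doublings, start=1.0):
    # Each doubling takes one time unit, so n doublings take n units.
    return [(float(i), start * 2 ** i) for i in range(1, n_doublings + 1)]

def stretching_doubling(n_doublings, start=1.0, stretch=1.5):
    # Each successive doubling takes `stretch` times longer than the last,
    # so the same 2^n growth is spread over ever more time.
    value, t, interval, points = start, 0.0, 1.0, []
    for _ in range(n_doublings):
        t += interval
        value *= 2
        interval *= stretch
        points.append((round(t, 1), value))
    return points

print(constant_doubling(10)[-1])    # (10.0, 1024.0): 1024x after 10 time units
print(stretching_doubling(10)[-1])  # (113.3, 1024.0): same 1024x over ~113 units
```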

It's not at all clear to me that the overall rate of change is even linear, or that we really understand what the big jumps in capability actually are. For example, you notice we haven't been back to the moon lately, right?

There are other problems, large and small, with his thesis. We are quite likely, for example, to be able to create something that passes the Turing Test in another 20 or 25 years' time, only to discover that we ourselves are not 'hard intelligences' and that what we've created is only a set of flexible algorithms that solve some common problems well, other problems very inefficiently - and the problem of how to make itself smarter not at all, since understanding of self is not necessary to create intelligent behavior. These AIs may prove quite useful, but they will be just as tied to the need for empirical evidence, experimentation, and trial and error as we are. Lab work doesn't necessarily speed up just because you want it to, nor are these new AIs likely to be as energy-efficient as our own wetware. And inherently wicked problems will remain wicked, and just as unamenable to computational solutions as they are now.


message 32: by Vincent (new)

Vincent Thank you from a guy who works with neural networks. I'm glad not everyone is drinking the kool-aid.


message 33: by Charlotte (new)

Charlotte Larsson It’s fascinating to read this comment thread in 2019.


message 34: by Benjamin (new) - added it

Benjamin Janssen David, did he talk much about the link between human biological information processing and the completely alien kind, computer processing?

I find that these techno-optimists, and lots of Americans in general, are just obsessed with oversized things, extrapolating a metric out to the insane, unreliable tail of its normal distribution. I was thinking about reading this to explore that question, but 24 hours is some next-level intellectual masturbation.

Message me if I don't get back to you. Would you be open to doing a video chat about the book, just to answer a couple of my questions?


message 35: by Glenn (new) - rated it 1 star

Glenn I wish I had written this review, absolutely dead on. Kurzweil is not a philosopher.


Ashley Marc Hope you forgive me for giving it a 4-star rating. Indeed, Kurzweil treats tech as if it were in a vat, when it needs to come out into a world of people, with regulations, and needs to arrive at the right price to make a dent.
So a lot of his predictions have run into the ground.
But other than that, and if you take into account that the author bends toward the optimistic side, the book still holds some interesting stuff. I found it thought-provoking.


message 37: by Nittya (new) - added it

Nittya Rizza Reading this review and its comments in 2023 makes me laugh. His ideas may have seemed far-fetched 14 years ago... but not so much anymore.


message 38: by Beckam (new) - added it

Beckam I completely agree. This seems to be real.


message 39: by Victor (new)

Victor Yeah this review didn't age well LOL


message 40: by Nameun (new)

Nameun Now in 2024. This review is a good example of why one should not be too proud when criticizing others’ work.


message 41: by Tim (new)

Tim March 2024, just quit my job bc singularity. Expecting post scarcity before summer. Buy WorldCoin!


message 42: by BJ (new)

BJ J It's 2024, and NVIDIA leads the companies growing exponentially in valuation thanks to the predicted, and now very real, burgeoning of AI in all its complexity and promise. This book was ahead of its time and worth reading now.


Nesuđena Travarka that aged well


message 44: by Ibn (new)

Ibn Cereno To all the "aged well" morons who buy into the linear algebra hype, dm me your SSN and bank account info and I'll send you my mint NFT collection lmao

