Our Final Invention Quotes

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat
3,701 ratings, 3.72 average rating, 461 reviews
Showing 1-30 of 78
“A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain's pleasure centers. If you don't provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you'll be stuck with whatever it comes up with. And since it's a highly complex system, you may never understand it well enough to make sure you've got it right.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“The strongest argument for why advanced AI needs a body may come from its learning and development phase—scientists may discover it’s not possible to “grow” AGI without some kind of body.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? —Vernor Vinge, author, professor, computer scientist”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“we don’t want an AI that meets our short-term goals—please save us from hunger—with solutions detrimental in the long term—by roasting every chicken on earth—or with solutions to which we’d object—by killing us after our next meal.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Is knowledge the same thing as intelligence? No, but knowledge is an intelligence amplifier, if intelligence is, among other things, the ability to act nimbly and powerfully in your environment.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“More than any other time in history mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly. —Woody Allen”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Vinge compares it to the Cold War strategy called MAD—mutually assured destruction. Coined by acronym-loving John von Neumann (also the creator of an early computer with the winning initials, MANIAC), MAD maintained Cold War peace through the promise of mutual obliteration. Like MAD, superintelligence boasts a lot of researchers secretly working to develop technologies with catastrophic potential. But it’s like mutually assured destruction without any commonsense brakes. No one will know who is ahead, so everyone will assume someone else is. And as we’ve seen, the winner won’t take all. The winner in the AI arms race will win the dubious distinction of being the first to confront the Busy Child.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“In 1956, John McCarthy, called the “father” of artificial intelligence (he coined the term) claimed the whole problem of AGI could be solved in six months.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“In 1970, AI pioneer Marvin Minsky said, “In from three to eight years we will have a machine with the general intelligence of an average human being.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Stuxnet dramatically lowered the dollar cost of a terrorist attack on the U.S. electrical grid to about a million dollars.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“The National Institute of Standards and Technology found that each year bad programming costs the U.S. economy more than $60 billion in revenue. In other words, what we Americans lose each year to faulty code is greater than the gross national product of most countries.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Stanford University’s John Koza, who pioneered genetic programming in 1986, has used genetic algorithms to invent an antenna for NASA, create computer programs for identifying proteins, and invent general purpose electrical controllers. Twenty-three times Koza’s genetic algorithms have independently invented electronic components already patented by humans, simply by targeting the engineering specifications of the finished devices—the “fitness” criteria. For example, Koza’s algorithms invented a voltage-current conversion circuit (a device used for testing electronic equipment) that worked more accurately than the human-invented circuit designed to meet the same specs. Mysteriously, however, no one can describe how it works better—it appears to have redundant and even superfluous parts. But that’s the curious thing about genetic programming (and “evolutionary programming,” the programming family it belongs to). The code is inscrutable. The program “evolves” solutions that computer scientists cannot readily reproduce. What’s more, they can’t understand the process genetic programming followed to achieve a finished solution. A computational tool in which you understand the input and the output but not the underlying procedure is called a “black box” system. And their unknowability is a big downside for any system that uses evolutionary components. Every step toward inscrutability is a step away from accountability, or fond hopes like programming in friendliness toward humans. That doesn’t mean scientists routinely lose control of black box systems. But if cognitive architectures use them in achieving AGI, as they almost certainly will, then layers of unknowability will be at the heart of the system. Unknowability might be an unavoidable consequence of self-aware, self-improving software.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
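The genetic-programming quote above describes search that is steered only by a "fitness" criterion, with the intermediate process left opaque. A toy sketch of that idea, assuming a made-up target bitstring as the "engineering spec" (nothing here is Koza's actual system; it only illustrates fitness-driven evolution with selection and mutation):

```python
import random

# Toy genetic algorithm: candidates are scored only against a fitness target;
# the search path itself is not designed by a human, echoing the "black box"
# character described in the quote.
TARGET = [1] * 20                      # hypothetical spec: the all-ones bitstring

def fitness(genome):
    """How many positions agree with the target spec."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200, seed=0):
    """Keep the fitter half each generation; refill with mutated survivors."""
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return population[0]

best = evolve()
print(fitness(best), "of", len(TARGET), "spec positions matched")
```

Even in this tiny example, only the inputs (the target spec) and the output (the best genome) are inspectable; the sequence of mutations that produced the result is not something anyone designed.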
“Omohundro predicts self-aware, self-improving systems will develop four primary drives that are similar to human biological drives: efficiency, self-preservation, resource acquisition, and creativity. How these drives come into being is a particularly fascinating window into the nature of AI. AI doesn’t develop them because these are intrinsic qualities of rational agents. Instead, a sufficiently intelligent AI will develop these drives to avoid predictable problems in achieving its goals, which Omohundro calls vulnerabilities. The AI backs into these drives, because without them it would blunder from one resource-wasting mistake to another.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Repurposing the world’s molecules using nanotechnology has been dubbed “ecophagy,” which means eating the environment. The first replicator would make one copy of itself, and then there’d be two replicators making the third and fourth copies. The next generation would make eight replicators total, the next sixteen, and so on. If each replication took a minute and a half to make, at the end of ten hours there’d be more than 68 billion replicators; and near the end of two days they would outweigh the earth. But before that stage the replicators would stop copying themselves, and start making material useful to the ASI that controlled them—programmable matter. The waste heat produced by the process would burn up the biosphere, so those of us some 6.9 billion humans who were not killed outright by the nano assemblers would burn to death or asphyxiate. Every other living thing on earth would share our fate. Through it all, the ASI would bear no ill will toward humans nor love. It wouldn’t feel nostalgia as our molecules were painfully repurposed. What would our screams sound like to the ASI anyway, as microscopic nano assemblers mowed over our bodies like a bloody rash, disassembling us on the subcellular level? Or would the roar of millions and millions of nano factories running at full bore drown out our voices?”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
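The replicator arithmetic in the quote above is plain exponential doubling: each cycle of replication doubles the population. A minimal sketch of that growth, where the figure of 36 cycles is our illustration, chosen because it is the number of doublings that first exceeds the "more than 68 billion" the passage cites:

```python
# Exponential growth of self-copying replicators:
# starting from one replicator, each cycle doubles the count.
def replicators_after(cycles: int) -> int:
    """Number of replicators after the given number of doubling cycles."""
    return 2 ** cycles

# About 36 doublings already exceed 68 billion copies.
print(replicators_after(36))  # 68719476736
```

The striking feature is how few cycles are needed: the population crosses a billion after 30 doublings and 68 billion after 36, regardless of how long each individual cycle takes.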
“According to Steve Omohundro, some drives like self-preservation and resource acquisition are inherent in all goal-driven systems.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“An agent which sought only to satisfy the efficiency, self-preservation, and acquisition drives would act like an obsessive paranoid sociopath,” writes Omohundro in “The Nature of Self-Improving Artificial Intelligence.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Unlike our intelligence, machine-based superintelligence will not evolve in an ecosystem in which empathy is rewarded and passed on to subsequent generations.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Singularity” has become a very popular word to throw around, even though it has several definitions that are often used interchangeably. Accomplished inventor, author, and Singularity pitchman Ray Kurzweil defines the Singularity as a “singular” period in time (beginning around the year 2045) after which the pace of technological change will irreversibly transform human life.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Deep Blue, IBM’s chess-playing computer, was a sole entity, and not a team of self-improving ASIs, but the feeling of going up against it is instructive. Two grandmasters said the same thing: “It’s like a wall coming at you.” IBM’s Jeopardy! champion, Watson, was a team of AIs—to answer every question it performed this AI force multiplier trick, conducting searches in parallel before assigning a probability to each answer.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“With the possible exception of nanotechnology being released upon the world there is nothing in the whole catalogue of disasters that is comparable to AGI. —Eliezer Yudkowsky, Research Fellow, Machine Intelligence Research Institute”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Not knowing how to build a Friendly AI is not deadly, of itself.… It’s the mistaken belief that an AI will be friendly which implies an obvious path to global catastrophe.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Surely no harm could come from building a chess-playing robot, could it?… such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Like genetic algorithms, ANNs are “black box” systems. That is, the inputs—the network weights and neuron activations—are transparent. And what they output is understandable. But what happens in between? Nobody understands. The output of “black box” artificial intelligence tools can’t ever be predicted. So they can never be truly and verifiably “safe.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will. —Vernor Vinge, The Coming Technological Singularity, 1993”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“Moore’s law means computers will get smaller, more powerful, and cheaper at a reliable rate. This does not happen because Moore’s law is a natural law of the physical world, like gravity, or the Second Law of Thermodynamics. It happens because the consumer and business markets motivate computer chip makers to compete and contribute to smaller, faster, cheaper computers, smart phones, cameras, printers, solar arrays, and soon, 3-D printers. And chip makers are building on the technologies and techniques of the past. In 1971, 2,300 transistors could be printed on a chip. Forty years, or twenty doublings later, 2,600,000,000. And with those transistors, more than two million of which could fit on the period at the end of this sentence, came increased speed.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
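The transistor-count claim in the quote above is Moore's-law doubling arithmetic: twenty doublings from the 2,300 transistors of 1971 land in the low billions, the same order of magnitude as the 2,600,000,000 figure cited. A small sketch of that calculation (the exact product of the doublings is about 2.4 billion; the quoted 2.6 billion reflects actual 2011 chip counts rather than the idealized doubling):

```python
# Moore's law modeled as repeated doubling of transistor counts per chip.
TRANSISTORS_1971 = 2_300   # transistors on a 1971-era chip, per the quote
DOUBLINGS = 20             # roughly one doubling every two years over forty years

count_after_doublings = TRANSISTORS_1971 * 2 ** DOUBLINGS
print(count_after_doublings)  # 2411724800
```

The point of the exercise is the compounding, not the precise endpoint: twenty doublings multiply the starting count by over a million.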
“In information technologies, each breakthrough pushes the next breakthrough to occur more quickly—the curve we talked about gets steeper. So when considering the iPad 2 the question isn’t what we can expect in the next fifteen years. Instead, look out for what happens in a fraction of that time. By about 2020, Kurzweil estimates we’ll own laptops with the raw processing power of human brains, though not the intelligence.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
“I’ve written this book to warn you that artificial intelligence could drive mankind into extinction, and to explain how that catastrophic outcome is not just possible, but likely if we do not begin preparing very carefully now.”
James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
