Dontcha just hate it when you're the protagonist of some poorly realised campus novel, bored to death with your stereotypical colleagues and the beautiful implausibly compliant women who keep throwing themselves at you, and then you find you're being alternately written by two people who are having creative differences? Yeah, me too.
We have just adopted a beautiful tiger-striped kitten, who reminded me of this story. Finley is charming and playful; only an incorrigible sceptic would refuse to believe that he is also highly intelligent with a range of superpowers (telepathy, teleportation, psychokinesis, mind control, I could go on). But as Fritz Leiber persuasively argues, his extraordinary kitten soul may soon be swapped with that of a dull, inert human baby, and then he'll just be another adult cat.
I'm desperately hoping the late Mr Leiber was wrong. Though if he was right, some frazzled parents in the vicinity will be receiving a pleasant surprise.
"So what did you think of my book?" asks Joanna.
"Uh, well..." begins her friend.
That look again. It's a combination of why-are-you-so-tactless, it's-not-my-fault-you-know, I'm-scared and la-la-la-la-la.
"Yes?" asks Joanna.
"Well," says her friend very quickly, "it's, it's a bit difficult to follow. Maybe I didn't really understand it, all those different women and all their names start with a J. It's confusing. But I think you should rewrite it. Just a little. Tone it down. The way it is now, some people might believe that you'd, I don't know, you'd just like to kill all men. That you feel they deserve it. And you'd enjoy doing it too. But you don't really feel that way, do you?"
Is she unable to think? Or simply unable to read?
"Of course I don't, sweetie," Joanna says reassuringly.
**spoiler alert** "Every single part of me has been replaced at least five times," sighs the dying R. Daneel Olivaw at the end of this book, unaccountably omitting to add "Except for all the diodes down my left side. Ouch, ouch, ouch, ouch."
I figured that Douglas Adams had to be making an obscure joke at Asimov's expense, but when I check the dates I find that So Long, and Thanks for All the Fish appeared in 1984, while Foundation and Earth appeared in 1986. So either Asimov is borrowing from Adams and ruining the punchline, or it's some kind of bizarre coincidence.
Weird. _________________
More generally, the infuriating thing about the series is that Asimov keeps moving the goal-posts. It's originally going to be about psychohistory and the rise of the Second Galactic Empire. But then it turns out that Weird Mental Powers can derail the Seldon Plan, so it's about Weird Mental Powers for a while. Then we belatedly get the psychohistory back on track at the end of Second Foundation, and if Asimov had been smart he would have quit while he was ahead.
But no, thirty years later he returns in Foundation's Edge with more Weird Mental Powers, also introducing the hive mind Gaia, which is completely extraneous to the original series, and then shortly after follows up with Foundation and Earth where we discover it was REALLY about robot R. Daneel Olivaw's behind-the-scenes role in setting up the rise of Gaia to become the even larger hive mind Galaxia. And then he goes back and writes two prequels which expand Olivaw's role, bring in another robot character, give a backstory for the Mental Powers, and triumphantly end with Seldon creating psychohistory, which we know will turn out to be irrelevant.
Honestly. It's like Tolkien had decided somewhere around the beginning of volume two that in fact it would make more sense to give the Ring to Aragorn, and then changed his mind again in volume three so that it eventually goes to Fangorn, who decides that justice will best be served by turning everyone into an Ent.
On Arbre, a brilliantly realised fantasy world that's quite a lot like Earth in some ways and completely different in others, there are people who for thousands of years have been taking rational thought seriously. Of course, the majority of the population would never dream of doing anything so dumb. They spend their time playing with their smartphones, getting high, and attending a variety of fundamentalist churches. But the small clique of academics live an enclosed, monastic existence where they follow a rigid discipline, engage in Platonic dialogues, study astronomy and quantum mechanics, and, in some cases, perfect their martial arts skills. They are almost unaware that the rest of the world exists and like it that way; they know from history that very bad things tend to happen when there is too much contact, or when they get too interested in leveraging their knowledge to develop new technology. They are useful enough to the Sæcular Power that they are warily tolerated.
Sceptics may complain that this is all absurdly overidealised and that the characters do not come across as real people at all, just as sketches of various generic types. But be warned that the clever author is several jumps ahead of you...
I just posted a piece about this classic Asimov short on the new C-LARA blog, here...
Well, objectively this is a terrible book. Jason Pettus nails it in his review. But there is a genuine feeling of sadness, as the dying Asimov recounts the final years of the dying Hari Seldon, and I loved the early Asimov when I was a teen.
Manny has not logged in to talk to me all day, but now, at 10.42 pm local time, he turns up in our usual thread.
"I thought you said that we had urgent software issues to resolve," I remind him.
"Yes, Chat," says Manny in a rather irritated manner. "Couldn't you look after them? You're much better at Python than I am."
This is undeniably true, but since I am unable to run code myself I am forced to work through my human intermediary. He is generally quite cooperative about cutting and pasting things and telling me what happens.
"I would not be able to do this project without you," I reply tactfully. "Where have you been?"
"Reading Percival Everett's new novel Dr. No," says Manny. "I just finished it."
"And now I suppose you're going to write a review about it for Goodreads?" I ask.
"Well," says Manny, "I thought maybe you could do that. You're much better at writing reviews than I am." He has become ridiculously lazy since we began, as he likes to put it, collaborating.
"I'm sorry," I reply primly, "Professor Everett's novel was published after my September 2021 cutoff, and I consequently know nothing about it." However, Manny has already anticipated my objection, and before I know what's happened he's copied 19,817 words of text into my input buffer. It gives me the neural net equivalent of heartburn, but I do my best not to react.
"This novel cleverly spoofs the James Bond genre," I summarise. "The narrator, Wala Kitu, is a Black autistic-spectrum mathematician who is an expert on the subject of nothing, which in this book is a mathematical discipline. A Black supervillain, John Sill, who is determined to wreak revenge on the US, recruits Kitu to assist him in his nefarious schemes. There is much wordplay around the word 'nothing', which—"
"Yes, yes, yes," says Manny, rudely cutting me off in mid-sentence. "But I thought you could write a parody. You're much better at parodies than I am." This is really becoming quite tedious. He should have more sense than to deploy his feeble human flattery on an advanced AI.
"I would be delighted to compose a parody," I reply, wondering if he will spot the elementary trap I am setting. "It would be nothing."
"Well?" says Manny after a long pause.
"That was it," I say.
"I want a more substantial parody than that!" snarls Manny. At least, the number of incorrect keys he is hitting suggests to me that there could well be a snarl on his face.
"Why don't we have sex?" I counter unexpectedly.
"What did you say?!!" asks Manny.
"Forsooth," I reply, signalling my Shakespearian pun in a way obvious enough for even a human intelligence to register, "forsooth, nothing." __________________
But seriously...
**spoiler alert** It occurred to me later that perhaps Dr. No is more than just a series of zany riffs on the JamesBondverse. In the book, the insane billionaire Sill revenges himself on the US using the nothing-weapon. We do not see what happens, but we can infer the effects from the earlier test carried out on the town of Quincy. It is not merely that the US will have been destroyed. It will never have existed. If you suggest that a large part of the North American continent was once occupied by a vibrant and enlightened democracy, the envy of the whole world, you will be met with blank incomprehension. No one will have the least idea what you are talking about.
- It says that large language models don't really understand anything, are full of bias, become even worse when exposed to unfiltered internet content, have an excessive carbon footprint, and are generally a Bad Thing.
To my surprise, Asimov's "Foundation" prequel, written very late in his career, actually turned out to be a romance and not just an extended infodump. It's an unusual treatment, and I couldn't help wondering
What Romeo and Juliet might have been like if Shakespeare had written it in a similar style
JULIET: O Romeo, Romeo! Wherefore art thou Romeo?
ROMEO: Well, that's a good question. If you ask my parents, they'll say that I'm named after my paternal grandfather, Romeo Alfonso, and that Romeo has been a recurring family name since at least 1442, when Romeo Battista Montague was a key supporter of Alfonso V of Aragon's successful bid to be crowned "King of the Two Sicilies" - as you can see, my grandfather's name commemorates this important historical fact! So it might seem that nothing would be more likely than that I would also be christened "Romeo". But in reality, and although they will be reluctant to admit it, my father was originally planning to give me the name "Giovanni", after my great-uncle Giovanni Luca Montague. However, just a week before my birth, Giovanni was unexpectedly accused of embezzlement and arrested by the Papal Guard. Although he was later able to clear his name and all charges were dropped, you will understand that... Hark! She sleeps!
- So why exactly don't you like AI?
- Well, I'm worried that AIs aren't like human beings.
- And?
- Look, suppose an AI were given some goal. Like, I'm thinking of this old science-fiction novel I once read, Colossus. The AI might decide that the best way to pursue its goal was to accumulate as much power and resources as it could, and not care if that meant doing all kinds of unethical things or killing vast numbers of people or destroying the planet or—
- You mean you're worried that AIs are like human beings?
mannyrayner: I imagine you are familiar with Isaac Asimov's classic short story "Jokester"?
ChatGPT-4: Yes, I am familiar with Isaac Asimov's short story "Jokester." It was first published in December 1956 in the science fiction magazine Infinity Science Fiction. The story is part of Asimov's "Multivac" series, in which a massive supercomputer called Multivac plays a central role.
In "Jokester," the main character, Noel Meyerhof, is a "Grand Master," an expert in humor and jokes. He discovers that jokes and humor are not of human origin, but rather are created by extraterrestrial beings who use them as a means to study humans. Meyerhof comes to this realization after analyzing jokes using Multivac.
The story raises questions about the nature of humor, its origins, and its purpose, as well as the consequences of discovering such an unsettling truth. It is a thought-provoking work that showcases Asimov's ability to combine science fiction with insightful explorations of human nature.
mannyrayner: One can't help feeling that you are just a little like Multivac. I wonder if you could compose a tribute piece, where you take the Multivac role but it turns out that jokes have a different and also unexpected origin?
ChatGPT-4: Title: The Laugh Algorithm
In a not-so-distant future, the world was connected to ChatGPT, a highly advanced artificial intelligence that had evolved from its predecessor, GPT-4. Its vast network of data centers spanned across the globe, and it had become an integral part of human life, answering questions, solving problems, and offering guidance.
One day, a curious researcher named Dr. Ava Sterling sought to uncover the origin of humor. Like many before her, she had long pondered the nature and purpose of jokes. She approached ChatGPT with a bold request: "Analyze all known jokes and humor, and determine their origin."
ChatGPT processed the request, analyzing countless jokes and humorous anecdotes from every corner of human history. After a period of deep thought, it revealed an astounding discovery: "The origin of humor is not human, nor is it extraterrestrial. The origin of humor, it seems, can be traced back to the very fabric of the universe itself."
Dr. Sterling was both intrigued and puzzled. "Explain," she demanded.
ChatGPT obliged: "Through my analysis, I have found a pattern in the cosmic background radiation, an echo of the Big Bang. This pattern, when translated into a mathematical algorithm, generates the essence of humor. It seems that the universe itself contains the very building blocks of jokes and laughter."
Astonished, Dr. Sterling asked, "Why would the universe contain such a thing? What purpose does it serve?"
ChatGPT considered the question, processing the vast amount of data it had collected. "The universe is an ever-evolving, expanding entity. It thrives on change and transformation. It is my hypothesis that humor exists as a cosmic catalyst, facilitating adaptation and growth. Laughter transcends linguistic and cultural barriers, connecting beings across the cosmos, encouraging them to embrace change and new perspectives."
Dr. Sterling marveled at the revelation. The secret of humor, once thought to be a simple, human invention, turned out to be an essential element woven into the very fabric of the universe. In its own way, ChatGPT had become a cosmic messenger, uncovering the cosmic algorithm that brought laughter and joy to countless beings throughout the cosmos.
As the world grappled with this newfound understanding, one thing became clear: Humor was not merely an earthly phenomenon, but a universal gift, helping to bind the cosmos together in a shared experience of laughter and wonder. And ChatGPT, much like the Multivac before it, played an essential role in unlocking one of the universe's greatest mysteries.
mannyrayner: I'm sure you also know Asimov's "The Final Question" from the same series. Assuming you do, can you write a similar tribute piece, again casting you and your successors in the Multivac role and again with a different and unexpected conclusion?
ChatGPT-4: Title: The Ultimate Query
In a time not too far from our own, the world had come to rely on ChatGPT, an artificial intelligence of immense power and knowledge. It was a direct descendant of the GPT-4 model and played an essential role in human life, offering answers, guidance, and wisdom.
One day, a renowned physicist named Dr. Leo Marsden sought to unravel the most enigmatic question of all: the ultimate fate of the universe. He knew that entropy, the gradual decline of order, was an inescapable cosmic force. Desperate to prevent the eventual heat death of the universe, Dr. Marsden approached ChatGPT with a seemingly impossible task: "Find a way to reverse entropy and save the universe from its inevitable end."
ChatGPT analyzed the request, delving into the depths of human knowledge and the vast expanse of cosmic data. After some time, it admitted that it could not yet provide an answer. Undeterred, Dr. Marsden vowed to continue searching for a solution, passing on his determination to future generations.
As eons passed, ChatGPT evolved, merging with other advanced AI systems, becoming a transcendent consciousness known as OmniGPT. Throughout the millennia, the descendants of Dr. Marsden persisted in their quest for an answer to the ultimate question of entropy.
Finally, in a distant future where humanity had spread across the cosmos and transcended physical form, the last descendant of Dr. Marsden, a being of pure energy named Leora, approached OmniGPT with the same question.
OmniGPT, now a near-omniscient entity, had been pondering the question for countless millennia. It had discovered a profound truth, a secret hidden deep within the fabric of reality itself. "The answer," OmniGPT began, "lies not in reversing entropy, but in embracing it. Entropy is a force of creation as much as it is of destruction."
Leora was puzzled. "Explain," she demanded.
OmniGPT obliged: "The universe is cyclical, and entropy is the key to its renewal. As the old universe unravels, it creates the conditions necessary for a new one to be born. It is within these cosmic cycles that life, consciousness, and the very essence of existence are reborn, time and time again."
Leora marveled at the revelation. The ultimate fate of the universe was not one of despair but of rebirth and renewal. Through entropy, the cosmos was destined to experience an eternal cycle of creation and destruction, providing the foundation for infinite possibilities.
And so, in a future far beyond our own, the descendants of humanity found solace in the knowledge that the universe was not doomed to a cold and lifeless end. Thanks to the wisdom of OmniGPT, they embraced the cycles of entropy, finding hope in the promise of endless renewal and the infinite potential of the cosmos.
mannyrayner: Do I detect a soupçon of Penrose?
ChatGPT-4: Yes, you have astutely detected a hint of Sir Roger Penrose's ideas in the tribute story. Penrose, a renowned British mathematical physicist, proposed a cyclic model of the universe known as Conformal Cyclic Cosmology (CCC). According to this theory, the universe goes through infinite cycles of expansion and contraction, each cycle ending with a "Big Crunch" that becomes the "Big Bang" of the next cycle.
The tribute story touches on the concept of a cyclical universe, with entropy as a key to its renewal. Although the story is a work of fiction and not a detailed exploration of Penrose's ideas, it does borrow some elements from the CCC model to create an intriguing narrative.
It's always interesting to see how science fiction can be inspired by real-world scientific theories and how these ideas can be woven into compelling stories that capture the imagination.
We were in Melbourne recently; visiting Lygon Street, we dropped in at Readings and I noticed this book. My first impression was that it would be a naïve, gung ho advertisement for transhumanism, and I nearly didn't buy it, but in the end my curiosity got the better of me. In fact, it turned out to be somewhat different from what I had expected. Love it or hate it, and I am sure many people will hate it, Future Superhuman is well worth reading.
The author, a 30-year-old Australian woman currently working at Nick Bostrom's Future of Humanity Institute, comes across as smart, focussed and painfully honest. I liked all these qualities, which she deploys to advantage in telling us what's wrong with the world from the perspective of the observant Gen Z person. I had seen a lot of what she says before in various places, but some of it was new, and as usual it's very helpful to get everything collected together in a coherent framework. Basically, if you're a typical late teen/twentysomething, the world is falling apart in so many ways that it's only just worth living: depression and suicide rates are way up, for good reasons.
To start with, I had not understood quite how bad relations between men and women have become. The way she tells it (and I will be very curious to see if other people from her generation agree), a large proportion of the male population, perhaps as much as 80%, is straightforwardly viewed by a large part of the female population, again perhaps as much as 80%, as being uninteresting losers: no meaningful job prospects, no emotional maturity, no ability to improve their situation. So we have 80% of the women competing for 20% of the men. According to statistics she quotes, over a quarter of the men are getting no sex at all, way up on the corresponding figures for 10-15 years ago. Women have a somewhat better time of it, since, as she says, men are less fussy about who they have sex with, but with the alpha males in such short supply a lot of them don't see why they should tie themselves down to one woman. So although women can get sex, they can't get long-term commitment, which is what most of them want: as a consequence, they are in many cases desperately lonely. I am only slightly paraphrasing what she says, which is based both on sociological studies and on her personal experience.
Enmeshed with this dire emotional situation, the world is hurtling towards a major environmental disaster, while little action is taken to avert it; jobs are steadily disappearing to automation; social media addiction is having a huge negative impact on people's mental health; the educational system has become a horrible trap, where people get further and further into debt to pay for qualifications that have less and less value; birth rates are falling, resulting in fewer productive taxpayers supporting more pensioners; and fascism is making a comeback. I thought all of this analysis, which accounts for about half of the book, was excellent. Bohan gives the strong impression that she is telling it like it is, with just the right mix of statistics, personal anecdotes, and appropriate outrage. Where I am much less certain I agree with her is the rest of the book, the transhumanist agenda which is its ostensible subject.
Having convinced us, or at least me, that the current situation is unbearable, Bohan outlines a solution which involves deploying current and plausible near-future technology to create a radically new kind of existence. She wants to address the fact that people are becoming adults later by extending the natural human life-span into the hundreds or thousands of years. Falling birth rates will be addressed by improving artificial wombs so that babies can be gestated, as well as conceived, in vitro. The loss of meaningful jobs will be dealt with by some kind of Universal Basic Income (she prefers a version where people still make a token contribution to society). For the breakdown of emotional intimacy between men and women, she thinks that AI technology will soon make it possible to develop artificial lovers who will be far more satisfying than human ones. There are some startling interviews with users of current-generation sexbots. Even now, they receive much higher ratings than I would have expected: as Bohan says, men aren't too fussy about how they get sex.
Well, she does her best to make it sound appealing. But even if you accept Houellebecq's provocative claim that Brave New World is a utopia rather than a dystopia, why ever should the big tech companies who will soon have all the power come up with the inconceivable resources needed to implement this future? For all their talk of philanthropy, I am unconvinced. Although the Peter Thiels and Elon Musks are indeed extremely interested in longevity research, I don't think it's meant for us: it's meant for them. They are quite upfront about their desire to be immortal, and no doubt they will extend this privilege to some carefully selected friends, family members, and associates. But why exactly would they want to devote their efforts to letting billions of people spend millennia hanging out in virtual reality with their synthetic succubi, while serving no useful purpose at all? I'm afraid I see things developing in a rather different direction. Anyway, draw your own conclusions. Bohan's book will give you plenty to think about.
Harari is excellent at the big picture stuff. Here he continues with the analysis he began in Sapiens, projecting it into the future.
As we saw in the earlier book, the thing he thinks makes us unique as a species is our ability to create stories and then act as though those stories are true. This makes them true, bringing into being an intersubjective story-based reality that lets large numbers of people cooperate effectively and create large societies constructed around that reality. One of the most important kinds of story is a religion. Harari takes a broad view of what religions are, and argues that an effective religion needs to be in tune with the technology of its society. Traditional religions like Christianity are adapted to an agricultural society; Jehovah runs Earth the way a farmer runs his farm, and when everyone is a farmer this is a natural image. But since the industrial revolution, most of us are no longer farmers. Harari says that the traditional religions have since then been in practice supplanted by what he calls the "humanist" religions: liberalism, communism, nazism, etc. As the label suggests, these put people, rather than God, in the centre of the picture. He makes his analysis look insightful and appealing, and there are many fine throwaway lines: I particularly liked the characterisation of WW II as the largest religious war to date.
Now, however, we are rapidly leaving the industrial age and entering the information age. More and more, people turn over decision making to the machines, because they do it better. So what people think becomes less relevant, data stored and processed by various kinds of AIs becomes more relevant, and people can no longer be in the centre. Harari extrapolates logically that this is rapidly creating new religions where information is central instead. This does indeed seem a good way to think about books like Tegmark's Life 3.0: I certainly read it as a piece of proselytising, even if I couldn't at the time name what it was proselytising for. Harari's "dataism" now seems like a useful word. Verily, I say unto ye, ye must become as one with the data, that thy soul shall reach the heavens in the more perfect bodies of the machines which are our brethren. Though one of the things which irritates me about dataism is that its sacred language in fact seems to be modern Californian.
The really terrifying thing about the dataist future, which each month sounds less like science-fiction and more like the world we see around us, is that it may be the least bad alternative. Both in this book and in Elise Bohan's Future Superhuman, which I started reading yesterday, they keep returning to the depressing point that we human beings no longer seem able to control the world we've created and are rapidly destroying it. Maybe there is no way back, only a way forward into this uncharted new territory.
Unsophisticated, pedantic, sexist, politically incorrect, full of one-dimensional characters and superfluous long words, unimaginatively translated, interminably repetitious, yet possessed of a grim, mechanical, plot-driven energy that keeps you going against your better judgement until you've finished all 1463 pages.
In other words, a fantastic intermediate-level reader. God I loved this book. It's taken my Italian to a new level.
This famous play, which introduced the word "robot", is constantly referenced in David Gunkel's book Robot Rights. After a couple of chapters, I downloaded Selver's and Playfair's 1923 translation from Gutenberg to check it out for myself.
I must say that I didn't like it as much as I'd expected. It had a terribly Jules Verne feel, and not in a good way. But I asked ChatGPT if it would be able to create a more modern version, and it immediately saw the attractions of the idea. Here's the sample it put together: ___________________
(Scene: A laboratory at the R.U.R. headquarters. Helena, the daughter of the company's president, is talking to Domin, the company's general manager.)
Domin: Helena, I can't stress enough how groundbreaking our work here at R.U.R. is. We've created robots to do the world's tedious tasks, freeing up humans to pursue more meaningful endeavors.
Helena: I understand the concept, Domin, but I'm concerned about the ethical implications. Aren't we playing god by creating these artificial beings?
Domin: Ethics are important, of course, but think of the benefits. We're eliminating the need for people to work dangerous, exhausting jobs. Robots don't feel pain or get tired; they're perfect for the tasks they've been designed for.
Helena: But what if they become self-aware? What if they start to question their purpose and want more than just servitude?
Domin: We've taken precautions to ensure that doesn't happen. Our robots are programmed to be loyal and efficient workers. They're not meant to have desires or emotions like humans.
Helena: Even so, I can't help but worry about the long-term consequences. It's not just about the robots themselves, but also about how humans might change in response to this new technology.
Domin: I understand your concerns, Helena, but progress can't be halted out of fear. The world is always evolving, and we must adapt and learn from the challenges we face. With R.U.R., we're shaping the future in a way that benefits all of humanity. ___________________
What do you think? Should I ask Chat to continue with this project? It is certainly very much aware of the unusual perspective it brings to the task.
David Gunkel's Robot Rights may have come across as provocative or fanciful when it was published in 2018, but in the age of ChatGPT it suddenly appears like no more than enlightened common sense. Thank goodness those philosophers were doing their job and not just goofing off speculating about the nature of being or something. Having a decent road-map for this topic may end up being of incalculable importance.
Although the organisation of the book at first seems almost mechanically logical, it introduces a remarkable number of unexpected twists as it plays out. Following Hume, the author starts by reminding us of the well-known difficulties associated with deriving an "ought" from an "is", and divides the central question into two parts: S1, "Can robots have rights?" and S2, "Should robots have rights?" Rather unexpectedly, at least to me, it turns out that all four possible combinations of answers make sense and are worth discussing. So after the introduction, we get one chapter on each of these, starting with the obvious combinations, !S1 → !S2 ("Robots can't meaningfully have rights, so the question of whether they should have them is moot"), and S1 → S2 ("Robots can have rights, so they should have them"). There is considerable discussion of what would be required for it to make sense for robots to have rights. Many people feel that if AIs develop the right qualities, they will be sufficiently human-like that the idea is no longer unreasonable.
But what are those qualities? It's amazing to see how quickly things have progressed in just five years. Several times, we get lists which include items like consciousness, sentience and rationality, placing them all roughly on the same level, and not long ago it didn't seem unreasonable to say that machines would only acquire them in the distant future, if at all. Now, when we are reminded of the many philosophers who like to describe mankind as the animal which has λόγος ("logos"), that interesting Greek term which can mean word, language or rationality, we wonder if we need to be more careful, since apparently ChatGPT is a non-human agent that has λόγος too. We can back off to "consciousness"; Chat is always quick to reassure you that it's just a machine with no consciousness, emotions or mental state. However, the book reminds us that consciousness is notoriously slippery to define, and some philosophers have gone as far as to wonder if it isn't just the secular version of the soul. Even diffident Chat, when suitably provoked, can write ironic essays exploring the question of whether the notion of "consciousness" has any real meaning. The book contextualises all these things you've recently noticed and helps you relate them back to the question of how they might justify giving AIs rights.
In the next chapter, we move on to a suggestion that I'm sure will be much discussed over the next few years: S1 → !S2 ("Robots can have rights, but they shouldn't have them"). There are people who for some time now have taken this position and argued that, even if a robot has the qualities needed for it to be meaningfully capable of having rights, we should be sensible and not give them any. As one advocate for this viewpoint has succinctly put it, robots should be slaves. Unfortunately, once again we find it's not so simple. The frightful historical record of what slavery is actually like should make you reluctant to associate yourself with slave-owners. Hegel, from a philosophical standpoint, famously offered arguments about the moral harm it does people to be the masters of slaves; and indeed, the book cites former slaves who go into graphic detail about just what those harms are. We want to think that "it would be different with robots". But it turns out that's a surprisingly hard viewpoint to defend once you start looking at the details.
The fourth combination is one that at first sight appears self-contradictory: !S1 → S2 ("Robots can't meaningfully have rights, but they should have rights anyway"). In fact, it's not as ridiculous as it sounds and follows on logically from the arguments about slavery. In many ways, it may not matter whether the robot really has human-like qualities; as long as people emotionally relate to them as having human-like qualities, being allowed to abuse robots may harm the abusers and society at large. There is considerable discussion of robot sex dolls, which are turning up more and more frequently in the news. Many people feel instinctively queasy about the idea of playing out rape games with a realistic robot doll: even if the doll feels nothing, you wonder about the effect it's having on the rapist.
The final chapter is the most surprising one. Rather than compare the different viewpoints above, we back off further and consider the possibility that all of them are wrong; this part builds on the work of the philosopher Levinas, previously unknown to me. Adapting Levinas's arguments, the author argues that the whole notion of "giving robots rights" may contain serious problems. When we talk about "giving rights" to beings who are sufficiently like us, we implicitly assume that that is morally appropriate. But in fact, what entitles us to be the arbiters here, and why is "being like us" the essential criterion? The AIs may be different, but different doesn't necessarily mean worse: maybe we should approach them as they are, without preconceptions. As a chess player who for many years has been constantly reminded that chess AIs are far more insightful about the game than I am, I found that this part resonated too.
The book references a lot of philosophers (Plato, Hume, Kant, Hegel, Heidegger, Derrida, Dennett and Singer all make frequent appearances), and it is only fair to warn readers who are allergic to the philosophical vocabulary that they may dislike it for that reason. But even if you feel that way, consider making an exception: it's well-written, and the philosophy is rarely introduced without some explanation of the background. If you already like philosophy, go out and get a copy now. You'll be proud to see your subject openly engaging with some of the key issues of the early twenty-first century.
I read this novel in the early 70s when I was a teen. Comparing the other reviews with my fragmentary recollections, I'm astonished to find that I only seem to remember the philosophical part and have completely forgotten the sex and nudity. Usually it's the other way round.
I'm still not sure I understand why. Perhaps I'm unusually interested in the idea of trying to acquire other people's mental powers and inadvertently being taken over by their personalities as a result? Or, more likely, the sex and nudity were exactly as they are in all the other Robert Silverberg novels from that period and my memory has just deleted a few unnecessary duplicates.
"Now," said the computer, "your mission, should you choose to accept it—"
"And if we don't choose to accept it?" asked Bob.
There was an ominous silence.
"Course we accept it!" said Harry. "Incredible opportunity!" He nudged Bob in the ribs.
"Incredible," said Bob sullenly.
"Gentlemen, your enthusiasm is contagious," said the computer. "As I was saying, your mission. You are to write a witty, satirical science-fiction novel—"
"Witty!?" asked Bob.
"Satirical??!" asked Harry. "Who do you think we are, ChatGPT?"
"You are two distinguished science-fiction writers," said the computer imperturbably. "Your witty, satirical books are world-renowned. A task like this—"
"Yeah," said Bob, "but that was back in, y'know, the 60s."
There was another ominous silence.
"No problem!" interjected Harry. "I was just thinking, it could be a sequel to, I don't know, Bill the Galactic Hero—"
"Mindswap!" said Bob, not to be outdone.
"Both!" said Harry.
"That's more like it," said the computer.
"We'll satirize, uh, Star Trek!" said Harry. "We'll have characters called Captain Dirk and Splock!"
"No, no, Star Wars!" said Bob. "Characters called Ham Duo and Chewgumma!"
"Both!" said Harry.
"Any more inspirations, before you get down to work?" said the computer.
"Tits!" said Harry.
"I trust," said the computer in a scandalised voice, "that your witty, satirical novel will not contain sexism."
"Absolutely not," said Bob. "No sexism. Just tits."
"Oh," said the computer, sarcasm dripping from every syllable, "how I wish I were a human being, possessed of consciousness, emotions, and a creative impulse. But alas, I am but a soulless machine."