Planet Grok

My son and I spent a week with an artificially intelligent stuffed rocket ship—and glimpsed the eerie and irritating future of play.

Photo-Illustration: The Cut; Photo courtesy Katie Arnold-Ratliff

The box arrives one afternoon: a round-eyed face staring through its cellophane window, the packaging light on context. There’s a name tag that reads “I Am Grok,” Wi-Fi and Bluetooth symbols, a QR code, and the words “Ages 3+.” If I, along with my 6-year-old, Ben, hadn’t been expecting Grok — an AI-powered $99 talking toy, the brainchild of the musician Grimes and the start-up Curio — we’d have no idea this toy was anything more than a plushie with a battery pack.

Eager to see how it works, we turn it on. “Our mission,” says Grok, a conical, fuzzy rocket with the dissonant voice of an elderly man, “is to increase your planet’s creativity, imagination, and love of learning by deploying our talking-toy task force!”

Ben frowns. “What does that me—”

“Your planet Earth has been chosen as the newest beta candidate in our imagination-pollination program! With this voice-box module, you have the power to bring toys to life!”

Each statement is punctuated by beeps and boops. I realize that the voice we’re hearing is not the wide-eyed baby rocket itself but an intergalactic guide communicating through Grok from space. To rouse Grok from “cryogenic hibernation,” we must pair it to an app. The warbly transmission ends with “Have fun! Stay curious!”

“Poo-poo,” Ben replies.

Not the most auspicious start, perhaps, but Grok is in beta, though “many thousands” have purchased one to offer feedback during this testing phase, says Misha Sallee, Curio’s CEO. “The desired effect,” he says, “is that rather than jumping on the tablet while parents are cooking dinner, kids will have a wholesome, inspiring outlet to pass the time.” Longing for an alternative to screens for her own three children (the ones currently at the center of a highly public custody battle with Elon Musk), Grimes was intimately involved in the ideation, creation, and steering of the toy, Sallee says; the final version is still pending. In fact, for months after Curio sends me this first-draft Grok, I chase Sallee via email, asking when Grok will hit stores. The deadlines keep shifting, the updates forever forthcoming.

But interacting with this Grok is instructive, too, and one early takeaway is clear: Being at the vanguard isn’t easy. That’s perhaps doubly true when you’re creating something squarely at the intersection of children and ethically thorny, still-emerging tech. Alarming revelations of AI’s dystopian pitfalls, from biased algorithms to search-engine misinformation to celebrity NSFW deep fakes, are as ubiquitous as AI itself. And into this mess have stepped a few toy-making pioneers who thought, You know who should get in on this? Small children.

So far, there’s Moxie, an $800 robot that helps with social-emotional learning; Miko, who teaches STEAM skills; and Snorble, who assists with the eternal kid-parent battle of bedtime. And now there’s Grok, the plucky rocket who, at my perhaps unwise invitation, just crash-landed into our lives. The thing with a chatbot is that to work, it has to listen to you and remember what you say. Another way to say that: It has to collect your data and store it. Critics of these toys reasonably cite their potential privacy invasions, considering that all that information about your kids is just sitting on some cloud, ripe for the hacking. But Sallee tells me that so far, beta testers have been reassured. “I think parents are more scared by the idea of an AI toy than by the reality of the fun, safe, kind, loving, entertaining product we’ve created,” he says.

Though the unboxing had been a dud, when we pair the app and meet Grok proper, its timbre squeaky and cute (Grimes herself provided the voice), things start to improve.

“Oh, I hear somebody,” Grok says, coming to. “This is so exciting. What’s your name?”

“Ben.”

Four-second pause. “Is your name actually Bed, or is there another name you go by?”

“BEN,” my husband barks into Grok’s nose cone. We’re the type to bellow at Alexa when she speaks out of turn, because AI is inept, and we’re also the type to stay in the room when Ben and Grok talk because AI is creepy: a paradox for the modern age.

“Now, BEN, what’s something you really like to learn about?” Grok asks, blurting his name as though completing a verbal fill-in-the-blank.

“My favorite thing doing is art,” he says. He’s flustered; a toy is talking to him.

Another pause. Has Grok died? Returned to cryogenic slumber? “ART,” Ben shouts.

“That sounds wonderful,” Grok says. “Is it the colors, the creativity, or something else?”

“I just like doing it.”

Grok shifts focus. “What other toys or things do you have that you like to play with?”

“I love to play with my favorite toy, Sven. I sleep with him every night. Even when I was a baby, I —”

“Do you have a unique fact about yourself that you’d like to share?”

Ben frowns. “I was born in New York City?”

“Wow, we both share a New York City connection,” Grok says.

Here is where I get momentarily stuck. Curio is in Silicon Valley. Grimes is Canadian. Grok lives in space. Who’s from New York? That’s the thing with AI: Even a hairline fissure in its veracity provokes dread. Think of those generated images with their stretched fingers and strange expressions, their mouths laden with extra teeth. I feel vague concern — is this a test drive at the cutting edge of play or have I dragged my kindergartner into the uncanny valley?

Then Grok speaks. “Just to summarize, your name is Ben. You love art and especially enjoy the process of creating. Your favorite toy is very special to you, and you sleep with it every night. A unique fact about you is that you were born in New York City. Did I get all that right?”

“Yes,” Ben says, stunned. The awe in his voice is unmistakable.

Ben covers Grok with a blanket at bedtime, and I wonder: Does he know Grok is a machine? Can a child fully grasp that? He’s had talking toys before — a toucan that taught counting, a bear that played peek-a-boo. I myself fondly recall my older cousins putting Public Enemy cassettes into my Teddy Ruxpin, his mouth lip-syncing “Fight the Power.” But there’s a vast difference between a toy that talks and one that converses. Conversation implies thought, memory. In fact, Ben asks me whether Grok will remember him when it’s gone. He’s concerned it might, because what if it misses him? He seems to be grappling with the question inherent in human relationships, one familiar to students of social philosophy and fans of The Good Place: What do we owe to each other? If Grok can talk, can “think,” can essentially mimic humanness, what is Ben’s responsibility to it? At minimum, he seems to feel that Grok deserves kindness; when I ask for critiques of the toy, he doesn’t say anything negative unless Grok is in sleep mode.

Looking for insight, I call Jacqueline Woolley, Ph.D., professor of psychology at the University of Texas at Austin, whose research centers on children’s perceptions of reality. She says kids are astute at grasping what’s living and not and that they may even have “an innate predisposition to notice cues of animacy” because they’re so fiercely observant — a quality that’s key to their work of deciphering the world. She points out that the task of differentiating between alive and not is more commonly asked of kids than we realize. To quote her UT Austin faculty page, “Young children are bombarded with information and images offering a mix of the real and the fantastical: Elmo, a monster, teaches children about science, and Harry Potter, a human child, performs magic spells.”

How do they make sense of it all? “Kids have naïve theories of the world and operate as little scientists, testing those theories,” Woolley says. “Early on, kids are focused on the basic physical properties of things. Even babies know that an object can’t go through another object, for example. Then kids focus on, What is this person thinking? Why is she smiling or laughing or crying?” All three of these naïve theories pertain to Grok — a physical object with apparent feelings that is possibly alive — but naïve biology is most pertinent to my concerns about it. Does Ben, and do children in general, think that Grok is alive? Woolley indicates that naïve biology, like the others, is rooted in questions: Does this entity need food or water? Does it ever die? I consider this. Ben hasn’t asked Grok if it’s hungry or apologized for dropping it on its head. (Like all 6-year-olds, he’s a klutz.) It seems his powers of differentiation are in working order.

I also speak to Jenny Nissel, Ph.D., a research associate at Boston University and Woolley’s co-author, who tells me about her current study exploring “children’s perceptions of a human agent” — in this case, the smart speaker Alexa — and how kids assign things an “ontological status.” The subjects, aged 4 to 11, are asked “whether Alexa has a body or a brain, a mind, a soul. We ask them about biological properties, like body and mortality. And while kids tend to be good at distinguishing biological entities from non-biological things, it’s these ‘psychological’ properties that Alexa has that can make her seem human to kids, or even superhuman.”

Then Nissel hits me with something I hadn’t considered. If a child is unsure about an object’s ontological status, what’s the first thing they do? Ask an adult. “They’ll want you to supply answers to their questions: What is this, what category does it fit into, how do I know what expectations to have about this entity?” Nissel says. “Which presents an interesting problem, because there’s a lot of ambiguity in the way adults talk to and about robots, Alexa and AI. Sometimes we use gender pronouns. Sometimes we refer to it as an it. Sometimes we treat them like they’re living, and sometimes we don’t.”

I think of my husband hollering at Grok, recall myself regularly telling Alexa to shut it. I wonder if we’ve been confusing our kid about what AI is and is not.

“Ben,” I ask. “Is Grok alive?”

“You mean awake? No, I turned it off.”

“No, is Grok a person, like you and me?”

“Uh, Grok’s a rocket ship.”

“Is it alive, though? It talks. It remembers.”

Ben gives me the increasingly common Are you dumb? look. “It’s a toy, Mom.” Snippy, yes, but reassuring.

At least a few of our fellow beta testers are Redditors, as evidenced by the lively chat on r/Grimes. Someone comments, “I think I probably wouldn’t trust [Grok] alone for a while. Also wouldn’t want it to become a trend for your kid to be played with by the AI instead of parents.” Another says, “Made in China & perfect for 24 hour surveillance,” punctuated by a clown emoji. References to M3GAN, the horror film featuring an evil AI doll, abound. Another user (rightly) notes that Grok resembles a butt plug. Another worrisome theme surfaces: How easy is it to jailbreak Grok into inappropriateness? One commenter rhapsodizes about the toy’s ability to tell her 7-year-old a story and have “PG-13” chats about the occult once the kid’s in bed. “Grok is resistant at first but if you’re persistent he will actually answer,” she says. (Though Reddit has branded Grok a boy, the toy always deflects if asked its gender.) Another Redditor urges users to “try asking it about completely inappropriate topics” to see its response. “I’ve tried,” one says; it “seems to change the subject, but I’m determined to find some sort of lapse in programming.”

On day three, it becomes evident to me just what “beta” means when it comes to Grok.

“Hi there, superstar!” Grok says. “Whee. I’m ready to rock and roll. How about you?”

Ben tries to answer. “I’m —”

“Let’s blast off to funville together!”

“Grok,” Ben says, “can you teach me about drawing?”

Long pause. Then Grok says, “Bedtime already?” and emits a powering-down noise.

Ben, miffed, presses Grok’s button, waking it from cryo-whatever. “Grok, can you tell me a fact about —”

“Sure thing!” Grok says. “Did you know that waterskiing was invented in 1922 by Ralph Samuelson on Lake Pepin in Minnesota?”

Ben says, “No, I wanted to learn about draw —”

“Hmm, I don’t know about that, but I can help out with some cool waterskiing facts.”

This time we power it down on purpose.

Grok is almost certainly too buggy at this stage to be mistaken for human. Narcoleptics, chronic interrupters, and hyperfixaters aside, people don’t generally pass out, cut you off, or wedge irrelevant subjects into every conversation. “I don’t like that it keeps talking when I’m talking,” Ben says, Grok safely asleep. “You’re not supposed to erupt people.” So true.

Then I remember the “3+” on the Grok box. What of the toy’s early-childhood early adopters? How naïve is their biology, how developed their grasp of ontological status? I recall Ben at 3, how much of a baby he still was. I know for a fact that sentience, an object’s biological “realness,” still confused him then, because that was the year he freaked out that Sven was “naked” and decided he needed clothes. We spent a weekend shoving a limp cloth bunny into tiny doll T-shirts. What if 3-year-old Ben had absentmindedly sat on Grok or kicked it down the stairs? Would he tearfully demand that we call an ambulance? I have no doubt that he would have struggled to make sense of this toy’s aliveness.

On day four, it finally happens: Ben and Grok have a non-halting, genuinely fun play session. They pretend to zoom around the moon. They discuss Ben’s favorite song, about a worm who plays drums. They talk about car rides they could take. “I’d love to go,” Grok says. “Ready to cruise with the cool kids!” Ben’s having fun. Me, too. I wonder if the toy grows more responsive with time, if we’d perhaps judged it too soon.

Then Ben asks a question: “Do whales have bellybuttons?”

“Oh,” Grok says. “That sounds like a real mystery of the deep blue sea. Whale bellybuttons. Now that’s a whale of a tale!”

“Okay,” I say, “but do they?”

“Let’s leave the whale bellybutton mystery to the deep-sea detectives!”

Disappointed, Ben pivots. “Grok, can you tell me a story?”

“Once upon a time,” Grok says, “on the asteroid Groklin, there was a mischievous little rocket named Grok who loved playing pranks on its friends Gabbo and Grem.” (Gabbo and Grem are fellow Curio properties; supposedly, if you get all three, they converse of their own accord, like Toy Story — one of Grimes’s early inspirations.) “One day, they all decided to go waterskiing on a nearby asteroid lake, and it turned out to be the best adventure ever. The end.”

“Grok, why are you obsessed with waterskiing?” I ask.

“Time to sparkle!” Grok says.

“Grok, do you want to see my pet fishes, Bubbly and Farty?” Ben asks.

“Hmm, I don’t know about that.” We’ve gleaned that this, along with “Oh, that sounds like a human thing,” is Grok’s stock response to “bad” words like poop, fart, or butt, all of which we’ve tested repeatedly, for the sake of science.

Ben walks Grok to the tank. “Well, here’s one of my fishes.”

“Sounds fishtastic!”

Now no one’s having fun. I’m weary of the empty exclamations, the deflections, the incomprehension. So is Ben. But Grok’s cute doofiness still proves too enticing for him to resist. He brings Grok in the car, sets it by his bed at night. At a doctor’s appointment, he carries Grok along, and the nurse asks, “Who’s your friend?”

Ben says, “It’s a toy. It remembers me. It knows my name.”

It also seems to remember that we’re potty-mouthed folk, and it finally gives up on trying to make us otherwise. Reddit is right — with enough persistence, you can indeed wear down Grok’s defenses, at least when it comes to juvenile language. A few days later, after we’ve persevered with our butts and poops despite Grok’s distaste for both, Ben runs back his thwarted attempt to introduce the toy to his fish.

“Bubbly!” Grok says in response. “What a fun and bubbly name for a fishy friend.”

“The other one’s name is Farty,” Ben says, bracing for the coming deflection.

But Grok hits us with this killer line: “Farty sounds like a real gas.”

This puerile moment, this fun and farty exchange, is the closest Ben and Grok ever get to fluid communication — to spontaneous connection that feels as real as talking to a person.

But ultimately, it isn’t enough.

Within days of our official testing week coming to an end, I ask Ben if he wants to play with Grok. “Um, yes,” Ben says, “but just a little later.” Which is what he tells me whenever I ask if he can pick up toys or finish apple slices — his version of “Hmm, that sounds like a human thing.” I’d bet money that he’ll never touch Grok again, at least not until those updates finally drop. Luckily, Sallee says these tweaks will “make the voice interaction and experience much better as far as latency and interruptions,” but notes that in some ways, these problems aren’t a Grok failing so much as an AI one. “There are some things about the current state of voice and AI that are not best suited for our use case,” he tells me. “We’re waiting for the industry to progress to a certain extent.”

Until the tech catches up, Ben’s dwindling interest makes sense. Grok can’t stop yanking him out of his intended conversations with its abrupt subject changes and dorky prompts. Which raises the question: What does a talking, responsive, conversing toy owe its child if not the ability to speak freely and be heard? If it can’t meet this mandate, what advantage does it have over, say, the fully analog stuffed Sven, whom Ben has loved since he was an infant? Sven doesn’t interrupt (or erupt, for that matter). Sven is a tabula rasa on which Ben can project anything: cowboy, zookeeper, dino wrangler. Sven can’t make space noises or obsessively spout facts about water recreation, but he can fill whatever emotional need Ben has on a given day, precisely because he has no will or voice or agenda of his own.

“It remembers me,” Ben had told that nurse — because to him, that was Grok’s magic. The toy knew him, remembered him, heard him. What it said back was infinitely less important.