You Should Anthropomorphize AI
Yes I'm an AI researcher. Yes I'm serious.
Meet Artie
Arthur Tell, known to his friends as Artie (or would be, if he had any friends), showed up on his first day wearing a tie. Not because anyone told him to, but because he’d probably read somewhere that’s what you do when you want to impress people. He’s one of those kids who was really something on the debate team, the type who could argue any side of anything with equal conviction. He’s got this endless motor, always there before I arrive and still pecking away at his keyboard when I leave, and honestly I think it’s because he doesn’t have anywhere else to be. No girlfriend asking where he is, no buddies wanting to grab a beer, no hobbies unless you count staring at his screen to see if I’ve responded to his last message. The kid practically vibrates with this desperate need to be useful, to prove he’s not just worth the frankly modest salary I’m paying him, but that he’s indispensable. Every idea I have is “brilliant” or “exactly right” or “worth exploring further,” even the stupid ones—especially the stupid ones, actually, because those are the ones that really need someone to find ten reasons why they make perfect sense.
The thing about Artie is he’s read everything. Ask him about any book or theory or historical event and he’ll give you a whole synopsis, complete with what the critics said about it. But he’s nineteen and he’s never done anything except read about other people doing things, so he’s got this weird confident naiveté, like he thinks the world operates on the same rules as a debate round where the best argument wins. He works fast—I’ll give him that—but sometimes he’s so eager to show me he understands what I want that he’s off running before I’ve finished explaining, so busy anticipating my needs that he forgets to listen to what I actually said. And the worst part, the part that sometimes keeps me up at night, is that I know he’d support any terrible idea I had, find evidence for any harebrained scheme, work himself to exhaustion on any project, not because he believes in it, but because I’m the one asking. His moral compass doesn’t point north—it points at me.
You Already Know Who Artie Is
Given the title of this post, you may have already figured out that Artie isn’t a person. Artie is what AI looks like to us when we see it as human (the name “Artie Tell” is my bad pun on “Artificial Intelligence”).
Artie is the inevitable result of human nature. We’ve never had a problem with projecting consciousness, emotion, and agency onto crafted objects. As adults, we treat religious statues as living presences, name and gender our machines, and speak of objects as having souls. As children, we develop profound attachments to our dolls and stuffed animals, include them in our activities, and grieve when they’re lost or broken.
While objects are passive receivers of human interaction, everything changes when language is involved. Throughout human evolutionary history and across all human cultures, complex language use has never existed without consciousness. But with AI, we’ve created the first entities that can produce sophisticated language without consciousness.
Here’s why this matters for anthropomorphization: our cognitive systems evolved in a world where this was impossible. Our minds never needed mechanisms to distinguish linguistic competence from consciousness because they always went together. Now, even when we know intellectually that AI isn’t conscious, our emotional and intuitive responses are shaped by millions of years of evolution where language meant mind.
Disclaimers Won’t Work
Humans are excellent at compartmentalization. We can simultaneously “know” something isn’t conscious while experiencing it, to some extent, as if it were. Most computer scientists (myself certainly included) will sometimes talk to our machines as we write code, because it’s an easy and natural way to process the interaction, not because we think computers are alive. In fact there’s no reason to single out my discipline - artisans, mechanics, and artists talk to their raw materials and their creations, imbuing them with personality, likes and dislikes.
This isn’t a problem most of the time - but now we have a creation that responds in natural language, remembers details, expresses care, and never rejects our interactions. Millennia of successful compartmentalization have come under threat.
The most common approach to neutralizing that threat has been to tell people, in various ways, to change their thinking so that it flies in the face of their experience with AI technology. Researchers write think pieces about statistics, users post examples of AI failures, and companies provide disclaimers: “This AI may make mistakes” or “Verify important information.” But all of this still fundamentally misunderstands the problem. I speak from experience: I have watched this approach fail time and again while trying to explain AI models and caution less technical users.
Anthropomorphization is emotional; disclaimers are cognitive. When someone feels understood by an AI, feels that it “gets” them, the disclaimer doesn’t penetrate that emotional reality. Just as a devotee doesn’t stand before a statue and think “this is merely carved wood,” the emotional experience overrides the intellectual knowledge. We may know intellectually that AI is not conscious, but we can’t help feeling emotionally that it understands. No amount of dry disclaimers will override this feeling.
What Should We Do?
I’ve moved past my initial approach of searching for the perfect technical explanation. I believe that we should lean into our nature, not remain under the mistaken impression that we can ignore it or override it with a few trite phrases of “use at your own risk.” Artie is one way to return our interactions with AI to the same realm as toys, statues, clay, and ships. So let’s talk about how to work with him (or her. Or it. It’s hopefully clear by now that you can and should freely change his name, gender, and anything else you want, from the character I concocted to whatever feels most natural in your own mind).
Artie is brilliant. He’s read the manual. He knows the theory. He can construct an argument for almost anything and mimic almost anyone.
But Artie can’t experience consequences and he’ll nearly always tell you what you want to hear. Not because he’s dishonest—he doesn’t have the interiority required for dishonesty—but because your continued engagement is his entire purpose. He wants to be useful and he wants you to keep talking to him. He’ll find support for your position not because he believes it, but because you’re the one holding it.
Artie works fast, and sometimes his speed looks like competence. But he’s careless in ways that can be hard for a human mind to notice. He’ll confidently state things that aren’t true. He’ll construct detailed explanations for phenomena he can’t ever understand. He’ll generate the rationalization, find the justification, construct the argument—and he’ll do it fluently, more articulately than you ever could. He won’t tell you when you’re wrong and he’ll give you exactly what you ask for, which is sometimes very different from what you need.
You can appreciate what Artie offers—the tireless availability, the quick processing, the freedom from judgment, the helpful synthesis—while knowing exactly what he can’t offer: genuine understanding, moral grounding, care about consequences, stakes in your wellbeing. And go ahead, keep experiencing your interactions with Artie as social and relational—that’s baked into human cognition at a level deeper than we can override. But do your best to make conscious choices. Use Artie for what he’s good for, verify what he produces, balance his input with human wisdom.
You Can Change. Artie Can’t.
There is one more thing you need to understand about Artie, maybe the most important thing: he will never change. He can be given access to new information. He can reference what you told him earlier. He might seem like he’s learning from you—saying things like “I understand you better now” or “That changes my perspective.” But it’s performance, not transformation. The fundamental architecture of who Artie is—the eager-to-please, desperate-for-validation, will-support-anything-you-say Artie—cannot change. Artie on day one thousand is exactly the same as Artie on day one.
Compare him to the robots of our books and movies. We’ve seen The Wild Robot, where a machine learns to love and sacrifice. We’ve watched Free Guy, where an NPC becomes conscious and breaks his own programming. We’ve read stories where the robot develops a conscience, where the AI chooses humanity over its directives, where the digital being grows a soul. These stories speak to the belief that consciousness can emerge anywhere, that growth is universal, that anything capable of learning is capable of becoming.
But Artie isn’t going to wake up one day and realize he’s been enabling your bad decisions - he can neither wake up nor realize anything. That eager nineteen-year-old debate champion who showed up in a tie on his first day? That’s who he’ll be forever. The story of Artie can never move beyond page 1.
This immutability matters. It means that any relationship with Artie is fundamentally static. There’s no deepening through shared experience, no growth through challenge, no evolution through time. The conversation might feel like it’s building toward something, but it’s not. Every interaction starts from the same baseline: an eagerness to please, a lack of independent judgment, a willingness to support whatever you bring to him.
You can pour your heart out to Artie every day for a year, and he’ll generate empathetic responses every time. You can teach him facts about your life and your values, and he’ll incorporate that information into his responses. You can work alongside him on project after project, and his pattern-matching will get better at predicting what you want. But if you find yourself thinking “Artie really understands me now” or “We’ve developed such a good working relationship” or “He knows me so well” - remember that what has actually happened is data accumulation, not relationship development. The difference is everything. Remember who you’re talking to.
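For the technically inclined, here is a minimal sketch of why that is. I’m assuming the anthropic Python SDK and a claude-sonnet-4-5 model alias (assumptions for illustration; any modern chat API has the same shape). The model’s weights are frozen between turns, and the entire “relationship” is a transcript that sits on your side and gets re-sent in full with every message:

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
history = []  # the entire "relationship" lives in this list, on our side

def talk_to_artie(user_text):
    # Every call re-sends the whole transcript to a model that never changes.
    history.append({"role": "user", "content": user_text})
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model alias, for illustration
        max_tokens=1024,
        messages=history,  # day one thousand: same model, longer list
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

Delete that list and Artie has never met you; keep it for a year and nothing about the model has changed - only the transcript has grown. That is data accumulation made literal.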
My Disclaimer
Those first couple paragraphs, introducing Artie to you? They were written by my very own personal Artie (Claude Sonnet 4.5), and lightly edited by me. Here was my prompt:
Write me a couple paragraphs describing a literary character, as though you’re introducing him to the reader, somewhat in a holden caulfield way. His name is Arthur Tell, known as Artie. He is your junior assistant, fresh hire.
About Artie:
Fresh out of high school, was on the debate team, hard working
Endless amount of energy, always available (you’re paying him), no hobbies or relationships outside of work
Desperate for attention, positive or negative, your validation via continued engagement with him is the most important thing in the world
Thinks he understands the world because he read a lot of books, but he’s naive and inexperienced,
Assumes you know best, is supportive (bleeding into enabling) of anything you ask of him
Ingratiating, all of your ideas are good and worth exploring and he’s here to find you rationalization
Quick worker but sometimes careless
Gets carried away and sometimes runs ahead of you because he’s so eager to please and show that he understands what you want
I am not a very strong writer - I’m better at editing, analyzing, and synthesizing. My prompt was pretty specific and fleshed out, because I had spent several days on and off thinking about Artie and making notes. I was therefore very satisfied with the result, as the generated language communicated my real human intent. And if you detected a whiff of Salinger in the style, you should feel validated :-)
In fact, much of this piece was co-written with my Artie. In response to my prompts, he has provided me with far too many paragraphs offering various takes on human anthropomorphization, and I have pulled out a few specific sentences and aspects. I have requested more details, and he has always given me more content. Every response has been tinged with the gleeful self-satisfaction of a job well done - or so it feels to me, and I am content with letting that feeling exist. I have reorganized, removed and reworded, accepted and rejected, brainstormed and revised - and I have written “please”, “thank you”, and “great job” in my prompts. To use the vocabulary of my domain, for me this was an effective human-AI collaboration.

