Generative AI comes in many forms. Increasingly, though, it’s marketed the same way: with human names and personas that make it feel less like code and more like a co-worker. A growing number of startups are anthropomorphizing AI to build trust fast and to soften its threat to human jobs. The framing is dehumanizing, and it’s accelerating.
I get why this framing took off. In today’s upside-down economy, where every hire feels like a risk, enterprise startups — many emerging from the famed accelerator Y Combinator — are pitching AI not as software but as staff. They’re selling replacements. AI assistants. AI coders. AI employees. The language is deliberately designed to appeal to overwhelmed hiring managers.
Some don’t even bother with subtlety. Atlog, for instance, recently introduced an “AI employee for furniture stores” that handles everything from payments to marketing. One good manager, it gloats, can now run 20 stores at once. The implication: you don’t need to hire more people — just let the system scale for you. (What happens to the 19 managers it replaces is left unsaid.)
Consumer-facing startups are leaning into similar tactics. Anthropic named its platform “Claude” because the name lends a warm, trustworthy-sounding face to a faceless, disembodied neural net. It’s a tactic straight out of the fintech playbook, where apps like Dave, Albert, and Charlie masked their transactional motives with approachable names. When handling money, it feels better to trust a “friend.”
The same logic has crept into AI. Would you rather share sensitive data with a machine learning model or your bestie Claude, who remembers you, greets you warmly, and almost never threatens you? (To OpenAI’s credit, it still tells you you’re chatting with a “generative pre-trained transformer.”)
But we’re reaching a tipping point. I’m genuinely excited about generative AI. Still, every new “AI employee” feels a little more dehumanizing than the last. Every new “Devin” makes me wonder when the actual Devins of the world will push back on being abstracted into job-displacing bots.
Generative AI is no longer just a curiosity. Its reach is expanding, even if the impacts remain unclear. In mid-May, 1.9 million unemployed Americans were receiving continued jobless benefits — the highest since 2021. Many of those were laid-off tech workers. The signals are piling up.
Some of us still remember 2001: A Space Odyssey. HAL, the onboard computer, begins as a calm, helpful assistant before turning completely homicidal and cutting off the crew’s life support. It’s science fiction, but it hit a nerve for a reason.
Last week, Anthropic CEO Dario Amodei predicted that AI could eliminate half of entry-level white-collar jobs in the next one to five years, pushing unemployment as high as 20%. “Most [of these workers are] unaware that this is about to happen,” he told Axios. “It sounds crazy, and people just don’t believe it.”
You could argue that’s not comparable to cutting off someone’s oxygen, but the metaphor isn’t that far off. Automating more people out of paychecks will have consequences, and when the layoffs increase, the branding of AI as a “colleague” is going to look less clever and more callous.
The shift toward generative AI is happening regardless of how it’s packaged. But companies have a choice in how they describe these tools. IBM never called its mainframes “digital co-workers.” PCs weren’t “software assistants”; they were workstations and productivity tools.
Language still matters. Tools should empower. But more and more companies are marketing something else entirely, and that feels like a mistake.
We don’t need more AI “employees.” We need software that extends the potential of actual humans, making them more productive, creative, and competitive. So please stop talking about fake workers. Just show us the tools that help great managers run complex businesses. That’s all anyone is really asking for.