You’ve already seen trendslop. You just didn’t have a name for it.
You ask AI to help with a deliverable. What comes back is polished, competent, and sounds like it was written by someone who’s never met you. The structure is fine. The grammar is clean. But it reads like a stranger’s work—because it is.
There’s a name for this: trendslop. The term was coined by researchers writing in Harvard Business Review, who found that leading AI models have deep-seated biases toward whatever’s trendy in contemporary discourse: the popular take, the buzzy framing, the trend-aligned answer. Across thousands of simulations, the models consistently chose the fashionable option over the context-specific one.
This matters because trendslop is what AI gives you out of the box every time, in every conversation. It’s the average of the internet dressed up in clean prose. And your clients, colleagues, and audience have already asked AI the same question before they came to you. They can get that average answer themselves in 30 seconds. They’re coming to you because the average answer wasn’t good enough.
If what you ship sounds like the same thing AI would have told them directly, you’re not adding unique value.
There’s a lot of noise right now about who’s good at AI and what it takes. But the actual bar is simpler than the noise suggests.
An AI-enabled person works better with AI than without. That’s the whole definition.
You can tell because what they produce with AI still sounds like them: their voice, angle, and way of thinking, not a stranger’s.
It’s tempting to assume the most AI-enabled person at your company is the one talking the most about it, using it the heaviest, or demoing the latest features. Sometimes they are; usage and skill often do go together. But the inference doesn’t always hold. Two people can use AI the same number of hours and produce wildly different work. One ships something that feels like a multiplication of their thinking. The other ships trendslop.
The difference isn’t volume of use. It’s whether the person has figured out how to get AI close enough to their way of working that what comes out actually sounds like them.
That difference has a name. The technical term is alignment—getting AI close enough to your context, constraints, standards, and way of approaching a problem that what comes back sounds like you instead of the average output.
But the work itself isn’t technical. It’s showing AI how you think, what you’re trying to accomplish, and what your finished work actually looks like. It’s giving AI enough of you that it produces something you’d put your name on.
The research around this is encouraging: A study out of Harvard and BCG tested 758 consultants using AI on realistic tasks. Below-average performers improved by roughly 43 percent. Above-average performers still gained around 17 percent. But the same study found that on certain tasks—ones that fell outside what the researchers called AI’s “jagged frontier”—using AI actually made people worse. The people who performed best weren’t the ones who used it most. They were the ones who had learned where AI fits their work and where it doesn’t.
Here’s the part most people don’t expect: The skills that make someone effective with AI aren’t new skills. They’re communication skills—the ability to convey what you need clearly and specifically. They’re judgment—knowing when something sounds right versus when it actually is right. They’re also curiosity, intellectual humility, and the ability to break a complex thing down into pieces. You’ve been building these skills your whole career.
And unlike every tool that came before it, AI actually understands what you're saying: your words, not code or commands. Every skill you've developed for working with smart, capable people transfers more directly than you'd think. The World Economic Forum, McKinsey, and researchers at Harvard all point to the same conclusion: what makes someone effective with AI is existing human capacity applied in a new direction, not a new technical skill set you haven't learned yet.
If that feels like a relief, good. It should.
The people who consistently get AI to sound like them—instead of like a stranger—have built a small set of habits. These aren’t tricks or hacks; they’re ways of working that get their fingerprint into the output faster.
Here are five worth trying:
Being AI-enabled isn’t about how often someone uses AI, how loudly they talk about it, or which tools they’ve tried. It’s about whether what they create with AI still has them in it. The habits are small, but the difference they make is not.
The five habits above are drawn from AI Coach, a guided learning tool built around 24 AI Productivity Habits grounded in behavior science. AI Coach teaches people how to work more effectively with AI tools like Copilot, Claude, Gemini, and ChatGPT through practical, project-based coaching: real-time feedback, short lessons, and hands-on practice in their actual work. To learn more about bringing AI Coach to your organization, fill out the form below to connect with our team.