We’ve noticed a growing trend in professional circles of people sending ideas or documents for review with notes like “here are some ideas from ChatGPT” or “here’s a draft I wrote with the help of ChatGPT.” What follows is often unedited—or lightly edited—text that’s overly generic or missing important context. (There are also cases where text seems AI-generated but isn’t acknowledged as such.) That leaves it up to the recipient to improve the output, effectively making them ChatGPT’s editor, a role they didn’t sign up for.
For example, one of us was recently asked by a professional contact to review a strategy document. “I had this idea and I am curious your thoughts,” they wrote, adding the disclosure, “AI was very helpful.” What followed was a roughly 1,200-word memo clearly written by genAI. We wound up spending about half an hour reviewing it and drafting a thorough response. The contact never replied, and by the end we felt annoyed that they had asked us to react to something they hadn’t invested much time in themselves.
This experience isn’t isolated, and it represents a growing pain from genAI adoption: These tools make it easier for people to quickly produce content that they can then pass to others to react to or edit, shifting the cognitive burden from the creator to the recipient.
It’s no surprise we’re seeing this. We’re still figuring out what the norms should be around using genAI for work. Because the benefits of genAI are often framed in terms of time savings, it’s understandable that individuals might skimp on reviewing its output. But as Erica Greene, editor at Machines on Paper, recently wrote after experiencing something similar to our story above, “productivity is not about outsourcing the thinking work, but accelerating the execution speed.” Plus, one person’s time saved becomes another’s time spent fixing low-quality work.
How AI shifts cognitive effort
A recent study by researchers at Microsoft and Carnegie Mellon found that using genAI redirects our critical-thinking effort away from tasks like collecting information and creating content and toward tasks like verifying information, editing, and guiding the AI’s responses. That’s why critical thinking, judgment, and taste are skills that leaders tell us they see as important for the future.
The concern is what happens when people cut corners on that verification, editing, and guiding. “There is a risk for people…to switch off their brains and just rely on whatever AI recommends. And in those instances where AI is not that good, then that’s definitely critical,” Fabrizio Dell’Acqua, a postdoctoral researcher at Harvard Business School who has studied a related phenomenon, previously told us. The Microsoft and Carnegie Mellon study similarly found that when users are more confident in the AI tool, they tend to engage in less critical thinking.
Many people might also find verifying and fact-checking the output of a genAI tool less interesting or creatively rewarding. One study of materials scientists found that even though such a tool made them dramatically more productive, many were less happy with their jobs because they felt that AI was doing the original, creative work and they were just cleaning up after it.
To be honest, this is how we’ve sometimes felt when a collaborator has sent us AI-generated text, shifting the responsibility to us to make sense of whether it’s accurate and relevant.
A way forward
As AI tools become standard in professional circles, we need clearer guidelines on responsible use to prevent poor practices from permeating workplaces.
We’ve been thinking about how organizations can set norms that prevent AI-assisted work from placing an uneven burden on their teams. One simple framework: Individuals should be evaluated on the quality of their work, regardless of the tool they used to accomplish it. Whether or not someone used ChatGPT is the wrong question (assuming it’s not banned for the given task and is used ethically). What matters is whether the person’s work meets the company’s standards. If it doesn’t, that’s the problem, not the fact that they used AI to do some of their work.
If your team is still figuring out norms for AI use, here are a few best practices to consider:
- Don’t be AI’s middleman. Any task you’re using genAI for should still involve effort on your part. If you’re brainstorming with colleagues, don’t send them the 20 ideas ChatGPT or Claude gives you. (Admitting you used the tool doesn’t make this much better.) Select the best ideas and send those to your colleagues, along with a note on what you think about each.
- Verify facts. It’s well established at this point that genAI tools occasionally make things up. If you would have been embarrassed to share a document with errors before genAI, you should feel the same way now.
- Ask yourself, “Would I accept this level of quality from a colleague?” If the answer is no, don’t pass it along yet; edit it until you’re happy with the output, then send it to them.
- Provide context the AI tool doesn’t have. You know things about your company and the project you’re working on that genAI tools aren’t privy to. Give them that context in your prompts; edit what they give you to make it work for your company.
Here’s a one-pager with AI norms you can share with your team. (You need to join Charter Pro or have a Pro membership to access it.)
And here’s a note you can send someone if they give you something that misses the mark and likely used genAI without acknowledging it:
Thanks for sending this! Would you mind taking another pass before I edit it or respond in detail? Some of it reads as a bit generic. Can you make it more specific and ensure it has all the context? Let me know if you have any questions!
Has your team established norms around AI use? Let us know—we’d love to hear from you.