EY’s Chief Innovation Officer Shares Telltale Signs of AI-Generated Work in Writing and Presentations
Joe Depa, EY’s global chief innovation officer, has developed a keen ability to detect AI-generated content, thanks to his leadership role overseeing the firm’s AI, data, and innovation strategy. As part of his responsibilities, Depa evaluates how EY employees integrate AI into their work, giving him a unique perspective on the technology’s strengths and limitations.

While he fully supports AI adoption and doesn’t impose strict usage limits, Depa emphasizes that the goal should be to amplify human creativity, not replace it. He warns that overusing AI can produce content that lacks originality and personal voice. “There does become a point of AI becoming a little bit less efficient or effective,” he said, especially when employees rely on AI without adding their own insights or unique perspectives. This balance is critical as companies push employees to use AI tools: a Business Insider survey of 220 respondents found that 40% admitted to sometimes hiding or downplaying their AI use at work, suggesting a growing tension between staying competitive and maintaining authenticity.

Depa has identified several telltale signs that a piece of writing may have been generated by AI. One is poor accuracy, including hallucinations (fabricated facts or incorrect details), which still slip through even though AI tools have improved dramatically. He also notes that AI writing often feels overly polished, with consistent tone, structure, and flow, lacking the natural variation that comes from human expression. Another red flag is the overuse of generic language and buzzwords: AI tends to default to corporate jargon and repetitive sentence patterns, such as starting multiple paragraphs with the same phrase. “If it’s too smooth, too perfect, or too predictable, that’s a signal,” Depa said.

To use AI effectively, he advises employees to draft their own content first, outlining key points and messaging, before using AI to refine or enhance it. “If you write it yourself first and then ask for the enhancement using AI, I feel like that’s much more productive,” he said. This approach, he believes, lets AI challenge and improve your thinking rather than do the thinking for you.

In presentations, Depa sees similar patterns. Overreliance on AI often results in surface-level analysis that misses concrete examples or deep insights, and topics are frequently addressed too broadly, without being tailored to the audience’s specific needs. He also points to “hedging,” a common AI trait in which the tool avoids making clear recommendations and instead presents multiple options or vague alternatives. “Anytime you see vagueness or general statements that don’t really tell you anything, I would often say that’s AI,” Depa said.

Above all, he stresses the importance of maintaining individuality and a distinct voice, warning that if everyone uses AI the same way, the output starts to sound the same, eroding authenticity and impact.
