Last week, I wrote about AI’s growing role in the newsroom, paying particular attention to the team leaders who set the tone for how AI is used as well as the staff applying those standards.
For publishers, that human oversight feels like a necessity, particularly when its absence creates such horrific, lasting impressions.
A recent EY survey found that Americans are notably lax about checking AI-generated content: only 24% say they review or edit the text, images, or translations that AI produces, compared with 31% worldwide.
“At the same time, 74 percent of people in the U.S. believe that AI applications understand their needs, slightly above the global average of 73 percent,” writes The Decoder of the EY results. “The data suggests that because Americans find AI both useful and accurate, they may be less inclined to check its work.”
The study also found that just 14% of U.S. users “fine-tune” what AI generates, below the 19% worldwide average and ahead of only Japan (13%), the United Kingdom (13%), and France (12%).
“If every AI-generated result had to be carefully reviewed,” The Decoder writes, “the technology would quickly lose much of its appeal and efficiency.”
Championing AI For Those Concerned
In a separate paper, EY recently created six personas to “acknowledge the diverse views that people have on AI,” coming up with:
- Tech champions (23%), who see AI’s benefits but “still advocate for regulation”
- Hesitant mainstreamers (20%), who have data and transparency concerns, but acknowledge societal benefits
- AI rejectors (17%), who prioritize “human connection and … strict regulations”
- Passive bystanders (15%), who are concerned yet ambivalent
- Unworried socialites (13%), who embrace AI “with few reservations”
- Cautious optimists (12%), who “welcome AI’s potential while remaining mindful of risks”
The research found that, regardless of persona or comfort level, “concerns remain.” Among the most alarming findings: 75% said they worry about AI generating false information that is ultimately taken seriously, and 67% worry that AI “may become uncontrollable” without human oversight.
Publishers, like any other group or organization, clearly have reason to be both transparent and supremely careful with their AI strategies. Perhaps because so much is at stake, it’s disheartening to see media and entertainment companies already ranking near the bottom in net trust for managing AI responsibly and keeping respondents’ best interests in mind.
(Source: EY)
For publishing leaders, strategizing AI use is an uphill battle, made all the more difficult by the “move fast and break stuff” mentality so often ascribed to AI optimists. But when that strategy is built with the audience in mind, the results benefit everyone involved.
“This is what leadership in AI truly means,” the EY research says. “It’s not just about implementing the technology but about shaping a future where AI expands human potential. The organizations that succeed will be those that recognize AI’s real power lies not in automation, but in augmentation — elevating what people can achieve rather than replacing them.”