Artificial intelligence (AI) is rapidly becoming an integral part of our society, with its applications expanding daily. A recent study suggests that up to 25 percent of external corporate communications and press releases are now either created entirely by AI chatbots or substantially refined with them. The most notable increase in AI adoption was observed in technology and business-related press releases, with AI influence also found in LinkedIn job postings and even official communications from the United Nations.
The Rising Tide of AI in Press Releases
Published in the prestigious journal Patterns, this new study indicates a significant surge in the use of AI-generated and AI-assisted content within corporate communications, particularly from early 2023, following the November 2022 launch of ChatGPT.
Researchers analyzed 537,413 corporate press releases and observed a dramatic increase in AI integration. Before ChatGPT’s arrival, AI-assisted content hovered around a mere 2-3 percent, a baseline largely attributable to false positives. Post-ChatGPT, these figures soared: by the end of 2023, an estimated 25 percent of the content on Newswire was AI-assisted, while usage on platforms like PRWeb and PRNewswire stabilized at around 15 percent.
Furthermore, the study highlighted that the “business and money” and “science and technology” sectors showed the highest rates of AI writing adoption, with technology-focused content alone reaching almost 17 percent by the fourth quarter of 2023.
While this trend clearly indicates businesses are embracing AI to boost content creation speed and reduce costs, the study cautioned that relying too heavily on these tools could result in less nuanced information, potentially diminishing a company’s credibility.
To identify AI-generated content within press releases, researchers employed a distributional large language model (LLM) quantification framework. This system works by analyzing how often particular words appear in a given body of text and comparing those frequencies against the word-frequency distributions typical of AI-generated content on similar subjects.
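The study describes the framework only at this high level. As a rough illustration of the underlying idea, the sketch below treats the problem as fitting a two-component mixture of word-frequency distributions: given a reference distribution for human-written text and one for AI-generated text, it finds the mixing weight that best explains the word counts observed in a corpus. The function name estimate_ai_fraction, the reference arrays p_human and p_ai, and the toy numbers are all illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def estimate_ai_fraction(observed_counts, p_human, p_ai):
    """Illustrative sketch (not the paper's code): estimate the share of
    AI-influenced text in a corpus by modelling its word-frequency
    distribution as a mixture of a human-written reference distribution
    and an AI-generated one.

    observed_counts : count of each vocabulary word in the corpus
    p_human         : probability of each word in human-written reference text
    p_ai            : probability of each word in AI-generated reference text

    Returns the mixture weight alpha in [0, 1] that maximises the
    log-likelihood of the observed counts under
        p_mix = (1 - alpha) * p_human + alpha * p_ai
    """
    best_alpha, best_ll = 0.0, -np.inf
    for alpha in np.linspace(0.0, 1.0, 1001):  # simple grid search over alpha
        p_mix = (1 - alpha) * p_human + alpha * p_ai
        # Multinomial log-likelihood of the corpus word counts under the mixture
        ll = np.sum(observed_counts * np.log(p_mix + 1e-12))
        if ll > best_ll:
            best_alpha, best_ll = alpha, ll
    return best_alpha


if __name__ == "__main__":
    # Toy four-word vocabulary; values are made up for illustration only.
    p_human = np.array([0.40, 0.30, 0.20, 0.10])
    p_ai    = np.array([0.20, 0.25, 0.30, 0.25])  # "AI" overuses the last two words
    # A corpus whose observed frequencies sit between the two references
    observed = np.array([3200, 2800, 2400, 1600])
    print(f"Estimated AI-assisted fraction: "
          f"{estimate_ai_fraction(observed, p_human, p_ai):.2f}")  # ~0.40
```

Because the estimate is made over aggregate word frequencies rather than by flagging individual documents, this kind of approach yields a population-level share of AI-assisted content rather than a verdict on any single press release.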
However, this method isn’t without its drawbacks. The study pointed out that the framework primarily targets widely recognized AI chatbots like ChatGPT, suggesting that content from lesser-known models might go undetected. A more significant challenge is that language initially generated by AI but then extensively revised by human editors, or “humanized” by other AI tools, can easily bypass detection.