by Nancy Heffernan
Last week on a news website, a large photo accompanying a story was captioned, “Blonde woman in black dress stands in front of photo backdrop.” Accurate, yes, but for the fact that it was Taylor Swift.
Was it artificial intelligence or a copywriter who didn’t know a Swiftie from a Swiffer? Who knows for sure, but what we do know is this: the technology known as AI is here – especially in communications writing – and evolving fast.
California Gov. Gavin Newsom issued an executive order in September requiring state agencies to review the potential applications of artificial intelligence to their work and the issues it raises for the public. And President Biden is of a like mind, planning to issue his own executive order this fall on responsible innovation in the AI industry. We can expect more AI-centered legislation at both the state and federal levels next session.
The kind of “generative” AI that powers ChatGPT and other large language models is developing so quickly that most individuals, institutions and businesses are struggling to keep up – and need to get a move on.
The new tools are especially adept at writing and editing, so companies in the communications realm tend to be among the firms most excited – and most concerned – about the technology’s potential benefits and risks.
The chatbots can already do research, analyze legislation, write routine letters and social media posts, and create graphics. They will soon be able to do far more than that. This is a disruptive technology that might bring more change to the workplace than anything since the invention of the World Wide Web.
As a writer practiced in media relations myself, I found the notion of a robot with a pen terrifying – less from a threat perspective and more from concern over where the bot would be getting its information. In our public affairs communications work, our firm takes seriously our responsibility to share factual, verifiable data, concepts and ideas. Our reputation depends on it.
While we get to know these tools and overcome simmering anxiety, we are committed to embracing and deploying them sensibly and with parameters that ensure continued human-led corroboration and fact-checking.
But we must seek first to understand.
What we are learning is that generative AI tools are not digital encyclopedias that spit out factual answers to specific questions. They are actually more like very advanced auto-complete tools, using a conversational dialogue with the user to formulate responses. The bots draw on all the information they have “scraped” from the internet – another factor that gives pause, since we all know accurate information resides alongside mis- and disinformation on the web and in the news.
For now, at least, tools like ChatGPT appear to be better at creative work than research. ChatGPT’s knowledge cutoff in its free model is September 2021, although its developer, OpenAI, just addressed that pain point by announcing an upgrade in a paid service that will provide real-time information. You can best use it by giving it a set of facts and the instruction to analyze them or write something based on those facts so it can at least produce a draft for your review.
But even this awareness about how to use it comes with a caution. Specifically, responses to a query may carry overt or hidden biases, since the programs were developed by an industry not known for its gender or ethnic diversity. The work still needs to be reviewed and fact-checked by a seasoned professional. In other words, a human.
At LPA, where we are always keen to stay on top of emerging issues, trends and technologies, we wanted to get ahead of the potential pitfalls of using ChatGPT and other bots. As the governor indicated in his order, we, too, are getting up to speed and developing guidelines for our employees’ use of AI.
That also means training them to use it wisely. We brought in Peter Bittner – an advanced new media specialist and UC Berkeley visiting lecturer who has risen to AI-expert status – to share his insights and guidance with our employees and clients as we delve into this new world.
And, as Peter says, despite the fears and predictions of many, artificial intelligence tools probably aren’t coming for all of our jobs in the near future. But the early adopters who are among the first to master the new technology may soon be using it to move faster and produce more, particularly in communications.
Let’s hope bots welcome their human collaborators.
# # #
Nancy Heffernan is Senior Strategist at Lucas Public Affairs in Sacramento and a former journalist and editor.