I’ve castigated the use of Generative AI in content creation. Models tend to spit out copy full of banal observations, trite lessons and overly neat advice.
Ask for a piece on “five trends in content marketing”, and you’ll be treated to insights such as this: “As brands focus on creating more engaging experiences, expect to see an increase in interactive content that fosters deeper connections with audiences.” Truly riveting.
But today, I’m not here to kvetch. Rather, I’ve realised that language models can be incredibly useful in one specific area: research.
ChatGPT’s ability to dig through long pieces of material is truly remarkable. I increasingly see the model as a curious interlocutor: someone else interested in the topic who is genuinely invested in getting to the best possible answer.
My experience over the last six months has been something akin to my tentative use of Google during my early teenage years. This wasn’t like the library, with its card catalogue. I (like every other user of the internet) needed to learn the importance of keywords and topics, the combinations of words most likely to turn up in relevant results. And then we would skim the top handful of results to see if they merited further examination.
Search in that era was built around matching keywords; models are far more flexible. You can describe what you’re after in plain language and refine the question in conversation. Three uses stand out.
Build out custom resources. There are few things easier to write from than a good table. Organised data, separated by column and row, makes observations simple and relationships clear. I’ve increasingly been asking ChatGPT to create custom tables where I control which relationships are shown. Recently I was working with a fintech that claimed to have the best pricing in the industry. Before, I would either have taken them at their word or done some manual checking of other websites. Instead, I was able to build a detailed table showing pricing across the industry (theirs, it turned out, was middle of the road) alongside customer review scores, common complaints and a like-for-like comparison of features. This made it much easier to construct, and provide evidence for, a particular course of action in the communications strategy.
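The chat interface is all you need for this, but the same request can be scripted. Here is a minimal sketch using the OpenAI Python SDK; the provider names, columns and model choice are my own hypothetical placeholders, not details from the project above.

```python
# A minimal sketch: asking the model to build a comparison table.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
# Provider names and columns are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

columns = ["Provider", "Headline pricing", "Review score", "Common complaints", "Feature parity"]
providers = ["ClientCo", "RivalPay", "AcmeFin"]  # stand-ins for real competitors

prompt = (
    "Build a comparison table with these columns: " + ", ".join(columns) + ". "
    "Cover these providers: " + ", ".join(providers) + ". "
    "Cite a source for every figure and flag any cell you are unsure about."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any current chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The useful part is the explicit instruction to cite and flag uncertainty; it keeps the table honest enough to check afterwards.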
Test hypotheses. As part of the drafting process, I’ll start to form ideas and suppositions. The question then becomes whether the evidence supports them. Sometimes I put the debate straight to the model. For example: “I’ve heard that there are fewer reports of card fraud in France compared with Germany. What does looking at the last 18 months of major media headlines tell us?” The model is generally good at saying whether the available material is adequate; no is also an answer. You can also force it to play devil’s advocate: “Why would someone say this is wrong?” or “If you were to argue against this point, what would you say?”
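The same two-step pattern, hypothesis first and devil’s advocate second, can be sketched in code. The model name is an assumption, and note that without a browsing tool the model answers from its training data, so the output is a starting point to verify rather than a genuine media audit.

```python
# A minimal sketch: test a hypothesis, then ask for the counter-argument.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

hypothesis = (
    "I've heard that there are fewer reports of card fraud in France compared "
    "with Germany. What does looking at the last 18 months of major media "
    "headlines tell us? If the available material is not adequate, say so."
)

messages = [{"role": "user", "content": hypothesis}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = first.choices[0].message.content

# Keep the first answer in the conversation, then push back on it.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Now argue against that conclusion. Why would someone say it is wrong?"})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(answer)
print("---")
print(second.choices[0].message.content)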
Point out highlights in longer pieces. Writing on financial topics means critical information may be buried inside very long documents. Annual reports can be incredibly useful but can also run to dozens of pages. Regulatory guidance, court rulings, academic research: these are all valuable first-hand sources with useful facts. Instead of relying on executive summaries, download the full PDF and upload it into ChatGPT with a question or three. “Are there any places where an executive talks about the benefits of machine learning?” “Point out any places where the judge references a previous federal court case.” “What are some of the most strident arguments made against all-to-all payments in this report?”
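In the chat interface you simply upload the PDF; scripted, the extraction step has to be done yourself. A minimal sketch, assuming the pypdf and openai packages, a placeholder file called report.pdf and a model with a context window large enough for the full text:

```python
# A minimal sketch: extract the text of a long PDF and ask a pointed question.
# Assumes the pypdf and openai packages; "report.pdf" is a placeholder filename.
from openai import OpenAI
from pypdf import PdfReader

reader = PdfReader("report.pdf")
# extract_text() can return None on image-only pages, hence the fallback.
full_text = "\n".join(page.extract_text() or "" for page in reader.pages)

question = (
    "Are there any places where an executive talks about the benefits of "
    "machine learning? Quote each passage and say where it appears."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: the document fits in the model's context window
    messages=[{"role": "user", "content": question + "\n\n---\n\n" + full_text}],
)
print(response.choices[0].message.content)
```

Very long reports may need to be split into chunks first; the chat interface handles that for you.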
Some caveats. Certain types of content work can clearly be streamlined, and these tools will hopefully ease the writer’s block that can set in at the beginning of the writing process. But a few warnings are in order.
You still need an idea – call it a thesis – to start exploring. ChatGPT is horrendous when it comes to topic creation. While research can seem time-consuming, it’s really that germ of an idea that is the most important part of making an article. That seems harder to automate.
It is critical to check sources. Models absolutely make mistakes, and because they learn from snapshots of information gathered across the internet rather than reading it live, they cannot be wholly relied upon to be up to date. This is especially true when you ask them to compare data on obscure or specialised topics. Originality is a goal of most content creation, but it also means a language model won’t necessarily have a single source from which to draw all the material. Thankfully, models increasingly include in-line citations, so quotes can be traced back and paragraphs read in full context.
Additionally, use the time that models give us wisely. Being able to marshal more information more quickly is no excuse for sloppy, unclear writing. Consider setting aside more time for an additional draft (here, tools like Grammarly Plus can supplement the role of friends, colleagues and editors) and some final polish.
We’re in a time of great discovery and innovation. The frontier of what these tools can do is constantly expanding, not just because new models keep arriving but because more people are spending more time with them. One downside is that our conversations with models are usually private affairs on our laptops.
Let’s break that cycle. When you figure out some cool way to get interesting results from a language model, share it with someone. Or write an article; just make the actual words your own.
Jon Schubin is Cognito’s Director of Content Marketing