- December 4, 2022 – ChatGPT has more than 1 million users
- After one month – 57 million users
- After two months – 100 million users
For reference, TikTok took 9 months to reach that milestone, and Instagram took about 2.5 years.
However, the technology behind ChatGPT is not unprecedented. Anyone who has been keeping tabs on the LLM (Large Language Model) space knows that AI writers have existed in some capacity since 2020 – the year OpenAI launched GPT-3.
The real difference now is accessibility (and, of course, a wee bit of improvement in quality). ChatGPT has supercharged a market that otherwise relied on hyper-targeting, expensive subscriptions, and a handful of use cases.
We have been analyzing this market since before the celebrated (yet debated) entry of ChatGPT (and now Google’s Bard). And we’ve always been curious about the following:
- Are AI writers any good for B2B content?
- Can they replicate (or perhaps emulate to a certain degree) the technical nuance of humans?
- Are they fit for consumer-facing content in a space with a highly knowledgeable audience?
- And if they are any good – what does it mean for businesses and writers out there?
Our Research Was an Eye-Opener
We officially started assessing AI writing tools and their viability for the B2B tech space in October 2022.
We purchased subscriptions to five GPT-3-based AI writing tools. We chose them based on a variety of factors: their reviews and ratings on G2 and Capterra, support for use cases that complement long-form writing, the perceived output quality as seen on YouTube, and more. We took cost into account as well, but the tools’ pricing varied so substantially that we set no initial bar.
Over the course of four months (October 2022 to January 2023), we used these tools to write 50 articles across the technology themes most relevant to our business – Cloud, Data Science, Software Testing, Digital Transformation, Application Development, Cybersecurity, Industry 4.0, CX Transformation, etc.
Our aim was to understand how the AI writers performed when it came to:
- Accommodating the linguistic demands of different article types: How-to Guides, Long-form Listicles, Thought Leadership Write-ups, What-is Guides, and Why(s)
- Accommodating the technical demands of moderately technical (or non-technical), technical, and highly technical write-ups
To make the linguistic and technical comparisons easier to follow, we used “time taken” and “tool usage” as the two defining variables and fixed the content length at 600 words.
What Did We Learn?
Massive Human Involvement
Clarity, correctness, coherence, and relevance are critical to the success of both theoretical and analytical B2B write-ups. When it came to AI, the articulation was good; however, it required:
- A concrete outline to start off
- Rigorous fact-checking
- Non-stop interventions for setting the direction
- Meticulous stylistic interventions to sustain the brand’s tone of voice
- Exceptional domain knowledge on the part of the human editor
That said, AI was a clear winner on speed. The pace of its responses and the ideas it surfaced complemented the writer’s workflow and warded off writer’s block – provided the writer was equipped with a comprehensive content brief.
Huge Category (& Thematic) Variation
As outlined above, our focus was to write articles around Cloud, Data Science, Software Testing, Digital Transformation, Application Development, Cybersecurity, Industry 4.0, CX Transformation, etc. Of course, that entailed testing AI on technical articles and against different content types. The results were expected (yet intriguing). For example, AI output was dubious and surface-level for listicles – something we didn’t see coming, considering that listicles have a more concrete structure than most other categories.
Why was this the case? There are a few reasons that we discerned. We’ll explore them in detail in the forthcoming weeks. Stay tuned!
Enter AI Detection
In December 2022, ChatGPT rose to prominence, bringing AI detection to the fore as well – a practice that has grown substantially in the academic space, and understandably so.
But we were again curious about the implications in the B2B space. Can such detectors help identify what’s now called “AI Plagiarism”? Heard of CNET’s AI journalist committing plagiarism? Well, that’s a story in itself.
For clarity, we ran human-generated, GPT-3-generated, and ChatGPT-generated content through an AI detector – GPTZero – and later through the newly released OpenAI AI Classifier.
Here’s how the results compared on GPTZero for the definition of “Platform Engineering”:
Here’s a definition we wrote for one of our clients:
“Platform engineering refers to the development of Internal Developer Platforms (IDPs) or engineering platforms that developers, data scientists, or end users can use to speed up application delivery. Essentially, IDPs act as a self-service operational layer between users and the backend services powering the platform. The idea is to modernize application development and realize intended business outcomes at speed.”
And here’s the GPT-3-powered AI tool’s definition:
“Platform engineering is the process of designing, delivering and running digital platforms in order to create economic, social and environmental value. Platform engineers are cross-functional, with a focus on the enterprise and business value of the platforms they’re designing and managing.”
Well, there you have it! How would the AI Content Detection space serve the B2B landscape and SERP? That remains to be seen. More on that in the forthcoming weeks.
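For readers curious about what detectors like GPTZero actually measure: the approach its creator has described publicly scores text on perplexity (how predictable the wording is to a language model) and burstiness (how much sentence structure varies). As a purely illustrative sketch – not GPTZero’s actual implementation – here is a crude burstiness heuristic in Python; the sentence-splitting regex and the use of word counts as a proxy for sentence length are our own simplifying assumptions:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude 'burstiness' heuristic: standard deviation of sentence
    lengths, measured in words. Human prose tends to vary sentence
    length more than model output, so higher values lean 'human'."""
    # Naive sentence split on terminal punctuation (an assumption;
    # real detectors use far more robust segmentation and features).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

# Example: varied sentence lengths yield a nonzero burstiness score.
sample = ("Platform engineering refers to the development of Internal "
          "Developer Platforms. Essentially, IDPs act as a self-service "
          "operational layer. The idea is simple.")
print(burstiness(sample))
```

A single number like this is obviously nowhere near a verdict on authorship; it only illustrates the kind of statistical signal detectors combine with model-based perplexity scores.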
Interesting Times Ahead
Of course, as everyone has been proudly (but fearfully) touting, this is just the start. With Google’s Bard releasing in a matter of days, this space will explode even further.
However, two things haven’t changed as much as many had envisioned:
- Consumer-facing B2B content isn’t reliably supported by AI unless a good writer or editor (with domain expertise) is controlling the narrative.
- AI writers are great companions, but thinking of them as a “replacement” for human expertise is far-fetched.
Surely, technology keeps getting better. But that means that human-generated authoritative, journalistic-style, opinionated, brand-centric, and consumer-centric content has even more relevance. Food for thought?
More to come in this series around:
- Things that AI is good at
- Things that AI is bad at
- How you can detect AI content
- Why AI fails at writing thought leadership content (and why you shouldn’t opt for it in the first place)
- Why AI fails at writing good listicles
- Why AI is making us mentally obese
Besides the above, what more would you like to know? Let us know your thoughts and opinions in the comments below.