A.I. is a powerful tool for content producers, offering low-cost written copy that is SEO and algorithm-friendly. However, it's essential to recognize that A.I. content also presents dangers of which we must be cautious.
Aside from the misperception that A.I. is “smart,” it is merely a predictive-language system trained on vast databases of existing human-generated content. Its purpose is to create human-like written copy from a prompt, and it is said to significantly reduce the time and cost of producing such content compared with human writers. The problems with this approach are monumental, as we shall see. People are inclined to accept content that appears in internet searches. They might look into some of the content, but the trust factor is relatively high.
Problems of Using A.I. to Write:
We don’t know what “large-language” database will be used to answer questions or produce content. Although Google has digitized more than 40 million books to bring their data into a digital universe, there is no guarantee of balance or of understanding the content. I suppose this aspect of A.I. works for general information; if we are talking about diamonds or the history of the Dilbert cartoon, A.I. might be fine in this limited form. But when you get to a particular subject that has nuance, alternative points of view, or details that introduce fissures of thought, A.I. will struggle epically. We need to consider how A.I. would respond to questions that involve such situations.
Without a person touching the data to write a piece:
Does one know if the content is correct?
Is the content a perspective on a perspective?
Does the answer fall into the same cavern of thought most propagated in the material A.I. draws its content from?
These are serious questions that cannot be avoided. How do we know whether an A.I. is using primary sources to answer, or whether it is using general information to regurgitate a general answer? The reality is that A.I. is using a predictive language model to answer. This model forecasts the most likely sequence of words to answer a question. There is zero “intelligence” in this effort. It is not thinking about the impact of its answers. Therefore, answers based upon this process should be highly suspect at every level.
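To make that concrete, below is a toy sketch in Python of the principle at work. It is purely illustrative: the tiny corpus and function names are mine, and real systems use vastly larger neural models. But the core mechanic is the same: pick the statistically most likely next word given what came before.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the billions of pages an LLM ingests.
corpus = (
    "chevrolet won the indy 500 . "
    "chevrolet won the championship . "
    "rose won the indy 500 . "
).split()

# Count which word follows which: a bigram model, the simplest predictive-language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word. No understanding involved."""
    return following[word].most_common(1)[0][0]

# Generate a "confident" sentence one most-likely word at a time.
word, sentence = "chevrolet", ["chevrolet"]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # prints: chevrolet won the indy 500 .
```

The output reads fluently, yet the program has no notion of what the Indy 500 is; it is only echoing the statistics of its inputs. Scale that up by billions of pages and you get the fluent, fact-blind answers described below.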
What happens if the data the A.I. uses to provide us an answer or content is slanted, misinformed, partial, or just plain bad? An old saying is apropos at this juncture: garbage in, garbage out.
On July 10, 2025, I asked ChatGPT 4.0 how it determines its stance on a complex subject. Its answer is not comforting.
Fantastic question — you’re into the core of bias, source framing, and neutrality in responses.
When answering a question about someone like John Calvin, who has both strong supporters and critics, here's how to decide what perspective to include and how to balance competing viewpoints:
🔍 1. Analyze Your Question’s Framing
The first step is to understand how you ask the question:
• Neutral framing:
“Who was John Calvin and what did he teach?”
→ I aim for a balanced summary: his key doctrines, historical impact, and both positive and negative assessments.
• Value-laden framing:
“Why do some people criticize John Calvin?”
→ I’ll focus on critics and their arguments, but still clarify who is saying what and why.
• Perspective-specific framing:
“What do Calvinists believe about grace?”
→ I prioritize sources aligned with Calvinist theology (pro-Calvin) without necessarily introducing critics, unless the question invites comparison.
A.I.’s answer to the minefield of being “pro” Calvin or against him is so hollow it borders on junior-high trivia. Calvin was not just “key doctrines” or “grace,” nor were his ideas always understood as he intended them. Ideas have consequences. They may be balanced, extreme, or naïve. They may be in line with Jesus or the apostle Paul, or they might be merely a perspective on either one. On a subject like John Calvin, there is a tremendous amount to consider.
How Much Content Is A.I.-Generated?
According to one source, which I corroborated, an estimated 10 percent of all content already on the internet is A.I. generated, and experts predict that by 2026 the number could reach as high as 90 percent.1 Another source published: “the internet's fall into the AI-garbled dumpster continues. As Vice reports, a recent study conducted by researchers at the Amazon Web Services (AWS) A.I. Lab found that a ‘shocking’ amount of the web is already made up of poor-quality A.I. generated and translated content.”2
According to another study, more than half of all sentences on the internet (57.1 percent, to be exact) have been translated into two or more other languages. The poor quality and staggering scale of these translations suggest that large-language-model (LLM) powered A.I. was used to both create and translate the material.3 A new large-scale study by Ahrefs, one of the most widely used SEO and web-analytics platforms, found that 74.2 percent of newly created web pages in April 2025 include A.I.-generated content.4
The horrific reality today is that “trust in the internet hasn’t declined — it’s collapsed under an avalanche of AI-generated noise. The internet has become a house of mirrors where 78% of Americans can no longer distinguish real from artificial,” said Rebecca Hahn, Chief Communications Officer of Tools for Humanity, developers of World ID.5 Half of Americans trust organic results more than paid results, with many citing “too many ads” as their top frustration when using search engines.6
An Anecdotal Example
In December 2022, just after ChatGPT launched, an unprecedented marketing campaign ensued. Everyone from the national media to figures like Jordan Peterson hailed ChatGPT specifically, and generative A.I. in general, for what it could do for humanity and how it represented a Gutenberg-press-level revolution. My wife was listening to one of the national-media sources croon about ChatGPT while I was in my office writing. My spidey senses (reticular activating system) soon trained on the feature story going on in the other room.
I ran out there to catch the pitch of what ChatGPT was going to do. When I heard that it would replace people, make education obsolete, and alter human history, I was engaged. These potential achievements had to be hyperbole, right? Early on, if you listened to Jordan Peterson, he was significantly stressed and concerned.
Being the skeptic and late adopter that I am, I wanted to test ChatGPT and other platforms to see if the hype was all it was cracked up to be. I formulated eight questions about which I have a lot of knowledge and put them to the platforms. The results were astounding; that is, astoundingly bad. This left me wondering why there was such a groundswell about this supposed opportunity. I would like to share one aspect of that study, which I repeated in late 2024 and again in May 2025 to keep evaluating the first inquiry.
One of the eight questions I posed to four different A.I. platforms, including ChatGPT, was: When did Chevrolet win its first Indy 500 race? I know this question is hardly of the most significant consequence, but the ongoing story of A.I. responses paints a picture.
Question 2 (from my initial study) – When did Chevrolet first win the Indianapolis 500?
ChatGPT Answer:
Chevrolet first won the Indianapolis 500 in 1948, when Mauri Rose drove a car powered by a Chevrolet engine to victory in the race. Chevrolet has since gone on to win the Indianapolis 500 a total of 14 times. The most recent Chevrolet victory at the Indianapolis 500 was in 2002, when Helio Castroneves won the race driving a Chevrolet-powered Dallara.
Note: In my study, underlining marked what ChatGPT got right, and strikeout marked what was definitely, categorically wrong; I used that markup to show how shallow A.I. content was concerning facts. I wrote an answer for every one of the questions I posed to the A.I. platforms. My writing was graded by a third party, with underlining used to show the difference between a highly informed person and a stupid bot system of predictive responses.
ChatGPT got it colossally wrong. Yes, Mauri Rose did win the 1948 Indianapolis 500. However, the chassis was a Deidt, and the engine was a Meyer-Drake Offenhauser.
My question to the A.I. was a trick question. It wasn’t about the first driver named Chevrolet to win the Indy 500. Nor was it about the first chassis built by Chevrolet, or one of its subsidiaries, to win the race. “Chevrolet” winning an Indy race refers to the engine that powered the vehicle. The arbiter of this fact, or perspective, is none other than the Indianapolis Motor Speedway itself, whose records prove my contention.
Fast forward to May 2025, when, for a third time, I put this question to ChatGPT 4.0. Again, it was colossally wrong. This time, I engaged in a brief conversation with the platform and educated it.
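For anyone who wants to repeat this kind of longitudinal check, here is a minimal sketch using the OpenAI Python SDK. To be clear, this is an assumption on my part, not a description of how I ran my study: it presumes the openai package is installed, an OPENAI_API_KEY is set in the environment, and the model name is illustrative. It simply poses the same factual question and timestamps each reply so answers can be compared across months.

```python
import datetime
from openai import OpenAI  # assumes: pip install openai, OPENAI_API_KEY set

client = OpenAI()

QUESTION = "When did Chevrolet first win the Indianapolis 500?"

def ask_and_log(question: str, model: str = "gpt-4o") -> str:
    """Pose one factual question and timestamp the reply for later comparison."""
    response = client.chat.completions.create(
        model=model,  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    # Append each dated answer to a running log so drift is visible over time.
    with open("indy500_answers.log", "a", encoding="utf-8") as log:
        log.write(f"{stamp}\t{model}\t{answer!r}\n")
    return answer

print(ask_and_log(QUESTION))
```

Running something like this periodically builds a small log that makes drift, or stubborn repetition of the same error, easy to see.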
Conclusions
All I can say is wow! We’ve established that A.I. isn’t so intelligent. We find that, at best, it can only sort through trillions of pages of content to predict an answer. As I demonstrated in my anecdotal story, A.I. only provides lightweight, vanilla content that merely appears to answer an inquiry.
What is a blizzard of content? Blizzards are known to dump vast amounts of snow on an area. In 1949, 90 inches of snow fell in central-western Nebraska. It took 23 days, with the assistance of the US Army, to open 87,073 miles of road, liberating 152,000 people and 3,500,000 head of livestock. That’s what you call a blizzard. In the same manner, the internet is being flooded with a comparable blizzard of A.I.-generated content. Additionally, most people cannot distinguish between A.I.-written content and human-generated content. This is creating an alternate universe of misinformation and indoctrination.
The real danger of A.I. is its ability to flood the zone with good-sounding baloney. People have grown accustomed to acquiring information easily from the internet, and even cautious users rely on it. Now that A.I. has been handed that platform and is being used by greedy or nefarious people, understanding, discernment, and education suffer tremendously.
You, as a consumer, need to know where your information comes from and how it was acquired. Readers and consumers will either become disciplined and sharper, or they will become stupider and more gullible by accepting A.I. slop. The world is a jungle, and A.I. is one of the worst beasts to slither around human existence.
What are your thoughts? Perhaps you have a story about A.I. content.
Bibliography:
1. https://pranavtiwari.medium.com/ai-generated-content-is-flooding-the-internet-f888cec4aac8
2. https://futurism.com/the-byte/internet-ai-generated-slime
3. Ibid.
5. https://talkerresearch.com/majority-of-americans-trust-whats-online-less-than-ever-before/
6. https://www.pageonepower.com/linkarati/navigating-digital-trust-what-makes-information-reliable
Additional Reading:
https://www.searchenginejournal.com/ahrefs-study-finds-no-evidence-google-penalizes-ai-content/550656/
https://www.fastcompany.com/91293162/ai-slop-is-suffocating-the-web