I asked ChatGPT, the AI large language model (“LLM”), in March 2023 about the dangers it represents for news, after OpenAI released this nuclear weapon without notice or protection at the end of 2022.
So, here are some personal conclusions in response to ChatGPT’s ever-cautious and hopeful answers.
AI is Biased, so AI-produced News at Scale will be Biased at Scale
- Says ChatGPT: “One of the primary concerns about using AI to generate news is the potential for bias. While AI language models like me are designed to learn from vast amounts of data, the data we learn from is not always diverse or representative. If the data we learn from is biased or limited, then the news we generate may also be biased, presenting a skewed version of events and perpetuating stereotypes or misinformation. It is crucial to ensure that the data used to train AI models is diverse and representative to avoid this problem.”
My personal conclusion on biased, generative AI news production:
Concentrating the power to produce news in seconds, without human intervention, using one generative AI tool gives enormous, global influence to a small clique of people: investors and app developers, whether from Silicon Valley or China.
ChatGPT’s Language Model Machine Learning can Output whatever Garbage it’s Fed
- ChatGPT: “Another danger to news from AI is the potential for misinformation to be generated at scale. With the ability to generate vast amounts of text quickly and easily, AI could be used to flood the internet with fake news or propaganda.
While it is possible to train AI to detect and flag misinformation, the process is not foolproof, and there is a risk that inaccurate information could spread rapidly and widely before it can be identified and corrected.”
My personal conclusion on ChatGPT-generated misinformation:
Because the digital marketing skill of search engine optimization (SEO) is based on producing textual content in response to keywords, we will see an avalanche of fake content online, directing unaware users to wrong information on everything from health to screen addiction, cryptocurrency or climate change.
This will discredit online content even further than in 2023. It will prop up journalists and media outlets that are already recognized today as sources of reliable news. But it will make it very hard for these same news sources to use search or generative AI like ChatGPT to assess whether a piece of information is true, bringing us back to an age where actually talking to a human, potentially in person, is the only sure way to confirm a piece of news.
It will make it harder for small, earnest news upstarts to emerge, unless they master generative AI, SEO and, potentially, a unique angle.
Yuck, Who Wants News from a Machine? Advertisers, Maybe
- ChatGPT: “Additionally, there is a risk that AI-generated news could lack human empathy and understanding. While AI can learn to use language effectively, it cannot fully comprehend the nuances of human emotions, cultural contexts, or ethical considerations. This could result in insensitive or inappropriate news coverage, lacking the human touch needed to convey the full impact of a story.”
My personal conclusion on robot-created information:
Media outlets, especially lower-tier ones, will be tempted to use generative AI like ChatGPT to automate the production of information: not necessarily fake information, but bland, rehashed, consensual text that has no intrinsic human value to the reader yet fills a website.
The media outlet may instead spend its time and resources on fancy charts, videos and visuals that make the site look professional and attractive to advertisers. These advertisers may well be misled into buying ads on this beautiful, reassuring site, even though it only attracts single-page hits from good SEO, with little to no actual reading of the site’s boring content.
This trend will further discredit online advertising. The positive outcome is that subscriptions to reliable media will go back up. The question is: in this News 3.0 era, what news will those who cannot afford to subscribe to expensive, upper-tier media access? Will they get angrier still, and connect ever more easily online to block roads, towns, police stations or Capitols?
Generative AI will Replace Many Journalists, Increasing its Inherent Bias
- ChatGPT: “Lastly, the automation of news generation could lead to the loss of jobs for human journalists, potentially reducing the diversity of perspectives and expertise within the news industry. While AI language models like me can assist with generating news, they cannot replace the value of human insight, experience, and intuition in shaping and interpreting the news.”
My Personal Conclusion on ChatGPT replacing junior journalists:
Again, media outlets may well train junior journalists to augment their production with AI, so they produce more of the simpler content in less time, reducing the number of junior jobs in journalism. Some journalism will become ever more technical, requiring data analysis and an understanding of AI-aided tools to interpret our world. Other types of journalism may, by contrast, focus on human-interest content based on in-person interviews and individual testimonials.
ChatGPT is Clueless about Corporate Responsibility
What ChatGPT doesn’t discuss is its own responsibility for all this. It seems immune to the notion of thinking before releasing a highly powerful service. It seems to be waiting for Dad to come home and get angry at the mess it created playing in the house.
Here’s DALL·E’s attempt at portraying this… Little Sam Altman is on the left, looking at the camera, while Greg Brockman and Ilya Sutskever play along.
What Responsible Tech Experts & Thinkers are Saying about ChatGPT:
- What Yann LeCun, one of the global experts on AI, professor at NYU and Chief AI Scientist at Meta, is saying about ChatGPT’s form of auto-regressive LLM:
“I have claimed that Auto-Regressive LLMs are exponentially diverging diffusion processes. Here is the argument:
Let e be the probability that any generated token exits the tree of “correct” answers.
Then the probability that an answer of length n is correct is (1-e)^n .
The probability of correctness decreases exponentially.
One can mitigate the problem by making e smaller (through training) but one simply cannot eliminate the problem entirely.
A solution would require to make LLMs non auto-regressive while preserving their fluency.”
The full slide deck is here: https://lnkd.in/eueY5EAq
This was his introductory position statement to the philosophical debate “Do large language models need sensory grounding for meaning and understanding?”, which took place at NYU on March 24, 2023.
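LeCun’s argument can be checked with a few lines of arithmetic. The sketch below is my own illustration, not LeCun’s code; like his argument, it assumes each generated token independently has some probability e of leaving the tree of “correct” answers, so an answer of n tokens is entirely correct with probability (1 − e)^n:

```python
def p_correct(e: float, n: int) -> float:
    """Probability that an n-token answer stays entirely within the
    tree of 'correct' continuations, given a per-token exit probability e."""
    return (1 - e) ** n

# Even a 1% per-token error rate compounds quickly with answer length.
for e in (0.01, 0.05):
    for n in (10, 100, 1000):
        print(f"e={e}, n={n}: P(correct) = {p_correct(e, n):.5f}")
```

With e = 0.01, a 100-token answer is fully correct only about a third of the time, and a 1000-token answer almost never: this is the exponential divergence LeCun describes, which training can slow (smaller e) but never eliminate.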
- What Tristan Harris, of the Center for Humane Technology, and historians like Yuval Harari were saying in March 2023: “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills.”
- Read Noam Chomsky on “The False Promise of ChatGPT” (March 8, 2023).