I asked the well-known AI tool, ChatGPT, to write a blog post about the dangers of ChatGPT. It did, and what follows is not that post. But I will open by citing the four dangers of ChatGPT—according to ChatGPT:
- Amplification of bias
- Misinformation and fake news
- Manipulation and social engineering
- Ethical implications
Ironically, ChatGPT is dismissive of the dangers associated with its use. As the AI-produced blog post concludes:
“[W]e can strive for the responsible development and deployment of AI conversation models, ultimately harnessing their benefits while mitigating their risks.”
Apparently, bias, misinformation, manipulation, and ethical complications can all be mitigated, according to ChatGPT.
My wife and I write dystopian fiction. As authors, we spend our time thinking about the world falling apart: what would cause it, and how it would unfold. Our novels do not deal directly with AI, but the fundamentals are still there.
Some of what AI promises sounds like a kind of utopia, but often, things that foretell a utopia result in a dystopia instead. Look at science fiction from the past two centuries: there is a long tradition of literature detailing the dreams science prognosticates and the nightmares that can ensue instead.
I don’t maintain that all AI, or even most AI, is dangerous. But I do posit that the widespread embrace of AI without a critical eye to its potential pitfalls is. Science doesn’t always triumph, and just because we can do something doesn’t mean we should.
Using AI to augment a surgeon’s skills and increase the likelihood of success is one thing. Using AI to rent artificial girlfriends to lonely men is another. Yes, the latter really exists, and things like it are proliferating. Again, science and the human (or artificial) mind can accomplish wonders. But we need ethics to tell us what is morally right and wrong and to put guardrails around what science and technology are capable of.
The AI Ghostwriter
AI these days can do many different things. But I turn now to the capability most natural to this blog: writing. I fear that the art of writing is quickly being subsumed by the mediocre output of ChatGPT. Ask it to write you an X-word article on Y topic and you’ll get something passable: not outstanding, but probably satisfactory in many cases.
But for those just starting out on their writing journey, ChatGPT is like a calculator handed to a second-grader who is barely grasping basic arithmetic. It will get the job done much faster, but it will stunt the student’s development in the craft. If we wish to have future generations of men and women who can think independently and express their opinions in incisive prose, we need to curb the current trend toward AI writing everything.
This is all leading to widespread mediocrity at best and lemming thinking at worst. If AI begins to write much of what we read, who is actually behind the opinions and points of view being espoused? Artificial intelligence will become the “thought leader,” and people will all begin to think the same way: the way the bots have been programmed to make us think.
Some have mocked Elon Musk for encouraging a “pause” on the development of AI until its risks can be properly understood and addressed. The open letter he and more than 27,000 others signed states:
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
This non-AI author agrees that we must tread lightly with AI and not rush headlong into its rapid development and widespread overuse. We want utopian results, not a dystopian scenario; that should remain on the pages of a novel.