The Ethics of Writing Artificially Intelligent Books by Gary Stuart

John Matthew Fox[1] gets the blame/credit for this woefully unintelligent screed about artificial intelligence. He does not like it, and neither do I. But at least he understands it technologically, if not cognitively. He says some writers will soon publish artificially intelligent books and flood the marketplace. Some will claim “AI-Assisted” in the front notes. And some will disguise the fact that ChatGPT wrote part of their book. Worse yet, he thinks in the next decade, AI programs will be able to write an entire book. How could that be when millions of aspiring authors have spent the last several eons trying to write their books but are still at the halfway point? Is it because we rely on actual intelligence and are leery of artificial intelligence? Artificial means made or produced by humans rather than occurring naturally. So, shouldn’t this new thing be called Unnatural Intelligence?

ChatGPT is short for Chat Generative Pre-trained Transformer. Pre-trained? By whom? It’s a pre-trained transformer of what? Does it transform chats, bots, or chatbots? A chatbot is a computer program that uses artificial intelligence and natural language processing to understand questions and automate responses to them, simulating human conversation. Botting is also new—but it has a Wikipedia page relating it to an internet bot, a software application that runs automated tasks over the internet. Apparently, it could be a video game bot disguised as an automated player in a video game.[2] Maybe you are botting when you treat artificial intelligence as a replacement for actual intelligence, and natural language processing as a replacement for talking. I suspect this is probably a tech way of doing away with analog altogether, replacing it with digital, and not having to breathe, take in nourishment, or sit upright. This can be done inside a plexiglass box in which bots play with one another, or not. Probably not, since ChatGPT was launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) using both supervised and reinforcement learning techniques.[3]
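For readers curious what “automated responses simulating human conversation” looks like at its most primitive, here is a minimal, hypothetical sketch of a rule-based chatbot. The keywords and canned replies are invented for illustration; systems like ChatGPT do not work this way internally, since they learn their responses from data rather than from hand-written rules.

```python
# A toy, hypothetical rule-based chatbot: canned replies keyed to keywords.
# Modern systems like ChatGPT learn responses from data instead of rules.
def chatbot_reply(message: str) -> str:
    """Return an automated response based on naive keyword matching."""
    text = message.lower()
    if "hello" in text:
        return "Hello! How can I help you?"
    if "book" in text:
        return "Tell me more about the book you are writing."
    # Fallback when no keyword matches.
    return "I'm not sure I understand. Could you rephrase?"

print(chatbot_reply("Hello there"))
print(chatbot_reply("Can you write my book for me?"))
```

The gulf between this sort of keyword lookup and a large language model is exactly the gulf the essay worries about: the former is obviously a machine, while the latter can pass for a person.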

Believe it or not, they say this ChatGPT thing is known for detailed and articulate answers across many domains of knowledge. Its uneven factual accuracy seems itchy, but it is a bot, after all. What did they think—it was real? Human-like? Cute? Cuddly? No, none of that, but when it was released, its maker, OpenAI, was valued at $29 billion.[4]

So, fellow writers and ethicists, what are the ethical imperatives of this incorrectly named thing called Artificial Intelligence?

“AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment.”[5]

Harvard University clarified that philosophical question. “Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar . . . We’ve not yet wrapped our minds around the hardest question: Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”[6]

As expected, Madame Wikipedia chimed in. “The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use, and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.”[7] I suppose graduation from intelligent to super-intelligent was inevitable, like graduating from high school and entering college—we went from merely intelligent AI to super-intelligent AI.

Could our human moral behavior be evaluated by one of these Super Intelligent machines? If so, will that upgrade human moral behavior to super-human, super-intelligent moral behavior? Could one of those super-duper machines match the actual achievements of Mother Teresa? Could it evolve into the moral messages of Emmeline Pankhurst, Frederick Douglass, Florence Nightingale, Charles Darwin, Martin Luther King, or Pope Francis?

Could an artificially intelligent machine establish the line between ethics and morals? Or would it meld these two human traits together? Humans know the difference, even though they are, in reality, two sides of the same coin. The ancient Greek word “ethos” (ethics) means character, whereas the Latin “mos” (morals) means custom. Unfortunately, these translations are largely unhelpful for a modern-day interpretation of the guiding principles that relate to the difference between ethics and morals, right and wrong, and being a “good” person.[8]

Here’s the difference. Ethics are rules to be followed in a professional setting, such as a code of ethics in medicine, law, and business. The other side of that coin—morals—defines an individual’s personal principles. That brings us back to the underlying subject—bots, chats, machines, artificiality, actuality, and the reinforcement of learning techniques. Machines are built by people. But if the machine is super intelligent, will it reproduce itself by reinforcing new learning techniques? Not only new, but inapposite to the ethics rules and moral conduct of the humans who built the machines? Time won’t tell. The machines will.

Gary L. Stuart





[4] ChatGPT creator OpenAI doubles the startup’s valuation to $29 billion. Insider, Lakshmi Varanasi, January 5, 2023.


[6] Ibid.



