By Dick Bourgeois-Doyle
(This article was originally published in the January 2024 issue of Leacock Matters)
When the founding members of the Leacock Associates gathered in 1946 to establish a new literary award, they probably did not envision a day when such a prize could be won by a machine.
But over the last decade, advances in what is termed Generative Artificial Intelligence (AI) have made it possible to produce entire stories and other creative works with minimal human input. Machine-generated works have been surreptitiously entered in, and have even won, visual arts awards, and they have been short-listed in literary competitions (e.g., Japan’s Hoshi Shinichi Literary Award and the 2023 Sony World Photography Award).
Academic institutions around the world are seized with the very real possibility that student essays and other papers could be materially, if not completely, produced with AI tools such as the increasingly popular ChatGPT. In response, they have introduced guidelines and are actively developing software to detect abuse of AI systems. See the McMaster University AI Task Force resources.
But the challenges are complex.
Though tech purists might balk at this description, it can be useful for most mortals to think of AI as trial and error on a massive scale. The concepts that underpin this process have been studied for decades under bewildering labels like “backpropagation,” “neural networks,” and “algorithm optimization.” But these ideas rested in the realm of the theoretical until the recent rise in computing power and access to gigantic data holdings (often the personal info we surrender when blindly accepting Terms and Conditions). These advances allow computers to access visual, audio, and written information in units measured in the billions and to run those “trial-and-error” learning efforts on a similar scale.
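For readers who would like to see that trial-and-error learning in miniature, here is a toy sketch in Python. It is an illustration of the general idea only, not how systems like ChatGPT are actually built; the hidden rule (y = 3x + 2) and all the numbers are invented for the example. The program guesses a rule, measures how wrong the guess is on each example, and nudges the guess toward being less wrong, thousands of times over.

    # Toy "trial-and-error" learner (illustration only; real generative AI
    # does something like this with billions of adjustable numbers).
    # Goal: discover the hidden rule y = 3x + 2 from a handful of examples.

    examples = [(x, 3 * x + 2) for x in range(10)]  # the "training data"
    w, b = 0.0, 0.0                                 # the machine's initial guess

    for step in range(2000):                        # thousands of tiny corrections
        for x, target in examples:
            prediction = w * x + b
            error = prediction - target             # how wrong was this guess?
            w -= 0.01 * error * x                   # nudge the guess to be less wrong
            b -= 0.01 * error

    print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # converges toward y = 3.00x + 2.00

Scaled up to billions of adjustable numbers and trained on oceans of text and images, this same guess-and-correct loop is what powers the generative systems discussed here.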
The result is manifest in applications like Apple’s virtual assistant Siri: capabilities that not long ago would have struck most humans as supernatural but are now considered commonplace.
Pros and Cons
While the Leacock Associates does not have the technological resources of universities, it does have an interest in the issue. On the plus side, AI could be used to help authors craft better stories. From one perspective, the use of AI assistance is not unlike the long-established practice of seeking input from other writers, editors, and research assistants. AI could thus be viewed as a tool for expanding the community of competent Canadian humour writers and democratizing the arena of creative literature. AI-generated literature also has the potential to make the publishing process more efficient, possibly making books more accessible.
On the other hand, some worry that Generative AI will diminish the development of humans as creative beings. Autonomous writing machines could, for example, be used to replace human writers in early career functions such as institutional communications, media relations, journalism, and editorial work: employment that has provided income and served as the training ground for writers.
A world reliant on AI literature, which by design draws upon existing work, could drain written language of nuance, personality, and emotion, leading to a loss of diversity and a biased homogenization of writing styles, perspectives, and topics. Worse, AI can amplify the capacity to spread misinformation, incite violence, and expose confidential information.
Another ethical issue evoked by autonomous writing and its use by students is the prospect of unconscious plagiarism and copyright infringement. AI-generated literature created with algorithms designed to mimic the writing style of a particular author will add a new dimension to evaluating one work against another.
Scary stuff and only a hint of the full scope of AI concerns.
Leacock Medal – We’re Pro Humans
Much of the capacity to combat negative impacts rests on the shoulders of governments and regulatory agencies. They could, for example, introduce new laws to protect copyright and privacy and take measures to ensure that AI-generated content is used responsibly.
Within this broad context, the Leacock Medal has specific interests. While many major publishers are already devising AI guidelines for authors, the Leacock Medal also accepts self-published entries to the main award program and holds a competition for unpublished, short works by high school students.
Furthermore, the Medal seeks to honour the memory of a man who actively celebrated the association between “Humour and Humanity,” the human element in humorous literature, and the benefit of humour to the functioning of human beings. An award in Leacock’s honour is certainly compelled to preserve the ethical, human element of humorous works.
Finally, the Leacock Medal has long championed the authors who participate in its competition, suggesting that the Medal should also act to protect these writers as the originators of ideas and written works, as well as their economic interests and worth in the creative enterprise.
Seeking a Balanced Approach
Against this backdrop of massive technological change and international disruption, a humble, volunteer-based program like the Leacock Medal could feel impotent.
In the longer term, educators and award programs may have easy access to software tools that quantify machine involvement. But for now, the Leacock Associates is considering a balanced approach based on an honour system and respect for author entrants. One modest step, for example, would be an addition to the submission guidelines for the Student Award Competition that recognizes the reality of AI while celebrating and encouraging human input, ethical conduct, transparency, and openness.
The Associates are, at the very least, obliged to uphold the requirement for originality in the age of AI. This would prohibit direct or indirect plagiarism, copyright infringement, and the use of another’s work as an entry in the competition.
So, while founding members of the Leacock Associates could ignore the possibility of machine-generated entries, their 2023 counterparts cannot.
(Surprise! I ran this piece through ChatGPT – which advised me to add headings – as I did – and suggested it might be more engaging with comments from authors and book lovers – so, feel free to send me some.)
Dick Bourgeois-Doyle
bourgeoisdoyle@gmail.com
___
BIO: Dick Bourgeois-Doyle is an Honorary Member of the Leacock Associates, former Secretary General of the National Research Council of Canada (NRC), and former Chair of NRC’s Task Force on AI Ethics (2017). He is also author of Two-Eyed AI: A Reflection on Artificial Intelligence and Indigenous Ways of Knowing (2019: Canadian Commission for UNESCO).