By Ruth-Anne Klassen, Student Liaison
Each month, the IDL SIG hosts a “First Friday” social event for students and members. Each edition of First Friday includes a short presentation. Jenn Buckley, M.Ed, MBA, MCCT, CPTD, was the presenter on Friday, June 2.
Jenn's presentation and demo focused on using ChatGPT to reduce time spent on creative tasks and on navigating the controversy around ChatGPT and similar AI tools.
During the demo portion of the presentation, Jenn prompted ChatGPT to produce the following:
- A top-ten list of Learning and Development influencers
- An attendance policy
- A script between a plumber and a customer explaining how a toilet flapper works
- A multiple-choice biology quiz with explanations of the correct answers
Although ChatGPT provided detailed answers, Jenn highlighted many limitations of the software. For example, only the paid version prevents you from being kicked off the platform when too many users are on it simultaneously. The software's knowledge is also cut off at September 2021, so it cannot draw on more recent sources. ChatGPT is like a child in that it needs a specific prompt to do what you want. When generating the script, its dialogue sounded stiff and unrealistic, unlike real-life conversation. We discovered other limitations as well, such as poor translations into other languages and incorrect sources and authors for scholarly articles.
ChatGPT users need to take certain precautions to avoid pitfalls with the software. Since ChatGPT is just a tool, it can be used for good or ill, but it carries no moral weight of its own. Specifically, Jenn proposed that we need not think of ChatGPT as plagiarism software, as many sources have claimed. To avoid plagiarism, though, Jenn advised creators using AI to “Enhance or present sources in a whole new way, or else… footnote it.” She also summarized Josh Cavalier’s list of ChatGPT’s limitations: it provides inaccurate information and raises privacy concerns. It is helpful, too, to note Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey human orders unless doing so would violate the first law.
- A robot must protect its own existence as long as doing so does not violate the first or second law.
Before Jenn’s session, I thought of ChatGPT as software that compiled information to produce responses imitating human writing. I had heard of students and employees using ChatGPT or similar tools to cut labor hours by producing written work for them, along with experiments like having AI finish a poem that lacked a conclusion. Still, I doubted that generative AI could replicate the quality of human-created work, since it lacks the human element. It strikes me as similar to the limitations and risks my retail employer faces in using self-checkouts, which help customers pay for their merchandise with fewer human cashiers.
During the session, I saw ChatGPT in action for the first time. I loved how it was able to compile interesting responses to a variety of prompts. Even so, I agreed with Jenn and other attendees that ChatGPT’s responses still need editing and fact-checking.
I would use ChatGPT for fun and leisure, but not for work or other duties where I’m expected to compose my own writing, with my own word choice and thought process. Similarly, I see a job search and career development as brand-building opportunities, where I show how I add unique value through my creative process and values. If, however, an employer expected me to use generative AI to reduce labor hours, there would be no pretense of originality in my work. I could normalize such a task by comparing it to an open-book exam, where the test is not of memory recall but of abilities like researching and compiling information. In the case of ChatGPT, my work would involve giving it effective prompts and editing its responses.
Even though I have written about the “originality” of “my own” work, I think generative AI accentuates how difficult, or even impossible, it is to be truly original. Although my writing might sound unique to a reader, I may still be drawing on ideas from others or from resources I have encountered. Anyone who has attended school has learned the importance of citing sources and avoiding plagiarism, but generative AI may further blur the line between plagiarism and unique branding.