Today, there is a major uptick in the presence of artificial intelligence (AI), from the new AI search engines integrated into apps like Instagram to the increased utilization of products such as ChatGPT within classrooms.
The major concern within classrooms is generative AI, which acts as a powerful text generator. Trained on vast amounts of internet text, these tools can produce entire documents, paragraphs or speeches on demand.
Generative AI can be used in several ways, such as compiling and summarizing research, generating out-of-the-box ideas and performing other creative tasks within the generator’s capacity. However, this technology is advancing at a startling rate, allowing students to use it in a fashion which can only be defined as cheating.
Using platforms like ChatGPT, students can generate entire essays or short story responses, plugging them into exams and papers as their own. Usually, the use of AI is relatively easy to detect, but what are the consequences for a student caught using artificial intelligence?
With the increased emergence and proliferation of cheating through AI, faculty at Oklahoma Christian are developing policies to create parameters for use of the technology.
Brian Simmons, the Director of the College of Liberal Arts, says there is no blanket policy within the college when it comes to generative AI. Professors are given the discretion to develop individual policies regarding the use of this technology in the classroom.
“There is a variance in who is incorporating AI into their classrooms and how they do so,” Simmons said.
In the syllabus for his fall 2024 Applied Mathematics class, Paul Howard states: “During this course, AI may not be used during exams, quizzes or funding the final or when completing the house project. The goal of AI is not to get work done.”
However, Howard does offer occasions in the syllabus when generative AI could be useful, such as, “creating study problems with solutions, creating examples and explanations for mathematical concepts.”
While not entirely against AI usage, Howard clearly does not want students completing homework with generative AI software.
Some professors, though, take a more flexible approach to AI usage in their classrooms, attempting to integrate it productively into the learning process.
Nathan Shank, a professor in the English department, shares how he implements generative AI across different classroom settings.
“AI use varies from course to course,” Shank said. “While introductory courses need to have a strict, no generative-AI use, upper-level courses allow its monitored and stated use in order to prepare students for workplace writing.”
Shank says he is trying to prevent rather than police improper AI use.
“I break assignments into parts that are submitted separately, making sure the prompt doesn’t easily lend itself to AI whole cloth production of a text,” Shank said.
Shank’s positive view stretches even further: he aims to incorporate AI into the writing process rather than fear its negative ramifications.
“Even in lower-level courses, I show students appropriate uses of AI, such as asking for advice in the same way you’d ask a writing tutor,” Shank said. “In upper-level [classes], I encourage students to use it to generate outlines, ideas, research and even text.”
To learn more about the ethical use of AI, a seminar titled “The Ethics of Artificial Intelligence” will be held at 7:30 p.m. Thursday, Oct. 3, in Judd Theater. Hart Brown, CEH and QRD, will discuss the importance and necessity of ethics in the AI field with his presentation at the Millican Ethics Symposium. The event is free, and some professors are offering course credit for attending.