{"id":25717,"date":"2024-09-03T09:00:16","date_gmt":"2024-09-03T16:00:16","guid":{"rendered":"https:\/\/www.nexusnewspaper.com\/newsite\/?p=25717"},"modified":"2024-09-18T10:39:19","modified_gmt":"2024-09-18T17:39:19","slug":"i-ai-bot-a-classroom-discussion-in-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.nexusnewspaper.com\/newsite\/2024\/09\/03\/i-ai-bot-a-classroom-discussion-in-artificial-intelligence\/","title":{"rendered":"I, AI-bot: A classroom discussion in artificial intelligence"},"content":{"rendered":"<p>Artificial Intelligence (AI) is the seemingly paradoxical concept of non-living entities generating persistent logic and reason\u2014which, from the earliest days of ancient philosophers such as Descartes, are the singular defining features of what it means to be human.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>The first actual attempts at simulating human intelligence came in 1943, when academics Walter Pitts and Warren McCulloch published ideas in <i>The Bulletin of Mathematical Biophysics<\/i>, wherein they used mathematical functions to simulate a \u201cneural network\u201d such as those found in the human brain. While this function may be considered similar to today\u2019s circuit boards, study shifted to AI as we know it, which can be traced back to a conceptual experiment by Alan Turing in 1950.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>In \u201cComputer Machinery &amp; Intelligence,\u201d Turing chose to avoid getting hung up on defining a \u201cmachine\u201d and what it means \u201cto think.\u201d Instead he focused on functionality, asking if machines can do what we as thinking entities can, and decided that if a typewritten conversation between a machine and a human could be considered indistinguishable, then a machine can be considered intelligent. 
This became known as the Turing Test.<\/p>\n<p>Along the winding path that took us from mathematical algorithms to Siri, computer programmers eventually developed rudimentary language-parsing software, which identified pre-programmed combinations of keywords as containing \u201cmeaning.\u201d This is probably best exemplified by the early text-based adventure games of the late \u201970s and \u201980s, wherein a player might instruct the computer to \u201ctake food\u201d or \u201cthrow rock break window.\u201d<\/p>\n<p>This labour-intensive process quickly revealed a programmer\u2019s searing holy hellfire: if intelligence is based around explicit instructions, then complex intelligence requires manually pre-programming millions of commands to account for every possible eventuality. Called \u201ctop-down\u201d programming, this was quickly found to be impracticable, and eventually a \u201cbottom-up\u201d approach was conceived, which we now know as machine learning. This begins with a simple premise and allows the computer to use trial and error to rewrite and refine its own rules and associations.<\/p>\n<p>Fast forward to 2017, and our very perceptions of reality are thrown violently into question through the emergence of deepfakes, or the process of using machine learning to generate a convincing video depiction of a real person, and with that, the AI ball as we know it was set into motion. Since then, AI technology has become eerily sophisticated, and we now have AI personalities that can, in a superficial sense, pass the hypothetical test that Turing envisioned almost 75 years ago.<\/p>\n<p>There are three main avenues of AI that we may encounter: speech synthesis\/cloning, image\/video generation, and text-based artificial personalities. 
It\u2019s the latter two that are causing the most stir, with visual AI algorithms capable of generating shockingly coherent artwork from complex instructions, and conversational AI algorithms creating (mostly) passable writing.<\/p>\n<p>This article explores AI from an academic standpoint. Is procedurally generated artwork legally distinct from the real artwork that the AI was trained on? Does it qualify as transformative fair use, and even if it does, is it ethical? Can AI-generated writing be trusted as accurate? And can the use of AI in assignments be considered cheating?<\/p>\n<p>Unfortunately, the unprecedented ingenuity of this technology leaves many of these inquiries in a murky grey area, but through discussions with some Camosun instructors, I\u2019ll do my best to find some clarity.<\/p>\n<p><b>Legality<\/b><\/p>\n<p>The question of legality emerges primarily around the procedural generation of AI artwork, where algorithms are trained by assimilating millions of actual, human-created art pieces. Those opposed to AI art say that art must be original material, and because AI algorithms are trained on real artwork without the creators\u2019 consent, this is, in their opinion, considered stealing. Proponents counter that just as human artists are inspired by pre-existing media, the crucial element lies in the concept of \u201ctransformative fair use\u201d: the AI creates something similar but entirely distinct, with no permission necessary.<\/p>\n<p>Even the concept of art itself is crumbling under scrutiny. Some argue that true art cannot be created by machines, because art is an emotional expression of the human experience. 
Others feel that art has always existed independently of the creator\u2014that the meaning lies in the artwork itself, and if a piece causes an emotional reaction in the human viewing it, it has artistic merit.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<figure id=\"attachment_25718\" aria-describedby=\"caption-attachment-25718\" style=\"width: 188px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-25718\" src=\"https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer-188x300.png\" alt=\"\" width=\"188\" height=\"300\" srcset=\"https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer-188x300.png 188w, https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer-438x700.png 438w, https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer-768x1229.png 768w, https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer-960x1536.png 960w, https:\/\/www.nexusnewspaper.com\/newsite\/wp-content\/uploads\/2024\/08\/1-cover-ray-nufer-1280x2048.png 1280w\" sizes=\"auto, (max-width: 188px) 100vw, 188px\" \/><\/a><figcaption id=\"caption-attachment-25718\" class=\"wp-caption-text\">This story originally appeared in our September 3, 2024 issue (illustration by Ray Nufer\/<em>Nexus<\/em>).<\/figcaption><\/figure>\n<p>Camosun College Criminal Justice instructor Michel Legault says that as far as legality is concerned, the answer is undefined, because laws and policies are largely a product of unfortunate hindsight.<\/p>\n<p>\u201cThe \u2018legality\u2019 is the term that probably creates the grey area, because there\u2019s no legal framework <i>per se<\/i>. 
Laws are made as a result of \u2018something went wrong.\u2019 Yes, some laws are made for prevention, but most of the laws are made as a result of something that happened, an offence occurred or whatever,\u201d he says. \u201cSo if AI is used in a way to create a criminal offence of some sort, then legislation will probably follow at that point. In the meantime, academic institutions or even companies, for that matter, have to create their own policy around the use of AI, saying, we expect a level of critical thinking, we expect you to do your research, understanding what you bring to the table.\u201d<\/p>\n<p><b>Integrity<\/b><\/p>\n<p>The most rampant problem arising in schools regarding the use of AI is students using engines such as ChatGPT to generate answers to their assignments and trying to pass them off as their own work. My original opinion was that ChatGPT uses a lot of words to say nothing of actual substance, so it must be laughably easy to spot and weed out, and therefore not really a true academic threat.<\/p>\n<p>However, that view was challenged by English instructor Kelly Pitman, who says that students who use AI place a heavy labour load on teachers and actively contribute to the erosion of trust between instructors and students as a whole.<\/p>\n<p>\u201cIt requires, typically on average, several hours per assignment that may be dependent on AI, because I have to go through a process of checking through students\u2019 writing, and I have to talk to the student, and I often have to schedule a rewrite for them,\u201d she says. 
\u201cThat\u2019s how it\u2019s transformed my life, and I\u2019d have to say my fear is that it\u2019s also transforming my relationship with students, because your default is always, \u2018Hmm, this seems like a kind of set essay from a book, I wonder if the student wrote it,\u2019 and you\u2019re constantly having to ask that question\u2026 and often the answer is that they did not.\u201d<\/p>\n<p>Detecting plagiarism used to be a simple process of copying and pasting text into a search engine to check for glaringly similar matches. Unfortunately, AI-generated text is too variable to be caught by such strict parameters, and instructors have to rely on their intuition to spot inconsistencies in writing style. Accusing a student of academic dishonesty is no trivial matter, and while she has only ever been wrong once, Pitman lives with the constant anxiety of misinterpreting the situation and falsely accusing an innocent student.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>\u201cI\u2019m also very afraid of being wrong, and accusing someone who has really worked hard to do what they had to do, but I have had a lot of conversations with students who have used AI, and they are painful and difficult conversations where they tell me why, and I tell them why they can\u2019t, and I feel for them,\u201d she says. \u201cOften they know it\u2019s wrong, and they\u2019re doing it because of extrinsic factors. They\u2019re nervous about it, they\u2019re panicking. 
They\u2019re working a full-time job, they\u2019re living in a dump with a roommate they don\u2019t like, and worried about the future, and there\u2019s a lot of motivation [to cut corners].\u201d<\/p>\n<p>Camosun College Faculty Association president Lynelle Yutani, who is also a Camosun instructor, says that, like Pitman, she focuses less on blind, vindictive punishment than on understanding the student\u2019s justifications, in order to gain some context on their poor choice.<\/p>\n<p>\u201cWhenever I\u2019ve had a student in my class who has bent those academic integrity boundaries, the reason behind that has always interested me more than the thing that\u2019s actually happened,\u201d says Yutani. \u201cThe pressure to get good grades, or the consequences of failing out of an expensive program, or sometimes there\u2019s ego involved, all of these things that drive us to do things that are very survival-instinct based.\u201d<\/p>\n<p><b>Consequences<\/b><\/p>\n<p>The repercussions of cheating extend far beyond the immediate consequences of getting caught or the perceived short-term benefits of following the path of least resistance. Pitman thinks that students considering cheating should re-evaluate the very reason that they are paying an arm and a leg to attend college in the first place.<\/p>\n<p>\u201cWhat I wish is that students would ask themselves more often whether it matters whether they know the content of the courses that they\u2019re taking. Is it about a ticket to a job or is it about learning?\u201d she says. \u201cThat\u2019s the core question, because if it\u2019s really just a bunch of hoops to go through so that you can start your future, you\u2019ve got your teachers insisting that you learn and that\u2019s not your priority. There will come a time when you will be expected to demonstrate the skills that it says on paper that you have. 
So you\u2019re losing that opportunity to make sure that you will be able to do that.\u201d<\/p>\n<p>Yutani says that students who graduate without legitimately learning are not only doing themselves a disservice\u2014they also reflect negatively on the instructors whose job it is to make sure students who traverse the education system emerge properly and thoroughly educated.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>\u201cIf I graduate students into the workforce that don\u2019t have the skills or capabilities that we\u2019re certifying on the basis of credentials,\u201d she says, \u201cis that my fault because I didn\u2019t catch them, or is that the student\u2019s fault because they used AI and passed it off as their own work?\u201d<\/p>\n<p>To Pitman, the value of education does not lie solely in written knowledge, but also in the ability to reason, problem-solve, and think critically. Students who take the time to learn and understand are also taking the time to improve how they resolve problems and challenges in their own lives and in the lives of those around them.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>\u201cI\u2019m surprised at how often I have to work with a student to get them to think about the knowledge, and what value that might have for them, and what value it might have for a culture that we have people who can think their own way out of problems,\u201d says Pitman. 
\u201cI see myself performing a civic duty as well as preparing people for employment, and I don\u2019t want to send people away with a college education or a university degree who can\u2019t read a book or who can\u2019t spot a problem in an argument, can\u2019t think their way through to a new solution that no one\u2019s thought of.\u201d<\/p>\n<p><b>Perspectives<\/b><\/p>\n<p>Yutani believes that AI is not a magic technological solution, and that it has the potential to do more harm than good if not used responsibly.<\/p>\n<p>\u201cIt\u2019s not a panacea to solve all your problems and make everything easier. Depending on how it can be used, it\u2019s very possible for it to make things worse. And I think that\u2019s what we\u2019re seeing in general, in pop culture media,\u201d says Yutani.<\/p>\n<p>Pitman points out that AI can only ever function using popular points of view that have already existed across many years of the internet, and that in order to move forward as a culture, we need to find new discussions and new ideas.<\/p>\n<p>\u201cAnother thing that I think students need to think about with AI is we\u2019re living in a time of great development of our ideas, about identity and power, and AI by default is reproducing dominant discourse,\u201d she adds. \u201cWe can\u2019t just keep rehearsing and revising the same old knowledge; that\u2019s been getting us nowhere.\u201d<\/p>\n<p>AI algorithms have no sense of morality, and that moral sense is a subtle part of what makes us human. Distinguishing between morally complex scenarios is something AI cannot do, because it lacks empathy.<\/p>\n<p>\u201cThe important question is, \u2018What is the human element, and does it matter?\u2019 If I ask someone to write an argument about the best political candidate for the 2025 federal election, AI could probably do a comparison,\u201d says Pitman. 
\u201cBut a human being can have a subtlety to bring to that conversation, including putting together a moral subtlety that AI cannot do. A machine doesn\u2019t make value judgements, and value judgements are a lot of what we do when we communicate and work together.\u201d<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>On the other hand, like technologies that have come before it, AI can play a positive role in the classroom and workplace, and shouldn\u2019t be viewed as strictly negative or creating double standards.<\/p>\n<p>\u201cI\u2019m not sure if it\u2019s so terrible when you look at it from an equity or accommodation perspective,\u201d says Yutani. \u201cNot everybody can have the same level of skills, but if we have tools that allow people to achieve a similar level, is that the worst thing in the world? If tomorrow you\u2019re employed and your employer says, I need you to get this thing done, and you don\u2019t know how to do it, well, for a lot of people the first step would be Google, YouTube, and AI. If you deliver a product that works for your employer and they\u2019re satisfied with it, then you\u2019re successful in the workplace, and if I\u2019m in an applied learning classroom, and I say, \u2018No, you can\u2019t do any of those things,\u2019 that\u2019s a bit disingenuous, don\u2019t you think?\u201d<\/p>\n<p>Yutani believes that open communication is the key to properly integrating AI into the classroom\u2014there is no reason for a student to slink around in the shadows using this forbidden technology surreptitiously, like a child sneaking into the kitchen to get a midnight snack. 
The knowledge\u2014and the snacks\u2014are there to be consumed freely; we just need to communicate openly about it.<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>\u201cTo me, it\u2019s not AI that\u2019s a problem, it\u2019s our approach to incorporating it into our lives that is not being done in the most appropriate and collaborative way,\u201d says Yutani. \u201cI think it\u2019s about disclosure and consent. Let\u2019s agree to share when and how we\u2019ve used AI on anything, whether it\u2019s me producing material to you, whether it\u2019s you turning in material to me. Let\u2019s agree that we\u2019re just going to be transparent about that.\u201d<\/p>\n<p>Legault agrees that the key to embracing AI is to treat it like any other information source: critically, with stringent verification and citation. However, verifying AI output can mean far more work than you would have to do if you just used Google, like our pre-Generation-Alpha ancestors did.<\/p>\n<p>\u201cThere\u2019s nothing wrong with doing research, the idea is what do you do and how do you bring that research? Do you properly quote it, do you cite it? If you\u2019re asking AI to do your paper, there\u2019s some very positive aspect to that, if you are prepared to say so, as well as make sure that you are able to bring your own knowledge to the table,\u201d he says. \u201cYou have to be able to critically approach the information in front of you, and verify it. 
And if you verify everything that\u2019s there to the right source, and then you\u2019re able to cite the right source, it\u2019s doing the work twice, in a sense, because if you did the research right from the get-go, you arrive at the same place.\u201d<\/p>\n<p>Verifying the info you receive from AI is of the utmost importance, since ChatGPT is notorious for haughtily telling bald-faced lies based on some obscure <i>Onion<\/i> article that it believes to be absolute fact, because mama never told her little chatbot that you shouldn\u2019t believe every byte of data you scrape out of the dank recesses of the internet.<\/p>\n<p>At the end of the day, AI is here to stay, and the sooner we can stop treating it like some grubby little thief whispering treacherous filth into our ears, the sooner we can gingerly embrace it before moving on to some other groundbreaking technology at which we can wave our torches and pitchforks.<\/p>\n<p>Pitman reminds us that while hard work is a virtue that cannot be overstated, teachers can also do their part to help struggling students so they won\u2019t have to resort to criminal enterprises to pass.<\/p>\n<p><span class=\"Apple-converted-space\">\u00a0<\/span>\u201cI think we just have to accept that it\u2019s not unfair or a bad thing if something takes work. AI is much easier, and that\u2019s its attraction, right, but I think Thomas Edison said that there\u2019s no replacement for hard work,\u201d she says. 
\u201cBut I also think teachers have a role to play in assuring that they\u2019re thinking about how to help students master the material, how to layer the knowledge so that they go into an assignment with a pretty good basis.\u201d<span class=\"Apple-converted-space\">\u00a0<\/span><\/p>\n<p>Yutani agrees that people shouldn\u2019t get carried away by the negative connotations that AI has stirred up, and instead focus on using it in a way that advances our cause as mostly intelligent, sometimes-sentient living creatures.<\/p>\n<p>\u201cI\u2019ve actually heard this \u2018sky is falling\u2019 scenario about every major technological advancement over the last 30 years, that\u2019s been my whole life,\u201d she says. \u201cI recognize that it\u2019s going to be difficult, we\u2019re going to make mistakes, but in the end, society only moves forward, and we\u2019re going to have to make the best of that and do so with the least collateral damage.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence (AI) is the seemingly paradoxical concept of non-living entities generating persistent logic and reason\u2014which, since the days of early modern philosophers such as Descartes, have been held as the defining features of what it means to be human.\u00a0 The first attempts at simulating human intelligence came in 1943, when academics Walter Pitts and Warren 
[&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":25718,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[10,316],"tags":[],"class_list":["post-25717","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-features","category-september-3-2024"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/posts\/25717","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/comments?post=25717"}],"version-history":[{"count":1,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/posts\/25717\/revisions"}],"predecessor-version":[{"id":25719,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/posts\/25717\/revisions\/25719"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/media\/25718"}],"wp:attachment":[{"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/media?parent=25717"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/categories?post=25717"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nexusnewspaper.com\/newsite\/wp-json\/wp\/v2\/tags?post=25717"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}