ARTIFICIAL INTELLIGENCE AND PREDICTIVE TECHNOLOGIES

Students: in the midst of the hype (positive and negative) surrounding ChatGPT, Google Bard, and other Large Language Model (LLM)-based chatbots, are you wondering where MU stands? The administration is taking a cautious approach that seeks to provide foundational working assumptions as well as to respect instructors’ academic freedom. You can see the current statement that outlines this approach here.

Where do you fit in?

  • Please do read the current statement on these technologies’ use at MU, and feel free to reach out to Academic Integrity if you have any questions or suggestions.
  • In academic work, the university treats these chatbots (ChatGPT, Google Bard) like any other source: when you use them, cite them as you would any other source.
  • If you have any doubts or questions about what is and is not appropriate use of these technologies in a class you are taking, ask your instructor for clarification right away. Don’t wait!
  • If you have a deeper interest in these issues, feel free to contact Dr. Jacob Riyeff, who is organizing a “Critical AI” working group to provide the MU community with a broad view of these technologies and their implications, with a student focus.

Support for the Classroom

Context and Clarification of Expectations

  • Since the fall term of AY 2022-2023, large language models (LLMs, commonly referred to as “chatbots,” “artificial intelligence,” or “AI”; e.g., ChatGPT and Google Bard) have been available to the public. These models produce responses to prompts, and the sophistication of those responses has put pressure on educational models and institutions.
  • Whatever our personal opinions on the merits, value, implications, and legitimate uses of such LLMs (chatbots) in the larger world and in education, we as a university community must find new ways of navigating our educational mission together in this new context. Some may desire a maximalist approach that invites complete integration of LLMs into classroom learning and assessments. Others may want to shun their use entirely. Presumably, many will fall somewhere in between.
  • The university wants to provide clarity on this issue while also allowing for a variety of opinion and practice as colleges, departments, and individual instructors see fit. Regardless of where an instructor falls on the spectrum of use and integration of large language models in their courses, the university strongly recommends that each instructor make clear, both in a course’s syllabus and during class time, what the specific expectations for that class are with regard to these new technologies.
  • More generally, to provide provisional guidance to the university community, the current baseline expectation remains that, unless otherwise clearly attributed, a student is expected to have produced their own text and other content in submitted coursework. Like the unattributed use of any other source, the unattributed use of LLMs in coursework violates academic integrity. Colleges, departments, and instructors are welcome to invite the use of LLMs in their coursework, but if they do so, they should make explicit in syllabi (and, ideally, in assignment sheets and verbally as well) what is expected of students regarding LLM use on specified assignments.
  • In keeping with the necessary honesty and transparency of academic work in general, academic work that allows for LLM use should still attribute such use, as with any other source that scholars use to aid their work. Failure to cite the use of LLMs falls under the usual definition of plagiarism. (Where appropriate, LLMs should be cited using the instructions found on this or similar sites and adapted as needed for different models.) Instructors permitting or requiring LLM use should also make clear that such permission does not apply outside the assignment(s) or course(s) for which the exception has been made.
  • Hopefully these principles will allow instructors in the various disciplines to utilize and experiment with LLMs (chatbots like ChatGPT, Google Bard) in their courses if they so choose, while maintaining a baseline of clarity on expectations for LLM use with regard to coursework, learning outcomes, and academic integrity broadly understood. The deployment, development, and integration of these technologies into various sectors of society will continue to change and evolve, and this statement of guidance will be revised as the university deems appropriate in light of the changing situation.
  • Finally, we encourage instructors to begin the new year from a place of trust and transparency, inviting students into dialogue about the goals of higher education and working toward them together in these new circumstances. We believe that increased surveillance and suspicion will not lead to improved student learning and a culture of academic integrity; rather, fostering candor and cooperation will do so as we engage in the labor of academic work side by side.

Note: Those who view plagiarism as an unwarranted categorization for unattributed LLM use are asked to revisit the definition of plagiarism and to note, in addition, that while the specific text an LLM produces for a particular prompt may be superficially novel, LLMs do not generate their responses whole-cloth: they are trained on prior humans’ texts and other data, and guided by teams of workers who label that data. That is, other humans’ labor and intellectual property are always implicated and always in use when LLMs are employed, however anonymous and depersonalized those humans become in the black-box mediation of an LLM. In addition, generally speaking, that initial human labor and intellectual property were used without those humans’ consent.

Additional Resources