AI, Academic Integrity, and the GOAL Framework

By: Iris S. De Lis


⏤ 🟡 DRAFT – EARLY WORK IN PROGRESS 🟡 ⏤


v4, Updated Friday, August 29, 2025

Overview

Conversations about AI and academic integrity often begin in a place of fear: Will students cheat, and how can we catch them? Traditional grading systems, with their emphasis on one-shot products and high-stakes performance, amplify this fear by creating the very incentives for shortcuts they then punish.

The GOAL Framework — Growth-Oriented Assessment for Learning — offers another path. By centering process, engagement, iteration, authenticity, and equity, GOAL reframes AI — increasingly ubiquitous in our students’ lives — not as a threat to police, but as a tool for thoughtful teaching and learning. When assessment is growth-oriented, the incentives for misuse diminish, and the opportunities for ethical engagement expand.

Five GOAL Remedies

One: From Product to Process

Traditional grading emphasizes a single, high-stakes “product.” This creates strong incentives for students to outsource work to AI if the goal is merely a grade. GOAL instead centers learning processes: engagement, strategies, iteration, and reflection.

Inoue (2022) asserts that when students “center their laboring first” through contracts and reflections, their attention shifts to doing the work rather than performing for points. Carillo (2021) critiques labor-based contracts and calls instead for engagement-based grading contracts, which afford students a variety of ways to engage toward course objectives (see more on this in Remedies Two and Three).

Example: In Daniel Look’s math courses, students typeset proofs in LaTeX with unlimited revisions and present them orally in a conference-talk format. An AI can generate a polished proof, but it cannot replicate the student’s problem-solving process or their verbal articulation of understanding.

Two: Authentic Assessment

AI misuse thrives when tasks are generic and disconnected from students’ lives. GOAL encourages assessments that are inherently more difficult to outsource because they require personal, situated, or disciplinary authenticity.

Examples:

  • Sophia D’Agostino’s case study–based assessments in biology foster real-world problem solving.
  • Silvia Vong, working from Critical Race Feminist pedagogy, highlights that assigning numbers to “human experience” misses the point — authentic assignments invite personal insights and cultural knowledge that AI cannot replicate.

Three: Reducing Anxiety and Barriers

Cheating, including AI misuse, is often driven less by intent to deceive than by anxiety, pressure, and inequity. High-stakes grading systems amplify this pressure, especially for marginalized or disabled students. Many students with disabilities do not disclose them, and therefore lack accommodations (Adam & Warner-Griffin, 2022). Students from historically marginalized backgrounds face heightened stress under inequitable grading systems (Sorensen-Unruh, 2024). Butler (2025) and Mannix (2025) both found that ungrading reduced stress and fostered focus on content over points.

By embedding flexibility, multiple attempts, and clear paths to success, GOAL reduces the desperation that drives misconduct and supports equitable participation.

Four: Cultivating AI Literacy

Instead of banning AI, GOAL encourages faculty to engage students in critical conversations about its ethical use, biases, and limitations.

Examples:

  • Christopher Adamson’s workshop sequence moves from AI-free “grade-free zones” to AI-assisted research, building literacy gradually.
  • Elizabeth Kubek emphasizes teaching students “the essential limits of AI” as part of process-based criteria.

This approach positions AI as a tool students must learn to use responsibly, much like earlier generations of technology (calculators, search engines).

Five: Building Trust and Relationships

Surveillance-heavy approaches erode trust. GOAL, by contrast, emphasizes transparency, equity, and growth, which fosters stronger faculty-student relationships. When students feel seen and supported, they are less inclined to misuse AI and more likely to reach out for help.

Trust is not a soft extra; it is a precondition for authentic engagement, especially for students who have felt sidelined or excluded by traditional academic structures.

Next Steps

Addressing AI and integrity is not about locking down tools or “catching cheaters.” It is about redesigning assessment around learning, equity, and care. GOAL provides a framework for doing this work:

  • Shift the focus from grading products to supporting growth and process.
  • Adopt authentic assessments that invite personal, disciplinary, or cultural voice.
  • Reduce stress and inequities by building flexibility and multiple pathways into design.
  • Teach AI literacy as part of preparing students for the world they are entering.
  • Foster trust by centering transparency and relationships.

Conclusion

The lockdown, catch-and-punish mentality comes from a place of fear. GOAL offers a path of pedagogical courage. By aligning assessment with growth, equity, and authenticity, faculty can reduce the appeal of shortcuts and create richer opportunities for ethical engagement with AI. This prepares students not only to thrive in their courses, but also to navigate a world where fluency with AI and technology is both expected and essential.

References

Adam, T., & Warner-Griffin, C. (2022). Use of Supports Among Students With Disabilities and Special Needs in College (NCES 2022-071). U.S. Department of Education, National Center for Education Statistics, Institute of Education Sciences. https://nces.ed.gov/pubs2022/2022071.pdf

Adamson, C. (2025, June). Promoting AI Literacy with Ungrading [Poster presentation]. 2025 Grading Conference. https://www.centerforgradingreform.org/grading-conference/abstracts/

Butler, M. (2025, June 13). Exploring Alternative Grading Systems: Impacts on Motivation, Engagement, and Stress. 2025 Grading Conference, Online. https://www.centerforgradingreform.org/grading-conference/abstracts/

Carillo, E. C. (2021). The Hidden Inequities in Labor-Based Contract Grading. Utah State University Press.

D’Agostino, S. (2025, June 12). Beyond exams: Using case studies and scaffolded learning for student success [Conference presentation]. In Classroom case studies: Alternative grading in STEM (Symposium). 2025 Grading Conference, Portland, OR, United States. https://www.centerforgradingreform.org/grading-conference/abstracts/

Inoue, A. B. (2022). Labor-Based Grading Contracts: Building Equity and Inclusion in the Compassionate Writing Classroom, 2nd Edition. The WAC Clearinghouse; University Press of Colorado. https://doi.org/10.37514/per-b.2022.1824

Kubek, E. (2025, June 13). Joy Against the Machine: Assessing Writing in the Age of AI. 2025 Grading Conference. https://www.centerforgradingreform.org/grading-conference/abstracts/

Look, D. (2025, June 12). From Napkin Math to Conference Talks: Assessments for Growth, Mastery, and Communication in Upper Level Mathematics Courses. 2025 Grading Conference, Online. https://www.centerforgradingreform.org/grading-conference/abstracts/

Mannix, J. (2025, June 11). Decreasing Stress Using Non-traditional Grading Practices in a Math Education Course. 2025 Grading Conference, Online. https://www.centerforgradingreform.org/grading-conference/abstracts/

Sorensen-Unruh, C. (2024). The Ungrading Learning Theory We Have Is Not the Ungrading Learning Theory We Need. CBE-Life Sciences Education, 23(es6), 1–12. https://doi.org/10.1187/cbe.24-01-0031

Vong, S. (2025, June 12). Can My Teaching Practice be Rooted in Critical Race Feminism include Grading? A Critical Reflection on Pedagogical Dissonance. 2025 Grading Conference, Online. https://www.centerforgradingreform.org/grading-conference/abstracts/