- Video Tip - Student Voice in Gen-AI
- TOPkit Workshop 2026 - Submit a Proposal
- Top Tips - Student-Centered Gen-AI
- From the Community - Student Voice, Smarter Practice
- Top Community Topics
This video centers student voice in Gen-AI. The Learning Design Innovation team stepped out to ask FIU students how they actually use tools like ChatGPT, Claude, and Grok for learning. The video surfaces real patterns students value: study guides, syntax help for coding, step-by-step tutoring for tough courses like chemistry, and planning support. It also shows their cautions about overreliance and unclear rules. The purpose is to ground our newsletter’s recommendations in lived experience so faculty and IDs design from student reality, not rumor.
In the video, students say:
- AI works best as a tutor and organizer, helping them clarify concepts, generate practice prompts, and check code or grammar.
- Cheating feels self-defeating; they prefer guided help that requires their own thinking, like ChatGPT’s Study and Learn flow.
- Overuse is the real risk, so they ask for clear, assignment-level guidance on when and how AI is appropriate.
Submit a proposal for TOPkit Workshop 2026. The workshop will be held as a live, virtual afternoon event, Tuesday through Thursday, March 31 - April 2, 2026, with a Carnival Magic theme. Save the dates! Registration opens soon.
Design from Reality, Not Rumors
Student use of Gen-AI is widespread, but it is not one thing. Many students report using chatbots to clarify concepts, draft practice questions, or organize notes. Others admit to grade-chasing shortcuts. Recent surveys show rapid growth in student AI use and equally rapid confusion about what is allowed, with a sizable share of students unsure about course policies. Faculty attitudes are also mixed, which widens the perception gap.
Surveys about the use of AI in higher education and student experience:
If we want meaningful learning with AI, we need to start from student reality. Listen to how students actually work, set clear permission structures, and redesign assessments to emphasize process, reflection, and verification. That approach lines up with current guidance on assessment and with what students say they need to use AI well.
- Listen first, then set the lanes. Do a one-week listening sprint in your own course. Ask exactly what your students already do with AI, just as in the student interviews above. Expect patterns like concept clarification, study planning, prompt practice, and light copy-editing. Use those patterns to define three lanes:
- Allowed
- Allowed with citation
- Not allowed
- Make rules visible at the point of use. Students repeatedly asked for clarity. Publish a small guidance box in the syllabus and repeat it on each assignment:
- Allowed: brainstorming, outlines, practice prompts, grammar checks
- Allowed with citation: code hints, template language, research leads
- Not allowed: final wording, solving graded problems, citation fabrication
Link to a campus or department template so students see consistent expectations across courses. Clarity reduces unintentional violations and anxiety about cheating accusations.
Campus and department template examples:
- Aim AI at process, not product. Design tasks that require students to show thinking and decision-making. Students liked AI when it kickstarted thinking, not when it replaced it. For example, use AI for brainstorming and outlining, then require students to submit their prompt history, a brief “how I verified this” note, and a paragraph on what they changed after checking sources. Shift grading weight toward artifacts that AI cannot do for them, like data selection, reasoning with course materials, or original examples tied to class content. Guidance from assessment groups supports this re-balancing.
In my courses, “process over product” means students work through a milestone-based GPT that teaches by conversation, checks understanding with short formative prompts, and only then reveals a personalized completion message. Students export the chat and add a screenshot that clearly shows their name in the final message, which lets me verify learning at a glance. In one course the GPT also generates a one-time code that unlocks the next assessment and the rest of the course content; in another it produces a custom joke that includes the student’s name. This design keeps grading focused and authentic because the transcript shows the checkpoints and retries, while the visible completion signal confirms mastery. I give credit for three things: the final message with the student’s name, the required artifact such as the code or joke, and evidence of milestone progression in the transcript. For privacy, students can blur unrelated details or paste the last 20 turns if export is not possible. The approach is adaptive, efficient, and personal, and as one face-to-face student told me, the “punniness” of the final line even sounded like me.
- Teach verification as a mini skill. Make “trust, then verify” a graded step. Require students to check AI claims against assigned readings or primary sources and to note any inaccuracies (I am choosing to stop describing them as “hallucinations”) or missing citations. This builds critical reading and reduces overreliance on AI. Widespread reports of AI inaccuracies make this step essential, and students themselves say they want help using AI responsibly.
- Close the loop every two weeks. Add a two-minute pulse: “How did AI help or hurt your learning this unit?” Adjust boundaries and support based on what students report. Over time, this reduces the gap between perception and reality and improves fairness. Surveys show confusion drops when expectations are reinforced during assignments, not just at the start.
Summary: Start with listening so policy reflects reality. Make rules easy to find and repeat them at the point of use. Point AI at thinking steps and require simple verification. Keep a short feedback loop and tune the rules as practice evolves. You will get better learning signals, fewer misunderstandings, and a healthier culture around AI.
Student Voice, Smarter Practice
From Rumor to Reality: Recommended Picks
The TOPkit community has developed extensive resources that complement UDL implementation. Community contributors have shared insights about assessment design, multimedia integration, and inclusive teaching practices that align with UDL principles.
- How AI Is Changing—Not “Killing”—College (Inside Higher Ed)
A student survey with a reality check: Gen-AI hasn’t torched the value of college, but students do worry about their own critical thinking. Pull one or two stats into your slides to frame “teach it, don’t ban it.”
- Listening to Learners 2025 (Tyton Partners)
Belonging, awareness of services, and career readiness drive persistence, and AI shows up in that mix. Use these findings to align your AI policy with what keeps students engaged and supported week to week.
- AI Detection & Assessment: 2025 Update (Jisc)
Short, practical, and refreshingly honest: what detectors can’t do reliably, and what to do instead. Great for pivoting colleagues from “gotcha” tools to assessment design that values process evidence.
- Google’s “Homework Help” Button Backlash (Washington Post)
Educators called it a built-in “cheat” button. After pushback, Google paused the Chrome feature that could answer on-screen questions. A timely opener for discussing design choices that promote learning instead of loopholes.
- Rewriting the Rules on Cheating (Associated Press)
From take-home essays to in-class writing, schools are redefining what counts as cheating in the AI era. Use this article to show why assignment-level clarity beats blanket bans.
- Today, Explained — “AI Is Killing the Internet” (podcast)
A lively explainer on AI-generated content flooding the web. Perfect for a quick class debate: if the internet is noisier, how do we teach students to verify and cite fast? Pair with your “verify in 60 seconds” micro-skill.
Summary: These resources turn what students told us into practical moves you can use right away. Skim for quick stats, clear policy language, and classroom ideas that channel AI toward learning rather than loopholes. Each item pairs student reality with a concrete next step for design or assessment.
Generative AI may have been used to retrieve relevant research, generate suggested language, and enhance original content.
Bren Bedford, MNM, SFC®, Web Project Analyst II, Center for Distributed Learning, University of Central Florida
Florence Williams, Ph.D., Associate Instructional Designer, Center for Distributed Learning, University of Central Florida
You are receiving this email because you are a member of the TOPkit Community.
Manage your preferences | Opt Out
This email was sent to . Got this as a forward? Sign up to receive our future emails. To continue receiving our emails, add us to your address book.