A Student-Led Guide to Using AI Wisely in School
We talked to students, ran experiments, and built a simple framework to help you use AI tools like ChatGPT as learning aids, not crutches. No complicated rules, just practical advice.
To help students use AI without losing their own skills.
Create a simple, practical guide for responsible AI use, based on what real students told us.
We interviewed 8 students and ran an experiment with 8 more to see how a framework changes behavior.
5 evidence‑based principles to keep you learning and thinking, even when you use AI.
Our framework helped students produce better work and be more thoughtful about AI.
Four students from Carleton University who wanted to make AI easier to understand.
Lead Researcher
erinvanderpouwkraan@cmail.carleton.ca
Led the interviews and made sure our study was fair and followed the rules.
Research Coordinator
victoroikawalopes@cmail.carleton.ca
Turned interview responses into our 5 guiding principles and planned the experiment.
Technical Lead
mohammadalsaao@cmail.carleton.ca
Built this website and made our data easy to see with charts.
Technical Coordinator
mustafaali6@cmail.carleton.ca
Also helped with the website and worked on the poster for the project.
Picked our topic and figured out what questions we wanted to answer.
Got the green light from Carleton to safely interview students.
Talked to 8 students about their real thoughts on AI.
Created 5 evidence‑based rules based on interviews and research.
Tested our framework with 8 students doing real school tasks.
Presented everything at the Carleton Capstone Fair.
Professor, School of Information Technology
Carleton University
Helped us stay on track, design our study the right way, and turn our ideas into real research.
A research‑backed approach to using AI wisely, developed from student interviews and validated through experimentation.
We talked to 8 students about their AI usage, concerns, and experiences. Their responses shaped our initial principles.
We coded interview responses and identified 5 core themes that students cared about most.
We validated our themes against peer‑reviewed research on AI literacy, accuracy, and learning outcomes.
Tested the framework with 8 students. Those using it produced higher quality work and showed more thoughtful AI use.
Know how the AI generated its response – don’t just copy.
Gong et al.: Even students exposed to AI lack understanding of AI ethics.
Boscardin et al. (2024): Accuracy, bias, and ethical use must be considered.
Harvard/Stanford RCT (2025): Using AI as a “scaffolded tutor” doubles learning gains.
“I use AI to explain concepts in a more human way, but I don’t always know if I can trust it.”
Try this: Instead of “what’s the answer?” ask “walk me through the steps to solve this.”
AI invents sources and details – especially for niche topics.
Johnson et al. (2023): Only 57.8% of AI medical answers were “nearly all correct.”
Linardon et al. (2025): 65% of GPT‑4o citations unreliable; for niche topics fabrication jumps from 6% → 29%.
“AI gave me a perfect quote with an author – but the paper didn’t exist.”
Try this: Verify every citation on Google Scholar.
Consider what you already know before turning to AI.
From Q6: overuse and dependence were the main concerns. Students said turning to AI before thinking a problem through yourself can weaken your own skills.
“Dependence is the biggest negative – I’m worried I’ll lose my ability to write.”
Try this: Work on a problem for 10–15 minutes on your own first.
Be mindful of its limitations – it’s up to you how you use it.
Students find AI useful for: repetitive work, rewording, idea generation, summarizing, creating study guides.
Student analogy: “AI is like a lighter. It can light a candle or burn down a building.”
Try this: Use AI for brainstorming, then write the final draft yourself.
The repercussions apply to you, not to the AI.
Every student knows the academic integrity rules, but only half personally care; the rest just avoid getting caught. As one student put it: “Individuals have found ways to cheat the system – consider if you’re cheating yourself.”
Remember: If you misuse AI, you miss out on learning.
Real data from 8 Carleton students – full demographics from our interviews.
We interviewed 8 undergraduate students from Carleton. Here’s who they were:
Ethnicity: 2 White, 4 Asian, 1 Middle Eastern, 1 Black or African American.
Field of study: 8/8 in Science, Engineering, or IT.
CGPA (12‑point scale): ranged from 6.0 to 11.5.
Split: 4 follow rules because they want to learn, 4 just avoid trouble.
Most see AI as helpful, but dependence worries remain.
5 frequent, 2 rare, 1 consistent (daily).
Only 2 of 8 consider themselves reliant.
We put our 5 principles to the test with real students doing real tasks.
4 got the framework, 4 didn't.
Summarizing, writing, debugging, scheduling.
Each student worked through tasks at their own pace.
Yes – students with the framework were more thoughtful and produced better work.
Blind evaluators rated the work from the group with our framework higher.
They were more intentional about when and why they used AI.
Students without the framework finished faster, but quality suffered.
Our 5 principles made a difference. Students who had them thought more about their choices. They used AI as a tool to help them learn, not just to get the task done.
All our materials, free for you to use.
A simple, printable poster with our framework. Perfect for your study wall.
Download PDF
All the details: methods, interview questions, experiment tasks, and data analysis.
Download Report
Questions about our research? Want to collaborate? Reach out!
erinvanderpouwkraan@cmail.carleton.ca
victoroikawalopes@cmail.carleton.ca
mohammadalsaao@cmail.carleton.ca
mustafaali6@cmail.carleton.ca
James Brunet
JamesBrunet@cunet.carleton.ca