This post was written by Molly Chehak, Director of Digital Learning at CNDLS.
In my conversations with faculty, one of the biggest concerns that emerges is that AI may short-circuit student engagement with course materials, particularly the assigned reading. I recently conducted an experiment with my students to compare traditional and AI-assisted reading strategies and assess their impact on comprehension and engagement.
I told them I wanted to evaluate reading and comprehension strategies, including traditional human reading, AI-assisted tools, and multimedia learning approaches. I chose a short, college-level article and put them into groups. They were required to have access to ChatPDF or NotebookLM, along with the relevant devices.
The Exercise
- Dividing the Class
  Students were divided into two groups, and each group was assigned a distinct method of engaging with the same article:
  - Group A (Human Readers): Read the article in full using traditional methods, just eyes and brains.
  - Group B (AI Readers): Used ChatPDF to process the article, tracking the prompts they used or created.
- Debriefing the Experience
  After the initial reading phase, both groups convened for a discussion:
  - Group A led with their insights on the article’s content.
  - Group B shared their experience using ChatPDF (or NotebookLM), including how they crafted prompts and their thoughts on the tool’s accuracy and helpfulness.
- Quiz Creation and Results
  - Group B generated a quiz using ChatPDF or NotebookLM.
  - Both groups took the quiz, revealing a surprising trend: Group A often performed better, even on AI-generated questions.
  - This result sparked a reflective discussion on why the AI-generated questions were thought-provoking and how familiarity with the material influenced performance.
- Exploring Multimedia
  I used NotebookLM to convert the article into a podcast version. Students listened individually or as a class and reflected on how this modality compared to reading, discussing retention and understanding.
- Class Discussion & Reflection
  The exercise culminated in a class-wide discussion comparing the pros and cons of each method: traditional reading, AI-assisted tools, and multimedia learning. Students shared their preferences and identified which combinations best supported their learning.
Results
The results were unexpected and insightful, and I’m eager to repeat the exercise this semester. The two groups were neck and neck in the surface-level class discussion of the article. Once we started to dive deeper, however, the AI readers began to falter. They had created a quiz with their tool of choice to help them study and participate more fully, but when the whole class took it, the only students who could pass were the human readers.
Key Takeaways
- Hands-On Experience: Students gained firsthand experience with AI reading tools, reflecting on their potential and limitations.
- Learning Modalities: The exercise highlighted the benefits and drawbacks of various strategies, empowering students to adapt and combine methods to their needs.
- Self-Reflection: This activity encouraged students to think critically about their own learning styles and preferences, as well as the level of specificity that learning requires.
Overall, a combination of methods proved most effective for engaging this particular article at a deep level. This experiment was a powerful way to engage students in exploring how technology complements (or challenges) traditional methods. Try it and let me know what you think!