As instructors and researchers, we’ve all seen how challenging it can be for students to learn to program. Students must iteratively learn many skills, such as using correct syntax, tracing code, applying common programming patterns, writing code, and testing and debugging the code they write. Struggling with any one of these skills can mean that a student fails to solve the problem at hand.
In this talk, we’ll explore how Large Language Models (LLMs) like GitHub Copilot and ChatGPT can shift the skills needed to succeed at programming and enable more students to become successful programmers. Remarkably, this shift, away from syntax and toward problem decomposition and testing, may be exactly what many instructors have hoped to focus on in CS1 all along.
- Why so many students struggle in CS1
- How LLMs change the skills needed to program, and how we might teach these skills
- How LLMs benefit students and instructors
- Concerns and questions around using LLMs
Presented on Wednesday, June 21 at 12:00 PM ET/16:00 UTC by Daniel Zingaro, Associate Teaching Professor at the University of Toronto, and Leo Porter, Associate Professor of Computer Science and Engineering at UC San Diego. Michelle Craig, Professor of Computer Science at the University of Toronto and member of the ACM Education Board, will moderate the question-and-answer session following the talk.