Wednesday, December 21, 2022 | Digital Learning, Research

Stanford faculty weigh in on ChatGPT’s shake-up in education

Faculty from the Stanford Accelerator for Learning share thoughts about how the new AI chatbot will change and contribute to learning and teaching.

by GSE Communications

[Image: Classroom with computer network lines connecting desks]


The release this fall of ChatGPT – a new AI chatbot built on a large language model that can write essays, spit out a haiku, and even produce computer code – has prompted more questions about what this means for the future of society than even it can answer, despite efforts to make it try.

Faculty from the Stanford Accelerator for Learning are thinking about the ways in which ChatGPT and other generative artificial intelligence will change and contribute to education in particular. Here are some of their initial thoughts. 

Innovating with education rather than just for education

“Too often, we treat new technology innovations as the product of a lone genius coder who has not set foot in a classroom since they were themselves a student. If we want generative AI to meaningfully improve education, there are obvious steps we need to take: listening to educators, parents, and students, and using what we learn to find the most pertinent and valuable use cases for generative AI in education. We also need to provide comfortable on-ramps so that someone who doesn’t already know what a ‘large language model’ is can know enough to make informed choices and avoid magical thinking about what generative AI will do – for good or ill – in schools.” 

– Victor Lee, associate professor of education

What’s next for high school essays and writing? 

“Teachers are talking about ChatGPT as either a dangerous medicine with amazing side effects or an amazing medicine with dangerous side effects. When it comes to teaching writing, I'm in the latter camp. 

“First, ChatGPT may help students use writing as a tool for thinking in ways that students currently do not. Many students are not yet fluent enough writers to use the process of writing as a way to discover and clarify their ideas. ChatGPT may address that problem by allowing students to read, reflect, and revise many times without the anguish or frustration that such processes often provoke. 

“Second, teachers can use the tool as a way of generating many examples and nonexamples of a form or genre. Often, teachers have the resources and bandwidth to find or create only one or two models of a particular kind of writing — say, a personal narrative about a family relationship. As a result, students may come to believe that there is only one way to write such a narrative. ChatGPT allows teachers to offer students many examples of a narrative about family where the basic content remains the same but style, syntax, or grammar differ. With many examples to compare and analyze, students can begin to see the relationship between form and content. They can develop criteria for what makes a strong piece of writing, or see how one verb might affect readers differently than another. For teachers, designing instruction has just become much easier – ChatGPT is essentially a tool for creating contrasting cases, and most teachers will be delighted that ChatGPT is doing a lot of the legwork for them. 

“Obviously, teachers are less delighted about the computer doing a lot of legwork for students. And students still need to learn to write. But in what way, and what kinds of writing? A third side effect of this new medicine is that it requires all of us to ask those questions and probably make some substantive changes to the overarching goals and methods of our instruction. 

“I couldn't figure out a good last line for this that wasn't tacky, like ‘just what the doctor ordered,’ so I asked ChatGPT to come up with a final sentence that used the metaphor of medicine and side effects but didn't use those actual words. It came up with: 

As with any new treatment, the use of ChatGPT in teaching writing requires careful consideration of its potential effects on students and the overall goals of education. While it may offer promising solutions to certain challenges, it is important to be mindful of any unintended consequences and to approach its implementation with caution. 

“I didn’t love most of the sentence – too generic (that’s the way ChatGPT writes) – but I liked the word ‘treatment.’ So I will end this by saying: As with any new treatment, we will have to experiment and proceed carefully. But there is a lot to be optimistic about.” 

– Sarah Levine, assistant professor of education

What will it mean for college admissions? 

“There is some consternation in the admissions space about these technologies, and with obvious good reason. In one recent Twitter thread, someone posted an AI-generated essay and the results of an informal study showing that over half of admissions officers identified it as not being computer-generated. With SAT/ACT test score usage waning in many admissions sectors, the narrative portions of college applications may receive additional emphasis in evaluation of merit and deservingness. This was our worry when we found the content of admission essays to be more strongly correlated with income than are SAT scores. 

“AI complicates this space immensely, though in what direction policy-wise, it’s hard to say. My best guess is that access to the technology will make its use in admission essays more prevalent among lower-socioeconomic status households. Why? Because wealthier folks, as they’ve shown in the past, are quite savvy and will know that (1) places like ETS [Educational Testing Service, which develops standardized tests for K-12 and higher education] are already working on algorithms to accurately detect AI-written essays; and (2) anything available to the masses is something to not only avoid but to counter with a more exclusive strategy. That might look like writing non-standard essays — poetry or a mini-screenplay, for example — or something else. The drive for maintaining social distinction and its attendant privilege is quite strong. And there certainly will be a for-profit cottage industry rising up to meet the demand to help richer families in their quest. Things are moving fast, though, and perhaps at such a speed that technology’s potential democratizing effects do surface in this space.”

– Anthony Lising Antonio, associate professor of education

Centering the intersection of language, culture, and cognition

“The innovation centers the capacity to replicate and, in some cases, enhance how human intelligence emerges in dialogue. On its merits, this advancement has the potential to improve how software supports students’ learning through rich, computer-generated dialogue. This is an incredibly important technological advancement, but one that must be grounded in an understanding of the cognitive and cultural benefits of dialogue as an educational tool. To replicate dialogue without an understanding of the cultural and cognitive benefits of dialogue runs the risk of centering a singular cultural lens: that of the designer.

“Dialogue serves many purposes. Social science research indicates that dialogue represents cultural membership, gender identification, and group membership broadly. Said differently, how something is said sends multiple messages. On one level all dialogic communications send a message of content. The message shares an idea. On another level a message sends a message of belonging and identity. How the message is communicated sends a cue of who the message is for and who the speaker is. This subtle intersection of language cues and language identities embeds a message in every dialogical exchange. So, artificial intelligence must embed the power of cultural cues in its communicative pathways. They are already there. How something is said sends a message of who the speaker expects to be.  

“From my cognitive perspective, dialogue serves as both an assessment tool and a tool for developing mastery. It is vital that the AI developers create opportunities for students to explain their way toward expertise, to use artificial intelligence for feedback and corrective support, while explicitly ensuring all students are able to receive cues of cultural belonging. In thinking this way, all kids may benefit from AI technologies if developers do the important work of centering the intersection of language, culture, and cognition.”

– Bryan A. Brown, professor of education

What about opportunities for kids with disabilities? 

“In the disability space, I’ve been having conversations about (a) how we could use AI to code videos of teachers and other instructors to coach on instructional practices that have been demonstrated to be useful for kids (e.g., providing opportunities to respond, corrective feedback); and (b) ways AI could possibly help us develop smarter tutoring that is responsive to students’ needs. There seem to be a lot of opportunities.”

– Chris Lemons, associate professor of education

What students need to know now

“We have a glimpse of new things that are going to be built with generative AI. What do we need students to know and understand about how these are built, how they work, and the costs and benefits (financial, ethical, environmental, social) of different technologies for different visions of what education is supposed to do? As a first step, we need to seriously examine how generative AI is changing how different fields and disciplines do their work and what ideas students need to develop to both build and use AI for humans rather than in place of humans.” 

– Victor Lee, associate professor of education