How Does ChatGPT Influence Language Learning?

Faculty members from across the Arts and Sciences and a current student took part in a panel discussion on ChatGPT in language learning.

The Language Pedagogy Research and Working Group of the Leslie Center for the Humanities convened Dartmouth experts in language education last week to discuss the benefits and risks of ChatGPT in language learning.

Roberto Rey Agudo, language program director in the Department of Spanish and Portuguese, and research assistant professor Tania Convertini, language program director of Italian, served as moderators of the March 3 discussion, which featured John Bell, director of Dartmouth's Data Experiences and Visualizations Studio; Jennie Chamberlain of the Department of Film and Media Studies; James (Jed) Dobson, assistant professor of English and director of the Institute for Writing and Rhetoric; Soroush Vosoughi, assistant professor of computer science; Jacqueline Wernimont, associate professor of Film and Media Studies and distinguished chair of the Digital Humanities and Social Engagement cluster; and computer science major Parth Dhanotra '24, who provided a student perspective on the new artificial intelligence technology.

Here are some excerpts from their conversation. 

How ChatGPT masters language 

Soroush Vosoughi: Companies like OpenAI and Google use web crawlers to collect a large dataset of English-language sentences from the internet. They use this dataset, which can consist of hundreds of millions of sentences, to train large language models. The models learn the syntax and structure of English sentences, which lets them generate coherent, grammatically correct text, and they learn the contexts in which different words are used. It is this exposure to enormous amounts of data that gives them their accuracy and performance.

OpenAI has taken additional steps to ensure the quality of its language model, such as hiring human annotators to refine the model and ensure its outputs align with human preferences. The model has also been programmed with stop signals to prevent it from generating inappropriate or potentially harmful responses to certain queries. For example, if a user were to ask the model for advice on how to rob a bank, the model would decline to provide an answer.
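The training idea Vosoughi describes, learning from a corpus which word tends to follow which, can be illustrated with a deliberately simplified sketch. Real systems such as ChatGPT use neural networks with billions of parameters rather than simple counts, so the Python snippet below is only a toy analogy:

```python
from collections import Counter, defaultdict
import random

# A tiny "corpus" standing in for the hundreds of millions of
# sentences crawled from the web (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each word follows each other word.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

# Generate text by repeatedly sampling a likely next word -- the same
# next-token idea that large language models scale up with neural
# networks and vastly more data.
word = "the"
generated = [word]
for _ in range(5):
    candidates = following.get(word)
    if not candidates:
        break
    word = random.choices(list(candidates), weights=candidates.values())[0]
    generated.append(word)

print(" ".join(generated))
```

Scaling this idea up from counts over a few sentences to neural networks trained on hundreds of millions of sentences is, loosely, what produces the fluent output Vosoughi describes.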

Why ChatGPT is more effective than Google Translate for language learning

Parth Dhanotra '24: Let's say I have a sentence that says 'if I had the time, I would read all the books in my house,' and I want to translate it. If I were using Google Translate, I would plug it in and get an answer that's hopefully more or less correct. With ChatGPT, if you prompt it correctly, you can ask it not only to provide the sentence but to articulate how it was constructed. It can tell you that because the initial phrase has XYZ characteristics, it's going to be translated using the imperfect subjunctive, and because the second clause has ABC characteristics, it'd be translated using the conditional.

The implication here is that you're not just getting a sentence in a vacuum. It's situated in a broader context, and I think that addresses a fundamental concern with tools like Google Translate: it helps ensure that students are learning things that generalize rather than just memorizing.
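The kind of prompt Dhanotra describes could also be sent programmatically. The sketch below uses OpenAI's Python client; the model name and prompt wording are illustrative assumptions, and it presumes the openai package is installed and an API key is configured:

```python
# pip install openai  (assumes the OPENAI_API_KEY environment variable is set)
from openai import OpenAI

client = OpenAI()

# Ask for the translation plus an explanation of the grammar behind it.
prompt = (
    "Translate into Spanish: 'If I had the time, I would read all the "
    "books in my house.' Then explain, clause by clause, which tense or "
    "mood you used (e.g., imperfect subjunctive, conditional) and why."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The only difference from a bare translation request is the second half of the prompt, which asks the model to explain its grammatical choices instead of returning the sentence 'in a vacuum.'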

Risks of relying on ChatGPT for language learning

Soroush Vosoughi: In many instances, explanations given by ChatGPT are wrong. The model is not magic. It cannot create knowledge out of thin air; it's just reflecting knowledge that's on the web. And as we all know, the web is full of misinformation and toxic speech.

We also have to keep in mind that only a few corporations have the resources to develop these large language models. If we become reliant on these models, then we are basically relying on these corporations to provide us with the knowledge we use and build upon. I don't think the nightmare scenario is that a company denies access to the model later on—I think it's if a company decides to start charging $15 a query rather than three hundredths of a cent, which is the cost of ChatGPT right now.

How ChatGPT fosters inequalities 

Jacqueline Wernimont: As noted in the recent article On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, over 90% of the world's languages, spoken by more than a billion people, currently have little to no support in terms of language technology.

ChatGPT is optimized for the English language. The model has been prioritizing English language learning, because coming up with a system across multiple languages is really challenging. And there are some languages and linguistic practices that simply will not be present at all in this. So if you're thinking about ChatGPT in terms of language learning, in some ways you can rest a little bit easy, because it might not be very good. In other ways, it poses problems in terms of equity and access. 

On designing "AI-proof" assignments 

Jed Dobson: I think the knee-jerk reaction a lot of faculty members have to ChatGPT is to design 'AI-proof' assignments. There is no such thing. It is not an appropriate thing to do, nor is it achievable.

Citing usage of AI models in scholarly and student work

Jacqueline Wernimont: In a forthcoming piece I include a poem co-authored by an algorithm, and I cite the algorithm and treat it like a co-authored work. I think that's one model, to say that 'I did this with the assistance of X algorithm,' although the language of co-authorship attributes a certain kind of creative impetus. I also cited the person who developed the algorithm, because they are in fact the person who made that mechanism work.

Tania Convertini: In the Italian program, we know that at some point students will be tempted to use a translator, maybe because they are curious about a new structure or want to say something they don't know how to say. We want to promote a use of translators that raises metalinguistic awareness. Students are allowed to use a translator three times in their written compositions. They must cite that they used a translator, describe what they learned from it, and give an example, such as using the same structure in a new sentence. Sometimes this discourages students from using translators, but it can also encourage those who want to learn more.

On preparing students to take advantage of AI

Parth Dhanotra '24: The skills required to program with robust language models would be different, but in some sense, they could be even more powerful. I think about the potential of language models to abstract away a lot of information for us, allowing us to interface with engineering disciplines at a different level of thinking. For example, if you're working in distributed systems, rather than having to know multiple different languages and libraries, you can have a model that handles implementation details for you, and you can focus on coming up with more effective design patterns instead.

There may one day be a world where a machine can do pretty much anything we can, and possibly do it better. Educators should consider what education looks like in a world like that.