Comment: Can parents trust ChatGPT to help with kids’ homework?

It can help students understand assignments, but maybe don’t trust it to check your kids’ work.

By Parmy Olson / Bloomberg Opinion

Wednesday marked a day of teacher strikes across much of the United Kingdom, putting parents in the familiar pandemic-inspired role of homeschoolers-in-chief to their kids. Except this time, there’s a magical automated assistant on hand to help.

Educators have been cautiously praising ChatGPT, the ultra-sophisticated chatbot from OpenAI, saying it could revolutionize education. One head teacher in Britain says it has triggered a rethink on homework, while another in Oregon has used it to create lesson plans and study guides.

The tool’s personalized responses are what make it so tantalizing as an all-knowing digital tutor. I recently used it to dig into the topic of enzymes, when my 12-year-old had questions that I had no hope of answering. When ChatGPT offered a dense, technical explanation, I asked it for simpler terms and an analogy.

“Sure!” it replied. “Think of a lock on a door. The lock is like an enzyme and the key is like the substrate molecule.” It stretched the analogy further to describe the active site of an enzyme as the keyhole.

These were remarkable answers. We could have dug deeper into every facet of biochemistry if we’d wanted. Unlike a human tutor, ChatGPT can be interrogated for as long as you like.

This holds huge potential for personalized, independent learning … except that ChatGPT often gets things wrong, and it does a very good job of hiding that. When I tested one of my daughter’s English homework questions on the tool, it offered an eloquent list of examples, which on closer inspection included one that was wildly inaccurate. The main character had a turbulent relationship with his parents, the bot said, even though the character’s parents were dead throughout the book.

On another occasion, I used the tool to generate some linear equations for my daughter to practice. She was stumped when I asked the tool to generate the answers, which were different from the ones she had calculated. I asked ChatGPT for an explanation and it broke down its method in simple terms once again, sounding as authoritative as any real math tutor. But when I double-checked the answers on Google, it turned out ChatGPT’s answers were wrong and my tween’s were correct. Thus ended her mini-nightmare of failing math, and much of my initial enthusiasm for ChatGPT.

The New York City public school system, the largest in the U.S., has already banned its students from using ChatGPT, in part because of concerns about the “accuracy of content.” That is why the recent comparison of ChatGPT to a “calculator for writing” is a deceptive analogy: calculators are always right, and ChatGPT isn’t.

How inaccurate is it? A spokeswoman for OpenAI said the company had updated ChatGPT over the last couple of months to improve its factual accuracy, but that it had no statistics to share. The tool also warns users, when they first open it, that it sometimes makes mistakes.

Will it get more accurate? Yes, but it’s hard to say by how much. The large language model underpinning ChatGPT is made up of 175 billion parameters, the settings the model uses to make its predictions, versus the 1.5 billion that its predecessor GPT-2 had. It has become accepted wisdom in AI that the more parameters you add to a model, the more truthful it becomes, and that held true for GPT: it became substantially more accurate when all those parameters were added. It’s rumored that the next iteration slated for release this year, called GPT-4, will have trillions.

The problem is, we don’t know whether a huge jump in parameters also means a huge jump in trustworthiness. That is why students should use ChatGPT with caution, if at all, for the foreseeable future.

When I asked Julien Cornebise, an honorary professor of computer science at University College London, if he would ever trust it as a homework tool, he replied, “Absolutely not, not yet.” He pointed out that even when the system improves, we still won’t have guarantees that it is truthful.

Students should get used to corroborating any facts the system shares with other online information or with an expert. Albert Meige, an associate director focused on technology at consulting firm Arthur D. Little, says his own teenage daughter used it to help with her physics homework, but he could validate the answers thanks to his doctorate in computational physics. He recommends using the chatbot to better understand the questions being posed in homework. “She discovered that she should not ask one single question,” he says. “It was an interactive process.”

Use it to get feedback, concurs Cornebise. “That’s what the star student will do.”

Being a relatively small company, OpenAI can get away with spewing out the odd alternative fact. Alphabet Inc.’s Google and Meta Platforms Inc. wouldn’t be able to do the same. Google has its own highly sophisticated language model, called LaMDA, but it is ultra-cautious about integrating a similar chatbot into its own search tool, likely in part because of the accuracy problem. Meta took down Galactica, its AI tool for generating scientific papers, just three days after release, when scholars criticized it for producing untrustworthy information.

OpenAI will be held to similarly high standards as the generative AI arms race heats up and chatbot technology gets integrated into search engines in the U.S. and China.

Until then, use it with discretion and a healthy dose of skepticism, especially in education.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”
