Computer Scoring of Essays Shows Promise, Analysis Finds
Article Courtesy of Smart Brief on EdTech
Computers can find a good Greek restaurant, recognize a song on the radio and beat our best players at Jeopardy. But can they grade our kids’ writing?
A wide-ranging new analysis suggests that the answer is “Yes.”
The study of automated computer essay-scoring software finds that a handful of programs are “capable of producing scores similar to human scores” on thousands of sample essays.
The findings come at a crucial time for U.S. students, especially high schoolers. Since 2005, they've had to write essays to score well on the SAT college admissions test — humans grade these, at least for now. Under new academic standards, students must produce billions of words over the next few years. Whether teachers will embrace computer scoring to help grade all that writing is up for debate. The National Council of Teachers of English opposes "machine scored" assessments, putting its support behind "direct assessment by human readers."
But computer-scoring advocates, many of whom are also educators, say critics misunderstand the basic undertaking. Computers stink at judging high-level writing achievements, such as whether a writer has a unique style or can present sophisticated ideas. Most auto-graders aren’t even programmed to capture that, no matter what critics say.
“They really don’t understand that most kids are having a hard time communicating at all,” says Mark Shermis, dean of education at the University of Akron. He agrees that timely, individualized grading by a human reader would be great, but given the scale of writing that schools envision, it’s unlikely. “If every kid in the country had that kind of individualized attention, we might not be having this conversation.”
The new findings, unveiled last week at the National Council on Measurement in Education meeting in Vancouver, Canada, analyzed 17,500 essays that had already been graded by humans. Funded by the William and Flora Hewlett Foundation, the analysis was part of an ongoing competition, an X-Prize of sorts, that Hewlett is sponsoring to push the field forward.
Les Perelman, director of Writing Across the Curriculum at MIT, says the effort amounts to “reinventing the wheel,” as there’s already a cutting-edge computer writing coach in nearly every classroom — it’s called Microsoft Word. The ubiquitous word processing program is “a much better product than anything that’s going to be developed by this competition,” he says. Its grammar checker is fairly sophisticated, but can be fooled. For instance, if a student types, “The car was parked by the side of the road,” Word suggests, “(The) side of the road parked the car.”
Perelman worries that the bid to develop machine readers will, in the end, train humans to read more like machines. “It will get good agreement (between humans and machines) but not necessarily good writing.”
The stakes are high: Under the new Common Core English standards that nearly all states have adopted, U.S. schoolchildren will soon produce millions of new long- and short-form essays in most subjects. The standards take effect in 2014, and schools must soon figure out how to grade all that writing.
Jeff Pence, an English teacher at Dean Rusk Middle School in Canton, Ga., uses computer-aided scoring for his 120-plus students, since hand-grading just one set of writing drafts “with any sense of thoroughness” could take two weeks. The computer takes about three seconds to deliver feedback. So far, each of his students this year has completed more than 25 essays.
“Obviously, the advent of automated scoring of essays has revolutionized how I run my classroom,” he says.
He understands the limits of computer grading but says that teachers have limits too. “I know, as does every teacher out there, that on that 63rd essay, I am nowhere near as consistent, accurate or thorough as I was on the first three.”
For schools struggling to incorporate more writing into daily lessons, computers could supplement — but not replace — teachers, says Tom Vander Ark of Open Education Solutions, a consulting firm based in Washington State. He also helped design the Hewlett competition.
“I want to see kids writing a lot every day in every classroom across the country and I want teachers, students and parents to have the benefit of more critical feedback,” he says. “I want teachers to be able to spend more time on teaching writing and not mechanics of grading.”