Monday, October 11, 2010

Robots grade English exams in the U.K. - could law schools be next?

I just came across an announcement in a U.K. educators' online newsletter explaining that a software maker has developed a program that lets computers grade the exam essays universities use to test the language proficiency of incoming students.  According to the developer, the software's "'proven automated scoring' will provide a test that accurately measures candidates’ English writing abilities."  Some of those quoted in the article suggest it's not a question of "if" more schools will come to rely heavily on grading robots, but only "when."

Critics fear that such software will discourage creativity and instead teach students "to write for the robot."

Bethan Marshall, senior lecturer in English and education at King’s College London, said: “A computer will never be unreliable. They will always assess in exactly the same way. But you don’t get a person reading it and it is people that we write for. If a computer is marking it then we will end up writing for the computer.

“People won’t be aiming for the kind of quirky, idiosyncratic work that produces the best writing. So what is the point?”

Tim Oates, research director at Cambridge Assessment, which owns the OCR exam board, said: “It’s extremely unlikely that automated systems will not be deployed extensively in educational assessment. The uncertainty is ‘when’ not ‘if’.”

The technology being used by Pearson is designed to allow computers to assess pupils’ use of grammar and vocabulary. But some experts say newer, more effective systems are available.

The Pearson approach is based on correlations between human judges and artificial intelligence systems. Machines are “trained” to learn from the scores given to specific texts by humans so that they will be able to achieve the same results on their own.
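To make that training process concrete, here is a minimal sketch in Python of the correlation-based approach the article describes. This is not Pearson's actual system; the essays, scores, and model choice (word-usage features fed to a simple regression, via the open-source scikit-learn library) are invented for illustration only.

# A toy sketch of "training" an automated marker on human-assigned scores,
# so it can assign similar scores to unseen essays on its own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Invented training set: essays paired with the score a human marker gave each.
essays = [
    "The experiment were a success and prove the theory.",
    "The experiment was a success and confirmed the hypothesis.",
    "Results good. Theory right.",
    "The results clearly supported the hypothesis under every condition tested.",
]
human_scores = [2.0, 4.0, 1.0, 5.0]  # e.g., a 1-5 marking scale

# Convert word usage into numeric features, then fit a regression to the scores.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, human_scores)

# The trained marker now scores a new essay with no human involved.
print(model.predict(["The experiment succeed and the theory is prove correct."]))

Real systems use far richer features and models, but the principle is the one described above: the machine's only target is agreement with the human scores it was trained on, which is exactly the limitation Oates raises next.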

Mr Oates said: “In simply getting an automarking system to agree with human markers you are ignoring the vital question of exactly what parts of performance are being ranked.

“Other developers are working on more valid approaches, of greater merit and promise. Crucially, these aim to be sensitive to the concepts and language structures actually being used by candidates.”

You can read more here.

(jbl)
http://lawprofessors.typepad.com/legal_skills/2010/10/robots-grade-english-exams-in-the-uk-will-law-schools-be-next.html