Please use this identifier to cite or link to this item: http://hdl.handle.net/1959.14/177839
A Framework for evaluating text correction
International Conference on Language Resources and Evaluation (8th : 2012) (23 - 25 May 2012 : Istanbul, Turkey)
Calzolari, Nicoletta; Choukri, Khalid; Declerck, Thierry; Uğur Doğan, Mehmet; Maegaard, Bente; Mariani, Joseph; Odijk, Jan and Piperidis, Stelios. Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), p.3015-3018
Computer-based aids for writing assistance have been around since at least the early 1980s, focussing primarily on aspects such as spelling, grammar and style. The potential audience for such tools is very large indeed, and this is a clear case where we might expect language processing applications to have a significant real-world impact. However, existing comparative evaluations of applications in this space are often no more than impressionistic and anecdotal reviews of commercial offerings as found in software magazines, making it hard to determine which approaches are superior. More rigorous evaluation in the scholarly literature has been held back in particular by the absence of shared datasets of texts marked up with errors, and by the lack of an agreed evaluation framework. Significant collections of publicly available data are now appearing; this paper describes a complementary evaluation framework, which has been piloted in the Helping Our Own shared task. The approach, which uses stand-off annotations for representing edits to text, can be used in a wide variety of text-correction tasks, and easily accommodates different error tagsets.
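To illustrate the general idea of stand-off edit annotation mentioned in the abstract, the sketch below stores corrections separately from the source text as (start, end, error-tag, replacement) records and applies them by character offset. The record fields, tag names, and `apply_edits` helper are illustrative assumptions, not the actual HOO annotation schema.

```python
# Minimal sketch of stand-off edit annotation: corrections live outside the
# text and point into it by character offsets. NOT the real HOO format.
from dataclasses import dataclass

@dataclass
class Edit:
    start: int        # offset where the erroneous span begins
    end: int          # offset just past the span (start == end means insertion)
    tag: str          # error type drawn from some agreed tagset
    correction: str   # replacement text (empty string means deletion)

def apply_edits(text: str, edits: list[Edit]) -> str:
    """Apply stand-off edits to the source text.

    Edits are applied right-to-left so that earlier offsets stay valid
    as the text changes length.
    """
    for e in sorted(edits, key=lambda e: e.start, reverse=True):
        text = text[:e.start] + e.correction + text[e.end:]
    return text

source = "He go to school yesterday ."
edits = [
    Edit(3, 5, "VERB-FORM", "went"),  # "go" -> "went"
    Edit(25, 27, "PUNCT", "."),       # " ." -> "."
]
print(apply_edits(source, edits))  # -> "He went to school yesterday."
```

Because the annotations never modify the source text, different annotators (or different tagsets) can mark the same corpus independently, which is one reason stand-off representations suit shared evaluation tasks.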