Chinese Grammatical Correction Using BERT-based Pre-trained Model

Hongfei Wang¹, Michiki Kurosawa¹, Satoru Katsumata², Mamoru Komachi¹
¹Tokyo Metropolitan University, ²Retrieva, Inc.


Abstract

In recent years, pre-trained models have been extensively studied, and many downstream tasks have benefited from their use. In this study, we verify the effectiveness of two methods that incorporate a pre-trained model into an encoder-decoder model on the Chinese grammatical error correction task. We also analyze the error types and conclude that sentence-level errors are yet to be addressed.
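
For concreteness, the following is a minimal sketch of one common way to incorporate a pre-trained model into an encoder-decoder model: warm-starting both the encoder and the decoder from BERT weights and then fine-tuning on parallel (erroneous, corrected) sentence pairs. The HuggingFace transformers API, the bert-base-chinese checkpoint, and the example sentence are illustrative assumptions, not the authors' implementation.

# A minimal sketch (not the authors' code): BERT-initialized encoder-decoder
# for Chinese grammatical error correction. Checkpoint name and library
# usage are assumptions for illustration.
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-chinese", "bert-base-chinese"  # encoder init / decoder init
)

# Generation settings required for the BERT-initialized decoder.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# After fine-tuning on (erroneous, corrected) pairs, correction is
# ordinary sequence-to-sequence decoding over the input sentence.
inputs = tokenizer("这是一个错误的句子。", return_tensors="pt")  # "This is an erroneous sentence."
corrected_ids = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(corrected_ids[0], skip_special_tokens=True))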