LEPOR

From Wikipedia, the free encyclopedia

LEPOR (Length Penalty, Precision, n-gram Position difference Penalty and Recall) is an automatic language independent machine translation evaluation metric with tunable parameters and reinforced factors.

Background

Since IBM proposed and realized BLEU[1] as an automatic metric for machine translation (MT) evaluation,[2] many other methods have been proposed to revise or improve it, such as TER and METEOR.[3] However, traditional automatic evaluation metrics have some problems. Some metrics perform well on certain languages but poorly on others, a problem usually called language bias. Some metrics rely on many language features or much linguistic information, which makes it difficult for other researchers to reproduce the experiments. LEPOR is an automatic evaluation metric that tries to address some of these problems.[4] LEPOR is designed with augmented factors and corresponding tunable parameters to address the language bias problem. An improved version of LEPOR, hLEPOR,[5] uses optimized linguistic features extracted from treebanks. Another advanced variant, nLEPOR,[6] adds n-gram features to the earlier factors. The LEPOR metric has since been developed into a series of variants.[7][8]

LEPOR metrics have been studied and analyzed by researchers from different fields, such as machine translation,[9] natural language generation,[10] and search,[11] and they are receiving growing attention in natural language processing.

Design

LEPOR[4] is designed with the factors of an enhanced length penalty, precision, an n-gram word order penalty, and recall. The enhanced length penalty ensures that a hypothesis translation, usually produced by a machine translation system, is penalized if it is longer or shorter than the reference translation. The precision score reflects the accuracy of the hypothesis translation. The recall score reflects the faithfulness of the hypothesis translation to the reference translation or source language. The n-gram-based word order penalty accounts for differing word positions between the hypothesis and reference translations; such word order penalties have been shown to be useful by many researchers, for example in the work of Wong and Kit (2008).[12]
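The interaction of these factors can be sketched in code. The following is a simplified, unofficial illustration rather than the released implementation: unigram matching uses clipped counts, the position difference penalty uses a nearest-match alignment over normalized positions, and `alpha`/`beta` weight recall and precision in the harmonic mean.

```python
import math
from collections import Counter

def lepor(hyp, ref, alpha=1.0, beta=1.0):
    """Simplified LEPOR sketch: length penalty * position penalty
    * weighted harmonic mean of precision and recall."""
    h, r = hyp.split(), ref.split()
    c_len, r_len = len(h), len(r)

    # Enhanced length penalty: punish hypotheses longer or shorter
    # than the reference.
    if c_len == r_len:
        lp = 1.0
    elif c_len < r_len:
        lp = math.exp(1 - r_len / c_len)
    else:
        lp = math.exp(1 - c_len / r_len)

    # Unigram matches with clipped counts, giving precision and recall.
    matches = sum((Counter(h) & Counter(r)).values())
    if matches == 0:
        return 0.0
    precision = matches / c_len
    recall = matches / r_len

    # Position difference penalty (simplified nearest-match alignment):
    # for each hypothesis word, distance between its normalized position
    # and the nearest normalized position of the same word in the reference.
    npd = 0.0
    for i, w in enumerate(h):
        positions = [j for j, v in enumerate(r) if v == w]
        if positions:
            npd += min(abs((i + 1) / c_len - (j + 1) / r_len)
                       for j in positions)
    pos_penalty = math.exp(-npd / c_len)

    harmonic = (alpha + beta) / (alpha / recall + beta / precision)
    return lp * pos_penalty * harmonic
```

With this sketch, an identical hypothesis and reference score 1.0, and the score decreases as length mismatch, missed words, or word-order differences grow.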

Because surface string matching metrics were criticized for lacking syntactic and semantic awareness, a further developed LEPOR metric (hLEPOR) investigates the integration of linguistic features such as part of speech (POS).[5][8] POS captures aspects of both syntax and semantics: for example, if a token in the output sentence is a verb where a noun is expected, a penalty is applied; conversely, if the POS matches but the exact word does not (e.g. good vs. nice), the candidate gains partial credit. The overall hLEPOR score is then calculated as a weighted combination of the word-level score and the POS-level score. N-gram knowledge, inspired by language modelling, is also explored extensively in nLEPOR.[6][8] Beyond the n-gram position difference penalty, nLEPOR applies n-grams to precision and recall as well, with the parameter n as an adjustable factor. In addition to the POS knowledge in hLEPOR, phrase structure from parsing is included in a further variant, HPPR.[13] In HPPR's evaluation model, phrase structures such as noun phrases, verb phrases, prepositional phrases, and adverbial phrases are considered when matching the candidate text against the reference text.
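The word-level/POS-level combination in hLEPOR can be illustrated as a weighted average of two sub-scores, one computed over the word sequences and one over their POS tag sequences. The weights here are illustrative assumptions, not the tuned values from the papers.

```python
def hlepor_combine(word_score, pos_score, w_word=1.0, w_pos=1.0):
    """Sketch of the hLEPOR combination step: a weighted average of the
    word-level LEPOR score and the POS-level LEPOR score. The weights
    w_word and w_pos are tunable; the defaults are illustrative only."""
    return (w_word * word_score + w_pos * pos_score) / (w_word + w_pos)
```

For example, a candidate that matches the reference poorly on exact words but well on POS tags (good vs. nice) retains partial credit through the POS-level term, which is the behavior the hLEPOR design aims for.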

Software implementation

LEPOR metrics were originally implemented in the Perl programming language;[14] a Python version[15] has since been made available by other researchers and engineers,[16] with a press announcement[17] from the Logrus Global Language Service company.

Performance

The LEPOR series has performed well in the ACL's annual Workshop on Statistical Machine Translation (ACL-WMT), held by the Special Interest Group on Machine Translation (SIGMT) of the Association for Computational Linguistics (ACL). In ACL-WMT 2013,[18] there were two translation and evaluation tracks: English-to-other and other-to-English, where the "other" languages were Spanish, French, German, Czech, and Russian. In the English-to-other direction, the nLEPOR metric achieved the highest system-level correlation with human judgments by the Pearson correlation coefficient, and the second highest by the Spearman rank correlation coefficient. In the other-to-English direction, nLEPOR performed moderately, with METEOR yielding the highest correlation with human judgments. This is because nLEPOR uses only one concise linguistic feature, part-of-speech information, beyond the officially provided training data, whereas METEOR draws on many external resources, such as synonym dictionaries, paraphrasing, and stemming.
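The system-level correlations reported above compare a list of metric scores (one per MT system) against a list of human judgment scores. As a tie-free illustration, Spearman's rank correlation is simply Pearson's correlation computed on ranks; production evaluations use implementations that also handle tied ranks.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation (no tie handling): Pearson on ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    return pearson(ranks(x), ranks(y))
```

The difference matters: metric scores that rank systems correctly but on a different scale than human scores get a perfect Spearman correlation while their Pearson correlation can be lower, which is why WMT reports both.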

An extended study of LEPOR's performance under different conditions, including the pure word-surface form, POS features, and phrase-tag features, is described in a thesis from the University of Macau.[8]

A deeper statistical analysis of hLEPOR and nLEPOR performance on WMT13 shows that they performed as one of the best metrics "in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs"; see Graham et al. (2015), "Accurate Evaluation of Segment-level Machine Translation Metrics" (NAACL 2015), https://www.aclweb.org/anthology/N15-1124 (code: https://github.com/ygraham/segment-mteval).[9]

Applications

The LEPOR metric series has been applied by researchers in different areas of natural language processing, for instance in standard and neural machine translation.[19] Outside the MT community, it has been applied to search evaluation,[11] to evaluating code (programming language) generation,[20] and to image captioning evaluation.[22] Studies of automatic evaluation for natural language generation[10][21] have included LEPOR among the metrics investigated and argued that automatic metrics can help system-level evaluations.

Notes

  1. ^ Papineni et al. (2002)
  2. ^ Han (2016)
  3. ^ Banerjee and Lavie (2005)
  4. ^ a b Han et al. (2012)
  5. ^ a b Han et al. (2013a)
  6. ^ a b Han et al. (2013b)
  7. ^ Han et al. (2014)
  8. ^ a b c d Han (2014)
  9. ^ Graham et al. (2015)
  10. ^ a b Novikova et al. (2017)
  11. ^ a b Liu et al. (2021)
  12. ^ Wong and Kit (2008)
  13. ^ Han et al. (2013c)
  14. ^ "GitHub - aaronlifenghan/Aaron-project-lepor: LEPOR: A Robust Evaluation Metric for Machine Translation with Augmented Factors". GitHub. 8 January 2022.
  15. ^ "HLepor: This is Python port of original algorithm by Aaron Li-Feng Han".
  16. ^ "GitHub - lHan87/LEPOR". GitHub. 5 May 2021.
  17. ^ Global, Logrus (30 April 2021). "Logrus Global Adds hLEPOR Translation-quality Evaluation Metric Python Implementation on PyPi.org". Slator (Press release). Retrieved 2 November 2022.
  18. ^ ACL-WMT (2013)
  19. ^ Marzouk and Hansen-Schirra (2019)
  20. ^ Liguori et al. (2021)
  21. ^ Celikyilmaz et al. (2020)
  22. ^ Qiu et al. (2020)

References

  • Papineni, K., Roukos, S., Ward, T., and Zhu, W. J. (2002). "BLEU: a method for automatic evaluation of machine translation" in ACL-2002: 40th Annual meeting of the Association for Computational Linguistics pp. 311–318
  • Han, A.L.F., Wong, D.F., and Chao, L.S. (2012) "LEPOR: A Robust Evaluation Metric for Machine Translation with Augmented Factors" in Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012): Posters, pp. 441–450. Mumbai, India. Online paper Open source tool
  • Han, A.L.F., Wong, D.F., Chao, L.S., He, L., Lu, Y., Xing, J., and Zeng, X. (2013a) "Language-independent Model for Machine Translation Evaluation with Reinforced Factors" in Proceedings of the Machine Translation Summit XIV (MT SUMMIT 2013), pp. 215–222. Nice, France. Publisher: International Association for Machine Translation. Online paper Archived 16 January 2019 at the Wayback Machine Open source tool
  • Han, A.L.F., Wong, D.F., Chao, L.S., Lu, Y., He, L., Wang, Y., and Zhou, J. (2013b) "A Description of Tunable Machine Translation Evaluation Systems in WMT13 Metrics Task" in Proceedings of the Eighth Workshop on Statistical Machine Translation, ACL-WMT13, Sofia, Bulgaria. Association for Computational Linguistics. Online paper pp. 414–421
  • Han, Aaron L.-F.; Wong, Derek F.; Chao, Lidia S.; He, Liangye; Lu, Yi (2014). "Unsupervised Quality Estimation Model for English to German Translation and Its Application in Extensive Supervised Evaluation". The Scientific World Journal. 2014: 1–12. doi:10.1155/2014/760301. PMC 4032676. PMID 24892086.
  • ACL-WMT. (2013) "ACL-WMT13 METRICS TASK"
  • Wong, B. T-M, and Kit, C. (2008). "Word choice and word position for automatic MT evaluation" in Workshop: MetricsMATR of the Association for Machine Translation in the Americas (AMTA), short paper, Waikiki, US.
  • Banerjee, S. and Lavie, A. (2005) "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments" in Proceedings of Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association of Computational Linguistics (ACL-2005), Ann Arbor, Michigan, June 2005
  • Han, Lifeng. (2014) "LEPOR: An Augmented Machine Translation Evaluation Metric". Thesis for Master of Science in Software Engineering. University of Macau, Macao. [1] PPT
  • Yvette Graham, Timothy Baldwin, and Nitika Mathur. (2015) Accurate evaluation of segment-level machine translation metrics. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1183–1191.
  • Han, Lifeng (2016). "Machine Translation Evaluation Resources and Methods: A Survey". arXiv:1605.04515 [cs.CL].
  • Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. (2017) Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics.
  • Liu, Zeyang; Zhou, Ke; Wilson, Max L. (2021). "Meta-evaluation of Conversational Search Evaluation Metrics". ACM Transactions on Information Systems. 39 (4): 1–42. arXiv:2104.13453. doi:10.1145/3445029. S2CID 233423567.
  • Liguori, Pietro; Al-Hossami, Erfan; Cotroneo, Domenico; Natella, Roberto; Cukic, Bojan; Shaikh, Samira (2021). "Shellcode_IA32: A Dataset for Automatic Shellcode Generation". Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021). pp. 58–64. arXiv:2104.13100. doi:10.18653/v1/2021.nlp4prog-1.7. S2CID 233407761.
  • Celikyilmaz, Asli; Clark, Elizabeth; Gao, Jianfeng (2020). "Evaluation of Text Generation: A Survey". arXiv:2006.14799 [cs.CL].
  • Qiu, D.; Rothrock, B.; Islam, T.; Didier, A. K.; Sun, V. Z.; et al. (2020). "SCOTI: Science Captioning of Terrain Images for data prioritization and local image search". Planetary and Space. Elsevier.
  • Marzouk, Shaimaa; Hansen-Schirra, Silvia (2019). "Evaluation of the impact of controlled language on neural machine translation compared to other MT architectures". Machine Translation. 33 (1–2): 179–203. doi:10.1007/s10590-019-09233-w. S2CID 171094946.
  • Han, Aaron Li-Feng; Wong, Derek F.; Chao, Lidia S.; He, Liangye; Li, Shuo; Zhu, Ling (2013). "Phrase Tagset Mapping for French and English Treebanks and Its Application in Machine Translation Evaluation". Language Processing and Knowledge in the Web. Lecture Notes in Computer Science. Vol. 8105. pp. 119–131. doi:10.1007/978-3-642-40722-2_13. ISBN 978-3-642-40721-5.