Evaluation code for various unsupervised automated metrics for Natural Language Generation.
A neural network that generates image captions using a CNN and an RNN with beam search.
A well-tested, multi-language evaluation framework for text summarization.
Code for the paper "Learning Semantic Sentence Embeddings using Pair-wise Discriminator" (COLING 2018).
Machine Translation (MT) Evaluation Scripts
A Python 3 library for evaluating caption BLEU, METEOR, CIDEr, SPICE, ROUGE-L, and WMD scores. Forked from https://github.com/ruotianluo/coco-caption
MAchine Translation Evaluation Online (MATEO)
A classifier for evaluating machine translation quality by predicting the sentence that best matches the reference sentence.
A simple, effective tool to calculate SacreBLEU, Token-BLEU, and BLEU with compound splitting for fairseq.
Evaluation tools for image captioning, including BLEU, ROUGE-L, CIDEr, METEOR, and SPICE scores.
Corpus level and sentence level BLEU calculation for machine translation
Implementation of the paper "BLEU: a Method for Automatic Evaluation of Machine Translation".
A data-driven query expansion approach for image captioning, implemented in C++.
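Most of the repositories above compute corpus-level BLEU: clipped n-gram precisions (n = 1..4) combined as a geometric mean, multiplied by a brevity penalty. As a rough illustration of that calculation (a minimal single-reference sketch, not the implementation any of these repos actually uses; the function names are my own):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU with uniform weights, one reference per hypothesis.

    Real toolkits (SacreBLEU, NLTK) also handle multiple references
    and smoothing; this sketch omits both for clarity.
    """
    clipped = [0] * max_n   # matched n-gram counts, clipped by reference counts
    totals = [0] * max_n    # total hypothesis n-gram counts
    hyp_len = ref_len = 0
    for ref, hyp in zip(references, hypotheses):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_counts = Counter(ngrams(hyp, n))
            ref_counts = Counter(ngrams(ref, n))
            clipped[n - 1] += sum(min(c, ref_counts[g])
                                  for g, c in hyp_counts.items())
            totals[n - 1] += max(len(hyp) - n + 1, 0)
    if min(clipped) == 0:  # any zero precision drives the geometric mean to 0
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)
```

A hypothesis identical to its reference scores 1.0; any missing 4-gram overlap drives the score toward 0, which is why sentence-level BLEU (as in some repos above) typically adds smoothing.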