werpy is a powerful yet lightweight Python package that rapidly calculates and analyzes the Word Error Rate (WER) between two sets of text.
It has been designed with the flexibility to handle multiple input data types such as strings, lists and NumPy arrays.
The package also includes a full set of features, such as normalizing the input text to account for data collection variability and assigning different weights or penalties to specific error classifications (insertions, deletions and substitutions).
Additionally, the summary function provides a comprehensive breakdown of the calculated results to assist in analyzing the specific errors quickly and in more detail.
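For reference, the Word Error Rate is conventionally computed as

WER = (S + D + I) / N

where S, D and I are the number of substitutions, deletions and insertions required to turn the reference text into the hypothesis text, and N is the number of words in the reference.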
The following table provides an overview of the functions that can be used in werpy.
Function | Description |
---|---|
normalize(text) | Preprocesses the input text by removing punctuation, collapsing duplicated spaces, stripping leading/trailing blanks and converting all words to lowercase. |
wer(reference, hypothesis) | Calculates the overall Word Error Rate for the entire reference and hypothesis texts. |
wers(reference, hypothesis) | Calculates a list of Word Error Rates, one for each reference and hypothesis pair. |
werp(reference, hypothesis) | Calculates a weighted Word Error Rate for the entire reference and hypothesis texts. |
werps(reference, hypothesis) | Calculates a list of weighted Word Error Rates for each of the reference and hypothesis texts. |
summary(reference, hypothesis) | Provides a comprehensive breakdown of the calculated results including the WER, Levenshtein Distance and all the insertion, deletion and substitution errors. |
summaryp(reference, hypothesis) | Delivers an in-depth breakdown of the results, covering metrics like WER, Levenshtein Distance, and a detailed account of insertion, deletion, and substitution errors, inclusive of the weighted WER. |
You can install the latest werpy release with Python's pip package manager:
# Install werpy from PyPI
pip install werpy
Import the werpy package
Python Code:
import werpy
Example 1 - Normalize a list of text
Python Code:
input_data = ["It's very popular in Antarctica.","The Sugar Bear character"]
reference = werpy.normalize(input_data)
print(reference)
Results Output:
['its very popular in antarctica', 'the sugar bear character']
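The normalized text can be passed directly to the scoring functions. The snippet below is a small sketch that reuses the `reference` variable from this example together with a made-up hypothesis transcript; the `hypothesis` sentences are purely illustrative and not from the original documentation.
Python Code:
# Hypothetical hypothesis transcript, normalized the same way as the reference
hypothesis = werpy.normalize(["it's very popular in antartica", 'the sugar bare character'])
overall_wer = werpy.wer(reference, hypothesis)
print(overall_wer)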
Example 2 - Calculate the overall Word Error Rate on a set of strings
Python Code:
wer = werpy.wer('i love cold pizza', 'i love pizza')
print(wer)
Results Output:
0.25
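To see where this value comes from: the reference contains four words and the hypothesis simply drops one of them ('cold'), so WER = 1 / 4 = 0.25.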
Example 3 - Calculate the overall Word Error Rate on a set of lists
Python Code:
ref = ['i love cold pizza','the sugar bear character was popular']
hyp = ['i love pizza','the sugar bare character was popular']
wer = werpy.wer(ref, hyp)
print(wer)
Results Output:
0.2
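Here the two references contain 4 + 6 = 10 words in total, and the hypotheses introduce two errors (the deletion of 'cold' and the substitution of 'bear' with 'bare'), giving WER = 2 / 10 = 0.2.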
Example 4 - Calculate the Word Error Rates for each set of texts
Python Code:
ref = ['no one else could claim that','she cited multiple reasons why']
hyp = ['no one else could claim that','she sighted multiple reasons why']
wers = werpy.wers(ref, hyp)
print(wers)
Results Output:
[0.0, 0.2]
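The first pair matches exactly, so its WER is 0.0; the second pair has one substitution ('cited' → 'sighted') across five reference words, so its WER is 1 / 5 = 0.2.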
Example 5 - Calculate the weighted Word Error Rates for the entire set of text
Python Code:
ref = ['it was beautiful and sunny today']
hyp = ['it was a beautiful and sunny day']
werp = werpy.werp(ref, hyp, insertions_weight=0.5, deletions_weight=0.5, substitutions_weight=1)
print(werp)
Results Output:
0.25
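With these weights, the inserted word 'a' contributes 0.5 and the substitution of 'today' with 'day' contributes 1, giving 1.5 weighted errors over the six reference words: 1.5 / 6 = 0.25.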
Example 6 - Calculate a list of weighted Word Error Rates for each of the reference and hypothesis texts
Python Code:
ref = ['it blocked sight lines of central park', 'her father was an alderman in the city government']
hyp = ['it blocked sightlines of central park', 'our father was an elder man in the city government']
werps = werpy.werps(ref, hyp, insertions_weight = 0.5, deletions_weight = 0.5, substitutions_weight = 1)
print(werps)
Results Output:
[0.21428571428571427, 0.2777777777777778]
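Taking the first pair as an illustration: turning 'sight lines' into 'sightlines' costs one substitution (weight 1) and one deletion (weight 0.5), so the weighted error count is 1.5 over seven reference words, giving 1.5 / 7 ≈ 0.2143. The second result follows the same logic, with two substitutions and one insertion over nine reference words (2.5 / 9 ≈ 0.2778).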
Example 7 - Provide a complete breakdown of the Word Error Rate calculations for each of the reference and hypothesis texts
Python Code:
ref = ['it is consumed domestically and exported to other countries', 'rufino street in makati right inside the makati central business district', 'its estuary is considered to have abnormally low rates of dissolved oxygen', 'he later cited his first wife anita as the inspiration for the song', 'no one else could claim that']
hyp = ['it is consumed domestically and exported to other countries', 'rofino street in mccauti right inside the macasi central business district', 'its estiary is considered to have a normally low rates of dissolved oxygen', 'he later sighted his first wife anita as the inspiration for the song', 'no one else could claim that']
summary = werpy.summary(ref, hyp)
print(summary)
Results Output: a summary table showing, for each reference and hypothesis pair, the WER, Levenshtein Distance and the individual insertion, deletion and substitution errors.
Example 8 - Provide a complete breakdown of the Weighted Word Error Rate for each of the input texts
Python Code:
ref = ['the tower caused minor discontent because it blocked sight lines of central park', 'her father was an alderman in the city government', 'he was commonly referred to as the blacksmith of ballinalee']
hyp = ['the tower caused minor discontent because it blocked sightlines of central park', 'our father was an alderman in the city government', 'he was commonly referred to as the blacksmith of balen alley']
weighted_summary = werpy.summaryp(ref, hyp, insertions_weight = 0.5, deletions_weight = 0.5, substitutions_weight = 1)
print(weighted_summary)
Results Output: a summary table showing, for each reference and hypothesis pair, the weighted WER alongside the WER, Levenshtein Distance and the insertion, deletion and substitution errors.
Dependencies
- NumPy - Provides an assortment of routines for fast operations on arrays
- Pandas - Powerful data structures for data analysis, time series, and statistics
werpy is released under the terms of the BSD 3-Clause License. Please refer to the LICENSE file for full details.
This project also includes third-party packages distributed under the BSD-3-Clause license (NumPy, Pandas) and the Apache License 2.0 (Cython).
The full NumPy, Pandas and Cython licenses can be found in the LICENSES directory in this repository.
They can also be found directly in the following source code locations: