
bert-model

Here are 924 public repositories matching this topic...

We augmented an existing BERT Tiny Transformer network, trained on the Google NQ dataset, to randomly replace some of the tokens in a question with their synonyms. The idea comes from the data-augmentation process used in computer vision pipelines. This experiment directly tackles the concepts of Natural Language Inference and…

  • Updated Jul 23, 2021
  • Python
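The synonym-replacement augmentation described above can be sketched in a few lines. This is a minimal, self-contained illustration, not the repository's actual code: the `SYNONYMS` table is a hypothetical stand-in (a real pipeline might draw synonyms from WordNet), and the function name and sampling probability are assumptions.

```python
import random

# Hypothetical synonym table; a real pipeline might source this from WordNet.
SYNONYMS = {
    "big": ["large", "huge"],
    "fast": ["quick", "rapid"],
    "smart": ["clever", "intelligent"],
}

def augment_question(question: str, p: float = 0.3, seed: int = 0) -> str:
    """Replace each token with a random synonym with probability p."""
    rng = random.Random(seed)
    out = []
    for tok in question.split():
        syns = SYNONYMS.get(tok.lower())
        if syns and rng.random() < p:
            out.append(rng.choice(syns))  # sample one synonym at random
        else:
            out.append(tok)               # keep the original token
    return " ".join(out)
```

Each call with a different seed yields a different paraphrase of the same question, which is how one training example can be expanded into several.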

Here we leverage a subset of the amazon_polarity dataset to train two machine learning models: an LSTM model with GloVe embeddings and a fine-tuned DistilBERT model. The LSTM model achieved an accuracy of 80.40%, while the DistilBERT model outperformed it with 90.75% accuracy. Predictions can be made in real time via our Streamlit app.

  • Updated Sep 27, 2023
  • Jupyter Notebook
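The accuracy figures quoted above come down to a simple fraction of correct predictions over a held-out test set. As a minimal sketch (the function name and toy data are assumptions, not the repository's code):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    assert len(preds) == len(labels), "prediction/label length mismatch"
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

# Toy binary sentiment labels (1 = positive, 0 = negative) for illustration.
gold = [1, 0, 1, 1, 0, 1, 0, 0]
lstm_preds = [1, 0, 0, 1, 0, 1, 1, 0]        # 6/8 correct
distilbert_preds = [1, 0, 1, 1, 0, 1, 0, 1]  # 7/8 correct
```

Computed this way over the full test split, the two models would yield the 80.40% and 90.75% figures reported above.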
