Twitter Sentiment Extraction using a custom RoBERTa transformer model, with pre-trained model weights for prediction
Question answering system - research work [4th semester]
We augmented an existing BERT-Tiny transformer network, designed for training on the Google NQ dataset, to randomly replace some of the tokens in a question with their synonyms. The idea comes from the image data augmentation used in computer vision pipelines. This experiment directly tackles the concepts of Natural Language Inference and…
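A minimal sketch of the synonym-swap idea, assuming NLTK's WordNet as the synonym source; the function name, replacement probability, and example question are illustrative, not the repo's actual code:

```python
# Hypothetical synonym-based token augmentation, analogous to image
# augmentation: each token is swapped for a WordNet synonym with probability p.
import random
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

def augment_question(tokens, p=0.15):
    """Randomly replace tokens with a single-word WordNet synonym."""
    augmented = []
    for token in tokens:
        synsets = wordnet.synsets(token)
        if synsets and random.random() < p:
            # Collect single-word lemmas that differ from the original token.
            lemmas = {l.name() for s in synsets for l in s.lemmas()
                      if "_" not in l.name() and l.name().lower() != token.lower()}
            if lemmas:
                token = random.choice(sorted(lemmas))
        augmented.append(token)
    return augmented

print(augment_question("what year did the first world war begin".split()))
```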
PAN @ CLEF 2021 Shared Task: detection of hate speech spreaders in tweets with the help of ML methods and transformer models.
Applying different ML and neural network algorithms to analyze the MBTI dataset.
Implementation of adversarial training for BERT and BERT-like models, and analysis of the effects of model compression on model robustness.
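A minimal sketch of embedding-space adversarial training in the FGM style, one common way to adversarially train BERT-like models; the epsilon value and the targeted parameter name are assumptions:

```python
# FGM-style attack: perturb the word-embedding matrix along the gradient
# direction, compute a second loss on the perturbed input, then restore.
import torch

class FGM:
    def __init__(self, model, epsilon=1.0, param_name="word_embeddings"):
        self.model, self.epsilon, self.param_name = model, epsilon, param_name
        self.backup = {}

    def attack(self):
        # Call after loss.backward() so gradients are populated.
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.param_name in name:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Per batch: loss.backward(); fgm.attack(); adv_loss.backward();
# fgm.restore(); optimizer.step()
```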
Teaching 🤖 how to ✍🏽 multiple choice questions 👩🏽🏫
Reimplementation of BerConvoNet: A deep learning framework for fake news classification
QA (Question-Answering) models are machine learning or deep learning models that can answer questions given some context, and sometimes without any context, based on the data already fed into the model. They can extract answer phrases from paragraphs and can even paraphrase the answer generatively.
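A minimal sketch of the extractive case using the Hugging Face pipeline API; the checkpoint is an assumption, and any SQuAD-style model would do:

```python
# Extractive QA: the model selects an answer span from the given context.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
result = qa(
    question="What can QA models extract?",
    context="QA models can extract answer phrases from paragraphs and can "
            "even paraphrase the answer generatively.",
)
print(result["answer"], result["score"])  # span text plus a confidence score
```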
This project is about classifying Twitter tweets as reliable or unreliable.
In this project, we propose a meta-learning approach for text classification that combines a base model and a meta-learner model. The base model, built on the BERT architecture, extracts contextualized representations of the text.
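A sketch of that base-model step, assuming the standard bert-base-uncased checkpoint: BERT encodes the text and the [CLS] vector serves as the contextualized representation a separate meta-learner could consume:

```python
# Extract a fixed-size contextualized representation per text with BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] embedding, shape (batch, 768)

features = embed(["meta-learning for text classification"])
print(features.shape)  # torch.Size([1, 768])
```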
Implemented pre-trained transformer-based DistilBERT and multilingual BERT models to classify sentiment as positive or negative and rank it on a scale of 1 to 5.
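A sketch of what inference could look like with off-the-shelf stand-ins for such fine-tuned weights: an SST-2 DistilBERT for the positive/negative decision and the nlptown multilingual BERT for the 1-to-5 scale (both checkpoints are assumptions, not this project's own models):

```python
# Two sentiment heads: binary polarity and a 1-5 star scale.
from transformers import pipeline

binary = pipeline("sentiment-analysis",
                  model="distilbert-base-uncased-finetuned-sst-2-english")
stars = pipeline("sentiment-analysis",
                 model="nlptown/bert-base-multilingual-uncased-sentiment")

text = "The battery life is great but the screen scratches easily."
print(binary(text))  # e.g. [{'label': 'POSITIVE', 'score': ...}]
print(stars(text))   # e.g. [{'label': '3 stars', 'score': ...}]
```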
BERT, a popular pretrained language model, was used in a transformer-based approach to emoji prediction. BERT was fine-tuned on a sizable corpus of tweets containing both text and emojis in order to predict the most suitable emoji for a given text.
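A hedged sketch of framing emoji prediction as sequence classification with one class per emoji; the emoji label set, example tweet, and checkpoint are illustrative assumptions:

```python
# Fine-tune BERT so each output class corresponds to one emoji.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOJIS = ["😂", "❤️", "🔥", "😭", "🎉"]  # hypothetical label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(EMOJIS))

batch = tokenizer(["just aced my final exam"], return_tensors="pt")
labels = torch.tensor([4])  # index of 🎉

loss = model(**batch, labels=labels).loss  # cross-entropy objective
loss.backward()  # one gradient step of the fine-tuning loop

with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1).item()
print("predicted emoji:", EMOJIS[pred])
```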
A hybrid topic modeling approach fusing LDA, BERT embeddings, and autoencoders for enhanced topic extraction.
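A condensed sketch of the fusion idea: concatenate each document's LDA topic proportions with its BERT embedding, then compress the joint vector with a small autoencoder before clustering. All dimensions and the LDA weighting factor gamma are assumptions:

```python
# Fuse LDA topic vectors and BERT embeddings, then learn a compact latent
# space with an autoencoder; topics are later found by clustering z.
import torch
import torch.nn as nn

lda_dim, bert_dim, latent_dim, gamma = 20, 768, 32, 15.0

class Autoencoder(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# lda_vecs: (n_docs, lda_dim) topic proportions; bert_vecs: BERT embeddings.
lda_vecs, bert_vecs = torch.rand(8, lda_dim), torch.randn(8, bert_dim)
joint = torch.cat([gamma * lda_vecs, bert_vecs], dim=1)  # weight the LDA part

ae = Autoencoder(joint.size(1), latent_dim)
z, recon = ae(joint)
loss = nn.functional.mse_loss(recon, joint)  # train the AE, then cluster z
```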
Implementation of the link identification task in BERT.
Here we leverage a subset of the amazon_polarity dataset to train two machine learning models: an LSTM model with GloVe embeddings and a fine-tuned DistilBERT model. The LSTM model achieved an accuracy of 80.40%, while the DistilBERT model outperformed it with an impressive 90.75% accuracy. Predictions can be made in real time via our Streamlit app.
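A minimal sketch of the DistilBERT path, assuming the Hugging Face Trainer API and a small slice of amazon_polarity; all hyperparameters are illustrative, not the settings behind the reported 90.75%:

```python
# Fine-tune DistilBERT for binary polarity on a subset of amazon_polarity.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

data = load_dataset("amazon_polarity", split="train[:2000]")
data = data.map(lambda x: tokenizer(x["content"], truncation=True,
                                    padding="max_length", max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=data,  # the dataset's "label" column supplies the targets
)
trainer.train()
```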
Multi-class classification of tweets using BERT
This project compares the performance of a Naive Bayes model and fine-tuned BERT models on emotion classification from text.