
Getting special characters in model generation #70

Open
marianafidalgo opened this issue Jun 2, 2021 · 9 comments

@marianafidalgo commented Jun 2, 2021

Hello. I fine-tuned my DialoGPT-small model on the DailyDialog dataset.
When speaking with the model, the output of its last utterance contains random punctuation characters.
This is my generation code:

from transformers import AutoModelWithLMHead, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('output-daily')

raw = ""
while raw != "END":
    # each conversation lasts at most 4 rounds
    for step in range(4):
        raw = input(">> User:")
        if raw == "END":
            break

        # encode the user input, appending the end-of-string token
        new_user_input_ids = tokenizer.encode(raw + tokenizer.eos_token, return_tensors='pt')

        # append the new input to the chat history, if there is any
        bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

        chat_history_ids = model.generate(
            bot_input_ids,
            max_length=150,
            do_sample=True,
            temperature=0.7,
            top_k=50,
            top_p=0.92,
            no_repeat_ngram_size=3,
            pad_token_id=tokenizer.eos_token_id
        )
        # print only the newly generated tokens
        print("Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
    print("\n")

The output:

>> User:Hello, what are you doing?
Bot: I'm just watching TV.
>> User:What is your favorite movie?
Bot: My favorite movie is Superbad.
>> User:I don't know it.
Bot: I like comedies best.
>> User:Me too!
Bot: !!!( At the end of the movie, O. K. )!!!!!1!!@!!excuse me!!( at the end, O'K. )!!excuses me!!Excuse me!(At the end )!exc!!automatics!!remotes!!Automatics!rematics!autos!!Remotes!autop!!Automatics!!autoblog!!log!autobi


>> User:Me too!
Bot: What do you think of this dress?
>> User:It's amazing!
Bot: I can't believe it.
>> User:Yes I lie.
Bot: What does it look like?
>> User:A potato.
Bot: !!!( After trying )!!!!!
@chiranshu14

Did you get any solution for this?

@marianafidalgo (Author)

Not yet :(

@chiranshu14

I tried with the small and medium models; it's the same for me.
I followed this tutorial to fine-tune the model: https://towardsdatascience.com/make-your-own-rick-sanchez-bot-with-transformers-and-dialogpt-fine-tuning-f85e6d1f4e30
The author's results look really good, but surprisingly I don't get the same output even when training on the same data.

1. Could you share your retraining code, or any references that you followed? I didn't really find any good learning resources for this.

2. My guess was that the training dataset is not sufficient. How large was your training data?

@imibook commented Jul 22, 2021

@marianafidalgo, could you share your output-daily data?

@archmagos-dominus commented Feb 22, 2022

Yeah, I have encountered the same issue. The model just returns tens of "!!!!!!" and then cannot be conversed with anymore. This behaviour happens after the 4th round of the conversation, like clockwork. The problem seems to stem from the implementation of chat history: with the step hardcoded to a constant 0, the bot works, albeit without any memory; as the step reaches 3, everything just breaks down. Maybe it's a dataset issue, or maybe it is some sort of memory issue.

EDIT: It seems that after a few rounds the EOS token that should end the round is no longer added after the bot's response.
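
A minimal sketch of how one might check this and append the token manually, assuming the variables from the snippet in the original post:

# After chat_history_ids = model.generate(...), the newly generated
# tokens are everything past the prompt.
response_ids = chat_history_ids[:, bot_input_ids.shape[-1]:]

# If generation stopped at max_length rather than at EOS, the EOS token
# is missing and the next round's history is malformed; append it manually.
if response_ids[0, -1].item() != tokenizer.eos_token_id:
    eos = torch.tensor([[tokenizer.eos_token_id]], dtype=chat_history_ids.dtype)
    chat_history_ids = torch.cat([chat_history_ids, eos], dim=-1)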

@nikich340

> Yeah, I have encountered the same issue. The model just returns tens of "!!!!!!" and then cannot be conversed with anymore. […]

Did you solve it?

@archmagos-dominus

> Yeah, I have encountered the same issue. The model just returns tens of "!!!!!!" and then cannot be conversed with anymore. […]
>
> Did you solve it?

I did not manage to figure out the root cause of the problem. I did, however, manage to make the bot respond as it should by constraining the length of chat_history_ids, to a maximum of 50 tokens in my case. It no longer freaks out, but it is also quite limited when it comes to generating responses that take conversation context into account. I hope this band-aid fix works well enough for your implementation as well.
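
The comment does not include the code for this; a minimal sketch of the truncation, assuming the variables from the original post and a hypothetical MAX_HISTORY_TOKENS cap:

MAX_HISTORY_TOKENS = 50  # the cap mentioned above; tune for your model

# Before appending the new user input, keep only the most recent
# tokens of the accumulated history.
if chat_history_ids.shape[-1] > MAX_HISTORY_TOKENS:
    chat_history_ids = chat_history_ids[:, -MAX_HISTORY_TOKENS:]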

@Nakul24-1 commented Sep 18, 2022

> Yeah, I have encountered the same issue. The model just returns tens of "!!!!!!" and then cannot be conversed with anymore. This behaviour happens after the 4th round of the conversation, like clockwork. […]
>
> Did you solve it?

Since it breaks after step 3/4, a potential hacky solution is to maintain a fixed-length queue (of length 3, say) that stores past inputs and outputs, and to use that rather than the whole history. Although some context is lost, this would allow the chatbot to run endlessly without breaking down, while keeping some context rather than none (as with the step hardcoded to 0).
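
A minimal sketch of this queue idea, assuming the variables from the original post and using collections.deque (the names here are for illustration only):

from collections import deque

# keep only the last 3 exchanges (each an encoded user input + bot reply)
history = deque(maxlen=3)

# inside the chat loop, build the prompt from the queue instead of the full history
new_user_input_ids = tokenizer.encode(raw + tokenizer.eos_token, return_tensors='pt')
bot_input_ids = torch.cat(list(history) + [new_user_input_ids], dim=-1)

chat_history_ids = model.generate(bot_input_ids, max_length=150,
                                  pad_token_id=tokenizer.eos_token_id)

# store this round's exchange; the deque drops the oldest once full
response_ids = chat_history_ids[:, bot_input_ids.shape[-1]:]
history.append(torch.cat([new_user_input_ids, response_ids], dim=-1))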

> […] EDIT: It seems that after a few rounds the EOS token that should end the round is no longer added after the bot's response.
>
> I did not manage to figure out the root cause of the problem. I did, however, manage to make the bot respond as it should by constraining the length of chat_history_ids, to a maximum of 50 tokens in my case. […]

When you say the EOS token is not added, is there a way to add it manually? If we add an EOS token after every response, would that fix the issue?

@archmagos-dominus commented Sep 21, 2022

> Since it breaks after step 3/4, a potential hacky solution is to maintain a fixed-length queue (of length 3, say) that stores past inputs and outputs, and to use that rather than the whole history. […]

Changing the number of chat rounds kept in memory solved the issue most of the time; however, it was not as reliable as I needed it to be. As per my response on Sep 17, I have instead taken the length of the tensor into account, and with a 'hacky' fix like the one below I was able to get it to work without it freaking out at all.

    if bot_input_ids.size(dim=1) >= args.get('max_length'):
        # keep only the most recent max_length tokens
        bot_input_ids = torch.narrow(bot_input_ids, 1, -args.get('max_length'), args.get('max_length'))
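
(A negative start argument to torch.narrow counts from the end of the dimension, so this should be equivalent to plain slicing: bot_input_ids = bot_input_ids[:, -args.get('max_length'):].)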

> When you say the EOS token is not added, is there a way to add it manually? If we add an EOS token after every response, would that fix the issue?

To be absolutely honest, I did not pursue this line of thinking, since I managed to get it working well enough for my implementation. Whether adding the EOS token manually would make it behave properly, I do not know.
