I want to run inference with LLaVA v1.5, but I encountered the error "IndexError: piece id is out of range". My environment is llava 1.1.3, transformers 4.31.0, sentencepiece 0.1.99, torch 2.0.1.
The detailed traceback is as follows:
output = self.tokenizer.batch_decode(output_ids, skip_special_tokens=True)[
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3485, in batch_decode
return [
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3486, in
self.decode(
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3525, in decode
return self._decode(
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 931, in _decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 912, in convert_ids_to_tokens
tokens.append(self._convert_id_to_token(index))
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py", line 204, in _convert_id_to_token
token = self.sp_model.IdToPiece(index)
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/sentencepiece/init.py", line 1045, in _batched_func
return _func(self, arg)
File "/root/anaconda3/envs/llava16/lib/python3.10/site-packages/sentencepiece/init.py", line 1038, in _func
raise IndexError('piece id is out of range.')
How can I solve the problem?
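For reference, here is a minimal sketch that reproduces the same IndexError outside of LLaVA. It is an assumption-laden illustration, not LLaVA's own code: the checkpoint name is only illustrative, and it assumes the base LLaMA SentencePiece vocabulary of 32000 pieces. The error occurs whenever a decoded id falls outside that vocabulary (negative or >= vocab_size), which then reaches sp_model.IdToPiece() as in the traceback above; -200 is used here because LLaVA uses it as its image placeholder id.

# Minimal repro sketch (assumptions: illustrative checkpoint name, base
# LLaMA SentencePiece vocabulary of 32000 pieces).
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")  # illustrative checkpoint

# Ids inside the vocabulary round-trip without problems.
ids = tokenizer("hello world").input_ids
print(tokenizer.batch_decode([ids], skip_special_tokens=True))

# Any id outside the SentencePiece vocabulary (negative, or >= vocab_size),
# e.g. LLaVA's -200 image placeholder, is passed to sp_model.IdToPiece()
# and raises "IndexError: piece id is out of range." as in the traceback.
print(tokenizer.batch_decode([[-200]], skip_special_tokens=True))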