Do get_num_tokens_from_messages and get_num_tokens give accurate token counts for a string? #21071
Replies: 3 comments 4 replies
-
The `get_num_tokens` and `get_num_tokens_from_messages` functions count the tokens in a string and in a list of chat messages, respectively. They are defined on the base language model classes and are model-agnostic, so they can be called for any model, including "gemini-pro" from your example. The accuracy of the count, however, depends on how closely the tokenizer these methods use matches the model's actual tokenization process. The lack of dedicated documentation for these functions in LangChain is likely due to the project's ongoing development; for now, the source code is the best reference for their exact behavior.
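To make the two methods concrete, here is a minimal, stdlib-only sketch of the interface shape described above. The `ToyModel` class and its whitespace-based tokenizer are purely illustrative stand-ins (real LangChain models plug in their own, model-specific tokenizers, which is exactly why counts can differ from the model's true tokenization):

```python
# Illustrative stand-in for the two methods discussed above:
# get_num_tokens (raw string) and get_num_tokens_from_messages (chat messages).
# The whitespace-split "tokenizer" is a hypothetical toy, not a real one.

from dataclasses import dataclass


@dataclass
class Message:
    role: str
    content: str


class ToyModel:
    def get_num_tokens(self, text: str) -> int:
        # Toy tokenizer: one token per whitespace-separated word.
        return len(text.split())

    def get_num_tokens_from_messages(self, messages: list[Message]) -> int:
        # Sum the token counts of every message's content.
        return sum(self.get_num_tokens(m.content) for m in messages)


model = ToyModel()
print(model.get_num_tokens("hello world"))  # 2 with this toy tokenizer
print(model.get_num_tokens_from_messages(
    [Message("user", "hi there"), Message("assistant", "hello")]
))  # 3 with this toy tokenizer
```

A real call would look the same, just on an actual chat model instance instead of `ToyModel`.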
-
@dosu can you please provide example code if possible?
-
Thanks for the reply @dosu, but I think I need to implement custom methods to get the token counts from each of the different models.
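One way to structure such custom methods is a small registry that maps model names to counter functions, with a rough fallback for unregistered models. Everything below is a hypothetical sketch using crude stdlib-only heuristics; in practice you would register real tokenizers per provider (e.g. a tiktoken-based counter for OpenAI models):

```python
# Hypothetical per-model token-counter registry. The counters here are
# crude illustrative estimates, not real tokenizers.

from typing import Callable

TOKEN_COUNTERS: dict[str, Callable[[str], int]] = {}


def register_counter(model_name: str, counter: Callable[[str], int]) -> None:
    """Associate a token-counting function with a model name."""
    TOKEN_COUNTERS[model_name] = counter


def count_tokens(model_name: str, text: str) -> int:
    # Fall back to a rough ~4 characters-per-token estimate when no
    # model-specific counter has been registered.
    counter = TOKEN_COUNTERS.get(model_name, lambda t: max(1, len(t) // 4))
    return counter(text)


# Register a (hypothetical) word-based counter for one model.
register_counter("gemini-pro", lambda t: len(t.split()))

print(count_tokens("gemini-pro", "hello world"))     # uses the registered counter
print(count_tokens("unknown-model", "hello world"))  # uses the fallback estimate
```

The design keeps the dispatch logic in one place, so adding support for a new model is just one `register_counter` call.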
-
Checked other resources
Commit to Help
Example Code
Description
I was wondering what the above two functions are used for. Are they used to calculate the number of tokens in a string? Can anyone please explain them to me? Also, why is there no documentation for these functions in LangChain?
System Info
langchain_openai==0.1.3
langchain_core==0.1.45
langchain_anthropic==0.1.11
python-dotenv==1.0.1
python==3.11