
The wrong names are used for the user and assistant for OpenAI when using ChatPromptTemplate.fromMessages #5231

a-i-joe opened this issue Apr 27, 2024 · 1 comment
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

a-i-joe commented Apr 27, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

// Imports assume the langchain 0.1.x package layout; exact paths may vary by version.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chatModel = new ChatOpenAI({ /* ... */ });

const prompt = ChatPromptTemplate.fromMessages([
    new HumanMessage("hello"),
    new AIMessage("hello"),
    new HumanMessage("how are you"),
]);

const strOutputParser = new StringOutputParser();
const chain = prompt
    .pipe(chatModel)
    .pipe(strOutputParser);

const response = await chain.invoke({});

Error Message and Stack Trace (if applicable)

No response

Description

When I log the prompts from the handleLLMStart callback (the _prompts argument):

Expected:

user: hello
assistant: hello
user: how are you

Actual:

Human: hello
AI: hello
Human: how are you
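The "Human:"/"AI:" prefixes appear to match LangChain's buffer-string rendering of messages, while OpenAI's wire format uses role names. A minimal self-contained sketch of the two serializations (the `Msg` type, function names, and type strings here are illustrative, not LangChain's actual API):

```typescript
// Illustrative message shape; mirrors LangChain's BaseMessage only loosely.
type Msg = { type: "human" | "ai" | "system"; content: string };

// Buffer-string style rendering, as seen in callback logs (handleLLMStart).
function toBufferString(messages: Msg[]): string {
  const prefix = { human: "Human", ai: "AI", system: "System" };
  return messages.map((m) => `${prefix[m.type]}: ${m.content}`).join("\n");
}

// OpenAI-style role names, as expected on the wire.
function toOpenAIRoles(messages: Msg[]): string {
  const role = { human: "user", ai: "assistant", system: "system" };
  return messages.map((m) => `${role[m.type]}: ${m.content}`).join("\n");
}

const msgs: Msg[] = [
  { type: "human", content: "hello" },
  { type: "ai", content: "hello" },
  { type: "human", content: "how are you" },
];

console.log(toBufferString(msgs)); // Human: hello\nAI: hello\nHuman: how are you
console.log(toOpenAIRoles(msgs));  // user: hello\nassistant: hello\nuser: how are you
```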

I have also tried using MessageHistory, and manually specifying all messages in the ["assistant", "hello"] tuple format.

I found this in the bittensor chat model: https://github.com/langchain-ai/langchainjs/blob/e0f5668d3437fad01851fd8472e10705a27d120b/langchain/src/experimental/chat_models/bittensor.ts#L61C1-L74C1

  messageToOpenAIRole(message: BaseMessage) {
    const type = message._getType();
    switch (type) {
      case "system":
        return "system";
      case "ai":
        return "assistant";
      case "human":
        return "user";
      default:
        return "user";
    }
  }

This seems to indicate the right names were intended, but presumably this mapping is not being applied here.
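Reproduced standalone (outside the bittensor model class, taking a plain type string instead of a BaseMessage, so the function name and signature are assumptions), the quoted mapping produces the role names the issue expects:

```typescript
// Standalone re-implementation of the role mapping quoted above;
// takes the message type string directly instead of calling _getType().
function messageTypeToOpenAIRole(type: string): string {
  switch (type) {
    case "system":
      return "system";
    case "ai":
      return "assistant";
    case "human":
      return "user";
    default:
      return "user"; // unknown types fall back to "user"
  }
}

console.log(messageTypeToOpenAIRole("human")); // user
console.log(messageTypeToOpenAIRole("ai"));    // assistant
```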

System Info

node --version
v20.12.2

"langchain": "0.1.36"

dosubot bot commented Apr 27, 2024

The behavior you're observing with the labels "Human" and "AI" in the ChatPromptTemplate.fromMessages method is by design in the LangChain.js framework. This method uses specific classes to represent human and AI messages, which inherently label messages as "Human" and "AI" based on the type of message they are meant to represent.

To customize these labels, LangChain.js provides a configuration option through the role parameter when creating instances of ChatMessagePromptTemplate. This allows for specifying the role of each message in the chat prompt, enabling customization of the labels used for different participants in the chat.
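A minimal sketch of what such explicit-role customization amounts to: carrying the role on each message rather than deriving it from the message class. This is plain TypeScript, not the actual ChatMessagePromptTemplate API; the `RoleMessage` shape and `render` helper are illustrative assumptions:

```typescript
// A message that carries its role explicitly, rather than deriving it
// from the message class (HumanMessage -> "Human", AIMessage -> "AI").
interface RoleMessage {
  role: string;
  content: string;
}

// Render each message under its explicitly assigned role.
function render(messages: RoleMessage[]): string {
  return messages.map((m) => `${m.role}: ${m.content}`).join("\n");
}

const prompt: RoleMessage[] = [
  { role: "user", content: "hello" },
  { role: "assistant", content: "hello" },
  { role: "user", content: "how are you" },
];

console.log(render(prompt)); // user: hello\nassistant: hello\nuser: how are you
```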

It's important to note that the MessageHistory class and related classes in LangChain.js derive message labels from the message type rather than from a user-supplied role; this mapping of roles within a chat interaction is intentional in the framework.

If you're experiencing issues with conversation flow or response generation, it could be due to several factors related to the integration of ChatOpenAI and ChatPromptTemplate.fromMessages. There are several alternative methods and classes in LangChain.js that could be used for creating chat prompts with custom labels, such as MessagesPlaceholder, ChatMessagePromptTemplate, and ChatPromptTemplate.fromTemplate.

The StringOutputParser class converts the output of a language model call into a string; it affects only the final output seen after model processing, not how prompt messages are labeled.

For further customization or if you encounter any specific issues with the output or behavior of these methods, reviewing the LangChain.js documentation or exploring alternative methods within the framework might provide additional insights or solutions.


@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Apr 27, 2024