# langchain-heroku

This package contains the LangChain integration with Heroku.
## Installation

```bash
pip install -U langchain-heroku
```

You should configure credentials by setting the following environment variables:
- `INFERENCE_URL` - Your Heroku Inference API URL (e.g., `https://us.inference.heroku.com`)
- `INFERENCE_KEY` - Your Heroku Inference API key
- `INFERENCE_MODEL_ID` - The model ID to use (e.g., `claude-3-5-sonnet-latest`, `claude-3-5-haiku-latest`)
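If you prefer to configure these from inside Python (for example, in a notebook), here is a minimal sketch — the values below are placeholders you must replace with your own credentials:

```python
import os

# Placeholder values -- substitute your real Heroku Inference credentials.
os.environ["INFERENCE_URL"] = "https://us.inference.heroku.com"
os.environ["INFERENCE_KEY"] = "your-inference-key"
os.environ["INFERENCE_MODEL_ID"] = "claude-3-5-haiku-latest"

# Set these before constructing the client, since it reads them
# from the environment at creation time.
for name in ("INFERENCE_URL", "INFERENCE_KEY", "INFERENCE_MODEL_ID"):
    print(name, "is set:", bool(os.environ.get(name)))
```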
## Setup

To use this integration, you need to set up Heroku's Managed Inference and Agents add-on:
1. **Install the Heroku CLI and AI plugin:**

   ```bash
   # Install the Heroku CLI if you haven't already,
   # then install the AI plugin
   heroku plugins:install @heroku/plugin-ai
   ```
2. **Create a Heroku app (if you don't have one):**

   ```bash
   heroku create <your-new-app-name>
   ```
3. **Provision an AI model resource:**

   ```bash
   # List available models
   heroku ai:models:list

   # Create and attach a model to your app
   heroku ai:models:create -a $APP_NAME $MODEL_ID

   # Example for Claude 3.5 Sonnet
   heroku ai:models:create -a my-app claude-3-5-sonnet-latest
   ```
4. **Get your config variables:** After attaching a model resource, your app will have three new config variables. You can view them with:

   ```bash
   heroku config -a $APP_NAME
   ```
5. **Export environment variables:** You can export these as environment variables with:

   ```bash
   eval $(heroku config -a $APP_NAME --shell | grep '^INFERENCE_' | tee /dev/tty | sed 's/^/export /')
   ```
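Before creating a client, it can help to verify that nothing is missing. A small sketch (the helper name is illustrative, not part of the package):

```python
import os

REQUIRED = ("INFERENCE_URL", "INFERENCE_KEY", "INFERENCE_MODEL_ID")

def missing_inference_vars() -> list:
    """Return the names of required Heroku Inference variables not yet set."""
    return [name for name in REQUIRED if not os.environ.get(name)]

# Report rather than crash, so this is safe to run anywhere.
missing = missing_inference_vars()
if missing:
    print("Missing config vars:", ", ".join(missing))
else:
    print("All Heroku Inference config vars are set.")
```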
**Available models:**

- `claude-3-5-sonnet-latest` - Claude 3.5 Sonnet (recommended for best intelligence)
- `claude-3-5-haiku-latest` - Claude 3.5 Haiku (cost-effective and fast)
- `claude-3-opus-latest` - Claude 3 Opus (most capable)
- `claude-3-sonnet-latest` - Claude 3 Sonnet
- `claude-3-haiku-latest` - Claude 3 Haiku
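Since the model is selected through the `INFERENCE_MODEL_ID` config variable, you can switch models programmatically. A hedged sketch — the mapping and names below are illustrative, not part of the package:

```python
import os

# Illustrative mapping from a rough preference to a Heroku model ID.
MODEL_BY_PREFERENCE = {
    "best-quality": "claude-3-5-sonnet-latest",
    "low-cost": "claude-3-5-haiku-latest",
    "most-capable": "claude-3-opus-latest",
}

# Point the integration at the chosen model via its config variable.
os.environ["INFERENCE_MODEL_ID"] = MODEL_BY_PREFERENCE["low-cost"]
print(os.environ["INFERENCE_MODEL_ID"])  # claude-3-5-haiku-latest
```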
**Pricing:** Models are billed per token used. See the Heroku AI pricing page for current rates.
## Development and Testing

This project uses Poetry for dependency management and pytest for testing.
```bash
poetry install --with test
poetry run pytest
```

Or to run a specific test file:

```bash
poetry run pytest tests/unit_tests/test_chat_models.py
poetry run pytest -v
```

- Make sure you are using a compatible Python version (see `pyproject.toml`).
- If you add new test dependencies, add them to the `[tool.poetry.group.test.dependencies]` section in `pyproject.toml`.
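Unit tests should avoid live API calls. One common pattern is to substitute a deterministic fake for the chat model; a hypothetical sketch (these classes are illustrative and not from this repo):

```python
from dataclasses import dataclass

@dataclass
class FakeResult:
    """Mimics the shape of a chat result: exposes a .content attribute."""
    content: str

class FakeChatModel:
    """Echoes the last message back; stands in for ChatHeroku in unit tests."""

    def invoke(self, messages):
        return FakeResult(content=f"echo: {messages[-1]}")

def test_invoke_returns_content():
    chat = FakeChatModel()
    result = chat.invoke(["hello"])
    assert result.content == "echo: hello"

test_invoke_returns_content()
print("test passed")
```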
## Chat Models

The `ChatHeroku` class exposes chat models from Heroku using the Inference API:
```python
from langchain_heroku import ChatHeroku
from langchain_core.messages import HumanMessage

chat = ChatHeroku()
result = chat.invoke([HumanMessage(content="Sing a ballad of LangChain.")])
print(result.content)
```

With a system message and model parameters:

```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant that speaks in a friendly tone."),
    HumanMessage(content="What's the weather like?"),
]
chat = ChatHeroku(temperature=0.7, max_tokens=256)
result = chat.invoke(messages)
print(result.content)
```

Streaming:

```python
chat = ChatHeroku(streaming=True)
for chunk in chat.stream([HumanMessage(content="Tell me a story.")]):
    print(chunk.content, end="")
```