⭐ If you like this sample, star it on GitHub — it helps a lot!
Overview • Get started • Run the sample • Resources • FAQ • Troubleshooting
This sample shows how to build a serverless AI chat experience with Retrieval-Augmented Generation using LangChain.js and Azure. The application is hosted on Azure Static Web Apps and Azure Functions, with Azure Cosmos DB for NoSQL as the vector database. You can use it as a starting point for building more complex AI applications.
> [!TIP]
> You can test this application locally without any cost using Ollama. Follow the instructions in the Local Development section to get started.
Building AI applications can be complex and time-consuming, but LangChain.js and Azure serverless technologies greatly simplify the process. This application is a chatbot that uses a set of enterprise documents to generate responses to user queries.
We provide sample data so this sample is ready to try, but feel free to replace it with your own. We use a fictitious company called Contoso Real Estate, and the experience allows its customers to ask support questions about the usage of its products. The sample data includes a set of documents that describe its terms of service, privacy policy, and a support guide.
This application is made from multiple components:

- A web app made with a single chat web component built with Lit and hosted on Azure Static Web Apps. The code is located in the `packages/webapp` folder.
- A serverless API built with Azure Functions that uses LangChain.js to ingest the documents and generate responses to user chat queries. The code is located in the `packages/api` folder.
- A database to store the chat sessions, the text extracted from the documents, and the vectors generated by LangChain.js, using Azure Cosmos DB for NoSQL.
- A file storage to store the source documents, using Azure Blob Storage.
We use the HTTP protocol for AI chat apps to communicate between the web app and the API.
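As a rough illustration, a client request under such a protocol is a JSON payload carrying the conversation messages. The field names and shape below are assumptions for illustration, not the exact contract used by this sample:

```typescript
// Sketch of a chat request payload in the style of an HTTP AI chat
// protocol. Field names and payload shape are illustrative assumptions.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

function buildChatRequest(messages: ChatMessage[]) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  };
}

const request = buildChatRequest([
  { role: "user", content: "What is your refund policy?" },
]);
console.log(request.body);
```

On the server side, the API would parse the `messages` array and return (or stream) the assistant's answer in a matching response shape.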
- Serverless Architecture: Utilizes Azure Functions and Azure Static Web Apps for a fully serverless deployment.
- Retrieval-Augmented Generation (RAG): Combines the power of Azure Cosmos DB and LangChain.js to provide relevant and accurate responses.
- Chat Sessions History: Maintains a personal chat history for each user, allowing them to revisit previous conversations.
- Scalable and Cost-Effective: Leverages Azure's serverless offerings to provide a scalable and cost-effective solution.
- Local Development: Supports local development using Ollama for testing without any cloud costs.
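To make the RAG idea concrete, here is a minimal, self-contained sketch of the retrieval step: documents and the query are embedded as vectors, and the documents closest to the query by cosine similarity are retrieved. In the real sample this is handled by LangChain.js and Azure Cosmos DB's vector search; the code below only illustrates the principle with made-up three-dimensional vectors.

```typescript
// Toy illustration of vector retrieval: rank documents by cosine
// similarity to a query vector. Real embeddings come from an embedding
// model and have hundreds of dimensions; these vectors are made up.
type Doc = { id: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort(
      (d1, d2) =>
        cosineSimilarity(query, d2.vector) - cosineSimilarity(query, d1.vector)
    )
    .slice(0, k);
}

const docs: Doc[] = [
  { id: "terms-of-service", vector: [0.9, 0.1, 0.0] },
  { id: "privacy-policy", vector: [0.1, 0.9, 0.0] },
  { id: "support-guide", vector: [0.0, 0.2, 0.9] },
];

// A query about support should rank the support guide first.
const results = topK([0.05, 0.1, 0.95], docs, 2);
console.log(results.map((d) => d.id));
```

The retrieved documents are then passed to the language model as context, which is what lets the chatbot answer from the enterprise documents rather than from its training data alone.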
There are multiple ways to get started with this project.
The quickest way is to use GitHub Codespaces, which provides a preconfigured environment for you. Alternatively, you can set up your local environment by following the instructions below.
> [!IMPORTANT]
> If you want to run this sample entirely locally using Ollama, you have to follow the instructions in the local environment section.
You need to install the following tools to work on your local machine:
- Node.js LTS
- Azure Developer CLI
- Git
- PowerShell 7+ (for Windows users only)
  - Important: Ensure you can run `pwsh.exe` from a PowerShell command. If this fails, you likely need to upgrade PowerShell.
  - Instead of PowerShell, you can also use Git Bash or WSL to run the Azure Developer CLI commands.
- Azure Functions Core Tools (should be installed automatically with NPM; only install manually if the API fails to start)
Then you can get the project code:
1. Fork the project to create your own copy of this repository.
2. On your forked repository, select the Code button, then the Local tab, and copy the URL of your forked repository.
3. Open a terminal and clone your forked repository:

```bash
git clone <your-repo-url>
```
You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code:
A similar option to Codespaces is VS Code Dev Containers, which will open the project in your local VS Code instance using the Dev Containers extension.
You will also need to have Docker installed on your machine to run the container.
There are multiple ways to run this sample: locally using Ollama or Azure OpenAI models, or by deploying it to Azure.
- Azure account. If you're new to Azure, get an Azure account for free to get free Azure credits to get started. If you're a student, you can also get free credits with Azure for Students.
- Azure subscription with access enabled for the Azure OpenAI service. You can request access with this form.
- Azure account permissions:
  - Your Azure account must have `Microsoft.Authorization/roleAssignments/write` permissions, such as Role Based Access Control Administrator, User Access Administrator, or Owner. If you don't have subscription-level permissions, you must be granted RBAC for an existing resource group and deploy to that existing group.
  - Your Azure account also needs `Microsoft.Resources/deployments/write` permissions on the subscription level.
See the cost estimation details for running this sample on Azure.
1. Open a terminal and navigate to the root of the project.
2. Authenticate with Azure by running `azd auth login`.
3. Run `azd up` to deploy the application to Azure. This will provision Azure resources, deploy this sample, and build the search index based on the files found in the `./data` folder.
   - You will be prompted to select a base location for the resources. If you're unsure which location to choose, select `eastus2`.
   - By default, the OpenAI resource will be deployed to `eastus2`. You can set a different location with `azd env set AZURE_OPENAI_RESOURCE_GROUP_LOCATION <location>`. Currently only a short list of locations is accepted; that list is based on the OpenAI model availability table and may become outdated as availability changes.
The deployment process will take a few minutes. Once it's done, you'll see the URL of the web app in the terminal.
You can now open the web app in your browser and start chatting with the bot.
When deploying the sample in an enterprise context, you may want to enforce tighter security restrictions to protect your data and resources. See the enhance security guide for more information.
To clean up all the Azure resources created by this sample:
1. Run `azd down --purge`
2. When asked if you are sure you want to continue, enter `y`
The resource group and all the resources will be deleted.
If you have a machine with enough resources, you can run this sample entirely locally without using any cloud resources. To do that, you first have to install Ollama and then run the following commands to download the models on your machine:
```bash
ollama pull llama3.1:latest
ollama pull nomic-embed-text:latest
```
> [!NOTE]
> The `llama3.1` model will download a few gigabytes of data, so it can take some time depending on your internet connection.
After that, you have to install the NPM dependencies:

```bash
npm install
```
Then you can start the application by running the following command, which will start the web app and the API locally:

```bash
npm start
```
Then, in a second terminal, run the following command to upload the PDF documents from the `/data` folder to the API:

```bash
npm run upload:docs
```
This only has to be done once, unless you want to add more documents.
You can now open the URL `http://localhost:8000` in your browser to start chatting with the bot.
> [!NOTE]
> While local models usually work well enough to answer the questions, they may sometimes fail to perfectly follow the advanced formatting instructions for citations and follow-up questions. This is expected, and is a limitation of using smaller local models.
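One way to cope with this is to parse the model's output defensively, so a missing or malformed citation degrades gracefully instead of breaking the UI. The sketch below assumes a hypothetical inline citation format like `[support.pdf]` (not necessarily the format this sample uses) and tolerates answers where no citation was emitted:

```typescript
// Hypothetical: extract inline citation markers like [support.pdf] from
// a model answer. Answers without any markers yield an empty list
// instead of an error, which is the defensive behavior we want.
function extractCitations(answer: string): { text: string; citations: string[] } {
  const citations: string[] = [];
  const text = answer
    .replace(/\[([^\[\]]+\.pdf)\]/g, (_match, file: string) => {
      if (!citations.includes(file)) citations.push(file);
      return "";
    })
    .trim();
  return { text, citations };
}

const withCitations = extractCitations("Refunds take 5 days [support.pdf].");
const withoutCitations = extractCitations("Refunds take 5 days.");
console.log(withCitations.citations);
console.log(withoutCitations.citations);
```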
First you need to provision the Azure resources needed to run the sample. Follow the instructions in the Deploy the sample to Azure section to deploy the sample to Azure, then you'll be able to run the sample locally using the deployed Azure resources.
Once your deployment is complete, you should see a `.env` file in the `packages/api` folder. This file contains the environment variables needed to run the application using Azure resources.
To run the sample, you can then use the same commands as for the Ollama setup. This will start the web app and the API locally:

```bash
npm start
```
Open the URL `http://localhost:8000` in your browser to start chatting with the bot.
Note that the documents are uploaded automatically when deploying the sample to Azure with `azd up`.
> [!TIP]
> You can switch back to using Ollama models by simply deleting the `packages/api/.env` file and starting the application again. To regenerate the `.env` file, you can run `azd env get-values > packages/api/.env`.
Here are some resources to learn more about the technologies used in this sample:
- LangChain.js documentation
- Generative AI with JavaScript
- Generative AI For Beginners
- Azure OpenAI Service
- Azure Cosmos DB for NoSQL
- Ask YouTube: LangChain.js + Azure Quickstart sample
- Chat + Enterprise data with Azure OpenAI and Azure AI Search
- Revolutionize your Enterprise Data with Chat: Next-gen Apps w/ Azure OpenAI and AI Search
You can also find more Azure AI samples here.
You can find answers to frequently asked questions in the FAQ.
If you have any issue when running or deploying this sample, please check the troubleshooting guide. If you can't find a solution to your problem, please open an issue in this repository.
For more detailed guidance on how to use this sample, please refer to the tutorial.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.