
bitovi/temporal-ai-pipeline-example


Temporal AI workflow

The following is a simplified sample Temporal Workflow that creates custom embeddings from a large list of files for use in an LLM-based application.

Installing and running dependencies

This repo contains a simple local development setup. For production use, we recommend Temporal Cloud and AWS.

Use the following command to run everything you need locally:

  • Localstack (for storing files in local S3)
  • Postgres (where embeddings are stored)
  • Temporal (runs temporal server start-dev in a docker container)
  • A Temporal Worker (to run your Workflow/Activity code)

OPENAI_API_KEY=<your OpenAI API key> docker compose up --build -d

Tearing everything down

Run the following command to tear everything down (the -v flag also removes the volumes, deleting stored files and embeddings):

docker compose down -v

Create embeddings

npm run process-documents
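Under the hood, a document-processing pipeline like this typically splits each file into overlapping chunks before requesting embeddings, so that related context is not lost at chunk boundaries. As a rough sketch of that step (the function name and parameters below are hypothetical, not taken from this repo's code):

```typescript
// Hypothetical sketch: split a document into fixed-size, overlapping chunks
// before sending each chunk to an embeddings API (e.g. OpenAI) from an Activity.
export function chunkText(
  text: string,
  chunkSize = 1000, // max characters per chunk (assumed value)
  overlap = 200     // characters shared between consecutive chunks (assumed value)
): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    // Stop once the current chunk reaches the end of the text.
    if (start + chunkSize >= text.length) break;
    // Step forward by less than a full chunk so chunks overlap.
    start += chunkSize - overlap;
  }
  return chunks;
}
```

In a Temporal setup, chunking and the embeddings API call would usually live in Activities, so failed API requests are retried automatically by the Workflow.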

Generated embeddings are stored in a Postgres table.

Invoke a prompt

npm run invoke-prompt <embeddings workflowID> "<query>"
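Answering a prompt in a pipeline like this usually means embedding the query, ranking the stored embeddings by similarity, and passing the best-matching chunks to the LLM as context. A minimal sketch of the ranking step, assuming cosine similarity over plain number arrays (the repo itself stores embeddings in Postgres, where this comparison could instead run as a SQL query):

```typescript
// Hypothetical sketch: rank stored embedding rows by cosine similarity
// to a query embedding and keep the top k matches.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

export function topK(
  query: number[],
  rows: { id: string; embedding: number[] }[],
  k = 3
): { id: string; score: number }[] {
  return rows
    .map((row) => ({ id: row.id, score: cosineSimilarity(query, row.embedding) }))
    .sort((a, b) => b.score - a.score) // highest similarity first
    .slice(0, k);
}
```

The top-k chunks would then be concatenated into the LLM prompt alongside the user's query.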

About

Example application using Temporal for an AI Pipeline
