llm-gatekeeper

A lightweight package to help you check whether a prompt to an AI chatbot is relevant to the context. Keep your API costs down by filtering out irrelevant queries!

Overview

This package uses the Xenova/mobilebert-uncased-mnli model via the Transformers.js library to perform zero-shot classification.

Features

  • Fast and efficient relevance checking (the model loads automatically within a few seconds)
  • Uses advanced NLP model for accurate classification
  • Easy to integrate into existing chatbot systems
  • Helps reduce unnecessary API calls to your main LLM

Installation

$ npm i llm-gatekeeper

Usage

import isRelevant from "llm-gatekeeper";

const prompt = "Should I travel this summer?";
const keywords = ["reading", "books", "essays"];

const relevance = await isRelevant(prompt, keywords);

if (relevance === false) {
  // do not make API call
  console.log(
    `Sorry, the chatbot can only answer questions about ${keywords.join(
      ", or "
    )}`
  );
} else {
  // make API call
}

API

isRelevant(prompt, keywords)

Parameter | Type     | Description
prompt    | string   | The input text to check for relevance
keywords  | string[] | An array of keywords defining the relevant context

Returns

Promise<boolean>: Resolves to true if the prompt is relevant, false otherwise.

How It Works

The package uses a pre-trained MobileBERT model for zero-shot classification. The model is lightweight and optimized for resource-limited devices.

It classifies the prompt into two categories: one built from your keywords, and a fallback "something else" category. If the prompt is more likely to belong to the keyword category, it's considered relevant.
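The decision rule above can be sketched as follows. Note that `classify` here is a hypothetical mock standing in for the Transformers.js zero-shot pipeline, not the package's real internals; the only assumption is that the classifier returns one score per candidate label.

```javascript
// Sketch of the relevance decision. `classify` stands in for the
// zero-shot classifier and returns a score per candidate label.
function decideRelevance(classify, prompt, keywords) {
  const labels = [keywords.join(", "), "something else"];
  const { scores } = classify(prompt, labels);
  // Relevant when the keyword label outscores "something else"
  return scores[0] > scores[1];
}

// Mock classifier: scores the keyword label by naive word overlap
// with the prompt (the real model uses MNLI entailment instead).
function mockClassify(prompt, labels) {
  const words = prompt.toLowerCase().split(/\W+/);
  const hits = labels[0].split(", ").filter((k) => words.includes(k)).length;
  return { labels, scores: hits > 0 ? [0.9, 0.1] : [0.2, 0.8] };
}

decideRelevance(mockClassify, "Recommend some books to read", ["reading", "books"]); // true
decideRelevance(mockClassify, "Should I travel this summer?", ["reading", "books"]); // false
```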

Performance Considerations

  • The model is loaded asynchronously when the package is imported, which may cause a short delay on first use (typically a few seconds).
  • Once the model is in memory, promises are resolved almost instantaneously.
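One way to hide the first-use delay is to fire a throwaway call at startup so the model is already in memory when the first real prompt arrives. A minimal sketch of that warm-up pattern is below; `isRelevant` is mocked with a simple substring check so the example is self-contained, whereas in real use it would be the package's default export.

```javascript
// Mock of the package's isRelevant export, so this sketch runs standalone.
const isRelevant = async (prompt, keywords) =>
  keywords.some((k) => prompt.toLowerCase().includes(k));

// Kick off a throwaway call at startup so model loading (mocked here)
// happens before the first real prompt arrives.
const warmup = isRelevant("warmup", ["warmup"]).catch(() => {});

async function gatekeep(prompt, keywords) {
  await warmup; // ensure the warm-up call has settled
  return isRelevant(prompt, keywords);
}
```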

Limitations

If you are not getting accurate results, try adding more keywords, or submit an issue on GitHub!

Credits

Thanks to my friend Zein for inspiring the idea for the package. I wouldn't have made it if he wasn't abusing my chatbot on my website and costing me money.
