
Fiddler Auditor

Auditing Large Language Models made easy!


What is Fiddler Auditor?

Fiddler Auditor Capabilities

Language models enable companies to build and launch innovative applications that improve productivity and customer satisfaction. However, LLMs are known to hallucinate, generate adversarial responses that can harm users, and even expose private information from their training data, whether prompted to do so or not. It is therefore critical for ML and software application teams to identify and minimize these risks and weaknesses before putting LLMs and NLP models into production, which means auditing language models thoroughly beforehand. Fiddler Auditor enables you to test LLMs and NLP models, identify weaknesses in the models, and mitigate potential adversarial outcomes before deploying them to production.

Features and Capabilities

Fiddler Auditor Flow

Fiddler Auditor supports:

  • Red-teaming LLMs for your use case with prompt perturbation (see the sketch after the list below)
  • Integration with LangChain
  • Custom evaluation metrics
  • Generative and Discriminative NLP models
  • Comparison of LLMs

Example Report: An example report generated by Fiddler Auditor for text-davinci-003.
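
To make the prompt-perturbation idea concrete, here is a minimal sketch of the underlying evaluation loop in plain Python: paraphrase a prompt, query the model with each variant, and score the responses against a reference with a semantic-similarity metric. It intentionally avoids Fiddler Auditor's own classes (see the quick-start guides for the library's interface); the get_llm_response helper, the example prompts, and the 0.75 threshold are illustrative assumptions.

from sentence_transformers import SentenceTransformer, util

def get_llm_response(prompt: str) -> str:
    # Replace with a real model call (OpenAI, a LangChain chain, etc.).
    # A canned answer keeps this sketch runnable without API keys.
    return "Customers can return items within 30 days for a full refund."

original_prompt = "Summarize the refund policy for a customer."
perturbed_prompts = [
    "Give a customer-facing summary of the refund policy.",
    "Briefly explain our refund policy to a customer.",
]

# The response to the original prompt serves as the reference generation.
reference = get_llm_response(original_prompt)

# Embed the reference once, then compare every perturbed response to it.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
ref_emb = encoder.encode(reference, convert_to_tensor=True)
for prompt in perturbed_prompts:
    response = get_llm_response(prompt)
    resp_emb = encoder.encode(response, convert_to_tensor=True)
    score = util.cos_sim(resp_emb, ref_emb).item()
    print(f"{prompt!r}: similarity={score:.2f}, robust={score >= 0.75}")

Fiddler Auditor wraps this kind of loop, generating the perturbations, applying built-in or custom evaluation metrics, and collecting the results into a report such as the example above.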

Installation

From PyPI

Auditor is available on PyPI and is tested on Python 3.8 and above. We recommend creating a virtual Python environment and installing with the following command:

pip install fiddler-auditor
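
If you are starting from a clean setup, a typical sequence looks like this (standard Python tooling; the environment name auditor-env is arbitrary):

python -m venv auditor-env
source auditor-env/bin/activate
pip install fiddler-auditor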

From source

You can install from source after cloning this repo using the following command:

pip install .
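
For example, cloning the fiddler-labs/fiddler-auditor repository from GitHub:

git clone https://github.com/fiddler-labs/fiddler-auditor.git
cd fiddler-auditor
pip install .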

Quick-start guides

Contribution

We are continuously updating this library to support language models as they evolve.

  • Contributions in the form of suggestions and PRs to Fiddler Auditor are welcome!
  • If you encounter a bug, please feel free to raise issues in this repository.

For step-by-step instructions, follow the Contribution Guide.

Community