transitive-bullshit committed Nov 6, 2023
1 parent 149c1d9 commit c63f94c
Showing 1 changed file with 4 additions and 3 deletions.
docs/pages/guide/caching.md (7 changes: 4 additions & 3 deletions)
@@ -2,9 +2,9 @@

LLMs, embedding models, and datastores all work with the same `cache` and `cacheKey` parameters and support basically any caching strategy you can think of.

- **By default, caching is not enabled on any of the classes**.
+ By default, caching is not enabled on any of the classes.

- To enable caching, pass in a `cache` object, which must implement `.get(key)` and `.set(key, value)`, both of which can be either sync or async.
+ **To enable caching, pass in a `cache` object**, which must implement `.get(key)` and `.set(key, value)`, both of which can be either sync or async.

The `cache` object is designed to work with `new Map()`, [quick-lru](https://github.com/sindresorhus/quick-lru), [any keyv adapter](https://github.com/jaredwray/keyv), or any other key-value store.
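For illustration, since quick-lru already exposes the Map-like `.get(key)` / `.set(key, value)` surface, an instance can be passed straight through as the `cache`. This is a minimal sketch, not part of the diff; `ChatModel` and the `cache` option come from the example below, while the `maxSize` value is an arbitrary choice:

```ts
import { ChatModel } from '@dexaai/dexter/model';
import QuickLRU from 'quick-lru';

// quick-lru satisfies the `.get(key)` / `.set(key, value)` contract
// out of the box, and evicts least-recently-used entries once the
// cache grows past `maxSize` (100 here is arbitrary).
const chatModel = new ChatModel({
  cache: new QuickLRU({ maxSize: 100 }),
});
```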

@@ -65,7 +65,8 @@ import { ChatModel } from '@dexaai/dexter/model';
import { pick } from '@dexaai/utils';
import hashObject from 'hash-obj';

- // Create an OpenAI chat completion model w/ an in-memory cache using a custom cache key
+ // Create an OpenAI chat completion model w/ an in-memory cache using a
+ // custom cache key
const chatModel = new ChatModel({
  cacheKey: (params) => hashObject(pick(params, 'model', 'messages')),
  cache: new Map(),
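With the custom `cacheKey` above, only the `model` and `messages` params feed into the hash, so two calls that differ in other options still map to the same cache entry. A hedged usage sketch, assuming the constructor above is closed with `});` and that dexter models are invoked via `.run()`:

```ts
const params = {
  messages: [{ role: 'user' as const, content: 'What is 2 + 2?' }],
};

// First call reaches the OpenAI API and stores the response under
// hashObject(pick(params, 'model', 'messages')).
await chatModel.run(params);

// Identical model + messages produce the same hash, so this call is
// served from the in-memory Map instead of the API.
await chatModel.run(params);
```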
