feat: JSON Schema grammar enhancements (#388)
* feat(JSON Schema grammar): `prefixItems`, `minItems`, `maxItems` support
* feat(JSON Schema grammar): improve inferred types
* feat(JSON Schema grammar): object `additionalProperties`, `minProperties`, `maxProperties`
* feat(JSON Schema grammar): string `minLength`, `maxLength`, `format`
* feat(function calling): params `description` support
* feat(function calling): document JSON Schema type properties on Functionary chat function types
* docs: how to reduce hallucinations when using JSON schema grammar
* fix: bugs
* chore: update dependencies
giladgd authored Dec 1, 2024
1 parent bc6cfe3 commit 4d387de
Showing 49 changed files with 5,122 additions and 1,394 deletions.
6 changes: 3 additions & 3 deletions .vitepress/config.ts
@@ -3,7 +3,7 @@ import {createRequire} from "node:module";
import process from "process";
import {fileURLToPath} from "url";
import fs from "fs-extra";
-import {createContentLoader, defineConfig, HeadConfig} from "vitepress";
+import {createContentLoader, defineConfig, HeadConfig, Plugin as VitepressPlugin} from "vitepress";
import {transformerTwoslash} from "@shikijs/vitepress-twoslash";
import ts from "typescript";
import envVar from "env-var";
@@ -308,7 +308,7 @@ export default defineConfig({
GitChangelog({
repoURL: () => "https://github.com/withcatai/node-llama-cpp",
cwd: path.join(__dirname, "..", "docs")
-}),
+}) as VitepressPlugin,
GitChangelogMarkdownSection({
exclude: (id) => (
id.includes(path.sep + "api" + path.sep) ||
@@ -318,7 +318,7 @@ export default defineConfig({
sections: {
disableContributors: true
}
-}),
+}) as VitepressPlugin,
BlogPageInfoPlugin({
include: (id) => id.includes(path.sep + "blog" + path.sep) && !id.endsWith(path.sep + "blog" + path.sep + "index.md")
})
44 changes: 41 additions & 3 deletions docs/guide/grammar.md
@@ -1,3 +1,6 @@
---
outline: deep
---
# Using Grammar
Use a grammar to force a model to generate its response in a specific text format, such as `JSON`.

@@ -69,11 +72,11 @@ console.log(JSON.parse(a2));
The [`llama.createGrammarForJsonSchema(...)`](../api/classes/Llama.md#creategrammarforjsonschema) method creates a [`LlamaJsonSchemaGrammar`](../api/classes/LlamaJsonSchemaGrammar)
from a GBNF grammar generated based on the [JSON schema](https://json-schema.org/learn/getting-started-step-by-step) you provide.

-It only supports [a small subset of the JSON schema spec](../api/type-aliases/GbnfJsonSchema.md),
+It only supports [a subset of the JSON schema spec](../api/type-aliases/GbnfJsonSchema.md),
but it's enough to generate useful JSON objects using a text generation model.

-Many features of [JSON schema spec](https://json-schema.org/learn/getting-started-step-by-step) are not supported here on purpose,
-as those features don't align well with the way models generate text and are prone to [hallucinations](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)).
+Some features of [JSON schema spec](https://json-schema.org/learn/getting-started-step-by-step) are not supported on purpose,
+as those features don't align well with the way models generate text, and are too prone to [hallucinations](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)).
Workarounds for the missing features that you can implement with the supported set of features often lead to improved generation quality.

To see what subset of the JSON schema spec is supported, see the [`GbnfJsonSchema` type](../api/type-aliases/GbnfJsonSchema.md) and follow its sub-types.
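For instance, a schema that stays within the supported subset might look like the following sketch (the keys mirror the example used later in this guide; check the [`GbnfJsonSchema` type](../api/type-aliases/GbnfJsonSchema.md) for the authoritative list of supported keywords):

```typescript
// A sketch of a schema using only common supported features:
// explicit types, fixed keys, and "oneOf" for a nullable field.
// (Illustrative only - verify each keyword against GbnfJsonSchema.)
const schema = {
    type: "object",
    properties: {
        positiveWordsInUserMessage: {
            type: "array",
            items: {type: "string"}
        },
        userMessagePositivityScoreFromOneToTen: {
            type: "number"
        },
        nameOfUser: {
            oneOf: [{type: "string"}, {type: "null"}]
        }
    }
};

// In node-llama-cpp, this object would then be passed to
// `llama.createGrammarForJsonSchema(schema)` to build the grammar.
console.log(Object.keys(schema.properties));
```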
@@ -134,6 +137,41 @@ console.log(
);
```

### Reducing Hallucinations When Using JSON Schema Grammar {#reducing-json-schema-hallucinations}
When you force a model to follow a specific JSON schema in its response, the model isn't aware of the entire schema being enforced on it.
To avoid hallucinations, you need to inform the model in some way of what you expect from its response.

To do that, you can:
* Explain to the model what you expect in the prompt itself.
<br />
You can do that by giving a brief explanation of what you expect,
or by dumping the entire JSON schema in the prompt (which can eat up a lot of tokens and is thus not recommended).
* Force the model to output self-explanatory keys as part of its response, so it can then generate values for those keys.
* Use a combination of both.

The technique used in [the above example](#json-schema) forces the model to output the given keys, and then lets the model generate the values for those keys:
1. The model is forced to generate the text `{"positiveWordsInUserMessage": [`, and then we let it finish the syntax of the JSON array with only strings.
2. When it finishes the array, we force it to generate the text `, "userMessagePositivityScoreFromOneToTen": `, and then we let it generate a number.
3. Finally, we force it to generate the text `, "nameOfUser": `, and then we let it generate either a string or `null`.

This technique allows us to get the desired result without explaining to the model what we want in advance.
While this method works great in this example, it may not work as well in other cases that need some explanation.
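The alternation of forced and freely generated text in these steps can be sketched in plain TypeScript (the "generated" pieces below are hard-coded stand-ins for what the model would actually produce):

```typescript
// Pieces the grammar forces verbatim, derived from the schema's fixed keys:
const forcedPrefix = '{"positiveWordsInUserMessage": [';
const forcedScoreKey = ', "userMessagePositivityScoreFromOneToTen": ';
const forcedNameKey = ', "nameOfUser": ';

// Stand-ins for the freely generated parts (strings, a number, string|null):
const generatedWords = '"great", "happy"';
const generatedScore = "8";
const generatedName = '"Alice"';

// The final response interleaves forced and generated segments,
// so it is guaranteed to parse as JSON matching the schema:
const response =
    forcedPrefix + generatedWords + "]" +
    forcedScoreKey + generatedScore +
    forcedNameKey + generatedName +
    "}";

console.log(JSON.parse(response));
```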

For example, let's say we force the model to generate an array with at least 2 items and at most 5 items;
if we don't provide any prior explanation for this requirement (either by using a self-explanatory key name or in the prompt),
then the model won't be able to "plan" the entire content of the array in advance,
which can lead it to generate inconsistent and unevenly spread items.
It can also make the model repeat existing values in different forms or make up wrong values,
just to satisfy the enforced schema.
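One way to provide that prior explanation directly in the schema is a self-explanatory key name, as in this sketch (the key name here is made up for illustration; `minItems`/`maxItems` support is per the changelog above):

```typescript
// The key name itself tells the model how many items to plan for,
// while minItems/maxItems enforce the bounds at the grammar level.
// (The key name is a made-up illustration, not an API requirement.)
const schema = {
    type: "object",
    properties: {
        twoToFiveKeyPointsFromArticle: {
            type: "array",
            items: {type: "string"},
            minItems: 2,
            maxItems: 5
        }
    }
};
```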

The key takeaway is that to reduce hallucinations and achieve great results when using a JSON schema grammar,
you need to ensure you inform the model of your expectations in some way.

::: tip NOTE
When using [function calling](./function-calling.md), the model is always aware of the entire schema being enforced on it,
so there's no need to explain the schema in the prompt.
:::

## Creating Your Own Grammar {#custom-grammar}
To create your own grammar, read the [GBNF guide](https://github.com/ggerganov/llama.cpp/blob/f5fe98d11bdf9e7797bcfb05c0c3601ffc4b9d26/grammars/README.md) to create a GBNF grammar file.

5 changes: 3 additions & 2 deletions eslint.config.js
@@ -90,7 +90,7 @@ export default tseslint.config({
after: true
}],
"@stylistic/comma-style": ["error", "last"],
-"@stylistic/comma-dangle": ["error", "never"],
+"@stylistic/comma-dangle": ["warn", "never"],
"no-var": ["error"],
"import/order": ["error", {
groups: ["builtin", "external", "internal", "parent", "sibling", "index", "type", "object", "unknown"],
@@ -142,7 +142,8 @@ export default tseslint.config({
{blankLine: "always", prev: "*", next: "method"}
]
}],
-"@stylistic/no-trailing-spaces": ["warn"]
+"@stylistic/no-trailing-spaces": ["warn"],
+"@stylistic/no-multi-spaces": ["warn"]
}
}, {
files: ["**/**.{,c,m}ts"],
50 changes: 49 additions & 1 deletion llama/addon/AddonGrammar.cpp
@@ -34,6 +34,54 @@ AddonGrammar::~AddonGrammar() {
}
}

Napi::Value AddonGrammar::isTextCompatible(const Napi::CallbackInfo& info) {
const std::string testText = info[0].As<Napi::String>().Utf8Value();

auto parsed_grammar = llama_grammar_init_impl(nullptr, grammarCode.c_str(), rootRuleName.c_str());

// will be null if there are parse errors
if (parsed_grammar == nullptr) {
Napi::Error::New(info.Env(), "Failed to parse grammar").ThrowAsJavaScriptException();
return Napi::Boolean::New(info.Env(), false);
}

const auto cpts = unicode_cpts_from_utf8(testText);
const llama_grammar_rules & rules = llama_grammar_get_rules(parsed_grammar);
llama_grammar_stacks & stacks_cur = llama_grammar_get_stacks(parsed_grammar);

for (const auto & cpt : cpts) {
const llama_grammar_stacks stacks_prev = llama_grammar_get_stacks(parsed_grammar);

llama_grammar_accept(rules, stacks_prev, cpt, stacks_cur);

if (stacks_cur.empty()) {
// no stacks means that the grammar failed to match at this point
llama_grammar_free_impl(parsed_grammar);
return Napi::Boolean::New(info.Env(), false);
}
}

for (const auto & stack : stacks_cur) {
if (stack.empty()) {
// an empty stack means that the grammar has been completed
llama_grammar_free_impl(parsed_grammar);
return Napi::Boolean::New(info.Env(), true);
}
}

llama_grammar_free_impl(parsed_grammar);
return Napi::Boolean::New(info.Env(), false);
}

void AddonGrammar::init(Napi::Object exports) {
-exports.Set("AddonGrammar", DefineClass(exports.Env(), "AddonGrammar", {}));
+exports.Set(
+"AddonGrammar",
+DefineClass(
+exports.Env(),
+"AddonGrammar",
+{
+InstanceMethod("isTextCompatible", &AddonGrammar::isTextCompatible),
+}
+)
+);
}
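The matching loop in `isTextCompatible` above can be illustrated with a toy TypeScript analogue (a hand-rolled matcher for a single literal-string "grammar", not the actual llama.cpp stacks API): advance every live parse state one codepoint at a time, fail as soon as no states survive, and succeed when some state has nothing left to match.

```typescript
// Toy sketch of the loop above. A "stack" here is just the list of
// codepoints a parse state still expects, analogous to a grammar stack.
type Stack = string[];

function isTextCompatible(expected: string, testText: string): boolean {
    // Start with a single parse state expecting the whole literal.
    let stacks: Stack[] = [[...expected]];
    for (const cpt of testText) { // for..of iterates by Unicode codepoint
        const next: Stack[] = [];
        for (const stack of stacks) {
            if (stack[0] === cpt)
                next.push(stack.slice(1)); // this state accepts the codepoint
        }
        if (next.length === 0)
            return false; // no stacks left: the text failed to match here
        stacks = next;
    }
    // An empty stack means the grammar has been completed by the text.
    return stacks.some((stack) => stack.length === 0);
}
```

Note that, like the C++ version, a mere prefix of a valid match is not enough: some stack must be fully consumed for the text to be considered compatible.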
3 changes: 3 additions & 0 deletions llama/addon/AddonGrammar.h
@@ -2,6 +2,7 @@
#include "llama.h"
#include "common/common.h"
#include "llama-grammar.h"
#include "unicode.h"
#include "napi.h"
#include "addonGlobals.h"

@@ -15,5 +16,7 @@ class AddonGrammar : public Napi::ObjectWrap<AddonGrammar> {
AddonGrammar(const Napi::CallbackInfo& info);
~AddonGrammar();

Napi::Value isTextCompatible(const Napi::CallbackInfo& info);

static void init(Napi::Object exports);
};
