
Instruct mode or system prompt sorely needed! #155

Open
Ph0rk0z opened this issue Oct 15, 2023 · 9 comments


Ph0rk0z commented Oct 15, 2023

I am trying to spin this up for fun and to have my AI on Telegram. Some things I noticed:

  1. Instruct mode is not supported, so you can only talk to the model's default persona. This causes quite a few problems, since "assistant" mode in many models is heavily censored: say one bad word and the model lectures you. Normally this is fixed with an instruct template or a system prompt, but here we only have the character definitions.
  2. Loading characters from the config file seems broken. I set a character .yaml in the config file, but only the default example still loads. I even deleted the example and copied my character into the "characters" folder in the extension, but I still get chiraru. I don't wish to take away the option of changing the character from the default, but the wrong one is loading.

I have tried both API and normal mode. I also edited it the way the user did in #94 (comment). Perhaps I can also try duplicating the settings in generator_params.json.

Thoughts?

edit: I find that the variable names in the config differ from those in the code. I got the preset and character loading by renaming them to match the names used in the code.

@innightwolfsleep innightwolfsleep self-assigned this Oct 16, 2023

innightwolfsleep commented Oct 25, 2023

#156 added context/user/bot prompt prefix and postfix options to the config files. These can be configured as a prompt template. (At least, I think so.)
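For illustration, an Alpaca-style template could be expressed with such prefix/postfix values. The key names below are hypothetical; check the actual config files shipped with #156 for the real ones:

```yaml
# Hypothetical key names -- consult the config added in #156 for the real ones.
context_prompt_begin: ""
context_prompt_end: "\n"
user_prompt_begin: "### Instruction:\n"
user_prompt_end: "\n"
bot_prompt_begin: "### Response:\n"
bot_prompt_end: "\n"
```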


Ph0rk0z commented Oct 26, 2023

Thanks, I will try it out.

innightwolfsleep (Owner) commented:

Added examples for different templates:
#158


Ph0rk0z commented Oct 29, 2023

I tested it in the new repo. It doesn't really work as instruct mode, since the example dialogue isn't wrapped, but it does work as a system prompt. I suppose one could manually edit the character card to fit an instruct template too.

innightwolfsleep (Owner) commented:

Can you give an example? Share a character, a config, and what you want to get.

p.s. Perhaps a manual character edit is needed. I need to investigate real cases.

p.p.s. I tested a few common prompt templates and they work fine for me. If you know other prompt templates, please share them and I'll try them.


Ph0rk0z commented Oct 29, 2023

I just used Alpaca. The ### Instruction and ### Response markers don't wrap the sample dialogue, so the model got confused and started outputting nothing. I used evilGPT from chub.ai to mess with. I think that if I manually wrapped the examples in the instruct template, it would work fine. It's not seamless, but at least now it's possible.

I gave up and went with the system prompt only, which does 80% of what I need. Getting the model out of the "assistant" personality is what I'm after, since a lot of models are at their most censored in that mode. If you load up good models like Euryale or Airoboros in textgen and compare chat vs. chat-instruct with the default character, you can see what I'm talking about. It might be a bigger problem for small models like 7B/13B, since they are more template-bound than 70B; the latter just goes with whatever.
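The manual wrapping mentioned above could be sketched like this, assuming Alpaca markers and a card-style example dialogue of `User: ...` / `Char: ...` lines. The function and marker names are illustrative, not the extension's actual API:

```python
# Sketch: rewrap a card-style example dialogue into Alpaca instruct turns.
# Marker strings and names here are illustrative, not the extension's API.
ALPACA_USER = "### Instruction:\n"
ALPACA_BOT = "### Response:\n"

def wrap_example_dialog(example: str, user_name: str = "User",
                        char_name: str = "Char") -> str:
    """Turn 'User: msg' / 'Char: msg' lines into Alpaca-wrapped turns."""
    turns = []
    for line in example.strip().splitlines():
        if line.startswith(user_name + ":"):
            turns.append(ALPACA_USER + line[len(user_name) + 1:].strip())
        elif line.startswith(char_name + ":"):
            turns.append(ALPACA_BOT + line[len(char_name) + 1:].strip())
    return "\n".join(turns)

print(wrap_example_dialog("User: hi there\nChar: Greetings, traveler."))
```

Doing this once by hand in the character card achieves the same effect, at the cost of tying the card to one template.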

innightwolfsleep (Owner) commented:

Got it.
Currently, the example dialogue isn't used as part of the context in my code... This is part of the truncation logic (the context is never truncated; the example, greeting, and current conversation messages are truncated to avoid buffer overflow).

As a workaround, I can advise moving the example into the context. But I really should think about fixing this... There are too many template variations -_-
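The truncation behavior described above could be illustrated roughly like this (a sketch of the described policy, not the extension's actual code): the character context always survives, while the example dialogue, greeting, and oldest history messages are dropped first when the prompt exceeds the budget.

```python
# Illustration of the described truncation policy (not the real code):
# context is never truncated; example, greeting, and the oldest history
# messages are dropped first when the prompt exceeds the budget.
def build_prompt(context, example, greeting, history, max_chars=2000):
    optional = [example, greeting] + list(history)
    while optional and len(context) + sum(len(p) + 1 for p in optional) > max_chars:
        optional.pop(0)  # drop the oldest optional piece first
    return "\n".join([context] + optional)
```

Under this policy, moving the example dialogue into the context is exactly what exempts it from truncation.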


Ph0rk0z commented Oct 29, 2023

I watched the terminal in textgen; the example dialogue is sent along with the character prompt. It is formatted like the card, with Char: message\n User: message\n

I haven't tried standalone with exllama or llama.cpp yet. I mostly run this to access the AI when away from home.

You are also missing a few of the new repetition penalty params, but it was trivial to add them.

innightwolfsleep (Owner) commented:

I think I will change the template implementation in the next few updates... Currently the ": " between Char and message is hardcoded.
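Making that separator configurable could be as simple as the sketch below; the parameter name is hypothetical, not the extension's actual setting:

```python
# Sketch of making the hardcoded ": " separator configurable.
# The `separator` parameter name is hypothetical.
def format_turn(name: str, message: str, separator: str = ": ") -> str:
    return f"{name}{separator}{message}"

print(format_turn("Char", "hello"))
```

Templates that use markers instead of `Name: ` prefixes (like Alpaca) would then just set the separator and name fields accordingly.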
