Fix [UI/UX Page] Known issue #3624
base: main
Conversation
Rebase without commit history (magic) done.
- fix(chat.ts): fix known issue where summarize is not using the currently selected model
- feat(chat.ts): add support for the user-selected model in the getSummarizeModel function
- feat(builder.sh): add script to modify the tauri.conf.json file and build the Tauri application
// fix known issue where summarize was not using the currently selected model
function getSummarizeModel(currentModel: string, modelConfig: ModelConfig) {
  // should depend on the user's selection (modelConfig.model holds the user-selected model)
  return currentModel ? modelConfig.model : currentModel;
}
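For context, here is a self-contained sketch of how the patched logic behaves. The `ModelConfig` shape below is an assumption (the real type has more fields); the point is only that `modelConfig.model` carries the user-selected model:

```typescript
// Assumed minimal shape; the project's real ModelConfig has more fields.
interface ModelConfig {
  model: string;
}

// Mirrors the patched logic: when a model is currently in use, summarize
// with the user-selected model from the config instead of a hard-coded one.
function getSummarizeModel(currentModel: string, modelConfig: ModelConfig): string {
  return currentModel ? modelConfig.model : currentModel;
}

// With a model in use, the user's configured model is chosen:
getSummarizeModel("gpt-4", { model: "gpt-4" }); // → "gpt-4"
```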
The initial intention was to use GPT-3.5 to save on token usage, as directly using the model currently in use by the user might significantly increase their usage costs.
It's not about usage costs. In a scenario where you are using GPT-4 with its newer 2023 knowledge, but an older model is used just to summarize the conversation, the summary is worse; when "send memory" is enabled, the next turn receives a confusing summary that is disconnected from the actual conversation.
Be smart instead of only thinking about cost.
There is no fundamental difference between GPT-3.5 and GPT-4 when generating a few words for a summary. Considering that some users have already raised concerns about costs, I believe it is still necessary to retain GPT-3.5 for generating session titles.
It's not about session titles.
Why focus on session titles, which matter little since you can create masks in chats?
Here is the note you need to know:
In the current, unchanged code, the session's chat memory (including those session titles) is forced through the older GPT-3.5 model for summarization, and that affects the conversation.
Another note on why it's affected:
"In a scenario where you are using GPT-4 with its newer 2023 knowledge, but an older model is used just to summarize the conversation, the summary is worse; when send memory is enabled, the next turn receives a confusing summary that is disconnected from the actual conversation."
Other:
Another issue concerns the gpt-4-preview model, which has a 128k-token context window, compared to an older model that only has a 16k-token context window (for example). So you can imagine how this scenario could lead to problems and inefficiencies.
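That context-window mismatch can be made concrete with a small check. The token counts below are illustrative, not measured, and the model names are only examples:

```typescript
// Illustrative context-window sizes in tokens; real limits vary by model version.
const CONTEXT_WINDOW: Record<string, number> = {
  "gpt-4-preview": 128_000,
  "gpt-3.5-turbo": 16_000,
};

// If the chat model's window is far larger than the summarizer's, a long
// conversation can exceed what the summarizer is able to read at all.
function summarizerCanReadFullHistory(historyTokens: number, summarizeModel: string): boolean {
  return historyTokens <= (CONTEXT_WINDOW[summarizeModel] ?? 0);
}

summarizerCanReadFullHistory(40_000, "gpt-3.5-turbo"); // → false: history would be truncated
```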
I'm referring to the cost rather than the number of tokens; the price of GPT-4 is significantly higher than that of GPT-3.5.
The recommended way of action is to continue using GPT-3.5 Turbo when it's available, otherwise use the model currently enabled by the user.
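That suggestion (prefer GPT-3.5 Turbo when it is available, otherwise fall back to the user's current model) could be sketched roughly as follows; the `availableModels` list and the helper name are illustrative, not the project's actual API:

```typescript
const SUMMARIZE_MODEL = "gpt-3.5-turbo";

// Hypothetical helper: prefer the cheap summarize model when the deployment
// actually offers it; otherwise fall back to the user's current model.
function pickSummarizeModel(currentModel: string, availableModels: string[]): string {
  return availableModels.includes(SUMMARIZE_MODEL) ? SUMMARIZE_MODEL : currentModel;
}

pickSummarizeModel("gpt-4", ["gpt-3.5-turbo", "gpt-4"]); // → "gpt-3.5-turbo"
pickSummarizeModel("gpt-4", ["gpt-4"]);                  // → "gpt-4"
```

This keeps the original cost-saving behavior while avoiding a hard failure when `gpt-3.5-turbo` has been removed from the available models (as in the related CUSTOM_MODELS issue).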
If you think that is the recommended approach, you should close this issue.
For me, to be honest, it's not recommended.
Related issue: CUSTOM_MODELS removed gpt-3.5-turbo #3621