Thanks for the question; this is something we also considered carefully. Taking the base models as examples: LLaMA is 7B and ChatGLM is 6B. Their sizes are close, but their feature spaces are entirely different. A LoRA is therefore a relatively independent fine-tuned component, not the large model's weights themselves. The pre-trained model and its weights live in the bin or pth files, and those pre-trained models run normally on their own, even without any LoRA attached. We are uploading the 13B LoRA here.
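To illustrate the point, here is a minimal sketch of loading a base model on its own and then optionally attaching a LoRA adapter on top. It assumes the Hugging Face `transformers` and `peft` libraries; the two paths are placeholders, not this repo's actual checkpoints.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/base-model"    # hypothetical: pre-trained weights (bin/pth)
lora_path = "path/to/lora-adapter"  # hypothetical: the separate LoRA component

# The base model loads and runs by itself; no LoRA is required.
tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(base_path)

# Optionally attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, lora_path)
```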
@StarRing2022 Wait, I'm not sure I follow the logic in your answer.