Fine-tuning ChatGLM on public medical task datasets to unify multiple medical NLP tasks.
Updated Jul 18, 2023
ChatGLM-6B-finetuning
A platform for collecting AI tools.
Fully automatic installation of ChatGLM-webui.
LangChain+ChatGLM_6B
Source code for ILP, an intelligent career-guidance and learning platform for college students based on LLMs and regression analysis (Chinese title: 我的青春不迷茫). The project was entered in the 16th Chinese Collegiate Computing Competition (CCCC, 4C) in 2023.
An LLM-powered question-answering project for Genshin Impact in-game books.
👽 Large-model-based knowledge-base Q&A.
An LLM fine-tuning project that includes QLoRA fine-tuning of ChatGLM and LLaMA.
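As a hedged illustration of what QLoRA fine-tuning of ChatGLM typically looks like with the Hugging Face stack (the model id, target module name, and every hyperparameter below are assumptions for the sketch, not taken from this project): the base model is loaded in 4-bit NF4 quantization and wrapped with trainable LoRA adapters.

```python
# Sketch only: assumes transformers + peft + bitsandbytes; hyperparameters are illustrative.
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: frozen 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True, quantization_config=bnb
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],     # ChatGLM's fused attention projection (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)         # only the small LoRA adapters are trainable
```

Training then proceeds with an ordinary causal-LM loss; only the adapter weights (a fraction of a percent of the model) receive gradients.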
A single codebase for instruction fine-tuning of large models.
Low-cost role-play with instruction-following-like behavior, based on ChatGLM-6B.
Deploy a ChatGLM API on Kaggle that uses the same calling convention as the ChatGPT API.
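Because the endpoint mirrors the ChatGPT calling convention, a client can talk to it with a ChatGPT-style chat-completions payload. A minimal stdlib-only sketch, where the endpoint URL is a placeholder (substitute whatever public URL your Kaggle deployment exposes) and the response shape is assumed to follow the ChatGPT `choices[0].message.content` layout:

```python
import json
import urllib.request

# Hypothetical endpoint URL — replace with the URL your Kaggle notebook exposes.
API_URL = "https://example-kaggle-tunnel.example.com/v1/chat/completions"

def build_chat_payload(prompt, history=None):
    """Build a ChatGPT-style chat-completions payload for the ChatGLM backend."""
    messages = list(history or [])
    messages.append({"role": "user", "content": prompt})
    return {"model": "chatglm-6b", "messages": messages}

def chat(prompt):
    """POST the payload and unwrap the reply from the ChatGPT-style response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # ChatGPT-compatible servers nest the reply under choices[0].message.content.
    return body["choices"][0]["message"]["content"]
```

Existing ChatGPT client code can usually be pointed at such a server by changing only the base URL.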
The official implementation of InstructERC.
A cross-model scheme combining multi-LoRA weight ensembling and switching with zero-finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is a ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient; its goal is low-energy, wide deployment of such language models, and ultimately "emergent intelligence" on a small-model base, approaching the human-friendly behavior of ChatGPT, GPT-4, and ChatRWKV at minimal compute cost. It currently handles summarization, question generation, Q&A, abstracting, rewriting, commenting, role-play, and other tasks.
LocalAGI: locally run AGI powered by LLaMA, ChatGLM, and more.
A full pipeline for fine-tuning the ChatGLM LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the ChatGLM architecture. Essentially ChatGPT, but built on ChatGLM.
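The LoRA idea these fine-tuning pipelines rely on can be sketched in a few lines (a minimal NumPy illustration, not code from any of these projects): the frozen pretrained weight W is augmented with a trainable low-rank update scaled by alpha/r, and the B factor starts at zero so training begins exactly at the pretrained behavior.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA adapter: y = x @ W + (alpha/r) * x @ A @ B,
    where W (d_in x d_out) is frozen and only A (d_in x r) and B (r x d_out) train."""

    def __init__(self, W, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        d_in, d_out = W.shape
        self.W = W                                     # frozen pretrained weight
        self.A = rng.normal(0.0, 0.01, size=(d_in, r)) # small random init
        self.B = np.zeros((r, d_out))                  # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        # Low-rank path adds only 2 * r * (d_in + d_out) trainable parameters.
        return x @ self.W + self.scale * (x @ self.A @ self.B)
```

Because only A and B receive gradients, optimizer state and checkpoints shrink from billions of parameters to a few million, which is what makes consumer-hardware fine-tuning feasible.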