
chatglm-lora

Supervised Fine-Tuning for ChatGLM

Tutorial

  • Accelerate: run accelerate config and set up the YAML for your machine, or edit accelerate_config.yaml directly; for single-node multi-GPU, only num_processes needs to be changed to the number of GPUs
  • Replace data_example.jsonl with your own data: each line is one conversation session, multiple objects in the array mean a multi-turn dialogue, a single object means a single turn
  • If GPU memory is insufficient, consider ZeRO-3 (edit ds_config.json); see the DeepSpeed documentation for details
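For orientation, the sketch below shows one common way to wrap ChatGLM with LoRA adapters via peft before training. The THUDM/chatglm-6b checkpoint, the query_key_value target module, and all hyperparameters here are illustrative assumptions, not values taken from chatglm_lora.py.

# Minimal LoRA setup sketch -- checkpoint name, target module and
# hyperparameters are assumptions; the real values live in chatglm_lora.py.
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm-6b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half()
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # illustrative LoRA rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # ChatGLM attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only LoRA weights are trainable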

Run

If you generated the default YAML via accelerate config, run

accelerate launch chatglm_lora.py

If you configured the node information by editing accelerate_config.yaml instead, run

accelerate launch --config_file accelerate_config.yaml chatglm_lora.py

Example

The data format is shown in data_example.jsonl, one session per line:

[{"q": "你好", "a": "你好,我是XXX"}, {"q": "你叫什么名字", "a": "我叫XXX"}]
[{"q": "你好", "a": "你好,我是XXX"}, {"q": "你是谁", "a": "我是XXX"}]
[{"q": "你好", "a": "你好,我是XXX"}, {"q": "你叫什么名字", "a": "我叫XXX"}]
[{"q": "你好", "a": "你好,我是XXX"}, {"q": "你是谁", "a": "我是XXX"}]

Screenshots: ChatGLM before LoRA vs. ChatGLM after LoRA.
