Description
Please describe your question
I am following the usage example at https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/text_classification/multi_class#readme. For multi-process training I launch with:
python3 -m paddle.distributed.launch --nproc_per_node=24 train.py \
--do_train \
--do_eval \
--do_export \
--model_name_or_path ernie-3.0-tiny-medium-v2-zh \
--output_dir checkpoint \
--device cpu \
--num_train_epochs 100 \
--early_stopping True \
--early_stopping_patience 5 \
--learning_rate 3e-5 \
--max_length 128 \
--per_device_eval_batch_size 32 \
--per_device_train_batch_size 32 \
--metric_for_best_model accuracy \
--load_best_model_at_end \
--logging_steps 5 \
--evaluation_strategy epoch \
--save_strategy epoch \
--save_total_limit 3
With multi-process parallelism enabled, loading the best model at the end of training fails with the error below. Am I using this incorrectly? What is the correct command to enable multi-process or multi-threaded computation in CPU mode? I could not find it in the official documentation, there is no explicit option among the parameters, and passing the enable_auto_parallel parameter raises an error. See #8428.
[2024-05-13 12:49:22,098] [ INFO] - [timelog] checkpoint saving time: 0.00s (2024-05-13 12:49:22)
[2024-05-13 12:55:42,547] [ INFO] - ***** Running Evaluation *****
[2024-05-13 12:55:42,548] [ INFO] - Num examples = 1955
[2024-05-13 12:55:42,548] [ INFO] - Total prediction steps = 3
[2024-05-13 12:55:42,548] [ INFO] - Pre device batch size = 32
[2024-05-13 12:55:42,548] [ INFO] - Total Batch size = 768
[2024-05-13 12:55:56,791] [ INFO] - [timelog] checkpoint saving time: 0.00s (2024-05-13 12:55:56)
[2024-05-13 12:55:56,791] [ INFO] -
Training completed.
[2024-05-13 12:55:56,805] [ INFO] - Loading best model from checkpoint/checkpoint-170 (score: 0.8204603580562659).
[2024-05-13 12:55:57,120] [ INFO] - set state-dict :([], [])
Traceback (most recent call last):
File "train.py", line 230, in
main()
File "train.py", line 185, in main
shutil.rmtree(checkpoint_path)
File "/usr/lib/python3.8/shutil.py", line 715, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib/python3.8/shutil.py", line 672, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/usr/lib/python3.8/shutil.py", line 670, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
FileNotFoundError: [Errno 2] No such file or directory: 'tokenizer_config.json'
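
The failure happens in the checkpoint cleanup at train.py line 185: every process started by paddle.distributed.launch runs main() and calls shutil.rmtree on the same checkpoint directory, so whichever process gets there second finds files already deleted and raises FileNotFoundError. Below is a minimal sketch of a more tolerant cleanup, not an official fix; it assumes the launched processes can be told apart with paddle.distributed.get_rank(), and the helper name remove_checkpoint_safely is made up for illustration.

```python
import shutil

import paddle.distributed as dist


def remove_checkpoint_safely(checkpoint_path: str) -> None:
    # Let only the first process delete old checkpoints; the others skip.
    if dist.get_rank() != 0:
        return
    # ignore_errors=True swallows the FileNotFoundError raised when the
    # directory contents have already been removed by another process.
    shutil.rmtree(checkpoint_path, ignore_errors=True)
```

The idea is simply to combine a rank-0 guard (one process owns the cleanup) with ignore_errors=True (leftover races do not crash the run); it does not address the underlying question of how CPU multi-process training is meant to be launched.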