LoRA fine-tuning of Qwen-7B-Chat on Ascend 910 with the MindFormers + MindSpore framework

Posted by 落魄统计佬 on 2024-09-04

This post walks through 8-NPU LoRA fine-tuning of Qwen-7B-Chat on Ascend 910 using the MindFormers + MindSpore framework.

Main reference: https://gitee.com/mindspore/mindformers/tree/r1.0/research/qwen

STEP 1: Environment preparation

I use the official MindFormers Docker image for fine-tuning. Pull it with:

docker pull swr.cn-central-221.ovaijisuan.com/mindformers/mindformers1.0.2_mindspore2.2.13:20240416

A reference command for starting the container:

#!/bin/bash
CONTAINER_NAME=mindformers-r1.0
# host path of the downloaded Qwen-7B-Chat weights and the path they are mounted to inside the container
CHECKPOINT_PATH=/var/images/llm_setup/model/qwen/Qwen-7B-Chat
DOCKER_CHECKPOINT_PATH=/data/qwen/models/Qwen-7B-Chat
IMAGE_NAME=swr.cn-central-221.ovaijisuan.com/mindformers/mindformers1.0.2_mindspore2.2.13:20240416

# mount all 8 NPUs (davinci0-7) and the Ascend management devices
docker run -it -u root \
--device=/dev/davinci0 \
--device=/dev/davinci1 \
--device=/dev/davinci2 \
--device=/dev/davinci3 \
--device=/dev/davinci4 \
--device=/dev/davinci5 \
--device=/dev/davinci6 \
--device=/dev/davinci7 \
--device=/dev/davinci_manager \
--device=/dev/devmm_svm \
--device=/dev/hisi_hdc \
-v /etc/localtime:/etc/localtime \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /var/log/npu/:/usr/slog \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v ${CHECKPOINT_PATH}:${DOCKER_CHECKPOINT_PATH} \
--name ${CONTAINER_NAME} \
${IMAGE_NAME} \
/bin/bash

Environment verification

Run the following command to verify the environment:

python -c "import mindspore;mindspore.set_context(device_target='Ascend');mindspore.run_check()"

If you see output like the following, the environment is fine:

MindSpore version: <version number>
The result of multiplication calculation is correct, MindSpore has been installed on platform [Ascend] successfully!

Download the fine-tuning code

Most of the fine-tuning code comes from the official MindFormers repository. Inside the container, clone the code and enter the directory:

git clone -b r1.0 https://gitee.com/mindspore/mindformers.git
cd mindformers

Generating the RANK_TABLE_FILE

Before fine-tuning, prepare the RANK_TABLE_FILE required for multi-card training. If you are working inside the container, exit it and generate the file on the host:

# if the mindformers repository has not been cloned outside the container, download the needed script with wget
wget https://gitee.com/mindspore/models/raw/master/utils/hccl_tools/hccl_tools.py
# generate the rank_table_file
python hccl_tools.py --device_num "[0,8)"

Copy the generated hccl_8p_01234567_xx.xx.xx.xx.json file into the container, then proceed with the fine-tuning below. An optional sanity check of the generated file is sketched next.
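The snippet below loads the generated rank table and prints its device entries. The key names (server_list, device, rank_id, device_ip) follow the usual HCCL rank-table layout and are assumptions here, so adjust them if your file differs:

import json

# example file name; use the file hccl_tools.py actually generated (xx.xx.xx.xx is your host IP)
with open("hccl_8p_01234567_xx.xx.xx.xx.json", "r", encoding="utf-8") as f:
    rank_table = json.load(f)

# collect every device entry across all servers in the table
devices = [d for server in rank_table.get("server_list", []) for d in server.get("device", [])]
print("status:", rank_table.get("status"), "device count:", len(devices))
for dev in devices:
    print(dev.get("rank_id"), dev.get("device_id"), dev.get("device_ip"))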

STEP 2: Download the model

Because we use the MindFormers framework, the weights need to be converted. Converting them inside this image runs into environment conflicts (the required packages cannot be installed), so download the already-converted weights and the vocabulary file directly from the official site:

# weight ckpt, about 29 GB
wget https://ascend-repo-modelzoo.obs.cn-east-2.myhuaweicloud.com/MindFormers/qwen/qwen_7b_base.ckpt
# vocabulary file
wget https://ascend-repo-modelzoo.obs.cn-east-2.myhuaweicloud.com/MindFormers/qwen/qwen.tiktoken

STEP 3: Data preparation

To fine-tune the Qwen model, first convert the data into the following JSON format (a sketch of one way to convert Alpaca-style data into this format follows the example):

  {
    "id": "1",
    "conversations": [
      {
        "from": "user",
        "value": "Give three tips for staying healthy."
      },
      {
        "from": "assistant",
        "value": "1.Eat a balanced diet and make sure to include plenty of fruits and vegetables. \n2. Exercise regularly to keep your body active and strong. \n3. Get enough sleep and maintain a consistent sleep schedule."
      }
    ]
  },
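If your raw data is in the common Alpaca format (records with instruction, input and output fields), a minimal conversion sketch is shown below; the file names and field names are assumptions, so adjust them to your own dataset:

import json

# example paths; adjust to your data
src_path = "alpaca_data.json"                 # list of {"instruction", "input", "output"} records
dst_path = "alpaca-data-conversation.json"    # output in the conversation format shown above

with open(src_path, "r", encoding="utf-8") as f:
    records = json.load(f)

converted = []
for idx, rec in enumerate(records, start=1):
    # fold the optional "input" field into the user turn
    user_text = rec["instruction"] + ("\n" + rec["input"] if rec.get("input") else "")
    converted.append({
        "id": str(idx),
        "conversations": [
            {"from": "user", "value": user_text},
            {"from": "assistant", "value": rec["output"]},
        ],
    })

with open(dst_path, "w", encoding="utf-8") as f:
    json.dump(converted, f, ensure_ascii=False, indent=2)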

Then convert it into the MindRecord format expected by MindFormers, using the following script:

# --input_glob:  source data path (already converted to the format above)
# --model_file:  vocabulary (qwen.tiktoken) path
# --output_file: output path of the MindRecord data
python research/qwen/qwen_preprocess.py \
--input_glob /path/alpaca-data-conversation.json \
--model_file /path/qwen.tiktoken \
--seq_length 2048 \
--output_file /path/alpaca.mindrecord

Result:

[screenshot]

STEP 4: Start fine-tuning

Note: before starting fine-tuning, run the RANK_TABLE_FILE generation from STEP 1 and make sure the hccl_8p_01234567_xx.xx.xx.xx.json file is available inside the container.

Modify the yaml file and launch the fine-tuning script

Run the following command to start fine-tuning:

cd mindformers/research
# run_singlenode.sh arguments: the training command, the RANK_TABLE_FILE path, the device range, the device count
bash run_singlenode.sh "python qwen/run_qwen.py \
--config qwen/run_qwen_7b_lora.yaml \
--load_checkpoint /data/qwen/models/Qwen-7B-Chat \
--use_parallel True \
--run_mode finetune \
--auto_trans_ckpt True \
--train_data /path/alpaca.mindrecord" \
/data/hccl_8p_01234567_10.17.2.76.json [0,8] 8

Note the following points:

  • qwen/run_qwen_7b_lora.yaml is the parameter file to configure; make sure the following entries are set correctly:

    load_checkpoint: 'model_dir'    # use the complete weights, stored as `model_dir/rank_0/xxx.ckpt`
    
    model_config:
       seq_length: 2048 # must match the seq_length used when converting the dataset
    
    train_dataset: &train_dataset
      data_loader:
        type: MindDataset
        dataset_dir: "/path/alpaca.mindrecord"  # path to the training dataset
        shuffle: True
    
    pet_config:
       pet_type: lora
       lora_rank: 64
       lora_alpha: 16
       lora_dropout: 0.05
       target_modules: '.*wq|.*wk|.*wv|.*wo|.*w1|.*w2|.*w3'
       freeze_exclude: ["*wte*", "*lm_head*"] # remove this entry when fine-tuning from the chat weights
    

Fine-tuning succeeded:

[screenshot]

Q&A

  • Error: ValueError x.shape and y.shape need to broadcast. Full error message:
...
[INFO] 2024-07-16 13:52:49,028 [mindformers/trainer/base_trainer.py:682] training_process: .........Build Running Wrapper From Config For Train..........
[INFO] 2024-07-16 13:52:49,028 [mindformers/trainer/base_trainer.py:500] create_model_wrapper: .........Build Model Wrapper for Train From Config..........
[INFO] 2024-07-16 13:52:49,040 [mindformers/trainer/base_trainer.py:689] training_process: .........Build Callbacks For Train..........
[INFO] 2024-07-16 13:52:49,042 [mindformers/core/callback/callback.py:530] __init__: Integrated_save is changed to False when using auto_parallel.
[INFO] 2024-07-16 13:52:49,043 [mindformers/trainer/base_trainer.py:724] training_process: .........Starting Init Train Model..........
[INFO] 2024-07-16 13:52:49,043 [mindformers/trainer/utils.py:321] transform_and_load_checkpoint: .........Building model.........
[ERROR] 2024-07-16 14:16:46,150 [mindformers/tools/cloud_adapter/cloud_monitor.py:43] wrapper: Traceback (most recent call last):
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
    result = run_func(*args, **kwargs)
  File "/data/mindformers/research/qwen/run_qwen.py", line 137, in main
    trainer.finetune(finetune_checkpoint=ckpt, auto_trans_ckpt=auto_trans_ckpt)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1313, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/trainer.py", line 485, in finetune
    self.trainer.train(
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 97, in train
    self.training_process(
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/base_trainer.py", line 739, in training_process
    transform_and_load_checkpoint(config, model, network, dataset)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/utils.py", line 322, in transform_and_load_checkpoint
    build_model(config, model, dataset, do_eval=do_eval, do_predict=do_predict)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/utils.py", line 446, in build_model
    model.build(train_dataset=dataset, epoch=config.runner_config.epochs,
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 1274, in build
    self._init(train_dataset, valid_dataset, sink_size, epoch)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 529, in _init
    train_network.compile(*inputs)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
    _cell_graph_executor.compile(self, phase=self.phase,
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1547, in compile
    result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/ops/primitive.py", line 647, in __infer__
    out[track] = fn(*(x[track] for x in args))
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/ops/operations/math_ops.py", line 80, in infer_shape
    return get_broadcast_shape(x_shape, y_shape, self.name)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/ops/_utils/utils.py", line 70, in get_broadcast_shape
    raise ValueError(f"For '{prim_name}', {arg_name1}.shape and {arg_name2}.shape need to "
ValueError: For 'Mul', x.shape and y.shape need to broadcast. The value of x.shape[-1] or y.shape[-1] must be 1 or -1 when they are not the same, but got x.shape = [8, 1, 1024] and y.shape = [1, 2048, 2048].

Solution: make sure model_config.seq_length in the fine-tuning yaml matches the seq_length used when converting the data to MindRecord in STEP 3. In the error above, one was set to 1024 and the other to 2048. A quick way to inspect the sequence length stored in the MindRecord file is sketched below.
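A minimal sketch that prints the columns of the first MindRecord sample and their lengths so they can be compared with model_config.seq_length; the exact column names depend on the preprocessing script, and the file path is just an example:

from mindspore.mindrecord import FileReader

# point this at the file produced by qwen_preprocess.py (example path)
reader = FileReader("alpaca.mindrecord")
for sample in reader.get_next():
    # print every column and, where applicable, its length
    for name, value in sample.items():
        print(name, len(value) if hasattr(value, "__len__") else value)
    break  # the first sample is enough for a length check
reader.close()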

  • dst_strategy_path = local_strategy_paths[0] raises IndexError: list index out of range


...
[INFO] 2024-07-16 10:52:20,510 [mindformers/trainer/base_trainer.py:682] training_process: .........Build Running Wrapper From Config For Train..........
[INFO] 2024-07-16 10:52:20,510 [mindformers/trainer/base_trainer.py:500] create_model_wrapper: .........Build Model Wrapper for Train From Config..........
[INFO] 2024-07-16 10:52:20,523 [mindformers/trainer/base_trainer.py:689] training_process: .........Build Callbacks For Train..........
[INFO] 2024-07-16 10:52:20,525 [mindformers/trainer/base_trainer.py:724] training_process: .........Starting Init Train Model..........
[INFO] 2024-07-16 10:52:20,527 [mindformers/trainer/utils.py:334] transform_and_load_checkpoint: /data/qwen_ft/output is_share_disk: False
[INFO] 2024-07-16 10:52:20,527 [mindformers/trainer/utils.py:335] transform_and_load_checkpoint: world_size: 8
[INFO] 2024-07-16 10:52:20,528 [mindformers/trainer/utils.py:516] get_src_and_dst_strategy: .........Collecting strategy.........
[ERROR] 2024-07-16 10:52:20,530 [mindformers/tools/cloud_adapter/cloud_monitor.py:43] wrapper: Traceback (most recent call last):
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
    result = run_func(*args, **kwargs)
  File "/data/qwen_ft/qwen/run_qwen.py", line 137, in main
    trainer.finetune(finetune_checkpoint=ckpt, auto_trans_ckpt=auto_trans_ckpt)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1313, in wrapper
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/trainer.py", line 485, in finetune
    self.trainer.train(
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 97, in train
    self.training_process(
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/base_trainer.py", line 739, in training_process
    transform_and_load_checkpoint(config, model, network, dataset)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/utils.py", line 338, in transform_and_load_checkpoint
    src_ckpt_strategy, dst_ckpt_strategy = get_src_and_dst_strategy(config)
  File "/root/miniconda3/envs/mindspore2.2.13_py39/lib/python3.9/site-packages/mindformers/trainer/utils.py", line 522, in get_src_and_dst_strategy
    dst_strategy_path = local_strategy_paths[0]
IndexError: list index out of range

This happens when we use the complete weights (the qwen_7b_base.ckpt downloaded in STEP 2) and set auto_trans_ckpt=True in the fine-tuning yaml: the script then automatically converts the complete weights into distributed weights for the 8 training cards and generates the 8-card strategy files. If the weight file has not been placed at the expected location, or the weight file itself is corrupted, the split does not happen as expected and no strategy files are produced, so local_strategy_paths contains no file of the expected format (or is empty) and this error is raised. Possible causes and fixes:

  1. Check that the weights are stored as model_dir/rank_0/xxx.ckpt; an incorrect layout can prevent the strategy files from being generated (a quick layout check is sketched after this list).
  2. Check whether the weight file is corrupted; if in doubt, re-download it as described in STEP 2.
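As a quick sanity check for point 1, here is a minimal sketch; the model_dir path below is an example, use whatever load_checkpoint points to in your yaml:

import os

# example path; use the directory configured as load_checkpoint
model_dir = "/data/qwen/models/Qwen-7B-Chat"

rank_dir = os.path.join(model_dir, "rank_0")
ckpts = [f for f in os.listdir(rank_dir) if f.endswith(".ckpt")] if os.path.isdir(rank_dir) else []
if not ckpts:
    print(f"No .ckpt found under {rank_dir}; auto_trans_ckpt expects model_dir/rank_0/xxx.ckpt")
else:
    for name in ckpts:
        size_gib = os.path.getsize(os.path.join(rank_dir, name)) / 1024 ** 3
        # a truncated download is usually much smaller than the ~29 GB original
        print(f"{name}: {size_gib:.2f} GiB")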
