mirror of
https://github.com/0xSojalSec/airllm.git
synced 2026-03-07 22:33:47 +00:00
add multi gpu parts in readme
@@ -56,7 +56,7 @@ The Anima model is based on QLoRA's open-source [33B guanaco](https://huggingface.co/timdettmers/

#### How to train

-The Anima 33B model can be reproduced with the following steps:
+The Anima 33B model can be reproduced with the following steps (tested and runnable on a single 80GB H100 or on 2x 40GB A100s):

```bash
# 1. install dependencies
@@ -66,6 +66,9 @@ cd training
./run_Amina_training.sh
```
+
+#### Multi-GPU training
+Since Hugging Face Accelerate is used, multi-GPU training is supported natively.
+We tested with 2x 40GB A100s; the script runs directly.

## 📊 Evaluation
@@ -57,7 +57,7 @@ For cost considerations, we mostly chose not to do too much grid search, assumin

#### How to reproduce our training

-Anima 33B model could be reproduced fully with the following steps:
+Anima 33B model could be reproduced fully with the following steps (tested on a single-GPU environment with 1x 80GB H100, or multi-GPU with 2x A100 40GB):

```bash
# 1. install dependencies
@@ -66,7 +66,10 @@ pip install -r requirements.txt
cd training
./run_Amina_training.sh
```
+#### Multi-GPU training
+Because of Hugging Face Accelerate, multi-GPU training is supported out of the box.
+
+We tested with 2x A100 40GB; the above script works directly and seamlessly.

## 📊 Evaluations
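As a sketch of what "supported out of the box" with Hugging Face Accelerate typically means in practice: the same training entrypoint can be launched across several GPUs via the `accelerate` CLI. The commands below are an illustrative command fragment, not taken from the repository; the `training/train.py` entrypoint name is a hypothetical stand-in for whatever `run_Amina_training.sh` invokes, while `accelerate config` and the `accelerate launch` flags shown are standard Accelerate CLI options.

```shell
# Hypothetical sketch of multi-GPU launching with Hugging Face Accelerate.
# Assumes the repo's script wraps a Python entrypoint (named train.py here
# only for illustration).

# One-time interactive setup: answer the prompts and choose multi-GPU.
accelerate config

# Launch the same training code on 2 GPUs (e.g. 2x A100 40GB);
# Accelerate spawns one process per GPU and handles device placement.
accelerate launch --num_processes 2 --mixed_precision bf16 training/train.py
```

No change to the training code itself is needed for this, which is why the commit can claim the existing script "works directly" on two GPUs.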