Mirror of https://github.com/0xSojalSec/airllm.git (synced 2026-03-07 22:33:47 +00:00)

Commit: update readme
README.md hunk (badge block near the top of the README):

```
@@ -21,6 +21,8 @@ This is the first open source 33B Chinese LLM, we also support DPO alignment tra
[](https://static.aicompose.cn/static/wecom_barcode.png?t=1671918938)
[](https://huggingface.co/lyogavin/Anima33B-merged)
[](https://huggingface.co/lyogavin/Anima-7B-100K)
[](https://img.shields.io/discord/1175437549783760896)
</div>

## 🔄 更新 Updates
```
setup.py hunk (version bump):

```diff
@@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
 
 setuptools.setup(
     name="airllm",
-    version="0.9.3",
+    version="0.9.4",
     author="Gavin Li",
     author_email="gavinli@animaai.cloud",
     description="AirLLM allows single 4GB GPU card to run 70B large language models without quantization, distillation or pruning.",
```
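The bump above only changes the package metadata. Once a release with this version is installed, it can be verified from the standard library alone; a minimal sketch (the helper name `installed_airllm_version` is ours, not part of airllm):

```python
from importlib.metadata import version, PackageNotFoundError


def installed_airllm_version(package: str = "airllm") -> str:
    """Return the installed version string for `package`,
    or 'not installed' if the distribution is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"


if __name__ == "__main__":
    # Prints e.g. "0.9.4" when airllm is installed, else "not installed".
    print(installed_airllm_version())
```

`importlib.metadata.version` reads the same metadata that `setuptools.setup(version=...)` writes into the distribution, so it reflects whichever release is actually installed.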