From 084b006cfe7c10b743830b5126e1caca9fcfab2f Mon Sep 17 00:00:00 2001
From: zaypen
Date: Thu, 8 Jun 2023 02:34:50 +0800
Subject: [PATCH] Update LLaMA-model.md (#2460)

Better approach to converting the LLaMA model
---
 docs/LLaMA-model.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/LLaMA-model.md b/docs/LLaMA-model.md
index 338d458b..6706b16d 100644
--- a/docs/LLaMA-model.md
+++ b/docs/LLaMA-model.md
@@ -30,7 +30,15 @@ pip install protobuf==3.20.1
 
 2. Use the script below to convert the model in `.pth` format that you, a fellow academic, downloaded using Meta's official link:
 
-### [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)
+### Convert LLaMA to HuggingFace format
+
+If you already have `transformers` installed, run its bundled conversion module directly:
+
+```
+python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
+```
+
+Otherwise, download the script [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) and run it:
 
 ```
 python convert_llama_weights_to_hf.py --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
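The two invocations in the patch differ only in how the conversion script is located: as a module inside an installed `transformers`, or as a standalone download. A minimal Python sketch of that choice (`build_convert_command` is a hypothetical helper, not part of `transformers`; it only constructs the command line and does not run the conversion):

```python
import importlib.util
import sys

def build_convert_command(input_dir: str, model_size: str, output_dir: str) -> list:
    """Return the argv list for converting LLaMA weights to HF format."""
    if importlib.util.find_spec("transformers") is not None:
        # transformers is installed: invoke the bundled conversion module
        base = [sys.executable, "-m",
                "transformers.models.llama.convert_llama_weights_to_hf"]
    else:
        # fall back to the standalone script downloaded from the repo
        base = [sys.executable, "convert_llama_weights_to_hf.py"]
    return base + ["--input_dir", input_dir,
                   "--model_size", model_size,
                   "--output_dir", output_dir]

cmd = build_convert_command("/path/to/LLaMA", "7B", "/tmp/outputs/llama-7b")
```

The returned list could be passed to `subprocess.run(cmd)`; either form produces the same HuggingFace-format output directory.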