From ee68ec9079492a72a35c33d5000da432ce94af71 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?L=C5=91rinc=20Pap?= <1841944+paplorinc@users.noreply.github.com>
Date: Thu, 27 Apr 2023 17:03:02 +0200
Subject: [PATCH] Update folder produced by download-model (#1601)

---
 docs/Using-LoRAs.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/Using-LoRAs.md b/docs/Using-LoRAs.md
index 0a679c0f..fafd6cde 100644
--- a/docs/Using-LoRAs.md
+++ b/docs/Using-LoRAs.md
@@ -11,9 +11,9 @@ python download-model.py tloen/alpaca-lora-7b
 2. Load the LoRA. 16-bit, 8-bit, and CPU modes work:
 
 ```
-python server.py --model llama-7b-hf --lora alpaca-lora-7b
-python server.py --model llama-7b-hf --lora alpaca-lora-7b --load-in-8bit
-python server.py --model llama-7b-hf --lora alpaca-lora-7b --cpu
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit
+python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu
 ```
 
 * For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).