Update Using-LoRAs.md

This commit is contained in:
oobabooga 2023-04-22 02:53:01 -03:00 committed by GitHub
parent 6d4f131d0a
commit 9508f207ba


@@ -16,7 +16,7 @@ python server.py --model llama-7b-hf --lora alpaca-lora-7b --load-in-8bit
 python server.py --model llama-7b-hf --lora alpaca-lora-7b --cpu
 ```
-* For using LoRAs in 4-bit mode, follow these special instructions: https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-(4-bit-mode)#using-loras-in-4-bit-mode
+* For using LoRAs in 4-bit mode, follow [these special instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode).
 * Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface.
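For context on what the `--lora` flag loads: a LoRA adapter stores a low-rank update to a weight matrix, which at merge time becomes `W' = W + (alpha / r) * B @ A`. The sketch below illustrates that arithmetic in pure Python; the helper names (`apply_lora`, `matmul`) and the tiny shapes are illustrative only and do not come from the text-generation-webui codebase.

```python
# Minimal sketch of the low-rank update a LoRA applies: W' = W + (alpha/r) * B @ A.
# Matrices are plain lists of rows; shapes are illustrative, not from the repo.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def apply_lora(W, A, B, alpha=16, r=2):
    """Return the merged weight W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)  # (out_dim x r) @ (r x in_dim) -> same shape as W
    return [[w + scale * d for w, d in zip(w_row, d_row)] for w_row, d_row in zip(W, BA)]

# 2x2 base weight with a rank-1 adapter: B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
print(apply_lora(W, A, B, alpha=2, r=1))  # -> [[2.0, 1.0], [2.0, 3.0]]
```

Because `r` is small relative to the weight dimensions, the adapter file holding `A` and `B` stays tiny compared to the base model, which is why a 7B model can be combined with a LoRA checkpoint of only a few hundred megabytes.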