text-generation-webui/modules

Latest commit: e6c631aea4 by draff, 2023-03-10 21:36:45 +00:00
Replace --load-in-4bit with --llama-bits

Replaces --load-in-4bit with a more flexible --llama-bits argument to allow 2- and 3-bit models as well. This commit also fixes a loading issue with .pt files that are not in the root of the models folder.
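For context, a minimal sketch of what such a flag swap could look like in an argparse-based options module such as shared.py; the default value, help text, and handling below are assumptions for illustration, not the commit's actual code.

```python
import argparse

parser = argparse.ArgumentParser()

# Old behaviour (removed by the commit above): a boolean that only
# allowed 4-bit loading.
# parser.add_argument('--load-in-4bit', action='store_true')

# New behaviour: an integer bit width, so 2-, 3-, and 4-bit quantized
# models can all be requested with one flag. Default and help text are
# assumptions, not taken from the repository.
parser.add_argument('--llama-bits', type=int, default=0,
                    help='Load a pre-quantized LLaMA model with the given '
                         'bit width (e.g. 2, 3, or 4).')

args = parser.parse_args()
if args.llama_bits > 0:
    print(f'Would load a {args.llama_bits}-bit quantized model.')
```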
chat.py Better handle <USER> 2023-03-05 17:01:47 -03:00
deepspeed_parameters.py Fix deepspeed (oops) 2023-02-02 10:39:37 -03:00
extensions.py Move bot_picture.py inside the extension 2023-02-25 03:00:19 -03:00
html_generator.py Store thumbnails as files instead of base64 strings 2023-02-27 13:41:00 -03:00
models.py Replace --load-in-4bit with --llama-bits 2023-03-10 21:36:45 +00:00
RWKV.py Add proper streaming to RWKV 2023-03-07 18:17:56 -03:00
shared.py Replace --load-in-4bit with --llama-bits 2023-03-10 21:36:45 +00:00
stopping_criteria.py Improve the imports 2023-02-23 14:41:42 -03:00
text_generation.py Fix encode() for RWKV 2023-03-07 23:15:46 -03:00
ui.py Stop chat from flashing dark when processing 2023-03-03 13:19:13 -03:00