text-generation-webui/modules
Latest commit: Add ChatGLM support (#1256) by Forkoz (c6fe1ced01), Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>, 2023-04-16 19:15:03 -03:00
File                            Last commit message                                        Date
api.py                          Add defaults to the gradio API                             2023-04-16 17:33:28 -03:00
callbacks.py                    Make the code more like PEP8 for readability (#862)        2023-04-07 00:15:45 -03:00
chat.py                         Add ChatGLM support (#1256)                                2023-04-16 19:15:03 -03:00
deepspeed_parameters.py         Fix deepspeed (oops)                                       2023-02-02 10:39:37 -03:00
extensions.py                   Merge pull request from GHSA-hv5m-3rp9-xcpf                2023-04-16 01:36:50 -03:00
GPTQ_loader.py                  Simplify GPTQ_loader.py                                    2023-04-13 12:13:07 -03:00
html_generator.py               Properly handle blockquote blocks                          2023-04-16 18:00:12 -03:00
llama_attn_hijack.py            Added xformers support to Llama (#950)                     2023-04-09 23:08:40 -03:00
llamacpp_model.py               Make the code more like PEP8 for readability (#862)        2023-04-07 00:15:45 -03:00
llamacpp_model_alternative.py   Bump llama-cpp-python to use LlamaCache                    2023-04-16 00:53:40 -03:00
LoRA.py                         initial multi-lora support (#1103)                         2023-04-14 14:52:06 -03:00
models.py                       Add ChatGLM support (#1256)                                2023-04-16 19:15:03 -03:00
RWKV.py                         Make the code more like PEP8 for readability (#862)        2023-04-07 00:15:45 -03:00
shared.py                       Add ChatGLM support (#1256)                                2023-04-16 19:15:03 -03:00
text_generation.py              Better detect when no model is loaded                      2023-04-16 17:35:54 -03:00
training.py                     Minor changes to training.py                               2023-04-16 03:08:37 -03:00
ui.py                           Add skip_special_tokens checkbox for Dolly model (#1218)   2023-04-16 14:24:49 -03:00