text-generation-webui/modules
| File | Last commit message | Last commit date |
|------|---------------------|------------------|
| AutoGPTQ_loader.py | Small AutoGPTQ fix | 2023-05-23 15:20:01 -03:00 |
| callbacks.py | Remove mutable defaults from function signature. (#1663) | 2023-05-08 22:55:41 -03:00 |
| chat.py | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00 |
| deepspeed_parameters.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| evaluate.py | Some qol changes to "Perplexity evaluation" | 2023-05-25 15:06:22 -03:00 |
| extensions.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| GPTQ_loader.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| html_generator.py | Add markdown table rendering | 2023-05-10 13:41:23 -03:00 |
| llama_attn_hijack.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| llamacpp_model.py | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00 |
| logging_colors.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| LoRA.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| models.py | Fix hang in tokenizer for AutoGPTQ llama models. (#2399) | 2023-05-28 23:10:10 -03:00 |
| monkey_patch_gptq_lora.py | Better warning messages | 2023-05-03 21:43:17 -03:00 |
| RWKV.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| shared.py | Change a warning message | 2023-05-28 22:48:20 -03:00 |
| text_generation.py | Fix return message when no model is loaded | 2023-05-28 22:46:32 -03:00 |
| training.py | Some qol changes to "Perplexity evaluation" | 2023-05-25 15:06:22 -03:00 |
| ui.py | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00 |
| utils.py | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00 |