From e0583f0ec22150314dfed547f30c8b4f0bfa8ee5 Mon Sep 17 00:00:00 2001
From: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date: Wed, 21 Dec 2022 16:52:23 -0300
Subject: [PATCH] Update README.md

---
 README.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f8029960..75b1633b 100644
--- a/README.md
+++ b/README.md
@@ -39,7 +39,7 @@ One way to make this process about 10x faster is to convert the models to pytorc
 
 The output model will be saved to `torch-dumps/model-name.pt`. This is the default way to load all models except for `gpt-neox-20b`, `opt-13b`, `OPT-13B-Erebus`, `gpt-j-6B`, and `flan-t5`. I don't remember why these models are exceptions.
 
-If I get enough ⭐s on this repository, I will make the process of loading models more transparent and straightforward.
+If I get enough ⭐s on this repository, I will make the process of loading models saner and more customizable.
 
 ## Starting the webui
 
@@ -47,3 +47,7 @@ If I get enough ⭐s on this repository, I will make the process of loading mode
 
     python server.py
 
 Then browse to `http://localhost:7860/?__theme=dark`
+
+## Contributing
+
+Pull requests are welcome.
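
The README text touched by this patch describes loading models from a pickled dump at `torch-dumps/model-name.pt` instead of re-reading the original checkpoint files. The repository's actual conversion code is not shown in the patch; the sketch below only illustrates the general `torch.save`/`torch.load` round-trip it alludes to. The function names `convert_to_torch_dump` and `load_torch_dump` are hypothetical, not identifiers from the project.

```python
from pathlib import Path

import torch

def convert_to_torch_dump(model, model_name: str, dump_dir: str = "torch-dumps") -> Path:
    """Pickle a loaded model once so later startups can skip the slow load path.

    Hypothetical helper illustrating the README's torch-dumps idea.
    """
    Path(dump_dir).mkdir(exist_ok=True)
    path = Path(dump_dir) / f"{model_name}.pt"
    torch.save(model, path)  # serializes the whole module with pickle
    return path

def load_torch_dump(model_name: str, dump_dir: str = "torch-dumps"):
    """Reload a previously dumped model object from dump_dir/model_name.pt."""
    path = Path(dump_dir) / f"{model_name}.pt"
    # weights_only=False is needed on newer PyTorch to unpickle a full module
    # (it is the implicit behavior on older versions the README predates)
    return torch.load(path, weights_only=False)
```

Pickling the whole module trades portability for speed: the dump reloads without re-parsing checkpoint shards, but it is tied to the class definitions available at load time.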