diff --git a/README.md b/README.md
index 27e0dcd8..a8dbb33a 100644
--- a/README.md
+++ b/README.md
@@ -21,19 +21,29 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 ## Installation
 
-Create a conda environment:
+1. You need to have the conda environment manager installed on your system. If you don't have it already, get it here: [miniconda download](https://docs.conda.io/en/latest/miniconda.html).
+
+2. Then open a terminal window and create a conda environment:
 
     conda create -n textgen
     conda activate textgen
 
-Install the appropriate pytorch for your GPU. For NVIDIA GPUs, this should work:
+3. Install the appropriate pytorch. For NVIDIA GPUs, this should work:
 
     conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
 
-Install the requirements:
+For AMD GPUs, you need the ROCm version of pytorch. For running exclusively on the CPU, the stock pytorch is enough, and this should work:
+
+    conda install pytorch torchvision torchaudio -c pytorch
+
+4. Clone or download this repository, and then `cd` into its directory from your terminal window.
+
+5. Install the required Python libraries:
 
     pip install -r requirements.txt
 
+After these steps, you should be able to start the web UI, but first you need to download a model to load.
+
 ## Downloading models
 
 Models should be placed under `models/model-name`. For instance, `models/gpt-j-6B` for [gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main).
@@ -75,6 +85,8 @@ Then follow these steps to install:
 python download-model.py EleutherAI/gpt-j-6B
 ```
 
+You don't really need all of GPT-J's files, just the tokenizer files, but you might as well download the whole thing. Those files will be automatically detected when you attempt to load gpt4chan.
+
 #### Converting to pytorch (optional)
 
 The script `convert-to-torch.py` allows you to convert models to .pt format, which is about 10x faster to load:
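
The README text in this diff says models live in per-model folders under `models/` (e.g. `models/gpt-j-6B`). As a minimal sketch of that layout, a hypothetical helper (not part of the repo) could check whether a model has been downloaded:

```python
from pathlib import Path

def model_dir_exists(models_root: str, model_name: str) -> bool:
    # Hypothetical helper, not part of the repo: each model lives in its own
    # folder under models/, e.g. models/gpt-j-6B, so "downloaded" here simply
    # means that directory exists.
    return (Path(models_root) / model_name).is_dir()

# Usage sketch:
# model_dir_exists("models", "gpt-j-6B")  -> True once the model is downloaded
```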