Update ancient Docker instructions

oobabooga 2023-11-17 19:52:30 -08:00
parent e0ca49ed9c
commit d1a58da52f
6 changed files with 16 additions and 22 deletions

@@ -169,7 +169,7 @@ cp docker/.env.example .env
docker compose up --build
```
* You need to have docker compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/wiki/09-%E2%80%90-Docker) for instructions.
* You need to have Docker Compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/wiki/09-%E2%80%90-Docker) for instructions.
* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).
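
A quick way to confirm the version requirement mentioned in the bullet above is to ask the Compose plugin directly; any v2.17 or newer output is sufficient (the exact version string will vary):

```
# Print the installed Docker Compose version
docker compose version
```
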
### Updating the requirements

@@ -5,5 +5,4 @@ Dockerfile
/models
/presets
/prompts
/softprompts
/training

@@ -3,13 +3,8 @@
# https://developer.nvidia.com/cuda-gpus you can find the version for your card here
TORCH_CUDA_ARCH_LIST=7.5
# these commands worked for me with roughly 4.5GB of vram
CLI_ARGS=--model llama-7b-4bit --wbits 4 --listen --auto-devices
# the following examples have been tested with the files linked in docs/README_docker.md:
# example running 13b with 4bit/128 groupsize : CLI_ARGS=--model llama-13b-4bit-128g --wbits 4 --listen --groupsize 128 --pre_layer 25
# example with loading api extension and public share: CLI_ARGS=--model llama-7b-4bit --wbits 4 --listen --auto-devices --no-stream --extensions api --share
# example running 7b with 8bit groupsize : CLI_ARGS=--model llama-7b --load-in-8bit --listen --auto-devices
# your command-line flags go here:
CLI_ARGS=
# the port the webui binds to on the host
HOST_PORT=7860
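
The updated template above intentionally leaves CLI_ARGS empty. A minimal sketch of a filled-in value, assuming the --listen and --api flags are still accepted by server.py in your checkout (flags change between releases, so verify against the current --help output):

```
# Illustrative flags only; replace with whatever your version of server.py accepts
CLI_ARGS=--listen --api
```
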
@@ -21,10 +16,5 @@ HOST_API_PORT=5000
# the port the api binds to inside the container
CONTAINER_API_PORT=5000
# the port the api stream endpoint binds to on the host
HOST_API_STREAM_PORT=5005
# the port the api stream endpoint binds to inside the container
CONTAINER_API_STREAM_PORT=5005
# the version used to install text-generation-webui from
WEBUI_VERSION=HEAD
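
The two API variables kept above are the only API ports after this change. A sketch of a request against the OpenAI-compatible endpoint, assuming --api is enabled in CLI_ARGS and the default port mapping; the route and payload are assumptions and may differ in your version:

```
# Hypothetical request; adjust the path and fields to your API version
curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 64}'
```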

@@ -73,5 +73,5 @@ RUN --mount=type=cache,target=/root/.cache/pip,rw \
ENV CLI_ARGS=""
EXPOSE ${CONTAINER_PORT:-7860} ${CONTAINER_API_PORT:-5000} ${CONTAINER_API_STREAM_PORT:-5005}
EXPOSE ${CONTAINER_PORT:-7860} ${CONTAINER_API_PORT:-5000}
CMD . /app/venv/bin/activate && python3 server.py ${CLI_ARGS}
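
Since the CMD above expands ${CLI_ARGS} at runtime, the flags can also be supplied with -e when running the image outside Compose. A sketch, assuming a locally built image tagged textgen-webui (the tag is hypothetical) and the NVIDIA container toolkit for --gpus:

```
# Hypothetical tag and flags; map only the ports you need
docker run --rm -it --gpus all \
  -e CLI_ARGS="--listen --api" \
  -p 7860:7860 -p 5000:5000 \
  textgen-webui
```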

@@ -11,7 +11,6 @@ services:
ports:
- "${HOST_PORT:-7860}:${CONTAINER_PORT:-7860}"
- "${HOST_API_PORT:-5000}:${CONTAINER_API_PORT:-5000}"
- "${HOST_API_STREAM_PORT:-5005}:${CONTAINER_API_STREAM_PORT:-5005}"
stdin_open: true
tty: true
volumes:
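
In the mapping above, ${HOST_PORT:-7860} tells Compose to read HOST_PORT from the shell environment or the .env file and to fall back to 7860 when it is unset, so the published port can be changed without editing any file (8080 below is just an example):

```
# Publish the web UI on host port 8080 for this run only
HOST_PORT=8080 docker compose up --build
```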

@@ -1,13 +1,21 @@
Docker Compose is a way of installing and launching the web UI in an isolated Ubuntu image using only a few commands.
In order to create the image as described in the main README, you must have docker compose 2.17 or higher:
## Installing Docker Compose
In order to create the image as described in the main README, you must have Docker Compose installed (2.17 or higher is recommended):
```
~$ docker compose version
Docker Compose version v2.17.2
Docker Compose version v2.21.0
```
Make sure to also create the necessary symbolic links:
The installation instructions for various Linux distributions can be found here:
https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
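
On Ubuntu, once Docker's apt repository from the page linked above is configured, the Compose plugin is a single package (a sketch; package names differ on other distributions):

```
# Install the Compose v2 CLI plugin from Docker's repository
sudo apt-get update
sudo apt-get install docker-compose-plugin
```
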
## Launching the image
Use these commands to launch the image:
```
cd text-generation-webui
@@ -17,13 +25,11 @@ cp docker/.env.example .env
docker compose up --build
```
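
After the first --build, the same stack can be restarted in the background and inspected or torn down with the usual Compose subcommands:

```
docker compose up -d     # start detached, reusing the built image
docker compose logs -f   # follow the container logs
docker compose down      # stop and remove the containers
```
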
## Table of contents
## More detailed installation instructions
* [Docker Compose installation instructions](#docker-compose-installation-instructions)
* [Repository with additional Docker files](#dedicated-docker-repository)
## Docker Compose installation instructions
By [@loeken](https://github.com/loeken).
- [Ubuntu 22.04](#ubuntu-2204)