Clarify how to start server.py with multimodal API support (#2025)

Damian Stewart 2023-05-12 19:37:49 +02:00 committed by GitHub
parent 437d1c7ead
commit 3f1bfba718


@@ -57,7 +57,10 @@ This extension uses the following parameters (from `settings.json`):
## Usage through API
You can run multimodal inference through the API by including images in the prompt. Images are embedded like so: `f'<img src="data:image/jpeg;base64,{img_str}">'`, where `img_str` is base64-encoded JPEG data. Note that you will need to launch `server.py` with the arguments `--api --extensions multimodal` (i.e. `python server.py --api --extensions multimodal`).
Python example:
```Python
import base64
import requests
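
# A minimal sketch of how the example can continue, assuming the default
# blocking API enabled by --api. The endpoint URL, port, and payload keys
# below are assumptions for illustration, not taken from this commit.

# Read an image from disk and base64-encode it for embedding in the prompt.
with open('image.jpg', 'rb') as f:
    img_str = base64.b64encode(f.read()).decode('utf-8')

# Embed the image using the tag format described above.
prompt = f'What is shown in this picture? <img src="data:image/jpeg;base64,{img_str}">'

# POST the prompt to the (assumed) default blocking API endpoint.
response = requests.post(
    'http://127.0.0.1:5000/api/v1/generate',
    json={'prompt': prompt, 'max_new_tokens': 200},
)
print(response.json())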