
LLaVA pipeline

This module provides two pipelines:

  • llava-7b - for use with the LLaVA v0 7B model (fine-tuned LLaMA 7B)
  • llava-13b - for use with the LLaVA v0 13B model (fine-tuned LLaMA 13B)

LLaVA uses CLIP openai/clip-vit-large-patch14 as the vision model, followed by a single linear layer that projects the image features into the language model's embedding space. For 13B the projector weights are in liuhaotian/LLaVA-13b-delta-v0, and for 7B they are in liuhaotian/LLaVA-7b-delta-v0.
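As a rough sketch of the projection step described above: CLIP ViT-L/14 produces 1024-dim patch embeddings, and a single linear layer maps them into the LLaMA embedding space (4096 dims for the 7B model). The dimensions and names here are illustrative assumptions, not the pipeline's actual code.

```python
import torch
import torch.nn as nn

# Assumed dimensions: CLIP ViT-L/14 hidden size (1024) and
# LLaMA 7B embedding size (4096).
CLIP_HIDDEN = 1024
LLAMA_7B_HIDDEN = 4096

# The multimodal projector: a single linear layer, as the text above states.
projector = nn.Linear(CLIP_HIDDEN, LLAMA_7B_HIDDEN)

# Stand-in for one image's patch features from the CLIP vision tower
# (a 224px input at patch size 14 yields a 16x16 = 256 patch grid).
image_features = torch.randn(1, 256, CLIP_HIDDEN)

# Project into the language model's embedding space.
image_embeds = projector(image_features)
print(tuple(image_embeds.shape))  # (1, 256, 4096)
```

The projected embeddings are then inserted into the prompt's token embedding sequence in place of the image placeholder tokens.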

The supported parameter combinations for both the vision model and the projector are: CUDA/32bit, CUDA/16bit, and CPU/32bit.
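A minimal sketch of validating those combinations, assuming a hypothetical helper (`check_params` is not part of the extension's API):

```python
# Supported (device, bits) pairs from the list above; note CPU/16bit is absent.
SUPPORTED = {("cuda", 32), ("cuda", 16), ("cpu", 32)}

def check_params(device: str, bits: int) -> None:
    """Raise if the (device, bits) pair is not a supported combination."""
    if (device, bits) not in SUPPORTED:
        raise ValueError(f"unsupported combination: {device}/{bits}bit")

check_params("cuda", 16)  # ok
check_params("cpu", 32)   # ok
# check_params("cpu", 16) would raise ValueError
```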