Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596)

Carl Kenner 2023-05-10 09:07:31 +09:30 committed by GitHub
parent 06c7db017d
commit 814f754451
51 changed files with 352 additions and 28 deletions


@@ -13,7 +13,7 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 * Dropdown menu for switching between models
 * Notebook mode that resembles OpenAI's playground
 * Chat mode for conversation and role playing
-* Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, and MOSS
+* Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, MOSS, LLaVA, RWKV-Raven, Galactica, StableLM, WizardLM, Baize, MPT, and INCITE formats
 * Nice HTML output for GPT-4chan
 * Markdown output for [GALACTICA](https://github.com/paperswithcode/galai), including LaTeX rendering
 * [Custom chat characters](docs/Custom-chat-characters.md)


@@ -0,0 +1,4 @@
user: "[|Human|]"
bot: "[|AI|]"
turn_template: "<|user|><|user-message|>\n<|bot|><|bot-message|>\n"
context: "The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi!\n"


@@ -0,0 +1,4 @@
user: ""
bot: "[START_REF]"
turn_template: "<|user-message|> <|bot|><|bot-message|>\n\n"
context: ""


@@ -0,0 +1,4 @@
user: "<question>"
bot: "<answer>"
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>"
context: ""


@@ -0,0 +1,4 @@
user: "Q:"
bot: "A:"
turn_template: "<|user|> <|user-message|>\n\n<|bot|><|bot-message|>\n\n"
context: ""


@@ -0,0 +1,4 @@
user: ""
bot: "TLDR:"
turn_template: "<|user-message|>\n\n<|bot|><|bot-message|>\n\n"
context: ""


@@ -0,0 +1,4 @@
user: "Question:"
bot: "<work>"
turn_template: "<|user|> <|user-message|>\n\n<|bot|><|bot-message|>\n\n"
context: ""


@@ -0,0 +1,4 @@
user: "<human>"
bot: "<bot>"
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>"
context: "<prefix>You are a helpful chatbot name Stan</prefix>"


@@ -0,0 +1,4 @@
user: "Question:"
bot: "Answer:"
context: ""
turn_template: "<|user|> <|user-message|>\n\n<|bot|><|bot-message|>\n\n"


@@ -0,0 +1,4 @@
user: "### Instruction:"
bot: "### Response:"
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n"
context: ""


@@ -0,0 +1,4 @@
user: "<human>:"
bot: "<bot>:"
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n"
context: ""


@@ -0,0 +1,4 @@
user: "Q:"
bot: "A:"
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n"
context: ""


@@ -0,0 +1,10 @@
user: "user"
bot: "assistant"
context: |
  <|im_start|>system
  - You are a helpful assistant chatbot trained by MosaicML.
  - You answer questions.
  - You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
  - You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
turn_template: "<|im_start|><|user|>\n<|user-message|><|im_end|>\n<|im_start|><|bot|>\n<|bot-message|><|im_end|>\n"


@@ -0,0 +1,9 @@
user: "<|USER|>"
bot: "<|ASSISTANT|>"
context: |
  <|SYSTEM|># StableLM Tuned (Alpha version)
  - StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
  - StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
  - StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
  - StableLM will refuse to participate in anything that could harm a human.
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>"


@@ -0,0 +1,4 @@
user: "### Human:"
bot: "### Assistant:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n\n"
context: "### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!\n\n"


@@ -1,4 +1,4 @@
 user: "### Human:"
 bot: "### Assistant:"
 turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n"
-context: "A chat between a human and an assistant.\n\n"
+context: "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n\n"


@@ -0,0 +1,4 @@
user: "USER:"
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n"


@@ -1,4 +0,0 @@
user: "USER:"
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "A chat between a user and an assistant.\n\n"


@@ -0,0 +1,4 @@
user: ""
bot: "### Response:"
turn_template: "<|user-message|>\n\n<|bot|><|bot-message|>\n\n</s>"
context: ""


@@ -6,27 +6,53 @@
   mode: 'chat'
   skip_special_tokens: true
   custom_stopping_strings: ''
-llama-[0-9]*b-4bit$:
-  wbits: 4
-  model_type: 'llama'
-.*-(4bit|int4)-(gr128|128g):
-  wbits: 4
-  groupsize: 128
-.*-(gr128|128g)-(4bit|int4):
-  wbits: 4
-  groupsize: 128
-.*-3bit-(gr128|128g):
-  wbits: 3
-  groupsize: 128
-.*-(gr128|128g)-3bit:
-  wbits: 3
-  groupsize: 128
-.*(oasst-sft-1-pythia-12b|oasst-sft-6-llama-30b):
-  mode: 'instruct'
-  instruction_template: 'Open Assistant'
-.*vicuna:
-  mode: 'instruct'
-  instruction_template: 'Vicuna-v0'
+.*llama:
+  model_type: 'llama'
+.*gptq(?!u|arl|v2):
+  wbits: 4
+  groupsize: 128
+.*(4bit|int4):
+  wbits: 4
+.*(3bit|int3):
+  wbits: 3
+.*(-2bit|_2bit|int2-):
+  wbits: 2
+.*(-1bit|_1bit|int1-):
+  wbits: 1
+.*(8bit|int8):
+  wbits: 8
+.*(-7bit|_7bit|int7-):
+  wbits: 7
+.*(-6bit|_6bit|int6-):
+  wbits: 6
+.*(-5bit|_5bit|int5-):
+  wbits: 5
+.*gptqv2:
+  groupsize: 'None'
+.*(-gr32-|-32g-|groupsize32):
+  groupsize: 32
+.*(-gr64-|-64g-|groupsize64):
+  groupsize: 64
+.*(gr128|128g|groupsize128):
+  groupsize: 128
+.*(gr1024|1024g|groupsize1024):
+  groupsize: 1024
+.*(oasst|stablelm-7b-sft-v7-epoch-3):
+  mode: 'instruct'
+  instruction_template: 'Open Assistant'
+  skip_special_tokens: false
+(?!.*v0)(?!.*1.1)(?!.*1_1)(?!.*stable).*vicuna:
+  mode: 'instruct'
+  instruction_template: 'Vicuna-v0'
+.*vicuna.*v0:
+  mode: 'instruct'
+  instruction_template: 'Vicuna-v0'
+.*vicuna.*(1.1|1_1):
+  mode: 'instruct'
+  instruction_template: 'Vicuna-v1.1'
+.*stable.*vicuna:
+  mode: 'instruct'
+  instruction_template: 'StableVicuna'
 .*alpaca:
   mode: 'instruct'
   instruction_template: 'Alpaca'
@@ -35,7 +61,7 @@ llama-[0-9]*b-4bit$:
   instruction_template: 'Alpaca'
   wbits: 4
   groupsize: 128
-.*(galactica|oasst):
+.*galactica:
   skip_special_tokens: false
 .*dolly-v[0-9]-[0-9]*b:
   mode: 'instruct'
@@ -59,7 +85,51 @@ llama-[0-9]*b-4bit$:
 .*moss-moon.*sft:
   mode: 'instruct'
   instruction_template: 'MOSS'
+.*stablelm-tuned:
+  mode: 'instruct'
+  instruction_template: 'StableLM'
+  truncation_length: 4096
+  chat_prompt_size: 4096
+  chat_prompt_size_max: 4096
+.*stablelm-base:
+  truncation_length: 4096
+  chat_prompt_size: 4096
+  chat_prompt_size_max: 4096
+.*wizardlm:
+  mode: 'instruct'
+  model_type: 'llama'
+  instruction_template: 'WizardLM'
+.*galactica.*finetuned:
+  mode: 'instruct'
+  instruction_template: 'Galactica Finetuned'
+.*galactica.*-v2:
+  mode: 'instruct'
+  instruction_template: 'Galactica v2'
+(?!.*finetuned)(?!.*-v2).*galactica:
+  mode: 'instruct'
+  instruction_template: 'Galactica'
+.*guanaco:
+  mode: 'instruct'
+  instruction_template: 'Guanaco non-chat'
+.*baize:
+  mode: 'instruct'
+  instruction_template: 'Baize'
+.*mpt-.*instruct:
+  mode: 'instruct'
+  instruction_template: 'Alpaca'
+.*mpt-.*chat:
+  mode: 'instruct'
+  instruction_template: 'MPT-Chat'
+(?!.*-flan-)(?!.*-t5-).*lamini-:
+  mode: 'instruct'
+  instruction_template: 'Alpaca'
+.*incite.*chat:
+  mode: 'instruct'
+  instruction_template: 'INCITE-Chat'
+.*incite.*instruct:
+  mode: 'instruct'
+  instruction_template: 'INCITE-Instruct'
 .*pygmalion-7b:
   model_type: 'llama'
 .*metharme-7b:
   model_type: 'llama'
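The regex keys above are matched against the lowercased model name, and every matching section applies, so a single model can pick up quantization bits, group size, and an instruction template from different entries. A hedged sketch of that lookup (the webui's own implementation may differ in details):

# Minimal sketch of the lookup implied by the regex keys: every section
# whose key matches the lowercased model name contributes its values,
# with later sections overriding earlier ones. Approximate, not the
# webui's exact code; the models/config.yaml path is assumed.
import re
import yaml

def infer_model_settings(model_name, config):
    settings = {}
    for pattern, overrides in config.items():
        if re.match(pattern.lower(), model_name.lower()):
            settings.update(overrides)
    return settings

with open("models/config.yaml") as f:
    config = yaml.safe_load(f)

# 'wizardlm-7b-gptq-4bit-128g' matches .*gptq(?!u|arl|v2), .*(4bit|int4),
# .*(gr128|128g|groupsize128) and .*wizardlm, so it ends up with wbits: 4,
# groupsize: 128, model_type: 'llama' and the WizardLM instruction template.
print(infer_model_settings("wizardlm-7b-gptq-4bit-128g", config))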


@@ -145,12 +145,12 @@ def load_quantized(model_name):
     # Find the model type
     if not shared.args.model_type:
         name = model_name.lower()
-        if any((k in name for k in ['llama', 'alpaca', 'vicuna', 'llava'])):
-            model_type = 'llama'
-        elif any((k in name for k in ['opt-', 'galactica'])):
+        if any((k in name for k in ['opt-', 'opt_', 'opt1', 'opt3', 'optfor', 'galactica', 'galpaca', 'pygmalion-350m'])):
             model_type = 'opt'
-        elif any((k in name for k in ['gpt-j', 'pygmalion-6b'])):
+        elif any((k in name for k in ['gpt-j', 'gptj', 'gpt4all-j', 'malion-6b', 'pygway'])):
             model_type = 'gptj'
+        elif any((k in name for k in ['llama', 'alpac', 'vicuna', 'guanaco', 'koala', 'llava', 'wizardlm'])):
+            model_type = 'llama'
         else:
             logging.error("Can't determine model type from model name. Please specify it manually using --model_type argument")
             exit()
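One detail worth calling out: the LLaMA keyword list now includes 'alpac', and 'galpaca' contains that substring, which is why the OPT check runs before the LLaMA check. A standalone copy of the new detection logic, for illustration:

# Standalone copy of the keyword-based detection above (illustrative).
# Branch order matters: 'galpaca-30b' contains 'alpac', so the OPT check
# must run before the LLaMA check to classify it correctly.
def detect_model_type(model_name):
    name = model_name.lower()
    if any(k in name for k in ['opt-', 'opt_', 'opt1', 'opt3', 'optfor',
                               'galactica', 'galpaca', 'pygmalion-350m']):
        return 'opt'
    elif any(k in name for k in ['gpt-j', 'gptj', 'gpt4all-j', 'malion-6b', 'pygway']):
        return 'gptj'
    elif any(k in name for k in ['llama', 'alpac', 'vicuna', 'guanaco',
                                 'koala', 'llava', 'wizardlm']):
        return 'llama'
    return None

assert detect_model_type('galpaca-30b-4bit-128g') == 'opt'
assert detect_model_type('wizardlm-7b-uncensored') == 'llama'
assert detect_model_type('pygmalion-6b-gptq-4bit') == 'gptj'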


@@ -71,12 +71,31 @@ settings = {
     'prompts': {
         'default': 'QA',
         '.*(gpt4chan|gpt-4chan|4chan)': 'GPT-4chan',
-        '.*oasst': 'Open Assistant',
-        '.*alpaca': "Alpaca",
+        '.*(oasst|stablelm-7b-sft-v7-epoch-3)': 'Open Assistant',
+        '.*(alpac|dolly)': "Alpaca",
+        '.*mpt-.*instruct': "Alpaca",
+        "(?!.*v0)(?!.*1.1)(?!.*1_1)(?!.*stable).*vicuna": "Vicuna v0",
+        ".*vicuna.*v0": "Vicuna v0",
+        ".*vicuna.*(1.1|1_1)": "Vicuna v1.1",
+        ".*stable.*vicuna": "StableVicuna",
+        ".*guanaco": "Guanaco-Chat",
+        ".*koala": "Koala",
+        ".*stablelm-tuned": "StableLM",
+        ".*wizardlm": "WizardLM",
+        ".*galactica.*finetuned": "Galactica Finetuned",
+        ".*galactica.*-v2": "Galactica v2",
+        "(?!.*finetuned)(?!.*-v2).*galactica": "Galactica",
+        ".*baize": "Baize",
+        ".*mpt-.*instruct": "Alpaca",
+        ".*mpt-.*chat": "MPT-Chat",
+        "(?!.*-flan-)(?!.*-t5-).*lamini-": "Alpaca",
+        ".*incite.*chat": "INCITE-Chat",
+        ".*incite.*instruct": "INCITE-Instruct",
     },
     'lora_prompts': {
         'default': 'QA',
         '.*alpaca': "Alpaca",
+        '.*baize': "Baize",
     }
 }
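The Vicuna entries rely on negative lookaheads so the catch-all pattern doesn't shadow the more specific ones. A quick illustrative check, assuming a first-match-wins re.match loop like the one the webui applies to these mappings:

# Illustrative check of the negative-lookahead patterns above.
import re

prompts = {
    "(?!.*v0)(?!.*1.1)(?!.*1_1)(?!.*stable).*vicuna": "Vicuna v0",
    ".*vicuna.*v0": "Vicuna v0",
    ".*vicuna.*(1.1|1_1)": "Vicuna v1.1",
    ".*stable.*vicuna": "StableVicuna",
}

def pick_prompt(model_name):
    for pattern, prompt in prompts.items():
        if re.match(pattern.lower(), model_name.lower()):
            return prompt
    return "QA"

assert pick_prompt("vicuna-13b") == "Vicuna v0"        # bare names default to v0
assert pick_prompt("vicuna-13b-1.1") == "Vicuna v1.1"  # lookahead skips the catch-all
assert pick_prompt("stable-vicuna-13b") == "StableVicuna"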


@@ -0,0 +1,14 @@
max_new_tokens=1024
do_sample=True
top_p=0.95
top_k=1000
temperature=1.0
num_beams=1
typical_p=1.0
repetition_penalty=1.0
encoder_repetition_penalty=1.0
no_repeat_ngram_size=0
min_length=0
penalty_alpha=0
length_penalty=1.0
early_stopping=False


@@ -0,0 +1,14 @@
max_new_tokens=128
temperature=0.7
top_k=0
top_p=0.9
do_sample=True
typical_p=1.0
repetition_penalty=1.0
encoder_repetition_penalty=1.0
no_repeat_ngram_size=0
min_length=0
penalty_alpha=0
num_beams=1
length_penalty=1.0
early_stopping=False


@@ -0,0 +1,14 @@
max_new_tokens=64
temperature=0.7
do_sample=True
top_p=1.0
top_k=50
typical_p=1.0
repetition_penalty=1.0
encoder_repetition_penalty=1.0
no_repeat_ngram_size=0
min_length=0
penalty_alpha=0
num_beams=1
length_penalty=1.0
early_stopping=False
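These presets are plain param=value lines. A minimal sketch of parsing one into keyword arguments for a transformers generate() call; the webui's own preset loader differs in details, and the file name below is hypothetical:

# Minimal sketch: parse a preset file of param=value lines into kwargs.
# Values are Python literals (ints, floats, True/False), so
# ast.literal_eval is enough here.
import ast

def load_preset(path):
    params = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            key, _, value = line.partition('=')
            params[key.strip()] = ast.literal_eval(value.strip())
    return params

# params = load_preset('presets/MyPreset.txt')  # hypothetical file name
# output_ids = model.generate(input_ids, **params)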

prompts/Baize.txt (new file, 5 lines)

@@ -0,0 +1,5 @@
The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.
[|Human|]Hello!
[|AI|]Hi!
[|Human|]What is the population of China?
[|AI|]


@@ -0,0 +1,9 @@
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Instruction

Input:
Input

### Response:


@@ -0,0 +1 @@
A paper that introduced a neural network architecture for recognizing digits [START_REF]


@@ -0,0 +1,9 @@
Question: Translate the following Math formula:

\[
\zeta(s) = \sum_{n=1}^{\infty} n^{-s}
\]

into plain English.

Answer:


@@ -0,0 +1 @@
# Multi-Head Attention


@@ -0,0 +1 @@
<question>How to make a campfire<answer>


@@ -0,0 +1 @@
Title: Self-Supervised Learning, A Survey

prompts/Galactica Q.txt (new file, 3 lines)

@@ -0,0 +1,3 @@
Q: What is the notch signaling pathway?

A:


@@ -0,0 +1,3 @@
Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.

TLDR:


@@ -0,0 +1,3 @@
Question: A needle 35 mm long rests on a water surface at 20°C. What force over and above the needle's weight is required to lift the needle from contact with the water surface? σ = 0.0728m.
<work>

prompts/Galactica v2.txt (new file, 1 line)

@@ -0,0 +1 @@
<prefix>You are a helpful chatbot name Stan</prefix><human>What's my name?<bot>

prompts/Galactica.txt (new file, 3 lines)

@@ -0,0 +1,3 @@
Question: What is the notch signaling pathway?

Answer:

prompts/Guanaco-Chat.txt (new file, 7 lines)

@@ -0,0 +1,7 @@
### Instruction:
User: I'm considering getting a pet. Assistant: Owning a pet can be a very rewarding experience. Research the type of pet you're interested in, find out if it fits into your lifestyle and home, and create a budget for food, vet visits, and other expenses.

### Input:
User: How can I make sure my pet is happy and healthy?

### Response:


@@ -0,0 +1,8 @@
### Instruction:
User: I'm trying to better understand quantum physics. Can you explain what a quantum state is? Assistant: Sure! A quantum state is a mathematical description of the properties of a quantum system. It describes the physical condition of a system and can involve multiple parameters, such as position, momentum, and energy. This state acts like a wave and its behavior is determined by the Schrödinger equation. User: Can you explain the Schrödinger equation?

### Input:
System: The Schrödinger equation is a mathematical equation which describes the behavior of a quantum system. It determines the shape of the wavefunction, which describes how a quantum system evolves with time. The equation describes the relationship between the energy of the system and its wavefunction, and its behavior is determined by the values of the measurable parameters such as momentum and position.
User: How does the Schrödinger equation relate to other equations in physics?

### Response:


@@ -0,0 +1,4 @@
### Instruction:
Generate a list of ten dining places when you are in Rome.

### Response:


@@ -0,0 +1,7 @@
### Instruction:
Classify the given text into three categories, output the labels.

### Input:
The movie was predictable, yet enjoyable.

### Response:

prompts/INCITE-Chat.txt (new file, 2 lines)

@@ -0,0 +1,2 @@
<human>: Who is Alan Turing?
<bot>:


@@ -0,0 +1,2 @@
Q: The capital of France is?
A:

prompts/Koala.txt (new file, 1 line)

@@ -0,0 +1 @@
BEGINNING OF CONVERSATION: USER: Hello! GPT:Hi! How can I help you?</s>USER: What is the largest animal on earth? GPT:

prompts/MPT-Chat.txt (new file, 11 lines)

@@ -0,0 +1,11 @@
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
How are you now?<|im_end|>

prompts/StableLM.txt (new file, 7 lines)

@@ -0,0 +1,7 @@
<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
<|USER|>Write a story about the future of AI development
<|ASSISTANT|>

prompts/StableVicuna.txt (new file, 4 lines)

@@ -0,0 +1,4 @@
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!

### Human: Write a story about the future of AI development
### Assistant:

prompts/Vicuna v0.txt (new file, 4 lines)

@@ -0,0 +1,4 @@
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.

### Human: Write a story about the future of AI development
### Assistant:

prompts/Vicuna v1.1.txt (new file, 4 lines)

@@ -0,0 +1,4 @@
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: Write a story about the future of AI development
ASSISTANT:

prompts/WizardLM.txt (new file, 3 lines)

@@ -0,0 +1,3 @@
If a car travels 120 miles in 2 hours, what is its average speed in miles per hour?

### Response:


@@ -40,11 +40,29 @@
     "prompts": {
         "default": "QA",
         ".*(gpt4chan|gpt-4chan|4chan)": "GPT-4chan",
-        ".*oasst": "Open Assistant",
-        ".*alpaca": "Alpaca"
+        ".*(oasst|stablelm-7b-sft-v7-epoch-3)": "Open Assistant",
+        ".*(alpac|dolly)": "Alpaca",
+        "(?!.*v0)(?!.*1.1)(?!.*1_1)(?!.*stable).*vicuna": "Vicuna v0",
+        ".*vicuna.*v0": "Vicuna v0",
+        ".*vicuna.*(1.1|1_1)": "Vicuna v1.1",
+        ".*stable.*vicuna": "StableVicuna",
+        ".*guanaco": "Guanaco-Chat",
+        ".*koala": "Koala",
+        ".*stablelm-tuned": "StableLM",
+        ".*wizardlm": "WizardLM",
+        ".*galactica.*finetuned": "Galactica Finetuned",
+        ".*galactica.*-v2": "Galactica v2",
+        "(?!.*finetuned)(?!.*-v2).*galactica": "Galactica",
+        ".*baize": "Baize",
+        ".*mpt-.*instruct": "Alpaca",
+        ".*mpt-.*chat": "MPT-Chat",
+        "(?!.*-flan-)(?!.*-t5-).*lamini-": "Alpaca",
+        ".*incite.*chat": "INCITE-Chat",
+        ".*incite.*instruct": "INCITE-Instruct"
     },
     "lora_prompts": {
         "default": "QA",
-        ".*(alpaca-lora-7b|alpaca-lora-13b|alpaca-lora-30b)": "Alpaca"
+        ".*(alpaca-lora-7b|alpaca-lora-13b|alpaca-lora-30b)": "Alpaca",
+        ".*baize": "Baize"
     }
 }