Commit graph

1043 commits

Author  SHA1  Message  Date
kalomaze  367e5e6e43  Implement Min P as a sampler option in HF loaders (#4449)  2023-11-02 16:32:51 -03:00
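
Min P keeps only tokens whose probability is at least min_p times the top token's probability, so the cutoff scales with model confidence. A minimal sketch of the filtering step (illustrative names, not the webui's exact implementation):

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.05) -> torch.Tensor:
    # Tokens below min_p * P(top token) are masked out; with a confident
    # distribution the cutoff is high, with a flat one it is permissive.
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float("-inf"))
```
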
oobabooga  fcb7017b7a  Remove a checkbox  2023-11-02 12:24:09 -07:00
Julien Chaumond  fdcaa955e3  transformers: Add a flag to force load from safetensors (#4450)  2023-11-02 16:20:54 -03:00
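
transformers exposes a use_safetensors argument on from_pretrained; forcing it on avoids ever unpickling a .bin checkpoint. A sketch of the effect (the webui-side flag name is not shown in this log):

```python
from transformers import AutoModelForCausalLM

# With use_safetensors=True, loading fails instead of silently falling
# back to pickle-based .bin weights when no safetensors file exists.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", use_safetensors=True)
```
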
oobabooga  c0655475ae  Add cache_8bit option  2023-11-02 11:23:04 -07:00
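
The exllamav2 library ships an 8-bit KV cache alongside the default FP16 one, so an option like cache_8bit presumably selects between them. A sketch of the assumed wiring:

```python
from exllamav2 import ExLlamaV2Cache, ExLlamaV2Cache_8bit

def make_cache(model, cache_8bit: bool):
    # The 8-bit cache roughly halves KV-cache VRAM at some quality cost.
    return ExLlamaV2Cache_8bit(model) if cache_8bit else ExLlamaV2Cache(model)
```
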
oobabooga  42f816312d  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  2023-11-02 11:09:26 -07:00
oobabooga  77abd9b69b  Add no_flash_attn option  2023-11-02 11:08:53 -07:00
Julien Chaumond  a56ef2a942  make torch.load a bit safer (#4448)  2023-11-02 14:07:08 -03:00
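
The usual way to harden torch.load is weights_only=True, which swaps in a restricted unpickler; assuming that is what this patch does, the call looks like:

```python
import torch

# weights_only=True only reconstructs tensors and primitive containers,
# so a malicious pickle cannot execute arbitrary code during load.
state_dict = torch.load("checkpoint.pt", weights_only=True)
```
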
Mehran Ziadloo  aaf726dbfb  Updating the shared settings object when loading a model (#4425)  2023-11-01 01:29:57 -03:00
oobabooga  9bd0724d85  Change frequency/presence penalty ranges  2023-10-31 20:57:56 -07:00
Meheret  0707ed7677  updated wiki link (#4415)  2023-10-31 19:09:05 -03:00
oobabooga  262f8ae5bb  Use default gr.Dataframe for evaluation table  2023-10-27 06:49:14 -07:00
oobabooga  839a87bac8  Fix is_ccl_available & is_xpu_available imports  2023-10-26 20:27:04 -07:00
Abhilash Majumder  778a010df8  Intel Gpu support initialization (#4340)  2023-10-26 23:39:51 -03:00
oobabooga  92b2f57095  Minor metadata bug fix (second attempt)  2023-10-26 18:57:32 -07:00
tdrussell  72f6fc6923  Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376)  2023-10-25 12:10:28 -03:00
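
Under the OpenAI-style definitions these names imply, presence_penalty is a flat, once-per-token subtraction while frequency_penalty grows with each repetition. A sketch over 1-D logits and a 1-D LongTensor of generated token ids (illustrative, not the repo's sampler code):

```python
import torch

def apply_penalties(logits, generated_ids, presence_penalty=0.0, frequency_penalty=0.0):
    counts = torch.bincount(generated_ids, minlength=logits.shape[-1])
    logits = logits - frequency_penalty * counts        # scales with repeats
    logits = logits - presence_penalty * (counts > 0)   # flat, once per token
    return logits
```
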
oobabooga  ef1489cd4d  Remove unused parameter in AutoAWQ  2023-10-23 20:45:43 -07:00
oobabooga  1edf321362  Lint  2023-10-23 13:09:03 -07:00
oobabooga  280ae720d7  Organize  2023-10-23 13:07:17 -07:00
oobabooga  49e5eecce4  Merge remote-tracking branch 'refs/remotes/origin/main'  2023-10-23 12:54:05 -07:00
oobabooga  306d764ff6  Minor metadata bug fix  2023-10-23 12:46:24 -07:00
adrianfiedler  4bc411332f  Fix broken links (#4367)  2023-10-23 14:09:57 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  92691ee626  Disable trust_remote_code by default  2023-10-23 09:57:44 -07:00
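
trust_remote_code controls whether transformers executes modeling code bundled inside a model repo; with the new default, users must opt in per load. A sketch:

```python
from transformers import AutoModelForCausalLM

# False blocks repo-supplied Python from running; architectures that need
# custom code then fail to load unless the user opts in explicitly.
model = AutoModelForCausalLM.from_pretrained("some/model", trust_remote_code=False)
```
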
tdrussell  4440f87722  Add additive_repetition_penalty sampler setting. (#3627)  2023-10-23 02:28:07 -03:00
oobabooga  df90d03e0b  Replace --mul_mat_q with --no_mul_mat_q  2023-10-22 12:23:03 -07:00
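
Replacing --mul_mat_q with --no_mul_mat_q flips the setting to default-on. The standard argparse pattern for such an inversion (a sketch, not the repo's parser):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no_mul_mat_q", action="store_true",
                    help="Disable the mul_mat_q kernels.")
args = parser.parse_args([])        # empty list: simulate no CLI flags
mul_mat_q = not args.no_mul_mat_q   # True unless explicitly disabled
```
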
Googulator  d0c3b407b3  transformers loader: multi-LoRAs support (#3120)  2023-10-22 16:06:22 -03:00
omo  4405513ca5  Option to select/target additional linear modules/layers in LORA training (#4178)  2023-10-22 15:57:19 -03:00
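
With peft, the set of layers LoRA attaches to is the target_modules list in LoraConfig; targeting the MLP projections as well as attention is a common extension. A sketch (module names are LLaMA-style and illustrative):

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    # Attention projections plus extra MLP linears:
    target_modules=["q_proj", "v_proj", "gate_proj", "up_proj", "down_proj"],
)
```
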
oobabooga  2d1b3332e4  Ignore warnings on Colab  2023-10-21 21:45:25 -07:00
oobabooga  09f807af83  Use ExLlama_HF for GPTQ models by default  2023-10-21 20:45:38 -07:00
oobabooga  506d05aede  Organize command-line arguments  2023-10-21 18:52:59 -07:00
oobabooga  fbac6d21ca  Add missing exception  2023-10-20 23:53:24 -07:00
Brian Dashore  3345da2ea4  Add flash-attention 2 for windows (#4235)  2023-10-21 03:46:23 -03:00
Johan  1d5a015ce7  Enable special token support for exllamav2 (#4314)  2023-10-21 01:54:06 -03:00
turboderp  ae8cd449ae  ExLlamav2_HF: Convert logits to FP32 (#4310)  2023-10-18 23:16:05 -03:00
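
FP16 logits can lose precision or overflow in the softmax and cumulative sums that samplers rely on; casting to FP32 first is the standard fix, presumably what this change does:

```python
import torch

def prepare_logits(logits: torch.Tensor) -> torch.Tensor:
    # Cast FP16/BF16 model output to FP32 before any sampling math.
    return logits.float()
```
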
oobabooga  f17f7a6913  Increase the evaluation table height  2023-10-16 12:55:35 -07:00
oobabooga  8ea554bc19  Check for torch.xpu.is_available()  2023-10-16 12:53:40 -07:00
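
torch only gains a torch.xpu namespace on Intel-enabled builds, so the check needs a hasattr guard before calling is_available(). A sketch of guarded device selection:

```python
import torch

def pick_device() -> str:
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"  # Intel GPU via the XPU backend
    return "cpu"
```
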
oobabooga  188d20e9e5  Reduce the evaluation table height  2023-10-16 10:53:42 -07:00
oobabooga  2d44adbb76  Clear the torch cache while evaluating  2023-10-16 10:52:50 -07:00
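
Freeing the allocator's cached blocks between evaluation batches keeps long perplexity runs from fragmenting VRAM. A minimal sketch of the helper such a change implies:

```python
import gc
import torch

def clear_torch_cache():
    gc.collect()                  # drop dangling tensor references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
```
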
oobabooga  71cac7a1b2  Increase the height of the evaluation table  2023-10-15 21:56:40 -07:00
oobabooga  e14bde4946  Minor improvements to evaluation logs  2023-10-15 20:51:43 -07:00
oobabooga  b88b2b74a6  Experimental Intel Arc transformers support (untested)  2023-10-15 20:51:11 -07:00
Forkoz  8cce1f1126  Exllamav2 lora support (#4229)  2023-10-14 16:12:41 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  773c17faec  Fix a warning  2023-10-10 20:53:38 -07:00
oobabooga  f63361568c  Fix safetensors kwarg usage in AutoAWQ  2023-10-10 19:03:09 -07:00
oobabooga  39f16ff83d  Fix default/notebook tabs css  2023-10-10 18:45:12 -07:00
oobabooga  fae8062d39  Bump to latest gradio (3.47) (#4258)  2023-10-10 22:20:49 -03:00
oobabooga  9fab9a1ca6  Minor fix  2023-10-10 14:08:11 -07:00
oobabooga  a49cc69a4a  Ignore rope_freq_base if value is 10000  2023-10-10 13:57:40 -07:00
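
10000 is the conventional RoPE theta, so a user-supplied 10000 carries no information and is presumably treated as "unset" rather than forwarded as an override. A sketch:

```python
from typing import Optional

def effective_rope_freq_base(value: Optional[float]) -> Optional[float]:
    # 10000 is the standard default, so passing it through as a custom
    # override would be redundant; treat it as "not set".
    return None if value == 10000 else value
```
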
oobabooga  3a9d90c3a1  Download models with 4 threads by default  2023-10-10 13:52:10 -07:00
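
Sharded checkpoints are a natural fit for parallel fetching. A sketch of a 4-worker default, where download_file is a hypothetical stand-in for the per-file fetch:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(urls, download_file, threads: int = 4):
    # Fetch up to `threads` files concurrently; map preserves input order.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(download_file, urls))
```
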
Forkoz  35695e18c7  Remove import. (#4247)  2023-10-09 18:06:11 -03:00
    For real this time.
Forkoz  2e471071af  Update llama_attn_hijack.py (#4231)  2023-10-08 15:16:48 -03:00