| Repository | Stars | Open PRs | Last Commit | Releases | Last Release | Open Issues | License | Language | Description |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| camenduru/stable-diffusion-webui-colab | 14,090 | 0 | over 2 years ago | 0 | — | 16 | Unlicense | Jupyter Notebook | Stable Diffusion WebUI Colab notebooks. |
| huggingface/peft | 12,271 | 101 | about 2 years ago | 11 | December 06, 2023 | 65 | Apache-2.0 | Python | 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. |
| microsoft/LoRA | 7,814 | 16 | over 2 years ago | 3 | August 27, 2023 | 79 | MIT | Python | Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models". |
| hiyouga/ChatGLM-Efficient-Tuning | 3,130 | 0 | over 2 years ago | 6 | August 12, 2023 | 0 | Apache-2.0 | Python | Fine-tuning ChatGLM-6B with PEFT (efficient PEFT-based ChatGLM fine-tuning). |
| adapter-hub/adapters | 2,803 | 7 | about 1 month ago | 18 | April 06, 2023 | 51 | Apache-2.0 | Python | A unified library for parameter-efficient and modular transfer learning. |
| PhoebusSi/Alpaca-CoT | 2,235 | 0 | over 2 years ago | 0 | — | 30 | Apache-2.0 | Jupyter Notebook | Unified interfaces for instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning); open-source contributors are welcome to submit PRs integrating LLM-related technologies. |
| ssbuild/chatglm_finetuning | 1,486 | 0 | over 2 years ago | 0 | — | 38 | — | Python | ChatGLM-6B fine-tuning and Alpaca fine-tuning. |
| siliconflow/onediff | 787 | 0 | about 2 years ago | 0 | — | 27 | — | Python | OneDiff: An out-of-the-box acceleration library for diffusion models. |
| predibase/lorax | 719 | 0 | about 2 years ago | 0 | — | 45 | — | Python | Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs. |
| ashishpatel26/LLM-Finetuning | 582 | 0 | over 2 years ago | 0 | — | 1 | — | Jupyter Notebook | LLM fine-tuning with PEFT. |
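Several of the repositories above (microsoft/LoRA, huggingface/peft, predibase/lorax) center on the same LoRA idea: the pretrained weight matrix W stays frozen, and training updates only a low-rank pair of matrices B·A whose product is added to the layer's output, scaled by alpha/r. A minimal NumPy sketch of that forward pass, assuming illustrative dimensions and names (none of this is taken from the listed libraries' code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 4, 2                  # hypothetical layer dims; r is the LoRA rank
W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, zero-initialized
alpha = 16                                # LoRA scaling hyperparameter

def lora_forward(x, W, A, B, alpha, r):
    """y = x W^T + (alpha / r) * x A^T B^T: frozen base output plus low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(3, d_in))
# Because B starts at zero, the adapted layer initially matches the frozen base layer.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

In huggingface/peft this pattern corresponds to wrapping a model with a `LoraConfig` and `get_peft_model`, which injects such adapter pairs into selected linear layers; the lorax server exploits the fact that only the small A/B matrices differ between fine-tunes to serve many adapters over one shared base model.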