| Repository | Stars | | | Last commit | Releases | Latest release | Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| hiyouga/LLaMA-Factory | 10,715 | 0 | 0 | about 2 years ago | 19 | December 03, 2023 | 96 | apache-2.0 | Python | Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM) |
| microsoft/LoRA | 7,814 | 0 | 16 | over 2 years ago | 3 | August 27, 2023 | 79 | mit | Python | Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" |
| hiyouga/ChatGLM-Efficient-Tuning | 3,130 | 0 | 0 | over 2 years ago | 6 | August 12, 2023 | 0 | apache-2.0 | Python | Fine-tuning ChatGLM-6B with PEFT (efficient ChatGLM fine-tuning based on PEFT) |
| stochasticai/xTuring | 2,392 | 0 | 0 | over 2 years ago | 0 | | 11 | apache-2.0 | Python | Easily build, customize, and control your own LLMs |
| zjunlp/KnowLM | 870 | 0 | 0 | about 2 years ago | 0 | | 1 | apache-2.0 | Python | An open-source knowledgeable large language model framework |
| Joyce94/LLM-RLHF-Tuning | 225 | 0 | 0 | over 2 years ago | 0 | | 1 | | Python | LLM tuning with PEFT (SFT + RM + PPO + DPO with LoRA) |
| WangRongsheng/Aurora | 217 | 0 | 0 | about 2 years ago | 0 | | 0 | apache-2.0 | Python | 🐳 Aurora is a Chinese-language MoE model. It builds on Mixtral-8x7B and activates the model's Chinese open-domain chat capability |
| zetavg/LLaMA-LoRA-Tuner | 168 | 0 | 0 | almost 3 years ago | 0 | | 10 | | Python | UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models |
| avocardio/Zicklein | 28 | 0 | 0 | over 2 years ago | 0 | | 1 | apache-2.0 | Python | Fine-tuning instruct-LLaMA on German datasets |
| RangiLyu/llama.mmengine | 25 | 0 | 0 | about 3 years ago | 0 | | 0 | apache-2.0 | Python | Training the LLaMA language model with MMEngine, with LoRA fine-tuning support |
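Several of the repositories above (microsoft/LoRA, LLaMA-Factory, LLM-RLHF-Tuning, LLaMA-LoRA-Tuner) revolve around the same core technique, LoRA. As a rough orientation, here is a minimal pure-Python sketch of the idea from the paper "LoRA: Low-Rank Adaptation of Large Language Models": the frozen weight W is augmented by a trainable low-rank product B·A, scaled by alpha/r. The symbols W, A, B, r, and alpha follow the paper's notation; the tiny dimensions and random values here are illustrative assumptions, not anything from these repos.

```python
# Minimal LoRA sketch (no framework): y = W x + (alpha / r) * B (A x).
# W is frozen; only the small matrices A and B would be trained.
import random

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha, r):
    """Frozen pretrained path plus the scaled low-rank update."""
    base = matvec(W, x)              # frozen pretrained output
    delta = matvec(B, matvec(A, x))  # rank-r update path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

d_in, d_out, r, alpha = 4, 3, 2, 4   # illustrative sizes only
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]  # frozen
A = [[random.gauss(0, 0.01) for _ in range(d_in)] for _ in range(r)]   # trainable
B = [[0.0] * r for _ in range(d_out)]                                  # zeros at init

x = [1.0, 2.0, -1.0, 0.5]
# Because B is initialized to zero, the LoRA branch contributes nothing
# at the start of training, so the model behaves exactly like the
# pretrained one until A and B are updated.
assert lora_forward(W, A, B, x, alpha, r) == matvec(W, x)
```

The payoff is that only A (r × d_in) and B (d_out × r) need gradients, which is why tools like LLaMA-Factory and xTuring can fine-tune large models on modest hardware.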