| Repository | Stars | Last updated | Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| ashishpatel26/LLM-Finetuning | 582 | over 2 years ago | 1 | — | Jupyter Notebook | LLM fine-tuning with peft |
| georgian-io/LLM-Finetuning-Hub | 556 | about 2 years ago | 6 | apache-2.0 | Python | Repository that contains LLM fine-tuning and deployment scripts along with our research findings. |
| yangjianxin1/Firefly-LLaMA2-Chinese | 199 | over 2 years ago | 7 | — | Python | Firefly Chinese LLaMA-2 model; supports incremental pretraining of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models |
| iamarunbrahma/finetuned-qlora-falcon7b-medical | 197 | about 2 years ago | 2 | mit | Jupyter Notebook | Fine-tuning of the Falcon-7B LLM using QLoRA on a mental-health conversational dataset |
| leehanchung/lora-instruct | 91 | over 2 years ago | 11 | apache-2.0 | Python | Fine-tune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA |
| gmongaras/Llama-2_Huggingface_4Bit_QLoRA | 7 | over 2 years ago | 0 | — | Python | A working example of a 4-bit QLoRA Falcon model using Hugging Face |
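Most of the repositories above follow the same Hugging Face peft/QLoRA recipe: quantize the base model to 4-bit and attach low-rank adapters to the attention projections. A minimal configuration sketch is below; the hyperparameters and the `query_key_value` target module name (Falcon's fused attention projection) are illustrative assumptions, not values taken from any listed repository.

```python
# Illustrative QLoRA configuration with transformers + peft.
# All hyperparameter values here are assumptions for demonstration.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig, TaskType

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters on the attention projection; Falcon models name
# this module "query_key_value" (other architectures use q_proj/v_proj)
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
)
```

In use, `bnb_config` would be passed as `quantization_config` to `AutoModelForCausalLM.from_pretrained(...)`, and the loaded model wrapped with `get_peft_model(model, lora_config)` before training.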