| Repository | Stars | Forks | Open PRs | Last Commit | Releases | Last Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| ymcui/Chinese-LLaMA-Alpaca | 15,877 | 0 | 0 | over 2 years ago | 0 | | 8 | apache-2.0 | Python | Chinese LLaMA & Alpaca LLMs, with local CPU/GPU training and deployment |
| camenduru/stable-diffusion-webui-colab | 14,090 | 0 | 0 | over 2 years ago | 0 | | 16 | unlicense | Jupyter Notebook | Stable Diffusion web UI on Colab |
| huggingface/peft | 12,271 | 0 | 101 | about 2 years ago | 11 | December 06, 2023 | 65 | apache-2.0 | Python | 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. |
| hiyouga/LLaMA-Factory | 10,715 | 0 | 0 | about 2 years ago | 19 | December 03, 2023 | 96 | apache-2.0 | Python | Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM) |
| microsoft/LoRA | 7,814 | 0 | 16 | over 2 years ago | 3 | August 27, 2023 | 79 | mit | Python | Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" |
| cloneofsimo/lora | 5,959 | 0 | 0 | over 2 years ago | 0 | | 82 | apache-2.0 | Jupyter Notebook | Using low-rank adaptation to quickly fine-tune diffusion models. |
| yangjianxin1/Firefly | 3,505 | 0 | 0 | over 2 years ago | 0 | | 138 | | Python | Firefly: Chinese conversational large language models (full-parameter fine-tuning + QLoRA), supporting fine-tuning of Mixtral-8x7B, Zephyr, Mistral, Aquila2, Baichuan2, CodeLlama, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya, Bloom, and other models |
| mymusise/ChatGLM-Tuning | 3,454 | 0 | 0 | over 2 years ago | 0 | | 178 | mit | Python | A fine-tuning scheme based on ChatGLM-6B + LoRA |
| Akegarasu/lora-scripts | 3,328 | 0 | 0 | about 2 years ago | 0 | | 9 | agpl-3.0 | Python | LoRA & DreamBooth training scripts & GUI using kohya-ss's trainer, for diffusion models |
| 1technophile/OpenMQTTGateway | 3,311 | 0 | 0 | about 2 years ago | 0 | | 59 | gpl-3.0 | C++ | MQTT gateway for ESP8266 or ESP32 with bidirectional 433MHz/315MHz/868MHz and infrared communications, BLE, Bluetooth, beacon detection, Mi Flora, Mi Jia, LYWSD02, LYWSD03MMC, Mi Scale, TPMS, and BBQ thermometer compatibility, plus LoRa |
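Several of the repositories listed above (microsoft/LoRA, huggingface/peft, cloneofsimo/lora, Akegarasu/lora-scripts) are built around the same LoRA idea: freeze the pretrained weight matrix W and learn only a low-rank update B·A. A minimal NumPy sketch of that update follows — the shapes, rank, and scaling factor here are illustrative choices, not the loralib or peft API:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4          # illustrative sizes; r << min(d_out, d_in)
alpha = 8                            # illustrative LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in))       # trainable, random init
B = np.zeros((d_out, r))             # trainable, zero init

# Effective weight seen by the forward pass during/after fine-tuning.
W_eff = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted layer initially matches the base layer.
x = rng.normal(size=(d_in,))
assert np.allclose(W_eff @ x, W @ x)

# Parameter savings: only A and B are trained.
print(r * (d_in + d_out), "trainable params vs", d_in * d_out, "for a full update")
```

The payoff is the last line: training touches r·(d_in + d_out) parameters instead of d_in·d_out, which is why these repositories can fine-tune billion-parameter models on consumer GPUs.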