| Repository | Stars | ? | ? | Last Updated | Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| ymcui/Chinese-LLaMA-Alpaca | 15,877 | 0 | 0 | over 2 years ago | 0 | | 8 | apache-2.0 | Python | Chinese LLaMA & Alpaca LLMs with local CPU/GPU training and deployment |
| hiyouga/LLaMA-Factory | 10,715 | 0 | 0 | about 2 years ago | 19 | December 03, 2023 | 96 | apache-2.0 | Python | Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM) |
| SYSTRAN/faster-whisper | 6,940 | 0 | 22 | about 2 years ago | 12 | November 26, 2023 | 140 | mit | Python | Faster Whisper transcription with CTranslate2 |
| mozilla/mozjpeg | 5,225 | 0 | 1 | over 2 years ago | 2 | December 01, 2023 | 92 | other | C | Improved JPEG encoder |
| kornelski/pngquant | 4,922 | 0 | 0 | about 2 years ago | 5 | November 13, 2022 | 49 | other | C | Lossy PNG compressor: pngquant command based on the libimagequant library |
| UFund-Me/Qbot | 4,799 | 0 | 0 | over 2 years ago | 0 | | 51 | mit | Jupyter Notebook | [🔥 updating...] AI-powered automated quantitative trading bot and Quantitative Investment Research Platform. 📃 Online docs: https://ufund-me.github.io/Qbot ✨ qbot-mini: https://github.com/Charmve/iQuant |
| IntelLabs/distiller | 4,252 | 0 | 0 | almost 3 years ago | 0 | | 65 | apache-2.0 | Jupyter Notebook | Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller |
| AutoGPTQ/AutoGPTQ | 3,206 | 0 | 0 | about 2 years ago | 0 | | 174 | mit | Python | An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm |
| PINTO0309/PINTO_model_zoo | 3,121 | 0 | 0 | about 2 years ago | 0 | | 11 | mit | Python | A repository of models inter-converted between frameworks: TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlow Lite (Float32/16/INT8), EdgeTPU, CoreML |
| IntelLabs/nlp-architect | 2,924 | 0 | 0 | over 3 years ago | 10 | April 12, 2020 | 14 | apache-2.0 | Python | A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks |