| Repository | Stars |  |  | Last commit | Releases | Latest release | Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| microsoft/nni | 13,536 | 8 | 27 | over 2 years ago | 55 | September 14, 2023 | 342 | MIT | Python | An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning. |
| Tencent/PocketFlow | 2,553 | 0 | 0 | over 5 years ago | 0 |  | 73 | other | Python | An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. |
| guan-yuan/awesome-AutoML-and-Lightweight-Models | 647 | 0 | 0 | over 5 years ago | 0 |  | 0 |  |  | A list of high-quality (newest) AutoML works and lightweight models, including 1) Neural Architecture Search, 2) Lightweight Structures, 3) Model Compression, Quantization and Acceleration, 4) Hyperparameter Optimization, 5) Automated Feature Engineering. |
| microsoft/archai | 428 | 0 | 0 | over 2 years ago | 9 | September 15, 2023 | 2 | MIT | Python | Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research. |
| mit-han-lab/amc | 406 | 0 | 0 | over 2 years ago | 0 |  | 18 | MIT | Python | [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices |
| tianyic/only_train_once | 242 | 0 | 0 | about 2 years ago | 15 | August 29, 2023 | 7 | MIT | Python | OTOv1–v3 (NeurIPS, ICLR, TMLR): DNN training and compression via structured pruning and erasing operators, for CNNs and LLMs |
| mit-han-lab/amc-models | 164 | 0 | 0 | about 5 years ago | 0 |  | 0 | MIT | Python | [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices |
| jim-schwoebel/allie | 126 | 0 | 0 | over 2 years ago | 0 |  | 70 | Apache-2.0 | Python | 🤖 An automated machine learning framework for audio, text, image, video, or .CSV files (50+ featurizers and 15+ model trainers). Requires Python 3.6. |
| cheneydon/efficient-bert | 31 | 0 | 0 | almost 3 years ago | 0 |  | 0 |  | Python | Code for the EMNLP 2021 Findings paper "EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation". |
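Several of the repositories above (e.g. microsoft/nni) automate hyper-parameter tuning. As a point of reference for what such toolkits automate, here is a minimal random-search loop in plain Python; the function names and the synthetic objective are illustrative only and are not any listed toolkit's API:

```python
import random

def train_and_score(lr, batch_size):
    """Stand-in objective: a real trial would train a model and return
    its validation accuracy. This synthetic score peaks at lr=0.01 and
    batch_size=64 (assumed values, purely for illustration)."""
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 64) / 640

def random_search(n_trials=50, seed=0):
    """Minimal random-search HPO loop. AutoML toolkits add smarter
    tuners, early stopping, and experiment tracking on top of this."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),           # log-uniform learning rate
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        score = train_and_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

if __name__ == "__main__":
    params, score = random_search()
    print(params, round(score, 3))
```

With a fixed seed the search is reproducible, which is the property the experiment-management layers of these toolkits are built around.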
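Model compression by pruning, the focus of mit-han-lab/amc, Tencent/PocketFlow, and tianyic/only_train_once, builds on primitives like magnitude pruning. A minimal unstructured sketch in plain Python (the function name and numbers are illustrative; the real frameworks learn per-layer ratios, prune whole structures, and fine-tune afterwards):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Ties at the threshold may prune slightly more than the requested
    fraction; production pruners handle this per layer and per tensor.
    """
    if not 0 <= sparsity <= 1:
        raise ValueError("sparsity must be in [0, 1]")
    k = int(len(weights) * sparsity)      # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, `magnitude_prune([0.5, -0.02, 0.3, 0.01, -0.7], 0.4)` returns `[0.5, 0.0, 0.3, 0.0, -0.7]`: the two smallest-magnitude weights are zeroed while the rest pass through unchanged.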