| Repository | Stars | Dependent Packages | Dependent Repos | Last Commit | Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| szilard/benchm-ml | 1,839 | 0 | 0 | over 3 years ago | 0 | | 11 | mit | R | A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib etc.) of the top machine learning algorithms for binary classification (random forests, gradient boosted trees, deep neural networks etc.). |
| szilard/GBM-perf | 180 | 0 | 0 | almost 5 years ago | 0 | | 30 | mit | HTML | Performance of various open source GBM implementations. |
| catboost/benchmarks | 157 | 0 | 0 | over 2 years ago | 0 | | 5 | apache-2.0 | Jupyter Notebook | Comparison tools. |
| IntelPython/scikit-learn_bench | 99 | 0 | 0 | over 2 years ago | 0 | | 18 | apache-2.0 | Python | scikit-learn_bench benchmarks various implementations of machine learning algorithms across data analytics frameworks. It currently supports the scikit-learn, DAAL4PY, cuML, and XGBoost frameworks for commonly used machine learning algorithms. |
| ja-thomas/autoxgboost | 90 | 0 | 0 | about 6 years ago | 0 | | 25 | other | R | |
| guitargeek/XGBoost-FastForest | 77 | 0 | 0 | over 2 years ago | 0 | | 2 | mit | C++ | Minimal library code to deploy XGBoost models in C++. |
| Azure/fast_retraining | 50 | 0 | 0 | over 6 years ago | 0 | | 0 | mit | Jupyter Notebook | Show how to perform fast retraining with LightGBM in different business cases. |
| h2oai/xgboost-predictor | 32 | 3 | 3 | almost 3 years ago | 20 | May 25, 2023 | 6 | apache-2.0 | Java | |
| nikolaydubina/go-ml-benchmarks | 24 | 0 | 0 | over 3 years ago | 0 | | 2 | | Go | ⏱ Benchmarks of machine learning inference for Go. |
| RAMitchell/GBM-Benchmarks | 18 | 0 | 0 | over 5 years ago | 0 | | 3 | mit | Python | |