| Repository | Stars | | | Last updated | Releases | Last release | Open issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| Tencent/ncnn | 18,693 | 0 | 1 | about 2 years ago | 26 | October 27, 2023 | 1,010 | other | C++ | ncnn is a high-performance neural network inference framework optimized for the mobile platform |
| isl-org/Open3D | 10,043 | 0 | 0 | about 2 years ago | 0 | | 1,018 | other | C++ | Open3D: A Modern Library for 3D Data Processing |
| OAID/Tengine | 4,452 | 0 | 0 | over 2 years ago | 0 | | 244 | apache-2.0 | C++ | Tengine is a lightweight, high-performance, modular inference engine for embedded devices |
| uTensor/uTensor | 1,607 | 0 | 0 | over 2 years ago | 0 | | 55 | apache-2.0 | C++ | TinyML AI inference library |
| ARM-software/armnn | 1,081 | 0 | 0 | about 2 years ago | 7 | November 23, 2023 | 15 | mit | C++ | Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn |
| huawei-noah/bolt | 823 | 0 | 0 | almost 3 years ago | 0 | | 38 | mit | C++ | Bolt is a deep learning library with high performance and heterogeneous flexibility. |
| lhelontra/tensorflow-on-arm | 791 | 0 | 0 | over 5 years ago | 0 | | 19 | mit | Shell | TensorFlow for Arm |
| PINTO0309/Tensorflow-bin | 480 | 0 | 0 | over 2 years ago | 0 | | 0 | apache-2.0 | Shell | Prebuilt binary with TensorFlow Lite enabled. For Raspberry Pi / Jetson Nano. Supports custom operations in MediaPipe, XNNPACK (single- and multi-threaded), and FlexDelegate. |
| YinAoXiong/12306_code_server | 388 | 0 | 0 | almost 6 years ago | 0 | | 0 | mit | Python | This repository builds a self-hosted CAPTCHA recognition server for 12306 |
| ARM-software/ML-examples | 381 | 0 | 0 | over 2 years ago | 0 | | 31 | apache-2.0 | C++ | Arm Machine Learning tutorials and examples |