| Repository | Stars | Open Issues | Open PRs | Last Commit | Releases | Latest Release | Forks | License | Language | Description |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tencent/ncnn | 18,693 | 0 | 1 | about 2 years ago | 26 | October 27, 2023 | 1,010 | other | C++ | ncnn is a high-performance neural network inference framework optimized for the mobile platform |
| isl-org/Open3D | 10,043 | 0 | 0 | about 2 years ago | 0 | | 1,018 | other | C++ | Open3D: A Modern Library for 3D Data Processing |
| MegEngine/MegEngine | 4,672 | 0 | 0 | about 2 years ago | 0 | | 163 | apache-2.0 | C++ | MegEngine is a fast, scalable, easy-to-use deep learning framework with support for automatic differentiation |
| dwelch67/raspberrypi | 2,165 | 0 | 0 | almost 7 years ago | 0 | | 24 | | Assembly | Raspberry Pi ARM based bare metal examples |
| raspberrypi/userland | 2,016 | 0 | 0 | over 2 years ago | 0 | | 103 | bsd-3-clause | C | Source code for ARM side libraries for interfacing to Raspberry Pi GPU. |
| BabitMF/bmf | 595 | 0 | 0 | about 2 years ago | 4 | August 22, 2023 | 19 | apache-2.0 | C++ | Cross-platform, customizable multimedia/video processing framework. With strong GPU acceleration, heterogeneous design, multi-language support, ease of use, multi-framework compatibility, and high performance, the framework is ideal for transcoding, AI inference, algorithm integration, live video streaming, and more. |
| PaddlePaddle/Anakin | 492 | 0 | 0 | about 6 years ago | 0 | | 68 | apache-2.0 | C++ | High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. |
| in66-dev/In-Prestissimo | 267 | 0 | 0 | over 8 years ago | 0 | | 2 | other | C++ | A very fast neural network computing framework optimized for mobile platforms. QQ group: 676883532 (verification message: 绝影) |
| merrymercy/tvm-mali | 145 | 0 | 0 | over 7 years ago | 0 | | 2 | mit | C | Optimizing Mobile Deep Learning on ARM GPU with TVM |
| HolmesShuan/CNN-Inference-Engine-Quick-View | 142 | 0 | 0 | almost 4 years ago | 0 | | 0 | | | A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. |