MACE


Documentation | FAQ | Release Notes | Roadmap | MACE Model Zoo | Demo | Join Us | 中文

Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices. The design focuses on the following targets:

  • Performance
    • The runtime is optimized with NEON, OpenCL and Hexagon, and the Winograd algorithm is used to speed up convolution operations (a minimal illustration follows this list). Initialization is also optimized to be faster.
  • Power consumption
    • Chip-dependent power options such as big.LITTLE scheduling and Adreno GPU hints are included as advanced APIs.
  • Responsiveness
    • Guaranteeing UI responsiveness is sometimes obligatory when running a model. Mechanisms such as automatically breaking OpenCL kernels into small units are introduced to allow better preemption for the UI rendering task.
  • Memory usage and library footprint
    • Graph-level memory allocation optimization and buffer reuse are supported. The core library keeps external dependencies to a minimum to keep the library footprint small.
  • Model protection
    • Model protection has been the highest priority since the beginning of the design. Various techniques are used, such as converting models to C++ code and literal obfuscation.
  • Platform coverage
    • Good coverage of recent Qualcomm, MediaTek, Pinecone and other ARM-based chips. The CPU runtime supports Android, iOS and Linux.
  • Rich model formats support
    • TensorFlow, Caffe and ONNX model formats are supported.
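
To make the Winograd point above concrete, here is a small, self-contained illustration of the F(2,3) Winograd transform for a 1-D, 3-tap convolution. This is a generic textbook example rather than MACE source code, and the function name WinogradF23 is made up for this sketch; it shows the idea of trading additions for multiplications, producing two outputs with 4 multiplications instead of the 6 a direct convolution needs.

```cpp
// Generic 1-D Winograd F(2,3) illustration -- not MACE source code.
// Computes two outputs of a 3-tap convolution with 4 multiplications
// instead of the 6 required by the direct formula.
#include <array>
#include <cstdio>

// d: 4 consecutive input values, g: 3 filter taps, returns 2 outputs.
std::array<float, 2> WinogradF23(const std::array<float, 4> &d,
                                 const std::array<float, 3> &g) {
  const float m1 = (d[0] - d[2]) * g[0];
  const float m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) * 0.5f;
  const float m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) * 0.5f;
  const float m4 = (d[1] - d[3]) * g[2];
  return {m1 + m2 + m3, m2 - m3 - m4};
}

int main() {
  const std::array<float, 4> d{1.0f, 2.0f, 3.0f, 4.0f};
  const std::array<float, 3> g{0.5f, 1.0f, -1.0f};
  const auto y = WinogradF23(d, g);
  // Check against direct convolution: y[i] = d[i]*g[0] + d[i+1]*g[1] + d[i+2]*g[2].
  for (int i = 0; i < 2; ++i) {
    const float ref = d[i] * g[0] + d[i + 1] * g[1] + d[i + 2] * g[2];
    std::printf("winograd=%.3f direct=%.3f\n", y[i], ref);
  }
  return 0;
}
```

The saving grows in the 2-D tiled variants (for example, F(2x2, 3x3) uses 16 multiplications where direct convolution needs 36), which is where the convolution speedup mentioned in the Performance bullet comes from.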

Getting Started
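
The Documentation linked above covers installation, model conversion and deployment in detail. As a rough orientation, the sketch below shows the general shape of running a converted model from C++. It is adapted from the usage described in the documentation, but exact class names and function signatures differ between MACE releases, so treat the details here (device type, tensor names, shapes and the CreateMaceEngineFromProto call) as assumptions to verify against the docs rather than a drop-in example.

```cpp
// Hypothetical minimal driver sketch; API details are assumptions that may
// not match your MACE version -- consult the official Documentation.
#include <map>
#include <memory>
#include <string>
#include <vector>

#include "mace/public/mace.h"

// Runs one inference given a converted model graph and weights in memory.
// Tensor names and shapes below are placeholders for your own model.
bool RunOnce(const unsigned char *graph_proto, size_t graph_proto_size,
             const unsigned char *weights, size_t weights_size) {
  const std::vector<std::string> input_names{"input"};     // placeholder
  const std::vector<std::string> output_names{"output"};   // placeholder
  const std::vector<int64_t> input_shape{1, 224, 224, 3};  // placeholder
  const std::vector<int64_t> output_shape{1, 1000};        // placeholder

  // 1. Configure the target runtime (CPU here; GPU/Hexagon need extra setup).
  mace::MaceEngineConfig config(mace::DeviceType::CPU);

  // 2. Create the engine from the serialized graph and weights.
  std::shared_ptr<mace::MaceEngine> engine;
  mace::MaceStatus status = mace::CreateMaceEngineFromProto(
      graph_proto, graph_proto_size, weights, weights_size,
      input_names, output_names, config, &engine);
  if (status != mace::MaceStatus::MACE_SUCCESS) return false;

  // 3. Wrap input/output buffers as MaceTensor and run the model.
  auto in_buf = std::shared_ptr<float>(new float[1 * 224 * 224 * 3],
                                       std::default_delete<float[]>());
  auto out_buf = std::shared_ptr<float>(new float[1 * 1000],
                                        std::default_delete<float[]>());
  std::map<std::string, mace::MaceTensor> inputs, outputs;
  inputs[input_names[0]] = mace::MaceTensor(input_shape, in_buf);
  outputs[output_names[0]] = mace::MaceTensor(output_shape, out_buf);

  return engine->Run(inputs, &outputs) == mace::MaceStatus::MACE_SUCCESS;
}
```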

Performance

MACE Model Zoo contains several common neural network models, which are built daily against a list of mobile phones. The benchmark results can be found in the CI result page (choose the latest passed pipeline, click the release step, and you will see the benchmark results). For comparisons with other frameworks, take a look at the MobileAIBench project.

Communication

  • GitHub issues: bug reports, usage issues, feature requests
  • Slack: mace-users.slack.com
  • QQ group: 756046893

Contributing

Any kind of contribution is welcome. For bug reports and feature requests, please open an issue without hesitation. For code contributions, it is strongly suggested that you open an issue for discussion first. For more details, please refer to the contribution guide.

License

Apache License 2.0.

Acknowledgement

MACE depends on several open source projects located in the third_party directory. In particular, we learned a lot from these and other related projects during development.

Finally, we also thank the Qualcomm, Pinecone and MediaTek engineering teams for their help.