Adlik is an end-to-end optimizing framework for deep learning models. The goal of Adlik is to accelerate the deep learning inference process in both cloud and embedded environments.
Adlik consists of two sub-projects: Model compiler and Serving platform.
The model compiler supports several optimization technologies, such as pruning, quantization and structural compression, which can be easily applied to models developed with TensorFlow, Keras, PyTorch, etc.
The serving platform provides deep learning models with an optimized runtime based on the deployment environment. Put simply, users of Adlik can take a deep learning model, optimize it with the model compiler, and then deploy it to a target platform with the Adlik serving platform.
With the Adlik framework, different deep learning models can be deployed to different platforms with high performance in a flexible and easy way.
- Support optimization for models from different deep learning frameworks, e.g. TensorFlow/Caffe/PyTorch.
- Support compiling models into different formats (OpenVINO IR/ONNX/TensorRT) for different runtimes, e.g. CPU/GPU/FPGA.
- Simplified interfaces for the workflow.
- Model uploading & upgrading, model inference & monitoring.
- Unified inference interfaces for different models.
- Management and scheduling for a solution with multiple models in various runtimes.
- Automatic selection of inference runtime.
- Ability to add customized runtime.
This guide is for building Adlik on Ubuntu systems.
First, clone Adlik and change the working directory to the source directory:
git clone https://github.com/ZTE/Adlik.git
cd Adlik
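The build steps below use Bazel. If Bazel is not already installed, a minimal sketch for Ubuntu using Bazel's apt repository follows (the required Bazel version depends on the Adlik revision, so check the project documentation):

# Add Bazel's apt repository and install Bazel.
sudo apt-get install -y curl gnupg
curl -fsSL https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
sudo apt-get update && sudo apt-get install -y bazel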
- Install the following packages:
  python3-setuptools
  python3-wheel
- Build clients:
  bazel build //adlik_serving/clients/python:build_pip_package -c opt --incompatible_no_support_tools_in_action_inputs=false
- Build the pip package:
  mkdir /tmp/pip-packages && bazel-bin/adlik_serving/clients/python/build_pip_package /tmp/pip-packages
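After the last step, the client wheel is written to /tmp/pip-packages. A minimal sketch of installing it (the glob is an assumption, since the exact wheel filename depends on the client version):

# Install whichever client wheel build_pip_package produced; the filename varies.
pip3 install /tmp/pip-packages/*.whl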
First, install the following packages:
automake
libtool
make
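On Ubuntu, these can typically be installed with apt, for example:

sudo apt-get update
sudo apt-get install -y automake libtool make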
- Install the intel-openvino-ie-rt-core package from OpenVINO.
- Assuming the installation path of OpenVINO is /opt/intel/openvino_VERSION, run the following command:

  export INTEL_CVSDK_DIR=/opt/intel/openvino_VERSION
  export InferenceEngine_DIR=$INTEL_CVSDK_DIR/deployment_tools/inference_engine/share
  bazel build //adlik_serving \
    --config=openvino \
    -c opt \
    --incompatible_no_support_tools_in_action_inputs=false \
    --incompatible_disable_nocopts=false
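If the build succeeds, the serving binary is placed under Bazel's output tree. A quick sanity check (the path below is inferred from the //adlik_serving target name, so treat it as an assumption):

# The binary built from the //adlik_serving target should appear here.
ls -l bazel-bin/adlik_serving/adlik_serving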
To build serving with the TensorFlow CPU runtime, run the following command:
bazel build //adlik_serving \
--config=tensorflow-cpu \
-c opt \
--incompatible_no_support_tools_in_action_inputs=false \
--incompatible_disable_nocopts=false
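Once built, the server can be started against a directory of compiled models. A rough sketch, assuming the flag names below (--model_base_path, --grpc_port, --http_port) match the serving binary of your Adlik revision:

# Start the serving binary against a model repository (flag names are
# assumptions; verify them against your build before relying on this).
bazel-bin/adlik_serving/adlik_serving \
  --model_base_path=/tmp/model_repos \
  --grpc_port=8500 \
  --http_port=8501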
The following steps build serving with the TensorFlow GPU runtime, assuming CUDA version 10.0.
- Install the following packages from the NVIDIA repositories:
cuda-cublas-dev-10-0
cuda-cufft-dev-10-0
cuda-cupti-10-0
cuda-curand-dev-10-0
cuda-cusolver-dev-10-0
cuda-cusparse-dev-10-0
libcudnn7=*+cuda10.0
libcudnn7-dev=*+cuda10.0
- Run the following command:

  env TF_CUDA_VERSION=10.0 \
    bazel build //adlik_serving \
    --config=tensorflow-gpu \
    -c opt \
    --incompatible_no_support_tools_in_action_inputs=false \
    --incompatible_disable_nocopts=false \
    --incompatible_use_specific_tool_files=false
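Before running a GPU build, it is worth confirming that the CUDA 10.0 and cuDNN packages listed above are actually installed; a small sketch:

# List the installed CUDA 10.0 and cuDNN packages to confirm the prerequisites.
dpkg -l | grep -E 'cuda-.*-10-0|libcudnn7'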
The following steps build serving with the TensorRT runtime, also assuming CUDA version 10.0.
- Install the following packages from the NVIDIA repositories:
cuda-cublas-10-0
cuda-cufft-10-0
cuda-cupti-10-0
cuda-curand-10-0
cuda-cusolver-10-0
cuda-cusparse-10-0
cuda-nvml-dev-10-0
libcudnn7=*+cuda10.0
libcudnn7-dev=*+cuda10.0
libnvinfer6=*+cuda10.0
libnvinfer-dev=*+cuda10.0
libnvonnxparsers6=*+cuda10.0
libnvonnxparsers-dev=*+cuda10.0
- Run the following command:

  env TF_CUDA_VERSION=10.0 \
    bazel build //adlik_serving \
    --config=tensorrt \
    -c opt \
    --action_env=LIBRARY_PATH=/usr/local/cuda-10.0/lib64/stubs \
    --incompatible_no_support_tools_in_action_inputs=false \
    --incompatible_disable_nocopts=false
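The LIBRARY_PATH entry points the linker at CUDA's stub libcuda, which only satisfies link-time resolution; at runtime the real NVIDIA driver library must be found instead. A hedged check that the built binary resolves its CUDA and TensorRT dependencies (the binary path is inferred from the Bazel target name):

# Every CUDA/cuDNN/TensorRT library should resolve to a real file, not "not found".
ldd bazel-bin/adlik_serving/adlik_serving | grep -E 'libcuda|libnvinfer|libcudnn'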
The ci/docker/build.sh file can be used to build a Docker image that contains all the requirements for building Adlik. You can then build Adlik with that Docker image.
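A minimal sketch of that flow, assuming the script tags an image (the image name adlik/build used here is a placeholder, not necessarily the script's actual tag):

# Build the requirements image, then run a Bazel build inside it with the
# source tree mounted (image tag and mount path are assumptions).
ci/docker/build.sh
docker run --rm -v "$PWD":/adlik -w /adlik adlik/build \
  bazel build //adlik_serving --config=tensorflow-cpu -c opt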