board_setup/README.md (1 addition, 1 deletion)
@@ -9,6 +9,6 @@
The Vitis-AI repository provides pre-built board images that can be leveraged by users who wish to test-drive the Vitis AI workflow, run examples and evaluate models from the Model Zoo. This directory provides the necessary scripts and files that will enable usage of these targets.
-As of the 3.5 release of Vitis AI, the target setup documentation has migrated to Github.IO and is integrated into the Quickstart tutorials. **[PLEASE ACCESS THE LATEST QUICKSTART DOCUMENTATION ONLINE](https://xilinx.github.io/Vitis-AI/docs)** or **[OPEN THE OFFLINE DOCUMENTATION IN YOUR BROWSER](../docs/docs/index.html)**.
+As of the 3.5 release of Vitis AI, the target setup documentation has migrated to Github.IO and is integrated into the Quickstart tutorials. **[PLEASE ACCESS THE LATEST QUICKSTART DOCUMENTATION ONLINE](https://xilinx.github.io/Vitis-AI)** or **[OPEN THE OFFLINE DOCUMENTATION IN YOUR BROWSER](../docs/docs/index.html)**.
* Note that when you start Docker as shown above, your ``/workspace`` folder will correspond to ``/Vitis-AI`` and your initial path in Docker will be ``/workspace``. If you inspect ``docker_run.sh``, you can see that the ``-v`` option is used to link the Docker file system to your host file system. Verify that you see the created ``/resnet18`` subfolder in your workspace:
.. code-block:: Bash
[Docker] $ ls
-4. Download the pre-trained resnet18 model from PyTorch to the docker environment and store it in the ``model`` folder . This is the floating point (FP32) model that will be quantized to INT8 precision for deployment on the target.
+4. Next, download the pre-trained resnet18 model from PyTorch to the Docker environment and store it in the ``model`` folder. This is the floating-point (FP32) model that will be quantized to INT8 precision for deployment on the target. Also, since you have re-entered the Docker container, you need to re-run the setup script.
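The download step in the added line can be sketched in shell. This is a hedged sketch, not the tutorial's exact commands: the checkpoint URL is torchvision's published resnet18 FP32 weight file, and the output filename ``resnet18.pth`` is illustrative.

```shell
# Illustrative download of the FP32 resnet18 checkpoint into ./model.
# The URL is torchvision's published resnet18 weight file; verify it
# against the tutorial before relying on it.
MODEL_DIR="${MODEL_DIR:-model}"
mkdir -p "$MODEL_DIR"
wget -q -O "$MODEL_DIR/resnet18.pth" \
  https://download.pytorch.org/models/resnet18-f37072fd.pth \
  || echo "download failed -- check network access from the container"
ls "$MODEL_DIR"
```

On a machine without network access the ``wget`` call prints the fallback message instead of aborting, so the directory layout can still be inspected.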
@@ -381,7 +383,7 @@ The Vitis AI Compiler compiles the graph operators as a set of micro-coded instr
Model Deployment
================
-1. Locate the ``resnet18_pt`` folder ``/usr/share/vitis_ai_library/models/`` folder along with the other Viitis AI model examples.
+1. Copy the ``resnet18_pt`` folder into the ``/usr/share/vitis_ai_library/models/`` directory. This places your compiled model in the default Vitis AI Library example model directory, alongside the other Vitis AI example models. Our purpose in doing this is to simplify the commands that follow, in which we will execute the Vitis AI Library samples with our model.
2. The `vitis_ai_library_r3.5.0_images.tar.gz <https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r3.5.0_images.tar.gz>`__ and `vitis_ai_library_r3.5.0_video.tar.gz <https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r3.5.0_video.tar.gz>`__ packages
contain test images and videos that can be leveraged to evaluate our quantized model and other pre-built Vitis-AI Library examples.
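The copy in deployment step 1 amounts to a single command. Below is a minimal sketch: the destination defaults to a temporary directory so the commands can be tried anywhere, and ``resnet18_pt_demo`` stands in for your compiled model folder. On the target board you would instead set ``MODELS_DIR=/usr/share/vitis_ai_library/models`` and copy the real ``resnet18_pt`` folder.

```shell
# Sketch of deployment step 1. MODELS_DIR defaults to a temp dir for safe
# experimentation; on the board it is /usr/share/vitis_ai_library/models.
MODELS_DIR="${MODELS_DIR:-$(mktemp -d)}"
mkdir -p resnet18_pt_demo                    # stand-in for the compiled resnet18_pt folder
cp -r resnet18_pt_demo "$MODELS_DIR/resnet18_pt"
ls "$MODELS_DIR"
```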
@@ -413,15 +415,15 @@ contain test images and videos that can be leveraged to evaluate our quantized m
.. note:: When you start Docker as shown earlier, your ``/workspace`` folder will correspond to ``/Vitis-AI`` and your initial path in Docker will be ``/workspace``. If you inspect ``docker_run.sh`` you can see that the -v option is leveraged which links the Docker file system to your Host file system. Verify that you see the created ``/resnet18`` subfolder in your workspace:
-* The model will be located under the ``/usr/share/vitis_ai_library/models/`` folder along with the other Viitis-AI model examples.
+* The model will be located under the ``/usr/share/vitis_ai_library/models/`` folder along with the other Vitis-AI model examples.
2. The `vitis_ai_library_r3.5.0_images.tar.gz <https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r3.5.0_images.tar.gz>`__ and `vitis_ai_library_r3.5.0_video.tar.gz <https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_library_r3.5.0_video.tar.gz>`__ packages
@@ -548,7 +548,7 @@ If you wish to do so, you can copy the `result.jpg` file back to your host and r
docs/_sources/docs/ref_design_docs/README_DPUCV2DX8G.rst.txt (17 additions, 2 deletions)
@@ -1,3 +1,5 @@
+:orphan:
+
VEK280 DPUCV2DX8G Reference Design
==================================
@@ -165,7 +167,15 @@ is set as shown below.
**Step 1:** Build VEK280 platform
-First, build the VEK280 platform in the folder `$TRD_HOME/vek280_platform`, following the instructions in `$TRD_HOME/vek280_platform/README.md`.
+First, build the VEK280 platform in the folder `$TRD_HOME/vek280_platform`; for more details, refer to the instructions in `$TRD_HOME/vek280_platform/README.md`.
-Vitis AI DPUs are available for both Zynq Ultrascale+ MPSoC as well as Versal Edge and Core chip-down designs. The Kria K26 SOM is supported as a production-ready Edge platform, and Alveo accelerator cards are supported for cloud applications.
+Vitis AI DPUs are available for both Zynq Ultrascale+ MPSoC as well as Versal Edge and Core chip-down designs. The Kria |trade| K26 SOM is supported as a production-ready Edge platform, and Alveo |trade| accelerator cards are supported for cloud applications.
What does the Vitis AI Library provide?
---------------------------------------
@@ -172,4 +172,10 @@ In the past, developers have ported the DPUCZ reference design (formerly known a
What is the specific AI accelerator that AMD Xilinx provides for Zynq™ Ultrascale+? Is it a systolic array?
The DPUCZ IP that is provided with the Vitis AI IDE is the specialized accelerator. It is a custom processor that has a specialized instruction set. Graph operators such as CONV, POOL, ELTWISE are compiled as instructions that are executed by the DPU. The DPUCZ bears similarities to a systolic array but has specialized micro-coded engines that are optimized for specific tasks. Some of these engines are optimized for conventional convolution, while some are optimized for tasks such as depth-wise convolution, eltwise and others. We tend to refer to the DPUCZ as a Matrix of (Heterogeneous) Processing Engines.
docs/_sources/docs/workflow-model-deployment.rst.txt (1 addition, 1 deletion)
@@ -30,7 +30,7 @@ The Vitis AI workflow is largely unified for Embedded and Data Center applicatio
- Zynq |trade| Ultrascale+ |trade|, Kria |trade|, and Versal |trade| SoC applications leverage the on-chip processor subsystem (APU) as the host control node for model deployment. Considering optimization and :ref:`whole-application-acceleration` of subgraphs deployed on the SoC APU is crucial.
-- Alveo data center card deployments leverage the AMD64 architecture host for execution of subgraphs that cannot be deployed on the DPU.
+- Alveo |trade| data center card deployments leverage the AMD64 architecture host for execution of subgraphs that cannot be deployed on the DPU.
- Zynq Ultrascale+ and Kria designs can leverage the DPU with either the Vivado workflow or the Vitis workflow.