Commit 8d95691

committed
2024.2 release documentation
1 parent c1c3425 commit 8d95691

File tree

159 files changed: +97332 -0 lines changed


2024.2/html/.buildinfo

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: cd386fcf65357dcb967e69a7e5a5144b
tags: 645f666f9bcd5a90fca523b33c5a78b7

2024.2/html/.nojekyll

Whitespace-only changes.
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
Asynchronous XRT (XRT Native APIs)
===================================

This is a simple example that showcases the asynchronous programming mechanism through user-defined queues.

**KEY CONCEPTS:** `XRT Native API <https://docs.xilinx.com/r/en-US/ug1393-vitis-application-acceleration/Setting-Up-XRT-Managed-Kernels-and-Kernel-Arguments>`__, `Asynchronous Programming <https://xilinx.github.io/XRT/2023.1/html/xrt_native_apis.html?highlight=queue#asynchornous-programming-with-xrt-experimental>`__

**KEYWORDS:** `xrt::queue <https://xilinx.github.io/XRT/2023.1/html/xrt_native_apis.html?highlight=queue#executing-multiple-tasks-through-queue>`__, `enqueue <https://xilinx.github.io/XRT/2023.1/html/xrt_native_apis.html?highlight=queue#executing-multiple-tasks-through-queue>`__, `wait() <https://xilinx.github.io/XRT/2023.1/html/xrt_native_apis.html?highlight=queue#executing-multiple-tasks-through-queue>`__

In this example we showcase the asynchronous programming mechanism through user-defined queues. The ``xrt::queue`` is a lightweight, general-purpose queue implementation that is completely separate from the core XRT native API data structures.

The XRT queue implementation requires ``#include <experimental/xrt_queue.h>`` as the header file. The implementation also uses C++17 features, so the host code must be compiled with ``g++ -std=c++17``.
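
For reference, a typical XRT native API compile line looks like the following. This is only a sketch: the example's own Makefile sets the exact flags, and it assumes the ``XILINX_XRT`` environment variable points at the XRT installation:

::

   g++ -std=c++17 -I$XILINX_XRT/include -L$XILINX_XRT/lib -o asynchronous_xrt src/host.cpp -lxrt_coreutil -pthread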

Executing multiple tasks through queue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code:: c++
   :number-lines: 84

   xrt::queue main_queue;
   xrt::queue queue_bo1;
   auto bo0_event = main_queue.enqueue([&bo0] { bo0.sync(XCL_BO_SYNC_BO_TO_DEVICE); });
   auto bo1_event = queue_bo1.enqueue([&bo1] { bo1.sync(XCL_BO_SYNC_BO_TO_DEVICE); });
   main_queue.enqueue(bo1_event);
   main_queue.enqueue([&run] { run.start(); run.wait(); });
   auto bo_out_event = main_queue.enqueue([&bo_out] { bo_out.sync(XCL_BO_SYNC_BO_FROM_DEVICE); });
   bo_out_event.wait();
In lines 86 and 87, the ``bo0`` and ``bo1`` host-to-device data transfers are enqueued through two separate queues to achieve parallel transfers. To synchronize these two queues, the event returned from ``queue_bo1`` is enqueued in the ``main_queue``, just like a task (line 88). As a result, any task submitted after that event will not execute until the event has finished. So, in the code example above, the subsequent tasks in the ``main_queue`` (such as the kernel execution) wait until ``bo1_event`` is completed. By submitting an event returned from ``queue::enqueue`` to another queue, we can synchronize among queues.
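
For context, the snippet above relies on ``bo0``, ``bo1``, ``bo_out``, and ``run`` being created earlier in the host code. A minimal sketch of that setup is shown below; it is not the shipped host code, and the xclbin path, buffer size, and kernel argument order are assumptions for illustration only:

.. code:: c++

   #include <experimental/xrt_queue.h>
   #include <xrt/xrt_bo.h>
   #include <xrt/xrt_device.h>
   #include <xrt/xrt_kernel.h>

   // Inside main(), before the enqueue calls shown above.
   auto device = xrt::device(0);                       // open the first enumerated device
   auto uuid = device.load_xclbin("vadd.xclbin");      // example xclbin path (assumption)
   auto krnl = xrt::kernel(device, uuid, "vadd");

   constexpr size_t size_bytes = 4096;                 // example buffer size (assumption)
   auto bo0 = xrt::bo(device, size_bytes, krnl.group_id(0));
   auto bo1 = xrt::bo(device, size_bytes, krnl.group_id(1));
   auto bo_out = xrt::bo(device, size_bytes, krnl.group_id(2));

   // An explicit run object so that start()/wait() can be enqueued later.
   auto run = xrt::run(krnl);
   run.set_arg(0, bo0);
   run.set_arg(1, bo1);
   run.set_arg(2, bo_out);
   run.set_arg(3, static_cast<int>(size_bytes / sizeof(int)));  // assumed element-count argument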

**EXCLUDED PLATFORMS:**

- All NoDMA Platforms, i.e u50 nodma etc

DESIGN FILES
------------

Application code is located in the src directory. Accelerator binary files will be compiled to the xclbin directory. The xclbin directory is required by the Makefile and its contents will be filled during compilation. A listing of all the files in this example is shown below:

::

   src/host.cpp
   src/vadd.cpp

Access these files in the GitHub repo by `clicking here <https://github.com/Xilinx/Vitis_Accel_Examples/tree/master/host_xrt/asynchronous_xrt>`__.

COMMAND LINE ARGUMENTS
----------------------

Once the environment has been configured, the application can be executed by:

::

   ./asynchronous_xrt -x <vadd XCLBIN>
Lines changed: 225 additions & 0 deletions
@@ -0,0 +1,225 @@
AXI Burst Performance
=====================

This is an AXI Burst Performance check design. It measures the time it takes to write a buffer into DDR or read a buffer from DDR. The example contains 2 sets of 6 kernels each: each set has a different data width, and each kernel has different burst_length and num_outstanding parameters, to compare the impact of these parameters on effective throughput.

A counter is coded inside each of the kernels to accurately count the number of cycles between the start and end of the buffer transfer.

In this version, the kernels are configured as follows:

::

   Data Width - 256
   test_kernel_maxi_256bit_1: burst length= 4, outstanding transactions=4
   test_kernel_maxi_256bit_2: burst length=16, outstanding transactions=4
   test_kernel_maxi_256bit_3: burst length=32, outstanding transactions=4
   test_kernel_maxi_256bit_4: burst length= 4, outstanding transactions=32
   test_kernel_maxi_256bit_5: burst length=16, outstanding transactions=32
   test_kernel_maxi_256bit_6: burst length=32, outstanding transactions=32

   Data Width - 512
   test_kernel_maxi_512bit_1: burst length= 4, outstanding transactions=4
   test_kernel_maxi_512bit_2: burst length=16, outstanding transactions=4
   test_kernel_maxi_512bit_3: burst length=32, outstanding transactions=4
   test_kernel_maxi_512bit_4: burst length= 4, outstanding transactions=32
   test_kernel_maxi_512bit_5: burst length=16, outstanding transactions=32
   test_kernel_maxi_512bit_6: burst length=32, outstanding transactions=32
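
The burst length and number of outstanding transactions of each kernel's AXI master are fixed at build time. As an illustration only (the shipped sources are in src/test_kernel_maxi_*.cpp, listed under DESIGN FILES below), parameters like these are typically expressed through Vitis HLS interface pragmas, for example:

.. code:: c++

   // Illustrative sketch, not the shipped kernel code: a 512-bit AXI master
   // requesting write bursts of 16 beats and up to 32 outstanding transactions.
   #include <ap_int.h>

   extern "C" void burst_write_sketch(ap_uint<512>* mem, unsigned int num_beats) {
   #pragma HLS INTERFACE m_axi port=mem bundle=gmem max_write_burst_length=16 num_write_outstanding=32
   #pragma HLS INTERFACE s_axilite port=mem
   #pragma HLS INTERFACE s_axilite port=num_beats
   #pragma HLS INTERFACE s_axilite port=return

       ap_uint<512> val = 0;
   write_loop:
       for (unsigned int i = 0; i < num_beats; i++) {
   #pragma HLS PIPELINE II=1
           mem[i] = val; // sequential accesses let the tool infer AXI bursts
           val++;
       }
   }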

Below are the resource numbers while running the design on U200 platform:

Data Width - 256

========================= ==== ==== ====
Kernel                    LUT  REG  BRAM
========================= ==== ==== ====
test_kernel_maxi_256bit_1 4.2K 7.2K 11
test_kernel_maxi_256bit_2 4.3K 7.2K 11
test_kernel_maxi_256bit_3 4.4K 7.3K 11
test_kernel_maxi_256bit_4 4.3K 7.2K 11
test_kernel_maxi_256bit_5 4.3K 7.3K 11
test_kernel_maxi_256bit_6 4.5K 7.1K 15
========================= ==== ==== ====

Data Width - 512

========================= ==== ==== ====
Kernel                    LUT  REG  BRAM
========================= ==== ==== ====
test_kernel_maxi_512bit_1 4.8K 9.0K 14
test_kernel_maxi_512bit_2 4.9K 9.1K 14
test_kernel_maxi_512bit_3 5.2K 9.1K 14
test_kernel_maxi_512bit_4 4.9K 9.1K 14
test_kernel_maxi_512bit_5 4.9K 9.1K 14
test_kernel_maxi_512bit_6 5.2K 9.0K 23
========================= ==== ==== ====

Following is the real log reported while running the design on U200 platform for 16 KB transfers:

::

   Test parameters
    - xclbin file : ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_256bit.xclbin
    - frequency : 300 MHz
    - buffer size : 16.00 KB

   Found Platform
   Platform Name: Xilinx
   INFO: Reading ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_256bit.xclbin
   Loading: './build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_256bit.xclbin'
   Trying to program device[1]: xilinx_u200_xdma_201830_2
   Device[1]: program successful!

   Kernel->AXI Burst WRITE performance
   Data Width = 256 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 2.55877 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 6.31398 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 6.84251 GB/sec
   Data Width = 256 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 4.26223 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 6.45647 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 6.84251 GB/sec

   Kernel->AXI Burst READ performance
   Data Width = 256 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 2.01658 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 6.54884 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 7.79836 GB/sec
   Data Width = 256 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 7.7851 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 7.79836 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 7.79836 GB/sec

   Test parameters
    - xclbin file : ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_512bit.xclbin
    - frequency : 300 MHz
    - buffer size : 16.00 KB

   Found Platform
   Platform Name: Xilinx
   INFO: Reading ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_512bit.xclbin
   Loading: './build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_512bit.xclbin'
   Trying to program device[1]: xilinx_u200_xdma_201830_2
   Device[1]: program successful!

   Kernel->AXI Burst WRITE performance
   Data Width = 512 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 5.17832 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 8.23316 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 11.5306 GB/sec
   Data Width = 512 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 8.10201 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 11.5016 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 11.2473 GB/sec

   Kernel->AXI Burst READ performance
   Data Width = 512 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 4.04385 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 11.6776 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 KB | throughput = 13.6646 GB/sec
   Data Width = 512 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 13.6646 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 13.6646 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 KB | throughput = 13.6646 GB/sec

   TEST PASSED

Following is the real log reported while running the design on U200 platform for 16 MB transfers:

::

   Test parameters
    - xclbin file : ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_256bit.xclbin
    - frequency : 300 MHz
    - buffer size : 16.00 MB

   Found Platform
   Platform Name: Xilinx
   INFO: Reading ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_256bit.xclbin
   Loading: './build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_256bit.xclbin'
   Trying to program device[1]: xilinx_u200_xdma_201830_2
   Device[1]: program successful!

   Kernel->AXI Burst WRITE performance
   Data Width = 256 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 2.66919 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 6.62449 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 7.59737 GB/sec
   Data Width = 256 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 4.47013 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 7.1518 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 7.94597 GB/sec

   Kernel->AXI Burst READ performance
   Data Width = 256 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 2.02206 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 6.80909 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 8.59958 GB/sec
   Data Width = 256 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 8.68773 GB/sec
   Data Width = 256 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 8.93942 GB/sec
   Data Width = 256 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 8.93942 GB/sec

   Test parameters
    - xclbin file : ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_512bit.xclbin
    - frequency : 300 MHz
    - buffer size : 16.00 MB

   Found Platform
   Platform Name: Xilinx
   INFO: Reading ./build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_512bit.xclbin
   Loading: './build_dir.hw.xilinx_u200_xdma_201830_2/test_kernel_maxi_512bit.xclbin'
   Trying to program device[1]: xilinx_u200_xdma_201830_2
   Device[1]: program successful!

   Kernel->AXI Burst WRITE performance
   Data Width = 512 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 5.1399 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 11.7942 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 14.6941 GB/sec
   Data Width = 512 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 8.93979 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 14.3008 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 15.1586 GB/sec

   Kernel->AXI Burst READ performance
   Data Width = 512 burst_length = 4 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 3.92988 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 13.1114 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 4 buffer_size = 16.00 MB | throughput = 16.8218 GB/sec
   Data Width = 512 burst_length = 4 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 16.8222 GB/sec
   Data Width = 512 burst_length = 16 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 16.8295 GB/sec
   Data Width = 512 burst_length = 32 num_outstanding = 32 buffer_size = 16.00 MB | throughput = 16.8219 GB/sec

   TEST PASSED
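
As a rough sanity check (an editorial estimate, not part of the reported logs), the peak data rate of each AXI interface at the 300 MHz kernel clock, assuming one data beat per cycle, is:

::

   256-bit interface: 32 bytes/beat x 300 MHz = 9.6 GB/sec
   512-bit interface: 64 bytes/beat x 300 MHz = 19.2 GB/sec

The best 16 MB read results above (about 8.9 GB/sec for 256-bit and 16.8 GB/sec for 512-bit) approach these limits, while short bursts with few outstanding transactions leave much of the available bandwidth unused.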

**EXCLUDED PLATFORMS:**

- All Embedded Zynq Platforms, i.e zc702, zcu102 etc
- All Versal Platforms, i.e vck190 etc
- AWS VU9P F1
- Samsung SmartSSD Computation Storage Drive
- Samsung U.2 SmartSSD
- All NoDMA Platforms, i.e u50 nodma etc
- Versal V70

DESIGN FILES
------------

Application code is located in the src directory. Accelerator binary files will be compiled to the xclbin directory. The xclbin directory is required by the Makefile and its contents will be filled during compilation. A listing of all the files in this example is shown below:

::

   src/host.cpp
   src/test_kernel_common.hpp
   src/test_kernel_maxi_256bit_1.cpp
   src/test_kernel_maxi_256bit_2.cpp
   src/test_kernel_maxi_256bit_3.cpp
   src/test_kernel_maxi_256bit_4.cpp
   src/test_kernel_maxi_256bit_5.cpp
   src/test_kernel_maxi_256bit_6.cpp
   src/test_kernel_maxi_512bit_1.cpp
   src/test_kernel_maxi_512bit_2.cpp
   src/test_kernel_maxi_512bit_3.cpp
   src/test_kernel_maxi_512bit_4.cpp
   src/test_kernel_maxi_512bit_5.cpp
   src/test_kernel_maxi_512bit_6.cpp

Access these files in the GitHub repo by `clicking here <https://github.com/Xilinx/Vitis_Accel_Examples/tree/master/performance/axi_burst_performance>`__.

COMMAND LINE ARGUMENTS
----------------------

Once the environment has been configured, the application can be executed by:

::

   ./axi_burst_performance -x1 <test_kernel_maxi_256bit XCLBIN> -x2 <test_kernel_maxi_512bit XCLBIN>

2024.2/html/_sources/common.rst.txt

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
Common Files
============

The common files section contains:

- A collection of common files used across all examples to assist in the quick development of application host code.
- A collection of utility functions used as part of the Makefiles in all of the examples. This set includes Makefile rules and scripts to launch Vitis-compiled applications onto boards hosted by Nimbix directly from the developer's terminal shell.
Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
Compilation and Execution
=========================

It is recommended to start with the Hello World example, which makes new users aware of the basic structure of a Vitis-based application.

Compiling for Application Emulation
-----------------------------------

As part of the capabilities available to an application developer, Vitis includes environments to test the correctness of an application at both a software functional level and a hardware-emulated level.

These modes, named sw_emu and hw_emu, allow the developer to profile and evaluate the performance of a design before compiling for board execution.
It is recommended that all applications are executed in at least sw_emu mode before being compiled and executed on an FPGA board.

For DC platforms:

::

   cd <PATH TO SAMPLE APPLICATION>
   make all TARGET=<sw_emu|hw_emu> PLATFORM=<FPGA Platform>

For SoC platforms:

::

   cd <PATH TO SAMPLE APPLICATION>
   make all TARGET=<sw_emu|hw_emu> PLATFORM=<FPGA platform> HOST_ARCH=<aarch32/aarch64> EDGE_COMMON_SW=<rootfs and kernel image path>

where,

*sw_emu = software emulation*,
*hw_emu = hardware emulation*

By default, HOST_ARCH=x86. HOST_ARCH and EDGE_COMMON_SW are required for SoC shells. Please download and use the pre-built image from `here <https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-platforms.html>`__.

**NOTE:** The software emulation flow is a functional correctness check only. It does not estimate the performance of the application in hardware.

The hardware emulation flow is a cycle-accurate simulation of the hardware generated for the application, so this simulation is expected to take a long time.

Executing Emulated Application
------------------------------

*(Recommended Execution Flow for Example Applications in Emulation)*

The makefile for the application can directly execute the application with the following command:

For DC platforms:

::

   cd <PATH TO SAMPLE APPLICATION>
   make run TARGET=<sw_emu|hw_emu> PLATFORM=<FPGA Platform>

For SoC platforms:

::

   cd <PATH TO SAMPLE APPLICATION>
   make run TARGET=<sw_emu|hw_emu> PLATFORM=<FPGA platform> HOST_ARCH=<aarch32/aarch64> EDGE_COMMON_SW=<rootfs and kernel image path>

where,

*sw_emu = software emulation*,
*hw_emu = hardware emulation*

By default, HOST_ARCH=x86. HOST_ARCH and EDGE_COMMON_SW are required for SoC shells. Please download and use the pre-built image from `here <https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-platforms.html>`__.

If the application has not been previously compiled, the check makefile rule will compile and execute the application in the emulation mode selected by the user.
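
As an illustration, a software-emulation run of a data-center example might look like the following; the example directory and platform name below are placeholders only, so substitute a platform installed on your system:

::

   cd hello_world
   make run TARGET=sw_emu PLATFORM=xilinx_u200_gen3x16_xdma_2_202110_1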

Compiling for FPGA Accelerator Card
-----------------------------------

The command to compile for the FPGA acceleration board is:

For DC platforms:

::

   cd <PATH TO SAMPLE APPLICATION>
   make all PLATFORM=<FPGA Platform> TARGET=<hw>

For SoC platforms:

::

   cd <PATH TO SAMPLE APPLICATION>
   make all PLATFORM=<FPGA Platform> TARGET=<hw> HOST_ARCH=<aarch32/aarch64> EDGE_COMMON_SW=<rootfs and kernel image path>

By default, HOST_ARCH=x86. HOST_ARCH and EDGE_COMMON_SW are required for SoC shells. Please download and use the pre-built image from `here <https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-platforms.html>`__.

**NOTE:** Compilation for hardware generates custom logic to implement the functionality of the kernels in an application. It is typical for hardware compile times to range from 30 minutes to a couple of hours.
