Convert a subset of GPU dialect ops to the OpenCL GPU runtime calls #333
Conversation
Could you please add lit tests? That would help with reviewing.
#ifndef GC_GPU_OCL_CONST_ONLY

// TBD

#else
#undef GC_GPU_OCL_CONST_ONLY
dead code
This is a placeholder for #343. It also undefs this one:

#define GC_GPU_OCL_CONST_ONLY
@@ -680,7 +680,7 @@ defaultTilingOfType(RewriterBase &rewriter, Operation *op,
   } else {
     defaultTileSize.resize(iteratorTypes.size(), rewriter.getIndexAttr(0));
     // Try tileSize from `32` to `16`.
-    SmallVector<int64_t> tsOrder = {32, 16};
+    SmallVector<int64_t> tsOrder = {16, 32};
What does the change do?
Oh, I've totally forgotten about this dirty hack. Without it the entire GPU pipeline does not work. It seems further investigation of this issue is required.
It seems this is caused by #332.
Worked around by increasing the shapes.
// Else block
// The kernel has already been created by another thread, destroying this
// one.
rewriter.setInsertionPointToStart(elseBlock);
helper.destroyKernels(rewriter, loc, result);
result = rewriter.create<LLVM::ExtractValueOp>(loc, helper.ptrType,
                                               casResult, 0);
rewriter.create<LLVM::ReturnOp>(loc, result);
Not sure I follow, why do we destroy the kernel here?
If the current value of the global field is not null, then some other thread has already created and assigned the value. In this case, we destroy the kernel we just created and return the one created by the other thread.
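To make the intent concrete, here is a minimal C++ sketch of that logic (the names `Kernel`, `createKernelFromSpirv`, `destroyKernel` and `cachedKernel` are illustrative stand-ins, not the symbols the pass actually generates):

```cpp
#include <atomic>

// Stand-in for the runtime's opaque kernel handle (illustrative only).
struct Kernel {};

// Stand-ins for the runtime's kernel create/destroy entry points.
static Kernel *createKernelFromSpirv() { return new Kernel(); }
static void destroyKernel(Kernel *k) { delete k; }

// One global slot per gpu.launch site.
static std::atomic<Kernel *> cachedKernel{nullptr};

// Equivalent of the generated getXXXKernel()/createXXXKernel() pair: return
// the cached kernel, creating it on first use. If another thread wins the
// race, destroy the kernel we just created and return the winner's value.
static Kernel *getKernel() {
  if (Kernel *kernel = cachedKernel.load())
    return kernel;
  Kernel *created = createKernelFromSpirv();
  Kernel *expected = nullptr;
  if (cachedKernel.compare_exchange_strong(expected, created))
    return created;       // this thread won the race; the global now holds `created`
  destroyKernel(created); // the "else block": another thread already set it
  return expected;        // compare_exchange updated `expected` to that value
}
```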
@@ -93,6 +93,20 @@ def LinalgToXeGPU : Pass<"linalg-to-xegpu", "func::FuncOp"> {
       "DPAS register block sizes MxNxK">,
   ];
}

def AddContextArg : Pass<"add-ctx-arg", "func::FuncOp"> {
Should it be applied to kernels only?
No, it should be applied to the functions that launch the kernels. The argument is passed to the runtime.
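A rough C++ sketch of the resulting calling convention (every name here is a hypothetical placeholder; the actual runtime functions are introduced separately in #343):

```cpp
#include <cstdlib>

// Hypothetical execution-context structure holding the queue, wait list, etc.
struct OclContext {};

// Dummy stand-ins for the runtime entry points; each one takes the context
// pointer as its first argument.
void *gpuOclAlloc(OclContext *, std::size_t size) { return std::malloc(size); }
void gpuOclDealloc(OclContext *, void *ptr) { std::free(ptr); }

// A function that launches kernels gains a trailing context argument (the one
// appended by the add-ctx-arg pass) and forwards it to every runtime call.
void entry(const float *in, float *out, std::size_t n, OclContext *ctx) {
  void *tmp = gpuOclAlloc(ctx, n * sizeof(float)); // lowered from gpu.alloc
  // ... gpu.launch would likewise be lowered to a runtime call taking ctx ...
  gpuOclDealloc(ctx, tmp);                         // lowered from gpu.dealloc
  (void)in;
  (void)out;
}
```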
lib/gc/Transforms/GPU/Pipeline.cpp
pm.addPass(imex::createConvertGPUXToLLVMPass());
pm.addPass(createGpuToGpuOcl());
Maybe we should postpone changes to the pipeline until #343 is merged? Otherwise we're breaking a working pipeline, since there's no implementation for the new runtime functions yet.
Reverted changes to the pipeline.
LGTM
good to merge once this is addressed
else:
    # FIXME: Enable when #343 is merged.
    config.excludes = ['gpu-to-gpuocl.mlir']
The test should pass even without #343 since it's just the lowering, right?
You are right, it should pass.
good to be merged from my side
Added a new pass that converts the following GPU dialect ops to the corresponding calls of the OpenCL GPU runtime functions (to be implemented later):

- gpu.alloc, gpu.dealloc, gpu.memcpy and gpu.launch

The first argument of each runtime function is a pointer to the context structure. This is not a cl_context; it is an execution context, i.e. a single execution of the module's main function. It contains the queue, the wait list (in case of out-of-order mode) and some other data required for the execution of the module's ops. It is expected that the pointer to the context is passed to the module's main function as the last argument, of type memref with zero dims.

For each gpu.launch operation, 2 additional functions are created:

- getXXXKernel(): returns the kernel pointer stored in a global variable. If it is NULL, calls createXXXKernel().
- createXXXKernel(): calls the runtime function that creates a kernel. The SPIR-V, the kernel name and the sizes are passed to the function. The returned pointer is saved in the global variable using llvm.cmpxchg, to make sure it does not overwrite a kernel created by another thread.

Finally, a destructor function is created that calls the corresponding runtime kernel-destroy function and passes it the pointers stored in the global variables. This function must be called by the module owner when destroying the module.

The kernel is not a cl_kernel but a runtime-internal structure that contains a compiled cl_program, a preconfigured cl_kernel and other data required for the kernel execution. The runtime's launch function clones the preconfigured kernel, sets the arguments and enqueues a command to execute the kernel.
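To illustrate just the destructor contract, here is a hand-written C++ approximation (the pass itself emits LLVM-dialect MLIR; `GcGpuOclKernel`, `gcGpuOclKernelDestroy`, `testKernelPtr` and `gcGpuOclModuleDestructor` are hypothetical names used only for this sketch):

```cpp
#include <atomic>

// Hypothetical runtime kernel handle and destroy entry point.
struct GcGpuOclKernel {};
static void gcGpuOclKernelDestroy(GcGpuOclKernel *k) { delete k; }

// One global slot per gpu.launch in the module, filled lazily by the
// generated getXXXKernel()/createXXXKernel() functions.
static std::atomic<GcGpuOclKernel *> testKernelPtr{nullptr};

// The generated destructor: the module owner must call it when destroying the
// module; it destroys every kernel stored in the global slots.
extern "C" void gcGpuOclModuleDestructor() {
  if (GcGpuOclKernel *k = testKernelPtr.exchange(nullptr))
    gcGpuOclKernelDestroy(k);
}
```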
Example of code produced by this pass: