Different versions of packages #1032
Replies: 3 comments 2 replies
-
I think this is largely the way to go.
-
Okay. I seem to have managed to convince setuptools to build a Python wheel without any Python code in it 😄. So we can package up additional workers separately, and put everything Python into a base package. Setuptools is quite particular about how it wants the code to be organised, and I've been thinking about reorganising the repository, but that can be postponed and I'd like to keep the build system/packaging PR as small as possible; it's going to be a big one anyway. So maybe for the future.
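For illustration, a minimal sketch of what "a wheel without any Python code" can look like with setuptools. The package names and version are made up for this example; the key part is declaring an empty package list so nothing importable ends up in the wheel:

```toml
# Sketch only: names and versions are illustrative, not the real AMUSE packages.
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "amuse-somecode-extra"
version = "0.1.0"
dependencies = ["amuse-somecode"]

[tool.setuptools]
packages = []   # no Python modules in this wheel; workers would be shipped as data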
-
Well, it looks like I spoke too soon. Initially it seemed like setuptools was doing just what I wanted, but then I started running into all sorts of intermittent problems, and in the end I looked around the Internet a bit and discovered that Hatchling works better. It looks like we'll be using that instead, at least by default; community codes will be able to use something different if they want. My prototype now has a mock-up community code with a GPU and a non-GPU worker (I don't know offhand if ph4 has a GPU and a non-GPU version, but in the mock-up it does).

At the top level, packages are declared enabled or disabled based on the list of features detected by autoconf. The top-level build system acts more like an installer than a build system: it'll help you install dependencies and install all packages or individual packages as desired. The help text that's printed will change depending on the platform you're on and whether you have a Conda environment activated. So if you're on a Mac, it'll detect either Homebrew or MacPorts and tell you how to install things using that instead of via apt, and if you don't have a Conda environment active it'll tell you how to make one.
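A hedged mock-up of what the Hatchling-based add-on package could look like. The package names echo the ones in this thread, but the paths and worker names are illustrative; Hatchling's `force-include` table is what lets a prebuilt binary be placed into the wheel without any Python modules:

```toml
# Hypothetical mock-up: package names follow the discussion, paths are illustrative.
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "amuse-ph4-gpu"
version = "0.1.0"
dependencies = ["amuse-ph4"]   # base package with interface.py and the CPU worker

[tool.hatch.build.targets.wheel.force-include]
# Install the prebuilt GPU worker alongside the base package's files.
"build/ph4_gpu_worker" = "amuse/community/ph4/ph4_gpu_worker"
```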
-
I'm looking at building different versions of packages again, for example with or without MPI, or with and without GPU support (and therefore, say, a CUDA dependency). A community code has an `interface.py`, possibly other Python code (?), and one or more worker binaries. Depending on what hardware you have, you may be able to use one or more workers, but is `interface.py` always the same, regardless of which worker is used and regardless of global features like MPI?

I'm thinking that the best solution is to have `amuse-code` with `interface.py` and `cpu_worker`, and then have a separate `amuse-code-gpu` package with only `gpu_worker`, which would depend on `amuse-code` (and on CUDA or OpenCL or something). So in general there would be a base package with the interface and a basic works-anywhere worker if available, and then the other packages would be add-ons that provide additional workers. Does that make sense, or would this break something?

For MPI support, I guess we need a version of the framework with MPI support and workers that are built with MPI support, which would then depend on `amuse-framework-mpi` instead of `amuse-framework`, but does this change anything in `interface.py`? Is there a reason to have a non-MPI version at all in a desktop environment? I suppose we need it mainly for HPC machines that don't do MPI_Spawn?