Building parts of Spack environments on different nodes: option 1 (include other concretised environments) #58
You don't need a rule excluding cp2k - instead it ought to be in the gpu environment. Don't use my myriad.yaml as-is: restructure things so that you make separate concretised environments for base, gpu-on-gpu and gpu, and then include those in the myriad environment. (There's a question about which of the environments should include which.)

For example, you concretise and build the base environment, then the gpu-on-gpu environment separately (and on a GPU node), then the gpu environment that includes the gpu-on-gpu environment, and finally the myriad environment that includes everything and has anything extra that is only for Myriad. That's if we keep the split of the environments the same as I initially did - if another split makes more sense, do that instead. CPU Gromacs should go in base if we're keeping the split the same, since CPU Gromacs gets installed everywhere.
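A minimal sketch of the mechanism for the gpu environment - the paths and the cp2k spec are placeholders, and `include_concrete` is the Spack feature for pulling already-concretised environments into another one:

```yaml
# gpu/spack.yaml -- illustrative sketch, not the final file
spack:
  specs:
    # gpu packages that do not have to be built on a GPU node
    - cp2k +cuda
  include_concrete:
    # these environments must already be concretised
    - /path/to/environments/base
    - /path/to/environments/gpu-on-gpu
```

The included specs come in with their existing hashes, so concretising gpu doesn't re-solve base or gpu-on-gpu.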
I got the most recent gromacs from spack develop: #44 (comment) - so we can have the newer version available.
Testing: I created a test environment, deleted a spec and added another. After concretising the base, I still don't see the change - I need to reconcretise the env. I still need to test removing a spec entirely.

Testing creating an independent environment via an environment file - the new env shows up in the directory:

```
(spacksite: fc-myriad-s0.22-test) [ccspapp@build01 fctestenv]$ ls
```
Our existing specs are split into four parts (myriad, base, gpu, gpu-on-gpu). The process for doing this needs working out, and the yaml files for these four parts need to be created and added to the repo. Q: which environments should include which of the others to completely create this structure?

Once this is working, we can add the final packages to whichever of those files is appropriate.
Creating an env for Gromacs/Plumed/GPU (versions are listed below) that includes base (needs editing), and adding Gromacs.
I have altered these yaml files to be the include_concrete versions:
Right now I have myriad.yaml including all of the other envs, but gpu including base and gpu-on-gpu including gpu. Need to test how that works. I have also added the casteps and cpu gromacses to base.yaml and the gpu gromacses to gpu.yaml. (Don't know if that needs to be gpu-on-gpu.yaml).
Starting with base (in the existing spacksite): castep and gromacs aren't in the build cache yet, so there are some things to build, starting with gsl.
GROMACS 2024.2 on its own and Plumed 2.9.0 were fine.
Need to check on the patches for GROMACS 2023 (looks minor):
Plumed 2.9.0: "Patch for GROMACS 2023 (preliminary, in particular for replica-exchange, expanded ensemble, hrex features)." => we want the 2.9.2 plumed for GROMACS 2023.x instead. The actual failure was only the version patching in cmake, as in plumed/plumed2#960 (comment), but other functionality may not be there.
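The pairing could then be expressed in the specs roughly like this - a sketch, assuming the builtin gromacs `+plumed` variant is how the patched build gets requested:

```yaml
spack:
  specs:
    # standalone builds were fine
    - gromacs@2024.2
    - plumed@2.9.0
    # the plumed-patched GROMACS 2023.x should pull in the 2.9.2 plumed
    - gromacs@2023.5 +plumed ^plumed@2.9.2
```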
Have added the 2.9.2 plumed; now want the GROMACS/Plumed combinations above.

Ok, we now have these in my environment - no patching issues.
LAMMPS: added; need to check which optional packages we want.

NAMD: the default is the TCL interface rather than Python, and that is how we currently build. CUDA builds are for the GPU side (a sketch below).
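As a sketch, with variant names as in the builtin namd package (`interface`, plus `+cuda` for the GPU side - the exact versions here are assumptions):

```yaml
spack:
  specs:
    # CPU build: the TCL interface is the default and matches our current builds
    - namd@2.14 interface=tcl
    # GPU build, which would live on the gpu-on-gpu side
    - namd@3.0 interface=tcl +cuda
```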
CPU lammps built, CPU NAMD 2.14 built (with charmpp 6.10.2), but charmpp 7.0.0 for NAMD 3.0 failed.
This is what we got for the charmpp 6.10.2 built for namd 2.14. I think if we want the backend to be the same for 3.0, we should set it explicitly on the spec - you can also choose a different backend. I will change the spec to pin it; it also needs a further option set.
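For example, pinning the backend might look like this - a sketch: `backend` is the variant the builtin charmpp package exposes, and `ucx` matches the `ucx-linux-x86_64` target visible in the failing build command further down:

```yaml
spack:
  specs:
    # pin the charm++ communication backend instead of taking the default
    - namd@3.0 ^charmpp backend=ucx
```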
Still an issue with charmpp 8.0.0. namd 2.14 is still fine; needed to run an extra step first.
Some of the expected files are missing from the install (and from the build area). Trying the build outside Spack to see what gets created - charmpp has a ./build script, and the build command from the log was the long ./build invocation shown in the error output below. Going to add -v for verbose make and not do -j6.
Testing the build outside Spack but using Spack's modules - you need to add in the paths to the dependencies. (Apparently I left in one of the earlier settings.) That build succeeded...
The part that failed before is the link step in src/libs/ck-libs/ampi. Once it all finishes, there is a complete install tree.
Fails the same way with builtin.charmpp:
```
==> Error: ProcessError: Command exited with status 2:
    './build' 'LIBS' 'ucx-linux-x86_64' 'gcc' 'gfortran' '-j6' '--destination=/lustre/shared/ucl/apps/spack/0.22/hk-stack/spack/opt/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack/linux-rhel7-cascadelake/gcc-12.3.0/charmpp-8.0.0-pk5hjxqntzc6uftgncpjafejw3ewwhyr' 'ompipmix' '--basedir=/lustre/shared/ucl/apps/spack/0.22/hk-stack/spack/opt/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack/linux-rhel7-cascadelake/gcc-12.3.0/openmpi-4.1.6-vfg7lekutaiginlgvs57titvpvnjgys5' '--basedir=/lustre/shared/ucl/apps/spack/0.22/hk-stack/spack/opt/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack/linux-rhel7-cascadelake/gcc-12.3.0/ucx-1.16.0-xrxjlowa7a3vqa3qt4n74ojkd4dz5ybh' 'smp' '--build-shared' '--with-production'

7 errors found in build log:
     1463  [ 95%] Building CXX object src/libs/ck-libs/ampi/CMakeFiles/moduleampif.dir/ampif.C.o
     1464  [ 95%] Building CXX object src/libs/ck-libs/ampi/CMakeFiles/moduleampif.dir/ddt.C.o
     1465  [ 95%] Building CXX object src/libs/ck-libs/ampi/CMakeFiles/moduleampif.dir/mpich-alltoall.C.o
     1466  [ 95%] Building CXX object src/libs/ck-libs/ampi/CMakeFiles/moduleampif.dir/ampi_mpix.C.o
     1467  [ 95%] Building CXX object src/libs/ck-libs/ampi/CMakeFiles/moduleampif.dir/ampi_noimpl.C.o
     1468  [ 95%] Linking CXX static library ../../../../lib/libmoduleampi.a
  >> 1469  Fatal Error by charmc in directory /lustre/shared/ucl/apps/spack/0.22/hk-stack/spack/opt/spack/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack/linux-rhel7-cascadelake/gcc-12.3.0/charmpp-8.0.0-pk5hjxqntzc6uftgncpjafejw3ewwhyr/src/libs/ck-libs/ampi
     1470  Trying to link, but no object files or library archives were specified
     1471  charmc exiting...
```

Confirmed which version was used.
Need to go back and look at the charmc script.
The actual error we are getting comes from line 2046 of the charmc script. Above that, there's a check on whether any object files or library archives were passed in. That's what is supposed to be happening here.
I found what is happening when this fails, though I don't know why. I made an environment just for the charmpp install, to hopefully make it easier to clean up, and I kept the install prefix. In these lines of charmc, the variable holding the files to link differs: in the failing case it ends up empty, where in the situations where the build works it is populated.
Ignoring namd 3.0.0 for a bit, adding cp2k.
Hmm, that gave me a new set of dependencies, which in turn use the openmp-enabled openblas and fftw, and it flips castep 24 to use them too, whereas castep 23.1 is unchanged. Need to check on whether that's sensible, because openblas built with openmp may not be what castep wants.
It doesn't say in the CASTEP docs, and I wasn't finding anything recent in the mailing list archives, but the makefiles do not mention any openmp related to the BLAS libraries. For the gfortran builds, openmp comes in as an option for the FFTW libraries; for MKL it links the standard MKL libraries. If there's a use case for it, we can add a different build in future.
Ok, so now just CP2K depends on the fftw and openblas with openmp. I will go with that.
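That is, something like the following - a sketch using the builtin variant names (`+openmp` on fftw, `threads=openmp` on openblas):

```yaml
spack:
  specs:
    # only CP2K requests the OpenMP-enabled BLAS/FFTW
    - cp2k +openmp ^fftw+openmp ^openblas threads=openmp
```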
Likely going to need an omp module-naming rule for only those packages, by name (since we don't need a suffix if we only have one install and it has openmp on).
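A sketch of what that rule could look like in the modules config - the `-omp` suffix and the exact projection patterns are assumptions, not our final naming:

```yaml
modules:
  default:
    tcl:
      projections:
        # more specific entries go above the catch-all
        openblas threads=openmp: '{name}/{version}-omp'
        fftw+openmp: '{name}/{version}-omp'
        all: '{name}/{version}'
```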
Testing the module regeneration: can't get the modules to regenerate without clashes. I tried a few variations.
The openblas ones are fine (and I can just regenerate the modules for specific packages too). The more specific projection needs to be above the general one - so generally we want the more specific ones together at the top.
Support for QUIP is being deprecated in CP2K (cp2k/cp2k#3600 and cp2k/cp2k#3424), so leave it out. Our older builds have it, so someone asked for it in the past, which is why I included it.
Added ORCA binaries to source mirrors. Note for 6.0.1 there is a separate avx2-6.0.1 binary. So orca-5.0.4 is in base, whereas where a version of 6.0.1 goes is still to be decided.
Hmm, orca 5.0.4 depends on openmpi 4.1.2 (4.1.1, but Spack alters that to 4.1.2 to avoid pmix issues). This means I have to tell everything else to either prefer or require openmpi 4.1.6 - which needs to go either in a packages.yaml or the packages section of spack.yaml, to avoid having to specify it per package. https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements I think preferences like this might do, so it prefers them in this order:
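Something along these lines - a sketch of a version preference list, where the first entry wins unless something else constrains it:

```yaml
packages:
  openmpi:
    # prefer 4.1.6 generally; 4.1.2 is there for orca
    version: [4.1.6, 4.1.2]
```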
For right now I'm testing in my base env's spack.yaml - at the end this should possibly come out into packages.yaml, and the modules out into modules.yaml. Maybe. It would prevent duplication across several environments, but require you to look at more files to see the whole specification.

No, then it makes castep 23.1 use openmpi 4.1.2 instead... And gromacs 2023.5.
I think what I actually want here is an ompi412 environment that holds the things that need openmpi 4.1.2 (orca), so everything else can stay on 4.1.6.
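A sketch of such an environment - the file name and layout here are hypothetical:

```yaml
# ompi412/spack.yaml -- hypothetical environment for the openmpi 4.1.2 stragglers
spack:
  specs:
    - orca@5.0.4
  packages:
    openmpi:
      # hard requirement, unlike the preference above
      require: ["@4.1.2"]
```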
^ This is all ok. With the exception of namd 3.0, which I will leave out until later, all the CPU software is sorted; now looking at the GPU parts. (This should mean that Kathleen's initial stack is fully defined.)
GPU builds
Yeah, it doesn't like the environment variable, like Owain found - I was sure this worked at some point previously. If $SPACK_ROOT is expanded to its actual value it is ok, but then we need to update the 0.23 in the path each time the major version of Spack changes.
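For illustration, the two forms - assuming the offending path is in the include list, and with a hypothetical path:

```yaml
spack:
  include_concrete:
    # fails: the variable is not expanded here
    # - $SPACK_ROOT/../environments/base
    # works, but bakes the Spack version into the path:
    - /shared/ucl/apps/spack/0.23/environments/base
```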
Using the expanded value, my gpu_first.yaml concretises with these new versions that we didn't explicitly ask for:
Checking the concretisation: fftw depends on the new openmpi, and hwloc is now a newer version. Anyway, now building, so I will find out. Module naming was fine using the current rules.
That's... inconvenient. Oh, but namd 3.0.1 is released too, though I don't think it'll help with the charm++ issue: https://www.ks.uiuc.edu/Research/namd/3.0.1/announce.html
Ignoring all the namds, uncommented py-alphafold in gpu_first to try a build.
No, py-tensorflow runs into tensorflow/tensorflow#62497 for lack of a /usr/bin/python3 on the build node. ETA: it was added to the node.
Tensorflow ok; now an Aria2 issue. My best guess is it was using OpenSSL before; the recipe doesn't currently have any variants.
The Aria2 package now has variants and lists gnutls and openssl as conditional dependencies: https://github.com/UCL-ARC/hpc-spack/blob/0.23/repos/dev/packages/aria2/package.py

py-alphafold has now built.
Got some module clashes to sort out in the GPU environment. Thought I'd sorted out the fftw ones, but apparently not.
I created a new spacksite in Myriad using Spack 0.22 on the build01 node.

gpg key trust done so I can use the buildcache:

```
(spacksite: fc-myriad-s0.22-test) [ccspapp@build01 hpc-spack]$ spack gpg trust /shared/ucl/apps/spack/0.22/buildcache/build_cache/_pgp/8AD9CBD92CD2A4AEB15F3458969BB097C2225210.pub
gpg: key C2225210: public key "ARCHPCSolutions (GPG created for Spack) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: inserting ownertrust of 6
```

but then I had an empty list from the buildcache:

```
(spacksite: fc-myriad-s0.22-test) [ccspapp@build01 hpc-spack]$ spack buildcache list --allarch
==> 0 cached builds.
```

so I tried updating the index. The -d flag is no longer recognised:

```
(spacksite: fc-myriad-s0.22-test) [ccspapp@build01 hpc-spack]$ spack buildcache update-index -d /shared/ucl/apps/spack/0.22/buildcache
==> Error: unrecognized arguments: -d
```

but without -d it works!
Experimenting with the myriad.yaml generated by Heather - see https://github.com/UCL-ARC/hpc-spack/issues/56.

I activated my env myproject, added Gromacs to the specs in it, and then concretised: Gromacs is added to the root specs.
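That is, after the add and concretise the env's spack.yaml carries it as a root spec - an illustrative fragment:

```yaml
# spack.yaml of the myproject env after adding Gromacs -- illustrative
spack:
  specs:
    - gromacs
    # ...plus the rest of the specs from the generated myriad.yaml
```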
I need to define a rule to exclude installing/concretising cp2k in my env ???