Nims-Hyrax is an implementation of the Hyrax stack by Cottage Labs and AntLeaf. It is built with Docker containers, which simplify development and deployment onto live services.
Clone the repository with git clone https://github.com/nims-dpfc/nims-hyrax.git.
Ensure you have docker and docker-compose installed; see the notes on installing docker below. Open a console and try running docker -h and docker-compose -h to verify they are both accessible.
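For example, you can also check the installed versions (both are standard CLI flags):
$ docker --version
$ docker-compose --version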
Create the environment file .env. You can start by copying the template file .env.template.development to .env and customizing the values to suit your setup. (For a production environment, use .env.template as your template, not .env.template.development.)
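For example, from the repository root (a minimal sketch; which values need changing depends on your setup):
$ cp .env.template.development .env
# then edit .env with your preferred editor and fill in the values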
If you would like to do a test run of the system, start the docker containers:
$ cd nims-hyrax
$ docker-compose up -d
You should see the containers being built and the services start.
We use the Git Flow branching model, so ensure you set up your project directory by running git flow init and accepting the defaults (see: Installation for git-flow). The defaults are:
Branch name for production releases: [master]
Branch name for "next release" development: [develop]
Feature branches? [feature/]
Bugfix branches? [bugfix/]
Release branches? [release/]
Hotfix branches? [hotfix/]
Support branches? [support/]
Version tag prefix? []
Hooks and filters directory? [<your-path-to-checked-out-repo>/nims-hyrax/.git/hooks]
The default branch in this repository is develop, and master should be used for stable releases only. After finishing bugfixes or releases with git-flow, remember to also push tags with git push --tags.
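As an illustrative sketch of that flow (the version number 1.2.0 is purely hypothetical):
$ git flow release start 1.2.0
# ... make any final release changes and commit them ...
$ git flow release finish 1.2.0
$ git push origin develop master
$ git push --tags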
New code is created in feature/ or hotfix/ branches, and from there we make a pull request against the develop branch. A member of the team other than the new code's author reviews the pull request and performs the merge. Codeship tests run when the develop branch is updated.
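For example, a hypothetical feature branch could be started and published like this before the pull request is opened (the branch name is illustrative):
$ git flow feature start my-new-feature
# ... commit work on the feature/my-new-feature branch ...
$ git flow feature publish my-new-feature
# then open a pull request against develop on GitHub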
There are 4 docker-compose files provided in the repository, which build the containers running the services described below:
- docker-compose.yml is the main docker-compose file. It builds all the core services required to run the application.
- fcrepo is the container running the Fedora 4 Commons repository, an RDF document store. By default, this runs the Fedora service on port 8080 internally in docker (http://fcrepo:8080/fcrepo/rest).
- solr is the container running Solr, an enterprise search server. By default, this runs the Solr service on port 8983 internally in docker (http://solr:8983).
- db is the container running a PostgreSQL database, used by the Hyrax application. By default, this runs the database service on port 5432 internally in docker.
- redis is the container running Redis, used by Hyrax to manage background tasks. By default, this runs the Redis service on port 6379 internally in docker.
- app is the container that sets up the Hyrax application, which is then used by two services: web and workers.
- web is the container running the materials data repository application. By default, this runs the materials data repository service on port 3000 internally in docker (http://web:3000).
This container runs docker-entrypoint.sh. It needs the database, Solr and Fedora containers to be up and running; it waits for 15s to ensure Solr and Fedora are running, and exits if they are not. It runs a rake task (setup_hyrax.rake) to set up the application. The default workflows are loaded, the default admin set and collection types are created, and the users in setup.json are created as part of the setup.
- workers is the container running the background tasks for the materials data repository, using Sidekiq and Redis. By default, this runs the worker service.
Hyrax processes long-running or particularly slow work in background jobs to speed up the web request/response cycle. When a user submits a file through a work (using the web interface or an import task), there are a number of background jobs that are run, initiated by the Hyrax actor stack, as explained here.
You can monitor the background workers using the materials data repository service at http://web:3000/sidekiq when logged in as an admin user.
- docker-compose.override.yml exposes the ports for the fcrepo, solr and web containers, so they can be accessed from outside docker. Use this file when running the service in development or test.
- docker-compose-production.yml is the production configuration, customised for the infrastructure at NIMS.
The data for the application is stored in docker named volumes as specified by the compose files. These are:
$ docker volume list
nims-hyrax_cache
nims-hyrax_db
nims-hyrax_derivatives
nims-hyrax_fcrepo
nims-hyrax_file_uploads
nims-hyrax_letsencrypt
nims-hyrax_redis
nims-hyrax_solr_home
These will persist when the system is brought down and rebuilt. Deleting them will require importers etc. to be run again.
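To see where a named volume's data lives on the host, you can inspect it, e.g. using one of the volume names listed above:
$ docker volume inspect nims-hyrax_db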
When running in a development or test environment, prepare your .env file using .env.template.development as the template. You need to use docker-compose -f docker-compose.yml -f docker-compose.override.yml. This uses the docker-compose.yml and docker-compose.override.yml files, and does not use docker-compose-production.yml.
- The fcrepo container runs the Fedora service, which will be available on port 8080 at http://localhost:8080/fcrepo/rest
- The solr container runs the Solr service, which will be available on port 8983 at http://localhost:8983
- The web container runs the materials data repository service, which will be available on port 3000 at http://localhost:3000
You could set up an alias for docker-compose on your local machine to ease typing:
alias ngdrdocker='docker-compose -f docker-compose.yml -f docker-compose.override.yml'
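To make the alias persist across shell sessions, you could append it to your shell start-up file, e.g. assuming bash:
$ echo "alias ngdrdocker='docker-compose -f docker-compose.yml -f docker-compose.override.yml'" >> ~/.bashrc
$ source ~/.bashrc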
The static asset build is only run in the production environment, to speed up container creation in development. To see features such as the IIIF viewer, yarn install must be run on the web container once it is up:
ngdrdocker run web yarn install
When running in production, prepare your .env file using .env.template as the template (not .env.template.development). You need to use docker-compose -f docker-compose.yml -f docker-compose-production.yml, replacing docker-compose.override.yml with docker-compose-production.yml. To assist with this, an alias similar to the one below can be useful:
alias ngdrdocker='docker-compose -f docker-compose.yml -f docker-compose-production.yml'
- The service will run without the ports of intermediary services such as Solr being exposed to the host.
- The materials data repository is accessible on port 443; HTTP requests to port 80 will be redirected to HTTPS.
To start with, you need to build the system before running the services. To do this, issue the build command:
$ ngdrdocker build
Note: This is using the alias defined above, as a short form for
In development:
$ docker-compose -f docker-compose.yml -f docker-compose.override.yml build
In production:
$ docker-compose -f docker-compose.yml -f docker-compose-production.yml build
To run the containers after the build, issue the up command (-d means run as a daemon, in the background):
ngdrdocker up -d
Note: This is using the alias defined above, as a short form for
In development:
$ docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
In production:
$ docker-compose -f docker-compose.yml -f docker-compose-production.yml up -d
The containers should all start and the services should be available at their endpoints as described above:
- web server at http://localhost:3000 in development and https://domain-name in production
You can see the state of the containers with docker-compose ps, and view logs, e.g. for the web container, using docker-compose logs web. The services you would mainly need to monitor logs for are web and workers (see the example after the command list below).
# Bring the whole application up to run in the background, building the containers
ngdrdocker up -d --build
# Halt the system
ngdrdocker down
# Re-create the nginx container without affecting the rest of the system (and run in the background with -d)
ngdrdocker up -d --build --no-deps --force-recreate nginx
# View the logs for the web application container
ngdrdocker logs web
# Create a log dump file
ngdrdocker logs web | tee web_logs_`date --iso-8601`
# (writes to e.g. web_logs_2019-03-27)
# View all running containers
docker ps
# (example output:)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f42cf90d4494 nims-hyrax_nginx "sh -c 'nginx && cer…" 5 days ago Up 5 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nims-hyrax_nginx_1
6da65933de09 nims-hyrax_web "bash -c /bin/docker…" 5 days ago Up 14 hours 3000/tcp nims-hyrax_web_1
ab9600b12f2d nims-hyrax_workers "bundle exec sidekiq" 5 days ago Up 5 days nims-hyrax_workers_1
a9e18ff5eef7 ualbertalib/docker-fcrepo4:4.7 "catalina.sh run" 5 days ago Up 5 days 8080/tcp nims-hyrax_fcrepo_1
8a31c9b41e54 nims-hyrax_solr "/docker-entrypoint.…" 5 days ago Up 5 days (healthy) 8983/tcp nims-hyrax_solr_1
4382df4d4033 nims-hyrax_db "docker-entrypoint.s…" 5 days ago Up 5 days (healthy) 5432/tcp nims-hyrax_db_1
7580bf933d43 redis:5 "docker-entrypoint.s…" 5 days ago Up 5 days (healthy) 6379/tcp nims-hyrax_redis_1
# Using its container name, you can run a shell in a container to view or make changes directly
docker exec -it nims-hyrax_nginx_1 sh
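For example, to follow just the web and workers logs in real time (using the ngdrdocker alias defined above; -f and --tail are standard docker-compose log options):
# Follow the last 100 lines of the web and workers logs
ngdrdocker logs -f --tail=100 web workers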
On saku05 and the demo server on Digital Ocean we use docker version 18.09.3 and docker-compose version 1.23.2.
- Install Docker by following step 1 of the Docker Compose installation tutorial on your machine.
- Make sure you don't need to sudo to run docker. Instructions on set-up and how to test that it works.
- Install Docker Compose by following steps 2 and onwards from the Docker Compose installation tutorial.
Ubuntu Linux users: the command that Docker Compose provides you with will not work, since /usr/local/bin is not writeable by anybody but root in default Ubuntu setups. Use sudo tee instead, e.g.:
$ curl -L https://github.com/docker/compose/releases/download/[INSERT_DESIRED_DOCKER_COMPOSE_VERSION_HERE]/docker-compose-`uname -s`-`uname -m` | sudo tee /usr/local/bin/docker-compose > /dev/null && sudo chmod a+x /usr/local/bin/docker-compose
- Open a console and try running docker -h and docker-compose -h to verify they are both accessible.
If you would like to use a local Docker-based CAS server for single sign-on and single sign-off, a little more configuration is required. Note that these steps are optional: you could use database authentication, LDAP authentication, or a remote CAS server instead.
- In your system's /etc/hosts file, add the following two entries, which will redirect the specified hostnames to localhost:
127.0.0.1 mdr.nims.test
127.0.0.1 cas.mdr.nims.test
- In your .env file, set the following variables:
MDR_DEVISE_AUTH_MODULE=cas_authenticatable
CAS_BASE_URL=https://cas.mdr.nims.test:8443/cas/
- Now build and run the web and cas containers:
docker-compose build web cas
docker-compose up web cas
- Open a browser and go to the MDR website: http://mdr.nims.test:3000/ Click on Login and you should be directed to https://cas.mdr.nims.test:8443/cas/
At this point your web browser will likely complain that the SSL certificate is invalid / untrusted. Grant the certificate cas.mdr.nims.test full trust:
  - In Chrome, view the certificate and export it (or drag it) to your desktop.
  - Next, double-click on the certificate file (cas.mdr.nims.test.cer) and mark it as Always Trust (see: https://support.apple.com/en-gb/guide/keychain-access/kyca11871/mac).
  - Check that reloading https://cas.mdr.nims.test:8443/cas/ now presents the valid CAS website without any certificate warnings or other errors.
- To test single sign-on, open a browser window and go to the MDR website: http://mdr.nims.test:3000/
  - Click on "login" and you will be redirected to the CAS website.
  - Log in as user1 / password.
  - After completing the login on the CAS website, you will be redirected back to the MDR website and will now be logged in as user1.
- To test single sign-off, after logging in as user1 on MDR (see the previous step), open an extra browser window and navigate directly to the CAS website: https://cas.mdr.nims.test:8443/cas
  - Log out of the CAS system (by clicking on "log out" in "please log out and exit your web browser").
  - Then reload the other browser window which had the user logged in to MDR and verify that they are now logged out.
There is docker documentation advising how to back up volumes and their data.
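As a rough sketch following that approach, a named volume can be archived to the host by mounting it read-only in a throwaway container (the volume and archive names here are illustrative):
# archive the contents of the nims-hyrax_db volume into the current directory
$ docker run --rm -v nims-hyrax_db:/volume:ro -v "$(pwd)":/backup alpine tar czf /backup/nims-hyrax_db.tar.gz -C /volume .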
- As mentioned above, there is a .env file containing application secrets. This must not be checked into version control!
- The system is configured on start-up using the docker-entrypoint.sh script, which configures users in the seed/setup.json file.
- Importers are run manually in the container using the rails console (a sketch is shown below). See the project wiki for more information.
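For example, a Rails console could be opened in the running web container like this (the container name matches the docker ps output shown earlier; the exact importer invocation is project-specific):
$ docker exec -it nims-hyrax_web_1 bundle exec rails console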