diff --git a/README-bash.md b/README-bash.md deleted file mode 100644 index e886817c..00000000 --- a/README-bash.md +++ /dev/null @@ -1,381 +0,0 @@ -# mysql-backup -Back up mysql databases to... anywhere! - -## Overview -mysql-backup is a simple way to do MySQL database backups and restores when the database is running in a container. - -It has the following features: - -* dump and restore -* dump to local filesystem or to SMB server -* select database user and password -* connect to any container running on the same system -* select how often to run a dump -* select when to start the first dump, whether time of day or relative to container start time - -Please see [CONTRIBUTORS.md](./CONTRIBUTORS.md) for a list of contributors. - -## Support - -Support is available at the [databack Slack channel](http://databack.slack.com); register [here](https://join.slack.com/t/databack/shared_invite/zt-1cnbo2zfl-0dQS895icOUQy31RAruf7w). We accept issues here and general support questions on Slack. - -## Backup -To run a backup, launch `mysql-backup` image as a container with the correct parameters. Everything is controlled by environment variables passed to the container. - -For example: - -````bash -docker run -d --restart=always -e DB_DUMP_FREQ=60 -e DB_DUMP_BEGIN=2330 -e DB_DUMP_TARGET=/db -e DB_SERVER=my-db-container -v /local/file/path:/db databack/mysql-backup -```` - -The above will run a dump every 60 minutes, beginning at the next 2330 local time, from the database accessible in the container `my-db-container`. - -The following are the environment variables for a backup: - -__You should consider the [use of `--env-file=`](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file), [docker secrets](https://docs.docker.com/engine/swarm/secrets/) to keep your secrets out of your shell history__ - -* `DB_SERVER`: hostname to connect to database. Required. -* `DB_PORT`: port to use to connect to database. Optional, defaults to `3306` -* `DB_USER`: username for the database -* `DB_PASS`: password for the database -* `DB_NAMES`: names of databases to dump (separated by space); defaults to all databases in the database server -* `DB_NAMES_EXCLUDE`: names of databases (separated by space) to exclude from the dump; `information_schema`. `performance_schema`, `sys` and `mysql` are excluded by default. This only applies if `DB_DUMP_BY_SCHEMA` is set to `true`. For example, if you set `DB_NAMES_EXCLUDE=database1 db2` and `DB_DUMP_BY_SCHEMA=true` then these two databases will not be dumped by mysqldump -* `SINGLE_DATABASE`: If is set to `true`, mysqldump command will run without `--databases` flag. This avoid `USE ;` statement which is useful for the cases in which you want to import the dumpfile into a database with a different name. -* `DB_DUMP_FREQ`: How often to do a dump, in minutes. Defaults to 1440 minutes, or once per day. -* `DB_DUMP_BEGIN`: What time to do the first dump. Defaults to immediate. Must be in one of two formats: - * Absolute: HHMM, e.g. `2330` or `0415` - * Relative: +MM, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` in an hour and a half -* `DB_DUMP_CRON`: Set the dump schedule using standard [crontab syntax](https://en.wikipedia.org/wiki/Cron), a single line. -* `RUN_ONCE`: Run the backup once and exit if `RUN_ONCE` is set. Useful if you use an external scheduler (e.g. 
as part of an orchestration solution like Cattle or Docker Swarm or [kubernetes cron jobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)) and don't want the container to do the scheduling internally. If you use this option, all other scheduling options, like `DB_DUMP_FREQ` and `DB_DUMP_BEGIN` and `DB_DUMP_CRON`, become obsolete. -* `DB_DUMP_DEBUG`: If set to `true`, print copious shell script messages to the container log. Otherwise only basic messages are printed. -* `DB_DUMP_TARGET`: Where to put the dump file, should be a directory. Supports four formats: - * Local: If the value of `DB_DUMP_TARGET` starts with a `/` character, will dump to a local path, which should be volume-mounted. - * SMB: If the value of `DB_DUMP_TARGET` is a URL of the format `smb://hostname/share/path/` then it will connect via SMB. - * S3: If the value of `DB_DUMP_TARGET` is a URL of the format `s3://bucketname/path` then it will connect via awscli. - * Multiple: If the value of `DB_DUMP_TARGET` contains multiple targets, the targets should be separated by a whitespace **and** the value surrounded by quotes, e.g. `"/db s3://bucketname/path"`. -* `DB_DUMP_SAFECHARS`: The dump filename usually includes the character `:` in the date, to comply with RFC3339. Some systems and shells don't like that character. If this environment variable is set, it will replace all `:` with `-`. -* `AWS_ACCESS_KEY_ID`: AWS Key ID -* `AWS_SECRET_ACCESS_KEY`: AWS Secret Access Key -* `AWS_DEFAULT_REGION`: Region in which the bucket resides -* `AWS_ENDPOINT_URL`: Specify an alternative endpoint for s3 interopable systems e.g. Digitalocean -* `AWS_CLI_OPTS`: Additional arguments to be passed to the `aws` part of the `aws s3 cp` command, click [here](https://docs.aws.amazon.com/cli/latest/reference/#options) for a list. _Be careful_, as you can break something! -* `AWS_CLI_S3_CP_OPTS`: Additional arguments to be passed to the `s3 cp` part of the `aws s3 cp` command, click [here](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#options) for a list. If you are using AWS KMS, `sse`, `sse-kms-key-id`, etc., may be of interest. -* `SMB_USER`: SMB username. May also be specified in `DB_DUMP_TARGET` with an `smb://` url. If both specified, this variable overrides the value in the URL. -* `SMB_PASS`: SMB password. May also be specified in `DB_DUMP_TARGET` with an `smb://` url. If both specified, this variable overrides the value in the URL. -* `COMPRESSION`: Compression to use. Supported are: `gzip` (default), `bzip2` -* `DB_DUMP_BY_SCHEMA`: Whether to use separate files per schema in the compressed file (`true`), or a single dump file (`false`). Defaults to `false`. -* `DB_DUMP_KEEP_PERMISSIONS`: Whether to keep permissions for a file target. By default, `mysql-backup` copies the backup compressed file to the target with `cp -a`. In certain filesystems with certain permissions, this may cause errors. You can disable the `-a` flag by setting `DB_DUMP_KEEP_PERMISSIONS=false`. Defaults to `true`. -* `MYSQLDUMP_OPTS`: A string of options to pass to `mysqldump`, e.g. `MYSQLDUMP_OPTS="--opt abc --param def --max_allowed_packet=123455678"` will run `mysqldump --opt abc --param def --max_allowed_packet=123455678` -* `NICE`: true to perform mysqldump with ionice and nice option:- check for more information :- http://eosrei.net/articles/2013/03/forcing-mysqldump-always-be-nice-cpu-and-io -* `TMP_PATH`: tmp directory to be used during backup creation and other operations. 
Optional, defaults to `/tmp` - -### Scheduling -There are several options for scheduling how often a backup should run: - -* `RUN_ONCE`: run just once and exit. -* `DB_DUMP_FREQ` and `DB_DUMP_BEGIN`: run every x minutes, and run the first one at a particular time. -* `DB_DUMP_CRON`: run on a schedule. - -#### Cron Scheduling -If a cron-scheduled backup takes longer than the beginning of the next backup window, it will be skipped. For example, if your cron line is scheduled to backup every hour, as follows: - -``` -0 * * * * -``` - -And the backup that runs at 13:00 finishes at 14:05, the next backup will not be immediate, but rather at 15:00. - -The cron algorithm is as follows: after each backup run, calculate the next time that the cron statement will be true and schedule the backup then. - -#### Order of Priority -The scheduling options have an order of priority: - -1. `RUN_ONCE` runs once, immediately, and exits, ignoring everything else. -2. `DB_DUMP_CRON`: runs according to the cron schedule, ignoring `DB_DUMP_FREQ` and `DB_DUMP_BEGIN`. -3. `DB_DUMP_FREQ` and `DB_DUMP_BEGIN`: if nothing else is set. - - - -### Permissions -By default, the backup/restore process does **not** run as root (UID O). Whenever possible, you should run processes (not just in containers) as users other than root. In this case, it runs as username `appuser` with UID/GID `1005`. - -In most scenarios, this will not affect your backup process negatively. However, if you are using the "Local" dump target, i.e. your `DB_DUMP_TARGET` starts with `/` - and, most likely, is a volume mounted into the container - you can run into permissions issues. For example, if your mounted directory is owned by root on the host, then the backup process will be unable to write to it. - -In this case, you have two options: - -* Run the container as root, `docker run --user 0 ... ` or, in i`docker-compose.yml`, `user: "0"` -* Ensure your mounted directory is writable as UID or GID `1005`. - - -### Database Container -In order to perform the actual dump, `mysql-backup` needs to connect to the database container. You **must** pass the database hostname - which can be another container or any database process accessible from the backup container - by passing the environment variable `DB_SERVER` with the hostname or IP address of the database. You **may** override the default port of `3306` by passing the environment variable `DB_PORT`. - -````bash -docker run -d --restart=always -e DB_USER=user123 -e DB_PASS=pass123 -e DB_DUMP_FREQ=60 -e DB_DUMP_BEGIN=2330 -e DB_DUMP_TARGET=/db -e DB_SERVER=my-db-container -v /local/file/path:/db databack/mysql-backup -```` - -### Dump Target - -The dump target is where you want the backup files to be saved. The backup file *always* is a compressed file the following format: - -`db_backup_YYYY-MM-DDTHH:mm:ssZ.` - -Where the date is RFC3339 date format, excluding the milliseconds portion. - -* YYYY = year in 4 digits -* MM = month number from 01-12 -* DD = date for 01-31 -* HH = hour from 00-23 -* mm = minute from 00-59 -* ss = seconds from 00-59 -* T = literal character `T`, indicating the separation between date and time portions -* Z = literal character `Z`, indicating that the time provided is UTC, or "Zulu" -* compression = appropriate file ending for selected compression, one of: `gz` (gzip, default); `bz2` (bzip2) - -The time used is UTC time at the moment the dump begins. 
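As a minimal illustrative sketch (the timestamp below is hypothetical), this is how a dump filename following the format above comes out, and how `DB_DUMP_SAFECHARS` changes it:

```bash
# Reproduce the dump filename format described above (illustrative only).
now=$(date -u +"%Y-%m-%dT%H:%M:%SZ")    # e.g. 2023-11-05T23:30:00Z
echo "db_backup_${now}.gz"              # -> db_backup_2023-11-05T23:30:00Z.gz
# With DB_DUMP_SAFECHARS set, every ':' is replaced with '-':
echo "db_backup_${now//:/-}.gz"         # -> db_backup_2023-11-05T23-30-00Z.gz
```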
- -Notes on format: - -* SMB does not allow for `:` in a filename (depending on server options), so they are replaced with the `-` character when writing to SMB. -* Some shells do not handle a `:` in the filename gracefully. Although these usually are legitimate characters as far as the _filesystem_ is concerned, your shell may not like it. To avoid this issue, you can set the "no-colons" options with the environment variable `DB_DUMP_SAFECHARS` - -The dump target is the location where the dump should be placed, defaults to `/backup` in the container. Of course, having the backup in the container does not help very much, so we very strongly recommend you volume mount it outside somewhere. See the above example. - -If you use a URL like `smb://host/share/path`, you can have it save to an SMB server. If you need loging credentials, use `smb://user:pass@host/share/path`. - -Note that for smb, if the username includes a domain, e.g. your user is `mydom\myuser`, then you should use the samb convention of replacing the '\' with a ';'. In other words `smb://mydom;myuser:pass@host/share/path` - -If you use a URL like `s3://bucket/path`, you can have it save to an S3 bucket. - -Note that for s3, you'll need to specify your AWS credentials and default AWS region via `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_DEFAULT_REGION` - -Also note that if you are using an s3 interopable storage system like DigitalOcean you can use that as the target by setting `AWS_ENDPOINT_URL` to `${REGION_NAME}.digitaloceanspaces.com` and setting `DB_DUMP_TARGET` to `s3://bucketname/path`. - -#### Custom backup source file name -There may be use-cases where you need to modify the source path of the backup file **before** it gets uploaded to the dump target. -An example is combining multiple compressed files into one and giving it a new name, i.e. ```db-other-files-combined.tar.gz```. -To do that, place an executable file called `source.sh` in the following path: - - /scripts.d/source.sh - -Whatever your script returns to _stdout_ will be used as the source name for the backup file. - -The following exported environment variables will be available to the script above: - -* `DUMPFILE`: full path in the container to the output file -* `NOW`: date of the backup, as included in `DUMPFILE` and given by `date -u +"%Y-%m-%dT%H:%M:%SZ"` -* `DUMPDIR`: path to the destination directory so for example you can copy a new tarball including some other files along with the sql dump. -* `DB_DUMP_DEBUG`: To enable debug mode in post-backup scripts. - -**Example run:** - - NOW=20180930151304 DUMPFILE=/tmp/backups/db_backup_201809301513.gz DUMPDIR=/backup DB_DUMP_DEBUG=true /scripts.d/source.sh - -**Example custom source script:** - -```bash - #!/bin/bash - - # Rename source file - echo -n "db-plus-wordpress_${NOW}.gz" -``` - -#### Custom backup target file name -There may be use-cases where you need to modify the target upload path of the backup file **before** it gets uploaded. -An example is uploading a backup to a date stamped object key path in S3, i.e. ```s3://bucket/2018/08/23/path```. -To do that, place an executable file called ```target.sh``` in the following path: - - /scripts.d/target.sh - -Whatever your script returns to _stdout_ will be used as the name for the backup file. 
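As a minimal sketch of the date-stamped S3 key use case mentioned above, a hypothetical `target.sh` could look like the following. It relies only on the `NOW` and `DUMPFILE` variables listed just below, and is oriented toward an S3 dump target (for a plain file target, the date subdirectories would have to exist already):

```bash
#!/bin/bash
# Illustrative sketch: upload under a date-stamped key, e.g. 2018/08/23/<file>.
# NOW is the RFC3339-style timestamp exported by the entrypoint
# (e.g. 2018-08-23T15:13:04Z), so its first ten characters are YYYY-MM-DD.
datepath="${NOW:0:4}/${NOW:5:2}/${NOW:8:2}"
echo -n "${datepath}/$(basename "${DUMPFILE}")"
```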
- -The following exported environment variables will be available to the script above: - -* `DUMPFILE`: full path in the container to the output file -* `NOW`: date of the backup, as included in `DUMPFILE` and given by `date -u +"%Y-%m-%dT%H:%M:%SZ"` -* `DUMPDIR`: path to the destination directory so for example you can copy a new tarball including some other files along with the sql dump. -* `DB_DUMP_DEBUG`: To enable debug mode in post-backup scripts. - -**Example run:** - - NOW=20180930151304 DUMPFILE=/tmp/backups/db_backup_201809301513.gz DUMPDIR=/backup DB_DUMP_DEBUG=true /scripts.d/target.sh - -**Example custom target script:** - -```bash - #!/bin/bash - - # Rename target file - echo -n "db-plus-wordpress-uploaded_${NOW}.gz" -``` - -### Backup pre and post processing - -Any executable script with _.sh_ extension in _/scripts.d/pre-backup/_ or _/scripts.d/post-backup/_ directories in the container will be executed before -and after the backup dump process has finished respectively, but **before** -uploading the backup file to its ultimate target. This is useful if you need to -include some files along with the database dump, for example, to backup a -_WordPress_ install. - -To use them you need to add a host volume that points to the post-backup scripts in the docker host. Start the container like this: - -````bash -docker run -d --restart=always -e DB_USER=user123 -e DB_PASS=pass123 -e DB_DUMP_FREQ=60 \ - -e DB_DUMP_BEGIN=2330 -e DB_DUMP_TARGET=/db -e DB_SERVER=my-db-container:db \ - -v /path/to/pre-backup/scripts:/scripts.d/pre-backup \ - -v /path/to/post-backup/scripts:/scripts.d/post-backup \ - -v /local/file/path:/db \ - databack/mysql-backup -```` - -Or, if you prefer compose: - -```yml -version: '2.1' -services: - backup: - image: databack/mysql-backup - restart: always - volumes: - - /local/file/path:/db - - /path/to/pre-backup/scripts:/scripts.d/pre-backup - - /path/to/post-backup/scripts:/scripts.d/post-backup - env: - - DB_DUMP_TARGET=/db - - DB_USER=user123 - - DB_PASS=pass123 - - DB_DUMP_FREQ=60 - - DB_DUMP_BEGIN=2330 - - DB_SERVER=mysql_db - mysql_db: - image: mysql - .... -``` - -The scripts are _executed_ in the [entrypoint](https://github.com/databack/mysql-backup/blob/master/entrypoint) script, which means it has access to all exported environment variables. The following are available, but we are happy to export more as required (just open an issue or better yet, a pull request): - -* `DUMPFILE`: full path in the container to the output file -* `NOW`: date of the backup, as included in `DUMPFILE` and given by `date -u +"%Y-%m-%dT%H:%M:%SZ"` -* `DUMPDIR`: path to the destination directory so for example you can copy a new tarball including some other files along with the sql dump. -* `DB_DUMP_DEBUG`: To enable debug mode in post-backup scripts. - -In addition, all of the environment variables set for the container will be available to the script. - -For example, the following script will rename the backup file after the dump is done: - -````bash -#!/bin/bash -# Rename backup file. -if [[ -n "$DB_DUMP_DEBUG" ]]; then - set -x -fi - -if [ -e ${DUMPFILE} ]; -then - now=$(date +"%Y-%m-%d-%H_%M") - new_name=db_backup-${now}.gz - old_name=$(basename ${DUMPFILE}) - echo "Renaming backup file from ${old_name} to ${new_name}" - mv ${DUMPFILE} ${DUMPDIR}/${new_name} -else - echo "ERROR: Backup file ${DUMPFILE} does not exist!" -fi - -```` - -You can think of this as a sort of basic plugin system. 
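As a more concrete sketch of this plugin idea, the hypothetical post-backup script below bundles the SQL dump with a pre-made tarball of WordPress files; the `/wp-files/wordpress.tar.gz` path is an assumption and would need to be volume-mounted into the container:

```bash
#!/bin/bash
# /scripts.d/post-backup/10-bundle-wordpress.sh -- illustrative sketch.
# Combine the database dump with a pre-made tarball of WordPress files
# into a single archive placed next to the dump.
[ -n "$DB_DUMP_DEBUG" ] && set -x
cd "${DUMPDIR}"
cp /wp-files/wordpress.tar.gz .
tar -czf "db-plus-wordpress_${NOW}.tar.gz" "$(basename "${DUMPFILE}")" wordpress.tar.gz
```

Paired with a `/scripts.d/source.sh` that echoes `db-plus-wordpress_${NOW}.tar.gz` (see the custom source file name section above), the combined archive is what gets uploaded to the dump target instead of the bare dump.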
Look at the source of the [entrypoint](https://github.com/databack/mysql-backup/blob/master/entrypoint) script for other variables that can be used. - -### Encrypting the Backup - -Post-processing also give you options to encrypt the backup using openssl. The openssl binary is available -to the processing scripts. - -The sample [examples/encrypt.sh](./examples/encrypt.sh) provides a sample post-processing script that you can use -to encrypt your backup with AES256. - -## Restore -### Dump Restore -If you wish to run a restore to an existing database, you can use mysql-backup to do a restore. - -You need only the following environment variables: - -__You should consider the [use of `--env-file=`](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file) to keep your secrets out of your shell history__ - -* `DB_SERVER`: hostname to connect to database. Required. -* `DB_PORT`: port to use to connect to database. Optional, defaults to `3306` -* `DB_USER`: username for the database -* `DB_PASS`: password for the database -* `DB_NAMES`: name of database to restore to. Required if `SINGLE_DATABASE=true`, otherwise has no effect. Although the name is plural, it must contain exactly one database name. -* `SINGLE_DATABASE`: If is set to `true`, `DB_NAMES` is required and mysql command will run with `--database=$DB_NAMES` flag. This avoids the need of `USE ;` statement, which is useful when restoring from a file saved with `SINGLE_DATABASE` set to `true`. -* `DB_RESTORE_TARGET`: path to the actual restore file, which should be a compressed dump file. The target can be an absolute path, which should be volume mounted, an smb or S3 URL, similar to the target. -* `DB_DUMP_DEBUG`: if `true`, dump copious outputs to the container logs while restoring. -* To use the S3 driver `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_DEFAULT_REGION` will need to be defined. - - -Examples: - -1. Restore from a local file: `docker run -e DB_SERVER=gotodb.example.com -e DB_USER=user123 -e DB_PASS=pass123 -e DB_RESTORE_TARGET=/backup/db_backup_201509271627.gz -v /local/path:/backup databack/mysql-backup` -2. Restore from an SMB file: `docker run -e DB_SERVER=gotodb.example.com -e DB_USER=user123 -e DB_PASS=pass123 -e DB_RESTORE_TARGET=smb://smbserver/share1/backup/db_backup_201509271627.gz databack/mysql-backup` -3. Restore from an S3 file: `docker run -e DB_SERVER=gotodb.example.com -e AWS_ACCESS_KEY_ID=awskeyid -e AWS_SECRET_ACCESS_KEY=secret -e AWS_DEFAULT_REGION=eu-central-1 -e DB_USER=user123 -e DB_PASS=pass123 -e DB_RESTORE_TARGET=s3://bucket/path/db_backup_201509271627.gz databack/mysql-backup` - -### Restore when using docker-compose -`docker-compose` automagically creates a network when started. `docker run` simply attaches to the bridge network. If you are trying to communicate with a mysql container started by docker-compose, you'll need to specify the network in your command arguments. You can use `docker network ls` to see what network is being used, or you can declare a network in your docker-compose.yml. - -#### Example: -`docker run -e DB_SERVER=gotodb.example.com -e DB_USER=user123 -e DB_PASS=pass123 -e DB_RESTORE_TARGET=/backup/db_backup_201509271627.gz -v /local/path:/backup --network="skynet" databack/mysql-backup` - -### Using docker (or rancher) secrets -Environment variables used in this image can be passed in files as well. This is useful when you are using docker (or rancher) secrets for storing sensitive information. 
- -As you can set environment variable with `-e ENVIRONMENT_VARIABLE=value`, you can also use `-e ENVIRONMENT_VARIABLE_FILE=/path/to/file`. Contents of that file will be assigned to the environment variable. - -**Example:** - -```bash -docker run -d \ - -e DB_HOST_FILE=/run/secrets/DB_HOST \ - -e DB_USER_FILE=/run/secrets/DB_USER \ - -e DB_PASS_FILE=/run/secrets/DB_PASS \ - -v /local/file/path:/db \ - databack/mysql-backup -``` - -### Restore pre and post processing - -As with backups pre and post processing, you can do the same with restore operations. -Any executable script with _.sh_ extension in _/scripts.d/pre-restore/_ or -_/scripts.d/post-restore/_ directories in the container will be executed before the restore process starts and after it finishes respectively. This is useful if you need to -restore a backup file that includes some files along with the database dump. - -For example, to restore a _WordPress_ install, you would uncompress a tarball containing -the db backup and a second tarball with the contents of a WordPress install on -`pre-restore`. Then on `post-restore`, uncompress the WordPress files on the container's web server root directory. - -For an example take a look at the post-backup examples, all variables defined for post-backup scripts are available for pre-processing too. Also don't forget to add the same host volumes for `pre-restore` and `post-restore` directories as described for post-backup processing. - -### Automated Build -This github repo is the source for the mysql-backup image. The actual image is stored on the docker hub at `databack/mysql-backup`, and is triggered with each commit to the source by automated build via Webhooks. - -There are 2 builds: 1 for version based on the git tag, and another for the particular version number. - -## Tests - -The tests all run in docker containers, to avoid the need to install anything other than `make` and `docker`, and even can run over remote docker connections, avoiding any local bind-mounts. To run all tests: - -``` -make test -``` - -To run with debugging - -``` -make test DEBUG=debug -``` - -The above will generate _copious_ outputs, so you might want to redirect stdout and stderr to a file. - -This runs each of the several testing targets, each of which is a script in `test/test_*.sh`, which sets up tests, builds containers, runs the tests, and collects the output. - -## License -Released under the MIT License. -Copyright Avi Deitcher https://github.com/deitch diff --git a/README.md b/README.md index d1c8950c..f84426b3 100644 --- a/README.md +++ b/README.md @@ -21,8 +21,7 @@ Please see [CONTRIBUTORS.md](./CONTRIBUTORS.md) for a list of contributors. ## Versions This is the latest version, based on the complete rebuild of the codebase for 1.0.0 release based on -golang, completed in late 2023. The README for versions prior to 1.0.0, based on bash, is available -[here](./README-bash.md). +golang, completed in late 2023. ## Support diff --git a/entrypoint_orig b/entrypoint_orig deleted file mode 100755 index 2a5852f6..00000000 --- a/entrypoint_orig +++ /dev/null @@ -1,195 +0,0 @@ -#!/bin/bash - -. /functions.sh - -# ensure it is defined -MYSQLDUMP_OPTS=${MYSQLDUMP_OPTS:-} - -# login credentials -if [ -n "${DB_USER}" ]; then - DBUSER="-u${DB_USER}" -else - DBUSER= -fi -if [ -n "${DB_PASS}" ]; then - DBPASS="-p${DB_PASS}" -else - DBPASS= -fi - -# database server -if [ -z "${DB_SERVER}" ]; then - echo "DB_SERVER variable is required. Exiting." 
- exit 1 -fi -# database port -if [ -z "${DB_PORT}" ]; then - echo "DB_PORT not provided, defaulting to 3306" - DB_PORT=3306 -fi - -# -# set compress and decompress commands -COMPRESS= -UNCOMPRESS= -case $COMPRESSION in - gzip) - COMPRESS="gzip" - UNCOMPRESS="gunzip" - EXTENSION="tgz" - ;; - bzip2) - COMPRESS="bzip2" - UNCOMPRESS="bzip2 -d" - EXTENSION="tbz2" - ;; - *) - echo "Unknown compression requested: $COMPRESSION" >&2 - exit 1 -esac - - -# temporary dump dir -TMPDIR="${TMP_PATH}/backups" -TMPRESTORE="${TMP_PATH}/restorefile" - -# this is global, so has to be set outside -declare -A uri - - - -if [[ -n "$DB_RESTORE_TARGET" ]]; then - # Execute additional scripts for pre backup restore processing. For example, - # uncompress a tarball that contains the tarballs for the sql dump and a - # wordpress installation. - if [ -d /scripts.d/pre-restore/ ]; then - for i in $(ls /scripts.d/pre-restore/*.sh); do - if [ -x $i ]; then - DB_RESTORE_TARGET=${DB_RESTORE_TARGET} DB_DUMP_DEBUG=${DB_DUMP_DEBUG} $i - fi - done - fi - uri_parser ${DB_RESTORE_TARGET} - if [[ "${uri[schema]}" == "file" ]]; then - cp $DB_RESTORE_TARGET $TMPRESTORE 2>/dev/null - elif [[ "${uri[schema]}" == "s3" ]]; then - [[ -n "$AWS_ENDPOINT_URL" ]] && AWS_ENDPOINT_OPT="--endpoint-url $AWS_ENDPOINT_URL" - aws ${AWS_CLI_OPTS} ${AWS_ENDPOINT_OPT} s3 cp ${AWS_CLI_S3_CP_OPTS} "${DB_RESTORE_TARGET}" $TMPRESTORE - elif [[ "${uri[schema]}" == "smb" ]]; then - if [[ -n "$SMB_USER" ]]; then - UPASSARG="-U" - UPASS="${SMB_USER}%${SMB_PASS}" - elif [[ -n "${uri[user]}" ]]; then - UPASSARG="-U" - UPASS="${uri[user]}%${uri[password]}" - else - UPASSARG= - UPASS= - fi - if [[ -n "${uri[userdomain]}" ]]; then - UDOM="-W ${uri[userdomain]}" - else - UDOM= - fi - smbclient -N "//${uri[host]}/${uri[share]}" ${UPASSARG} "${UPASS}" ${UDOM} -c "get ${uri[sharepath]} ${TMPRESTORE}" - fi - # did we get a file? - if [[ -f "$TMPRESTORE" ]]; then - if [ "$SINGLE_DATABASE" = "true" ]; then - DBDATABASE="-D$DB_NAMES" - else - DBDATABASE= - fi - workdir="${TMP_PATH}/restore.$$" - rm -rf $workdir - mkdir -p $workdir - $UNCOMPRESS < $TMPRESTORE | tar -C $workdir -xvf - - cat $workdir/* | mysql -h $DB_SERVER -P $DB_PORT $DBUSER $DBPASS $DBDATABASE - rm -rf $workdir - /bin/rm -f $TMPRESTORE - else - echo "Could not find restore file $DB_RESTORE_TARGET" - exit 1 - fi - # Execute additional scripts for post backup restore processing. 
For example, - # uncompress a tarball that contains the files of a wordpress installation - if [ -d /scripts.d/post-restore/ ]; then - for i in $(ls /scripts.d/post-restore/*.sh); do - if [ -x $i ]; then - DB_RESTORE_TARGET=${DB_RESTORE_TARGET} DB_DUMP_DEBUG=${DB_DUMP_DEBUG} $i - fi - done - fi -else - # wait for the next time to start a backup - # for debugging - echo Starting at $(date) - last_run=0 - current_time=$(date +"%s") - freq_time=$(($DB_DUMP_FREQ*60)) - # get the begin time on our date - # REMEMBER: we are using the basic date package in alpine - # could be a delay in minutes or an absolute time of day - if [ -n "$DB_DUMP_CRON" ]; then - # calculate how long until the next cron instance is met - waittime=$(wait_for_cron "$DB_DUMP_CRON" "$current_time" $last_run) - elif [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then - waittime=$(( ${BASH_REMATCH[1]} * 60 )) - target_time=$(($current_time + $waittime)) - else - today=$(date +"%Y-%m-%d") - target_time=$(date --date="${today} ${DB_DUMP_BEGIN}" +"%s") - - if [[ "$target_time" < "$current_time" ]]; then - target_time=$(($target_time + 24*60*60)) - fi - - waittime=$(($target_time - $current_time)) - fi - - # If RUN_ONCE is set, don't wait - if [ -z "${RUN_ONCE}" ]; then - sleep $waittime - last_run=$(date +"%s") - fi - - # enter the loop - exit_code=0 - while true; do - # make sure the directory exists - mkdir -p $TMPDIR - do_dump - [ $? -ne 0 ] && exit_code=1 - # we can have multiple targets - for target in ${DB_DUMP_TARGET}; do - backup_target ${target} - [ $? -ne 0 ] && exit_code=1 - done - # remove lingering file - /bin/rm ${TMPDIR}/${SOURCE} - - # wait, unless RUN_ONCE is set - current_time=$(date +"%s") - if [ -n "${RUN_ONCE}" ]; then - exit $exit_code - elif [ -n "${DB_DUMP_CRON}" ]; then - waittime=$(wait_for_cron "${DB_DUMP_CRON}" "$current_time" $last_run) - else - current_time=$(date +"%s") - # Calculate how long the previous backup took - backup_time=$(($current_time - $target_time)) - # Calculate how many times the frequency time was passed during the previous backup. - freq_time_count=$(($backup_time / $freq_time)) - # Increment the count with one because we want to wait at least the frequency time once. - freq_time_count_to_add=$(($freq_time_count + 1)) - # Calculate the extra time to add to the previous target time - extra_time=$(($freq_time_count_to_add*$freq_time)) - # Calculate the new target time needed for the next calculation - target_time=$(($target_time + $extra_time)) - # Calculate the wait time - waittime=$(($target_time - $current_time)) - fi - sleep $waittime - last_run=$(date +"%s") - done -fi diff --git a/functions.sh b/functions.sh deleted file mode 100644 index 4c910a74..00000000 --- a/functions.sh +++ /dev/null @@ -1,554 +0,0 @@ -#!/bin/bash -# Function definitions used in the entrypoint file. - -# -# Environment variable reading function -# -# The function enables reading environment variable from file. 
-# -# usage: file_env VAR [DEFAULT] -# ie: file_env 'XYZ_DB_PASSWORD' 'example' -# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of -# "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature -function file_env() { - local var="$1" - local fileVar="${var}_FILE" - local def="${2:-}" - if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then - echo >&2 "error: both $var and $fileVar are set (but are exclusive)" - exit 1 - fi - local val="$def" - if [ "${!var:-}" ]; then - val="${!var}" - elif [ "${!fileVar:-}" ]; then - val="$(< "${!fileVar}")" - fi - export "$var"="$val" - unset "$fileVar" -} - - -# -# URI parsing function -# -# The function creates global variables with the parsed results. -# It returns 0 if parsing was successful or non-zero otherwise. -# -# [schema://][user[:password]@]host[:port][/path][?[arg1=val1]...][#fragment] -# -function uri_parser() { - uri=() - # uri capture - full="$@" - - # safe escaping - full="${full//\`/%60}" - full="${full//\"/%22}" - - # URL that begins with '/' is like 'file:///' - if [[ "${full:0:1}" == "/" ]]; then - full="file://localhost${full}" - fi - # file:/// should be file://localhost/ - if [[ "${full:0:8}" == "file:///" ]]; then - full="${full/file:\/\/\//file://localhost/}" - fi - - # top level parsing - pattern='^(([a-z0-9]{2,5})://)?((([^:\/]+)(:([^@\/]*))?@)?([^:\/?]+)(:([0-9]+))?)(\/[^?]*)?(\?[^#]*)?(#.*)?$' - [[ "$full" =~ $pattern ]] || return 1; - - # component extraction - full=${BASH_REMATCH[0]} - uri[uri]="$full" - uri[schema]=${BASH_REMATCH[2]} - uri[address]=${BASH_REMATCH[3]} - uri[user]=${BASH_REMATCH[5]} - uri[password]=${BASH_REMATCH[7]} - uri[host]=${BASH_REMATCH[8]} - uri[port]=${BASH_REMATCH[10]} - uri[path]=${BASH_REMATCH[11]} - uri[query]=${BASH_REMATCH[12]} - uri[fragment]=${BASH_REMATCH[13]} - if [[ ${uri[schema]} == "smb" && ${uri[path]} =~ ^/([^/]*)(/?.*)$ ]]; then - uri[share]=${BASH_REMATCH[1]} - uri[sharepath]=${BASH_REMATCH[2]} - fi - - # does the user have a domain? - if [[ -n ${uri[user]} && ${uri[user]} =~ ^([^\;]+)\;(.+)$ ]]; then - uri[userdomain]=${BASH_REMATCH[1]} - uri[user]=${BASH_REMATCH[2]} - fi - return 0 -} - - - -# -# execute actual backup -# -function do_dump() { - # what is the name of our source and target? - now=$(date -u +"%Y-%m-%dT%H:%M:%SZ") - # SOURCE: file that the uploader looks for when performing the upload - # TARGET: the remote file that is actually uploaded - - # option to replace - if [ -n "$DB_DUMP_SAFECHARS" ]; then - now=${now//:/-} - fi - SOURCE=db_backup_${now}.$EXTENSION - TARGET=${SOURCE} - - # Execute additional scripts for pre processing. For example, uncompress a - # backup file containing this db backup and a second tar file with the - # contents of a wordpress install so they can be restored. - if [ -d /scripts.d/pre-backup/ ]; then - for i in $(ls /scripts.d/pre-backup/*.sh); do - if [ -x $i ]; then - NOW=${now} DUMPFILE=${TMPDIR}/${TARGET} DUMPDIR=${TMPDIR} DB_DUMP_DEBUG=${DB_DUMP_DEBUG} $i - [ $? -ne 0 ] && return 1 - fi - done - fi - - # do the dump - workdir="${TMP_PATH}/backup.$$" - rm -rf $workdir - mkdir -p $workdir - NICE_CMD= - # if we asked to do by schema, then we need to get a list of all of the databases, take each, and then tar and zip them - if [ "$NICE" = "true" ]; then - NICE_CMD="nice -n19 ionice -c2" - fi - if [ -n "$DB_DUMP_BY_SCHEMA" -a "$DB_DUMP_BY_SCHEMA" = "true" ]; then - if [[ -z "$DB_NAMES" ]]; then - DB_LIST=$(mysql -h $DB_SERVER -P $DB_PORT $DBUSER $DBPASS -N -e 'show databases') - [ $? 
-ne 0 ] && return 1 - else - DB_LIST="$DB_NAMES" - fi - if [ -z "$DB_NAMES_EXCLUDE" ]; then - DB_NAMES_EXCLUDE="information_schema performance_schema mysql sys" - fi - declare -A exclude_list - for i in $DB_NAMES_EXCLUDE; do - exclude_list[$i]="true" - done - for onedb in $DB_LIST; do - if [ -v exclude_list[$onedb] ]; then - # skip db if it is in the exclude list - continue - fi - $NICE_CMD mysqldump -h $DB_SERVER -P $DB_PORT $DBUSER $DBPASS --databases ${onedb} $MYSQLDUMP_OPTS > $workdir/${onedb}_${now}.sql - [ $? -ne 0 ] && return 1 - done - else - # just a single command - if [ "$SINGLE_DATABASE" = "true" ]; then - DB_LIST="$DB_NAMES" - elif [[ -n "$DB_NAMES" ]]; then - DB_LIST="--databases $DB_NAMES" - else - DB_LIST="-A" - fi - $NICE_CMD mysqldump -h $DB_SERVER -P $DB_PORT $DBUSER $DBPASS $DB_LIST $MYSQLDUMP_OPTS > $workdir/backup_${now}.sql - [ $? -ne 0 ] && return 1 - fi - tar -C $workdir -cvf - . | $COMPRESS > ${TMPDIR}/${SOURCE} - [ $? -ne 0 ] && return 1 - rm -rf $workdir - [ $? -ne 0 ] && return 1 - - # Execute additional scripts for post processing. For example, create a new - # backup file containing this db backup and a second tar file with the - # contents of a wordpress install. - if [ -d /scripts.d/post-backup/ ]; then - for i in $(ls /scripts.d/post-backup/*.sh); do - if [ -x $i ]; then - NOW=${now} DUMPFILE=${TMPDIR}/${SOURCE} DUMPDIR=${TMPDIR} DB_DUMP_DEBUG=${DB_DUMP_DEBUG} $i - [ $? -ne 0 ] && return 1 - fi - done - fi - - # Execute a script to modify the name of the source file path before uploading to the dump target - # For example, modifying the name of the source dump file from the default, e.g. db-other-files-combined.tar.$EXTENSION - if [ -f /scripts.d/source.sh ] && [ -x /scripts.d/source.sh ]; then - SOURCE=$(NOW=${now} DUMPFILE=${TMPDIR}/${SOURCE} DUMPDIR=${TMPDIR} DB_DUMP_DEBUG=${DB_DUMP_DEBUG} /scripts.d/source.sh | tr -d '\040\011\012\015') - [ $? -ne 0 ] && return 1 - - if [ -z "${SOURCE}" ]; then - echo "Your source script located at /scripts.d/source.sh must return a value to stdout" - exit 1 - fi - fi - # Execute a script to modify the name of the target file before uploading to the dump target. - # For example, uploading to a date stamped object key path in S3, i.e. s3://bucket/2018/08/23/path - if [ -f /scripts.d/target.sh ] && [ -x /scripts.d/target.sh ]; then - TARGET=$(NOW=${now} DUMPFILE=${TMPDIR}/${SOURCE} DUMPDIR=${TMPDIR} DB_DUMP_DEBUG=${DB_DUMP_DEBUG} /scripts.d/target.sh | tr -d '\040\011\012\015') - [ $? -ne 0 ] && return 1 - - if [ -z "${TARGET}" ]; then - echo "Your target script located at /scripts.d/target.sh must return a value to stdout" - exit 1 - fi - fi - - return 0 -} - -# -# place the backup in appropriate location(s) -# -function backup_target() { - local target=$1 - # determine target proto - uri_parser ${target} - - # what kind of target do we have? Plain filesystem? smb? 
- case "${uri[schema]}" in - "file") - mkdir -p ${uri[path]} - cpOpts="-a" - [ -n "$DB_DUMP_KEEP_PERMISSIONS" -a "$DB_DUMP_KEEP_PERMISSIONS" = "false" ] && cpOpts="" - cp $cpOpts ${TMPDIR}/${SOURCE} ${uri[path]}/${TARGET} - ;; - "s3") - # allow for endpoint url override - [[ -n "$AWS_ENDPOINT_URL" ]] && AWS_ENDPOINT_OPT="--endpoint-url $AWS_ENDPOINT_URL" - aws ${AWS_CLI_OPTS} ${AWS_ENDPOINT_OPT} s3 cp ${AWS_CLI_S3_CP_OPTS} ${TMPDIR}/${SOURCE} "${target}/${TARGET}" - ;; - "smb") - if [[ -n "$SMB_USER" ]]; then - UPASSARG="-U" - UPASS="${SMB_USER}%${SMB_PASS}" - elif [[ -n "${uri[user]}" ]]; then - UPASSARG="-U" - UPASS="${uri[user]}%${uri[password]}" - else - UPASSARG= - UPASS= - fi - if [[ -n "${uri[userdomain]}" ]]; then - UDOM="-W ${uri[userdomain]}" - else - UDOM= - fi - - # smb has issues with the character `:` in filenames, so replace with `-` - smbTargetName=${TARGET//:/-} - smbclient -N "//${uri[host]}/${uri[share]}" ${UPASSARG} "${UPASS}" ${UDOM} -c "cd ${uri[sharepath]}; put ${TMPDIR}/${SOURCE} ${smbTargetName}" - ;; - esac - [ $? -ne 0 ] && return 1 - return 0 -} - -# -# calculate seconds until next cron match -# -function wait_for_cron() { - local cron="$1" - local compare="$2" - local last_run="$3" - # we keep a copy of the actual compare time, because we might shift the compare time in a moment - local comparesec=$compare - # there must be at least 60 seconds between last run and next run, so if it is less than 60 seconds, - # add differential seconds to $compare - local compareDiff=$(($compare - $last_run)) - if [ $compareDiff -lt 60 ]; then - compare=$(($compare + $(( 60-$compareDiff )) )) - fi - - # cron only works in minutes, so we want to round down to the current minute - # e.g. if we are at 20:06:25, we need to treat it as 20:06:00, or else our waittime will be -25 - # on the other hand, if we are at 20:06:00, do not round it down - local current_seconds=$(getepochas "$comparesec" +"%-S") - if [ $current_seconds -ne 0 ]; then - comparesec=$(( $comparesec - $current_seconds )) - fi - - # reminder, cron format is: - # minute(0-59) - # hour(0-23) - # day of month(1-31) - # month(1-12) - # day of week(0-6 = Sunday-Saturday) - local cron_minute=$(echo -n "$cron" | awk '{print $1}') - local cron_hour=$(echo -n "$cron" | awk '{print $2}') - local cron_dom=$(echo -n "$cron" | awk '{print $3}') - local cron_month=$(echo -n "$cron" | awk '{print $4}') - local cron_dow=$(echo -n "$cron" | awk '{print $5}') - - local success=1 - - # when is the next time we hit that month? 
- local next_minute=$(getepochas "$compare" +"%-M") - local next_hour=$(getepochas "$compare" +"%-H") - local next_dom=$(getepochas "$compare" +"%-d") - local next_month=$(getepochas "$compare" +"%-m") - local next_dow=$(getepochas "$compare" +"%-u") - local next_year=$(getepochas "$compare" +"%-Y") - - # date returns DOW as 1-7/Mon-Sun, we need 0-6/Sun-Sat - next_dow=$(( $next_dow % 7 )) - - local cron_next= - - # logic for determining next time to run - # start by assuming our current min/hr/dom/month/dow is good, store it as "next" - # go through each section: if it matches, keep going; if it does not, make it match or move ahead - - while [ "$success" != "0" ]; do - # minute: - # if minute matches, move to next step - # if minute does not match, move "next" minute to the time that does match in cron - # if "next" minute is ahead of cron minute, then increment "next" hour by one - # move to hour - cron_next=$(next_cron_expression "$cron_minute" 59 "$next_minute") - if [ "$cron_next" != "$next_minute" ]; then - if [ "$next_minute" -gt "$cron_next" ]; then - next_hour=$(( $next_hour + 1 )) - fi - next_minute=$cron_next - fi - - # hour: - # if hour matches, move to next step - # if hour does not match: - # if "next" hour is ahead of cron hour, then increment "next" day by one - # set "next" hour to cron hour, set "next" minute to 0, return to beginning of loop - cron_next=$(next_cron_expression "$cron_hour" 23 "$next_hour") - if [ "$cron_next" != "$next_hour" ]; then - if [ "$next_hour" -gt "$cron_next" ]; then - next_dom=$(( $next_dom + 1 )) - fi - next_hour=$cron_next - next_minute=0 - fi - - # weekday: - # if weekday matches, move to next step - # if weekday does not match: - # move "next" weekday to next matching weekday, accounting for overflow at end of week - # reset "next" hour to 0, reset "next" minute to 0, return to beginning of loop - cron_next=$(next_cron_expression "$cron_dow" 6 "$next_dow") - if [ "$cron_next" != "$next_dow" ]; then - dowDiff=$(( $cron_next - $next_dow )) - if [ "$dowDiff" -lt "0" ]; then - dowDiff=$(( $dowDiff + 7 )) - fi - next_dom=$(( $next_dom + $dowDiff )) - next_hour=0 - next_minute=0 - fi - - # dom: - # if dom matches, move to next step - # if dom does not match: - # if "next" dom is ahead of cron dom OR "next" month does not have crom dom (e.g. 
crom dom = 30 in Feb), - # increment "next" month, reset "next" day to 1, reset "next" minute to 0, reset "next" hour to 0, return to beginning of loop - # else set "next" day to cron day, reset "next" minute to 0, reset "next" hour to 0, return to beginning of loop - maxDom=$(max_day_in_month $next_month $next_year) - cron_next=$(next_cron_expression "$cron_dom" 30 "$next_dom") - if [ "$cron_next" != "$next_dom" ]; then - next_hour=0 - next_minute=0 - fi - if [ $next_dom -gt $cron_next -o $next_dom -gt $maxDom ]; then - next_month=$(( $next_month + 1 )) - if [ $next_month -gt 12 ]; then - next_month=$(( $next_month - 12)) - next_year=$(( $next_year + 1 )) - fi - next_dom=1 - else - next_dom=$cron_next - fi - - - # month: - # if month matches, move to next step - # if month does not match: - # if "next" month is ahead of cron month, increment "next" year by 1 - # set "next" month to cron month, set "next" day to 1, set "next" minute to 0, set "next" hour to 0 - # return to beginning of loop - cron_next=$(next_cron_expression "$cron_month" 12 "$next_month") - if [ "$cron_next" != "$next_month" ]; then - # must be sure to roll month if needed - if [ $cron_next -gt 12 ]; then - next_year=$(( $next_year + 1 )) - cron_next=$(( $cron_next - 12 )) - fi - if [ $next_month -gt $cron_next ]; then - next_year=$(( $next_year + 1 )) - fi - next_month=$cron_next - next_day=1 - next_minute=0 - next_hour=0 - fi - - success=0 - done - # success: "next" is now set to the next match! - - local future=$(getdateas "${next_year}-${next_month}-${next_dom}T${next_hour}:${next_minute}:00" "+%s") - local futurediff=$(($future - $comparesec)) - echo $futurediff -} - -# next_cron_expression function that takes a cron term, e.g. "3", "4-7", "*", "3,4-7", "*/5", "3-25/5", -# and calculates the lowest term that fits the cron expression that is equal to or greater than some number. 
-# uses the "max" argument to determine the maximum -# For example, given the arguments, these are the results and why: -# "*" "60" "4" -> "4" 4 is the number that is greater than or equal to "*" -# "4" "60" "4" -> "4" 4 is the number that is greater than or equal to "4" -# "5" "60" "4" -> "5" 5 is the next number that matches "5", and is >= 4 -# "3-7" "60" "4" -> "4" 4 is the number that fits within 3-7 -# "3-7" "60" "9" -> "3" no number in the range 3-7 ever is >= 9, so next one will be 3 when we circle back -# "*/2" "60" "4" -> "4" 4 is divisible by 2 -# "*/5" "60" "4" -> "5" 5 is the next number in the range * that is divisible by 5, and is >= 4 -# "0-20/5" "60" "4" -> "5" 5 is the next number in the range 0-20 that is divisible by 5, and is >= 4 -# "15-30/5" "60" "4" -> "15" 15 is the next number in the range 15-30 that is in increments of 5, and is >= 4 -# "15-30/5" "60" "20"-> "20" 20 is the next number in the range 15-30 that is in increments of 5, and is >= 20 -# "15-30/5" "60" "35"-> "15" no number in the range 15-30/5 will ever be >=35, so 15 is the first circle back -# "*/10" "12" "11" -> "0" the next match after 11 would be 20, but that would be greater than the maximum, so we circle back to 0 -# -function next_cron_expression() { - local crex="$1" - local max="$2" - local num="$3" - - # expand the list - note that this can handle a single-element list - local allvalid="" - local tmpvalid="" - # take each comma-separated expression - local parts=${crex//,/ } - # replace * with # so that we can handle * as one of comma-separated terms without doing shell expansion - parts=${parts//\*/#} - for i in $parts; do - # if it is a * or exact match, just add the number - if [ "$i" = "#" -o "$i" = "$num" ]; then - echo $num - return 0 - fi - - # it might be a step function, so we will have to reduce from the total range - partstep=${i##*\/} - partnum=${i%%\/*} - tmpvalid="" - local start= - local end= - if [ "${partnum}" = "#" ]; then - # calculate all of the numbers until the max - start=0 - end=$max - else - # handle a range like 3-7, which includes a single number like 4 - start=${partnum%%-*} - end=${partnum##*-} - fi - # calculate the valid ones just for this range - tmpvalid=$(seq $start $end) - - # it is a step function if the partstep is not the same as the whole thing - if [ "$partstep" != "$i" ]; then - # add to allvalid only the ones that match the term - # there are two possible use cases: - # first number is 0: any divisible by the partstep, i.e. 
j%partstep - # first number is not 0: start at first and increment by partstep until we run out - # this latter one is just the equivalent of dropping all numbers by (first) and then seeing if divisible - for j in $tmpvalid; do - if [ $(( (${j} - ${start}) % ${partstep} )) -eq 0 ]; then - allvalid="$allvalid $j" - fi - done - else - # if it is not a step function, just add the tmpvalid to the allvalid - allvalid="$allvalid $tmpvalid" - fi - done - - # sort for deduplication and ordering - allvalid=$(echo $allvalid | tr ' ' '\n' | sort -n -u | tr '\n' ' ') - for i in $allvalid; do - if [ "$i" -ge "$num" ]; then - echo $i - return 0 - fi - done - # if we got here, no number matched, so take the very first one - echo ${allvalid%% *} -} - -function max_day_in_month() { - local month="$1" - local year="$1" - - case $month in - "1"|"3"|"5"|"7"|"8"|"10"|"12") - echo 31 - ;; - "2") - local div4=$(( $year % 4 )) - local div100=$(( $year % 100 )) - local div400=$(( $year % 400 )) - local days=28 - if [ "$div4" = "0" -a "$div100" != "0" ]; then - days=29 - fi - if [ "$div400" = "0" ]; then - days=29 - fi - echo $days - ;; - *) - echo 30 - ;; - esac -} - -function getdateas() { - local input="$1" - local outformat="$2" - local os=$(uname -s | tr '[A-Z]' '[a-z]') - case "$os" in - linux) - date --date="$input" "$outformat" - ;; - darwin) - # need to determine if it was Zulu time or local - lastchar="${input: -1}" - format="%Y-%m-%dT%H:%M:%S" - uarg="-u" - if [ "$lastchar" = "Z" ]; then - format="${format}Z" - uarg="-u" - fi - date $uarg -j -f "$format" "$input" "$outformat" - ;; - *) - echo "unknown OS $os" >&2 - exit 1 - esac -} -function getepochas() { - local input="$1" - local format="$2" - local os=$(uname -s | tr '[A-Z]' '[a-z]') - case "$os" in - linux) - date --date="@$input" "$format" - ;; - darwin) - date -u -j -r "$input" "$format" - ;; - *) - echo "unknown OS $os" >&2 - exit 1 - esac -} diff --git a/test/Dockerfile b/test/Dockerfile deleted file mode 100644 index e6d0b170..00000000 --- a/test/Dockerfile +++ /dev/null @@ -1,31 +0,0 @@ -FROM mysql:8.0 - -## MYSQL - - -FROM alpine:3.19 - -## SAMBA - -# smb port -EXPOSE 445 - -# install the necessary client -RUN apk add --update bash samba-server && rm -rf /var/cache/apk/* && touch /etc/samba/smb.conf - -# enter smb.conf -COPY smb.conf /etc/samba/ -COPY smbusers /etc/samba/ -COPY *.tdb /var/lib/samba/private/ -# create a user with no home directory but the right password -RUN adduser user -D -H -RUN echo user:pass | chpasswd - -### s3 -RUN apk add --update minio - -# start samba -#CMD /usr/sbin/smbd -F --debug-stdout -d 4 --no-process-group - -# start minio -#RUN minio server /path/to/s3 diff --git a/test/Dockerfile_cron b/test/Dockerfile_cron deleted file mode 100644 index 5e8a8439..00000000 --- a/test/Dockerfile_cron +++ /dev/null @@ -1,10 +0,0 @@ -# mysql backup image -ARG BASE=mysqlbackup_backup_test -FROM ${BASE} -MAINTAINER Avi Deitcher - -COPY entrypoint_cron.sh /entrypoint - -ENTRYPOINT ["/entrypoint"] - - diff --git a/test/test_restore.sh b/test/test_restore.sh deleted file mode 100755 index 363e2294..00000000 --- a/test/test_restore.sh +++ /dev/null @@ -1,106 +0,0 @@ -#!/bin/bash -set -ex - -source ./_functions.sh - -#cleanup all temporary folders -mkdir -p backups -rm -rf backups/*.tgz -mkdir -p certs -rm -rf certs/*.pem - -makevolume - -makenetwork - -make_test_images - -makesmb - -start_service_containers - -#temporary use original certificate location -MYSQLDUMP_OPTS="--ssl-cert /var/lib/mysql/client-cert.pem --ssl-key 
/var/lib/mysql/client-key.pem" -db_connect="docker exec -i $mysql_cid mysql ${MYSQLDUMP_OPTS} -u$MYSQLUSER -p$MYSQLPW --protocol=tcp -h127.0.0.1 --wait --connect_timeout=20 tester" -$db_connect -e 'select 1;' -echo 'use tester; create table t1 (id INT, name VARCHAR(20)); INSERT INTO t1 (id,name) VALUES (1, "John"), (2, "Jill"), (3, "Sam"), (4, "Sarah");' | $db_connect - - -#fix /certs/*.pem files permissions -c_with_wrong_permission=$(docker container create --label mysqltest --net mysqltest --name mysqltest-fix-certs-permissions -v ${BACKUP_VOL}:/backups -v ${CERTS_VOL}:/certs ${DBDEBUG} -e DB_USER=$MYSQLUSER -e DB_PASS=$MYSQLPW -e DB_DUMP_FREQ=60 -e DB_DUMP_BEGIN=+0 -e DB_SERVER=mysql -e MYSQLDUMP_OPTS="--compact ${MYSQLDUMP_OPTS}" ${BACKUP_IMAGE}) -docker container start ${c_with_wrong_permission} >/dev/null -docker exec -u 0 ${c_with_wrong_permission} chown -R appuser /certs>/dev/null -rm_containers $c_with_wrong_permission - -# now we can reset to /certs -MYSQLDUMP_OPTS="--ssl-cert /certs/client-cert.pem --ssl-key /certs/client-key.pem" - -# copy our certificates locally -docker cp mysql:/certs/client-key.pem $PWD/certs/client-key.pem -docker cp mysql:/certs/client-cert.pem $PWD/certs/client-cert.pem - -cid_dump=$( \ -docker container create \ --u 0 \ ---label mysqltest \ ---name mysqldump \ ---net mysqltest \ --v $PWD/backups/:/backups \ --v $PWD/certs/client-key.pem:/certs/client-key.pem \ --v $PWD/certs/client-cert.pem:/certs/client-cert.pem \ --e DB_DUMP_TARGET=/backups \ --e DB_DUMP_DEBUG=0 \ --e DB_SERVER=mysql \ --e DB_USER=$MYSQLUSER \ --e DB_PASS=$MYSQLPW \ --e DB_DUMP_ONCE=1 \ --e MYSQLDUMP_OPTS="${MYSQLDUMP_OPTS}" \ -${BACKUP_IMAGE}) -docker container start $cid_dump - -sleepwait 5 - -# remove tester database so we can be sure that restore actually works restoring tester database -docker exec -i mysql mysql ${MYSQLDUMP_OPTS} -uroot -proot --protocol=tcp -h127.0.0.1 -e "drop database tester;" - -ls -l $PWD/backups -backup_name=$(ls $PWD/backups) -if [[ "$backup_name" == "" ]]; then - echo "***********************************" - echo "backup file was not created, see container log output:" - docker logs $cid_dump - echo "***********************************" - exit 1 -fi - -cid_restore=$( \ -docker container create \ --u 0 \ ---label mysqltest \ ---name mysqlrestore \ ---net mysqltest \ --v $PWD/certs/client-key.pem:/certs/client-key.pem \ --v $PWD/certs/client-cert.pem:/certs/client-cert.pem \ --v $PWD/backups/:/backups \ --e DB_DUMP_DEBUG=0 \ --e DB_SERVER=mysql \ --e DB_USER=$MYSQLUSER \ --e DB_PASS=$MYSQLPW \ --e DB_RESTORE_TARGET=/backups/${backup_name} \ --e RESTORE_OPTS="${RESTORE_OPTS}" \ -${BACKUP_IMAGE}) -docker container start $cid_restore - -sleepwait 5 - -db_command="docker exec -i $mysql_cid mysql ${MYSQLDUMP_OPTS} -u$MYSQLUSER -p$MYSQLPW --protocol=tcp -h127.0.0.1 tester" -echo 'use tester; select * from t1' | $db_command - -rm_service_containers $smb_cid $mysql_cid $s3_cid -docker rm $cid_dump $cid_restore - -rm -rf backups/*.tgz -rmdir backups -rm -rf certs/*.pem -rmdir certs -