feat: Support for storing files to Alibaba Cloud OSS #231

Closed · wants to merge 1 commit
3 changes: 2 additions & 1 deletion Dockerfile
@@ -13,7 +13,8 @@ RUN apk add --update 'mariadb-client>10.3.15' mariadb-connector-c bash python3 p
RUN groupadd -g 1005 appuser && \
useradd -r -u 1005 -g appuser appuser
# ensure smb stuff works correctly
RUN mkdir -p /var/cache/samba && chmod 0755 /var/cache/samba && chown appuser /var/cache/samba && chown appuser /var/lib/samba/private
RUN mkdir -p /var/cache/samba && chmod 0755 /var/cache/samba && chown appuser /var/cache/samba && chown appuser /var/lib/samba/private && \
    mkdir -p /home/appuser && chown -R appuser /home/appuser
USER appuser

# install the entrypoint
5 changes: 5 additions & 0 deletions README.md
@@ -52,6 +52,7 @@ __You should consider the [use of `--env-file=`](https://docs.docker.com/engine/
* Local: If the value of `DB_DUMP_TARGET` starts with a `/` character, will dump to a local path, which should be volume-mounted.
* SMB: If the value of `DB_DUMP_TARGET` is a URL of the format `smb://hostname/share/path/` then it will connect via SMB.
* S3: If the value of `DB_DUMP_TARGET` is a URL of the format `s3://bucketname/path` then it will connect via awscli.
* OSS: If the value of `DB_DUMP_TARGET` is a URL of the format `oss://bucketname/path`, it will also connect via awscli, since [OSS is compatible with S3](https://www.alibabacloud.com/help/en/oss/developer-reference/compatibility-with-amazon-s3-1/?spm=a2c63.p38356.0.0.97ba41a0Ft7InT).
* Multiple: If the value of `DB_DUMP_TARGET` contains multiple targets, the targets should be separated by whitespace **and** the value surrounded by quotes, e.g. `"/db s3://bucketname/path"`.
* `DB_DUMP_SAFECHARS`: The dump filename usually includes the character `:` in the date, to comply with RFC3339. Some systems and shells don't like that character. If this environment variable is set, it will replace all `:` with `-`.
* `AWS_ACCESS_KEY_ID`: AWS Key ID
@@ -60,6 +61,10 @@ __You should consider the [use of `--env-file=`](https://docs.docker.com/engine/
* `AWS_ENDPOINT_URL`: Specify an alternative endpoint for S3-interoperable systems, e.g. DigitalOcean
* `AWS_CLI_OPTS`: Additional arguments to be passed to the `aws` part of the `aws s3 cp` command, click [here](https://docs.aws.amazon.com/cli/latest/reference/#options) for a list. _Be careful_, as you can break something!
* `AWS_CLI_S3_CP_OPTS`: Additional arguments to be passed to the `s3 cp` part of the `aws s3 cp` command, click [here](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#options) for a list. If you are using AWS KMS, `sse`, `sse-kms-key-id`, etc., may be of interest.
* `OSS_REGION`: Alibaba Cloud OSS Region ID, e.g. `oss-cn-beijing`, `oss-cn-hangzhou`
* `OSS_ENDPOINT_URL`: Alibaba Cloud OSS Endpoint URL, e.g. `https://oss-cn-beijing.aliyuncs.com`, `https://oss-cn-hangzhou.aliyuncs.com`
* `OSS_ACCESS_KEY_ID`: Alibaba Cloud OSS Access Key ID
* `OSS_ACCESS_KEY_SECRET`: Alibaba Cloud OSS Access Key Secret
* `SMB_USER`: SMB username. May also be specified in `DB_DUMP_TARGET` with an `smb://` url. If both specified, this variable overrides the value in the URL.
* `SMB_PASS`: SMB password. May also be specified in `DB_DUMP_TARGET` with an `smb://` url. If both specified, this variable overrides the value in the URL.
* `COMPRESSION`: Compression to use. Supported are: `gzip` (default), `bzip2`
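With the OSS variables above, a one-shot backup run might look like the following. This is an illustrative sketch only: the image name `databack/mysql-backup` and all host, bucket, region, and credential values are placeholders, not values from this PR.

```shell
docker run -d \
  -e DB_SERVER=my-db-host \
  -e DB_USER=backup \
  -e DB_PASS=secret \
  -e DB_DUMP_TARGET=oss://my-bucket/backups \
  -e OSS_REGION=oss-cn-hangzhou \
  -e OSS_ENDPOINT_URL=https://oss-cn-hangzhou.aliyuncs.com \
  -e OSS_ACCESS_KEY_ID=my-key-id \
  -e OSS_ACCESS_KEY_SECRET=my-key-secret \
  databack/mysql-backup
```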
15 changes: 15 additions & 0 deletions docker-compose.yaml
@@ -0,0 +1,15 @@
services:
  mysql-backup:
    build: .
    environment:
      - DB_DUMP_FREQ=60
      - DB_SERVER=192.168.31.237
      - DB_USER=root
      - DB_PASS=123456
      - DB_NAMES=api_hwgwb_com
      - DB_DUMP_TARGET=oss://db-backup-mysql/api_hwgwb_com
      - OSS_REGION=oss-cn-beijing
      - OSS_ENDPOINT_URL=https://oss-cn-beijing.aliyuncs.com
      - OSS_ACCESS_KEY_ID=LTAI5tBsui3RmMz8FqzHxLPw
      - OSS_ACCESS_KEY_SECRET=IFIvp6SDz27ZS176ubFSAw2X1Octz9
      - DB_RESTORE_TARGET=oss://db-backup-mysql/api_hwgwb_com/db_backup_2023-08-10T07:13:24Z.tgz
14 changes: 14 additions & 0 deletions entrypoint
@@ -32,6 +32,11 @@ file_env "AWS_ACCESS_KEY_ID"
file_env "AWS_SECRET_ACCESS_KEY"
file_env "AWS_DEFAULT_REGION"

file_env "OSS_REGION"
file_env "OSS_ENDPOINT_URL"
file_env "OSS_ACCESS_KEY_ID"
file_env "OSS_ACCESS_KEY_SECRET"

file_env "SMB_USER"
file_env "SMB_PASS"

@@ -97,6 +102,12 @@ TMPRESTORE="${TMP_PATH}/restorefile"
# this is global, so has to be set outside
declare -A uri

# if OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET are set, add an "oss" profile
if [[ -n "$OSS_ACCESS_KEY_ID" && -n "$OSS_ACCESS_KEY_SECRET" ]]; then
  aws configure set aws_access_key_id "$OSS_ACCESS_KEY_ID" --profile oss
  aws configure set aws_secret_access_key "$OSS_ACCESS_KEY_SECRET" --profile oss
  aws configure set s3.addressing_style virtual --profile oss
fi


if [[ -n "$DB_RESTORE_TARGET" ]]; then
@@ -116,6 +127,9 @@ if [[ -n "$DB_RESTORE_TARGET" ]]; then
elif [[ "${uri[schema]}" == "s3" ]]; then
[[ -n "$AWS_ENDPOINT_URL" ]] && AWS_ENDPOINT_OPT="--endpoint-url $AWS_ENDPOINT_URL"
aws ${AWS_CLI_OPTS} ${AWS_ENDPOINT_OPT} s3 cp ${AWS_CLI_S3_CP_OPTS} "${DB_RESTORE_TARGET}" $TMPRESTORE
elif [[ "${uri[schema]}" == "oss" ]]; then
DB_RESTORE_TARGET=${DB_RESTORE_TARGET/oss:\/\//s3:\/\/}
aws --profile oss --region "$OSS_REGION" --endpoint-url "$OSS_ENDPOINT_URL" s3 cp "${DB_RESTORE_TARGET}" "$TMPRESTORE"
elif [[ "${uri[schema]}" == "smb" ]]; then
if [[ -n "$SMB_USER" ]]; then
UPASSARG="-U"
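The restore path relies on a bash pattern substitution: awscli only understands `s3://` URLs, so an `oss://` target is rewritten to `s3://` before invoking `aws s3 cp` (the custom `--endpoint-url` then points it at OSS). A minimal sketch of that rewrite, with a placeholder filename:

```shell
# awscli only speaks s3://, so rewrite the oss:// scheme before the copy.
# ${var/pattern/string} replaces the first match of the pattern.
DB_RESTORE_TARGET="oss://db-backup-mysql/api_hwgwb_com/backup.tgz"
DB_RESTORE_TARGET=${DB_RESTORE_TARGET/oss:\/\//s3:\/\/}
echo "$DB_RESTORE_TARGET"   # prints s3://db-backup-mysql/api_hwgwb_com/backup.tgz
```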
4 changes: 4 additions & 0 deletions functions.sh
@@ -221,6 +221,10 @@ function backup_target() {
[[ -n "$AWS_ENDPOINT_URL" ]] && AWS_ENDPOINT_OPT="--endpoint-url $AWS_ENDPOINT_URL"
aws ${AWS_CLI_OPTS} ${AWS_ENDPOINT_OPT} s3 cp ${AWS_CLI_S3_CP_OPTS} ${TMPDIR}/${SOURCE} "${target}/${TARGET}"
;;
"oss")
target=${target/oss:\/\//s3:\/\/}
aws --profile oss --region "$OSS_REGION" --endpoint-url "$OSS_ENDPOINT_URL" s3 cp "${TMPDIR}/${SOURCE}" "${target}/${TARGET}"
;;
"smb")
if [[ -n "$SMB_USER" ]]; then
UPASSARG="-U"
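The `"oss"` case in `backup_target` can be exercised in isolation. `build_oss_cp` below is a hypothetical helper name used only for this sketch; it rewrites the scheme and prints the resulting `aws` command line instead of executing it, so no OSS credentials are needed:

```shell
# Hypothetical helper mirroring the "oss" case: rewrite oss:// to s3://
# and assemble the aws invocation (printed rather than executed).
build_oss_cp() {
  local target="$1" src="$2"
  target=${target/oss:\/\//s3:\/\/}
  echo "aws --profile oss --region $OSS_REGION --endpoint-url $OSS_ENDPOINT_URL s3 cp $src $target"
}

OSS_REGION=oss-cn-beijing
OSS_ENDPOINT_URL=https://oss-cn-beijing.aliyuncs.com
build_oss_cp "oss://mybucket/path" "/tmp/db_backup.tgz"
```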
1 change: 1 addition & 0 deletions test/test_dump.sh
@@ -15,6 +15,7 @@ targets=(
"smb://user:pass@smb/auth/SEQ/data"
"smb://CONF;user:pass@smb/auth/SEQ/data"
"s3://mybucket/SEQ/data"
"oss://mybucket/SEQ/data"
"file:///backups/SEQ/data file:///backups/SEQ/data"
)

1 change: 1 addition & 0 deletions test/test_source_target.sh
@@ -13,6 +13,7 @@ targets=(
"file:///backups/SEQ/data"
"smb://user:pass@smb/auth/SEQ/data"
"s3://mybucket/SEQ/data"
"oss://mybucket/SEQ/data"
)

# we need to run through each target and test the backup.