docs: update doc for the new version

2024-08-03 01:18:14 +02:00
parent bd86310707
commit 99c8a45065
3 changed files with 73 additions and 250 deletions

.github/workflows/build.yml

```diff
@@ -1,7 +1,7 @@
 name: Build
 on:
   push:
-    branches: [ "main","v1.0"]
+    branches: [ "main"]
   workflow_dispatch:
     inputs:
       docker_tag:
```
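The hunk above ends at the truncated `docker_tag:` input. For readers unfamiliar with `workflow_dispatch`, such an input block typically carries a few more fields in GitHub Actions syntax; this is only a sketch, and the `description` and `default` values below are assumptions, not taken from this commit:

```yaml
on:
  workflow_dispatch:
    inputs:
      docker_tag:
        description: "Docker image tag to publish"  # assumed wording, not from the commit
        required: false
        default: "latest"                           # assumed default, not from the commit
```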

README.md

```diff
@@ -1,5 +1,12 @@
 # PostgreSQL Backup
-PostgreSQL Backup and Restoration tool. Backup database to AWS S3 storage or any S3 Alternatives for Object Storage.
+pg-bkup is a Docker container image that can be used to backup and restore a Postgres database. It supports local storage, AWS S3 or any S3 alternative for object storage, and SSH compatible storage.
+It also supports __encrypting__ your backups using GPG.
+
+---
+
+The [jkaninda/pg-bkup](https://hub.docker.com/r/jkaninda/pg-bkup) Docker image can be deployed on Docker, Docker Swarm and Kubernetes.
+It handles __recurring__ backups of a Postgres database on Docker and can be deployed as a __CronJob on Kubernetes__ using local, AWS S3 or SSH compatible storage.
+It also supports __encrypting__ your backups using GPG.
 
 [![Build](https://github.com/jkaninda/pg-bkup/actions/workflows/build.yml/badge.svg)](https://github.com/jkaninda/pg-bkup/actions/workflows/build.yml)
 [![Go Report](https://goreportcard.com/badge/github.com/jkaninda/mysql-bkup)](https://goreportcard.com/report/github.com/jkaninda/pg-bkup)
```
````diff
@@ -19,283 +26,95 @@ PostgreSQL Backup and Restoration tool. Backup database to AWS S3 storage or any
 - [MySQL](https://github.com/jkaninda/mysql-bkup)
 ## Storage:
-- local
-- s3
-- Object storage
-## Volumes:
-- /s3mnt => S3 mounting path
-- /backup => local storage mounting path
-### Usage
-| Options | Shorts | Usage |
-|-----------------------|--------|------------------------------------------------------------------------|
-| pg-bkup | bkup | CLI utility |
-| backup | | Backup database operation |
-| restore | | Restore database operation |
-| history | | Show the history of backup |
-| --storage | -s | Set storage. local or s3 (default: local) |
-| --file | -f | Set file name for restoration |
-| --path | | Set s3 path without file name. eg: /custom_path |
-| --dbname | -d | Set database name |
-| --port | -p | Set database port (default: 5432) |
-| --mode | -m | Set execution mode. default or scheduled (default: default) |
-| --disable-compression | | Disable database backup compression |
-| --prune | | Delete old backup, default disabled |
-| --keep-last | | Delete old backup created more than specified days ago, default 7 days |
-| --period | | Set crontab period for scheduled mode only. (default: "0 1 * * *") |
-| --timeout | -t | Set timeout (default: 60s) |
-| --help | -h | Print this help message and exit |
-| --version | -V | Print version information and exit |
-## Environment variables
-| Name | Requirement | Description |
-|-------------|--------------------------------------------------|------------------------------------------------------|
-| DB_PORT | Optional, default 5432 | Database port number |
-| DB_HOST | Required | Database host |
-| DB_NAME | Optional if it was provided from the -d flag | Database name |
-| DB_USERNAME | Required | Database user name |
-| DB_PASSWORD | Required | Database password |
-| ACCESS_KEY | Optional, required for S3 storage | AWS S3 Access Key |
-| SECRET_KEY | Optional, required for S3 storage | AWS S3 Secret Key |
-| BUCKET_NAME | Optional, required for S3 storage | AWS S3 Bucket Name |
-| S3_ENDPOINT | Optional, required for S3 storage | AWS S3 Endpoint |
-| FILE_NAME | Optional if it was provided from the --file flag | Database file to restore (extensions: .sql, .sql.gz) |
-## Note:
-Creating a user for backup tasks who has read-only access is recommended!
-> create read-only user
-## Backup database :
-Simple backup usage
-```sh
-bkup backup
+- Local
+- AWS S3 or any S3 Alternatives for Object Storage
+- SSH
+## Documentation is found at <https://jkaninda.github.io/pg-bkup>
+## Quickstart
+### Simple backup using Docker CLI
+To run a one-time backup, bind your local volume to `/backup` in the container and run the `pg-bkup backup` command:
+```shell
+docker run --rm --network your_network_name \
+  -v $PWD/backup:/backup/ \
+  -e "DB_HOST=dbhost" \
+  -e "DB_USERNAME=username" \
+  -e "DB_PASSWORD=password" \
+  jkaninda/pg-bkup pg-bkup backup -d database_name
 ```
-### S3
-```sh
-pg-bkup backup --storage s3
-```
-## Docker run:
-```sh
-docker run --rm --network your_network_name --name pg-bkup -v $PWD/backup:/backup/ -e "DB_HOST=database_host_name" -e "DB_USERNAME=username" -e "DB_PASSWORD=password" jkaninda/pg-bkup pg-bkup backup -d database_name
-```
-## Docker compose file:
-### Simple backup in docker compose file
+Alternatively, pass a `--env-file` in order to use a full config as described below.
+Add a `backup` service to your compose setup and mount the volumes you would like to see backed up:
 ```yaml
-version: '3'
 services:
-  postgres:
-    image: postgres:14.5
-    container_name: postgres
-    restart: unless-stopped
-    volumes:
-      - ./postgres:/var/lib/postgresql/data
-    environment:
-      POSTGRES_DB: bkup
-      POSTGRES_PASSWORD: password
-      POSTGRES_USER: bkup
   pg-bkup:
+    # In production, it is advised to lock your image tag to a proper
+    # release version instead of using `latest`.
+    # Check https://github.com/jkaninda/pg-bkup/releases
+    # for a list of available releases.
     image: jkaninda/pg-bkup
     container_name: pg-bkup
-    depends_on:
-      - postgres
     command:
       - /bin/sh
       - -c
-      - pg-bkup backup -d bkup
+      - pg-bkup backup
     volumes:
       - ./backup:/backup
     environment:
       - DB_PORT=5432
       - DB_HOST=postgres
-      - DB_NAME=bkup
-      - DB_USERNAME=bkup
+      - DB_NAME=foo
+      - DB_USERNAME=bar
       - DB_PASSWORD=password
-```
-## Restore database :
-Simple database restore operation usage
-```sh
-pg-bkup restore --file database_20231217_115621.sql --dbname database_name
+    # pg-bkup container must be connected to the same network as your database
+    networks:
+      - web
+networks:
+  web:
 ```
-```sh
-pg-bkup restore -f database_20231217_115621.sql -d database_name
-```
-### S3
-```sh
-pg-bkup restore --storage s3 --file database_20231217_115621.sql --dbname database_name
-```
-## Docker run:
-```sh
-docker run --rm --network your_network_name --name pg-bkup -v $PWD/backup:/backup/ -e "DB_HOST=database_host_name" -e "DB_USERNAME=username" -e "DB_PASSWORD=password" jkaninda/pg-bkup pg-bkup restore -d database_name -f napata_20231219_022941.sql.gz
-```
-## Docker compose file:
-```yaml
-version: '3'
-services:
-  pg-bkup:
-    image: jkaninda/pg-bkup
-    container_name: pg-bkup
-    command:
-      - /bin/sh
-      - -c
-      - pg-bkup restore --file database_20231217_115621.sql -d database_name
-    volumes:
-      - ./backup:/backup
-    environment:
-      #- FILE_NAME=database_20231217_040238.sql.gz # Optional if file name is set from command
-      - DB_PORT=5432
-      - DB_HOST=postgres
-      - DB_USERNAME=user_name
-      - DB_PASSWORD=password
-```
-## Run
-```sh
-docker-compose up -d
-```
-## Backup to S3
-```sh
-docker run --rm --privileged --device /dev/fuse --name pg-bkup -e "DB_HOST=db_hostname" -e "DB_USERNAME=username" -e "DB_PASSWORD=password" -e "ACCESS_KEY=your_access_key" -e "SECRET_KEY=your_secret_key" -e "BUCKETNAME=your_bucket_name" -e "S3_ENDPOINT=https://s3.us-west-2.amazonaws.com" jkaninda/pg-bkup pg-bkup backup -s s3 -d database_name
-```
-> To change s3 backup path add this flag : --path /my_customPath . default path is /pg-bkup
-Simple S3 backup usage
-```sh
-pg-bkup backup --storage s3 --dbname mydatabase
-```
-```yaml
-pg-bkup:
-  image: jkaninda/pg-bkup
-  container_name: pg-bkup
-  privileged: true
-  devices:
-    - "/dev/fuse"
-  command:
-    - /bin/sh
-    - -c
-    - pg-bkup restore --storage s3 -f database_20231217_115621.sql.gz --dbname database_name
-  environment:
-    - DB_PORT=5432
-    - DB_HOST=postgress
-    - DB_USERNAME=user_name
-    - DB_PASSWORD=password
-    - ACCESS_KEY=${ACCESS_KEY}
-    - SECRET_KEY=${SECRET_KEY}
-    - BUCKET_NAME=${BUCKET_NAME}
-    - S3_ENDPOINT=${S3_ENDPOINT}
+## Available image registries
+This Docker image is published to both Docker Hub and the GitHub container registry.
+Depending on your preferences and needs, you can reference both `jkaninda/pg-bkup` as well as `ghcr.io/jkaninda/pg-bkup`:
 ```
-## Run in Scheduled mode
-This tool can be run as CronJob in Kubernetes for a regular backup which makes deployment on Kubernetes easy as Kubernetes has CronJob resources.
-For Docker, you need to run it in scheduled mode by adding `--mode scheduled` flag and specify the periodical backup time by adding `--period "0 1 * * *"` flag.
-Make an automated backup on Docker
-## Syntax of crontab (field description)
-The syntax is:
-- 1: Minute (0-59)
-- 2: Hours (0-23)
-- 3: Day (0-31)
-- 4: Month (0-12 [12 == December])
-- 5: Day of the week(0-7 [7 or 0 == sunday])
-Easy to remember format:
-```conf
-* * * * * command to be executed
+docker pull jkaninda/pg-bkup:v1.0
+docker pull ghcr.io/jkaninda/pg-bkup:v1.0
 ```
-```conf
-- - - - -
-| | | | |
-| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
-| | | ------- Month (1 - 12)
-| | --------- Day of month (1 - 31)
-| ----------- Hour (0 - 23)
-------------- Minute (0 - 59)
-```
-> At every 30th minute
-```conf
-*/30 * * * *
-```
-> “At minute 0.” every hour
-```conf
-0 * * * *
-```
-> “At 01:00.” every day
-```conf
-0 1 * * *
-```
-## Example of scheduled mode
-> Docker run :
-```sh
-docker run --rm --name pg-bkup -v $BACKUP_DIR:/backup/ -e "DB_HOST=$DB_HOST" -e "DB_USERNAME=$DB_USERNAME" -e "DB_PASSWORD=$DB_PASSWORD" jkaninda/pg-bkup pg-bkup backup --dbname $DB_NAME --mode scheduled --period "0 1 * * *"
-```
-> With Docker compose
-```yaml
-version: "3"
-services:
-  pg-bkup:
-    image: jkaninda/pg-bkup
-    container_name: pg-bkup
-    privileged: true
-    devices:
-      - "/dev/fuse"
-    command:
-      - /bin/sh
-      - -c
-      - pg-bkup backup --storage s3 --path /mys3_custom_path --dbname database_name --mode scheduled --period "*/30 * * * *"
-    environment:
-      - DB_PORT=5432
-      - DB_HOST=postgreshost
-      - DB_USERNAME=userName
-      - DB_PASSWORD=${DB_PASSWORD}
-      - ACCESS_KEY=${ACCESS_KEY}
-      - SECRET_KEY=${SECRET_KEY}
-      - BUCKET_NAME=${BUCKET_NAME}
-      - S3_ENDPOINT=${S3_ENDPOINT}
-```
-## Kubernetes CronJob
-For Kubernetes, you don't need to run it in scheduled mode.
-Simple Kubernetes CronJob usage:
+Documentation references Docker Hub, but all examples will work using ghcr.io just as well.
+## Supported Engines
+This image is developed and tested against the Docker CE engine and Kubernetes exclusively.
+While it may work against different implementations, there are no guarantees about support for non-Docker engines.
+## References
+We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
+- The original image is based on `ubuntu` and requires additional tools, making it heavy.
+- This image is written in Go.
+- `arm64` and `arm/v7` architectures are supported.
+- Docker in Swarm mode is supported.
+- Kubernetes is supported.
+## Deploy on Kubernetes
+For Kubernetes, you don't need to run it in scheduled mode. You can deploy it as a CronJob.
+### Simple Kubernetes CronJob usage:
 ```yaml
 apiVersion: batch/v1
````
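The added Quickstart text mentions passing a `--env-file` instead of individual `-e` flags. A sketch of that variant, using only variable names that appear elsewhere in this diff (the file name `backup.env` is illustrative):

```
# backup.env (illustrative name) -- same variables as the -e flags in the Quickstart
DB_HOST=dbhost
DB_PORT=5432
DB_NAME=database_name
DB_USERNAME=username
DB_PASSWORD=password
```

which could then be supplied to Docker (assuming `DB_NAME` replaces the `-d` flag, as the removed environment-variable table suggested):

```
docker run --rm --network your_network_name \
  --env-file backup.env \
  -v $PWD/backup:/backup/ \
  jkaninda/pg-bkup pg-bkup backup
```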
````diff
@@ -311,8 +130,6 @@ spec:
       containers:
         - name: pg-bkup
           image: jkaninda/pg-bkup
-          securityContext:
-            privileged: true
           command:
             - /bin/sh
             - -c
````
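An aside on the dump names seen in the removed examples (e.g. `database_20231217_115621.sql`): whether pg-bkup itself uses exactly this scheme is an assumption here, but the timestamp part matches `date +%Y%m%d_%H%M%S`, so a name of the same shape can be produced with a standard shell sketch:

```shell
# Compose a dump file name shaped like database_YYYYmmdd_HHMMSS.sql
db="database"
stamp=$(date +%Y%m%d_%H%M%S)
file="${db}_${stamp}.sql"
echo "$file"
```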
````diff
@@ -331,13 +148,19 @@ spec:
             value: ""
           - name: ACCESS_KEY
             value: ""
-          - name: SECRET_KEY
-            value: ""
-          - name: BUCKET_NAME
-            value: ""
-          - name: S3_ENDPOINT
-            value: "https://s3.us-west-2.amazonaws.com"
-      restartPolicy: Never
+          - name: AWS_S3_ENDPOINT
+            value: "https://s3.amazonaws.com"
+          - name: AWS_S3_BUCKET_NAME
+            value: "xxx"
+          - name: AWS_REGION
+            value: "us-west-2"
+          - name: AWS_ACCESS_KEY
+            value: "xxxx"
+          - name: AWS_SECRET_KEY
+            value: "xxxx"
+          - name: AWS_DISABLE_SSL
+            value: "false"
+      restartPolicy: OnFailure
 ```
 ## License
````
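For readers skimming the CronJob hunk above: the old flat S3 variables (`SECRET_KEY`, `BUCKET_NAME`, `S3_ENDPOINT`) are replaced by an `AWS_`-prefixed set. Collected into one environment block (values are the same placeholders used in the diff; indentation is assumed for a container spec inside a CronJob):

```yaml
env:
  - name: AWS_S3_ENDPOINT
    value: "https://s3.amazonaws.com"
  - name: AWS_S3_BUCKET_NAME
    value: "xxx"
  - name: AWS_REGION
    value: "us-west-2"
  - name: AWS_ACCESS_KEY
    value: "xxxx"
  - name: AWS_SECRET_KEY
    value: "xxxx"
  - name: AWS_DISABLE_SSL
    value: "false"
```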

_config.yml

```diff
@@ -18,7 +18,7 @@ email: hi@jonaskaninda.com
 description: >- # this means to ignore newlines until "baseurl:"
   PostgreSQL Backup and Restoration tool. Backup database to AWS S3 storage or any S3 Alternatives for Object Storage.
 baseurl: "" # the subpath of your site, e.g. /blog
-url: "" # the base hostname & protocol for your site, e.g. http://example.com
+url: "jkaninda.github.io/pg-bkup/" # the base hostname & protocol for your site, e.g. http://example.com
 twitter_username: jonaskaninda
 github_username: jkaninda
```