Better way to include videos in backups

Hi,

In our instance, we don't actually back up videos (HLS and transcoded files), only a database export, the other files in /storage, and the Docker configuration.

I thought about a simple Kimsufi as a redundancy server (though lately all Kimsufis are out of stock :/)

Do redundancy servers copy the HLS files and all transcoded resolutions?

Is that a good way to back up videos when you don't yet have the money to buy another identical server?

Hello,

Yes

No, it’s not a good way to do backups.

Thanks !

So we should copy the videos and HLS directories for each backup?

I wonder how to optimize / improve this backup process.

The main goals are:

  • Decrease the size of backups (the bulk of the size comes from videos)
  • Less manual monitoring (around 2 manual interventions per month would be fine)

Current setup

  • The daily backup runs automatically via a cron job on the server itself
  • The whole Docker volume / config is packed with tar and compressed with bzip2
  • A database export is packed instead of the raw DB Docker volume directory
  • The bzip2-compressed backup is sent to a private FTP server
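The daily process above can be sketched roughly like this. All paths, filenames, the container name and the FTP URL below are placeholders for illustration, not the actual values from my scripts:

```shell
#!/bin/sh
# Rough sketch of the daily backup: pack the Docker volume and config
# with tar, compress with bzip2, then ship the archive to a private FTP.
set -e

WORK=$(mktemp -d) && cd "$WORK"

# Stand-in for the real tree (docker-volume, .env, docker-compose.yml)
mkdir -p docker-volume/db docker-volume/data
echo 'demo' > docker-volume/data/file
touch .env docker-compose.yml

# Export the database instead of archiving the raw DB volume
# (commented out so the sketch runs anywhere):
# docker exec postgres pg_dump -U peertube peertube_prod > db.sql

# Pack + bzip2-compress, excluding the raw DB directory
tar --exclude=docker-volume/db -cjf /tmp/peertube-backup-demo.tar.bz2 \
    docker-volume .env docker-compose.yml

# Upload to the private FTP (placeholder URL, commented out):
# curl -sT /tmp/peertube-backup-demo.tar.bz2 ftp://user:pass@ftp.example.com/

echo "archive ready: /tmp/peertube-backup-demo.tar.bz2"
```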

Improvements

Some improvements I have in mind:

  • Check whether the database has had any new inserts or updates in the last 24 hours (since the last backup); if not, skip the backup and rename the last FTP backup with the right date
  • Check whether those new inserts or updates are related to transcoding; if not, skip copying the videos and HLS directories, otherwise copy everything
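As a sketch, the second check could compare the current videoFile row count against the one saved at the last backup. The psql invocation is what I would expect to run in production and is an assumption (adapt it to your setup); a stand-in value keeps the sketch runnable anywhere:

```shell
#!/bin/sh
# Sketch: skip copying videos/HLS when the videoFile table is unchanged.
STATE=/tmp/videofile-count.last
rm -f "$STATE"    # reset for the demo; a real run would keep this file

# In production, something like (assumption, table name from the thread):
# COUNT=$(docker exec postgres psql -U peertube -d peertube_prod -tAc \
#           'SELECT count(*) FROM "videoFile"')
COUNT=42          # stand-in value so the sketch runs anywhere

LAST=$(cat "$STATE" 2>/dev/null || echo 0)

if [ "$COUNT" = "$LAST" ]; then
  echo "no new video files: exclude videos/ and streaming-playlists/"
else
  echo "videoFile changed ($LAST -> $COUNT): include videos in this backup"
  printf '%s\n' "$COUNT" > "$STATE"
fi
```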

@Chocobozzz what do you think about these improvements? Do you have best practices / improvements to share for the backup system I've described?

The first check is probably always going to be true.
The second might be useful. You are interested in new rows to the videoFile table that correspond to local videos.
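To make that concrete: here is an illustration of the kind of query this suggests, run against a throwaway SQLite copy of an assumed schema. PeerTube actually uses PostgreSQL, and the table layout below (a `remote` flag on `video`, a `videoId` column on `videoFile`) is an assumption to be checked against your instance's real schema:

```shell
#!/bin/sh
# Illustration only: count video files that belong to local videos.
# The schema and column names below are assumptions for the demo.
set -e
DB=$(mktemp)
COUNT=$(sqlite3 "$DB" <<'SQL'
CREATE TABLE video (id INTEGER PRIMARY KEY, remote INTEGER);
CREATE TABLE videoFile (id INTEGER PRIMARY KEY, videoId INTEGER);
INSERT INTO video VALUES (1, 0), (2, 1);                -- one local, one remote
INSERT INTO videoFile VALUES (10, 1), (11, 1), (12, 2);
SELECT count(*) FROM videoFile f
  JOIN video v ON v.id = f.videoId
 WHERE v.remote = 0;                                    -- local videos only
SQL
)
echo "video files belonging to local videos: $COUNT"
```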


For backups, I suggest the use of an incremental backup system.

For my instance, I use Backup Ninja (https://backup.ninja/). It makes it easy to configure backups with a PostgreSQL dump, plus incremental file backups for the storage directory.
Backups can be encrypted and signed; Backup Ninja uses Duplicity for that.

Here is a config example:

File 10-db.pgsql:

dir = /var/www/peertube/backup/ninja/
databases = peertube_prod
compress = no

File 90-files.dup:

testconnect = no

[gpg]
sign=no
password = xxxxxxxxxxxxxxx
[source]

include = /etc
include = /var/www/peertube/backup
include = /var/www/peertube/storage
include = /var/www/peertube/config

[dest]

incremental = yes
desturl = rsync://xxxxxxxxxxxxxxx
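For readers who want to see what "incremental" buys you without installing Duplicity, the same idea can be demonstrated with plain GNU tar's `--listed-incremental` snapshot file. This is a toy example in a throwaway directory; Duplicity adds encryption, signing and remote transport on top of this principle:

```shell
#!/bin/sh
# Toy demo of incremental backups with GNU tar's --listed-incremental.
set -e
cd "$(mktemp -d)"
mkdir data && echo one > data/a

# Level-0 (full) backup; the "snap" file records what has been saved
tar --listed-incremental=snap -cf full.tar data

# A new file appears; the next run stores only the change
echo two > data/b
tar --listed-incremental=snap -cf incr.tar data

echo "incremental archive contains:"
tar -tf incr.tar
```

The second archive contains only the new file (plus directory metadata), which is why incremental schemes keep backup sizes down when most videos never change between runs.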


Thank you @rigelk and @JohnLivingston for your help =).

Using backup.ninja is not in our plans for the moment, but maybe in the near future when we have enough funds.

I used the "videoFile" table, checking for any change in the row count and the max id. We don't have lots of video uploads every day (for the moment), so it's a good compromise to avoid consuming lots of space on the private FTP and having to clean it up often.

I now have an error when piping tar into curl:

tar: -: Cannot write: Broken pipe
tar: Child returned status 141
tar: Error is not recoverable: exiting now

After this command : https://gist.github.com/kimsible/0d7e2328d01591e64ab459c9fbf327ac#file-peertube-backup-L72

tar --exclude=docker-volume/db --exclude=docker-volume/data/videos --exclude=docker-volume/data/streaming-playlists -cjf - docker-volume .env docker-compose.yml | curl $SCHEME://$SERVER_URL/$BACKUP_BZIP $SSL_OPTION-sT -
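For context (not a fix for this specific gist): tar's "Child returned status 141" is 141 = 128 + 13 (SIGPIPE), meaning curl closed the pipe before tar finished writing, usually because curl itself hit an error first (URL, credentials, TLS, or option parsing; it may be worth double-checking how `$SSL_OPTION` expands right next to `-sT`). Dropping `-s` lets curl print its real error, and `set -o pipefail` stops the shell from hiding the writer's failure. The SIGPIPE mechanics in miniature:

```shell
#!/bin/bash
# 141 = 128 + SIGPIPE(13): the writer is killed when the reader quits early.
# `yes` writes forever; `head` exits after one line and closes the pipe.

yes | head -n 1 > /dev/null
STATUS_DEFAULT=$?          # 0: only head's exit status is reported

set -o pipefail            # bash: report the failing command instead
yes | head -n 1 > /dev/null
STATUS_PIPEFAIL=$?         # 141: yes died of SIGPIPE
set +o pipefail

echo "without pipefail: $STATUS_DEFAULT, with pipefail: $STATUS_PIPEFAIL"
```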

Does someone have an idea? @rigelk?

I’m sorry.

I don’t know tar :confused: