Tuesday, 6 August 2013

Automatic for the People - Part 2

Down in the Dumps

I run a few systems that use databases, and I'd assumed these were being backed up by my weekly backup script. But simply copying the database files while the database was running proved to be a great way of losing the last few records should you ever need to restore, and I'd invariably be forced to run myisamchk on the recovered database before MySQL was happy.

After this happened a few times I started dumping my databases on a daily basis. A dump is not something nasty, but a flat file containing the database structure and contents in an SQL format.


- Example for MySQL
For each database you have, add a line as follows to your backup script:-

mysqldump --user=root --password=iliketurtles --add-drop-table wiki > /var/backups/wiki.sql

Here I'm dumping my wiki database, and it assumes I have an admin user for MySQL called root with the password 'iliketurtles'. For convenience we make the dump include DROP TABLE commands, so recovery would be as simple as:-

   pingu:=# mysql -u root -p wiki < wiki.sql

NB. You would need to ensure that a database called wiki exists or it will throw an error, but an empty one will do.
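
If the wiki database has disappeared entirely, you can create an empty one first. A minimal example, assuming the same root admin user as above:-

   pingu:=# mysqladmin -u root -p create wiki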

- Example for PostgreSQL
For each database add a line as follows to your backup script:-

pg_dump -Fc davical > /var/backups/davical.pgdump

This also assumes you have a superuser called root in PostgreSQL.

NB. Again, the database must exist before you can recover; then you would just enter the following command:-

   pingu:=# pg_restore -d davical davical.pgdump
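
As with MySQL, if the database is missing you can create an empty one before restoring. A quick sketch, assuming you run it as a PostgreSQL superuser:-

   pingu:=# createdb davical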

So that's two very different database engines taken care of. Just duplicate these commands for all the databases you want to back up.
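
If you have more than a couple of databases, a small loop in the backup script saves repeating yourself. This is only a sketch: it assumes the same MySQL root user as above, and the database names other than wiki are just placeholders for your own:-

for DB in wiki blog shop
do
  mysqldump --user=root --password=iliketurtles --add-drop-table $DB > /var/backups/$DB.sql
done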

Daily Increments

The weekly backup I described last time has served me well for many years, and the few times I've needed to recover anything have been a success. But more recently my friend Grunders (who'd been storing OpenOffice files in his home directory over a VPN) lost his files when his hard disk failed. I offered him the backup from the weekend, but he said it was no good because he'd made too many changes since.

Luck was with him though: that old trick of turning the computer off and powering it back on brought the drive back to life long enough to retrieve his missing files. However, we went on to convince ourselves that we needed a daily incremental backup. My initial attempt at this simply failed, as I'd tied myself in knots with bash script syntax. But a few days later Grunders came to the rescue with the following code:-

#!/bin/bash
# Daily incremental backup of /home, driven by the timestamp on /root/REFTIME

SOURCE=/home/
TARGET=$(date +%Y.%m.%d-%H%M)
RUNTIME=$(date +%Y%m%d%H%M)

cd "$SOURCE"
# Split the listing on newlines only, so user names containing spaces survive
IFS="
"
LIST=$(ls)
for DirUser in $LIST
do
  echo "$DirUser"
  cd "$SOURCE$DirUser" || continue
  # Archive anything (except hidden files) changed since the last run
  find * \( ! -regex '.*/\..*' \) -newer /root/REFTIME -print0 | xargs -0 -r tar --no-recursion -cpf /DailyBackup/$DirUser/$TARGET.tar >> /root/dailyincbackup.log
  # If an archive was produced, unpack it into a dated directory and discard the tar
  if [ -e /DailyBackup/$DirUser/$TARGET.tar ]
  then
    mkdir /TimeMachine/DailyBackup/$DirUser/$TARGET
    tar -xvf /DailyBackup/$DirUser/$TARGET.tar -C /TimeMachine/DailyBackup/$DirUser/$TARGET
    rm /DailyBackup/$DirUser/$TARGET.tar
  fi
done
# Record when this run happened, ready for the next -newer comparison
touch -t $RUNTIME /root/REFTIME

It's actually a little bit odd, as the script uses find to locate files that have changed since the timestamp on /root/REFTIME and creates a tar file of the changes (the regex part excludes hidden files, such as those that Apple Macs add). Then, to avoid keeping tar files around, he untars the resulting file and deletes it. Obviously this is not ideal, but there is an alternative approach that uses a cp command:-

find . \( ! -regex '.*/\..*' \) -newer /root/REFTIME -exec cp -Ra {} /DailyBackup/$DirUser/$TARGET \;

The receiving directory must exist for this to work, and it may leave empty directories behind if no changes were detected, so I'll leave the script as it is for now.
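
Two small practical notes. The find has no reference point until /root/REFTIME exists, so the very first run will back nothing up; create the file by hand before then:-

   pingu:=# touch /root/REFTIME

The script also needs scheduling, say from root's crontab (crontab -e). The path here is just an example of wherever you've saved it:-

   30 1 * * * /root/dailyinc.sh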

Thanks G, all it needs now is something to purge old files.
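
A possible starting point for that purge, strictly as a sketch: delete the dated directories once they're older than, say, 90 days (the path matches the script above, the age is arbitrary, and it's worth swapping the -exec for -print first to check what it would remove):-

find /TimeMachine/DailyBackup -mindepth 2 -maxdepth 2 -type d -mtime +90 -exec rm -rf {} +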
