
Wednesday, 11 December 2013

Minecraft Backup for MultiWorld servers

Basic Server Backup

One of the main reasons we built our new Linux server was to be able to run a simple Minecraft server, but as time's gone by we've moved to a more sophisticated configuration. We started out by switching from vanilla Minecraft to (plug-in friendly) CraftBukkit and from there we've experimented with various plug-ins that improved security or added enhancements to the game.

We've always used the Linux startup script from the Minecraft wiki, and this gives us the added benefit of easy world backups without having to shut the service down. Backups are triggered from an entry in the crontab, as below.

10 23 * * * /etc/init.d/minecraft backup

Here the backup option is triggered at 23:10 every day and generates the following entries in the Minecraft server log.

2013-12-01 23:10:01 [INFO] [Server ] SERVER BACKUP STARTING. Server going readonly...
2013-12-01 23:10:01 [INFO] CONSOLE: Disabled level saving..
2013-12-01 23:10:01 [INFO] CONSOLE: Forcing save..
2013-12-01 23:10:03 [INFO] CONSOLE: Save complete.
2013-12-01 23:10:19 [INFO] CONSOLE: Enabled level saving..
2013-12-01 23:10:19 [INFO] [Server ] SERVER BACKUP ENDED. Server going read-write...  

MultiWorld Backup

It wasn't long before we had the Multi-World plug-in installed and a dozen new worlds linked in. My son built a linking portal with teleports to all of the new worlds, and it was great, but it became apparent that these weren't being backed up by the startup script.

The wiki script covers just the main world, which is defined in the #Settings section at the top of the file (see example below).

#Settings
SERVICE='craftbukkit.jar'
OPTIONS='nogui'
USERNAME='root'
WORLD='Plop'

OTHERWORLDS='creative hungergames spawn griefcity'
MCPATH='/opt/minecraft'
BACKUPPATH='/TimeMachine/DailyBackup/minecraft'
MAXHEAP=2048
MINHEAP=1024
HISTORY=1024
CPU_COUNT=1

I added a new variable called 'OTHERWORLDS' and populated it with a list of the worlds that were being missed. (Note: these are separated by spaces, so make sure the world names don't contain spaces themselves.)

Next I altered the backup section of the script to loop through the OTHERWORLDS list and add each world directory to the backup tar file (see script below).

mc_backup() {
  mc_saveoff

  NOW=`date "+%Y-%m-%d_%Hh%M"`
  BACKUP_FILE="$BACKUPPATH/${WORLD}_${NOW}.tar"

  echo "Backing up minecraft world..."
  #as_user "cd $MCPATH && cp -r $WORLD $BACKUPPATH/${WORLD}_`date "+%Y.%m.%d_%H.%M"`"
  as_user "tar -C \"$MCPATH\" -cf \"$BACKUP_FILE\" $WORLD"

  # Append each of the other worlds to the same tar file
  for i in $OTHERWORLDS
  do
    as_user "tar -C \"$MCPATH\" -rf \"$BACKUP_FILE\" $i"
  done

  echo "Backing up $SERVICE"
  as_user "tar -C \"$MCPATH\" -rf \"$BACKUP_FILE\" $SERVICE"
  #as_user "cp \"$MCPATH/$SERVICE\" \"$BACKUPPATH/minecraft_server_${NOW}.jar\""

  mc_saveon

  echo "Compressing backup..."
  as_user "gzip -f \"$BACKUP_FILE\""
  echo "Done."
}

Now when the backup runs, the main world is backed up to a daily tar file and then the other worlds are appended to it. (The tar file is named after your main world plus a timestamp.)
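
If you want to check what actually made it into the archive, tar can list the contents without extracting anything. The filename here is just an example based on my settings above:

tar -tzf /TimeMachine/DailyBackup/minecraft/Plop_2013-12-01_23h10.tar.gz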

Possible Updates

The only real problem with this script is that it doesn't cater for world names containing spaces. This was an issue I initially struggled to fix, and in the end I took the easy option and renamed the world to remove the offending character.

But as is often the case the script is good enough, so I have no plans to fix it.
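
If anyone does fancy tackling it, I suspect a bash array would do the job, something like this (untested, and with made-up world names):

# Quoted array entries can safely contain spaces
OTHERWORLDS=("creative" "hunger games" "grief city")

for i in "${OTHERWORLDS[@]}"
do
  as_user "tar -C \"$MCPATH\" -rf \"$BACKUP_FILE\" \"$i\""
done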

Update

An improved method can be found here: http://theperfectbeast.blogspot.co.uk/p/linux-minecraft-server-script.html

Friday, 4 October 2013

OSX Time Machine kills old disks

Round 1

Earlier this year I decided to replace the disk in my MacBook Pro with an SSD. The HDD was only a year old, but I was already having problems that required frequent fixing in Disk Utility, and the odd clicking noises it had started making left me feeling nervous.

I decided to back up ASAP (once I'd fixed the disk again) and set about manually copying my data to an external disk in a Finder window. After it kept failing on a bad block I decided to try updating my Time Machine backup instead. It ran for a while and then the machine froze, forcing me to reboot. I then found that the disk was corrupted: Disk Utility could see it but refused to touch it.

I bought a copy of Stellar Phoenix Mac Recovery because the demo version listed all my files, but after recovering about 200GB they all proved unusable. Other attempts were equally unsuccessful, with days of processing not finding any files whatsoever. Buying recovery software has proven to be a huge waste of money in my experience! (I found the same when trying to recover ReiserFS partitions.)

So I ran fsck (details here) to see if it would fix it...

admin$ fsck /dev/disk1s2 -f

BAD SUPER BLOCK: MAGIC NUMBER WRONG

LOOK FOR ALTERNATIVE SUPERBLOCKS? [yn] y


CANNOT READ: BLK 16585216

CONTINUE? [yn] y

THE FOLLOWING DISK SECTORS COULD NOT BE READ: 16585216, 16585217, 16585218, 16585219, 16585220, 16585221, 16585222, 16585223,

CANNOT READ: BLK 567944160

CONTINUE? [yn] y

THE FOLLOWING DISK SECTORS COULD NOT BE READ: 567944168, 567944169, 567944170, 567944171, 567944172, 567944173, 567944174, 567944175,

SEARCH FOR ALTERNATE SUPER-BLOCK FAILED. YOU MUST USE THE
-b OPTION TO FSCK TO SPECIFY THE LOCATION OF AN ALTERNATE
SUPER-BLOCK TO SUPPLY NEEDED INFORMATION; SEE fsck(8).

So that didn't work then! (I haven't got a clue where I'd find an alternative super-block.)

Round 2

This last weekend I bought a new USB portable disk to use as a new Time Machine volume. My wife's MacBook hadn't been backed up since earlier this year, so I decided that was the first machine to try it out on.

I hooked up the drive, kicked off Time Machine and left it running. I noticed after about an hour that it was stuck at 48%, so I attempted to kill it and start again, but the MacBook refused to respond. In the end I rebooted the machine but wasn't able to get past a blank grey startup screen. I booted from a startup CD and ran Disk Utility, but it refused to see the internal disk.

In the end I replaced the HDD (which was 4½ years old) and recovered from her old Time Machine backup. So that's another internal disk that Time Machine pushed over the edge, and this time it refused to show as a drive.

I'm definitely reaching the conclusion that you shouldn't use Time Machine if you have any doubts about the integrity of your hard disk. Make sure you back up regularly and you'll not lose much.

You do back up, don't you?

Update (10th Oct)

I've found that Disk Warrior 4.4 can cope with a faulty disk and will mount this disk in read-only mode so that you can copy off your files. It wasn't without its problems though: the failing disk caused copies to occasionally hang on certain files.

Tuesday, 6 August 2013

Automatic for the People - Part 2

Down in the Dumps

I run a few systems that use databases, and these (I'd assumed) were being backed up by my weekly backup script. But just copying the system files while the database was running proved to be a great way of losing the last few records should you ever need to restore. And I'd invariably be forced to run myisamchk on the recovered database before MySQL was happy.

After this happened a few times I started dumping my databases on a daily basis. A dump is not something nasty, but a flat file containing the database structure and contents in an SQL format.


- Example for MySQL
For each database you have, add a line like the following to your backup script:-

mysqldump wiki --password=iliketurtles --add-drop-table > /var/backups/wiki.sql

Here I'm dumping my wiki database, and it assumes I have a MySQL admin user called root with the password 'iliketurtles'. For convenience the --add-drop-table option makes the dump drop each table before recreating it, so recovery would be as simple as:-

   pingu:=# mysql -u root -p wiki < wiki.sql

NB. You would need to ensure that a database called wiki exists or it will throw an error, but an empty one will do.
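
Creating that empty database first is a one-liner:

   pingu:=# mysql -u root -p -e "CREATE DATABASE wiki;"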

- Example for PostgreSQL
For each database add a line as follows to your backup script:-

pg_dump -Fc davical > /var/backups/davical.pgdump

This also assumes you have a superuser called root in PostgreSQL.

NB. Again the database must exist before you can recover, then you would just enter the following command:-

   pingu:=# pg_restore -d davical davical.pgdump
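
And if the davical database doesn't exist yet, PostgreSQL's createdb utility (run as a Postgres superuser) will make an empty one for you first:

   pingu:=# createdb davical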

So that's two very different database engines taken care of. Just duplicate these commands for all the databases you want to back up.
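
Or, if you have a lot of them, wrap the dumps in a loop in your backup script. Something like this, with the database names as placeholders:

# Dump each MySQL database to its own file
for DB in wiki blog gallery
do
  mysqldump "$DB" --password=iliketurtles --add-drop-table > "/var/backups/$DB.sql"
done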

Daily Increments

The weekly backup I described last time has served me well for many years, and the few times that I've needed to recover anything have been a success. But more recently my friend Grunders (who'd been storing OpenOffice files in his home directory over a VPN) lost his files when his hard disk failed. I offered him the backup from the weekend, but he said it was no good because he'd made too many changes since.

Luck was with him though: that old trick of turning the computer off and on again brought the drive back to life long enough to retrieve his missing files. However, we went on to convince ourselves that we needed a daily incremental backup. My initial attempt at writing one simply failed, as I'd tied myself in bash syntax knots. But a few days later Grunders came to the rescue with the following code:-

SOURCE=/home/
TARGET=$(date +%Y.%m.%d-%H%M)
RUNTIME=$(date +%Y%m%d%H%M)

cd "$SOURCE"
# Split on newlines only, so directory names survive the loop
IFS="
"
LIST=`ls`
for DirUser in $LIST
do
  echo $DirUser
  cd /home/$DirUser
  # Tar up everything (bar hidden files) changed since the last run
  find * \( ! -regex '.*/\..*' \) -newer /root/REFTIME -print0 | xargs -0 tar --no-recursion -cpf /DailyBackup/$DirUser/$TARGET.tar >> /root/dailyincbackup.log
  if [ -e /DailyBackup/$DirUser/$TARGET.tar ]
  then
    mkdir /TimeMachine/DailyBackup/$DirUser/$TARGET
    tar -xvf /DailyBackup/$DirUser/$TARGET.tar -C /TimeMachine/DailyBackup/$DirUser/$TARGET
    rm /DailyBackup/$DirUser/$TARGET.tar
  fi
done
# Stamp the reference file with this run's start time, ready for tomorrow
touch -t $RUNTIME /root/REFTIME

It's actually a little bit odd: the script uses find to locate files that have changed since the date held in /root/REFTIME and creates a tar file of the changes. (The regex part excludes hidden files, such as those that Apple Macs add.) Then, to avoid accumulating tar files, he untars the resulting file and deletes it. Obviously this is not ideal, but there is an alternative that uses a cp command:-

find . \( ! -regex '.*/\..*' \) -newer /root/REFTIME -exec cp -Ra {} /DailyBackup/$DirUser/$TARGET \;

The receiving directory must exist for this to work, and it may leave empty directories if no changes were detected, so I'll leave the script as it is for now.

Thanks G, all it needs now is something to purge old files.
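
A find one-liner at the end of the daily script would probably do the trick, along these lines (untested, and the 30-day retention is just a suggestion):

# Delete daily snapshot directories more than 30 days old
find /TimeMachine/DailyBackup/ -mindepth 2 -maxdepth 2 -mtime +30 -exec rm -r {} \;

Run it with -print in place of the -exec part first, just to be sure it only matches what you expect.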

Thursday, 1 August 2013

Automatic for the People - Part 1

DYBB

Many, many years ago I took over a Support Manager role at a snack factory and I inherited my predecessor's office. Chris (who I'd met a few years before when I'd worked there in my summer holidays) now worked at head office, but was around a few days a week to help me learn the systems. He'd left me a lot of his old stuff, and there was an A4 (approximately US letter size) page stuck to the wall above the HP Deskjet 500 printer with just the letters 'DYBB' filling the page. I pondered for a while what it could mean (I was sure it was nothing to do with the Boy Scouts and "Doing Your Best") and eventually I just asked him.

Chris smiled and replied, "That's what I tell all my users... Do Your Bloody Backups!"

Of course this was back in the days of 386 and 486 computers running DOS, and hardly anyone stored files on the Novell file server. Instead there was a portable Colorado Trakker tape backup unit that the finance department looked after, and it was up to the few PC owners to run weekly backups to QIC tapes. Some did, some didn't, and many just relied on floppy disks.

Technology has a way of picking the least opportune moment to catch us out (see my previous blog entry). Over the last ten years of running a home server I've learnt that if you're passionate about keeping your data, then backups need to be automatic.


Keeping it Simple

I took the decision many years ago that I'd just perform a backup once a week for my /data, /etc, /var and /home directories. I have a 2TB backup disk with a 50GB partition and a second partition for the remainder. There are no entries in the fstab file, so it's not mounted automatically, and it uses standard ext3 formatting.

1 - The backup of /data is just a straight copy of new and updated files onto the bigger backup partition. The disk is temporarily mounted at /mnt and then a simple copy refreshes the files and directories within it. It does have a few disadvantages, but I love the simplicity of it, and so far it's served me well.

Here's the bit of script that does this:-

Drive="/dev/sda2"
mount -t ext3 $Drive /mnt
# CHECK MOUNTED OK
OK=`df | awk '/[ \/]mnt/ {print $1}'`
if [ "$OK" = "$Drive" ]; then
  # Recursive, verbose copy of new and updated files only
  cp -Rvu /data /mnt
  umount /mnt
  sleep 120
  # Spin the backup disk down
  hdparm -y /dev/sda
fi

It uses df and awk to check that the drive mounted OK before continuing, then cp with the -u flag so that only new and updated files are copied. At the end it unmounts the drive and uses hdparm to put it to sleep.

2 - The /etc, /var and /home directories contain files where I wanted to keep a few versions, because updates are common. So here I mount the smaller backup partition and implement a grandfather-father-son approach. 'Son' is a directory containing a full copy of the files, which gets tarred and gzipped the next week to become the 'father' file, and the week after that the 'grandfather'. I hate untarring files, so this means I can conveniently retrieve recent versions of files from the /son directory. In practice I've hardly ever had to resort to plundering the older tar files.

Here's the bit of script that does this:-

Drive="/dev/sda2"
mount -t ext3 $Drive /mnt
# CHECK MOUNTED OK
OK=`df | awk '/[ \/]mnt/ {print $1}'`
if [ "$OK" = "$Drive" ]; then
  # if father exists, rename it to grandfather
  cd /mnt/
  if [ -e father.tar.gz ] ; then
    mv father.tar.gz grandfather.tar.gz
  fi

  # if /son exists, tar it up to become father.tar.gz
  if [ -d son ] ; then
    tar -zcf father.tar.gz son/*
    rm -r son
  fi

  # build a fresh son directory
  mkdir son
  cd son
  mkdir etc var home root

  cp -a /etc/* /mnt/son/etc/
  cp -a /var/* /mnt/son/var/
  cp -a /home/* /mnt/son/home/
  cp -a /root/* /mnt/son/root/

  # Unmount drive and spin it down
  cd /
  umount /mnt
  sleep 120
  hdparm -y /dev/sda
fi

Again it uses df and awk to check that the drive mounted OK, then performs the grandfather-father-son backup described above. Finally it unmounts the drive and puts it to sleep.
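
All that remains is to make it automatic (DYBB!) with an entry in root's crontab. The script path below is just an example, so point it at wherever you keep yours:

# Run the weekly backup at 02:30 every Sunday morning
30 2 * * 0 /root/scripts/weeklybackup.sh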

I've completely ignored database backups and incremental backups for now; I'll continue with those next time.