
Friday, 6 June 2014

Adding a Second Minecraft Server in Linux

Introduction

We've been using the Linux startup script from the Minecraft wiki for some time now and generally it works a treat, but recently I've wanted to set up a second server for my two younger sons to play on, away from the highly modified worlds that my oldest son had set up. The script can't cope with two services running at the same time, but with a few tweaks you can run as many as you like. The following instructions create a new game server called Pinga using a service called kidcraft.

Creating the New Minecraft Installation

I wanted it to use vanilla Minecraft version 1.5.2, which was the very first version that we tried, but these steps should work for any version, including CraftBukkit or other modded servers.

1. Make a copy of your existing Minecraft installation.

cp -Rv /opt/minecraft /opt/kidcraft

2. Remove the old world directories from the copy (in my case the world was called 'Plop')

cd /opt/kidcraft
rm -R Plop

3. Edit your server.properties file and change the following settings:-
   - server-port : 25566 (the default is 25565 so pick a number that isn't being used)
   - server-name : Pinga (enter your new server name)
   - level-name : Pinga (will be used to name your new world)
   - gamemode : 1 (Creative - my kids don't want to play survival, you can leave this as 0)

Also make sure any other settings you've altered from standard are set back if you don't want them carried over.
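
For reference, the changed lines in server.properties should end up looking something like this:

server-port=25566
server-name=Pinga
level-name=Pinga
gamemode=1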

4. Empty the ops, banned-ips, banned-players and white-list files, or delete them (they'll get recreated when you start the server).

5. Start the server up and allow it to create a new world.

6. Check you can connect to it in the game using the Multiplayer option.

7. Close down the server.

Create a New Backup Folder

If you use the backup option you'll want to specify a new location for the files so they don't get mixed in with your existing backups. Mine were going into the /DailyBackup/Minecraft folder.

8. Go to your backup folder and create a new sub-directory

cd /DailyBackup
mkdir kidcraft

Modifying the Startup Script

The startup script uses the 'screen' utility as the control mechanism, and this also allows you to access the console if required. You will need to make some changes to the new script so that the commands connect to the correct service.

9. Copy the startup script

cp /etc/init.d/minecraft /etc/init.d/kidcraft

10. Ensure it can be executed

chmod 755 /etc/init.d/kidcraft

11. Edit the new startup script and modify the following:-
   - WORLD : 'Pinga'
   - MCPATH : '/opt/kidcraft'
   - BACKUPPATH : '/DailyBackup/kidcraft'

12. Then alter the startup line in the mc_start() function

as_user "cd $MCPATH && screen -h $HISTORY -dmS kidcraft $INVOCATION"

13. Repeat step 12 for the following functions (an example is shown after the list):-
   - mc_saveoff()
   - mc_saveon()
   - mc_stop()
   - mc_command()
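
For reference, in the stock wiki script each of these functions talks to the screen session by name, so in each case the change is just the session name. The lines end up looking something like this (the exact command text varies between versions of the script):

as_user "screen -p 0 -S kidcraft -X eval 'stuff \"save-off\"\015'"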

14. Save and exit the file.

You should be able to start and stop the new service using this new script in the normal way.

e.g. /etc/init.d/kidcraft start

You can also access the console using the following command.

screen -r -S kidcraft

(NB: press Ctrl+A then D to detach and leave the server running)

One more thing...

Nearly forgot... if you want to make this server available over the internet, don't forget to set up another port forwarding rule on your router.
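
Before touching the router it's worth checking the new service is actually listening on its port. Assuming you kept 25566 from step 3, something like this should show it:

netstat -tlnp | grep 25566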

Update

An improved method can be found here http://theperfectbeast.blogspot.co.uk/p/linux-minecraft-server-script.html

Tuesday, 6 August 2013

Automatic for the People - Part 2

Down in the Dumps

I run a few systems that use databases, and these (I'd assumed) were being backed up by my weekly backup script. But just copying the files while the database was running proved to be a great way of losing the last few records should you need to restore, and I'd invariably be forced to run myisamchk on the recovered database before MySQL was happy.

After this happened a few times I started dumping my databases on a daily basis. A dump is not something nasty, but a flat file containing the database structure and contents in an SQL format.


- Example for MySQL
For each database you have, add a line like the following to your backup script:-

mysqldump --user=root --password=iliketurtles --add-drop-table wiki > /var/backups/wiki.sql

Here I'm dumping my wiki database, and it assumes I have an admin user for MySQL called root with the password 'iliketurtles'. The --add-drop-table option makes the dump drop and recreate each table on restore, so recovery would be as simple as:-

   pingu:=# mysql -u root -p wiki < /var/backups/wiki.sql

NB. You would need to ensure that a database called wiki exists first or it will throw an error, but an empty one will do.
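
If you need to create it, a one-liner from the shell does the job:

   pingu:=# mysql -u root -p -e "CREATE DATABASE wiki;"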

- Example for PostgreSQL
For each database add a line as follows to your backup script:-

pg_dump -Fc davical > /var/backups/davical.pgdump

This also assumes you have a superuser called root in PostgreSQL.

NB. Again the database must exist before you can recover, then you would just enter the following command:-

   pingu:=# pg_restore -d davical /var/backups/davical.pgdump
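
PostgreSQL ships with a createdb utility, so creating the empty database first is just:

   pingu:=# createdb davical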

So that's two very different database engines taken care of. Just duplicate these commands for all the databases you want to back up.
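
Rather than duplicating lines by hand, a small loop keeps the script tidy. Here's a sketch for the MySQL side - the database names are made up for illustration, and the credentials are the ones from the example above:

for DB in wiki blog gallery
do
  mysqldump --user=root --password=iliketurtles --add-drop-table $DB > /var/backups/$DB.sql
done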

Daily Increments

The weekly backup I described last time has served me well for many years, and the few times that I've needed to recover anything have been a success. But more recently my friend Grunders (who'd been storing OpenOffice files in his home directory over a VPN) lost his files when the hard disk failed. I offered him the backup from the weekend, but he said it was no good because he'd made too many changes since.

Luck was with him though: that old trick of turning the computer off and powering back on brought the drive back to life long enough to retrieve his missing files. However, we went on to convince ourselves that we needed a daily incremental backup. My initial attempt at writing one simply failed, as I'd tied myself in bash script syntax knots. But a few days later Grunders came to the rescue with the following code:-

SOURCE=/home/
TARGET=$(date +%Y.%m.%d-%H%M)
RUNTIME=$(date +%Y%m%d%H%M)

cd "$SOURCE"
# split the ls output on newlines only, so user names with spaces survive
IFS=$'\n'
LIST=$(ls)
for DirUser in $LIST
do
  echo "$DirUser"
  cd "/home/$DirUser"
  # tar up any non-hidden files changed since the last run (the REFTIME timestamp)
  find * \( ! -regex '.*/\..*' \) -newer /root/REFTIME -print0 | xargs -0 tar --no-recursion -cpf /DailyBackup/$DirUser/$TARGET.tar >> /root/dailyincbackup.log
  # tar refuses to create an empty archive, so the file only exists if something changed
  if [ -e /DailyBackup/$DirUser/$TARGET.tar ]
  then
    mkdir /TimeMachine/DailyBackup/$DirUser/$TARGET
    tar -xvf /DailyBackup/$DirUser/$TARGET.tar -C /TimeMachine/DailyBackup/$DirUser/$TARGET
    rm /DailyBackup/$DirUser/$TARGET.tar
  fi
done
# record this run's time as the reference point for the next run
touch -t $RUNTIME /root/REFTIME

It's actually a little bit odd: the script uses find to locate files that have changed since the timestamp on /root/REFTIME and creates a tar file of the changes (the regex part excludes hidden files, such as those that Apple Macs add). Then, to avoid having to dig through tar files later, he's untarred the resulting file and deleted it. Obviously this is not ideal, but there is an alternative way that uses a cp command:-

find . \( ! -regex '.*/\..*' \) -newer /root/REFTIME -exec cp -Ra {} /DailyBackup/$DirUser/$TARGET \;

The receiving directory must exist for this to work, and it may leave empty directories if no changes were detected, so I'll leave the script as it is for now.
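
If you did want the cp route, something like this inside the loop would handle both caveats - create the target directory first, then remove it again if nothing got copied into it (a sketch using GNU rmdir's ignore option):

mkdir -p /DailyBackup/$DirUser/$TARGET
find . \( ! -regex '.*/\..*' \) -newer /root/REFTIME -exec cp -Ra {} /DailyBackup/$DirUser/$TARGET \;
rmdir --ignore-fail-on-non-empty /DailyBackup/$DirUser/$TARGET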

Thanks G, all it needs now is something to purge old files.
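
For the purging, a find across the backup tree would probably do the trick. This sketch deletes anything more than 30 days old - pick whatever retention period suits you:

find /TimeMachine/DailyBackup -mindepth 2 -mtime +30 -delete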

Thursday, 1 August 2013

Automatic for the People - Part 1

DYBB

Many, many years ago I took over a Support Manager role at a snack factory and inherited my predecessor's office. Chris (who I'd met a few years before when I'd worked there in my summer holidays) now worked at head office, but was around a few days a week to help me learn the systems. He'd left me a lot of his old stuff, and there was an A4 (approx. US letter size) page stuck to the wall above the HP Deskjet 500 printer with just the letters 'DYBB' filling the page. I pondered for a while what it could mean (I was sure it was nothing to do with boy scouts and "Doing Your Best") and eventually I just asked him.

Chris smiled and replied, "That's what I tell all my users... Do Your Bloody Backups!"

Of course this was back in the days of 386 and 486 computers running DOS, and hardly anyone stored files on the Novell file server. So there was a portable Colorado Trakker tape backup unit that the finance department looked after, and it was up to the few PC owners to run weekly backups to QIC tapes. Some did, some didn't, and many just relied on floppy disks.

Technology has a way of picking the least opportune moment to catch us out (see my previous blog entry). Over the last ten years of running a home server I've learnt that if you're passionate about keeping your data, then backups need to be automatic.


Keeping it Simple

I took the decision many years ago that I'd just perform a backup once a week for my /data, /etc, /var and /home directories. I have a 2TB backup disk formatted with a 50GB partition, plus another partition for the remainder. There are no entries in the fstab file so it's not mounted automatically, and it uses standard ext3 formatting.

1 - The backup of /data is just a straight copy of new and updated files onto the bigger backup partition. The disk is temporarily mounted at /mnt and then a simple copy refreshes the files and directories within it. It does have a few disadvantages, but I love the simplicity of it, and so far it's served me well.

Here's the bit of script that does this:-

Drive="/dev/sda2"
mount -t ext3 $Drive /mnt
# CHECK MOUNTED OK
OK=`df | awk '/[ \/]mnt/ {print $1}'`
if [ "$OK" = "$Drive" ]; then
  cp -Rvu /data /mnt
  umount /mnt
  sleep 120
  hdparm -y /dev/sda
fi

It uses df and awk to check that the drive mounted OK before continuing, then cp to copy the data. At the end it unmounts the drive and uses hdparm to put it to sleep.

2 - The /etc, /var and /home directories contain files where I wanted to keep a few versions, because file updates are common. So here I mount the smaller backup partition and implement a grandfather-father-son approach. 'Son' is a directory containing a full copy of the files, which gets tarred and gzipped the next week to become the 'father', and later the 'grandfather' file. I hate untarring files, so this means I can conveniently retrieve recent versions of files from the /son directory. In practice I've hardly ever had to resort to plundering the older tar files.

Here's the bit of script that does this:-

Drive="/dev/sda2"
# CHECK MOUNTED OK
OK=`df | awk '/[ \/]mnt/ {print $1}'`
if [ "$OK" = "$Drive" ]; then
  # if exist father, rename to grandfather
  cd /mnt/
  if [ father.tar.gz ] ; then
    mv father.tar.gz grandfather.tar.gz
  fi

  # if exist /son then tar to father.tar.gz

  if [ son ] ; then
    tar -zcf father.tar.gz son/*
    rm son -R

  fi


  mkdir son

  cd son
  mkdir etc
  mkdir var
  mkdir home
  mkdir root

  cp /etc/* /mnt/son/etc/ -a

  cp /var/* /mnt/son/var/ -a
  cp /home/* /mnt/son/home/ -a
  cp /root/* /mnt/son/root/ -a

  # Unmount drive

  cd /
  umount /mnt
  sleep 120
  hdparm -y /dev/sda

fi

Again it uses df and awk to check the drive mounted OK, then performs the grandfather, father, son backup as described above. Finally it unmounts the drive and puts it to sleep.
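
To keep everything automatic in the DYBB spirit, the script just needs a crontab entry. Assuming you've saved it as /root/weeklybackup.sh (a name I've made up for illustration), a line like this in root's crontab (crontab -e) runs it every Sunday at 2am:

0 2 * * 0 /root/weeklybackup.sh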

I've completely ignored database backups and incremental backups for now; I'll continue with those next time.