Monday, 19 August 2013

PermissionsEx: Anti-griefing methods for Minecraft

More Grief

If you've already installed and configured grey lists using PermissionsEx (as described in my earlier post) then you are well aware of the protection this provides against those pesky griefers. Yeah, I know you can white-list your server, but a grey list lets you showcase your work to visitors without risk.

My son can be a little too trusting, and far too keen to attract new builders, and it always seems to be when we have friends of friends connected that trouble starts brewing. A few days ago the inevitable happened: a school pal and his friend paired up for an orgy of TNT abuse, dropping as many buildings as they could in full view of the regular builders. The server was taken down as soon as he figured out what was happening, and the permissions.yml file was altered to remove the offenders' building rights, but by then the damage had been done. One of the longest-standing buildings was hit so hard that it was reduced to a crater, all in about a minute's chaos!

PermissionsEx Revisited

All the griefing we've experienced so far has been the result of fire or TNT, with the latter obliterating buildings well beyond repair. It made sense after this attack to limit their use by removing them from the standard 'Builder' group's rights, but after well over an hour of altering permissions and testing we had only managed to remove the right to place TNT. It's really a case of trying different modifyworld restrictions in the permissions.yml file until you get a result.

The first thing to be aware of is that the order of the permission lines is important. The file is scanned top to bottom and as soon as a match is found it stops. This means restrictions must go before wildcard grants or they'll never be reached.

Builder:
      prefix: '&0(&8Builder&0)&7 '
      permissions:
      - -modifyworld.blocks.place.46
      - -modifyworld.bucket.empty.10
      - -modifyworld.bucket.fill.10
      - -modifyworld.items.pickup.259
      - -modifyworld.items.craft.259
      - -modifyworld.items.use.259.on.block.*
      - -modifyworld.items.use.259
      - modifyworld.*
      options:
          rank: '900'

NB. Our restrictions go before modifyworld.* and use a '-' (minus) prefix to mark them as denials rather than grants. The objects are listed by item/block ID (46 = TNT, 10 = lava, 259 = flint and steel).

Now when TNT (46) is placed by a builder the block gets picked straight back up again, and similarly the restrictions on flint and steel (259) stop you picking it up or using it. The bucket lines restrict filling and emptying of a liquid by its block ID, in this case lava (10), rather than by the lava bucket item itself (327).

You will also need to enable some options in the modifyworld.yml before this works.

item-restrictions: true
use-material-names: false
drop-restricted-item: true
item-use-check: true

I set use-material-names to false so that I could use item and block numbers, but you can leave this set to 'true' and use proper names. Then your restriction would read as per the example below:-

   - -modifyworld.blocks.place.TNT

Don't mix the two up or your permissions won't work. Also be aware that the plugins are very sensitive to formatting errors, so check your server log after making changes and, if necessary, run your config through a YAML validator.
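If you'd rather not paste the file into an online validator, a quick command line check works too. This is just a sketch and assumes Python with the PyYAML module is installed; run it from the PermissionsEx plugin directory and any parse error will point at the offending line:

   python -c "import yaml; yaml.safe_load(open('permissions.yml'))" && echo "YAML OK"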

In the meantime I'll work on the fire problem, and if anyone has any suggestions then please comment.

Tuesday, 6 August 2013

Automatic for the People - Part 2

Down in the Dumps

I run a few systems that use databases and these (I'd assumed) were being backed up using my weekly backup script. But just copying the system files while the database was running proved to be a great way of losing the last few records should you need to restore. And I'd invariably be forced to run myisamchk on the recovered database before mySQL was happy.

After this happened a few times I started dumping my databases on a daily basis. A dump is not something nasty, but a flat file containing the database structure and contents in an SQL format.


- Example for mySQL
For each database you have, add a line as follows to your backup script:-

mysqldump --user=root --password=iliketurtles --add-drop-table wiki > /var/backups/wiki.sql

Here I'm dumping my wiki database, and it assumes I have a mySQL admin user called root with the password 'iliketurtles'. For convenience the --add-drop-table switch adds drop table commands, so recovery is as simple as:-

   pingu:=# mysql -u root -p wiki < wiki.sql

NB. You would need to ensure that a database called wiki exists or it will throw an error, but an empty one will do.
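If it doesn't exist yet, creating an empty one is a one-liner (same root user and password prompt as before):

   pingu:=# mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS wiki"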

- Example for PostgreSQL
For each database add a line as follows to your backup script:-

pg_dump -Fc davical > /var/backups/davical.pgdump

This also assumes you have a superuser called root in PostgreSQL. The -Fc switch produces a custom-format (compressed) dump that pg_restore can read.

NB. Again the database must exist before you can recover; then you would just enter the following command:-

   pingu:=# pg_restore -d davical davical.pgdump
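As with mySQL, you can create the empty database first if it's missing; PostgreSQL ships a small wrapper for this (assuming your superuser is allowed to run it):

   pingu:=# createdb davical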

So that's two very different database engines taken care of. Just duplicate these commands for all the databases you want to back up.
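Rather than duplicating the mysqldump line by hand you could loop over a list. Here's a minimal sketch for the mySQL side, where 'wiki' is real but 'blog' and 'shop' are just stand-ins for whatever databases you run:

for DB in wiki blog shop
do
  mysqldump --user=root --password=iliketurtles --add-drop-table "$DB" > /var/backups/"$DB".sql
done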

Daily Increments

The weekly backup I described last time has served me well for many years, and the few times that I've needed to recover anything have been a success. But more recently my friend Grunders (who'd been storing OpenOffice files in his home directory over a VPN) lost his files when his hard disk failed. I offered him the backup from the weekend, but he said it was no good because he'd made too many changes since.

Luck was with him though: that old trick of turning the computer off and on again brought the drive back to life long enough to retrieve his missing files. However, we went on to convince ourselves that we needed a daily incremental backup. My initial attempt simply failed as I'd tied myself in bash script syntax knots, but a few days later Grunders came to the rescue with the following code:-

# Daily incremental backup of /home, one tar per user, relative to the
# timestamp held in /root/REFTIME
SOURCE=/home/
TARGET=$(date +%Y.%m.%d-%H%M)
RUNTIME=$(date +%Y%m%d%H%M)

cd "$SOURCE"
# Split the directory listing on newlines so user names with spaces survive
IFS="
"
LIST=`ls`
for i in $LIST
do
  DirUser="$i"
  echo $DirUser
  cd /home/$DirUser
  # Find files changed since the last run (skipping hidden files) and tar them up
  find * \( ! -regex '.*/\..*' \) -newer /root/REFTIME -print0 | xargs -0 tar --no-recursion -cpf /DailyBackup/$DirUser/$TARGET.tar >> /root/dailyincbackup.log
  # If a tar was produced, unpack it into a dated folder and remove the tar
  if [ -e /DailyBackup/$DirUser/$TARGET.tar ]
  then
    mkdir /TimeMachine/DailyBackup/$DirUser/$TARGET
    tar -xvf /DailyBackup/$DirUser/$TARGET.tar -C /TimeMachine/DailyBackup/$DirUser/$TARGET
    rm /DailyBackup/$DirUser/$TARGET.tar
  fi
done
# Record this run's timestamp for next time
touch -t $RUNTIME /root/REFTIME

It's actually a little bit odd, as the script uses find to locate files that have changed since the date held in /root/REFTIME and creates a tar file of the changes. (The regex part skips hidden files, such as those that Apple Macs add.) Then, to avoid being left with tar files, it untars the result into a dated directory and deletes the tar. Obviously this is not ideal, but there is an alternative that uses a cp command instead:-

find . \( ! -regex '.*/\..*' \) -newer /root/REFTIME -exec cp -Ra {} /DailyBackup/$DirUser/$TARGET \;

The receiving directory must exist for this to work, and it may leave empty directories if no changes were detected, so I'll leave the script as it is for now.
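If you did switch to the cp approach, the only extra step needed in the loop is creating the receiving directory first, something like:

mkdir -p /DailyBackup/$DirUser/$TARGET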

Thanks G, all it needs now is something to purge old files.
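If anyone needs a starting point for the purge, something along these lines should do it. This is only a rough sketch, assuming backups older than 90 days can go; test it with -print in place of -delete before trusting it:

   find /TimeMachine/DailyBackup -mtime +90 -delete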

Sunday, 4 August 2013

Minecraft Grey Lists

Anti-Griefing

A few weeks ago my son's Minecraft world was trashed (buildings completely obliterated) by friends of a friend who visited our server. Thankfully we had a backup to restore from, and to ensure it didn't happen again we turned on white-lists in the server.properties file.

But the story doesn't end there... a few days ago number one son asked me if I knew anything about grey lists, and told me roughly what they're supposed to do. Apparently if you set up a grey list then anyone can join the server, but they're restricted from building. This was new to me and seemed ideal: he'd be able to pass the server connection details around freely without worrying about being griefed again.

Grey Lists

The vanilla server only supports black lists (to ban people) or white lists (to allow access only to those listed). There's no setting in the properties file to enable grey lists, so I googled until I found a plugin called PermissionsEx. This lets you define access groups with appropriate rights and assign users to them.

But you can't add plugins to the vanilla Minecraft server... there's no plugin directory, so you have to use the CraftBukkit server instead. (Click here for the download page.)

While I'm at it, here's the link for the PermissionsEx plugin. (Click here for download page)

Installing Bukkit & PermissionsEx

Here are the steps we went through:
  1. Edit your server.properties file and disable white-lists if you were using them.
  2. Download the craftbukkit jar file and copy it into your minecraft server directory.
  3. Run the craftbukkit server program to create the extra files and folders (it also converts your world files).
  4. Download the PermissionsEx jar file and copy it into the newly created plugins directory.
  5. Restart the server and a PermissionsEx sub-directory will be created.
  6. Change to this directory but leave config.yml alone.
  7. Edit the permissions.yml file and replace the contents with the following lines, then add your users into the end section (in line with the examples).
groups:
    default:
        default: true
        options:
            rank: '1000'
        permissions:
        - modifyworld.chat
    Builder:
        prefix: '&0(&8Builder&0)&7 '
        permissions:
        - modifyworld.*
        options:
            rank: '900'
    Moderator:
        prefix: '&0(&1Moderator&0)&7 '
        permissions:
        - -modifyworld.mobtarget.*
        - modifyworld.*
        options:
            rank: '100'
    Admin:
        prefix: '&0(&4Admins&0)&7 '
        permissions:
        - -modifyworld.mobtarget.*
        - modifyworld.*
        - permissions.*
        options:
            rank: '1'
users:
    SomePlayerName:
        group:
        - Builder
    YourPlayerName:
        group:
        - Admin

NB. Ensure you don't add any spaces before 'groups' or 'users' or the file will be ignored.

We found that if your name is in the ops.txt file then you automatically get admin level.

Follow this link for further details on configuration and how to type /pex commands when in the game.

Finally I updated the /etc/init.d/minecraft startup script to use the CraftBukkit jar file.
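For reference, the only real change in that startup script is the java line. Something along these lines does the job; the jar name and memory setting are just examples, so match them to your own download and server:

   java -Xmx1024M -jar craftbukkit.jar nogui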

Thursday, 1 August 2013

Automatic for the People - Part 1

DYBB

Many, many years ago I took over a Support Manager role at a snack factory and I inherited my predecessor's office. Chris (who I'd met a few years before when I'd worked there in my summer holidays) now worked at head office, but was around a few days a week to help me learn the systems. He'd left me a lot of his old stuff and there was an A4 (approx US letter size) page stuck to the wall above the HP Deskjet 500 printer that just had the letters 'DYBB' filling the page. I pondered for a while what this should mean (I was sure it was nothing to do with boy scouts and "Doing Your Best") and eventually I just asked him.

Chris smiled and replied, "That's what I tell all my users... Do Your Bloody Backups!"

Of course this was back in the days of 386 and 486 computers running DOS, and hardly anyone stored files on the Novell file server. Therefore there was a portable Colorado Trakker tape backup unit that the finance departments looked after, and it was up to the few PC owners to run weekly backups to QIC tapes. Some did, some didn't, and many just relied on floppy disks.

Technology has a way of picking the least opportune moment to catch us out (see my previous blog entry). Over the last ten years of running a home server I've learnt that if you're passionate about keeping your data, then backups need to be automatic.


Keeping it Simple

I took the decision many years ago that I'd just perform a backup once a week for my /data, /etc, /var and /home directories. I have a 2TB backup disk formatted with a 50GB partition, plus another partition for the remainder. There are no entries in the fstab file so it's not mounted automatically, and it uses standard EXT3 formatting.

1 - Backup of data is just a straight copy of new and updated files onto the bigger backup partition. The disk is temporarily mounted to /mnt and then a simple copy refreshes the files and directories within it. It does have a few disadvantages but I love the simplicity of it, and so far it's served me well.

Here's the bit of script that does this:-

Drive="/dev/sda2"
mount -t ext3 $Drive /mnt
# CHECK MOUNTED OK
OK=`df | awk '/[ \/]mnt/ {print $1}'`
if [ "$OK" = "$Drive" ]; then
  cp -Rvu /data /mnt
  umount /mnt
  sleep 120
  hdparm -y /dev/sda
fi

It uses df and awk to check that the drive mounted OK before continuing, then cp (with the -u switch, so only new and updated files are copied) to refresh the data. At the end it unmounts the drive and uses hdparm to put it to sleep.

2 - The /etc, /var and /home directories contain files where I wanted to keep a few versions, because file updates are common. So here I mount the smaller backup partition and implement a grandfather, father, son approach. Son is a directory containing a full copy of the files, which gets gzip-tarred the next week to become the 'father' file, and later the 'grandfather' file. I hate untarring files, so this means I can conveniently retrieve recent versions of files from the /son directory if needed. In practice I've hardly ever had to resort to plundering the older tar files.

Here's the bit of script that does this:-

Drive="/dev/sda2"
# Mount the backup partition, then check it mounted OK
mount -t ext3 $Drive /mnt
OK=`df | awk '/[ \/]mnt/ {print $1}'`
if [ "$OK" = "$Drive" ]; then
  cd /mnt/

  # If a father archive exists, it becomes the grandfather
  if [ -e father.tar.gz ] ; then
    mv father.tar.gz grandfather.tar.gz
  fi

  # If a son directory exists, tar it up to become the father
  if [ -d son ] ; then
    tar -zcf father.tar.gz son/*
    rm -R son
  fi

  # Create a fresh son directory and copy this week's files into it
  mkdir son
  cd son
  mkdir etc var home root

  cp -a /etc/* /mnt/son/etc/
  cp -a /var/* /mnt/son/var/
  cp -a /home/* /mnt/son/home/
  cp -a /root/* /mnt/son/root/

  # Unmount the drive and spin it down
  cd /
  umount /mnt
  sleep 120
  hdparm -y /dev/sda
fi

Again it mounts the drive and uses df and awk to check it mounted OK, then performs the grandfather, father, son backup as described above. Finally it unmounts the drive and puts it to sleep.
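To keep the whole thing automatic, a weekly crontab entry is all that's needed; something like the line below, where the script path and the 3:30am Sunday slot are purely illustrative:

   30 3 * * 0 /root/weeklybackup.sh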

I've completely ignored database backups and incremental backups for now; I'll continue with those next time.