Xen vs OpenVZ

Here are some things I have been able to measure since changing from OpenVZ to Xen.

BackupPC backs up 62 hosts and reports “Pool is 348.76GB comprising 2086499 files and 4369 directories”, using 6.5 million inodes on XFS on its own separate set of disks.
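
Inode usage per filesystem can be checked with df; the pool mount point here is only an example,

# -i reports inode usage rather than block usage
df -i /var/lib/backuppc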

On OpenVZ, a BackupPC run used to send the host load into a frenzy of 10+, with the guest at 2-3, and whilst BackupPC was running, all guests felt sluggish.
On Xen, the host and all other VMs are completely unaffected, with the guest at 3-5.

The BackupPC nightly job takes twice as long on Xen as it did on OpenVZ, so I have adjusted the nightly to be split over 2 days instead of running all in 1.
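
In BackupPC this split is controlled by $Conf{BackupPCNightlyPeriod}; a minimal sketch, assuming the Debian config path,

# /etc/backuppc/config.pl: spread the nightly pool traversal over 2 runs
$Conf{BackupPCNightlyPeriod} = 2;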

All backups are taking ~20% longer with Xen.

Memory usage on the host with OpenVZ used to climb to 12GB+ (out of 16GB total) when the BackupPC nightly was running, and then slowly dissipate during the day. The memory wasn’t counted in the cache/buffer section either; it was marked as actually in use, yet no application was using all that RAM. It was listed as “slab” RAM (tip: run “slabtop”), which is what XFS uses as cache, but the free command sees that RAM as used, which is confusing. I think, as the host had 14GB+ free most of the time, XFS simply used as much as it could.
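
To see where that “used” RAM actually sits,

# slab allocations are counted as used memory by free(1)
grep Slab /proc/meminfo
# or interactively, sorted by cache size
slabtop -s c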

Memory usage on the host whilst using Xen was obviously not affected by the BackupPC guest. The guest has only 2GB of RAM in total, so its usage would climb to 1.5GB, in line with expectations; 1GB+ of this was slab memory.

In general I have noticed most VMs feel slower with Xen, especially with disk reads/writes, even though hdparm tests show raw throughput has remained unchanged (~80MB/s). I have also noticed that when one VM is busy, the other guests see no noticeable reduction in performance on Xen. This was my main reason for switching, as BackupPC was bringing all other guests to a near halt on OpenVZ.
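
The hdparm test in question is a simple buffered sequential read; the device names are examples,

# on the host
hdparm -t /dev/sda
# inside a Xen PV guest the disk typically appears as /dev/xvda
hdparm -t /dev/xvda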

I think the reason for the reduction in performance is that each guest now has to manage its own disk cache, whereas under OpenVZ the majority of data was mounted on the host, which had plenty of free RAM available for caching. This is a fundamental difference between containers and hypervisors.

I have reduced the MySQL guest's RAM from 4GB to 1GB. The Zabbix database alone is 14GB, yet I haven't noticed any difference in performance for Zabbix or its frontend with either the move to Xen or the RAM change. The daily backup on the SQL server (a dump of all databases) takes roughly two-thirds longer to complete (~1min before, now ~1m40s).
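
A minimal sketch of such a dump-everything job; the flags and output path are assumptions, not the actual script,

mysqldump --all-databases --single-transaction | gzip > /var/backups/mysql/all-databases.sql.gz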

The small number of timed tests I could compare between the two show a performance loss on Xen, but with this being a true hypervisor, that was expected. I didn't expect BackupPC to suffer as much as it did during the nightly, but I believe this is because it no longer has the freedom of 14GB of free RAM to use as cache.

I have also lost the RAM flexibility that comes with OpenVZ, but overall I am happy with the move back to Xen. I originally moved to OpenVZ because my DVB-T card didn't function when booting the Xen kernel, but I plan to replace MythTV with a Sky box in the near future and was getting frustrated with BackupPC on OpenVZ.

logcheck — amavisd-new filter

Tested using Debian 7 (Wheezy). The filter is to be added to /etc/logcheck/ignore.d.server/.
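
As a rough illustration only (a sketch; exact patterns depend on your amavisd-new version and log format), rules in such a file are one extended regex per full syslog line, along these lines,

^\w{3} [ :0-9]{11} [._[:alnum:]-]+ amavis\[[0-9]+\]: \([0-9-]+\) Passed (CLEAN|SPAM|SPAMMY|BAD-HEADER).*$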

I have quite a few of these custom filters; I'll post some more at another time.

Roundcube Plugin: Defense

With the new Roundcube 0.9-beta I found some plugins needed updating. The antiBruteForce plugin that I relied on to thwart brute-force login attempts no longer worked. I searched for an alternative and found the ‘security’ plugin, which looked promising; however, upon closer inspection it was missing a few critical features, so I set out to fill the void of a decent anti-brute-force plugin for Roundcube 0.9+.

Introducing roundcube-defense.

  • Bruteforce protection
    • Ban based on X failed-logins per Y seconds (default: 5 fails / 60m)
    • Ban for X seconds. (default: 120)
    • Increasing ban duration by a power of 4 for repeat offenders (2m, 8m, 32m, 2h08m, 8h32m, etc; see the sketch after this list)
  • Whitelist
  • Blacklist
  • Failed logins log [TODO: Logs are in DB, but no interface yet]
    • Only accessible by administrator
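
A minimal shell sketch of that escalation, assuming the default 120-second (2m) base ban,

# hypothetical illustration: the ban duration quadruples per repeat offence
ban=120
for offence in 1 2 3 4 5; do
    printf 'offence %d: banned for %d seconds\n' "$offence" "$ban"
    ban=$((ban * 4))
done
# prints 120, 480, 1920, 7680 and 30720 seconds: 2m, 8m, 32m, 2h08m, 8h32m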

Visit the GitHub page for more information. It worked fine in internal testing; however, any bug reports or feature requests are welcome via the issue tracker.

Installing Wireshark (tshark) on pfSense 2.0.1

As discussed earlier, you can add extra packages from the FreeBSD repository to pfSense.

One of the most useful applications on any firewall is a packet sniffer. pfSense comes with tcpdump, but Wireshark has more features, one of which is parsing application-level protocols to give an easier understanding of the traffic.

You can install tshark using pkg_add; however, pfSense is missing some key libraries,

/libexec/ld-elf.so.1: Shared object "libkrb5.so.10" not found, required by "tshark"
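
For reference, the pkg_add step itself is along these lines; taking "tshark" as the package name is an assumption from the FreeBSD 8.x package set,

# -r fetches the package and its dependencies from the remote package site
pkg_add -r tshark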

I have tar’d the required libraries, which you can download here,
  Libraries required by tshark on pfSense 2.0.1 (447.6 KiB)

You can also quickly install these libraries directly with the following commands,

cd /root && fetch http://www.nooblet.org/blog/download/libtshark-pfsense.tar.gz && tar xvzf libtshark-pfsense.tar.gz && mv /root/libtshark-pfsense/* /usr/local/lib/ && rm -rf /root/libtshark-pfsense/
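
Once the libraries are in place, tshark should run; em0 as the capture interface is only an example,

# print the version to confirm the shared libraries now resolve
tshark -v
# capture 10 DNS packets on the LAN interface
tshark -i em0 -c 10 port 53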