Upgrading a late 2006 model Mac Mini

One of the Macs in our house is a late 2006 Mac Mini (Macmini1,1, model MA206LL/A). The machine itself still works happily with Mac OS X Snow Leopard (10.6), but it only has 2GB of memory and, since its hard disk broke a while ago, it runs from a USB hard disk. All in all technically still OK, but a terrible user experience as it is just slow.
Today I did some investigation on the Internet to see to what extent this old machine can still be upgraded, and bumped into an interesting overview on xlr8yourmac.com. It turns out that the basics are quite good, and with a few changes it can still be used for some time:

  • CPU – currently a Core Duo that could be replaced with a Core 2 Duo
    The Core Duo is a 32-bit processor that does not support 64-bit OS X. Fortunately the processor sits in a socket (it is not soldered to the main board) and its pin layout is identical to that of Core 2 Duo models. This is also being discussed on Apple’s discussion forums (the thread still exists, so Apple is not stopping it), and according to posts on MacRumors.com others have done this successfully, so this is definitely something I will try. Guess what: there is even a step-by-step guide on iFixit on how to do it!
  • Memory – currently limited to 2GB, but up to 3GB can be used (with 2x 2GB installed)
    Memory is limited to 2GB (2x 1GB) with the Core Duo processor, but the Core 2 Duo can support up to 4GB (2x 2GB) of memory. Unfortunately the Macmini1,1 firmware does not support this, but it turns out to be possible to flash the firmware of a Macmini2,1, as the folks on the NetKas forum explain. The links to the firmware no longer worked, but thanks to this post I found them on a French Mac forum. After this upgrade 3GB can be used, which is still 50% more than the machine had.
    There is a separate step-by-step guide on iFixit for replacing the memory, but I don’t think I will need it as I will do this when I replace the CPU.
  • Harddisk – currently a broken 5400rpm 80GB disk; replacing it with a 60GB SSD is a no-brainer
    Replacing a broken hard disk with an SSD is nothing fancy, though it is important to enable TRIM support in OS X afterwards when you use a non-Apple disk. For this I found the excellent tool Chameleon some time ago for my MacBook Pro.
    For this step there is also a step-by-step guide on iFixit, which I won’t need either as I will install the new disk when I replace the CPU.
  • Software – currently OS X Snow Leopard (10.6) is the maximum
    Replacing the Core Duo CPU with a Core 2 Duo effectively turns the Macmini1,1 into a Macmini2,1, which is capable of running OS X Lion (10.7) according to discussions on Apple’s discussion forums. Apparently only one hack is needed (removing a file from the installation media) to be able to perform a clean install, according to a discussion on MacRumors.com.

As I am fairly comfortable with opening my old Mac Mini (I did it before when adding memory) and the other steps appear doable, I will give this a shot. I just ordered the components and plan to perform the upgrade next weekend (assuming all parts are in by then).

Access Cisco Firewall forwarded external IPv4 port from inside

For some time now I have been using a borrowed Cisco 881 router as the router/firewall for my internet connection. The box is stable and configured the way I want, but unlike the Linux and Fritz!Box routers I used before, the Cisco does not allow connections from the inside to forwarded IPv4 ports on its external address. This is inconvenient in my situation, as it means I cannot reach some services from my internal network (e.g. I cannot reach the websites I host). So far the only way around this was split DNS and double administration, which is quite tedious and inconvenient.

Some time ago, when looking into how to set this up, I bumped into this article: NAT: access outside global address from the inside (the site seems to be down at the moment, but its content is still available through the Internet Archive). It describes an alternative way to set up the Cisco NAT rules using the NAT Virtual Interface (NVI), which decouples them from a specific interface in a specific direction. Today I have tested this approach.


To set up the new NAT approach, change the existing NAT rules:

ip nat inside source static tcp AA.BB.CC.DD 80 WW.XX.YY.ZZ 80

into something that looks like the following lines:

ip nat source static tcp AA.BB.CC.DD 80 WW.XX.YY.ZZ 80

ip access-list extended NAT-INSIDE-ADDRESSES
 permit ip AA.BB.CC.0 0.0.0.255 any
ip nat source list NAT-INSIDE-ADDRESSES interface FastEthernet0/1 overload

(basically, remove the inside keyword from the statement). In my setup AA.BB.CC.DD is the internal IP address of my web server and WW.XX.YY.ZZ represents my external IP address; in this example I forwarded port 80 (HTTP). The last part is required to make sure that internal traffic on FastEthernet0/1 is also NATted properly, to avoid asymmetric traffic flows.
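One detail the rules above do not show: with NVI, the participating interfaces are configured with ip nat enable instead of the classic ip nat inside / ip nat outside. A sketch of what that might look like (the LAN interface name follows the example above; the WAN interface name here is a placeholder for whatever your uplink port is):

```
! NVI: mark both interfaces with "ip nat enable" instead of inside/outside
interface FastEthernet0/1
 ip nat enable
!
interface FastEthernet4
 ip nat enable
```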

Testing it

The first basic tests of this new setup were promising: after these changes I could indeed access my external sites from internal addresses as well. However, when downloading something from an internal site I noticed that the performance was not very good. That by itself I could live with, as that traffic would not be massive. Unfortunately, due to this change in config, all NAT traffic turned out to be slower, and the performance of my network connection was effectively about half of what it used to be. Before this change the Cisco 881 was capable of streaming about 38–43 Mbit/s, which was not my full 50 Mbit/s bandwidth, but close enough. With this (NVI) setup, I noticed that my maximum network bandwidth as measured with SpeedTest.net dropped to 20 Mbit/s and below. With the command
show processes cpu history
on the router I noticed that the poor Cisco 881 was at 100% CPU utilization during the downloads. I suspect that the old Cisco 881 (which cannot handle 50 Mbit/s in the first place) is CPU-bound when using NAT Virtual Interfaces and cannot keep up at higher speeds.


Technically, using the NAT Virtual Interface (NVI) feature of IOS works to enable access to forwarded external NAT ports from the inside. However, since it appears to be very CPU intensive, it is not a good solution for now: the Cisco 881 cannot cope with the load and my internet bandwidth is effectively cut in half. I will revisit this approach once I have acquired a router capable of supporting my full bandwidth, and see if it can handle the CPU load then.

Happy New Year!

A Happy New Year and best wishes for 2014 to all of you!

As you may have noticed, things have become extremely quiet here: I published only a few posts during 2013. The key reason was that things at my day job were (and still are) tough, which did not leave me much time over the weekend to spend with my family, play around with technical stuff, and still have enough energy (and discipline) left to post about it here. I do not expect that to change immediately this year, but I still have a number of draft posts to finish about the recovery from my upgrade to Mac OS X Mavericks, I have started to play with Cisco routers (which requires me to document things), and I have some small projects lined up to complete this year…

Now of course this is the beginning of the year, which normally means a fresh start and a lot of fresh ideas… so let’s change some things for the rest of this year…

Restoring OpenDirectory on Mac OS X Mountain Lion Server

After some more checking of the contents of the /Recovered Items folder left over after my failed upgrade of OS X from Lion to Mountain Lion, I decided to proceed with re-installation of the components to see if I could get things back the way they were.

The first step was to install the Server component again (which had gone missing during the upgrade). This only took a simple purchase of Server.app in the App Store. After that I had a Mac server again and could start my reinstallation.

The first component to reconfigure was Open Directory. It was extremely important for me not to lose it, as it contained all my users, their passwords and group memberships, as well as all the e-mail addresses of each user (I am hosting a few different domains; re-creating that would mean a lot of work).

When I enabled the Open Directory server component, I had to specify how I wanted to configure it. This screen included an option to import a backup. As I still had the whole data structure from my previous installation, I tried that first, but it didn’t work. Then I noticed that the directory /Recovered Items/private/var/backups/ contained a file called ServerBackup_OpenDirectoryMaster.sparseimage that was less than a day old. I selected that file as the backup, which was accepted, and it looks like that did the trick: my users were restored and I could log in with my regular user ID again.
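For reference, it appears the same restore can also be driven from the command line with slapconfig, which ships with OS X Server. I used the Server.app GUI instead, so treat this as an unverified sketch (flags can differ between OS X Server versions):

```shell
# Restore the Open Directory master from the recovered backup image;
# prompts for the backup's password if one was set when it was created
sudo slapconfig -restoredb "/Recovered Items/private/var/backups/ServerBackup_OpenDirectoryMaster.sparseimage"
```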

Based on this initial success I decided to rebuild the rest of my server, as I know the other components (PostgreSQL, Postfix, Dovecot, etc.) pretty well from when I still hosted everything on Linux… I will continue to document the steps I took, as well as my custom setup, as it may be useful for others.

Upgrade to Mountain Lion Server Failed…

Today I decided to (finally) upgrade my Mac Mini Server running OS X Lion Server to Mountain Lion Server. The upgrade was way overdue and Mountain Lion appeared to be pretty stable by now, so I decided to make the switch this weekend. Based on others’ good experiences, I made a last Time Machine backup, disabled incoming mail on my firewall, purchased the update to Mountain Lion in the App Store and started the process.

Unfortunately, after about an hour of processing I got a message like "Upgrade failed, system will now restart". After this restart it turned out I was left with a vanilla install of Mac OS X on my Mac Mini Server. It even started asking all the first-time questions again, including whether I wanted to register my server with Apple again. Once I logged in it turned out that I indeed had a vanilla installation of the bare OS X Mountain Lion system, still without the Server components (which was expected). Fortunately all user data was still where it should be (in /Users), but apart from that all system settings and other data (Open Directory, databases, mail, calendars, contacts, etc.) turned out to have been moved to a folder called /Recovered Items. Apple… WTF?

A quick scan indicated that no data appears to be lost (phew…), but I need to investigate how to recover from this and decide whether I want to restore my backups (which eventually won’t resolve anything, as the next upgrade would probably fail again). The good thing is that although my Mac Mini Server itself is vital for my infrastructure (it runs a few Linux VMs), its own functions are limited to name server, mail/calendar/contact server and file server for my other Mac. This may be a good moment to start from scratch and document my customizations while recovering…

Enable regular VNC access to an OS X Server remotely

Mac OS X Server has pretty decent screen sharing and remote desktop features out of the box to manage your headless OS X Server remotely. This works great when you have a Mac OS X desktop or laptop but, as I found out today, it requires some additional setup when you’re using a Microsoft Windows client.

The technology used by Apple is VNC, a very mature and widely available protocol for which multiple mature clients exist on different platforms. However, Apple has decided to use its own authentication model between the client and the server out of the box (probably for good reasons; I am not sure which, but they probably wanted to use GSSAPI again). Standard VNC authentication is not enabled out of the box and requires some additional setup to allow access from standard VNC clients.

Today I found myself needing to do some administrative tasks that I knew would be easy through a remote desktop connection, but since I was a few thousand kilometers away and only had my (Windows 7) work laptop with me, I could not. It turned out I had to enable some settings to allow "classic" (actually standard) VNC clients to connect and authenticate with the Mac OS X remote desktop (VNC) server. Fortunately this turned out to be possible not only through the graphical interface; as so often with OS X, there is also a command-line way to make the necessary adjustments, by running the following command:

sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart  -configure -clientopts  -setvnclegacy -vnclegacy yes -setvncpw -vncpw PASSWORD

where PASSWORD is the password to be provided to authorize a standard VNC connection.

With the above command executed through an SSH connection over VPN, I was able to enable standard VNC support on my Mac OS X Server and log in (again through the VPN connection) to my server’s desktop remotely using a standard VNC client.

Just to be complete, the option to use a standard VNC client can be disabled again using:

sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart  -configure -clientopts  -setvnclegacy -vnclegacy no

Restoring Synology NAS Crashplan existing configuration

In addition to yesterday’s post about running CrashPlan on a Synology Disk Station, I thought it worth mentioning that the key advantage of using PC Load Letter‘s packages is that nowadays they also fully support CrashPlan’s auto-update feature. So once installed, there is no need to update the package anymore: CrashPlan will update itself to the latest version automatically.

Unfortunately this is not visible in Synology’s Package Center, which happily shows that an updated version of CrashPlan is available whenever an updated package exists. Since it is always a good idea to have the latest package installed, as it may resolve other issues (e.g. one day auto-update support started to work, and now the package also seems to preserve its configuration upon reinstall), it is still a good idea to upgrade every now and then.

One of the key drawbacks of upgrading in the past was that the configuration was lost and the new installation would even appear as a fresh computer instead of retaining the existing configuration. I had to deal with this several times, normally ending up copying back a backup of the config file manually over an SSH CLI. That never really worked the way I wanted: it is manual work and felt like a hack, which made me look for the right way to do this. After some searching I found an article on the CrashPlan support pages about reconnecting an existing backup, which describes how the GUID of a CrashPlan installation (the unique ID that identifies the machine on the CrashPlan network) can be changed to that of the previous installation, so that the identity and configuration settings of the previous installation are restored. To look up the GUID of the installation to be restored, go to the computer overview in your CrashPlan account and select the name of the computer, which will also display the GUID; then follow the instructions to reconnect an existing backup.
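For those curious what the semi-manual GUID swap amounts to: it boils down to stopping the CrashPlan engine and editing one line in its .identity file. The sketch below makes two assumptions I have not verified on the Synology package: the path (the usual location on a plain Linux install) and the simple key=value layout of the file. NEW_GUID_FROM_ACCOUNT stands for the GUID shown for the old computer in your CrashPlan account.

```shell
# Stop the CrashPlan engine first (on a Synology, stop the package in Package Center)
IDENTITY=/var/lib/crashplan/.identity   # assumed location; adjust for your install
OLD_GUID=$(grep '^guid=' "$IDENTITY" | cut -d= -f2)
echo "Current GUID: $OLD_GUID"
# Swap in the GUID of the previous installation, then start the engine again
sed -i 's/^guid=.*/guid=NEW_GUID_FROM_ACCOUNT/' "$IDENTITY"
```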

Recently CrashPlan has automated this semi-manual process as Adopting Another Computer, so that it is no longer needed. As described on the CrashPlan support page, there is now an option to adopt another computer after re-installation of the CrashPlan client (which is exactly what happens when a new version of PC Load Letter‘s package is installed). With this option restoring all settings has become very easy, and since all files are still there (no need to restore any files), it only requires a check with the remote systems to confirm everything has already been backed up.

Crashplan packages for Synology Disk Station

For quite some time I have been a very happy user of CrashPlan, a backup service and tool that offers reasonably priced backup storage and can also be used without their service to back up to another computer running the CrashPlan client. I use it both to back up some of my data to CrashPlan’s servers and to back up to a few friends of mine, and I provide backup services to my family. So far this works fine, especially since I have it running on my Synology Disk Station (a DS1010+) with plenty of storage. The neat thing is that my NAS is automatically backed up externally, and I do not need a PC to provide secure backup services for friends and family.

The easiest way to install CrashPlan on your Synology Disk Station is the package provided by PC Load Letter. I used to install the Linux version manually, which was not difficult either, but since auto-update did not work that way, using the package really is the better solution. Furthermore, it does not require any hacking or tweaking of the box, so everybody can do this (actually, it’s so simple there’s no excuse anymore not to back up your NAS).

To make the package available, you need to add the PC Load Letter repository with URL http://packages.pcloadletter.co.uk as a source for 3rd party applications; see the Synology support site on how to install 3rd party applications. Next, select the CrashPlan package from the Community section. Please note that you need to select the correct package (the plain version, unless you have a PRO or PROe subscription), and that CrashPlan requires Java, so you may need to install that dependency as well (the package installer will tell you).

Once CrashPlan has been installed and is running, it is time to configure it on the Synology Disk Station. For this you need CrashPlan installed on a (PC/Mac/Linux) desktop that is supported by CrashPlan and can run its graphical interface. Download the application from CrashPlan’s download page and install it. There is no need to run the engine locally (but you may opt to do so later to use CrashPlan to back up to your NAS). However, some tweaking is needed to use the client to manage the headless CrashPlan installation on the NAS.

The CrashPlan support site has an excellent guide on how to connect to a headless CrashPlan desktop, which you can use to manage everything from a remote computer using their client (please note that this assumes you have SSH enabled). I opted for a slightly different approach for my two use cases:

  • On my work laptop (Windows 7) I have CrashPlan installed, but with the service disabled, as my employer’s policies do not allow it. There I have simply set up PuTTY to forward local port 4243 to localhost:4243 when I connect to my NAS over SSH. This allows me to launch the CrashPlan client without any modifications; it can reach the service on my Synology Disk Station as long as I have the SSH connection open.
  • On my private laptop (Mac OS X) I have CrashPlan installed and use it to back up to my NAS as well. There I use iTerm to manage my SSH connections, but basically all that does is store the exact ssh commands and parameters used, so it is no different from what the guide describes. On that system I change the ui.properties settings whenever needed to switch between the local service running on its standard port and the remote one forwarded over ssh on another port.
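For completeness, the PuTTY tunnel from the first bullet translates to a single OpenSSH option on the Mac/Linux side; a minimal sketch (user and hostname are placeholders for your own NAS):

```shell
# Forward local port 4243 to the CrashPlan engine on the NAS for as long
# as this SSH session is open; then point the local client at localhost:4243
ssh -L 4243:localhost:4243 admin@diskstation.local
```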

Through the client you can either associate the CrashPlan instance on your NAS with your existing account (if you already have one) or register a new account. After this you can set up remote backup destinations, allow others to back up to your NAS, and set up your NAS as a destination for your other computers (under the same account). CrashPlan has documented all of this on their support website.

Back online

For the last 7 months I have not been able to spend any time on this blog, which has resulted not just in no updates, but also in an awful lot of spam in the comments (not visible, as I have to approve comments anyway). I found over 12,500 spam messages in the comments, of which 11,000 were on one article. The bad thing was that this large number of comments killed the performance of my blog, so I had to do something.

The good thing about Pebble (the blog software I use) is that it has a very simple XML file-based structure to store articles and comments, so this was very easy to clean up. All I had to do was:

  1. Shut down the blog system (shutting down only the webapp in Glassfish sufficed).
  2. Locate the XML files that were huge.
  3. Edit the large XML files using vi on the command line, removing everything between <comments>…</comments>.
  4. Restart the Pebble webapp in Glassfish.
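The vi edit in step 3 can also be scripted; a minimal sketch, assuming each Pebble entry keeps its comments inside a single <comments>…</comments> element (GNU sed syntax; entry.xml is a placeholder for the actual file, and back it up first):

```shell
# Delete every line between <comments> and </comments>, leaving the
# now-empty element (and the rest of the entry) intact
sed -i -e '/<comments>/,/<\/comments>/{/<comments>/!{/<\/comments>/!d}}' entry.xml
```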

And the spam was removed, which also resolved the performance issue.

Now the good news: not being able to post anything does not mean I did not have any spare time to experiment with things, so I do have a number of items to complete and document in the coming weeks (I’m having some time off now). Expect frequent posts during the summer period…

Merry Christmas!

Merry Christmas to all who celebrate it!

Unfortunately I discovered a very unwelcome Christmas present here… it seems some spammer has discovered my blog and posted over 2,700 bogus comments containing spam links. Please do not click on any of these! I have changed the commenting policy (comments now require my approval) and I am busy cleaning up the mess at the moment (trying to preserve any serious comments). My sincere apologies for any inconvenience caused.