Factory reset after firmware upgrade of TP-Link SG3424

For about a year now I have had a TP-Link SG3424, a managed 24-port gigabit switch that so far has proven to be stable and reliable, and was a good deal when I bought it compared to similar devices. It’s not a Cisco switch, but honestly I do not know what I am missing out on given the feature set it has. So far it has covered my needs and has been operational without any noticeable issues or hiccups. The only drawback (for me) so far was that it lacked IPv6 support, but that’s optional on an internal network anyway.

Today I noticed that TP-Link released a new firmware version in December for the SG3424 switch that adds IPv6 support; it is available from their support site’s download page. I followed the installation guide, which was correct as far as it went, but a bit incomplete, as I had a few surprises during my upgrade:

  1. The installation manual instructs you to change your network settings to the 192.168.0.X network. As I have an operational environment using another IP range, I happily ignored this and was able to perform the upgrade. However, after the upgrade, I could no longer reach the switch until I changed the IP address of my laptop to the 192.168.0.X network. This is not an issue for someone who knows what they are doing, but in my opinion it should have been mentioned in the installation manual…
  2. After the firmware update, the switch was back at its factory settings and had lost all configuration (including the logins, so I had to log in with admin/admin again). This is bad, TP-Link: even if your update process requires a full reset, PUT IT IN THE UPGRADE MANUAL at least…
    Fortunately I tend to make backups before I change things and had created one before the update, but it was a surprise one would not expect.
  3. When restoring the configuration, all settings were restored correctly EXCEPT the login and password of the admin user. Again, not a blocker, but a surprise and something NOT DOCUMENTED.
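For reference, hopping onto the switch’s default network temporarily can be done from the command line on a Mac with networksetup. This is only a sketch: the service name "Ethernet" and the free address 192.168.0.100 are assumptions that need to be adjusted to your own setup.

```
# Temporarily move onto the switch's default 192.168.0.X network
# ("Ethernet" and 192.168.0.100 are assumptions; adjust to your setup)
sudo networksetup -setmanual Ethernet 192.168.0.100 255.255.255.0 192.168.0.1

# ... perform the upgrade, then return to your normal configuration:
sudo networksetup -setdhcp Ethernet
```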

All in all these are minor things that one can resolve pretty easily, but not something you would expect from a company that positions the product for the professional market… Cisco-like 10+ page upgrade guides are a pain to get through, but a 2-pager with only half of the information doesn’t help either.

Apart from these glitches, the upgrade was worthwhile as the switch now has IPv6 support, which works great and has (as always) all the features one can imagine; manually configured addresses next to the auto-configured ones, support for RA and DHCPv6, and IPv6 on all services like SSH and SNMP all appear to work fine.

Mac OS X Lion (10.7) on my (upgraded) Late 2006 Mac Mini

After yesterday’s upgrade of my Late 2006 Mac Mini (MacMini1.1) it was time today to see if I could get OS X 10.7 (Lion) working on it. As per discussions on Apple’s discussion forums this should be possible (as the hardware supports it after the upgrade I did). However, the standard OS X Lion installation did not want to install on this hardware yet. As per the discussion on MacRumors.com I had to remove the file


before the installation wanted to start. Once I did that, I could do a clean install on the new SSD without any issues or additional hacks needed. Also transferring the users, apps and settings from the old system still on the external USB hard disk went fine and actually totally surprised me (I had never used it before), as it turned the clean install into a totally usable system, including the configuration of the OpenDirectory server.

After the installation it is important to enable TRIM support in OS X, to extend the lifetime of the SSD, which I did with the excellent tool Chameleon.
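To check whether TRIM actually became active after running the tool, the SATA device information can be inspected from a terminal; the exact label is based on how OS X reports it, so treat the grep pattern as an assumption:

```
# Should report "TRIM Support: Yes" for the SSD once enabled
system_profiler SPSerialATADataType | grep "TRIM Support"
```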

Right now I am very happy with the end result: a Late 2006 Mac Mini running OS X Lion (10.7):
Late 2006 Mac Mini after upgrade

Obviously only 3 GB of memory is available, as that is the maximum the hardware supports, but this is still a very good solution to have a second Mac Mini system for my children.

Upgraded my MacMini1.1 to a MacMini2.1 :-)

As I already wrote in my last post, I was looking into upgrading my old Late 2006 Mac Mini to extend its technical life. Today I successfully managed to upgrade the hardware of the old machine and it is running fine again. The steps I did were:

  • Replaced the Core Duo 1.66GHz CPU with a Core 2 Duo 2GHz CPU
    The exact CPU I purchased for about €25 through eBay was an Intel Core 2 Duo T7200 SL9SF 2.0GHz/4M/667 laptop CPU. To install it I basically followed the steps in the iFixIT step-by-step guide to replace a Mac Mini CPU, which were pretty straightforward.
  • Booted OS X from the Mac Mini’s USB hard disk while the machine was still open to install the MacMini2.1 firmware
    As per this NetKas forum guide, the firmware must be updated before adding the additional memory or the Mac Mini won’t boot. The links to the firmware were broken; I actually downloaded the files from MediaFire and followed the steps from the French Mac forum post that linked to them.
  • Replaced the 2x 1 GB memory with 2x 2 GB memory modules
    Through Marktplaats.NL I acquired 2 used memory modules for only €35. As the machine was still open it was pretty straightforward to replace them.
  • Replaced the broken 80 GB hard disk with a 60 GB SSD
    I purchased an ADATA S511 60GB SSD online for only €39.00. Again, installing it was straightforward as the machine was still open.
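The CPU and memory upgrades above can be verified from OS X itself. A quick sketch using sysctl (the exact reported brand string may vary slightly per firmware):

```
# CPU model as seen by the OS; should now report the Core 2 Duo T7200
sysctl -n machdep.cpu.brand_string

# Installed memory in bytes (3 GB usable on this board)
sysctl -n hw.memsize
```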

After all steps were completed, the Mac Mini was still working as before on Mac OS X Snow Leopard (10.6.8) without problems. The extra memory did help, but as the machine was still booting from a USB disk it was still very slow.

As an extra bonus I noticed that the Mac Mini now supports the modern Mac keyboards (the flat aluminium ones) during startup. Before this update I always needed the old (plastic) keyboard I got with the machine to be able to intervene in the boot process, but now this also works with the modern keyboards :-).

Tomorrow I will look into upgrading OS X and see how that goes. So far so good: the machine is working faster already, and for about €100 in total that is not a bad investment to keep using it.

Upgrading a late 2006 model Mac Mini

One of the Macs in our house is a Late 2006 model Mac Mini (MacMini1.1, model MA206LL/A). The machine itself still works happily with Mac OS X Snow Leopard (10.6), but it only has 2 GB of memory and, since its hard disk broke a while ago, it is working from a USB hard disk. All in all technically still OK, but a terrible user experience as it is just slow.
Today I did some investigation on the Internet to see to what extent this old machine can still be upgraded and bumped into an interesting overview on xlr8yourmac.com. It turns out that the basics are quite good and with a few changes it can still be used for some time:

  • CPU – currently a Core Duo that could be replaced with a Core 2 Duo
    The Core Duo processor is a 32-bit one that does not support 64-bit OS X. Fortunately the processor is on a socket (not soldered to the main board) and its pin layout is identical to that of Core 2 Duo models. This is also being discussed on Apple’s discussion forum (the thread still exists, so Apple is not suppressing it) and according to posts on MacRumors.com others have done this successfully, so this is definitely something I will try. Guess what: there is even a step-by-step guide on iFixIT on how to do it!
  • Memory – currently limited to 2 GB but potentially could support 3 GB (out of 2x 2 GB)
    Memory is limited to 2 GB (2x 1 GB) with the Core Duo processor, but the Core 2 Duo can support up to 4 GB (2x 2 GB) of memory. Unfortunately the MacMini1.1 firmware does not support this, but it turns out to be possible to flash the firmware of a MacMini2.1, as the folks on the NetKas forum explain. The links to the firmware no longer worked, but I found them on a French Mac forum thanks to this post. After this upgrade 3 GB can be used, which is still 50% more than the machine had.
    There is a separate step-by-step guide on iFixIT for replacing the memory, but I don’t think I will need it as I will do it when I replace the CPU.
  • Harddisk – currently a broken 5400rpm 80 GB disk; replacing this with a 60 GB SSD is a no-brainer
    Replacing a broken hard disk with an SSD is nothing fancy, though it is important to enable TRIM support in OS X after replacing it when you use a non-Apple disk. For this I found the excellent tool Chameleon some time ago for my MacBook Pro.
    For this step, too, there is a step-by-step guide on iFixIT, which I won’t need either as I will install the new disk when I replace the CPU.
  • Software – currently OS X Snow Leopard (10.6) is the maximum
    Replacing the Core Duo CPU with a Core 2 Duo effectively turns the MacMini1.1 into a MacMini2.1, which is capable of running OS X Lion (10.7) according to discussions on Apple’s discussion forums. There is apparently only one hack needed (removal of a file on the installation media) to be able to perform a clean install, according to a discussion on MacRumors.com.
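Whether the firmware flash worked can later be checked from the OS: the model identifier the machine reports should change. A sketch (note that OS X writes the identifier with a comma, e.g. "Macmini1,1"):

```
# Reports the model identifier: "Macmini1,1" before the firmware
# flash, "Macmini2,1" afterwards
sysctl -n hw.model
```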

As I am not uncomfortable with opening my old Mac Mini (I did it before when I added memory) and the other steps appear doable, I will give this a shot. I just ordered the components and plan to perform the upgrade next weekend (assuming all parts will be in).

Access Cisco Firewall forwarded external IPv4 port from inside

For some time now I have been using a borrowed Cisco 881 router as router/firewall for my internet connection. The box is stable and configured as I want, but unlike the Linux and Fritz!Box routers I used before, the Cisco does not allow connections to forwarded IPv4 ports on its external address. This is inconvenient in my situation, as it means that I am unable to reach some services from my internal network (i.e. I cannot reach the websites I host). So far the only way around this was using split DNS and double administration, which is quite tedious and inconvenient.

Some time ago when looking into how to set this up, I bumped into this article: NAT: access outside global address from the inside (the site seems to be down at the moment, but its content is still available thanks to the Internet Archive). It describes an alternative way to set up the Cisco NAT rules using the NAT Virtual Interface (NVI), which decouples them from a specific interface in a specific direction. Today I have tested this approach.


To set up the new NAT approach, change the existing NAT rules:

ip nat inside source static tcp AA.BB.CC.DD 80 WW.XX.YY.ZZ 80

into something that looks like the next lines:

ip nat source static tcp AA.BB.CC.DD 80 WW.XX.YY.ZZ 80

ip access-list extended NAT-INSIDE-ADDRESSES
 permit ip AA.BB.CC.0 0.0.0.255 any
ip nat source list NAT-INSIDE-ADDRESSES interface FastEthernet0/1 overload

(basically, remove the inside clause from the static statement). In this example AA.BB.CC.DD represents the internal IP address of my web server, AA.BB.CC.0/24 my internal network and WW.XX.YY.ZZ my external IP address; the forwarded port is 80 (HTTP). The last part is required to make sure that internal traffic on FastEthernet0/1 will also be NATted properly, to avoid asymmetric data flows.
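With the rules in place, the router can show what it is actually translating. NVI translations have their own show commands on IOS (output format differs per IOS version):

```
show ip nat nvi translations
show ip nat nvi statistics
```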

Testing it

The first basic tests of this new setup were promising. Indeed, after these changes I could access my external sites from internal addresses as well. However, when downloading something from an internal site I noticed that the performance was not very good. That by itself was something I could live with, as the traffic would not be massive. However, due to this change in config, all NAT traffic turned out to be slower, and effectively the performance of my network connection was about half of what it used to be. Before this change the Cisco 881 was capable of streaming about 38-43 Mbit, which was not my full 50 Mbit bandwidth, but close enough. With this (NVI) setup, my maximum network bandwidth as measured with SpeedTest.NET dropped to 20 Mbit and below. With the command
show processes cpu history
on the router, I noticed that the poor Cisco 881 was at 100% CPU utilization during the downloads. I suspect that the old Cisco 881 (which does not support 50 Mbit in the first place) is CPU-bound when using NAT Virtual Interfaces and not capable of handling this at higher speeds.


Technically, the approach of using the NAT Virtual Interface (NVI) feature of IOS works to enable access to NAT-forwarded external ports from the inside. However, since it appears to be very CPU intensive, it is not a good solution for now, as the Cisco 881 cannot cope with the load and the internet bandwidth is effectively reduced to only 50%. I need to revisit this approach once I have acquired a router that can support the bandwidth I have, and see if it can handle the CPU load.

Happy New Year!

A Happy New Year and best wishes for 2014 to all of you!

As you may have noticed, things have become extremely quiet here and I published only a few posts during 2013. The key reason for that was that at my day job things were (and still are) tough, which did not leave me a lot of time over the weekends to spend time with my family, to play around with technical stuff, and still have enough energy (and discipline) left to post about it here. For this year I do not expect that to change immediately, but I still have a number of posts in draft to finish regarding the recovery of my upgrade to Mac OS X Mavericks, I have started to play with Cisco routers (which requires me to document stuff) and I have some small projects lined up for this year to complete…

Now of course this is the beginning of the year, which normally means a fresh start and a lot of initial ideas… so let’s change some things now for the rest of this year…

Restoring OpenDirectory on Mac OS X Mountain Lion Server

After some more checking on the contents of the /Recovered Items folder left over after my failed upgrade of OS X from Lion to Mountain Lion I decided to proceed with re-installation of the components to see if I could get things back as they were again.

The first step was to install the Server component again (which had gone missing after the upgrade). This only took a simple purchase of the Server.app component in the App Store. After that I had a Mac server again and could start my reinstallation.

The first component to reconfigure was the Open Directory component. It was extremely important for me not to lose that one, as it contained all my users, their passwords and group memberships, as well as all the e-mail addresses each user had (I am hosting a few different domains; re-creating all that would mean a lot of work).

When I enabled the Open Directory server component, I had to specify how I wanted to configure it. This screen included an option to import a backup. As I still had the whole data structure from my previous installation, I tried that first, but it didn’t work. Then I noticed that the directory /Recovered Items/private/var/backups/ contained a file called ServerBackup_OpenDirectoryMaster.sparseimage that was less than a day old. I selected that file as the backup, which was accepted as a restore source, and it looks like that did the trick. My users were restored and I could log in with my regular userID again.
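One quick way I know of to double-check such a restore from the command line is to list the users in the restored LDAP directory on the server itself; all the old accounts should show up:

```
# List the user records in the local Open Directory LDAP node
dscl /LDAPv3/127.0.0.1 -list /Users
```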

Based on this initial success I decided to rebuild the rest of my server, as I know the other components (PostgreSQL, Postfix, Dovecot, etc.) pretty well from when I still hosted everything on Linux… I will continue to document the steps I took, as well as my custom setup, as it may be useful for others.

Upgrade to Mountain Lion Server Failed…

Today I decided to (finally) upgrade my Mac Mini Server running OS X Lion Server to Mountain Lion Server. The upgrade was way overdue and Mountain Lion appeared to be pretty stable by now, so I decided to make the switch this weekend. Based on others’ good experiences, I made a last Time Machine backup, disabled incoming mail on my firewall, purchased the update to Mountain Lion in the App Store and started the process.

Unfortunately, after about 1 hour of processing I got a message like "Upgrade Failed, system will now restart". After this restart it turned out I was left with a vanilla install of Mac OS X on my Mac Mini Server. It even started to ask all the first-time questions again, including whether I wanted to register my server with Apple again. Once I logged in it turned out that I indeed had a vanilla installation of the bare OS X Mountain Lion system on my Mac Mini Server, still without the Server components (which was expected). Fortunately all user data was still where it should be (in /Users), but apart from that all system settings and other data (Open Directory, databases, mail, calendars, contacts, etc.) turned out to have been moved to a folder called /Recovered Items. Apple… WTF?

A quick scan indicated that no data appears to be lost (phew…), but I need to do some investigation on how to recover from this and decide whether I want to restore my backups (which eventually wouldn’t resolve anything, as the next upgrade would probably fail again). The good thing is that although my Mac Mini Server itself is vital for my infrastructure (it runs a few Linux VMs), its own functions are limited to nameserver, mail/calendar/contact server and fileserver for my other Mac. This may be a good moment to start from scratch and document my customizations while recovering…

Enable regular VNC access to an OS X Server remotely

Mac OS X Server has pretty decent screen sharing and remote desktop features out of the box to manage your headless OS X Server remotely. This works great when you have a Mac OS X desktop or laptop, but as I found out today, it requires some additional setup when you’re using a Microsoft Windows client.

The technology used by Apple is VNC, which is a very mature and generally available protocol for which multiple mature clients exist on different platforms. However, Apple has decided to use its own authentication model between the client and the server out of the box (probably for good reasons; I am not sure which, but they probably wanted to use GSSAPI again). Standard VNC authentication is not enabled out of the box and requires some additional setup to enable access from standard VNC clients.

Today I found myself needing to do some administrative tasks I knew I could do easily through a remote desktop connection, but since I was a few thousand kilometers away and only had my (Windows 7) work laptop with me, I could not. It turned out I had to enable some settings to allow "classic" (actually standard) VNC clients to connect and authenticate with the Mac OS X remote desktop (VNC) server. Fortunately it turned out to be possible not only through the graphical interface; as so often with OS X, there was also a command line way to make the necessary adjustments. Running the following command:

sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -configure -clientopts -setvnclegacy -vnclegacy yes -setvncpw -vncpw PASSWORD

where PASSWORD is the password to be provided to authorize a standard VNC connection.
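If the new setting does not seem to take effect immediately, the remote management agent can be restarted with the same kickstart tool (I did not need this myself, so consider it a hint rather than a required step):

```
# Restart the ARD agent so the changed VNC settings are picked up
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -restart -agent -console
```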

With the above command executed through an SSH connection over VPN, I was able to enable standard VNC support on my Mac OS X Server and log in (again through the VPN connection) to my server’s desktop remotely using a standard VNC client.

Just to be complete: the option to use a standard VNC client can be disabled again using:

sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -configure -clientopts -setvnclegacy -vnclegacy no

Restoring Synology NAS Crashplan existing configuration

In addition to yesterday’s post about running Crashplan on a Synology Disk Station, I thought it was worth mentioning that the key advantage of using PC Load Letter‘s packages is that nowadays they also fully support Crashplan’s auto-update feature. So once installed, there is no need to update the package anymore: Crashplan will update itself to the latest version automatically.

Unfortunately this is not visible in Synology’s Package Centre, which happily shows that an update for Crashplan is available whenever an updated package is available. Since it is always a good idea to have the latest package installed, as it may resolve other issues (e.g. one day auto-update support started to work, and now the package also seems to preserve its configuration upon reinstall), it is still a good idea to upgrade every now and then.

One of the key drawbacks of upgrading in the past was that the configuration was lost and the new installation would even appear as a fresh computer instead of retaining the existing configuration. I had to deal with this several times, normally ending up copying back a backup of the config file manually through an SSH CLI. This never really worked like I wanted: it is manual work and felt like a hack, which made me look for the right way to do this. After some searching I found an article on the Crashplan support pages about reconnecting an existing backup, which describes how the GUID of a Crashplan installation (the unique ID that identifies the computer in the Crashplan network) can be changed to that of the previous installation, so that the identity and configuration settings of the previous installation are restored. To look up the GUID of the installation to be restored, go to your Crashplan account’s computer overview and select the name of the computer, which will also display the GUID. Then follow the instructions to reconnect an existing backup.

Recently Crashplan has automated this semi-manual process as Adopting Another Computer, so that the manual steps are no longer needed. As described on the Crashplan support page, there is now an option available to adopt another computer after re-installation of the Crashplan client (which is exactly what happens when a new version of PC Load Letter‘s package is installed). With this option, restoring all settings has become very easy, and since all files are still there (no need to restore any files) it only requires a check with the remote systems to confirm everything has already been backed up.