Clean Photo Album permalinks with Piwigo

I have been playing with Piwigo for some time now to replace my Menalto Gallery3 online photo gallery. The key reason to look for another solution is that after the move from Gallery 2 to 3 (which took ages as it was a major overhaul of the code), the project seems to have stalled.

So far I really like Piwigo as it has everything I need, including iPhoto integration and a (simple) iPhone app. LDAP support is available through a plugin that is basic but suffices for my needs. However, one of the key gaps for me was that there was no way to generate nice and simple URLs to albums that you can share easily (verbally). Although it was possible to define permalinks for an album, the URL remained ugly in my opinion.
Today I hacked together a small patch for the Piwigo 2.6 codebase that changes the URLs for photo albums to something like:

http://photo.mydomain.tld/albumname

which is exactly the way it worked for me before (like I had with Gallery3). This only works for albums with a permalink defined; the default album URLs retain the /category/<albumid> format, which is fine for my situation.

The steps to obtain cleaner album URLs are:

  1. Apply this patch: piwigo-url-patch
  2. Add the following mod_rewrite rules to your Apache configuration:
    RewriteRule ^/category/       /index.php/%{REQUEST_URI}               [L]
    RewriteRule ^/[^.]+$          /index.php/category/%{REQUEST_URI}      [L]
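For reference, this is roughly how the rules could sit in the Apache configuration of the Piwigo virtual host; the hostname and document root below are just placeholders for your own setup:

<VirtualHost *:80>
    ServerName    photo.mydomain.tld
    # Placeholder path to the Piwigo installation
    DocumentRoot  /var/www/piwigo

    RewriteEngine On
    # Hand normal category URLs to Piwigo's index.php unchanged
    RewriteRule ^/category/       /index.php/%{REQUEST_URI}               [L]
    # Treat any other extensionless path as an album permalink
    RewriteRule ^/[^.]+$          /index.php/category/%{REQUEST_URI}      [L]
</VirtualHost>

After reloading Apache, http://photo.mydomain.tld/albumname should then resolve to the album with that permalink.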

Again, this worked in my setup, but I am still testing it, so any feedback to improve it is welcome. I did notice that the patch occasionally results in one / too many in URLs generated by Piwigo, but that is silently ignored and does not affect the functionality.

To actually use the patch, define a permalink under [Administration] -> [Albums] -> [Manage] on the [Permalinks] tab.

Change GitLab homepage using Apache’s mod_rewrite

For some time I have been looking for a way to share public projects easily using GitLab. With the Public Project option of GitLab this has been possible for a while, but it did not work quite the way I would like (i.e. I would like http://gitlab.mydomain.tld to be the URL for all public projects). Due to the way GitLab is set up, the default URL redirects the user to the login page, which does provide a link to the Public Projects page, but that was not quite what I want.

Of course, as GitLab is open source, I could change the code directly, but as I would have to do that after each upgrade of GitLab (which is monthly!) I did not want to go that route. Today I found a way around changing the code by adding the following mod_rewrite rules to my Apache configuration (I placed them in the <VirtualHost> configuration, but they should also work from a .htaccess file):

# Redirect /users/sign_in to /public unless it has a local referrer
# This makes the public projects page the homepage instead of the login page
RewriteCond   %{HTTP_HOST}@@%{HTTP_REFERER}    !^([^@]*)@@https?://\1/
RewriteRule   ^/users/sign_in$                 https://%{SERVER_NAME}/public/          [R,L]

This is inspired by a blog post on referrer checking from an Apache .htaccess file. To get to this solution I just had to realize that an internal redirect by the application clears the referrer, and then apply the opposite logic to intervene when that happens (no referrer implies a redirect; when the user clicks on a link the request will have a referrer). How this works is:

  1. The user visits http://gitlab.mydomain.tld/
  2. GitLab redirects this request to its sign_in page
  3. The browser requests the sign_in page, as this was a redirected page the referrer will be empty
  4. The above mod_rewrite rule kicks in and redirects the user to the public projects page
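A quick way to verify this behaviour from the command line is to request the sign_in page once without and once with a (local) referrer; the hostname is of course a placeholder:

# Without a referrer the rule should kick in: expect a 302 to /public/
curl -I https://gitlab.mydomain.tld/users/sign_in

# With a local referrer, as a browser sends after clicking a link,
# the rule should not match and the normal sign_in page is returned
curl -I -e https://gitlab.mydomain.tld/ https://gitlab.mydomain.tld/users/sign_in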

For me this setup works as I expect. The only caveats are that users whose browsers are set up not to send a referrer (e.g. for privacy reasons) may no longer be able to log in, and that a direct link to the sign_in page won’t work (the user will be redirected to the public projects page and has to click the sign_in button). For my setup neither is an issue; let me know through the comments if there are other issues or perhaps solutions for this.

Login issues after upgrade to GitLab 6.5

I have been playing around with GitLab, the open-source self-hosted GitHub clone, for a while now. I plan to use it to publish the scripts and small programs I have written over the last few years, as well as those I will still create later this year.

After the upgrade to GitLab version 6.5.1 (which was a breeze BTW thanks to their excellent upgrade script) I noticed I could no longer log in to the server. In the log file log/production.log I found messages like:

Started POST "/users/sign_in" for 2001:XXX:XXXX:X:XXX:XXXX:XXXX:XXXX at 2014-02-02 13:53:46 +0100
Processing by Devise::SessionsController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"XXXXXXXXXXXXXXXXX", "user"=>{"login"=>"XXXXXXX", "password"=>"[FILTERED]", "remember_me"=>"0"}}
Can't verify CSRF token authenticity
Redirected to https://gitlab.mydomain.tld/
Completed 302 Found in 123ms (ActiveRecord: 7.3ms)
Started GET "/" for 2001:XXX:XXXX:X:XXX:XXXX:XXXX:XXXX at 2014-02-02 13:53:46 +0100
Processing by DashboardController#show as HTML
Completed 401 Unauthorized in 1ms
Started GET "/users/sign_in" for 2001:XXX:XXXX:X:XXX:XXXX:XXXX:XXXX at 2014-02-02 13:53:46 +0100

 
This turned out to be a known issue with the installation of GitLab. Since GitLab only supports NGINX while I am running it behind Apache, I needed to dig a bit further for the solution. The problem was caused by a security enhancement in GitLab 6.5 in combination with HTTPS. Since the SSL processing is handled by Apache, which uses mod_proxy to connect to GitLab over plain HTTP, cookies no longer worked properly. The solution was pretty simple: it only required the following statement to be added to the Apache virtual host configuration:

RequestHeader set X_FORWARDED_PROTO 'https'

Please note that this does require mod_headers to be enabled; if it is not, issue the following two commands to enable it:

sudo a2enmod headers
sudo /etc/init.d/apache2 restart
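
For context, this is roughly where the statement ends up in a setup like mine. The backend address and port, as well as the certificate paths, are assumptions (a typical GitLab Unicorn listens on 127.0.0.1:8080), so adjust them to your own proxy configuration:

<VirtualHost *:443>
    ServerName gitlab.mydomain.tld

    SSLEngine on
    # Placeholder certificate paths
    SSLCertificateFile    /etc/ssl/certs/gitlab.pem
    SSLCertificateKeyFile /etc/ssl/private/gitlab.key

    # Tell GitLab the original request was HTTPS, even though Apache
    # talks plain HTTP to the backend (requires mod_headers)
    RequestHeader set X_FORWARDED_PROTO 'https'

    # Forward everything to the GitLab backend (requires mod_proxy)
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>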

Factory reset after firmware upgrade of TP-Link SG3424

For about a year now I have had a TP-Link SG3424, a manageable 24-port gigabit switch that so far has proven to be stable and reliable, and that was a good deal compared to similar devices when I bought it. It’s not a Cisco switch, but honestly I do not know what I am missing out on given the feature set it has. So far it has covered my needs and has been operational without any noticeable issues or hiccups. The only drawback (for me) so far was that it lacked IPv6 support, but that’s optional on an internal network anyway.

Today I noticed that TP-Link released a new firmware version in December for the SG3424 switch that adds IPv6 support; it is available from their support site’s download page. I followed the installation guide, which was correct but a bit incomplete, as I had a few surprises during my upgrade:

  1. The installation manual instructs you to change your network settings to the 192.168.0.X network. As I have an operational environment using another IP range, I happily ignored this and was still able to perform the upgrade. However, after the upgrade, I could no longer reach my switch until I changed the IP address of my laptop to the 192.168.0.X network. This is not an issue for someone who knows what he is doing, but it should have been mentioned in the installation manual in my opinion…
  2. After the firmware update, the switch was back at its factory settings and had lost all configuration (including the logins, so I had to log in with admin/admin again). This is bad, TP-Link: even if your update process requires a full reset, PUT IT IN THE UPGRADE MANUAL at least…
    Fortunately I have a tendency to make backups before I change things and did create one before the update, but it was a surprise one would not expect.
  3. When restoring the configuration, all settings were restored correctly EXCEPT the login and password of the admin user. Again, not a blocker, but a surprise and something NOT DOCUMENTED.

All in all these are minor things that one can resolve pretty easily, but not something you would expect from a company that positions this product for the professional market… Cisco-like 10+ page upgrade guides are a pain to get through, but a 2-pager with only half of the information doesn’t help either.

Apart from these glitches, the upgrade was worthwhile as the switch now has IPv6 support, which works great and (as always with this switch) has every feature one can imagine: adding manual addresses next to the auto-configured ones, support for RA and DHCPv6, and IPv6 on all services like SSH and SNMP all appear to work fine.

Mac OS X Lion (10.7) on my (upgraded) Late 2006 Mac Mini

After yesterday’s upgrade of my Late 2006 Mac Mini (MacMini1.1) it was time today to see if I could get OS X 10.7 (Lion) working on it. As per discussions on Apple’s discussion forums this should be possible (as the hardware supports it after the upgrade I did). However, the standard OS X Lion installation did not want to install on this hardware yet. As per the discussion on MacRumors.com I had to remove the file

/System/Library/CoreServices/PlatformSupport.plist

before the installation would start. Once I did that, I could do a clean install on the new SSD without any issues or additional hacks. Transferring the users, apps and settings from the old system, which was still on the external USB hard disk, also went fine and actually totally surprised me (I had never used that feature before), as it turned the clean install into a fully usable system, including the configuration of the OpenDirectory server.
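
For anyone attempting the same, the hack itself boils down to something like the command below. The volume name of the installer is an assumption on my part and will differ depending on how you prepared the installation media:

# Remove the platform check from the mounted Lion installer volume
# ("Mac OS X Install ESD" is an assumed volume name, adjust as needed)
sudo rm "/Volumes/Mac OS X Install ESD/System/Library/CoreServices/PlatformSupport.plist"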

After the installation it is important to enable TRIM support in OS X to extend the lifetime of the SSD, which I did with the excellent tool Chameleon.

Right now I am very happy with the end result: a Late 2006 Mac Mini running OS X Lion (10.7):
Late 2006 Mac Mini after upgrade

Obviously only 3GB of memory is available as that is the maximum the hardware supports, but this is still a very good solution to have a second Mac Mini system for my children.

Upgraded my MacMini1.1 to a MacMini2.1 :-)

As I already wrote in my last post, I was looking into upgrading my old Late 2006 Mac Mini to extend its technical life. Today I successfully managed to upgrade the hardware of the old machine and it is running fine again. The steps I took were:

  • Replaced the Core Duo 1.66GHz CPU with a Core 2 Duo 2GHz CPU
    The exact CPU I purchased for about €25 through eBay was an Intel Core 2 Duo T7200 SL9SF 2.0GHz/4M/667 laptop CPU. To install it I basically followed the steps in the iFixIT step-by-step guide for replacing a Mac Mini CPU, which were pretty straightforward.
  • Booted OS X from the Mac Mini’s USB hard disk while the machine was still open to install the MacMini2.1 firmware
    As per this NetKas forum guide, the firmware must be updated before adding the additional memory or the Mac Mini won’t boot. The links to the firmware were broken; I actually downloaded them from MediaFire and followed the steps from the French Mac forum post that linked to them.
  • Replaced the 2x 1GB memory with 2x 2GB memory modules
    Through Marktplaats.NL I acquired two used memory modules for only €35. As the machine was still open it was pretty straightforward to replace the memory modules.
  • Replaced the broken 80GB hard disk with a 60GB SSD
    I purchased an ADATA S511 60GB SSD online for only €39. Again, installing it was straightforward as the machine was still open.

After all steps were completed, the Mac Mini was still working as before on Mac OS X Snow Leopard (10.6.8) without problems. The extra memory did help, but as the machine was still booting from a USB disk it was still very slow.

As an extra bonus I noticed that the Mac Mini now supports the modern Mac keyboards (the flat aluminium ones) during startup. Before this update I always needed the old (plastic) keyboard I got with the machine to be able to intervene in the boot process, but now this also works with the modern keyboards :-).

Tomorrow I will look into upgrading OS X and see how that goes. So far so good: the machine is already working faster, and for about €100 in total that is not a bad investment to keep using it.

Upgrading a late 2006 model Mac Mini

One of the Macs in our house is a Late 2006 model Mac Mini (MacMini1.1, model MA206LL/A). The machine itself still works happily with Mac OS X Snow Leopard (10.6), but it only has 2GB of memory and, since its hard disk broke a while ago, it is running from a USB hard disk. All in all technically still OK, but a terrible user experience as it is just slow.
Today I did some investigation on the Internet to see to what extent this old machine can still be upgraded, and bumped into an interesting overview on xlr8yourmac.com. It turns out that the basics are quite good and with a few changes it can still be used for some time:

  • CPU – currently a Core Duo that could be replaced with a Core 2 Duo
    The Core Duo processor is a 32-bit part that does not support 64-bit OS X. Fortunately the processor sits in a socket (and is not soldered to the main board) and its pin layout is identical to the Core 2 Duo models. This is also being discussed on Apple’s discussion forum (the thread still exists, so Apple is not stopping it) and according to posts on MacRumors.com others have done this successfully, so this is definitely something I will try. Guess what: there is even a step-by-step guide on iFixIT on how to do it!
  • Memory – currently limited to 2GB but potentially could support 3GB (out of 2x 2GB)
    Memory is limited to 2GB (2x 1GB) with the Core Duo processor, but the Core 2 Duo can support up to 4GB (2x 2GB) of memory. Unfortunately the MacMini1.1 firmware does not support this, but it turns out to be possible to flash the firmware of a MacMini2.1, as the folks on the NetKas forum explain. The links to the firmware no longer worked, but I found them on a French Mac forum thanks to this post. After this upgrade 3GB can be used, which is still 50% more than the machine had.
    There is a separate step-by-step guide on iFixIT for replacing the memory, but I don’t think I will need it as I will do this when I replace the CPU.
  • Hard disk – currently a broken 5400rpm 80GB disk; replacing it with a 60GB SSD is a no-brainer
    Replacing a broken hard disk with an SSD is nothing fancy, though it is important to enable TRIM support in OS X afterwards when you use a non-Apple disk. For this I found the excellent tool Chameleon some time ago for my MacBook Pro.
    Also for this step there is a step-by-step guide on iFixIT, which I won’t need either as I will install the new disk when I replace the CPU.
  • Software – currently OS X Snow Leopard (10.6) is the maximum
    Replacing the Core Duo CPU with a Core 2 Duo effectively turns the MacMini1.1 into a MacMini2.1, which is capable of running OS X Lion (10.7) according to discussions on Apple’s discussion forums. There is apparently only one hack needed (removal of a file on the installation media) to be able to perform a clean install, according to a discussion on MacRumors.com.

As I am not that uncomfortable with opening my old Mac Mini (I did it before when I added memory) and the other steps appear doable, I will give this a shot. I just ordered the components and plan to perform the upgrade next weekend (assuming all parts will be in).

Access Cisco Firewall forwarded external IPv4 port from inside

For some time now I have been using a borrowed Cisco 881 router as the router/firewall for my internet connection. The box is stable and configured the way I want, but unlike the Linux and Fritz!Box routers I used before, the Cisco does not allow connections from the inside to forwarded IPv4 ports on its external address. This is inconvenient in my situation, as it means that I am unable to reach some services from my internal network (i.e. I cannot reach the websites I host). So far the only way around this was using split DNS and double administration, which is quite tedious and inconvenient.

Some time ago, when looking at how to set this up, I bumped into this article: NAT: access outside global address from the inside (the site seems to be down at the moment, but its content is still available here thanks to the Internet Archive). It describes an alternative way to set up the Cisco NAT rules using the NAT Virtual Interface (NVI), which decouples them from a specific interface in a specific direction. Today I tested this approach.

Setup

To set up the new NAT approach, change existing NAT rules like:

ip nat inside source static tcp 192.168.0.100 80 WW.XX.YY.ZZ 80

into something that looks like the following:

ip nat source static tcp 192.168.0.100 80 WW.XX.YY.ZZ 80

ip access-list extended NAT-INSIDE-ADDRESSES
permit ip 192.168.0.0 0.0.0.255 any
!
ip nat source list NAT-INSIDE-ADDRESSES interface FastEthernet0/1 overload

(basically, remove the inside clause from the statement). In my setup 192.168.0.100 is the internal IP address of my web server and WW.XX.YY.ZZ represents my external IP address; in this example I forwarded port 80 (HTTP). The last part is required to make sure that internal traffic on FastEthernet0/1 is also NATted properly, to avoid asymmetric data flows.
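
To make this a bit more concrete, below is a sketch of how I understand the relevant pieces fit together with NVI. The ip nat enable statements and the Vlan1 LAN interface come from my reading of the NVI feature, not from the literal config of this router, so treat them as assumptions:

! Domainless static NAT entry for the web server (NVI style)
ip nat source static tcp 192.168.0.100 80 WW.XX.YY.ZZ 80
!
ip access-list extended NAT-INSIDE-ADDRESSES
 permit ip 192.168.0.0 0.0.0.255 any
!
! Overload internal sources on the external interface to keep flows symmetric
ip nat source list NAT-INSIDE-ADDRESSES interface FastEthernet0/1 overload
!
! With NVI the participating interfaces are marked with "ip nat enable"
! instead of the classic "ip nat inside" / "ip nat outside"
interface FastEthernet0/1
 ip nat enable
interface Vlan1
 ip nat enable
!
! Active translations can be inspected with:
!   show ip nat nvi translations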

Testing it

The first basic tests of this new setup were promising. Indeed, after these changes I could access my external sites from internal addresses as well. However, when downloading something from an internal site I noticed that the performance was not very good. That by itself was something I could definitely live with, as that traffic would not be massive. However, due to this change in config, all NAT traffic turned out to be slower, and effectively the performance of my network connection was about half of what it used to be. Before this change the Cisco 881 was capable of streaming about 38-43 Mbit, which was not my full 50Mbit bandwidth, but close enough. With this (NVI) setup, my maximum bandwidth as measured with SpeedTest.NET dropped to 20Mbit and below. With the command
show processes cpu history
on the router I noticed that the poor Cisco 881 was at 100% CPU load/utilization during the downloads. I suspect that the old Cisco 881 (which does not support 50Mbit in the first place) is CPU-bound when using NAT Virtual Interfaces and not capable of handling this at higher speeds.

Conclusion

Technically, the approach of using the NAT Virtual Interface (NVI) feature of IOS works to enable access to NAT-forwarded external ports from the inside. However, since this appears to be very CPU intensive, it is not a good solution for now, as the Cisco 881 cannot cope with the load and my internet bandwidth is effectively cut in half. I need to revisit this approach once I have acquired a router that can support the bandwidth I have, and see if it can then handle the CPU load.

Happy New Year!

A Happy New Year and best wishes for 2014 to all of you!

As you may have noticed, things have become extremely quiet here and I published only a few posts during 2013. The key reason for that was that things were (and still are) tough at my day job, which did not leave me a lot of time over the weekends to spend with my family, play around with technical stuff, and still have enough energy (and discipline) left to post about it here. For this year I do not expect that to change immediately, but I still have a number of posts in draft to finish regarding the recovery after my upgrade to Mac OS X Mavericks, I have started to play with Cisco routers (which requires me to document stuff), and I have some small projects lined up to complete this year….

Now of course this is the beginning of the year, which normally means a fresh start and a lot of initial ideas… so let’s change some things now for the rest of this year…

Restoring OpenDirectory on Mac OS X Mountain Lion Server

After some more checking of the contents of the /Recovered Items folder left over after my failed upgrade of OS X from Lion to Mountain Lion, I decided to proceed with the re-installation of the components to see if I could get things back the way they were.

The first step was to install the Server component again (which had gone missing during the upgrade). This only took a simple purchase of the Server.app component in the App Store. After that I had a Mac server again and could start my reinstallation.

The first component to reconfigure was Open Directory. It was extremely important for me not to lose that one, as it contained all my users, their passwords and group memberships, as well as all the e-mail addresses each user had (I am hosting a few different domains, so re-creating that would mean a lot of work).

When I enabled the Open Directory server component, I had to specify how I wanted to configure it. This screen included an option to import a backup. As I still had the whole data structure from my previous installation, I tried that first, but it didn’t work. Then I noticed that the directory /Recovered Items/private/var/backups/ contained a file called ServerBackup_OpenDirectoryMaster.sparseimage that was less than a day old. I selected that file as the backup to restore from, which was accepted, and it looks like that did the trick: my users were restored and I could log in with my regular user ID again.
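
As a side note, I believe the same restore can also be done from the command line with slapconfig; I used the Server.app screen myself, so treat the exact invocation below as an assumption rather than a verified recipe:

# Restore an Open Directory master from the recovered backup image (assumed syntax)
sudo slapconfig -restoredb "/Recovered Items/private/var/backups/ServerBackup_OpenDirectoryMaster.sparseimage"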

Based on this initial success I decided to rebuild the rest of my server, as I know the other components (PostgreSQL, Postfix, Dovecot, etc.) pretty well from when I still hosted everything on Linux… I will continue to document the steps I took, as well as my custom setup, as it may be useful for others.