The need for stability

Meg’s sites have uptime requirements that far exceed those of mine. Almost the entirety of her business is predicated on a persistent Internet presence. As I’m sure you understand, a few minutes of downtime could lead a visitor to believe she’s no longer in business. This quote from The Social Network says it all:

Mark Zuckerberg: [speaking frantically, almost hysterical] Without money the site can’t function. Okay, let me tell you the difference between Facebook and everyone else, we don’t crash EVER! If those servers are down for even a day, our entire reputation is irreversibly destroyed! Users are fickle, Friendster has proved that. Even a few people leaving would reverberate through the entire user base. The users are interconnected, that is the whole point. College kids are online because their friends are online, and if one domino goes, the other dominos go, don’t you get that? I am not going back to the Caribbean Night at AEPi!

While I certainly appreciate the job Dreamhost does in keeping their servers and network available to the Internet, it’s not at the level of fault tolerance you can obtain from an Amazon Web Services EC2 instance. EC2 stands for Elastic Compute Cloud and is, for all intents and purposes, a virtual private server capable of hosting an OS such as Windows or Linux. What makes an EC2 instance special is two-fold. First, you can dynamically expand the number of instances running based on demand. Second, and this is the real crux of the issue, you can host multiple EC2 instances in disparate data centers and load balance across all of them. That is where the reliability comes into play.

Amazon is making a big play in this space and has a myriad of options available within AWS. I’m not going to go into great detail, but know that you can host databases, load balancers, DNS, message queues, notification systems, and a whole host of other services. Here is a quick screenshot of the toolbar icons that are available in the AWS console.

AWS Console Toolbar Icons

As a quick aside, I’m using S3 and CloudFront as my CDN for this site.

CloudFormation, AWS and the free Micro tier

In order for users to become acquainted with AWS, Amazon makes an EC2 micro instance free for one year. Besides the obvious benefit of a free VPS for a year, what it really does is let you drive AWS like a rented mule. Create the instance, muck up the OS “real good” and then kill it and create a new one. This is where I started to have fun with another component of AWS called CloudFormation. From the CloudFormation site:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

The templates are JSON and have a defined structure. The CloudFormation documentation has a lot of details and is a must-read for getting around. I’m not going to go into those details because I want to jump to the gist of this post, which is taking the Single EC2 Instance web server with Amazon RDS database instance sample template and adding additional CloudFormation functionality.


A key feature in templates is the ability to perform actions against the EC2 instance after it’s created. You can do the following:

  • Install packages such as httpd using yum
  • Unpack one or more zip or tarballs into a defined location
  • Perform command line operations
  • Create files
  • Start services such as httpd
  • Add users
  • Manage user groups
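
Each of those bullets corresponds to a key in the AWS::CloudFormation::Init metadata. Here is a bare skeleton, with placeholder values rather than anything from my actual template:

```json
{
  "AWS::CloudFormation::Init": {
    "config": {
      "packages": { "yum": { "httpd": [] } },
      "sources":  { "/opt/app": "https://example.com/app.tar.gz" },
      "commands": { "a_hello": { "command": "echo hello > /tmp/hello" } },
      "files":    { "/tmp/note.txt": { "content": "placeholder", "mode": "000644" } },
      "services": { "sysvinit": { "httpd": { "enabled": "true", "ensureRunning": "true" } } },
      "users":    { "appuser": { "homeDir": "/home/appuser" } },
      "groups":   { "appgroup": {} }
    }
  }
}
```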

AWS::CloudFormation::Init is a sub-resource created within the template under the EC2 instance. It is passed as metadata to the EC2 resource, telling it what to do when it’s fired up. AWS::CloudFormation::Init is started by a call to an AWS-specific script, cfn-init, that comes installed on the default Linux instance. cfn-init is called by a Bash script defined in the template. The Bash script, pushed in the UserData field of the EC2 instance properties, also contains a number of functions in the sample template. The original EC2 resource Properties looked like this:
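
This is reconstructed from the public sample template, so parameter and resource names may differ slightly from the original:

```json
"Properties": {
  "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
    { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] } ] },
  "InstanceType": { "Ref": "InstanceType" },
  "SecurityGroups": [ { "Ref": "WebServerSecurityGroup" } ],
  "KeyName": { "Ref": "KeyName" },
  "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
    "#!/bin/bash -v\n",
    "yum update -y aws-cfn-bootstrap\n",
    "/opt/aws/bin/cfn-init -s ", { "Ref": "AWS::StackName" },
    " -r WebServer --region ", { "Ref": "AWS::Region" }, "\n"
  ] ] } }
}
```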


This script updates the AWS helper scripts to the latest version, starts AWS::CloudFormation::Init, and then tweaks the WordPress install. WordPress was installed from a tarball into the web root of the server in this section of the template:
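
The relevant Metadata section, reconstructed from the public sample template (the DBInstance resource name is the sample’s; treat the details as approximate):

```json
"Metadata": {
  "AWS::CloudFormation::Init": {
    "config": {
      "packages": {
        "yum": { "httpd": [], "php": [], "php-mysql": [], "mysql": [] }
      },
      "sources": {
        "/var/www/html": "http://wordpress.org/latest.tar.gz"
      },
      "files": {
        "/var/www/html/wordpress/wp-config.php": {
          "content": { "Fn::Join": [ "", [
            "<?php\n",
            "define('DB_NAME',     '", { "Ref": "DBName" }, "');\n",
            "define('DB_USER',     '", { "Ref": "DBUsername" }, "');\n",
            "define('DB_PASSWORD', '", { "Ref": "DBPassword" }, "');\n",
            "define('DB_HOST',     '", { "Fn::GetAtt": [ "DBInstance", "Endpoint.Address" ] }, "');\n"
          ] ] },
          "mode": "000644", "owner": "apache", "group": "apache"
        }
      },
      "services": {
        "sysvinit": { "httpd": { "enabled": "true", "ensureRunning": "true" } }
      }
    }
  }
}
```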


This segment of the original template creates the WordPress wp-config.php file with references to the RDS resource that is also generated by the template, installs the necessary packages on the server, and sets up the proper services to run as daemons. The first component of the template that I took issue with was that every install was going to have the same random keys. Fortunately, WordPress has a service that will generate these values for you: call https://api.wordpress.org/secret-key/1.1/salt/ and WordPress will return the necessary segments to add to your wp-config.php. First problem to solve: how do I get these values into the middle of the file? Why the middle? I tried a simple append, but the WordPress install didn’t see them. After quite a bit of trial and error, I wrote a Perl script that is called from the command line. I did this because I didn’t want to leave my custom script as a file on the server.

Perl file mangling

This script looks for a token that I inserted into the wp-config.php file. This snippet is part of a larger section of the template. Since things have to happen in order, CloudFormation templates let you define sets of config options, run those sets in a specific order, and then run commands within each set in order. Here is the larger snippet.
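
Below is a trimmed-down version of that section. The names are stand-ins and the perl substitution is elided, but the shape matches: configSets defines the order, and the commands inside each config run alphabetically:

```json
"AWS::CloudFormation::Init": {
  "configSets": {
    "wordpress_install": [ "first", "second" ]
  },
  "second": {
    "commands": {
      "a_move_wordpress": {
        "command": "mv /var/www/html/wordpress/* /var/www/html/ && rmdir /var/www/html/wordpress"
      },
      "b_fix_permissions": {
        "command": "chown -R apache:apache /var/www/html"
      },
      "c_mangle_config": {
        "command": "perl -0777 -pi -e '...salt substitution...' /var/www/html/wp-config.php"
      }
    }
  },
  "first": {
    "packages": { "yum": { "httpd": [], "php": [], "php-mysql": [] } },
    "sources":  { "/var/www/html": "http://wordpress.org/latest.tar.gz" }
  }
}
```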


You’ll notice there are now first and second config sections. They appear backwards in the JSON; however, if you look at the configSets (documentation), the sections are called in the correct order. Commands are called in alphabetical order, so I just named them a, b, c, and so on. The other commands I perform are moving WordPress out of a folder under the webroot, setting the correct file and folder permissions, and of course the mangling of the wp-config.php file.


The last item I added to the template is cfn-hup. cfn-hup is an optional script that can run as a daemon and look for template updates that are pushed to the server. When changes occur, cfn-hup reads its config files and performs the actions defined within. I added some boilerplate code to the template from the CloudFormation docs.
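
The boilerplate amounts to two small INI files written out via the template’s files section. The rendered versions look roughly like this (in the template, the stack name and region are filled in with Refs; WebServer is the sample’s resource name):

```ini
; /etc/cfn/cfn-hup.conf
[main]
stack=my-stack-name
region=us-east-1
interval=1

; /etc/cfn/hooks.d/cfn-auto-reloader.conf
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.WebServer.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -s my-stack-name -r WebServer --region us-east-1
runas=root
```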

userData Redux

Here is the updated Bash script that is run after install. Besides starting cfn-init, it also starts up cfn-hup. There’s also a full yum -y update in there; however, for testing, it’s commented out.
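
A reconstruction of that script follows. The error_exit helper and the WaitCondition signaling come from the sample template; the configSet name is a stand-in, and the angle-bracket placeholders are values the template fills in with Refs:

```shell
#!/bin/bash -v
# Helper from the sample template: signal the WaitCondition on failure
function error_exit {
  /opt/aws/bin/cfn-signal -e 1 -r "$1" '<WaitHandle URL>'
  exit 1
}

# Bring the CloudFormation helper scripts up to date
yum update -y aws-cfn-bootstrap

# Full OS update, commented out while testing
# yum -y update

# Run AWS::CloudFormation::Init with the ordered configSet
/opt/aws/bin/cfn-init -s <stack> -r WebServer -c wordpress_install \
  --region <region> || error_exit 'Failed to run cfn-init'

# Start cfn-hup so pushed template updates get applied
/opt/aws/bin/cfn-hup || error_exit 'Failed to start cfn-hup'

# All done, signal success
/opt/aws/bin/cfn-signal -e 0 -r "Setup complete" '<WaitHandle URL>'
```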

Complete Template

You’re welcome to a copy of the template; it’s in a ZIP file below. I also started a GitHub repo with this template and any others I create in the future. Thanks!

Download Here

What catalog is that file in?

That’s what I often hear from my wife Meg, a professional photographer. Meg uses Adobe Photoshop Lightroom for cataloging the pictures she takes for her business as well as the pictures she takes of our family. An underlying frustration I have with Lightroom is that photos become lost inside different catalogs on your hard drive, and Lightroom lacks the ability to search across catalogs. Some photographers believe the answer is to have a few monolithic catalogs. While these catalogs may contain everything in one searchable database, they become slow and unwieldy within a few short gigabytes of data. My answer is to have many small catalogs and harness the great search capability built into the Mac OS: Spotlight. One small problem: Spotlight doesn’t inherently know how to index a Lightroom .lrcat file. To overcome this shortcoming, I wrote my own Spotlight plugin, something called an importer (see the Spotlight Importer Programming Guide).


Lightroom catalog files are actually SQLite databases, so extracting the necessary data was quite simple. I first used the SQLite Manager add-on for Firefox to examine the database and came up with the following simple query to get at the filenames I would need to pass to Spotlight:
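
Reconstructed as best I can: the table and column names are what the catalog schema used at the time, with AgLibraryFile.idx_filename holding each photo’s filename:

```sql
SELECT idx_filename FROM AgLibraryFile;
```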

The next step was to run this query from within Objective-C. After trying the familiar OmniSQLite framework from the larger OmniGroup frameworks, I settled on a much smaller wrapper, FMDB.
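
A sketch of the FMDB call, using FMDB’s standard FMDatabase/FMResultSet API; catalogPath is assumed to hold the path to the .lrcat file, and error handling is trimmed:

```objc
#import "FMDatabase.h"
#import "FMResultSet.h"

NSMutableArray *filenames = [NSMutableArray array];
FMDatabase *db = [FMDatabase databaseWithPath:catalogPath];
if ([db open]) {
    FMResultSet *rs = [db executeQuery:@"SELECT idx_filename FROM AgLibraryFile"];
    while ([rs next]) {
        [filenames addObject:[rs stringForColumnIndex:0]];
    }
    [rs close];
    [db close];
}
```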


It took several days before I wrapped my head around Spotlight and the necessary components of an importer. Probably the most difficult concept to understand was Uniform Type Identifiers (UTIs). Adobe does not declare a UTI for their catalog files, so I had to create my own in order for Spotlight to identify the files and know which importer to use to index Lightroom catalogs. I came up with com.adobe.lightroom.library. This UTI is in the Info.plist file of my importer under the Exported Type UTIs section.
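
The Info.plist entry looks roughly like this. The UTI string is mine, as mentioned; the extension mapping is what lets Spotlight associate .lrcat files with the importer (the description string is illustrative):

```xml
<key>UTExportedTypeDeclarations</key>
<array>
    <dict>
        <key>UTTypeIdentifier</key>
        <string>com.adobe.lightroom.library</string>
        <key>UTTypeDescription</key>
        <string>Adobe Lightroom Catalog</string>
        <key>UTTypeConformsTo</key>
        <array>
            <string>public.data</string>
        </array>
        <key>UTTypeTagSpecification</key>
        <dict>
            <key>public.filename-extension</key>
            <array>
                <string>lrcat</string>
            </array>
        </dict>
    </dict>
</array>
```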


Take note that the package I created for installing the importer places it in ~/Library/Spotlight. I chose the user’s home directory because the importer can be installed there without the need for a password. I also run this command
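
Something along these lines, using mdimport’s -r flag to ask the Spotlight server to reimport files handled by the plugin (the bundle name here is illustrative):

```shell
mdimport -r ~/Library/Spotlight/LightroomImporter.mdimporter
```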

after install. That causes Spotlight to index files already on disk. See here for more info.

Source Code

The code speaks for itself, which is why the source is open and hosted at GitHub. I also created a nice GitHub project page for the code. You can find the page here and the source code here.


Here is the importer in use:



Download the installer package