ASP.NET MVC 4

In my ASP.NET MVC 4 partial view, I was having an issue where ValidationSummary and ValidationMessageFor were coming up blank. I scratched my head on this for quite some time. I finally came across this entry at the MVC forums at MSDN. It turns out I wasn’t passing the ViewData from the parent view, but rather a fresh ViewData of my own. This is how I had things before:

The Wrong Way
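A minimal sketch of the call I started with; the partial name, model property, and extra ViewData key are stand-ins:

```razor
@* Creating a brand-new ViewDataDictionary here discards the parent's ModelState *@
@Html.Partial("_AddressForm", Model.Address, new ViewDataDictionary { { "Region", "US" } })
```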

The Right Way

I figured it out by doing this instead:
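Again with stand-in names, the fix is to hand the parent’s ViewData to the ViewDataDictionary copy constructor before tacking on the extras:

```razor
@* Copying the parent ViewData carries its ModelState into the partial *@
@Html.Partial("_AddressForm", Model.Address, new ViewDataDictionary(ViewData) { { "Region", "US" } })
```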

Basically what we have here is the original ViewData from the parent view with my additional add-ons that are passed to the partial view. Once I made this change, the ModelState is propagated to the partial view and ValidationSummary and ValidationMessageFor are outputting the desired results.

The need for stability

Meg’s sites, megbitton.com and soulsimagined.com, have uptime requirements that far exceed mine. Almost the entirety of her business is predicated on a persistent Internet presence. As I’m sure you understand, a few minutes of downtime could lead to a belief that she’s no longer in business. This quote from The Social Network says it all:

Mark Zuckerberg: [speaking frantically, almost hysterical] Without money the site can’t function. Okay, let me tell you the difference between Facebook and everyone else, we don’t crash EVER! If those servers are down for even a day, our entire reputation is irreversibly destroyed! Users are fickle, Friendster has proved that. Even a few people leaving would reverberate through the entire user base. The users are interconnected, that is the whole point. College kids are online because their friends are online, and if one domino goes, the other dominos go, don’t you get that? I am not going back to the Caribbean Night at AEPi!

While I certainly appreciate the job that Dreamhost does in keeping their servers, and by extension their network, available to the Internet, it’s not at the level of fault tolerance that you can obtain from an Amazon Web Services EC2 instance. EC2 stands for Elastic Compute Cloud and is, for all intents and purposes, a virtual private server capable of hosting an OS such as Windows or Linux. What makes an EC2 instance special is twofold. First, you can dynamically expand the number of instances running based on demand. Second, and this is the real crux of the issue, you can host multiple EC2 instances in disparate data centers and load balance across all instances. That is where the reliability comes into play.

Amazon is making a big play in this space and has a myriad of options available within AWS. I’m not going to go into great detail, but I’ll have you know that you can host databases, load balancers, DNS, message queues, notification systems, and a whole host of other services. Here is a quick screenshot of the toolbar icons that are available in the AWS console.

AWS Console Toolbar Icons

As a quick aside, I’m using S3 and CloudFront as my CDN for this site.

CloudFormation, AWS and the free Micro tier

In order for users to become acquainted with AWS, Amazon makes an EC2 micro instance free for one year. Besides the obvious benefit of a free VPS for a year, what it really does is allow you to drive AWS like a rented mule. Create the instance, muck up the OS “real good”, and then kill it and create a new one. This is where I started to have fun with another component of AWS named CloudFormation. From the CloudFormation site:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

The templates are JSON and have a defined structure to them. The CloudFormation documentation has a lot of details and is a must-read for getting around. I’m not going to go into those details because I want to jump to the gist of this post, which is taking the Single EC2 Instance web server with Amazon RDS database instance sample template and adding additional CloudFormation functionality.

AWS::CloudFormation::Init

A key feature in templates is the ability to perform actions against the EC2 instance after it’s created. You can do the following:

  • Install packages such as httpd using yum
  • Unpack one or more zip or tarballs into a defined location
  • Perform command line operations
  • Create files
  • Start services such as httpd
  • Add users
  • Manage user groups

AWS::CloudFormation::Init is a sub-resource created within the template under the EC2 instance. It is passed as metadata to the EC2 resource, telling it what to do when it’s fired up. AWS::CloudFormation::Init is started by a call to an AWS-specific script installed within the default Linux instance, cfn-init. This script is called by a Bash script defined in the template. The Bash script, pushed in the UserData field of the EC2 instance properties, also contains a number of functions in the sample template. The original EC2 resource Properties looked like this:

UserData
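A sketch of that section, patterned after the public sample template; the WebServer logical resource name is an assumption:

```json
"UserData": { "Fn::Base64": { "Fn::Join": ["", [
  "#!/bin/bash -v\n",
  "yum update -y aws-cfn-bootstrap\n",
  "/opt/aws/bin/cfn-init -s ", { "Ref": "AWS::StackName" },
  " -r WebServer --region ", { "Ref": "AWS::Region" }, "\n"
]]}}
```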

This script is updating the AWS scripts to the latest version, starting AWS::CloudFormation::Init, and then tweaking the WordPress install. WordPress was installed from a tarball into the web root of the server in this section of the template:

Metadata
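A trimmed sketch of that metadata; the DBName, DBUsername, DBPassword, and DBInstance names are assumptions patterned after the sample template:

```json
"Metadata": {
  "AWS::CloudFormation::Init": {
    "config": {
      "packages": {
        "yum": { "httpd": [], "php": [], "php-mysql": [] }
      },
      "sources": {
        "/var/www/html": "http://wordpress.org/latest.tar.gz"
      },
      "files": {
        "/var/www/html/wordpress/wp-config.php": {
          "content": { "Fn::Join": ["", [
            "<?php\n",
            "define('DB_NAME',     '", { "Ref": "DBName" }, "');\n",
            "define('DB_USER',     '", { "Ref": "DBUsername" }, "');\n",
            "define('DB_PASSWORD', '", { "Ref": "DBPassword" }, "');\n",
            "define('DB_HOST',     '", { "Fn::GetAtt": [ "DBInstance", "Endpoint.Address" ] }, "');\n"
          ]]},
          "mode": "000644", "owner": "apache", "group": "apache"
        }
      },
      "services": {
        "sysvinit": {
          "httpd": { "enabled": "true", "ensureRunning": "true" }
        }
      }
    }
  }
}
```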

This segment of the original template creates the WordPress wp-config.php file with references to the RDS resource that is also generated by the template, installs the necessary packages on the server, and sets up the proper services to run as daemons. The first component of the template that I took issue with was that every install was going to have the same random keys. Fortunately, WordPress has a service that will generate these values for you. All you need to do is call https://api.wordpress.org/secret-key/1.1/salt/ and WordPress will return the necessary segments to add to your wp-config.php. The first problem to solve: how do I get these values into the middle of the file? Why the middle? I tried a simple append, but the WordPress install didn’t see them. After quite a bit of trial and error, I wrote a Perl script that is called from the command line. I did this because I didn’t want to leave my custom script as a file on the server.

Perl file mangling
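The gist of it is a one-liner along these lines; the #{{SALT_KEYS}} token is a stand-in for the token I actually used:

```bash
perl -i -pe 'BEGIN { $salts = qx(curl -s https://api.wordpress.org/secret-key/1.1/salt/) }
             s/^#\{\{SALT_KEYS\}\}$/$salts/' /var/www/html/wp-config.php
```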

This script looks for a token that I inserted into the wp-config.php file. This snippet is part of a larger section of the template. Since things have to happen in order, CloudFormation templates allow you to have sets of config options, run them in specific order, and then run commands in order. Here is the larger snippet.

configSets
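A trimmed sketch; the command bodies are illustrative, and command c is where the salt-mangling Perl from above goes:

```json
"AWS::CloudFormation::Init": {
  "configSets": {
    "default": [ "first", "second" ]
  },
  "second": {
    "commands": {
      "a": { "command": "mv /var/www/html/wordpress/* /var/www/html/" },
      "b": { "command": "chown -R apache:apache /var/www/html" },
      "c": { "command": "perl -i -pe '<salt mangling>' /var/www/html/wp-config.php" }
    }
  },
  "first": {
    "packages": { "yum": { "httpd": [], "php": [], "php-mysql": [] } },
    "sources": { "/var/www/html": "http://wordpress.org/latest.tar.gz" },
    "services": { "sysvinit": { "httpd": { "enabled": "true", "ensureRunning": "true" } } }
  }
}
```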

You’ll notice there are now first and second config sections. They appear backwards in the JSON; however, if you look at the configSets (documentation), the sections are called in the correct order. Commands are also called in alphabetical order, so I just named them a, b, c, and so on. The other commands I perform are moving WordPress out of a folder under the web root, setting the correct file and folder permissions, and, of course, mangling the wp-config.php file.

cfn-hup

The last item added to the template is cfn-hup. cfn-hup is an optional script that can run as a daemon and look for template updates that are pushed to the server. When changes occur, cfn-hup reads its respective config files and performs the prescribed actions defined within. I added some boilerplate code into the template from the CloudFormation docs.
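The boilerplate amounts to two file entries written by AWS::CloudFormation::Init; a sketch, again assuming a WebServer logical resource name:

```json
"files": {
  "/etc/cfn/cfn-hup.conf": {
    "content": { "Fn::Join": ["", [
      "[main]\n",
      "stack=", { "Ref": "AWS::StackId" }, "\n",
      "region=", { "Ref": "AWS::Region" }, "\n"
    ]]},
    "mode": "000400", "owner": "root", "group": "root"
  },
  "/etc/cfn/hooks.d/cfn-auto-reloader.conf": {
    "content": { "Fn::Join": ["", [
      "[cfn-auto-reloader-hook]\n",
      "triggers=post.update\n",
      "path=Resources.WebServer.Metadata.AWS::CloudFormation::Init\n",
      "action=/opt/aws/bin/cfn-init -s ", { "Ref": "AWS::StackName" },
      " -r WebServer --region ", { "Ref": "AWS::Region" }, "\n"
    ]]}
  }
}
```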

UserData Redux

Here is the updated Bash script that is run after install. Besides starting init, it also starts up cfn-hup. There’s also a full yum -y update in there; however, for testing, it’s commented out.
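A sketch of the rendered script; in the real template the stack name and region are spliced in with Refs:

```bash
#!/bin/bash -v
# update the AWS helper scripts first
yum update -y aws-cfn-bootstrap

# full system update, commented out while testing
# yum -y update

# process AWS::CloudFormation::Init
/opt/aws/bin/cfn-init -s <stack-name> -r WebServer --region <region>

# start cfn-hup so pushed template changes are picked up
/opt/aws/bin/cfn-hup
```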

Complete Template

You’re welcome to a copy of the template; it’s in a ZIP file below. I also started a GitHub repo with this template and any others I create in the future. Thanks!

Download Here

What catalog is that file in?

That’s what I often hear from my wife Meg, a professional photographer. Meg uses Adobe Photoshop Lightroom for cataloging the pictures she takes for her business as well as the pictures she takes of our family. An underlying frustration that I find with Lightroom is that photos become lost inside different catalogs on your hard drive, and Lightroom lacks the ability to search across catalogs. Some photographers believe the answer is to have a few monolithic catalogs. While these catalogs may contain everything in one searchable database, they become slow and unwieldy within a few short gigabytes of data. My answer is to have many small catalogs and harness the great search capability built into Mac OS: Spotlight. One small problem: Spotlight doesn’t inherently know how to index a Lightroom .lrcat file. In order to overcome this shortcoming, I wrote my own Spotlight plugin, something called an importer (see the Spotlight Importer Programming Guide).

SQLite

Lightroom catalog files are actually SQLite databases, and therefore extracting the necessary data was quite simple. I first used SQLite Manager in Firefox to examine the database. I came up with the following simple query to get at the filenames I would need to pass to Spotlight:
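It went something like this; the table and column names reflect the catalog schema as I found it and may differ between Lightroom versions:

```sql
SELECT rf.absolutePath || f.pathFromRoot || fi.idx_filename AS fullPath
FROM AgLibraryFile fi
JOIN AgLibraryFolder f ON fi.folder = f.id_local
JOIN AgLibraryRootFolder rf ON f.rootFolder = rf.id_local;
```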

The next step was to run this query from within Objective-C. After first trying the familiar OmniSQLite framework from the larger OmniGroup frameworks, I settled on a much smaller wrapper, FMDB.
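A minimal FMDB sketch of running the query; the catalog path is illustrative:

```objc
#import "FMDatabase.h"
#import "FMResultSet.h"

NSString *catalogPath = @"/Users/meg/Pictures/Lightroom/Catalog.lrcat"; // illustrative
FMDatabase *db = [FMDatabase databaseWithPath:catalogPath];
if ([db open]) {
    FMResultSet *rs = [db executeQuery:
        @"SELECT rf.absolutePath || f.pathFromRoot || fi.idx_filename AS fullPath "
         "FROM AgLibraryFile fi "
         "JOIN AgLibraryFolder f ON fi.folder = f.id_local "
         "JOIN AgLibraryRootFolder rf ON f.rootFolder = rf.id_local"];
    while ([rs next]) {
        // each path gets handed off to Spotlight as a catalog attribute
        NSLog(@"%@", [rs stringForColumn:@"fullPath"]);
    }
    [db close];
}
```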

Importer

It took several days before I wrapped my head around Spotlight and the necessary components of an importer. Probably the most difficult concept to understand was Uniform Type Identifiers (UTIs). Adobe does not declare a UTI for their catalog files, so I had to create my own in order for Spotlight to identify the files and know which importer to use to index Lightroom catalogs. I came up with com.adobe.lightroom.library. This UTI is in the Info.plist file of my importer under the Exported Type UTIs section.
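The declaration looks roughly like this in the Info.plist; the description string is illustrative:

```xml
<key>UTExportedTypeDeclarations</key>
<array>
    <dict>
        <key>UTTypeIdentifier</key>
        <string>com.adobe.lightroom.library</string>
        <key>UTTypeDescription</key>
        <string>Adobe Lightroom Catalog</string>
        <key>UTTypeConformsTo</key>
        <array>
            <string>public.data</string>
        </array>
        <key>UTTypeTagSpecification</key>
        <dict>
            <key>public.filename-extension</key>
            <array>
                <string>lrcat</string>
            </array>
        </dict>
    </dict>
</array>
```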

Packaging

Take note that the package I created for installing the importer places it in ~/Library/Spotlight. I chose the user’s home directory because the importer can be installed there without the need for a password. I also run this command after install:
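The bundle name is a stand-in for the importer’s actual file name:

```bash
mdimport -r ~/Library/Spotlight/LightroomImporter.mdimporter
```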

This causes Spotlight to index files already on disk. See here for more info.

Source Code

The code speaks for itself, and that is why the source is open and hosted at GitHub. I also created a nice GitHub project page for the code. You can find the page here and the source code here.

Screenshot

Here is the importer in use:

screenshot

Download

Download the installer package

The MultiMarkdown ToC

Do you ever feel like you’re never finished? I do, and I’m not happy with the fact that I haven’t been able to make Scrivener trigger the mmd-xslt script following a compile to MultiMarkdown HTML. Fine. As with the original XML epiphany, I knew there had to be code to perform an XSLT transformation in PHP on the server. Certainly there is, and my content-doc.php content template has further evolved into an XSLT transformation powerhouse. If a custom variable is provided with a path to the XSLT file on disk, and that file exists, the code transforms the XML using the provided XSLT stylesheet. Here is a snippet:
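A sketch of the relevant piece; xslt_path is a stand-in for the custom field name:

```php
// a sketch; 'xslt_path' stands in for the custom field name
$xslt_path = get_post_meta($post->ID, 'xslt_path', true);
if ($xslt_path && file_exists($xslt_path)) {
    $xsl = new DOMDocument();
    $xsl->load($xslt_path);

    $proc = new XSLTProcessor();
    $proc->importStylesheet($xsl);

    // transformToDoc() hands back a DOMDocument...
    $doc = $proc->transformToDoc(dom_import_simplexml($xml)->ownerDocument);

    // ...so import it back into $xml to keep the asXml() echo intact
    $xml = simplexml_import_dom($doc);
}
```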

Notes

There are a couple of things to take note of in the PHP code. transformToDoc() returns a DOMDocument object, while the $xml variable contains a SimpleXMLElement object returned by the simplexml_load_file() function. If you recall, we insert the XML into the page by echoing the proper child element using the asXml() method. Therefore, in order to leave that code intact, I import the DOMDocument back into $xml.

To snippet or not to snippet

My last post spoke of the need to use a snippet of XHTML in order to include it in a WordPress Page. This rule was short-lived. Why am I recanting my story? I switched to MultiMarkdown 3 and attempted to use its XSLT transformation capabilities, which require an entire XHTML file (well-formed XML). The process of attempting an XSLT transformation with MMD3 is not an easy one at first glance. This is largely because I have yet to see how Scrivener can trigger MMD3 to perform the task at the completion of a successful compile. In order for Scrivener to be aware of the presence of MMD3 (Scrivener ships with MMD 2), an install of the MMD 3 Support package is required. This package installs in /Library/Application Support/MultiMarkdown. Scrivener is inherently aware of the files in this location and makes use of them in lieu of the MMD 2 files contained within its app package. One small problem: according to the MMD 3 docs, the XHTML XSLT metadata tag has been removed from the spec.

In the absence of the XHTML XSLT: xhtml-toc-h2.xslt metadata tag, I had to run the mmd-xslt utility from the command line (run it from within its bin/ directory when you do). This script transforms the XHTML file into one that includes a ToC at the top. In my drive towards a successful transformation, I realized that if I had a well-formed XML document, then surely there were functions in PHP that would allow me to gain a reference to the BODY node and inject it and its inner XML into the WordPress Page output. Sure enough, the necessary functions exist. I did, however, go back to scratch with the template files. I created dupes of page.php and content-page.php (doc.php and content-doc.php, respectively) and modified content-doc.php thusly:
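A sketch of the modification, assuming the doc_path custom field from the previous post:

```php
<?php
// keep libxml errors out of the page output
libxml_use_internal_errors(true);

// doc_path is the custom field holding the full path to the compiled XHTML
$doc_path = get_post_meta($post->ID, 'doc_path', true);
$xml = simplexml_load_file($doc_path);

if ($xml !== false) {
    // grab the BODY node and inject it (and its inner XML) into the page;
    // depending on the XHTML's default namespace you may need
    // children('http://www.w3.org/1999/xhtml') to reach it
    echo $xml->body->asXml();
}
```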

Please take note of the use of libxml_use_internal_errors(true);. I’m keeping the page from showing errors in the output. IMHO, it’s a security issue if you expose internal paths to the outside world. Now, back to bending MMD 3 and Scrivener to match my particular requirements. Thanks.

Learning

I’ve been working on this app spec for weeks at work. In an effort to improve on what I’d accomplished, I reached out to my friend, and one of the smartest people I know, Peter Becan. I wanted him to teach me how to do it right. Peter’s been doing this sort of thing for a lot longer than I have, and has a particular knack for it. Learning to correctly prepare an application specification is harder than learning to program, IMHO.

As my spec writing progressed, I’d email Peter PDF output from Scrivener whenever I accumulated enough new content worthy of a dispatch. Marked, which I’m using to preview the MultiMarkdown markup in my Scrivener document, generates the formatted PDFs. I realized that as time progressed, keeping Peter in the loop would require a different method. In order to keep him informed, and not have to email him a new PDF every so often, I needed to make use of the web to post the formatted output.

Scrivener

Scrivener supports outputting HTML generated by the MultiMarkdown processor as a Compile For target. There are instructions on how to use MultiMarkdown with Scrivener here; however, the instructions appear out of date. I’m using Scrivener 2.2, and some of the preferences mentioned in the instructions have either moved or no longer exist. Luckily, by setting options using MultiMarkdown metadata and a simple change in the compile settings, I obtained the necessary output.

For posterity’s sake, what I did not find was MultiMarkdown Settings… under the File menu.

Two things are necessary to produce the desired output for inclusion in a WordPress page. First, as mentioned in the instructions, I had to enable the exporting of Titles for both Documents and Groups within Scrivener. You do this by checking the check box under Title in the Formatting options of the Compilation Settings sheet.

Second, I opted to use a Meta-Data file at the very top of my Scrivener doc to coerce the MultiMarkdown processor to produce the necessary output. The metadata fields that I use are:
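The exact values don’t matter much; Title and Base Header Level below are illustrative, and Format is the one that counts:

```
Title:  Application Specification
Base Header Level:  2
Format: snippet
```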

The key is the Format field. It instructs the MultiMarkdown processor to only create the HTML for the given markup and not an entire XHTML page. Clearly, if I am including the output in a WordPress page, only the HTML associated with the MultiMarkdown markup is necessary; the cruft associated with a well-formed XHTML page (i.e. HEAD, BODY, etc.) would be in the way. With all the correct metadata and settings in place, I use Compile… with a Compile For setting of MultiMarkdown -> HTML and save that in its own subfolder within my source tree.

Git

Git. Love it or hate it, it’s the linchpin in the operation. My Git repo resides on the same server as my WordPress installation. That scenario started me thinking about how I would get the HTML snippet residing within the repo into a place where I could serve just that content and not the entire source tree. I started Googling “update website with git”, and sure enough I found what I was looking for. After sifting through several top results, I found that this was the best answer.

I have an addendum to those instructions. For the remote path, I used a file:// path pointing to the path of the real repo on the server. Found that here.

The key to copying the HTML to a place I can include it from is using a post-receive hook in Git. Very simply put:

The post-receive hook runs after the entire process is completed and can be used to update other services or notify users. (taken from Pro Git)

However, if you follow the steps set out in the instructions, you’ll wind up with your entire repo in the web directory. While that may work in most cases, it was not ideal for what I was trying to do. My next pass at Google had me looking for ways to check out only a subset of the entire repo. The key to that is something called a sparse checkout. I used the steps outlined here to check out only the folder (and its contents) that contained the HTML snippet. One caveat about the sparse checkout instructions: you will need to include a ‘*’ at the end of the path. Otherwise you will receive this from Git:

error: Sparse checkout leaves no entry on working directory

For my “local” repo on the server, I picked a spot outside of the root of the WordPress installation to check the files out to.
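Putting it together, a sketch with illustrative paths; docs/html stands in for wherever the compiled snippet lives in the repo:

```bash
# one-time setup of the sparse "local" clone on the server
mkdir /home/user/spec-html && cd /home/user/spec-html
git init
git remote add origin file:///home/user/repos/spec.git
git config core.sparseCheckout true
echo 'docs/html/*' >> .git/info/sparse-checkout   # note the trailing *
git pull origin master
```

And the hook in the bare repo that refreshes it on every push:

```bash
#!/bin/sh
# hooks/post-receive
unset GIT_DIR
cd /home/user/spec-html && git pull origin master
```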

WordPress

This is the last step in the process. How do I insert a snippet of HTML that resides on disk into a WordPress page? Create your own page template and use a custom field. In the end, this was rather easy once I learned how to do it. This page at the WordPress Codex explains how to create the custom template and where to upload it on your server. Scroll down a bit until you get to the Using Custom Fields section. For this, I’m using a custom field called doc_path. Here is my custom template, named docs.php:
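A sketch of the template, trimmed of theme markup:

```php
<?php
/*
Template Name: Docs
*/
get_header();

// doc_path is the custom field holding the snippet's full path on disk
$doc_path = get_post_meta(get_the_ID(), 'doc_path', true);
if ($doc_path && file_exists($doc_path)) {
    readfile($doc_path);   // inject the compiled HTML snippet as-is
}

get_footer();
```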

This allows me to specify the full path to the HTML snippet on disk that I want included in the WordPress page.

Make sure, as I forgot this the first time I saved my new Page, to set the template for the Page to your custom template like this:

Final Output

Because the application I’m writing the spec for is proprietary, I can’t share the real fruits of my labor with you. However, what I did do is create another page containing a snippet of sample.html from the MultiMarkdown source at GitHub. My sample page is here. P.S. Yes, I am aware that the sample page has a broken image link.

I had a need to import some CAD drawings into my Visio document. The CAD drawings were provided to me as PDF documents. Visio has no native way to insert a PDF into a drawing. SnagIt to the rescue. Besides being an excellent app for making screenshots, it installs itself as a printer. Well, all I did was print my PDF to the SnagIt printer, saved the image as a TIFF, and then inserted the TIFF into my Visio drawing.

The resolution was quite good and I achieved exactly what I wanted. Gotta love it when shit works out!


I’ve been toying around with SQL Server CE replication. For whatever reason, my code was failing with the following exception when I called Synchronize():

Failure to connect to SQL Server with provided connection information. SQL Server does not exist, access is denied because the IIS user is not a valid user on the SQL Server, or the password is incorrect.

As it turns out, if you use the following form of the SqlCeReplication ctor (as observed using Reflector):

public SqlCeReplication(string internetUrl, string internetLogin, string internetPassword, string publisher, string publisherDatabase, string publication, string subscriber, string subscriberConnectionString)

then PublisherSecurityMode is set to SecurityType.NTAuthentication. Otherwise, if you use the parameterless ctor, PublisherSecurityMode is left at its default, which is SecurityType.DBAuthentication. This assignment is NOT documented.
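A sketch of the workaround: set everything through properties on the parameterless ctor so the security mode is explicit (the server, database, and credential values are placeholders):

```csharp
using System.Data.SqlServerCe;

var repl = new SqlCeReplication
{
    InternetUrl = "https://server/sqlce/sqlcesa35.dll",
    InternetLogin = "webUser",
    InternetPassword = "webPassword",
    Publisher = "PUBSERVER",
    PublisherDatabase = "PubDb",
    Publication = "MyPublication",
    Subscriber = "MyDevice",
    SubscriberConnectionString = @"Data Source=\mydb.sdf",

    // the parameterless ctor leaves this at DBAuthentication;
    // setting it explicitly avoids the surprise described above
    PublisherSecurityMode = SecurityType.DBAuthentication,
    PublisherLogin = "publisherLogin",
    PublisherPassword = "publisherPassword"
};

repl.Synchronize();
```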

I am working on a SQL Server 2005 Reporting Services (SSRS) report that has differing row colors based on a value in each data row. The color value is defined in the database. When I initially created the report, each row had a variable background color, but the foreground color was black. The first time I ran the report, my dark blue background didn’t contrast well with my black foreground. I quickly realized that I needed a way to vary the foreground color programmatically based on the background color. After first discussing things over with Nate, here is the expression I came up with for the Color property of the table row:
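A reconstruction along these lines; RowColor stands in for the actual field name, and the threshold of 125 comes from the contrast formula:

```vb
=IIf(
    (
        (CLng(Fields!RowColor.Value) And 255) * 299
      + ((CLng(Fields!RowColor.Value) \ 256) And 255) * 587
      + ((CLng(Fields!RowColor.Value) \ 65536) And 255) * 114
    ) / 1000 < 125,
    "White",
    "Black"
)
```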

Let me explain where this all comes from. First off, the color that is stored in the database is used by a VB6 program. VB6 stores colors as BGR, and .NET stores colors as RGB (well, technically ARGB). The first step is to break down the value from the database into its constituent parts (red, green, and blue) using bitshift operations I learned from Keith Peters, and then apply the contrast formula I found on Colin Lieberman’s website. I then determine that if the background is a dark color, we use white, and for a light background, black. This appears to be working like a charm.