Monthly Archives: November 2006

Code Signing: two worlds defined

I’ve always been a fan of code signing. There’s “signing” to give the assembly a strong name, and there’s “signing” to verify the application hasn’t been tampered with. (It irks me that they’re named the same.) The difference between the two finally gelled in my head. For my own reference, and for the benefit of others, here is a description of each:

Strong Name

Reason: This is necessary to install assemblies into the GAC and to include a library in a signed project. It is specific to managed code.
Benefits: This ensures the library doesn’t conflict with other DLLs. It says nothing about the origin of the file.

Technique (Visual Studio 2005):
– In the project’s properties, click on the Signing tab
– Check “Sign the assembly”
– In “Choose a strong name key file” choose New to generate a .snk file
– For other projects using the same .snk file, choose Browse in this dialog, and find the .snk file created previously

I like to use the same .snk file for all projects in a solution. When choosing Browse, it copies the .snk file. I additionally do this:
– Close the solution
– Move the .snk file to a central location, and delete the copies
– Open the project files in my text editor of choice
– Locate the .snk file reference, and modify the path accordingly
– Open the solution and rebuild

Technique (Visual Studio 2003):
– Use sn.exe from a Visual Studio Command Prompt to generate a .snk file ( or use VS 2005 :) )
– In the Assembly Info file in each project, add the path to the .snk file
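Concretely, the VS 2003 setup amounts to something like this sketch (the key file name and the number of `..\` hops are placeholders — see the note below about paths resolving from the build directory):

```csharp
// In AssemblyInfo.cs (.NET 1.1 / VS 2003). Key file name and path are illustrative.
// First, generate the key pair once from a Visual Studio Command Prompt:
//   sn -k MyKey.snk
// Note: in VS 2003 the path resolves from the build output directory (e.g. bin\Debug).
[assembly: System.Reflection.AssemblyKeyFile(@"..\..\..\..\MyKey.snk")]
```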

This .snk file should be kept secret. You can create a public key file suitable for distribution using sn.exe.
– If this is an absolute path into C:\Documents and Settings\User\My Documents\Visual Studio 2005\Projects… and you check this file into source control, your team won’t like you. Please use relative paths.
– VS 2003 only: the compiled code is signed from inside the build directory (e.g. /bin/Debug), so relative references need to start there. I often found I’d have a bunch of “..\..\..\” in my key paths.
– VS 2005 only: the .snk file is referenced from the project directory, so relative paths need to start there.
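For reference, the project-file edit for the VS 2005 central-key trick looks something like this (a sketch — the key path is a placeholder, relative to the project directory):

```xml
<!-- Excerpt from a VS 2005 .csproj file; the key path is illustrative. -->
<PropertyGroup>
  <SignAssembly>true</SignAssembly>
  <AssemblyOriginatorKeyFile>..\..\Keys\Solution.snk</AssemblyOriginatorKeyFile>
</PropertyGroup>
```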

Using a strong name in .NET
Demanding a strong name in a library

Verify Origin

Reason: This is necessary for Windows Mobile 5, and good practice. It is not part of managed code, but rather a part of Win32.
Benefits: This attests that the binary file hasn’t been changed since it was built, and that the builder can be trusted. It uses a chain of trust up to a central trusted root certificate to ensure the author is indeed who they say they are.

For production use, you should buy a certificate from Verisign or another trusted provider. Ramon has suggested a free provider; I haven’t reached the same level of trust he has, though, so I can’t comment on using it. For testing, you can create a “self-signed certificate”. It doesn’t tie to a trusted root certificate, but it can be installed on a client machine as a trusted source.

– From a Visual Studio Command Prompt, do this once:
makecert -r -sv app.self.pvk -n "CN=AppCert" -b 01/01/2000 -e 12/31/2050 app.self.cer
pvk2pfx.exe -pvk app.self.pvk -spc app.self.cer -pfx app.self.pfx
Substitute “app.self” and “AppCert” in both commands for whatever descriptive info you’d like.
Modify the beginning (-b) and ending (-e) dates as necessary.
Use MSDN or makecert -! for other parameters.

– After building the project each time (and signing for strong naming), run this command:
C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\signtool.exe sign /f ../../../path/to/app.self.pfx /d "Descriptive App" app.exe
Substitute “app.self” for the actual .pfx file, and the other info with info relevant to your app and your organization. (The final argument is the file being signed; the optional /du flag takes a URL with more information.)
I like including this as a post-build event on the solution.
An NAnt task would also be a great place to put this command.
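As a sketch of that NAnt option (every path, name, and description here is a placeholder, not from a real build file), an <exec> task could wrap the signtool call:

```xml
<!-- Sketch of an NAnt target that Authenticode-signs the build output.
     The target name, paths, and description are all illustrative. -->
<target name="sign" depends="build">
  <exec program="C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\signtool.exe">
    <arg value="sign" />
    <arg value="/f" />
    <arg value="..\keys\app.self.pfx" />
    <arg value="/d" />
    <arg value="Descriptive App" />
    <arg value="bin\Release\app.exe" />
  </exec>
</target>
```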

These .pvk, .cer, and .pfx files should be kept secret. Handing them out defeats the purpose.
– Post-build events are run from the build directory (e.g. /bin/Debug), so relative paths have to start there.

Sign your redistributables quick and easy
Signing assemblies for Windows Mobile 5


With both of these in place, you can look at an exe or dll and know that:
– It is uniquely named
– It is unaltered since it was built and the builder can be trusted

The other point of note: .snk sign first, certificate sign second. The .snk is .NET-specific; the certificate signature covers more than .NET.


NAnt intellisense in Visual Studio

NAnt is a phenomenally cool tool for automating build scripts. Coding NAnt build files feels natural to me: unix shell script style functions and xml style syntax. However, getting all the parameters just so is a bit intense. Visual Studio can provide intellisense for NAnt, if it has an xsd file to use for reference. NAnt comes with an nant.xsd, and some pretty good docs for installing it. This is great for NAnt tasks, but doesn’t handle NAntContrib’s features.

Clint describes a technique here and here for using NAnt to build the nant.xsd file. He was using an older version of NAnt though, so I needed to modify Clint’s code slightly. While I was in there, I grabbed other dlls in the nant bin folder too.

Here is a zip file with a .build file, a .reg file, and the resulting nant.xsd it builds. The build file builds the nant.xsd, and copies nant.xsd into the correct spot in Visual Studio. The registry key tells Visual Studio it’s there. The build file assumes you’re using the NAnt v. 0.85 release. If you’re not, grab the xmlns reference from the top of the original nant.xsd provided in the NAnt install, and copy it into the build file.

To get intellisense in Visual Studio, run the build file, and load the registry key. Restart Visual Studio, and you’re golden.

Clint and others have discussed creating a Visual Studio template file, so you could add a new file of type “NAnt Build File” from within Visual Studio. Since I’m usually copying a build file from another project as a starting point, I haven’t found the need to sit down long enough to figure this out.

Rob

Feedback spam

Ok, I don’t know who the wise guy is who keeps trying to give me feedback spam about [use your imagination here], but I’m really not thrilled to delete as much feedback spam as I get.  A new version of Subtext should cure it.  If not, I’ll have to get drastic.  You have been warned…


NAnt is my new best friend. NAnt is very easy to learn, very straightforward to use, and very powerful in its execution.

NAnt is an automated build tool. Why would you deviate from Visual Studio’s Build menu? Here’s why. .config files. I’ve got a dev config file, I’ve got a production config file, various customers like their own pre-packaged config files, and I like to be able to debug live data issues with their config files as well.

Well, I want that config file baked into the .msi installer I deliver to them. That’s all fine and good, but inserting a step between the end of the .exe project’s build and the beginning of the .vdproj’s msi build is just awful. From time to time, I’d create an empty project that depended on the .exe project, and had a pre-build .bat file that it ran to swap in the config file. This was awkward at best.

Since jumping into .NET, I’ve migrated from just using Visual Studio’s F5 to build things to having a batch file that builds a release version. It just calls devenv.exe, passing in the solution and configuration names, and directing output to a log file. The glaring problem though is that it’s difficult to stop the process mid-stream if things go wrong. Embedding the correct config file is also quite an ordeal.

Enter NAnt. NAnt is to .NET as Ant is to Java. (And NAnt is to MSBuild as Macintosh is to Windows, or Palm is to Windows CE, or Netscape is to IE — Microsoft knows a great idea when it sees it, and then promptly steals it.)

To install it, do this:
  • Download NAnt and NAntContrib
    • NAntContrib is extra functions and tasks that aren’t part of the NAnt core, such as SVN, MSBuild, Zipping, etc.
  • Unzip them both
  • Copy everything from NAntContrib’s folders into the corresponding NAnt directories.
  • Put the new package somewhere permanent (like C:\Program Files)
  • Put NAnt’s bin folder in your Path environment variable
It isn’t absolutely necessary to put NAnt’s bin directory in your path, but it’ll make it MUCH easier to work with. With nant.exe in your path, you can just type “nant” rather than “C:\Program Files\NAnt\bin\nant.exe”.

To add NAnt’s bin directory to your path do this:
  • Right-click on My Computer
  • Choose Properties
  • Choose Advanced
  • Push Environment Variables
  • Locate Path in the bottom (system) list
  • Click edit
  • This is a semi-colon delimited list of folders (Yes, one of the stupidest delimiters in the world)
  • Add ; and NAnt’s bin folder’s path to the end
  • Push OK a dozen times
  • No restart is necessary
  • You’ll need to close and reopen any command prompts to see the effect.
From time to time, I’ll copy the whole thing into my text editor of choice, use find-n-replace to change ; into new line, and browse.

Now that you’ve got it installed, time for a little build action. The syntax is incredibly elegant. The NAnt data file is an xml file with a .build extension. The root level element is <project>. Various targets (functions), and properties (variables) are in it. A target contains various tasks to do. Tasks can call other functions, set properties, or do something. Tasks can also require a separate task to have run first. When you call nant.exe, you pass in the build file (or it uses the one in the current directory), and you specify the targets to execute. For example:
nant debug
will run the “debug” task in the .build file in the current directory.
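A minimal .build file showing those pieces might look like this (the project, property, and target names are made up for illustration):

```xml
<?xml version="1.0"?>
<!-- Minimal illustrative NAnt build file; all names here are placeholders. -->
<project name="HelloApp" default="debug">
  <property name="config" value="Debug" />
  <target name="init">
    <echo message="Preparing the ${config} build..." />
  </target>
  <!-- "depends" makes NAnt run init before debug. -->
  <target name="debug" depends="init">
    <echo message="Building ${config}..." />
  </target>
</project>
```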

The NAnt zip file has some great HelloWorld .build files to cruise through. Great bedtime reading. Go read it, go experiment. I’ll wait. You done? Good.

Ok, now from the simple to the complex. The standard NAnt way of building .net code is either a task to call csc (i.e. turn your project file into an NAnt file, and synchronize them back and forth — yeah, I don’t like that either), or the <solution> task, passing in the .sln and configuration. Well, <solution> doesn’t work with Visual Studio 2005. Alas, there’s an <msbuild> task in NAntContrib. Pass <msbuild> the .sln filename, the configuration, and we’re good. Well, almost.

First, the good. Starting in Visual Studio 2005, .csproj files are actually msbuild files. MSBuild is for all intents and purposes MSAnt. MSBuild knows how to build a project or a solution created from Visual Studio. No need to mess with NAnt on the project level.

Now the not-so good news. MSBuild doesn’t support .vdproj projects — “Setup and Deployment Projects”. Microsoft’s official answer? call devenv from the command line. The down side? Now you need Visual Studio installed on the build machine. Ok, I can live with that.

At this step in the process, I did 1000 iterations of various things. Here’s the design I finally settled on:
  • A property for the config file to use
  • Set the default config file up front, outside any targets
  • A target per client, changing the config file property
  • A clean target that cleans out the /bin/ and /obj/ folders
  • A target that given a solution file and a configuration, builds it
  • A target per configuration: e.g. 1 for Debug, 1 for Release, etc.
  • A target that
    • calls the clean target
    • calls the “build a solution” target once for each solution in the project.
    • copies the final built pieces to a “build contents” directory
    • copies the config file into the “build contents” directory
  • A target that exec’s devenv, passing in the .vdproj responsible for building the installer
    • I tweaked this .vdproj to get its contents from the “build contents” directory and to remove it from any of the .sln files
Download a sample of this strategy (and accompanying source) here.
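In outline, the skeleton of that design looks something like this sketch. Every name, path, and value below is illustrative (not taken from the downloadable sample), and I believe the NAntContrib <msbuild> task takes its configuration as a nested <property>:

```xml
<!-- Skeleton of the strategy above; all names and paths are placeholders. -->
<project name="Product" default="Msi">
  <!-- Default config file, set outside any target. -->
  <property name="config.file" value="config\default.config" />
  <property name="Configuration" value="Debug" />

  <!-- One target per client: swaps in that client's config file. -->
  <target name="Microsoft">
    <property name="config.file" value="config\microsoft.config" />
  </target>

  <!-- One target per configuration. -->
  <target name="Release">
    <property name="Configuration" value="Release" />
  </target>

  <!-- Clean out old build output for every project under this tree. -->
  <target name="clean">
    <delete>
      <fileset>
        <include name="**/bin/${Configuration}/**" />
        <include name="**/obj/${Configuration}/**" />
      </fileset>
    </delete>
  </target>

  <!-- Build the solution(s), then stage output and the chosen config file. -->
  <target name="build" depends="clean">
    <msbuild project="Product.sln">
      <property name="Configuration" value="${Configuration}" />
    </msbuild>
    <copy todir="BuildContents" file="${config.file}" />
  </target>

  <!-- Exec devenv to build the installer; never runs if "build" fails. -->
  <target name="Msi" depends="build">
    <exec program="devenv.exe">
      <arg value="Installer\Installer.vdproj" />
      <arg value="/build" />
      <arg value="${Configuration}" />
    </exec>
  </target>
</project>
```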

Now to build my entire software product, from a command line, I type:
nant -l:logfile Client Configuration Msi
For example:
nant -l:build.log Microsoft Release Msi
will build the release version of my product, and include the Microsoft config file, and write the build status to the file build.log. Most excellent.

And should anything go wrong with the actual software build, that task will fail, and the Msi will never get attempted. Righteous.

A couple tricks I learned along the way:

Joshua has a nice blog post about building msi’s through config files by exec-ing devenv. (Search the page for MSI.) William Caputo has an extremely excellent post on how to resolve the path to Visual Studio. Yes, I could assume it’s in C:\Program Files, but this is hardly robust. His technique is incredible. Both techniques are in the sample code.

“**” is the wildcard pattern for “in any directory from here down, recursively”. That’s very nice for cleaning out old build data. name="**/bin/${Configuration}/**" and name="**/obj/${Configuration}/**" and we’ve successfully targeted all built data for all projects within this solution (assuming we started in the root of the directory tree).

Intellisense for .build files. This is an incredibly cool topic for a subsequent blog. I found the xml syntax incredibly easy though, and syntax highlighting was just gravy once I got it to work.



High on my list of stuff I need to do is NUnit. Last I tried NUnit, it was wonderful and infuriating. I could run tests or debug code, but not both. My, how times have changed. I must conclude NUnit rocks!

NUnit is for making automated tests of .NET code. Each test is a piece of C# code with a special [Test] attribute. There are 1000 quick start tutorials out there, so I won’t bore you. Do a Google search, and you’re good. And the NUnit quick start docs are also quite good.

Here’s the secret sauce though: TestDriven.NET, a plug-in for Visual Studio 2005. Sadly, it’s closed source. However, it is free, and well worth it. TestDriven.NET provides a context menu in Visual Studio. (See screen shots here.) Right-click on the NUnit project in the solution window, and choose Run Test(s). Or choose a file in the project to only run those tests. I spent quite a while trying to get it to auto-load in Visual Studio before I found this note on TestDriven’s site saying it wasn’t needed, that TestDriven would load things when they were run. (It makes it a bit more difficult to disable the plug-in without uninstalling though.)

Now here’s where it gets really good. One of the options on the menu is “Test With -> Debugger”. I absolutely despised debugging NUnit tests before. I’d launch the GUI, run the test, get a red light, then wonder why. Then I’d have to build some stupid console or windows app to run the NUnit test project’s dll, and step through it. Or I’d insert a bunch of alert("not this") or MessageBox.Show("here") code, then remember to rip it back out when I found the problem. (And put it back in when that wasn’t it.) Not fun. Alas, that is no more. I set my break points, choose “Test With” -> “Debugger”, and step into the code. Very nice…

Ok, I’m a bit annoyed that I can’t do fix-and-resume, but I understand why. If the process running it was vshost, yeah, it’d work fine. It’s probably run by nunit-console though.

TestDriven.NET installs NUnit; NCover, a code coverage tool (e.g. which portions of my code did it test, which portions are not tested); and MSBee, a tool for using MSBuild with .NET 1.1.

I found this post as I was constructing the blog post. It details how to run NUnit tests on private members using reflection. (Yeah, you could do this in regular code too, but please don’t.)

Another note: the Model-View-Presenter pattern works really nicely with this train of thought, because it separates all the “do it” logic from the “show it” logic within the GUI layer. Thus, I create an NUnit View, and I’ve got great unit testing of my interface. Very smooth.


SSH on Windows

Apologies up front to those who will flame me for this, but I must concede. I’ve been trying to find a good excuse to get into Linux for a while. I like the idea of open-source software running my organization. I like the idea of creating a test environment or another machine simply by downloading and installing. I can only imagine the headaches and dollars saved by moving to Linux. However, I have yet to find a reason to do so.

My latest example is a good one. I want the ability to ssh. As a mechanism for delivering bug fixes, code updates, and all the other goodies to users, ssh is much safer than ftp. The command line syntax is much easier, using 1 port instead of 2 is awesome. Using pre-shared public keys based on a private key is phenomenal. The whole concept is cool.

My choices are now:
– Pick a distro (via dart board selection), install it, futz with it for eons, debug some ./configure script gone awry (because I didn’t pick the right distro), finally get it stable, look at configuring ssh, futz with it for another eon, get it running, use it for a few weeks, decide to tweak something, destroy it, try to fix it, guess a different dart board choice would be better, reformat, repeat.
– Install ssh on Windows

I found copssh. It installs the minimum cygwin needed, openssh (a maintained version), and some cool tools. I had it up and running, authenticating against pre-shared keys, and doing the do in a half hour or so. It was easy. (It’s incredibly cool to have a bash shell on a windows machine too.)
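For reference, the server-side settings for that key-only setup boil down to a few sshd_config lines like these (a sketch; copssh’s shipped defaults may differ, so check yours):

```
# sshd_config excerpt: allow public-key auth, disable password logins.
# (Illustrative; verify against your installation's defaults.)
PubkeyAuthentication yes
PasswordAuthentication no
AuthorizedKeysFile .ssh/authorized_keys
```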

Then WinSCP enters. It’s a Windows Explorer-like view to an ssh destination. It includes entries in the Send-to context menu for Putty saved sessions. For those that don’t like the command line, it’s just an awesome tool.

Now were I to do that on Linux, do I need to modify sshd_config? Or do I go into /etc/rc2.d and create an S90something script? How do I ensure the service starts on reboot (aside from rebooting once)? Apache is gorgeous for what it does. But adding a new virtual directory to IIS is just so easy.

Ok, granted, I like the argument of SLED (Suse Linux Enterprise Desktop) instead of Vista, but I still can’t see my grandma using Linux desktop as easily as Windows. I also can’t see the non-technical user running a Linux desktop without some outside assistance from a Linux guru. And which distro would they pick? (without the marketing) And KDE or Gnome? (Granted, not the only two to choose from, but it’s the first question during the install, and one most users wouldn’t know what to do with.)

And sounds incredibly cool. Anyone with a credo like “To give windows powerusers – turned linux newbies – a place to keep up to date on the latest happenings in linux software and to discuss their problems, adventures, and accomplishments.” has got to be cool.

And the whole Microsoft / Novell deal just sounds fishy. Granted, incredibly cool if Microsoft now backs Mono, Samba, Linux Desktop, etc, etc. But … jumping into bed with the second best enemy just seems like the most effective way to ensure both #1 (Red Hat) and #2 (who’s ever heard of Novell after NetWare?) lose.

But I can’t in good conscience try to convince a client that going with the penguin is easier. I just can’t see that.