Visual Studio Online builds: getting Azure PowerShell to fail the build

Visual Studio Online has many build tasks: tasks for building .NET solutions, tasks for building Android, tasks for running bash scripts, tasks for running batch files.  If you want to run in the context of your Azure subscription, you use the Azure PowerShell task.  I use this task to do some funny footwork in deploying successful results to an Azure Web App.

The problem:

# $e holds the exit code harvested earlier in the script,
# e.g. $e = $LASTEXITCODE after running the deployment exe
Write-Host "Exit code: $e"
exit $e
So I’m cruising along, my exe returns a non-zero status code to mean “please fail the build”, and Visual Studio Online happily succeeds.  I’m clearly returning 1 from my Azure PowerShell task, but it’s not getting the message. A few hours of pulling out grey hairs, and I’m given a great gift from Chris Patterson.  (Scott Hunter: your team rocks.)  Turns out the Azure PowerShell task isn’t run from powershell.exe, so it doesn’t harvest exit codes.  The solution was simple once he handed it to me.

The Solution:

Write-Host "Exit code: $e"
if ($e -ne 0) {
    Write-Error -Message "##[error]BUILD FAILED: $e"
}
exit $e
By using Write-Error, the build correctly fails when it needs to, and life is good.  Chris mentions that he’d like to see it work differently, but it didn’t make RTM.  If you’d like this idea to come in-the-box, vote up my Pull Request … because the task is open-source.  Awesome!

Upgrade Android on a Rooted Device

When Android says “System update available”, we all gleefully run it. Those of us with rooted phones get partway through it, come to an android lying on its back, and get a sinking feeling in the pit of our stomach. “Did I just brick my phone?” No, but we did break some of the assumptions the Android updater makes. It presumes we’re running the stock bootloader, which we aren’t. Here are instructions for getting the bootloader back to stock, getting Android updated using the normal means, and getting re-rooted. This guide assumes your phone is unlocked. (Unlocking your phone will wipe it. Backup before unlocking. Yes, really.)

I. Get ready

  1. Charge the phone. If the phone dies while you’re flashing, you’ve bricked the phone. It’s not permanent, but it’s harder.
  2. Install the Android SDK tools. I downloaded mine some time ago from a source I don’t remember. Minimal ADB and Fastboot is sufficient. The goal is that you have `adb` and `fastboot` command-line commands available.
  3. Get phone drivers. You need drivers both for the regular boot mode and for the recovery console. Universal ADB Driver worked nicely for me.
  4. Turn on developer mode: Go to Settings > About phone and hit the build number 7 times.
  5. Turn on USB Debugging on the phone: Go to Settings > Developer Options > USB Debugging
  6. Ensure you have drivers installed. With your phone plugged into the PC via USB, open a command prompt, and run these commands:
    • `adb devices`: make sure your device is listed as `device` (not `offline`). Click “yes allow USB debugging” on the phone, and run it again until it says `device`. If it doesn’t, check Device Manager, try a different driver, or check the USB connection.
    • `adb reboot bootloader`: reboot the phone into bootloader so you can check the fastboot driver.
    • `fastboot devices`: ensure your phone is listed as `fastboot`.
  7. Backup everything. Double-check everything. What if your phone never booted again? What would you lose? It’s your foot…

II. Put bootloader back to stock

  1. Download factory image for current version of Android: Settings > About Phone shows you your phone’s current version.
  2. Unzip `recovery.img`, `boot.img`, and `system.img`.
  3. Run these commands from a console / terminal in the directory with the unzipped content and the adb/fastboot commands. (Run the lines that start with `>` without actually typing the `>`):
  4. # Make sure the device is recognized
    > adb devices
    # Boot into bootloader mode
    > adb reboot bootloader
    # In bootloader mode, make again sure the device is recognized
    > fastboot devices
    # Flash stock LRX21O recovery image
    > fastboot flash recovery recovery.img
    # Flash stock LRX21O system image
    > fastboot flash system system.img
    # Flash stock LRX21O boot image
    > fastboot flash boot boot.img
  5. Reboot your phone, and you’re back to the stock bootloader.
A few things of note: You’ve lost root. All the things that depend on root are now going haywire. That’s ok. We’re not done.

III. Run Android Update

  1. Incessantly check for updates until Android notices. Settings > About Phone > System updates. Sadly, it caches the answer for a good long time, so this may be a “go to bed, try again tomorrow” kinda thing. Especially if you just tried to update and got the android lying on its back.
  2. Run the update. Reboot.
  3. Reboot a few times. Run a few apps. Check for more Android updates (Settings > About Phone > System updates). Run these updates too. Make sure the new Android version settled in.

IV. Root it again

  1. Download and unzip CF Auto Root from
  2. Re-enable USB Debugging if it got turned off during the update.
  3. Get into fastboot mode: `adb reboot bootloader`
  4. Run `root-windows.bat` (or the appropriate root for your OS). That gets you root again.
  5. Reboot. It will take a long time to boot. No, it didn’t freeze (unless it’s been hours). Feel free to panic, but don’t hard-boot it. If you do brick the phone, start over with `II: Put bootloader back to stock` above. No, you didn’t break it.
  6. Once your phone is booted again, update and run SuperSU, Titanium Backup, Android Firewall, BusyBox, and all your other rooted tools.
You’re done. Welcome to the next version of Android.


What have I done recently? I’ve been focusing on regional and national speaking opportunities and community outreach. I taught Git in Las Vegas in January and in San Francisco in February, I compared Amazon and Azure in San Francisco in February, and in March I’ll share What’s New in Visual Studio 2013 in Pasadena; the highlight is that I’m currently speaking at FluentConf. This has been an awesome opportunity to focus on bringing content to users where they are. I’m really glad to be able to share and learn together with you. Looking for slides for these talks? Grab the slides. Want me to speak at your user group or conference? Request me through INETA or drop me a line.

Azure Websites Git Deploy Fail

Here’s an interesting fail.  I’m at the Windows Azure Developer Camp practicing some Git Deploy (total win!), and I hit a very interesting snag.  I’m deploying a legacy .NET 3.5 app, and “git push azure master” failed with an HTTP 500.  No other message.  Enter head-scratching mode.  Is it that it’s using a Windows Azure SQL Database?  Did I typo the connection string?  Did I put an unusual character in the site name?  Did I typo the deployment credentials?  Is my machine controlled by aliens?

A few rounds of thrashing ensued where I reset deployment credentials, validated the git remote url, typed credentials painstakingly carefully, and still the same result: “HTTP 500.”  Next round: I deleted and recreated the Azure Website (simpler name), switched to .NET 3.5 in the Azure Portal, ran git deploy, and failed again.  Crap.  Well, rule out a typo or bad character. On a whim I switched it to .NET 4.5 in the portal, and git deploy succeeded.  (Of course my site is now toast because it isn’t a .NET 4.x site, but it’s now deployed.  Switch back to .NET 3.5 and the site springs to life though.)  Ok, this is curious.  I flipped another dozen switches before I found the culprit: git deploy fails when the site is set to 3.5 in the Azure Portal.  This is quite repeatable.  Changing nothing else, I set it to 4.5, git push azure master succeeds.  I set it to 3.5, git push azure master fails with a 500.

I know nothing of the internals here, but wondering aloud with Michael Palermo, I’m guessing perhaps Kudu‘s git mechanism links against libgit2’s .NET bindings, and that exe is a 4.x app.  Perhaps when IIS is set to 3.5, the deployment also runs in 2.x/3.x mode, and this 4.x deployment app fails. Ultimately, I got past the problem, and were I to make a serious move here, I’d just recompile in .NET 4.5.1 and be done.  And more than likely, if I’ve embraced the cloud to the point that I’d git deploy into production, I’ve also embraced .NET 4.5.

But for that interesting edge case where I am git deploying .NET 3.5, should it fail in such an obscure and undiscoverable way?  That’s where the real head-scratching begins.  :D

A Dozen JavaScript Libraries

It was my pleasure to present A Dozen JavaScript Libraries at vNext Phoenix this evening.  I think we had a record crowd!  It was great seeing old friends and meeting new friends. Presenting a dozen libraries in an hour and a half is quite a challenge, and we ended up rushing through the last few.  If you missed it, you can view the slides or come see me present it again at Desert Code Camp.  How did it go?  The reviews are starting to come in:
Just spent a couple of hours this evening listening to Rob Richardson give an awesome presentation on A Dozen JavaScript Libraries.  – Guy Ellis
Just got back from a great meetup focusing on 12 JavaScript libraries, even had some live coding (and debugging of course) in there. – Jack Ketcham (@_jket)
@rob_rich enjoyed listening to your talk this evening, thanks! – Jack Ketcham (@_jket)
I attended your vnext presentation today. Thanks very much for putting that together. I learned some new things. Hope to see you at another meetup soon! – Rebecca (by email)
It sounds like the audience had as much fun as the presenter.  Thank you to Interface for hosting, and to Dave Campbell for leading this great group.

Git Training

Wow, that was fun.  I had the pleasure of training 174 people (42 in person, 132 virtually) in an 8-hour session on Git.  Thanks to GoDaddy for sponsoring the event.  It was incredibly fun to run with the audience from “How do I install it?” at the beginning of the day to “So that’s how you rebase” at the end.  And tribute to the attendee who pointed out the similarities between me and Qui-Gon in the image to the left. Here are some of the things we spoke about:
  • “Thinking in Git” is a presentation I love giving where our focus is taking the knowledge you’ve gained in TFS or SVN and transferring it into Git.
  • We speak of who uses Git.  Ultimately I love to reference the open-source web stack from Microsoft on CodePlex.  When you click download, you’re greeted not with a TFS link or even an SVN link; you’re asked to “clone or fork in git.”  If anyone needed a nudge that Git is the new normal, that sounds like it.
  • We speak of the beauty of distributed version control systems, overcoming the “subversion dilemma”: “do I commit now and inflict incomplete code on all of you or do I not commit and lose the benefit of source control.”  Thank you to Eric Link for the excellent metaphor.
  • We walk through installing all the Git Tools for Windows.  It was fun to reference the new kid on the block: SourceTree, a free Git GUI by the makers of JIRA.  (It still fascinates me that KDiff3’s installer pops up /behind/ Git Extension‘s installer.)  The cream of the crop is how Git puts a bash shell on my Windows box.  I’ve mis-used that countless times for running shell scripts on Windows.
  • We walk through the staging area, commit, checkout, branching, merging, push, and pull, and reference the SVN and TFS similarities.  All attendees can now recite by heart the great long command to rule them all:

git log --oneline --graph --decorate

  • After lunch, we walked through “Git, GitHub, and GitFlow”, a great presentation where we review what Git gave us “in the box,” and what new things we can do should we choose to also add GitHub and/or GitFlow.
  • A trip through git aliases eased some people’s fingers as they typed “git log --oneline --graph --decorate” for the 279th time.  :D
  • A great question was how to version database content, so I got to demo both SQL Source Control from Red Gate Software and the built-in Database Projects.  (spoiler alert: the workflow in SQL Source Control is far and away worth the cost of admission.  I’m sorry Database Projects, but the master copy is the database, not the text files.)  Though neither tool is great at dumping SSIS Packages to text files, normal database objects like tables, stored procedures, and views work just fine.
  • Other insightful questions led us on journeys of transferring repositories from TFS and SVN to GitHub Enterprise, or transferring from one Git host to another.
  • We got to experience a standard open-source contribution model as we all forked a repository I created for the purpose, cloned it to our machines, changed it in interesting ways, pushed, and created pull requests.  The hilarity ensued as we watched me demo merging interesting situations.
  • We talked a lot about how to reorder, refine, adjust, and route around commits using “git rebase” and “git cherry-pick”.  (That’s when our minds really started spinning.)
  • The final picture from Labyrinth referencing “initial git training” was priceless.
Ultimately, it was a day well spent for all.  Thank you to all who attended and made it a fantastic day.  Want me to do a training at your user group or wish to sponsor a corporate training for your company?  Drop me a line. Rob

Welcome to Node

It’s my pleasure to present “Welcome to Node” to the dotNet user group in Las Vegas on August 29, 2013. Ideally this presentation is equally applicable to any developer approaching Node, though since this audience is coming from .NET, we’ll also discuss similarities and differences between .NET and Node. You can view the slides and download the code that accompany the presentation.

What is Node?

Node is definitely the new, cool, shiny toy, but what sets Node apart from other frameworks and languages? Isn’t this just the next generation’s Ruby? Node is a JavaScript runtime built atop Google V8. It runs on Windows and *nix. It is a blazing fast engine for I/O bound tasks. It is incredibly cool.

What to do with Node?

Node apps tend to fall into one of two categories:
  1. Web-based apps
  2. Command-line tools
Because Node can easily run JavaScript files, it becomes trivial to automate simple tasks by calling node.exe. In many instances, a command-line node utility is much simpler and more concise than a comparable batch file. Once we get into network throughput, Node excels at serving static and dynamic web content and sockets, and even natively supports web sockets. In fact, much of Node’s initial popularity is due to the web socket library.

A single language for client and server

One of the panacea ideas we’ve had in web development is a single language for both in-browser, client-side content and server-side, back-end content. Right now we’re polyglot programmers, learning HTML, CSS, and JavaScript for the client side, SQL (or NoSQL) for data storage, and PHP, Ruby, Java, .NET, or any of a plethora of other technologies for the server-side code. One of the main purposes of Silverlight was to create a single language for in-browser and server-side code to simplify the development experience. Sadly, the days of browser plugins are over, and Silverlight has fallen victim to the trend.

This also is a wonderful simplification of the web though: no longer are we burdened by plugins. The browser renders only three technologies. All content is in HTML, whether created on-the-fly or at design-time. To make this content pretty, we use CSS. The way we make this content dynamic is through JavaScript. It’s the single source of browser interactivity, and because we all have browsers in our pockets, on our desks, on tablets and Smart TVs, JavaScript has become the most popular, most ubiquitous language on the planet.

On the server, we have technology choices, but to keep a single language for both client and server, we choose JavaScript. Node empowers this choice with an incredibly fast, built-in web server. Node has built-in support for http connections, sockets, and web sockets. Building atop this stack with various open-source packages, we can achieve all the power and simplicity of MVC, all from JavaScript.

Ultimately, the choice to use JavaScript in both client and server doesn’t mean we have complete code sharing between client and server. Each environment has different concerns and responsibilities. Putting security client-side is silly since we deploy our source code to the browser. But entity validation is a concern that both browser and server share, and we can now accomplish this with a single solution.
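As a concrete sketch of that shared-validation idea (the function name and the regex are my own illustration, not from the talk), the same JavaScript file can run in the browser and in Node:

```javascript
// isValidEmail is a hypothetical shared validation routine.
// In the browser it backs form validation; in Node the same rule
// is re-checked before the data reaches storage.
function isValidEmail(value) {
    return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// Node side: expose it as a module when a module system is present.
if (typeof module !== 'undefined' && module.exports) {
    module.exports = isValidEmail;
}

console.log(isValidEmail('rob@example.com'));  // true
console.log(isValidEmail('not-an-email'));     // false
```

One file, one rule, no chance for the client and server checks to drift apart.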

Methodologies of Node

Some standard methodologies of Node are:
  1. Everything is asynchronous, no blocking
  2. It’s just Standard JavaScript
  3. There is no global scope
  4. A rich community of open-source packages
Part of what makes Node so fast at I/O is that everything is asynchronous. Instead of blocking the current thread, we use the standard JavaScript pattern of passing in a callback. When the task is complete, our callback is called, and our program resumes. Because of this, the Node event loop is free to service other requests while our request is waiting. If we’re careful (and lucky), what constrains our application isn’t our code but rather the number of concurrent network requests on the box. (This is a wonderful problem to have.)

The one place where Node diverges from standard JavaScript (though ECMAScript 5 and 6 catch up here) is global scope. It is trivially easy in the browser to pollute the global namespace. Forget to put a var in front of your variable and you’ve made it a global variable. If I create an identically named variable in my library and also forget the var, then we now share the variable, and very bad things can happen. Node solves this for us by making each file a separate module. The module system is very powerful in Node.

The community of packages (much like NuGet for .NET) empowers node developers to orchestrate elegant solutions of common, reusable, replaceable components. We’ll discuss NPM, the Node Package Manager, after we install node.
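A minimal sketch of that file-as-module isolation (the file name and function are invented for illustration): a variable declared at the top of a Node file belongs to that module, not to a shared global scope.

```javascript
// counter.js (hypothetical): `count` is private to this file.
// Nothing outside this module can read or clobber it; only what
// we attach to module.exports escapes.
var count = 0;

function increment() {
    return ++count;
}

if (typeof module !== 'undefined' && module.exports) {
    module.exports = { increment: increment };
}
```

Another file opts in explicitly with `var counter = require('./counter');` and can only reach what was exported.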

Installing Node

Installing node on Windows or *nix is trivial. Visit the Node website, push download, run, next, and done. To prove it installed correctly, open a command prompt or terminal, and type node --version. If it reports a version like v0.10.17, you have Node installed correctly. The really elegant thing is that the Node homepage identifies your OS through your browser and presents the correct download for your OS. I’m browsing from a 64-bit Windows device, so my download is node-v0.10.17-x64.msi. With the Node install, you also installed npm, the Node Package Manager. The installer also recommended you add these to your PATH. As we walk through the demos we’ll presume you have node and npm in your path as well.

The Node REPL

Run node and you start the repl — the read-eval-print loop. You can type standard JavaScript here, and it’ll print the answer.
>"This is a string"
This is a string
>new Date()
Thu Aug 29 2013 18:00:00 GMT-0700 (US Mountain Standard Time)
The node repl is amazing at quickly running small one-liners, and running node apps is just as easy.

Run a Node app

To run a node app, just pass the JavaScript filename as the first argument.
node app.js
Let’s create a simple hello_world.js file:
console.log(new Date());
and run it:
node hello_world.js
Woo hoo! That was easy. Notice how it started up, did its task, and then exited when it had nothing more to do. I wrote a batch file a while ago where I was harvesting the current date to use as a folder name. The batch file was a page long. We could modify this one-line program to export the date in a specific format, and call it from the batch file, and save ourselves quite a bit of work. Node is instantly becoming useful. If we have a task that happens periodically we can create a sample hello_world2.js like this:
setInterval(function () {
    console.log(new Date());
}, 1000);
Run this app the same way:
node hello_world2.js
Note that every second it prints the date, and doesn’t exit. Hit Ctrl-C twice to crash out of this app. The first scenario is a perfect micro-model for a command-line tool, and the second is a perfect model for a web server. Imagine filling out this file with meaningful content, and just calling it.
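That batch-file replacement mentioned above might look something like this (the file name and formatting choices are mine, a sketch rather than the original script):

```javascript
// hello_date.js (hypothetical name): print today's date as a
// folder-friendly name like 2013-08-29. A batch file can capture
// this output, e.g.:  for /f %%d in ('node hello_date.js') do mkdir %%d
function pad(n) {
    return n < 10 ? '0' + n : '' + n;
}

var now = new Date();
console.log(now.getFullYear() + '-' + pad(now.getMonth() + 1) + '-' + pad(now.getDate()));
```

One line of real logic replaces a page of batch-file string surgery.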

Everything is Asynchronous

In Node, there is no Thread.Sleep(). Instead, we use setTimeout(..) to schedule additional work, passing in a callback. If we’re reading a database or an external service, we also just pass in a callback. This is the secret sauce that makes Node so fast. The event loop gets done with the current code having scheduled the future work, and is free to handle other requests. When data is available or when sufficient time has elapsed, the event loop will come back to our request. This one methodology alone makes Node head-and-shoulders faster at processing I/O bound tasks. Because everything is Asynchronous, Node has developed a convention that allows us to call things and harvest results. In most use-cases, the last argument passed in is the callback, and the callback’s first parameter is an error, or null if it didn’t error.
callTheLib('pass', 'parameters', function (err, results) {
    if (err) {
        throw err; // Bad things happened
    }
    // It worked
});
Most Node libraries and modules use this standard pattern. You’ll likely see this in most programs and modules you use.
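To see the convention from both sides, here is a sketch (the function `delayedDouble` is invented for illustration) of writing and calling an error-first asynchronous function:

```javascript
// delayedDouble follows the Node convention: the callback comes
// last, and the callback's first parameter is an error (null on success).
function delayedDouble(n, callback) {
    setTimeout(function () {
        if (typeof n !== 'number') {
            return callback(new Error('n must be a number'));
        }
        callback(null, n * 2);
    }, 10);
}

delayedDouble(21, function (err, result) {
    if (err) { throw err; }
    console.log('Result: ' + result); // Result: 42
});
```

Note that `delayedDouble` returns immediately; the event loop is free until the timer fires and the callback runs.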

Hello Web Server

Let’s create a simple Node web server. Taking a simple example from
var http = require('http');
var port = process.env.PORT || 1337;
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(port);
console.log('Server running at http://localhost:' + port + '/');
and save it to a file like hello_web.js then run it in the normal way:
node hello_web.js
As it gets the web server running, it’ll spit out the url to the console (localhost:1337 if you don’t have a PORT environment variable set). Connect your browser to http://localhost:1337/ and you’ll see the reply. Congratulations, you have a Node web server running.

Module Loading

In the “Hello Web Server” we saw an interesting line that loaded the http module: var http = require('http'); This loads a built-in package called “http”. Loading other packages is just as easy. If we had a package called 'lib_name', we could load it like this:
var lib_instance = require('lib_name');
These modules are loaded using the CommonJS module loading paradigm, which ironically, is synchronous. The browser can’t use this synchronous mechanism as it may need to download things, so in-browser we use an upgraded paradigm: AMD (Asynchronous Module Definition) from libraries such as RequireJS. Though you can use AMD inside node, it seems awkward to put AMD atop CommonJS. Ideally, Node would leverage AMD and we’d have a singular module loading platform for both client and server, but given the existing codebase, this would be a very unpopular decision. ECMAScript 6’s modules may resolve this in time.

For modules that aren’t built-in, we can download these modules with NPM (Node Package Manager). npm.exe was installed with the Node download, and provided you opted to add node to your path, you also got npm in your path. Thus for external modules, provided you wanted a module named “packagename”, the process is two steps:
  1. npm install packagename from the command line
  2. var lib_instance = require('packagename'); inside your code
You now have a reference to this library that you can use to call functions and inspect properties associated with this library. The files are downloaded into the node_modules folder in the project directory. The project may have additional dependencies, and as NPM downloads your chosen library, it will also download these dependencies recursively.

Each project has a package.json file that explains, among other things, both the design-time and run-time dependencies of the module. You would do well to create a package.json file for each app you write. Use npm init to walk through a series of prompts to construct this file.

One of the major benefits of the node ecosystem is the thriving community publishing open-source packages that do interesting things. For most common problems we have, there are probably 100 libraries that already do it. We can then just string together these packages to orchestrate our application.
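A minimal package.json, with illustrative values of the sort `npm init` would generate (the name and dependency are examples, not from the original post), might look like:

```json
{
  "name": "myapp",
  "version": "0.0.1",
  "description": "Example manifest with illustrative values",
  "main": "app.js",
  "dependencies": {
    "express": "3.x"
  }
}
```

With this in place, a fresh clone of the project only needs `npm install` to pull down everything it depends on.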

Hello Express

The “Hello Web Server” example was lovely in proving the simplicity of the Node platform, but ultimately it isn’t that useful to us. We prefer frameworks that do the heavy lifting for us, and ultimately, we prefer a web framework that gives us an MVC experience. Enter Express, an MVC framework for Node. Express gives us the separation of concerns we’re used to when we build websites, and feels a lot like ASP.NET MVC or Ruby on Rails. Getting started in Express is very easy. From a command-line or terminal:
  1. npm install -g express
  2. express myapp (or choose an appropriate project name)
  3. cd myapp
  4. npm install
  5. node app.js
In step 4, we ran npm install without a package name. This looked through the application’s package.json manifest looking for dependencies and downloaded each one. Once we’ve got the app running on step 5, look in the console to see which port it randomly chose, and hit that with a browser. Welcome to MVC in Node! Thanks Express. Traversing through the folders we see the usual suspects:
  • controllers
  • routes
  • views
Pop open a few, make changes, restart the server (node app.js) and see your application come to life.

Command-line tools with Node

The other common paradigm is to use Node for command-line tools. JSHint is a command-line tool that lints your JavaScript. Though we don’t have a “compile” step per se, we do have JSHint, an excellent tool to help us find and resolve common mistakes we make. Grunt is a JavaScript build system, and another excellent example of a command-line tool. How can we begin leveraging Node today? We definitely have JavaScript and CSS files associated with our chosen server platform: ASP.NET, Java, PHP, etc. We could use Grunt from within that build to run JSHint, CSS Lint, minify, and concatenate these files to improve server speed. From inside MSBuild we can leverage the Exec task to run grunt. Great first wins for very little effort.
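As a sketch of that MSBuild hook (the target name and placement are my own; the `Exec` task itself is standard MSBuild), something like this in the .csproj could run grunt after each build, assuming grunt is on the PATH:

```xml
<!-- Illustrative target: run the grunt default task after compilation. -->
<Target Name="RunGrunt" AfterTargets="Build">
  <!-- Exec fails the build if grunt returns a non-zero exit code. -->
  <Exec Command="grunt" WorkingDirectory="$(ProjectDir)" />
</Target>
```

Because `Exec` honors exit codes, a lint failure in grunt fails the whole build, which is exactly what we want.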

Debugging Node Apps

In Visual Studio, we’re used to hitting F5 and then stepping through code with F10. In Chrome Developer Tools, we can similarly set breakpoints in the browser, examine variables, and F10 debug through code. We can accomplish similar line-by-line debugging in Node as well. Two tools make this incredibly easy.

Node Inspector is an open-source tool that wires up Node’s built-in debugger to Google’s Web Developer Tools for a very intriguing debugging experience. The setup is a bit peculiar, but you can view tutorials here and here. I’d describe these steps if I could do a better job than they did. But I’ll just give props to Danny and Erick.

Another option is WebStorm. WebStorm is an IDE for HTML, CSS, and JavaScript written by JetBrains, the makers of ReSharper, RubyMine, and IntelliJ IDEA, the basis of the official Android IDE. Unfortunately, WebStorm is not a free product, but the money is very, very well spent.
  1. Start Webstorm
  2. Create New Project
  3. Choose template “Node.js Express App”, enter other parameters, and click ok
  4. Click configure
  5. Open some js files (like app.js or routes/index.js)
  6. Set breakpoints (click in left margin)
  7. Push the bug icon (next to the green “run” triangle)
  8. In the console tab (bottom-left) look at the port
  9. Visit the url in a browser (e.g. http://localhost:3000/)
Like NodeInspector, we now have everything we’re used to in Visual Studio — run-time property inspection, stack traces, F10 line-by-line debugging. Problem solved.


This was indeed a whirlwind tour through Node, but hopefully even from this simple introduction, you can see the potential power in the platform. If you followed along, you also got to see how incredibly approachable the platform is too. How will you use Node? Happy coding!

Layout page and Blank Layout Page in ASP.NET MVC

Here’s the scenario: I want a “master” page for most pages on my ASP.NET MVC site, but I also have a few pages that don’t play along.  Maybe these are pop-ups that don’t need the standard menus, maybe they don’t need the left column, etc.  I could create two totally separate Layout.cshtml pages, but I really don’t want to do that.  I really want all the “standard” stuff in both: CSS reset, jQuery references, etc, etc.

Here’s the technique I use.  I create one layout page I’ll call _Blank.cshtml with the core scripts and links but no “chrome” — no content.  I’ll create a second page called _Layout.cshtml with the site theme content in it.  For the majority of pages, I get the standard _Layout.cshtml; for those few pages that need their own custom layout, they can inherit from _Blank.cshtml without having to reinvent the wheel. Here’s _Blank.cshtml:
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js ie6 oldie ie" lang="en"> <![endif]-->
<!--[if IE 7]> <html class="no-js ie7 oldie ie" lang="en"> <![endif]-->
<!--[if IE 8]> <html class="no-js ie8 ie" lang="en"> <![endif]-->
<!--[if IE 9]> <html class="no-js ie9 ie" lang="en"> <![endif]-->
<!--[if IE 10]> <html class="no-js ie10 ie" lang="en"> <![endif]-->
<!--[if gt IE 10]> <html class="no-js ie" lang="en"> <![endif]-->
<!--[if !IE]>--> <html class="no-js" lang="en"> <!--<![endif]-->
 <meta charset="utf-8" />
 <meta name="viewport" content="width=device-width,initial-scale=1" />
 <link rel="icon" type="image/x-icon" href="/favicon.ico" />
 <script src="//"></script>
 <link rel="stylesheet" href="//" />
 <link rel="stylesheet" href="//" />
 @RenderSection("head", required: false)

 @RenderBody()
<!--[if lt IE 9]>
<script src="//"></script>
<![endif]-->
<!--[if gte IE 9]><!-->
<script src="//"></script>
<!--<![endif]-->
<script src="//"></script>
<script src="//"></script>
<script src="/js/libs/jquery.unobtrusive-ajax-2.0.30116.0.min.js"></script>
<script src="/js/libs/jquery.validate.unobtrusive-2.0.30116.0.min.js"></script>
<script src="//"></script>
<script src="/js/init.js"></script>
@RenderSection("scripts", required:false)
<script>
var _gaq=[['_setAccount','UA-xxxxxxxx-1'],['_trackPageview']];
(function (d, t) {
 "use strict";
 var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
 s.parentNode.insertBefore(g, s);
}(document, 'script'));
</script>
And here’s _Layout.cshtml:
@{
    Layout = "~/Views/Shared/_Blank.cshtml";
}
@section head {
 <link rel="stylesheet" href="/css/Site.css" />
 @RenderSection("head", required: false)
}
@section scripts {
 <script src="/js/layout.js"></script>
 @RenderSection("scripts", required: false)
}
<div class="page">
 <header class="clearfix">
   <img src="/img/logo.jpg" alt="The logo" />
   <nav class="clearfix">
     <ul>
       @if ( Html.CurrentUserIsAuthenticated() ) {
         <li>Welcome @Html.CurrentUserName() @Html.ActionLink("Logout", "Logout", "Account")</li>
       } else {
         <li>@Html.ActionLink("Login / Register", "Login", "Account")</li>
       }
       <li><a href="/">Home</a></li>
       <li><a href="/Home/about-us">About Us</a></li>
       <li><a href="/Home/contact-us">Contact Us</a></li>
     </ul>
   </nav>
 </header>
 <div id="main" class="clearfix">
   @RenderBody()
 </div>
 <footer>
   Copyright My Company &copy; @DateTime.Now.Year. All Rights Reserved. |
   Site built by <a href="" target="_blank">Richardson &amp; Sons, LLC</a> |
   <a href="/Home/terms-of-use" class="bot_hyper">Terms of Use</a> |
   <a href="/Home/privacy-policy" class="bot_hyper">Privacy Policy</a>
 </footer>
</div>
Of course you may need more or less than this.  Maybe you need to reference lo-dash or don’t need moment.  Maybe your blank page doesn’t need an init script.  The methodology is the same though.  A _Blank.cshtml with “a blank page” (and all the scripts), and a _Layout.cshtml with all the standard template content. Happy coding! Rob

SQL in the City London

I had a wonderful time attending SQL in the City London.  It’s really great to see the new versions of the tools and exchange new ideas with friends.  Towards full disclosure, I’ve been a Friend of Red Gate for as long as I can remember, and also spoke at SQL in the City last year in both Austin and Seattle.  This year was a new format: leading attendees through a focused journey of continuous integration and deployment together in one large room.  It worked well. There was also a small off-track room seating about 20 to 40 down a winding hallway, though with such a large attendee base, there was usually no room.  It did help us focus on the main event, but it did remove some of the free-flowing discussion that only a small group can bring. The opening discussion was pretty much a spot-on repeat of last year.  We did the deployment balloon game, and successfully deployed only one of our 5 or 8 groups of balloons.  It’s a wonderful object lesson, though having seen the example and corresponding slides last year, it didn’t really grab my attention.  My fellow attendees seemed to enjoy the presentation — a wonderful reserved optimism that speaks volumes of the glorious British culture.  Breaks between classes helped us absorb the material and yielded great discussions.  We also queued very well for afternoon tea.  There wasn’t much space in the venue to hold us when we weren’t sitting down, so the displays and conversation areas were loud and cramped, and most of us either spilled out of the building or back into our seats.  “A Day in the Life of a DBA” concluded the day, and seemed like it could’ve been a wonderful discussion.  Alas, I quickly zoned out as it turned into a gripe session about everything DBAs hate about developers, networks, management, users, life, and why getting woken up is not fun.  All else being equal, I wish I had fit in the smaller room. 
I’m really enjoying how the tools are maturing and coming together into a great suite, targeted nicely towards both on-premise use and cloud deployments, with tools to facilitate developers and tools to facilitate IT pros.  David Atkinson did a wonderful demo of continuous integration using the TeamCity plugin that drives SQL Compare, SQL Data Compare, SQL Doc, other tools, and in short order, database migrations.  Justin Caldicott and Grant Fritchey did a wonderful back-and-forth demo — from both the developer’s and IT’s perspectives — of the challenges of deployment and how Deployment Manager really saves the day.
Deployment Manager is struggling because it’s one of the few Red Gate tools that requires you to accept its paradigm rather than fitting into a niche inside your process.  (SQL Compare, SQL Data Compare, SQL Monitor, .NET Reflector, ANTS Profiler, and even to a lesser extent SQL Source Control all get plugged into an existing workflow with ease.)  Deployment Manager really needs to own the deployment process.  This is especially difficult to swallow since the IT pro’s bread and butter is to ensure deployment is seamless, painless, and exact.  Like any good DBA, they get very OCD about it.  Deployment Manager asks the business to “trust us, we’ll do the right thing.”  Automation is wonderful, but black boxes scare people.  Deployment Manager seeks to walk this fine line, and does a decent job, but it is a very hard sell.  Alas, I digress.
I got tapped with only a few moments’ notice to lead a group discussion about Database Migrations — a topic I’m quite passionate about.  Of the 3 groups of 20-or-so, I chose the biggest challenge — the group that didn’t own any Red Gate tools.  We quickly got the standard gripe out of the way — the tools are expensive — and then began exploring both the challenges of Database Migrations and the solutions being proposed.  (It was fun getting to use the royal “we” to discuss Red Gate for a few minutes.)
The methodology proposed seems very solid, and I’m really excited to see how they execute on this vision.  In a true continuous deployment system where the build server fully owns deploying all assets (web sites, command lines, services, GUI tools, and databases), the linchpin to completely avoiding babysitting it is Database Migrations.  I’m eager to see this product come to life.  The group seemed very receptive, and some said they’d give SQL Source Control a try.  (SQL Source Control is a great gateway drug into the Red Gate toolchest.)
SQL in the City is a wonderful event, and I really enjoyed attending again this year.  I’d highly recommend the free day of training and insight into Red Gate tools, and wish only that there were more of them in cities closer to me.  Many thanks to Annabel and her team, who went above and beyond to make this event a wonderful success.  Well done.

IIS Web Server Best Practices

Best practices I use when setting up IIS:
– Each site should be a separate (sub-)domain.  This solves a few problems: “../../don’t/do/this.jpg” is avoided when you can presume the root of your site is “/”, which means you can avoid relative paths (even if you need not crawl up a directory) and you can avoid the ~.  It’s much cleaner all the way around.
– Because each site is its own (sub-)domain, you also avoid the pitfall of virtual directories.  A virtual directory is an app within an app, and some changes to the outer app cascade into the inner app: for example URL rewrite rules, authentication, dll mapping, etc.  Basically the system starts at your app’s directory’s web.config, crawls up the folder stack layering behind it every web.config it finds, layers behind that the web.config in the framework directory, and finally machine.config.  This is why you need not copy every change in your regular web.config into the web.config in the Views folder in MVC.  Because a virtual directory is by definition an app inside another app, of necessity you’ll inherit the outer app’s web.config, potentially negatively impacting your app as the outer app evolves.
– The app pool is the execution space, so each site should have its own.  That doesn’t completely protect you from another site blowing up your site, but it does help considerably.  Especially if a technician needs to recycle the app pool as part of an app upgrade, or if only one site on the machine is having troubles.
– If you consistently change things on all sites, like removing headers, configuring additional MIME types, rearranging or removing default document names, setting the error pages, etc., do these on the machine node in IIS rather than redundantly for each site.
– is a great tool for highlighting configuration errors and places where you’re exposing more information than necessary.
– Make sure you handle the “naked domain” (zone apex).
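One way to wire up that apex-to-www redirect is with the IIS URL Rewrite module in the site’s web.config.  This is a sketch, assuming you chose the www form as canonical; example.com stands in for your domain:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- 301 any request for the naked domain over to the www host. -->
      <rule name="Redirect apex to www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^example\.com$" />
        </conditions>
        <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```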
Mapping the www name is important, but users could just as easily hit the naked domain (without the www), and if your site doesn’t handle both, a consistent portion of your users will consider your site “broken”.  For SEO purposes, permanently redirect one to the other.  (Which you choose is likely a matter of corporate culture or preference, and ultimately is irrelevant … provided you consistently choose it in IIS configurations, Google Webmaster Tools, etc.)
– If at all possible, run the apps in the most recent framework version, in “Integrated” mode, with 32-bit disabled, with modern .NET Frameworks installed on the box, and all Windows Updates applied.  Ideally you’ll be on the most modern OS version too.  Your apps may need code changes to make this possible.  These are the defaults with new installs and are promoted as “modern techniques” [read: best practices], and ensuring your apps are compliant suggests future deployments to similar or newer hardware or OSs will be less traumatic.
– c:\inetpub\wwwroot\myapp is an awful place to put your web site folder.  Because this is the default, if I’m a hacker trying to compromise your site and I find another way into your box (FTP, RDP, network share, etc.), I can stick something in all such folders and compromise every site on the box.  Script kiddies have automated attack tools that do this.  Instead, I create a folder like C:\Internet\ or C:\Websites\ or similar.  Inside, I have a folder for each site with sub-folders for:
  • db
  • logs
  • www
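For example, with a couple of sites on the box (the site names here are placeholders), the layout ends up looking like:

```
C:\Websites\
    example.com\
        db\
        logs\
        www\
    another-site.org\
        db\
        logs\
        www\
```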
I’ll then put the IIS website content in the www folder and point IIS to put the site’s logs into the logs folder.  (Ever crawl through IIS’s default log folder trying to figure out which log folder you want?  I’m sorry, “W3SVC6” is not sufficiently descriptive.)  And if SQL Server is on the same machine (not a best practice), point it to put the mdf and ldf files into the db folder.  Now when you want to back up “everything”, just stop IIS, stop SQL, and copy C:\Internet.  You’ve got “everything” for all sites on the box (with the exception of C:\Windows\System32\inetsrv\config\applicationHost.config, which you should also back up periodically).
– applicationHost.config is not scary.  It’s the “Metabase” (using IIS 6’s term) and is just an xml file.  From time to time, IIS’s GUI has a wacky limitation or I want at it faster than that, so I just pop open C:\Windows\System32\inetsrv\config\applicationHost.config in an elevated notepad and hack away.  Want to set up the second server in the farm to exactly match the first?  Install IIS, install all the plugins you need via Web Platform Installer, run Windows Update a few times, then use Beyond Compare or WinMerge to diff the old box’s applicationHost.config against the new box’s, and copy across.  Ta-da!  Instant second box.  (BTW, don’t copy over the “encryption key” lines, or the module and handler lines for plugins you didn’t install, and make careful backups before changing this file.  Either that, or you can get good at reformatting boxes.  I only needed to make that mistake once.  :D)  Of particular interest in applicationHost.config is the <sites> node.
– One of the big challenges with IIS is “who owns the site’s web.config: the IT dept or the developers?”  This is because site-specific changes made in IIS are stored in the app’s web.config.  (This is also why php and node apps hosted on Windows have a web.config file even though they don’t use .net.)
Alter the list of default documents or change the authentication scheme, and it’ll write these nodes into web.config.  On next deploy if you flush and reset everything — including web.config — you’ll remove these settings.  Oops.  (Also see the “if you do it on every site, do it to the box not to each site” note above.) – The remote management service is incredibly cool.  Typically the only reason we RDP to the IIS machine is to use IIS Manager, run windows updates, or figure out why it ran out of hard drive space.  It is an increased attack surface to have the management service on, so perhaps configure the firewall to only expose port 8172 (or the one you configure) to the LAN and not to the public.  Now no more RDPing for config changes. I’m confident this is hardly an exhaustive list, but based on these, you can get to a pretty good place in IIS, and probably get the Google-fu cranked up too.  Happy hosting! Rob