Archives / 2007 / June
  • More Vista Baby Naming Fallout

    I have over 100 links to other blogs, news sites, and rumblings (including Windows Vista magazine and The Guardian) over the naming of our daughter last month. I'll probably post a follow-up with links to them, as some of them are quite humorous.

    However this one jumped out at me this morning:

    This time, it’s allegedly a Vice President of Microsoft, Bil Simser, who has charted new baby naming territory. According to a post in the Windows Vista Magazine, Simser’s new daughter is named Vista Avalon.

    Vice President huh? Well it's *got* to be true, it's on the Internet. What a promotion from MVP, and I don't even work for the company! Gotta love the media. As my first executive decision, I hereby declare .NET, SharePoint, and Visual Studio open source! Now I should send a note asking for my parking pass and key to the executive squash courts.

    Update: It's been noted that Vista wasn't listed on Wikipedia's page for unusual names. No longer. I updated it with her name and a link to her blog entry. She's the first entry for the letter "V".

  • Change Calendar Time Zone

    Got an odd message this morning as I was slogging through emails (and figuring out how to corrupt the world with my new CrackBerry).


    I have no idea what this means. Did someone change Mountain Standard Time and I didn't get the memo or something?

  • ReSharper Goodness?

    One of our devs was doing a refresh from source control just now and got this ReSharper exception:

    JetBrains.ReSharper.Util.InternalErrorException: Shit happened ---> JetBrains.ReSharper.Util.InternalErrorException: Shit happened
        at JetBrains.ReSharper.Util.Logger.LogError(String) in c:\Agent\work\Server\ReSharper2.5\src\Util\src\Logger.cs:line 389 column 7
        at JetBrains.ReSharper.VS.ProjectModel.WebProjectReferenceManager.ProcessAssemblyReferences(AssemblyReferenceProcessor) in c:\Agent\work\Server\ReSharper2.5\src\VS\src\ProjectModel\WebProjectReferenceManager.cs:line 409 column 9
        at JetBrains.ReSharper.VS.ProjectModel.WebProjectReferenceManager.get_References() in c:\Agent\work\Server\ReSharper2.5\src\VS\src\ProjectModel\WebProjectReferenceManager.cs:line 442 column 9
        at JetBrains.ReSharper.VS.ProjectModel.WebProjectReferenceManager.UpdateAssemblyReferences() in c:\Agent\work\Server\ReSharper2.5\src\VS\src\ProjectModel\WebProjectReferenceManager.cs:line 205 column 7
        at JetBrains.ReSharper.Shell.<>c__DisplayClass1.<Invoke>b__0() in c:\Agent\work\Server\ReSharper2.5\src\Shell\src\Invocator.cs:line 225 column 33
        at System.RuntimeMethodHandle._InvokeMethodFast(Object, Object[], SignatureStruct&, MethodAttributes, RuntimeTypeHandle)
        at System.RuntimeMethodHandle.InvokeMethodFast(Object, Object[], Signature, MethodAttributes, RuntimeTypeHandle)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object, BindingFlags, Binder, Object[], CultureInfo, Boolean)
        at System.Delegate.DynamicInvokeImpl(Object[])
        at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry)
        at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object)
        at System.Threading.ExecutionContext.runTryCode(Object)
        at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode, CleanupCode, Object)
        at System.Threading.ExecutionContext.RunInternal(ExecutionContext, ContextCallback, Object)
        at System.Threading.ExecutionContext.Run(ExecutionContext, ContextCallback, Object)
        at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry)
        at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
        at System.Windows.Forms.Control.WndProc(Message&)
        at System.Windows.Forms.ScrollableControl.WndProc(Message&)
        at System.Windows.Forms.ContainerControl.WndProc(Message&)
        at System.Windows.Forms.Form.WndProc(Message&)
        at System.Windows.Forms.ControlNativeWindow.OnMessage(Message&)
        at System.Windows.Forms.ControlNativeWindow.WndProc(Message&)
        at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr, Int32, IntPtr, IntPtr)


    Followup: Found a few references to this here and here. It was fixed in 2.5.2.

  • Project Structure - By Artifact or Business Logic?

    We're currently at a crossroads about how to structure projects. On one hand, we started down the path of putting classes and files into folders that made sense according to programming speak. That is: Interfaces; Enums; Exceptions; ValueObjects; Repositories; etc. Here's a sample of a project with that layout:


    In the above example, we have a Project.cs in the DomainObject folder; an IProject.cs interface in the Interfaces folder; and a PayPeriod.cs value object (used to construct a Project object) in the ValueObjects folder. As additional objects were added, maybe we would have a Repositories folder containing a ProjectRepository. After some discussion, though, we thought it would make more sense to structure classes according to the business constructs they represent.

    Using a business-aligned structure, it might make more sense to organize by unit of work or use case. So everything you need for, say, a Project class (the class, the interface, the repository, the factory, etc.) would be in the same folder.

    Here's the same project as above, restructured to put classes and files into folders (which also means namespaces as each folder is a namespace in the .NET world) that make sense to the domain.


    It may seem a moot point (where do I put files in a solution structure?) but I figured I would ask out there. So the question is: how do you structure your business layer and its associated classes? Option 1 or Option 2 (or maybe your own structure)? What are the pros and cons of each, or in the end, does it matter at all?
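    To make the two options concrete (the folder and file names below are taken from the classes mentioned above; the exact trees are illustrative, not the actual project):

```text
Option 1 - by artifact:                Option 2 - by business construct:
  Interfaces/IProject.cs                 Project/Project.cs
  DomainObjects/Project.cs               Project/IProject.cs
  ValueObjects/PayPeriod.cs              Project/PayPeriod.cs
  Repositories/ProjectRepository.cs      Project/ProjectRepository.cs
```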

  • More ReSharper 3.0 Goodness

    Found an interesting tidbit today. I had a piece of code with the word "bug" in a comment. It showed up like this in my IDE:


    Lo and behold I found that ReSharper is finding keywords in my comments and colorizing them so they stand out.

    It also does this with TODO and other keywords that you define. In the Tools options you'll find a leaf node called Todo Items. In there you can set up patterns. Here's the pattern for Bug:


    So any time it finds "bug" (using the regular expression) it'll colorize it red and display the error icon on that line. The default items are Todo, Note, and Bug. You can add your own, so you might use this as a good way to highlight things in your code for junior developers (for example, creating one called "Pattern" to highlight an implementation of a specific design pattern).
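    ReSharper's own pattern syntax isn't shown here, but the idea is a whole-word, case-insensitive match. As a plain .NET regular expression (a hypothetical stand-in for whatever the Todo Items dialog ships with), it would look something like this:

```csharp
using System;
using System.Text.RegularExpressions;

class TodoPatternDemo
{
    static void Main()
    {
        // Hypothetical pattern: match "bug" as a whole word, ignoring case,
        // the way a Todo Item pattern flags it inside a comment.
        var bugPattern = new Regex(@"\bbug\b", RegexOptions.IgnoreCase);

        Console.WriteLine(bugPattern.IsMatch("// BUG: fix this before release")); // True
        Console.WriteLine(bugPattern.IsMatch("// debugging notes"));              // False: "debugging" is not a whole word
    }
}
```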

    Note, this might not be a 3.0 thing but since I don't have 2.5 installed anymore I can't tell if it's been there all along.

    Very neat!

  • Richard Campbell in Calgary Wednesday, June 27

    Richard Campbell will be presenting to the Calgary .NET User Group on Wednesday, June 27th. Richard is the co-host of .NET Rocks and an awesome speaker. Based out of Vancouver, he haunts our Calgary corner from time to time (the last time I remember, he was at our 2006 Code Camp) so please do try to get out to see him.

    I had issues (read: errors) trying to register on the website, so you should be able to just show up and register at the door (I tried to see where you could contact someone, but their contact page doesn't seem to have any contact info like, oh, emails or phone numbers). The event is in the Nexen Centre, located at 800 7th Ave SW. Once you get in, go upstairs to the +15 level, past the Brown Bag (a sandwich shop), then over a walking bridge to the conference centre. It's poorly marked, but it's the same place I gave my MOSS 2007 presentation, if anyone was paying attention. See you there!

  • A Visual Tour of ReSharper 3.0

    ReSharper 3.0 is out now in final form and looks great. Here's a visual walkthrough of some of the 3.0 features, along with some old and otherwise existing ones ReSharper has to offer.

    Code Analysis

    ReSharper 3.0 has more code analysis features than previous versions. For example, here it tells me that I can make this field read-only. Why? Because it's only ever initialized in the declaration and never gets assigned again. You'll also get this suggestion for fields that are initialized only in constructors (but this is a test fixture, so there are no constructors). A quick hit of Alt + Enter and I can change the field as ReSharper suggests.


    Putting your cursor on the field and hitting Ctrl + Shift + R lets you select from a list of applicable refactorings. By applicable I mean they're context sensitive to the type, scope, and value you're looking at. For example, here I get a set of refactorings I can do to a field.


    Now if I hit the same shortcut on a method I get these offerings. Note that I can now invoke the Change Signature refactoring (and others) but Encapsulate Field is no longer available. ReSharper recognizes I'm in a method and not a field and does things in a smart fashion by filtering the refactoring menu down to only what's valid.


    Another suggestion appears when a method is only ever referenced within its own class and doesn't access any instance state. In that case, ReSharper will suggest that you make the method static. This can reduce execution time (but we're only talking about saving a few mips here, so don't get too excited).
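    For illustration (the class and method names here are mine, not from the post), a method like this touches nothing but its parameters, so ReSharper's analysis flags it with "Method can be made static":

```csharp
class VoltageCalculator
{
    // Reads only its parameters - no fields or other instance members -
    // so ReSharper suggests: "Method can be made static."
    public decimal Drop(decimal supply, decimal factor)
    {
        return supply * factor;
    }

    // After Alt + Enter, ReSharper rewrites it along these lines:
    public static decimal DropStatic(decimal supply, decimal factor)
    {
        return supply * factor;
    }
}
```

    The behavior is identical; the static call simply skips the implicit `this` argument.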


    With this (and other refactorings) you can press Alt + Enter to see a list of options. This also appears as a small light bulb in the left hand gutter and shows you a list of refactorings and optimizations you can perform on a method or variable.



    ReSharper not only offers great productivity with its refactorings, but it really helps out when you're trying to navigate around your codebase. With a few simple keystrokes, you'll be flying through your code in no time.

    You can search for a type name by pressing Ctrl + N. This brings up a window where you type to narrow down the search. For example, here I entered "MI", which shows me all the classes that start with "MI". You'll also notice that "ModuleInfoElement" is included. This is because the search matches on CamelCase names, which you can use to filter down even further.


    Here we've filtered the "MI" list down a little more by entering "MIC".


    Even further we enter "MICV" which shows me the view, presenter, and fixture.


    Documentation and Guidance

    ReSharper also knows about your code and can tell you about it. This helps as sometimes you just don't know what a method is expecting or why a parameter is passed to a method.

    Here I put my cursor in the parameter to the Add method and pressed Ctrl + P to show parameters and documentation. This is culled from the XML comments in your codebase, so it's important to document these!


    ReSharper also has the ability to generate some simple documentation (via the Ctrl + Q key) in the form of a popup. This provides information about a type, its visibility, and where it's located (along with hyperlinks to types in the popup). Very handy for jumping around (although you do have to engage the mouse).


    Other Productivity Features

    A few other small features that I always find useful.

    Ctrl + Shift + V

    This pops up a dialog which contains all of the things you've recently copied to the clipboard. You can just highlight the one you want and insert it. Very handy when you have a small snippet that you want to re-use.


    Ctrl + Alt + V

    One of my favorites, as I hate typing out declarations for objects. I'd rather just create the object and not worry about it (à la Ruby), however in C# you do sometimes want a variable around. ReSharper helps by taking a method call and introducing a variable for its result. It understands the return type and even suggests a name for you. Very quick when you want to reduce the keystrokes:


    There are a ton more features out there. If you're interested, check out Joe White's 31 Days of ReSharper, which he posted back in March/April; it has a small tip every day, from installation and setup to almost all of the refactorings and tools ReSharper has to offer. Awesome.

  • Mike Cohn is blogging

    Or maybe I'm just slow on the uptake? I got word via Mountain Goat Software, Mike's company, that his blog Succeeding with Agile is now available. However, there are posts there dating back to January. In any case, whether it's new or not, it's a blog to read. Mike has always been there for me with little tidbits of extra info, sending me resources when I was swimming in Agile questions. He's an excellent speaker and I look forward to his blog entries, even if they're only going to be once a month (hey, he's a busy guy). Check it out and consider adding him to your blog roll, as he's one of the key guys in Agile software today.

  • No iPhone for you Canada!

    I was informed by informed sources (and this is probably old news) that there'll be no iPhones for Canadians, unless you're willing to pay Cingular roaming charges. I was planning on getting an iPhone but found out that a) the plan is locked to Cingular, b) Cingular only services the U.S., and c) you cannot simply drop in a SIM card from any other provider, as iPhones are locked to Cingular.

    My personal opinion is that Apple should have unlocked the phone and let you use any carrier. Okay, so they wouldn't have got the big bucks they're obviously getting from Cingular, but if you crunch the numbers (and I'm sure they did) I would think you'd make more from hardware sales than payola in the long run. Guess not, so until Steve Jobs calls me up and puts me in that position, it's no iPhone for us Canucks.

    Update: I was doing a little blog sleuthing and came across various rumours about Rogers being a carrier for the iPhone. However a) it's about 6 months out at best b) there's no official word that I can find and c) more informed (non-official) sources tell me this is false. Gizmodo says it's "confirmed" but I have doubts. Every report though says "a customer service email" or "customer service representative". To me, that's not official in any capacity.

    Someone will obviously hack this, and probably within 6 months (or sooner) you could use one up here, but otherwise the only way would be to get a Cingular plan and then pay roaming fees all the time. I may have good consulting rates, but not that good.

    Anyways, now I'm looking at the HTCs, as people are saying they're good. Looking for suggestions from anyone on a model. There was an article in Forbes a few days ago on iPhone alternatives, and they look pretty good. Let me know what you recommend.

  • Load Testing Smart Clients

    It's a question, not a blog post. Anyone got some good tips, tricks, techniques, and tools for load testing Smart Clients? There's a plethora of info out for load testing web applications but little to nothing on Smart Clients. Just looking for ideas from the code monkeys out there.

  • An attempt at working with eScrum

    Okay, first off this tool wins the "Most Horrible Name Marketing Could Come Up With" award. I mean seriously, eScrum? Well, I guess when Scrum for Team System is taken what else do you do?

    I took a look at eScrum but after an hour of configuration and various error messages I gave up. I'm the type that if I need to spend half a day to try something out, something that I kind-of already have, that's half a day wasted. I personally think most of the people out there that are saying this tool is "pretty nice" haven't actually installed it (or tried to install it).

    So take this blog entry with a grain of salt, as I didn't make it to the finish line.

    What is eScrum?
    Anyways, eScrum is a web-based, end-to-end project management tool for Scrum built on top of TFS. It allows multiple ways to interact with your Scrum project:

    • eScrum web-based UI
    • TFS Team Explorer
    • Excel
    • MS Project

    Like any Scrum tool, it offers a one-stop place for all Scrum artifacts like product backlogs, sprint backlogs, retrospectives, and those oh-so-cool burndown charts.

    Installation is pretty painless. That is, until you realize that you need a bevy of Microsoft technologies and tools installed in order to run eScrum. eScrum uses a variety of web and back-end technologies, and you need to install all of them before getting your eScrum site up and running (although you can install them before or after eScrum itself, your choice).

    You'll need to install:


    Once everything is installed, hang on a second kids, there's still configuration to be done! eScrum is a bit of a pain to configure. Configuring eScrum is like installing Linux: there are a lot of steps, and at any point you can really screw things up.

    ASP.NET AJAX Control Toolkit Version Conflicts
    Since the release site of the AJAX Control Toolkit does not allow download of previous versions and eScrum is compiled with a specific version, you may need to update the web.config file to allow automatic usage of a newer version of the AJAX Control Toolkit.  eScrum has not been tested with newer versions, but may work well.

    Add the following XML to the eScrum web.config file after the </configSections> close tag. Afterward, update the newVersion attribute to the version of the control toolkit that you are using.

        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="AjaxControlToolkit" />
            <bindingRedirect oldVersion="1.0.10301.0" newVersion="1.0.CHANGEME.0"/>
          </dependentAssembly>
        </assemblyBinding>

    Setting up SharePoint Templates
    Oh yeah, the fun still continues and we're still not finished. The eScrum TFS Template includes a team SharePoint portal template which gets installed when a new TFS Project is created with the eScrum template.  The SharePoint templates must be added to the server before creating a TFS Project with the eScrum Process Template.
    Deployment Steps. Follow these instructions to get this step done:

    1. Log on to the target server
    2. Open a command prompt and change directory to: <SystemDrive>\Program Files\Common Files\Microsoft Shared\web server extensions\60\BIN
    3. Add the new templates using
      1. stsadm -o addtemplate -filename <path>\eScrum.stp -title "eScrum"
      2. stsadm -o addtemplate -filename <path>\eScrumFeaturesIdeas.stp -title "eScrum Features & Ideas"
      3. stsadm -o addtemplate -filename <path>\eScrumRiskLog.stp -title "eScrum Risk Log"
      4. stsadm -o addtemplate -filename <path>\eScrumStrategy.stp -title "eScrum Strategy & Issues"
    4. Type IISRESET to reset IIS

    Setting up an eScrum TFS Project
    eScrum uses eScrum TFS Projects for back-end storage and management, so you won't be able to use it on existing projects. Once you have added the eScrum Process Template to your TFS server, you will need to create a new TFS Project using the eScrum template.

    First you'll need to get the templates uploaded via Team Explorer (or inside Visual Studio). Make sure you don't have even the Word document open while you're uploading the template or it will fail when it tries to create the zip file.

    Once you've uploaded the templates and they're available, you need to create a project using the eScrum template:

    1. In Team Explorer, right click your server and select "New Team Project…"
    2. Name your project and use the eScrum template
    3. Make sure you and your team members are all added to the Project Contributors (or Project Administrators, depending on your preference) security group:
      1. Right-click on your new Project and select "Team Project Settings > Group Membership…"
      2. Double-click either the Administrators or Contributors group
      3. Change the "Add member" selection to "Windows User or Group"
      4. Add your members
      5. Click OK

    There are some other installs they want you to do and I suggest you follow the various installation and configuration guides but for my test this was good enough to get something up and running.

    Now browse to where you installed it and you'll see something like this:


    Creating Projects
    eScrum is a little odd, but it seems to align with the Scrum process. Of course, the thing with Scrum is that it's adaptable; there is no golden rule for how it works. There are guidelines and people generally follow them, but in eScrum, for example, you must have a product. The eScrum project you create isn't good enough; it needs something actually called a "Product" (using the concept that multiple products form a project). I don't personally do Scrum that way, so I found it a little frustrating. The other frustrating thing when setting up a project (oh sorry, "product") was that I couldn't save it until Product contributors (team members) were added, and it wouldn't let me add team members until I created groups, and that's where I stopped before my brain exploded.

    Enough Configuring, I give up!
    Yes, I gave up installing and configuring the beast as it was just too much. I mean, I'm all for tools and setting up websites but after an hour of screwing around (even though I knew what I was doing) I said enough was enough. Realistically, give yourself a half day (if you rush) or a full day with some testing to get this puppy up and running.

    In fact, even after I had the template set up and a test project created, I had no idea how to create a product other than through the Web UI (which I couldn't do because of the security issues). It didn't look like I could create one in Team Explorer, as all it would let me create was a bug, product details (but it needs a product first), sprint details, a sprint retrospective, or a sprint task. WTF?

    Yeah, the SharePoint Scrum Master was lost so either I'm an idiot (possible) or this tool isn't very intuitive, even for someone who thinks he knows what he's doing.

    I wasn't going to go through the rest of the steps and who knows what else was needed, thus I wasn't able to get screenshots with projects configured and sprint backlog items, etc. I'll leave that for another soul to give up his day for.

    I do however have some images for the various tabs so you can get a feel for what eScrum has to offer:

    Product Page


    Sprint Page


    Daily Scrum Page


    Retrospective Page


    Bottom Line
    Was it worth it? Was it worth all the installing and configuring and configuring and installing?

    IMHO, no.

    I'm very happy with Conchango's Scrum for Team System and hey, to install that I just had to upload a new process template from Team Explorer. No mess no fuss.

    Once you do get the configuration and installation out of the way, eScrum looks interesting. It's got a nice dashboard for tracking your sprint, lets you keep on top of the daily Scrum electronically, and offers a bevy of Scrum reports like burndowns, metrics, and a product summary (none of which I have seen because I didn't take it that far when setting it up).

    There are problems with the setup (even though I didn't finish). For example, the SharePoint template contains entries in the Links list pointing to http://eScrum and http://eTools, neither of which is correct, so you have to fix this (and frankly, I don't even know what the eTools link is supposed to be). The SharePoint templates are just custom lists with a few extra fields; nothing special here. Even the logo for the site was broken in the template, so it's obvious this was either rushed or nobody cared about the quality of presentation of the tool (and I wouldn't call this a 1.0 release).

    Another immediate problem I had: you have to modify an XML config file every time you add a project (and it's called a "Group" inside the config file). Maybe you can do it through the web UI, but it looked to me like you had to modify this for each project.

    I think for any kind of adoption, Microsoft needs to put together an installer for this, as we don't all have a day to kill configuring a tool that should be seamless (after all, it's just a website and a TFS template, remember). They also should have some documentation/guidance on this. From what I could get up and running, there's very little actual "guidance" on using the tool, and frankly, the websites have very little of anything about this tool. Does MS think you install it (assuming you have the gumption to go through the entire process) and it'll just work and people will understand it? Even Scrum for Team System has nice documentation on the process that goes along with the tool. Tools and technologies alone do not make for a good package.

    If you want to use Scrum with TFS, stick to Conchango's Scrum for Team System template. It has its own share of flaws but installs in about 5 minutes.

  • Coalescing with ReSharper 3.0

    The ReSharper 3.0 beta is out and I'm really digging it. It's little things that make my day a better place.

    For example I had a piece of code that looked like this:

        public ErrorReportProxy(IWebProxy proxy, bool needProcessEvents)
        {
            _errorReport.Proxy = proxy;
            if (proxy.Credentials != null) _errorReport.Proxy.Credentials = proxy.Credentials;
            else _errorReport.Proxy.Credentials = CredentialCache.DefaultCredentials;
            _needProcessEvents = needProcessEvents;
        }

    "if" statements are so 1990s, so you can rewrite it like this using the conditional operator "?":

        public ErrorReportProxy(IWebProxy proxy, bool needProcessEvents)
        {
            _errorReport.Proxy = proxy;
            _errorReport.Proxy.Credentials = (proxy.Credentials != null)
                                                ? proxy.Credentials
                                                : CredentialCache.DefaultCredentials;
            _needProcessEvents = needProcessEvents;
        }

    However, C# 2.0 introduced the "??" operator (called the null coalescing operator) alongside nullable types. It returns its left-hand operand if that operand isn't null, and otherwise returns the right-hand operand as the default value; it works with reference types as well as nullable value types. This shortens the code while keeping it readable (at least to me).
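    Outside of this particular snippet, the behavior is easy to demonstrate. This little helper (the method and variable names are mine, for illustration; the operator itself is standard C# 2.0) picks the supplied credentials or falls back to the defaults:

```csharp
using System.Net;

class CoalesceDemo
{
    public static ICredentials Pick(ICredentials supplied)
    {
        // ?? returns the left operand when it's non-null,
        // otherwise it falls back to the right operand.
        return supplied ?? CredentialCache.DefaultCredentials;
    }
}
```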

    ReSharper to the rescue, as it showed me squiggly lines in my code where I had the "?" operator and said (in a nice ReSharper way, not an evil Clippy way):

    '?:' expression could be re-written as '??' expression.

    So a quick ALT+ENTER and the code becomes this:

        public ErrorReportProxy(IWebProxy proxy, bool needProcessEvents)
        {
            _errorReport.Proxy = proxy;
            _errorReport.Proxy.Credentials = proxy.Credentials ?? CredentialCache.DefaultCredentials;
            _needProcessEvents = needProcessEvents;
        }

    Okay, so maybe I'm easily amused but it's the little things that make my day (and maybe yours) a little brighter. The coding assistance feature of ReSharper 3.0 is shaping up to be very useful indeed.

  • Refactoring Dumb, Dumber, Dumbest away

    In my previous crazy post I had shown some code that was butt ugly: a series of if statements to determine some value (an enum), then a switch on that value to assign some values, auditing the steps as it went along. It was ugly, and the challenge before me was to refactor it away. I chose the Strategy Pattern, as it seemed to make sense in this case, even if it did introduce a few more classes. So here we go.

    First, let's look at the full code we're refactoring. Here's the silly enumerated value:

        enum DumbLevel
        {
            Dumb,
            Dumber,
            Dumbest,
            Not
        }

    And here's how it's used:

        for (int i = 0; i < _segments.Count; i++)
        {
            ISegment segment = _segments[i];

            #region determine cable count & heat segment voltage

            int cableCount;
            decimal heatSegmentVoltage;
            DumbLevel dumbLevel;

            //determine the specific segment configuration
            if (_segments.Count > 1)
            {
                cableCount = segment.GetCableCount(); //use # of tracers to determine cable count

                if (cableCount == 1)
                {
                    if (PassesCount > 1)
                    {
                        if (i <= (_segments.Count - 2)) //for all but last
                            dumbLevel = DumbLevel.Dumbest;
                        else
                            dumbLevel = DumbLevel.Dumb; //last segment
                    }
                    else
                        dumbLevel = DumbLevel.Dumb;
                }
                else
                    dumbLevel = DumbLevel.Dumber;
            }
            else
                dumbLevel = DumbLevel.Not;

            //calculate cable count and heat segment voltage based on the segment configuration
            switch (dumbLevel)
            {
                case DumbLevel.Dumb:
                    cableCount = segment.GetCableCount(); //use # of tracers to determine cable count
                    AuditStep("Cable Count: {0} (based on count of Tracers identified)", cableCount);
                    AuditStep("");

                    heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage;
                    AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2})", SupplyVoltage, Project.VoltageDrop, segmentPercentage);
                    break;

                case DumbLevel.Dumber:
                    cableCount = segment.GetCableCount(); //use # of tracers to determine cable count
                    AuditStep("Cable Count: {0} (based on count of Tracers identified)", cableCount);
                    AuditStep("");

                    heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage / PassesCount;
                    AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2}) / Passes # ({3})", SupplyVoltage, Project.VoltageDrop, segmentPercentage, PassesCount);
                    break;

                case DumbLevel.Dumbest:
                    cableCount = _passesCount;
                    AuditStep("Cable Count: {0} (based on Passes #)", _passesCount);
                    AuditStep("");

                    heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage / PassesCount;
                    AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2}) / Passes # ({3})", SupplyVoltage, Project.VoltageDrop, segmentPercentage, PassesCount);
                    break;

                case DumbLevel.Not:
                    cableCount = 1;
                    AuditStep("Cable Count: 1");
                    AuditStep("");

                    heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop;
                    AuditStep("Supply Voltage ({0}) * Voltage Drop ({1})", SupplyVoltage, Project.VoltageDrop);
                    break;

                default:
                    throw new ApplicationException("Could not determine a known segment configuration.");
            }

            #endregion
        }

    Basically it's going through a series of segments (parts of a cable on a line) and figuring out what the segment configuration should be. Once it figures that out (with our fabulous enum) it then goes through a case statement to update the number of cables in the segment and the voltage required to heat it. This is from an application where the domain is all about electrical heat on cables.

    Let's get started on the refactoring. Inside that tight loop of segments, we'll call out to get the context (our strategy container) with a method we extracted called DetermineCableCountAndHeatSegmentVoltage. Here's the replacement of the Dumb/Dumber if statement:

        SegmentConfigurationContext context = DetermineCableCountAndHeatSegmentVoltage(i, segmentCount, segment, segmentPercentage);
        int cableCount = context.Configuration.CalculateCableCount();
        decimal heatSegmentVoltage = context.Configuration.CalculateVoltage();

    And here's the actual extracted method:

        private SegmentConfigurationContext DetermineCableCountAndHeatSegmentVoltage(int i, int segmentCount, ISegment segment, decimal segmentPercentage)
        {
            SegmentConfigurationContext context = new SegmentConfigurationContext();
            int cableSegmentCount = segment.GetCableCount();

            if (segmentCount > 1)
            {
                if (cableSegmentCount == 1)
                {
                    if (PassesCount > 1 && (i <= (segmentCount - 2)))
                    {
                        context.Configuration = new MultiPassConfiguration(PassesCount, SupplyVoltage, Project.VoltageDrop, segmentPercentage);
                    }
                    else
                    {
                        context.Configuration = new SinglePassConfiguration(cableSegmentCount, SupplyVoltage, Project.VoltageDrop, segmentPercentage);
                    }
                }
                else
                {
                    context.Configuration = new MultipleCableCountConfiguration(cableSegmentCount, SupplyVoltage, Project.VoltageDrop, segmentPercentage, PassesCount);
                }
            }
            else
            {
                context.Configuration = new DefaultSegmentConfiguration(1, SupplyVoltage, Project.VoltageDrop);
            }

            return context;
        }

    The if statement is still there, but now it lives in a method that's a little easier to read and makes more sense overall. We can tell now, for example, that when cableSegmentCount is greater than 1 we use the MultipleCableCountConfiguration strategy object. Okay, not as clean as I would like, but it's a step forward over a case statement.

    The configuration classes are the strategy implementation. First here's the interface, ISegmentConfiguration:

        public interface ISegmentConfiguration
        {
            int CalculateCableCount();
            decimal CalculateVoltage();
        }

    This just gives us two methods to return the count of the cables and the voltage for a given segment.

    Then we create concrete implementations for each strategy. Here's the simplest one, the DefaultSegmentConfiguration:

        class DefaultSegmentConfiguration : SegmentConfiguration
        {
            public DefaultSegmentConfiguration(int cableCount, int supplyVoltage, decimal voltageDrop)
            {
                CableCount = cableCount;
                SupplyVoltage = supplyVoltage;
                VoltageDrop = voltageDrop;
            }

            public override int CalculateCableCount()
            {
                AuditStep("Cable Count: {0}", CableCount);
                AuditStep("");
                return CableCount;
            }

            public override decimal CalculateVoltage()
            {
                decimal heatSegmentVoltage = SupplyVoltage * VoltageDrop;
                AuditStep("Supply Voltage ({0}) * Voltage Drop ({1})", SupplyVoltage, VoltageDrop);
                return heatSegmentVoltage;
            }
        }

    Here it just implements those methods to return the values we want. DefaultSegmentConfiguration inherits from SegmentConfiguration which looks like this:

        internal abstract class SegmentConfiguration : DomainBase, ISegmentConfiguration
        {
            protected int CableCount;
            protected int SupplyVoltage;
            protected decimal VoltageDrop;
            protected decimal SegmentPercentage;
            protected int PassesCount;

            public abstract int CalculateCableCount();
            public abstract decimal CalculateVoltage();
        }

    This provides protected values for the sub-classes to use during the calculation and abstract methods to fulfil the ISegmentConfiguration contract. There's also a requirement to audit information in a log along the way so these classes derive from DomainBase where there's an AuditStep method (we're looking at using AOP to replace all the ugly "log the domain" code).

    Now we have multiple configuration classes that each do one simple job: calculate the cable count and the voltage. This lets us focus on the algorithm for each and return the value needed by the caller. Other implementations of ISegmentConfiguration will handle the CalculateVoltage method differently based on how many cables there are, the voltage, and so on.
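    For illustration, here's roughly what one of those other strategies could look like. This is only a sketch based on the Dumbest branch of the original case statement (cable count comes from the passes count), with the auditing calls and the real SegmentConfiguration/DomainBase base omitted so it stands alone; the actual MultiPassConfiguration in the project may differ:

```csharp
// Minimal stand-in for the SegmentConfiguration base shown above
// (DomainBase/AuditStep omitted to keep the sketch self-contained).
abstract class SegmentConfigurationSketch
{
    protected int SupplyVoltage;
    protected decimal VoltageDrop;
    protected decimal SegmentPercentage;
    protected int PassesCount;

    public abstract int CalculateCableCount();
    public abstract decimal CalculateVoltage();
}

// Sketch of a multi-pass strategy mirroring the Dumbest case:
// Supply Voltage * Voltage Drop * Segment Percentage / Passes #
class MultiPassConfigurationSketch : SegmentConfigurationSketch
{
    public MultiPassConfigurationSketch(int passesCount, int supplyVoltage,
        decimal voltageDrop, decimal segmentPercentage)
    {
        PassesCount = passesCount;
        SupplyVoltage = supplyVoltage;
        VoltageDrop = voltageDrop;
        SegmentPercentage = segmentPercentage;
    }

    public override int CalculateCableCount()
    {
        // Dumbest branch: one cable per pass
        return PassesCount;
    }

    public override decimal CalculateVoltage()
    {
        return SupplyVoltage * VoltageDrop * SegmentPercentage / PassesCount;
    }
}
```

    The nice part is that the caller never knows (or cares) which branch it's on; it just asks whichever strategy the context picked for the count and the voltage.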

    Like I said, it's a start, and it puts us in a better place to test each configuration rather than that ugly case statement (which is practically untestable). It's also clearer for new people coming onto the project, as they [should] be able to pick up on what the code is doing. Tests will help strengthen this and make the use of the classes much more succinct. More refactorings that could be done here:

    • Get rid of that original if statement. This might be a bigger problem as it bubbles up to some of the parent classes, but at the very least it could be simplified and still meet the business problem.
    • Since all ISegmentConfiguration classes return the cable count, maybe this should just be a property as there's no real calculation involved here for the cable count.
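    That second bullet might end up looking something like this (just a sketch of the interface change, with a hypothetical implementation to show the shape; names follow the post):

```csharp
public interface ISegmentConfiguration
{
    // CalculateCableCount() becomes a simple read-only property,
    // since there's no real calculation behind it
    int CableCount { get; }

    decimal CalculateVoltage();
}

// e.g. the default configuration just hands back what it was given
class DefaultSegmentConfigurationSketch : ISegmentConfiguration
{
    private readonly int _cableCount;
    private readonly int _supplyVoltage;
    private readonly decimal _voltageDrop;

    public DefaultSegmentConfigurationSketch(int cableCount, int supplyVoltage, decimal voltageDrop)
    {
        _cableCount = cableCount;
        _supplyVoltage = supplyVoltage;
        _voltageDrop = voltageDrop;
    }

    public int CableCount
    {
        get { return _cableCount; }
    }

    public decimal CalculateVoltage()
    {
        // Supply Voltage * Voltage Drop, as in the default case
        return _supplyVoltage * _voltageDrop;
    }
}
```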

    Feel free to suggest even more improvements if you see them!

  • Acropolis, CAB, WPF, and the future

    "Acropolis, the future of Smart Client"

    So sayeth Glenn Block, product lead for the Smart Client Software Factory and CAB. Glenn's a good friend and he's just doing his job, but I felt a little shafted when Acropolis popped up on the scene. I mean, after the last few weeks of "CAB is complex" and "CAB is this" and "CAB is that", the last thing we need is a CAB replacement, but here it comes and it's called Acropolis.

    There were many requests to ship a WPF version of SCSF/CAB, and we're actually doing it now with the SCSFContrib project up on CodePlex. Is Acropolis a WPF version of CAB? We'll see, but Glenn says "Acropolis takes the concepts of CAB to levels that folks in p&p might have never dreamed". From the initial reaction I'm seeing from people like Chris Holmes and Oren, Acropolis doesn't look all that impressive. Another wrapper on top of WPF, a little orchestration thrown in to "wire up components and dependencies", and the promise of building apps without writing or generating any code. I've heard this story before with CASE, and like Oren, I see ugly XAML (or XML, or XML-like) code being behind all this, which doesn't give me a warm and fuzzy.

    I have yet to set up Acropolis and take it for a real test drive, so I have to act like the movie reviewer who's never seen the movie but has heard other reviews and formed some initial reactions from the trailer. If CAB wasn't on the scene, this would be great. It's hard enough to get deep into XAML as it is, so layering more complexity on top of that requires something that will help a developer, not hinder him. True, you can still (and will) rip open the XAML to figure out what's going on and make those adjustments, but at least it's not that complex right now with POWPF (plain old WPF, if there is such a thing). It's 2007 and we've evolved (almost) to the point where we can trust designers and editors. I still have to tweak the .designer generated files [sometimes] to get the right object parenting in a WinForms app, but I consider that part of the territory. However, when I look at what is behind the Acropolis XAML it makes me shudder. There was a quote from another blog that really disturbed me: "Probably the best suggestion I can give to my customers, as I always do, is to take inspiration from all of these solutions and to build his own one". Wow. That's the last option I would ever suggest to someone, especially if there's something out there to do the heavy lifting for you.

    What bothers me about this whole thing is the MS statement that "we currently have no further plans for SCSF releases". I bought into Software Factories and thought the implementation Microsoft chose (the GAT and GAX) was a good option. Building my own factories or modifying others isn't that difficult, and I can express what I really intend in a factory quite easily. With no future releases it means not only is CAB stopped in its tracks, so is SCSF. We just launched the SCSFContrib project, which was basically a way to extend the core without touching it, but that restriction now becomes a bit of a roadblock, and we haven't really even gotten rolling on the project yet.

    Maybe we need to go one step further and allow the core of CAB to be modified/rewritten/extended and let the community evolve it. Is that something that would be useful? I mean, after the debate that raged on and Jeremy Miller banging out his own "roll your own CAB framework", maybe we need to open the heart of the beast and give it an implant that will let it live past the Acropolis phase. Some of us have already invested in one framework, and I don't think there's a cost benefit to shifting to another one, although that seems like the path we're being pushed down. Maybe the SCSFContrib project needs to be modified to support core changes and really divorce CAB from its over-architected implementation. A CAB where the guts are abstractions might help support more popular community-driven adoption and get it past the dependency on MS tools. How about a CAB where you can use log4net, or Windsor, or pico? If Oren can build his own Event Broker in hours and Jeremy can instruct people on building their own CAB over a dozen blog posts, I don't see why this isn't possible given some help from the world around us.

  • The Robot is in the House

    Errr, make that the yard.

    Quick. What am I doing right now? I mean, right now, this very instant. Yes, I'm typing this blog entry but what else am I doing? I'm cutting my lawn. Really.

    Late last year we had a rather talented chap come in, brave the coldness of Alberta in September, and proceed to lay out 10,000 square feet of sod (no easy feat). That's the size of our backyard. Of course, spring brings the weeds, and the grass, and the chore of cutting all this new grass. There I was, faced with a decision. Should I buy a ride-on mower and be a weekend Andretti, chewing up the grass and probably killing myself (or at least one of the dogs) in the process? Or is there a better option?

    Enter the RL1000 Robomow from Friendly Robotics. Yeah, I kid you not. I heard about it through the grapevine, did some research and lo and behold most people felt it was a good buy. It's the same cost as a ride-on mower so it was one or the other. The geek in me of course chose the robot. After all, how many people can claim to have a robot do their backyard lawn cutting for them?

    It arrived on Friday and I spent part of Saturday setting it up. I felt like Sam Neill in Bicentennial Man when the robot (in the guise of Robin Williams) gets delivered to the house. Although the Robomow doesn't talk, do dishes, or take care of kids, it does a splendid job of cutting grass. It took me about an hour to lay out 500 feet of perimeter wire. This is a long piece of wire pegged into the ground, surrounding your lawn. On its first run, the Robomow travels around the perimeter following the wire to get a feel for how your backyard is laid out. Luckily ours, while huge, is pretty much square with a small section on either side of the house.

    After pegging everything in, I fired it up and had Sheila (Red vs Blue fans will know where this comes from) go around and figure stuff out. Once that worked (and it did on the first shot; I was impressed), I commanded her to do my bidding and mow that lawn. First it edges the lawn, cutting along the perimeter, then it criss-crosses over the middle part. It does this several times, and as it hits each edge it makes a small adjustment and goes back the way it came (in a slightly altered direction). After 3 or 4 passes it's done. It takes about 2-3 hours to do the lawn, but I just sit back and don't worry about it.

    Works like a charm, although a) I had to go and remove some of the larger weeds in the middle of the ground as Sheila got stuck on them and b) you still have to trim the edges but it only takes about 20 minutes for that (and less for a smaller lawn). All in all, a good investment. I have it programmed now to go every Tuesday, Thursday and Saturday night at 7pm. This leaves me the chore of watering the beast on Sundays which is fine for me and keeps everything short and sweet. There's no bagging, it just mulches things down to a fine cut and is actually better for the lawn overall.

    If you're looking for a new geek toy and have a big, flat, square(ish) lawn I highly recommend the Robomow. It really does work as advertised.

  • My own Private WTF

    I've always wanted to submit something to The Daily WTF (come to think of it, I think I did but that was a long time ago) but today it just made me cry as I experienced my own private WTF. We had a developer leaving today and as I was doing a code sweep (looking at what was there, how the domain was shaping up, etc.) I came across this gem:

        private enum DumbLevel
        {
            Dumb,
            Dumber,
            Dumbest,
            Not
        }
    Okay, I said to myself. It's his last day, he's having some fun. Back when I worked with wood-burning computers I wrote code with silly variable names too.

    Then of course my curiosity was piqued and I just had to know how this enum was being used. This led me to this snippet:

        if (cableCount == 0)
        {
            if (cableCount == 1)
            {
                if (PassesCount > 1)
                {
                    if (i <= (_segments.Count - 2)) //for all but last
                        dumbLevel = DumbLevel.Dumbest;
                    else dumbLevel = DumbLevel.Dumb; //last segment
                }
                else dumbLevel = DumbLevel.Dumb;
            }
            else dumbLevel = DumbLevel.Dumber;
        }
        else
        {
            dumbLevel = DumbLevel.Not;
        }

        //calculate cable count and heat segment voltage based on the segment configuration
        switch (dumbLevel)
        {
            case DumbLevel.Dumb:
                cableCount = segment.GetCableCount(); //use # of tracers to determine cable count
                AuditStep("Cable Count: {0} (based on count of Tracers identified)", cableCount);
                AuditStep("");
                heatSegmentVoltage = SupplyVoltage * Project.VoltageDrop * segmentPercentage;
                AuditStep("Supply Voltage ({0}) * Voltage Drop ({1}) * Segment Percentage ({2})", SupplyVoltage, Project.VoltageDrop, segmentPercentage);
                break;
            case DumbLevel.Dumber:
                // code omitted for sanity
                break;
        }

    This was by far the ugliest code I've seen on this project. The if statements alone got my blood boiling, but when I started in on the case statements my brain turned to mush and that was my day done.

    Sigh. Oh well, next week I'll introduce the team to the concept of inheritance so they can see how to make the case statements (and the craziest set of Enum values I've ever seen) go away. And Donald thinks he has it rough hiring new guys?

    NOTE: sorry about the formatting, my VS settings are just hosed today.

  • The Red "X" of Death

    You've heard of the Blue Screen of Death (Windows). You've heard of the Yellow Screen of Death (ASP.NET). Now here's the Red "X" of Death.

    Got this on one of our apps the other day from our QA folks. It's a Smart Client app using the DevExpress Ribbon, CAB, and a host of other UI goodness. Needless to say, the error wasn't too useful to anyone trying to fix it.

    There's *supposed* to be a grid in the middle there with all kinds of useful information and calculations. DevExpress just decided that it really didn't want to do all that work and gave us a nice big red "X" as if we're missing an image from a website.

  • Old and busted or new hotness

    Roy Osherove posted a what's-hot-and-what's-not list, mainly aimed at this whole ALT.NET developer talk that's been going on. Unfortunately, I'm a little at odds with what Roy posted and don't agree with some (most?) of his comparisons. It's also hard to compare things here, as he's grouped together items that overlap, are completely different, or are too vague to make sense side by side. I really don't care for the whole ALT.NET tagging, as I think even the term ALT.NET is silly, but here's my spin on Roy's items.

    UPDATE: Roy updated his blog entry with a note that he didn't necessarily agree with the list; these were his observations of the world. I was a little confused because I thought he was expressing what he felt. Silly me. Still, I think the comparisons are a little strange as they mix technology with concepts, which is why I put my own list together at the end. I also stand corrected on Aspect# and the fact that Castle can do both DI and AOP quite well. Thanks for the info!

    Hot: Castle, ActiveRecord, NHibernate
    Not: Datasets, Entity Framework, MS Application Blocks

    I'm not quite sure what he's talking about here. I don't feel ActiveRecord is "hot" and I try to avoid the pattern altogether. NHibernate for sure, and Castle is cool (over DataSets any day). Is he comparing Castle and its DI against the MS Application Blocks? More on that later.

    Hot: MVC, NUnit, MonoRail
    Not: Web Forms, SCSF, VSTS, MSTest

    Again it gets a little clouded here (at least with my glasses on). Definitely NUnit over MSTest, hands down. With the pain and suffering Oren's been going through with Web Forms, MonoRail looks like a good alternative (JP gave a presentation at the Calgary Code Camp and from what I saw it looked promising). MVC hot? A pattern? I guess. However, it's a tough call here as SCSF implements MVC and it's not a horrible implementation of the pattern, so how can one be hot and the other not? Also, I'll agree that VSTS isn't necessarily hot (more like complex, expensive, etc.), but what are you comparing it to?

    Hot: XP, TDD, Scrum
    Not: MSF Agile, MSF for CMMI

    No argument here and right on the money. I wish MSF Agile was never created.

    Hot: OR/M, NHibernate, LLBLGen, etc.
    Not: DLinq, Data Access Block, Plain ADO.NET

    NHibernate for sure, but LLBLGen generates code that uses ADO.NET under the covers. I guess the point is that it's not hot to write ADO.NET code directly, but it's fine to have a code generator do it for you? Personally that's fine by me, because anyone who writes their own full DAL is just wasting brain cells.

    Hot: Open Source  (Mono, SourceForge)
    Not: Application Blocks, CodePlex

    This confuses me. Open Source is one thing, but it's being compared to... Open Source. The MS Application Blocks are all open source, and every project on CodePlex is as well. If you're comparing SourceForge to CodePlex from an open source perspective, neither really qualifies. You can't get the CodePlex platform's code at all (there's an open source version someone wrote, but it's far from complete), and SourceForge hasn't released their code for years, with the old code (Alexandria) barely able to install and configure. Better to go with GForge if you're looking to run your own SourceForge-style site, as its source code is provided.

    Hot: CVS, SVN
    Not: VSS, VSTS Source Control

    Agreed. VSS is the devil's spawn. Although with all the crashing you can get, I would suggest SVN is hot and CVS isn't. CVS is better than VSS stability-wise, but it's still not all that hot.

    Hot: Subtext, DasBlog, WordPress, etc.
    Not: Microsoft MSN Spaces, Community Server

    Roy is comparing blog software, for those that haven't clued in. I do agree that Subtext, DasBlog, and WordPress are much more powerful and better blog engines. CS seems to be bloatware now (and I have a bad feeling about it after the last weblogs update). Microsoft does have SharePoint for blogs, and it's getting better, but it's maybe still not ready for primetime against something like DasBlog. The thing about blog software, though, is that there isn't going to be a giant shift. I mean, let's say the next guy (Google, MS, whoever) comes out with the be-all and end-all blog engine. Do you think thousands of DasBlog or WordPress users are going to migrate en masse? My blog is on CS, so is Roy's. So are we not hot because of this setup, and Hanselman is?

    Okay, I'll stop there as there are some other weird deviations Roy makes. I'm totally all in when it comes to simplicity in design, but he compares it to the entire P&P (which started this entire thread over the last couple of weeks). I'll bite that CAB is complex; we've done that discussion to death. However, where is the more simplistic version of CAB that gives us everything we need? Where's the HOT version of CAB? RYO WinForms? I don't think so. And I haven't worked at Google, but being at Microsoft is pretty fun, though I guess that depends on what team you're on.

    There's also a point he makes about Google Gears being hot and Smart Clients not. I haven't had an opportunity to really get into Gears. It sounds great, but things always come in great packages. Is this really the future of apps? I mean, with Silverlight we have super-rich clients written in .NET managed code, all doing whatever they need to over the wire. Are we going back to writing crappy web apps (maybe with MonoRail to reduce the crappiness), just plugging Gears in, and voila, offline capabilities? Is a Silverlight/Gears combination the golden ticket here, with Smart Clients going the way of the big fat clients from VB6 days long past?

    Like I said, I do agree with some of his comparisons, but let's compare apples to apples here. Here's my modified list where it's just one product/technology/concept against the other. I've also omitted the things Roy and I agree on that are already in his list:

    Hot                   Not
    NHibernate            Entity Framework
    Windsor Container     ObjectBuilder
    Aspect#               Policy Injection Application Block
    CruiseControl.NET     Visual Studio Team Build
    SharpDevelop          Visual Studio
    MonoRail              Web Forms
    NUnit/MBUnit          MSTest
    Scrum                 MSF Agile
    NAnt                  MSBuild
    log4net               Logging Application Block
    Silverlight           Flash
  • Extending the Notification Pattern Example

    Recently I've been looking at implementing the Notification pattern as described by Martin Fowler here. The UI calls methods on the domain to validate the data, which is held in a Data Transfer Object (DTO). The DTO contains both the data needed by the domain (for reconstituting domain objects) and a class for holding errors returned by the validation method. It's a nice way to keep error management out of your domain layer but still have the errors available to your presentation layer.

    One thing I've always found with Fowler's posts is that the code is very thin (and it should be, since it's just an example) and sometimes not complete. So for you guys I decided to put together a working example based on his sample for checking a policy claim number. A couple of things I've done to go beyond the example on Martin's page:

    • Working solution file that compiles and runs (C# 2.0 although with slight modifications [removal of generics] it could work under 1.1)
    • Implementation of the Model View Presenter pattern. Martin uses the Autonomous View approach in his sample because he's really focused on Notification, but I thought it would be nice to show it implemented with MVP. Autonomous View is a pattern that puts all the presentation state and behavior for a window in a single class, which doesn't support the separation of concerns I prefer, so the MVP pattern is here for completeness.
    • Added Rhino mock tests to show how to test the presenter with a mocked out view. I thought this was important as the example is all about UI validation and this would be a good example to mock out a view using Rhino.

    The Tests

    It starts with the tests (it always starts with the tests). As I was re-implementing an example, my tests were slanted a little towards how the sample worked. However, I was focused on 3 main validations for the UI:

    • Policy Number is present
    • Claim Type is present
    • Incident Date is present and valid (cannot be set in the future)

    With those tests in mind, here's the check for the missing policy number (with mock setup and teardown):

        [SetUp]
        public void SetUp()
        {
            _mockery = new MockRepository();
            _view = _mockery.CreateMock<IRegisterClaimView>();
        }

        [TearDown]
        public void TearDown()
        {
            _mockery.VerifyAll();
        }

        [Test]
        public void MissingPolicyNumber()
        {
            Expect.Call(_view.PolicyNumber).Return(INVALID_POLICY_NUMBER);
            Expect.Call(_view.IncidentDate).Return(VALID_INCIDENT_DATE);
            Expect.Call(_view.ClaimType).Return(VALID_CLAIM_TYPE);
            _view.ResponseMessage = "Not registered, see errors";
            _view.SetError("txtPolicyNumber", "Policy number is missing");
            _mockery.ReplayAll();

            RegisterClaimPresenter presenter = new RegisterClaimPresenter(_view);
            presenter.RegisterClaim();
        }

    The constants are defined to make the tests easier to read:

        private const string INVALID_POLICY_NUMBER = "";
        private const string VALID_POLICY_NUMBER = "1";
        private const string INVALID_CLAIM_TYPE = "";
        private const string VALID_CLAIM_TYPE = "1";
        private static readonly DateTime INVALID_INCIDENT_DATE = DateTime.MinValue;
        private static readonly DateTime VALID_INCIDENT_DATE = DateTime.Now.AddDays(1);

    The view is mocked out (which is what we're testing), so we expect calls to the 3 properties of the view (that match up to the UI). There's also a ResponseMessage property which displays whether or not there were errors. The SetError method needs a bit of explaining.

    In Martin's example he uses Autonomous View, which is great; since all the controls are there for the picking, it's easy to wire up which control is causing which error. When I implemented the MVP pattern I had a bit of a problem. I wasn't about to pollute my presenter with controls from the UI (otherwise it would be easy), so how could I get the view to wire up the right error message to the right control? The only way I could do it (in the implementation of the view) was to pass in the control name as a string. Then in my view implementation I did this:

        public void SetError(string controlName, string errorMessage)
        {
            Control control = Controls[controlName];
            showError(control, errorMessage);
        }

    Then showError just handles setting the error via the built-in .NET error provider:

        void showError(Control arg, string message)
        {
            _errorProvider.SetError(arg, message);
        }

    Once I had the missing policy test working it was time to move on to the other requirements. MissingIncidentType and MissingIncidentDate are both the same (except there's no such thing as a null DateTime, so I cheated a bit and returned DateTime.MinValue). The other check against the Incident Date is to ensure it's not set before the policy date. Since we don't have a policy screen, I just stubbed it out in a stub Policy class and set it to the current date. So an invalid date would be something set in the past:

        [Test]
        public void CheckDateBeforePolicyStart()
        {
            Expect.Call(_view.PolicyNumber).Return(VALID_POLICY_NUMBER);
            Expect.Call(_view.ClaimType).Return(VALID_CLAIM_TYPE);
            Expect.Call(_view.IncidentDate).Return(VALID_INCIDENT_DATE.AddDays(-1));
            _view.ResponseMessage = "Not registered, see errors";
            _view.SetError("pkIncidentDate", "Incident Date is before we started doing this business");
            _mockery.ReplayAll();

            RegisterClaimPresenter presenter = new RegisterClaimPresenter(_view);
            presenter.RegisterClaim();
        }

    The presenter is pretty basic. In addition to registering the view and talking to it, it has one main method, RegisterClaim, called by the view when the user clicks the submit button. Here it is:

        public void RegisterClaim()
        {
            saveToClaim();
            _service.RegisterClaim(_claim);
            if (_claim.Notification.HasErrors)
            {
                _view.ResponseMessage = "Not registered, see errors";
                indicateErrors();
            }
            else
            {
                _view.ResponseMessage = "Registration Succeeded";
            }
        }

    Basically it calls saveToClaim (below), then calls a service layer method to register the claim. Information is stored in a Data Transfer Object which contains both the data from the view and any errors. The claim DTO has a Notification object which holds errors (or not), and the presenter will tell the view if there are any problems, letting the view set the display accordingly.

    First here's the saveToClaim method in the presenter that will create a RegisterClaimDTO and populate it with information from the view:

        private void saveToClaim()
        {
            _claim = new RegisterClaimDTO();
            _claim.PolicyId = _view.PolicyNumber;
            _claim.IncidentDate = _view.IncidentDate;
            _claim.Type = _view.ClaimType;
        }

    The RegisterClaim method on the ClaimService object just runs its own command (which does the actual registration of the claim and checks for any errors). The core part of the validation is in the Validate method on the RegisterClaim object:

    private void Validate()
    {
        failIfNullOrBlank(((RegisterClaimDTO)Data).PolicyId, RegisterClaimDTO.MISSING_POLICY_NUMBER);
        failIfNullOrBlank(((RegisterClaimDTO)Data).Type, RegisterClaimDTO.MISSING_INCIDENT_TYPE);
        fail(((RegisterClaimDTO)Data).IncidentDate == RegisterClaimDTO.BLANK_DATE, RegisterClaimDTO.MISSING_INCIDENT_DATE);
        if (isNullOrBlank(((RegisterClaimDTO)Data).PolicyId))
            return;
        Policy policy = FindPolicy(((RegisterClaimDTO)Data).PolicyId);
        if (policy == null)
        {
            Notification.Errors.Add(RegisterClaimDTO.UNKNOWN_POLICY_NUMBER);
        }
        else
        {
            fail((((RegisterClaimDTO)Data).IncidentDate.CompareTo(policy.InceptionDate) < 0),
                RegisterClaimDTO.DATE_BEFORE_POLICY_START);
        }
    }

    Here it checks the various business rules and uses the Notification object to keep track of errors. The Notification object is embedded in the Data Transfer Object, which is passed into the service when it's created, so our service layer has access to the DTO to register errors as it does its validation.
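
    The Notification and Error classes themselves aren't shown in the post, but from the calls made above (HasErrors, Errors.Add, IncludesError, error.ControlName, error.ToString) they could be sketched roughly like this. Treat the details as assumptions rather than the actual source:

    using System.Collections.Generic;

    // Rough sketch of the Error/Notification pair, inferred from usage.
    public class Error
    {
        private readonly string _message;
        public readonly string ControlName;

        public Error(string message, string controlName)
        {
            _message = message;
            ControlName = controlName;
        }

        // The presenter displays the error via ToString().
        public override string ToString()
        {
            return _message;
        }
    }

    public class Notification
    {
        // Errors added by the service layer as validation runs.
        public readonly List<Error> Errors = new List<Error>();

        public bool HasErrors
        {
            get { return Errors.Count > 0; }
        }

        // Works by reference equality here, which is enough because the
        // sample uses shared static Error constants on the DTO.
        public bool IncludesError(Error error)
        {
            return Errors.Contains(error);
        }
    }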

    Finally, coming back from the service layer, the presenter checks whether the DTO's Notification object HasErrors. If it does, it sets the response message (mapped to a textbox in the UI) and calls a method named indicateErrors, which runs each possible error through a method that checks whether it occurred:

    private void indicateErrors()
    {
        checkError(RegisterClaimDTO.MISSING_POLICY_NUMBER);
        checkError(RegisterClaimDTO.MISSING_INCIDENT_TYPE);
        checkError(RegisterClaimDTO.DATE_BEFORE_POLICY_START);
        checkError(RegisterClaimDTO.MISSING_INCIDENT_DATE);
    }

    checkError takes in an Error object, which contains both the error message and the control it belongs to. If the Notification list contains the error it's checking, it calls that ugly SetError method on the view, which updates the UI with the appropriate error message attached to the correct control:

    private void checkError(Error error)
    {
        if (_claim.Notification.IncludesError(error))
        {
            _view.SetError(error.ControlName, error.ToString());
        }
    }

    And that's about it. It's fairly simple, but the sample has been broken down a bit further to hopefully help you understand the pattern better.

    Here's the class diagram for all the classes in the system:


    And here's how the classes break down in the solution. Each folder would be a layer in your application (or split across multiple assemblies):

    Application Layer

    Domain Layer

    Presentation Layer

    Service Layer

    So other than the ugly SetError method, which takes in a string for a control name, I think this isn't bad. Maybe someone out there has a suggestion for how to get rid of the control name string, but like I said, I didn't want to introduce UI controls into my presenter, so I'm not sure (other than maybe an enumeration) how to hook up the right error message with the right control. Feel free to offer a suggestion for improvement here.
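
    For what the enumeration idea might look like: the Error could carry an enum value naming the field, and only the view would know which concrete control displays each field. This keeps UI controls out of the presenter and trades the magic string for something the compiler checks. The names below are mine, not the sample's; it's a sketch of the idea rather than a drop-in change:

    // One way to replace the control-name string with an enumeration.
    // The enum names the fields the presenter knows about; only the
    // view knows which control displays each field.
    public enum ClaimField
    {
        PolicyNumber,
        IncidentDate,
        ClaimType
    }

    public interface IRegisterClaimView
    {
        // The presenter now reports errors against a field, not a control.
        void SetError(ClaimField field, string message);
    }

    // A WinForms view implementation would then map fields to controls
    // privately, something like:
    //
    // public void SetError(ClaimField field, string message)
    // {
    //     switch (field)
    //     {
    //         case ClaimField.PolicyNumber:
    //             errorProvider.SetError(policyNumberTextBox, message);
    //             break;
    //         case ClaimField.IncidentDate:
    //             errorProvider.SetError(incidentDatePicker, message);
    //             break;
    //         case ClaimField.ClaimType:
    //             errorProvider.SetError(claimTypeComboBox, message);
    //             break;
    //     }
    // }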

    You can download the entire solution with source code here. Enjoy!

  • I keep Hugh in my back pocket

    I really do. Sort of.

    I had to get a new set of business cards. All those fish bowls and cards tossed away have really depleted my supply. Rather than going with traditional cards, I've been using ones with designs by Hugh MacLeod on them. I just ordered a new batch tonight, landing solidly on my favorite cartoon from Hugh to date (besides the Blue Monster). Here are my new cards:




    A little hard to read, but it's just the basic info: the tagline from my blog, my blog address, phone number, and email. The usual suspects. I'm really happy with the cards and they make for an interesting conversation piece at conferences, user groups, and general geek fests. I highly recommend them. You can check out all 72 gapingvoid designs here on Streetcards (with new ones always being added when Hugh comes up with a brilliant idea, which is quite often).


  • Introducing the New Vista iPod

    Finally, after almost 9 months of pre-production she's finally arrived. Here she is:

    Vista Avalon Simser, ready for duty!

    Vista Avalon Simser was born May 18th at 18:10 MST, weighing in at 5 pounds and 11 ounces (well, 10.6 ounces but the hospital seems to round it up).

    Okay, first off, I know that most of you are reading this on the bus, at home, or at work, and you're laughing. Some of you are shocked and probably scratching your heads wondering why a nerd would put his child through the slings and arrows of naming his spawn after an operating system. Hopefully by the time she's old enough for someone to make fun of her name, nobody will remember where it came from.

    Her name came out of a discussion about what it should be, as these things always do. At first we didn't know what sex the baby was going to be, so we started with a boy's name. We rummaged around the Internet, baby name books, and our brains to finally arrive at Dev. Yeah, geek origins, but it had meaning to us. Dev (as in short for Developer) sounded like a good boy's name; it had its origins in Sanskrit, it was unique and interesting, and we liked the sound of it. Then came the process of finding a good middle name, and again after some time, we liked Orion (as in the constellation).

    We stared at the piece of paper with his name written out:



    There it was, plain as day. Our son's initials would be DOS. We laughed and laughed, and then came the afterthought: if our son's initials were DOS, a daughter would have to be an upgrade. And thus the name Vista was born.

    Vista (the operating system) hadn't been released yet, but we looked at it on paper. Vista. I liked the sound of it. True, it was spawned from the name of Microsoft's next operating system, but it was also a word seeded in the Italian language (from visto), meaning a sight. Well, a daughter who is a sight. That works for us. Besides, above all (other than the glares we'll get from geeks and this blog entry), Vista is a pretty name for a girl. As for the middle name, it was not driven by the fact that Avalon was the codename for Windows Presentation Foundation. Again we turned to unique names and needed something that fit, something that sounded right to match Vista. Avalon: the paradise Arthur was carried to after his death; the peninsula in Newfoundland, Canada; the Druidic site in Glastonbury, England. This just became the name we wanted for our daughter, and it stuck.

    There are two reactions we get to her name. The first, probably from everyone reading this blog: "You named your daughter after an operating system?" The other: "Oh, that's such a pretty name." We can separate the nerds from the norms by the reaction.

    Of course there are some advantages to being named after one of the most expensive operating systems in history (notice that I didn't say popular, good, or fast; let's not get into that holy war):

    • Her blog will contain the largest number of search hits with people looking for information about Vista
    • She has her very own carrying case (a laptop bag) and other personalized "logoware", most of which I can buy from the Microsoft store or any geek conference for the next 10 years
    • She'll be the only one at her school with a service pack (or two, or three, ...) named after her
    • If she's cute when she's older (and she will be) boys will make many crazy jokes about "starting her up" and "rebooting her" to which I will pummel them upside the head with an XPS laptop that I'll carry around to "interview" any potential suitors.

    Bottom line, we think it's a pretty name and it's hers for life. We like it, and she's our daughter not yours so deal.

    Not every entry into this world is perfect, and there were complications. Needless to say, we were disappointed with the process (but not the result, not in the least). We had gone through obtaining a midwife. We're true believers in the natural way and were convinced that having a midwife and a home birth was what we wanted. No drug-induced delivery. No machines that go ping. No drip, drip, drip of some bag attached to the baby that's hooked up to a monitor. It was going to be natural, fun, and without stress.

    The best laid plans.

    As a result of a lot of factors (incompatible blood types between momma and poppa, go figure) it was difficult and we ended up doing everything we didn't want to happen. It was a hospital birth, we had machines that went ping, drugs were used to induce, an emergency C-Section was needed, etc. Like I said, the best laid plans.

    Vista spent a week in the hospital, mostly for jaundice. However she seemed to enjoy her personal suntan studio as you can see below.

    Everyone is home now. Mommy and baby are doing great. Vista's two weeks old now and thriving. This was a life-changing experience for me. It will be long remembered, not only for the birth of my daughter but for the changes we went through and the journey ahead. I hope you've enjoyed sharing the experience. This is something I'll someday show her, so feel free to leave comments for her to read when she's old enough. So welcome, Vista, to the world; she'll be a big part of it.

    Here's Vista's entire Flickr set which of course grows every day with new images.

  • Do as I say, not as I do

    More fallout from the TestDriven.NET vs. Microsoft department. I read your comments on my blog (no, I didn't moderate any of them) and read through Dan Fernandez's well written and concise response here. Dan is the lead product manager for Visual Studio Express, so other than the legal guys, this is coming from the horse's mouth. Phil Haack has a great piece here with his take on it (which has an interesting spin, as he declares MS violated the TestDriven.NET agreement by reverse engineering it to determine how TD.NET works with Express; touché).

    After reading through everything out there (including all the comments on Jamie's Slashdotted blog), I do understand both sides of the story. I agree that MS is within their rights to put what they want in their EULA, and they're right that users of TestDriven.NET in Express products are violating it. I don't agree that MS is playing by their own rules.

    Specifically, I have a real problem with Microsoft saying TestDriven.NET violates a EULA when they themselves do exactly the same thing with Popfly Explorer, XNA Game Studio Express, and the Reporting add-in. It's no different than saying cops are allowed to break the law because they're cops. No. You write the rules, and you live by the rules (lead by example). Just because it's your product doesn't give you the right to violate your own agreement.

    There's also confusion in this issue because it's an EULA that TestDriven.NET allegedly violates. Let's look at that: End User License Agreement. IANAL, but in many cases these "agreements" have never held up in court. They're simply that, an agreement. You either agree or disagree, but in the end there's no legal ramification either way against you. Remember the Dell incident where the EULA for Windows was shrink-wrapped, yet you had to agree to it: if you opened the package to read the EULA, you were agreeing to it even if you didn't agree after reading it.

    I do agree with the Blue Monster in that it's their right to put whatever they want in their EULA. It's theirs and they craft it. The thing here is that it's an End User agreement. So who's at fault: Jamie for building a tool, or everyone who's using it? I believe it's the latter, and most Americans will agree (yeah, I'm going to get flak for this generalization), in the same vein that it's not the gun manufacturers who are at fault, it's the people using them. So everyone who's installed TestDriven.NET on an Express SKU and run it is in violation. Where are the cease and desist orders for all of you? Jamie certainly isn't in the wrong to create the software he did (and MS recognizes this), but users (including himself perhaps, assuming he tested it) are violating their agreement with Microsoft by using it.

    The general consensus I'm seeing from the community (via comments and blogs) is that MS should patch the Express SKUs to not allow loading add-ins at all. Of course, there's still the issue of their own add-ons, but I'm sure they could get around that somehow. There's also still the question of what specifically Jamie is violating (or rather, what clause). Many people are asking that question, but I guess it's a legal-speak problem, as I can't find anything specific enough on Dan Fernandez's blog:

    "Jamie has also made available a version of his product that extends the Visual Studio Express Editions which is a direct violation of both the EULA and “ethos” of the Express product line."

    If it's a "direct violation", what's the specific clause it's violating? Again, I read it this afternoon and I can't see it. As for violating the "ethos" of the Express product line, ethos (meaning a distinguishing character, according to Merriam-Webster) seems very subjective to me depending on who's looking at it. There's a part of the EULA that states you may not "work around any technical limitations in the software". Again, subjective, as I'm not sure that adding new functionality that didn't exist counts as working around a technical limitation. Express does not have a technical limitation on running unit tests; it was just never designed with that in. Much like it can't edit images directly: do I violate the EULA if I build something that lets me manipulate image files in Paint.NET instead of Visual Studio Express? Another comment is about reverse engineering the product (VS, not TD), but I know how Jamie wrote his add-in and he never reverse engineered anything. It uses a documented and public API that's been there for years.

    I do like and agree with Frans' comment on Phil's blog entry:

    "MS should have disabled add-in support in the toolkit. OF course they were tool lazy to do so or technically unable to do so, so they thought they could hide behind a silly phrase in an EULA which isn't even applicable here (as the EULA has no right on what Jamie distributes to OTHERS). If Jamie compiles his code on teh command line the whole EULA argument is moot, just to illustrate the point."

    So a few options could be pursued at this point:

    • Jamie removes the Express support for TD.NET. Maybe end of story? He did it before, and only since it was re-enabled has this bear reared its ugly head.
    • MS issues a patch to the Express line to not load add-ins. Problem is their own add-ons won't load (unless they themselves circumvent that)
    • MS finds out everyone who's running TD.NET and issues a cease and desist letter to them because they're violating the EULA. Won't happen, and again, they would have to tell their own users of Popfly Explorer and other tools the same thing.
    • MS strong-arms Jamie to remove the product, support, or both. Jamie collapses under legal costs and gives up. Might happen as Microsoft has more than enough resources to just simply throw at this problem to make it go away.

    What a silly mess. Anyways, I'm done with this thread. Jamie has been Slashdotted, and life will find a way.

  • The pot calling the kettle black

    Who's the pot? Microsoft. Enough with the craziness over Jamie Cansdale's excellent (read: must install now) add-on for Visual Studio, TestDriven.NET. I'm a huge fan of the tool (I bought a copy to support Jamie and his excellent efforts, and I recommend any good developer do the same) and have supported Jamie in his efforts, especially after they booted him from the MVP program over some questionable tactics and reasoning. I followed him via emails and his blog posts discussing the matter and the silliness of it all. Now it's come to a head.

    The last few days it's been legal mayhem on his blog, posting the various letters and emails he's been getting and sending to MS lawyer-types. What really peeves me the most is the clause in the EULA that they are griping over:

    "You may use the software only as expressly permitted in this agreement. In doing so you must comply with any technical limitations in the software that only allow you to use it in certain ways... You may not work around any technical limitations in the software."

    What a load of crap. I'm sorry, but let me rattle off two very big tools from Microsoft that violate this EULA: Popfly Explorer and XNA Game Studio. Both are "add-ons" that *only* work with the Visual Studio Express SKU.

    Since when does building a tool that simply automates a unit test runner constitute working around a technical limitation? Is the technical limitation that VS Express doesn't have support for unit test frameworks? If that's true, then any macro that shells out and runs nunit-console.exe could be considered in violation. If they're willing to stretch TestDriven.NET to fall into this category, then I call foul on Popfly Explorer and XNA Game Studio. They "manipulate" how Visual Studio Express works, and there's obviously a technical limitation in that Visual Studio Express, OOTB, does not support the XNA content pipeline or understand Popfly, so again, someone is in violation here.

    Unfortunately for Jamie, he's between a rock and a hard place. EULAs are just that: agreements. IANAL, but from what I know of past issues concerning EULAs, they're not legally binding. However, with the Microsoft Man behind this, nobody is going to be able to stand up (legally) against them.

    So is Microsoft going to sue themselves? Might as well, since the lawyers are already doing a damn fine job at making an ass out of themselves.

    My advice for Jamie: 1) pull the support for the Express SKU (again) if that will appease the Blue Monster, and 2) contact the EFF. They have a good track record in these types of things and might be able to support you. I know I will, so just yell if you need me.