• I think you would get your message across better if you turned this into a blog post that focused on the real issues. The video is too long, with too much fluff. The discussion of the AAA comments is silly - yes, I know it's late and you're cranky - but why dump that on the rest of the world? And then complaining about the test naming because it follows the JP (as you called it) style instead of your own personal style seemed petty, especially since you have little company in that opinion.
    I didn't think the content had any value until you got to the ChangePasswordGetReturnsView test, and that was about 10 minutes in. By that point, my spare time was used up, so I turned it off. You had legitimate complaints that someone could learn from - but they're buried. A quick blog post that takes a few of the bad tests and discusses the issues would be much more valuable.

  • This is great... please continue. Unit tests are a hot topic, but most people don't have a formal process. I'll buy your book!

  • Thanks, nice review. We want more. Well, I do anyway.

  • Great video, especially for a test newbie like me. I think there need to be more videos/podcasts/whatever that lead people like me - people who have started with TDD and grasp the basics but have a hard time implementing it correctly.

  • Awesome practical walk through of your philosophy around unit testing. It helped me realize I've got areas I can work to improve.


  • This was fun to watch and educational. I think you've stumbled upon a great form of geek entertainment. Next time, give yourself 100 paper cuts before reviewing the tests.

  • Awesome video! It was really good and I hope you will create more of these!

  • I have to agree with Joshua: you would have been better off going to bed and sleeping before making the video.

    Which is a shame, because there are things you show where people can learn a better way. Writing tests is hard, and mistakes are common - I have made them in the past and probably will in the future. But please remember that pointing out an error is one thing; shooting someone for an error, even with words, is another.

    As for the naming, I actually quite like JP's naming conventions. If you were to look at his tests, the class provides the context, which means that you don't get repeating contexts in the names of the tests - each test name is read with the class name as the start of a sentence. It's not incorrect, but it is different from your style of naming. Both are valid; it is a matter of preference, like which test framework to use.

  • Enjoyed the video and learned quite a bit.

  • I am not sure I agree with Joshua's comments; sometimes a video is a more visual way of describing things as you go through them than a blog. And I appreciate Roy creating one - the effort counts, even if he is cranky :) Yes, Dimecasts might have 10-minute casts, but for a code review you don't want 10 minutes; you might as well just use ReSharper's code cleanup if you want something quick.

    Also, one has to remember that just having a blog or being loud does not make someone an authority. Roy had some very valid points about the naming of the tests. Whichever form you use (JP's or Roy's), consistency is the main thing - as you can see, some test names have an _ (underscore) and others don't.

    One should really think about the quality of the test code one writes; it's not the case that if the test runs, everything is good. If you care about your craft, you should care about the unit tests you write. I have seen a test named ThisShouldBeTrue, and you just wonder about the naming: what is true, why is it true, etc.

    Just my 2 cents :)

  • ...and that sucks.

    Lol! I thought it was a good video, some good tips on how to write tests.

    From the comments I thought I was going to see you screaming or something. I don't think you were that harsh. If anything, I'd say stop apologizing in the video. I'd bet that Scott and team have thick enough skin to view it as well. I think it's their way of a peer review. Why on earth they don't ask (or pay) you (or other experts) to do a review beforehand is beyond me, as their stuff is kind of the "authority", and you calling them out can only be good, not only for them but for the community as well.

  • Great review, Roy; a shame it was only a half hour long, but I guess you do have to sleep eventually. Looking forward to you doing a lot more of these!
    I disagree with Joshua. You're entitled to say what you feel, and for however long you feel. And it kept me interested to the end.

  • Good stuff. Do get some sleep beforehand.

    And I'd also agree (not with you, but with the test writer) on the whole AAA comments thing: they provide structure, just like the naming of the test.
    Also - regarding the double assertions - I agree that a test should test only one thing, but what if that one thing has several valid and invalid properties? Like, if a string must be over 4 chars long and also begin with an uppercase letter. Would you not test both assertions in one test?

  • Thanks for the review. I learned several things.

    I am curious what software you were using to do the focus and zooming.

  • LOL, Harsh but educational. I think this is an excellent format for presenting this much detail. Keep it up!

  • LOL, harsh but educational. I think video is a great format for diving into this much detail.

    Clearly we write tests in very different ways, some of your suggestions I like. One I'm having a lot of difficulty with:

    "Use only one Assert per method."

    I just can't see myself ever doing that; my test code tends to walk through several uses, asserting anything and everything before, during, and after... Is this really so bad?
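    For what it's worth, the usual compromise is one logical behavior per test, rather than one long test that asserts before, during, and after. A minimal NUnit-style sketch (the `Account` class, its members, and `InsufficientFundsException` are hypothetical names, not from the video):

    ```csharp
    using NUnit.Framework;

    // Hypothetical sketch: each test exercises exactly one behavior,
    // so a failure points at one scenario instead of "somewhere in the walk-through".
    [TestFixture]
    public class AccountTests
    {
        [Test]
        public void Withdraw_WithSufficientFunds_ReducesBalance()
        {
            var account = new Account(initialBalance: 100);

            account.Withdraw(30);

            Assert.AreEqual(70, account.Balance); // single logical assert
        }

        [Test]
        public void Withdraw_WithInsufficientFunds_Throws()
        {
            var account = new Account(initialBalance: 10);

            Assert.Throws<InsufficientFundsException>(() => account.Withdraw(30));
        }
    }
    ```

    The long walk-through style still "works", but when the third of ten asserts fails, the remaining seven never run and you lose that information.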

  • I like that you criticize obscure magic strings. But you lost me when you criticized the use of values that change.

    Using changing values is a way to inexpensively increase test data coverage. I would much rather have a test that sometimes fails and sometimes passes, than to have a test that is highly likely to not discover the same problems it didn't discover the first time I ran it.

    Automated testing already suffers from the poverty that comes from using narrow automated oracles. Let's not compound that by reducing data coverage based on the principle of "don't discover anything new or unexpected."

    If you are worried about intermittency, then use logging to preserve the exact values used in each test run.
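    One common way to get both data coverage and reproducibility is to log the random seed rather than every value. A sketch of the idea in NUnit-style C# (the string-reversal "system under test" is a stand-in, not real project code):

    ```csharp
    using System;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class RandomizedTests
    {
        [Test]
        public void Reverse_Twice_ReturnsOriginal()
        {
            // Log the seed so a failing run can be replayed exactly:
            // paste the logged seed in place of TickCount to reproduce.
            int seed = Environment.TickCount;
            Console.WriteLine("Random seed: " + seed);

            var rng = new Random(seed);
            var input = new string(Enumerable.Range(0, rng.Next(1, 20))
                .Select(_ => (char)rng.Next('a', 'z' + 1))
                .ToArray());

            // Stand-in behavior under test: reversing twice is the identity.
            var result = new string(input.Reverse().Reverse().ToArray());

            Assert.AreEqual(input, result);
        }
    }
    ```

    This keeps the "discover something new each run" property while giving the regression-net camp a way to re-run any failure.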

  • James -
    Regarding tests which sometimes fail - I believe you're wrong. I seem to remember that most of the tests I wrote (or saw) which sometimes failed ended up treated as "ignore if possible" (if the test doesn't break the build, ignore it; if it does, remove it).
    I have a big issue with the claim that a "sometimes" test can help you find bugs. That's BS - classic unit tests are meant to describe one test scenario and provide you with the safety (regression) net of having it run all the time. You should never expect them to do more, unless you have the specific intent of generating "random" inputs ("fuzz testing").
    The problem with "sometimes" tests is their irreproducibility. You NEED to be able to re-run a failed test and have it fail again in order to fix the bug, and that holds true for fuzz tests too - you need to keep a failed input as a regression input for later tests.

  • I loved the video. Don't listen to the "blog about it" comments. There are tons of blogs, but how many video code reviews are there?

    Makes me want to get the book.

    I say, just keep picking out things from the top projects on CodePlex and go for it. Awesome.

  • Nicely done. I think @Chris nailed it, pure geek entertainment for me...

  • Good idea; poor execution.

    There were some laughs "Check something for god's sake", and some good ideas, but overall I think your message got lost in a sea of sleep-deprived irritation.

    A project like NerdDinner that aims to teach others should be held to the highest standards of quality. Likewise those who comment on it ought to show the highest levels of professionalism, respect and courtesy if they wish their contribution (of critique / of advice) to have a positive impact.

    I do think that test reviews are worth doing. Certainly they can be entertaining! But I think you should spend a little more time upfront preparing and be careful not to let your emotions and initial reactions get in the way of the lesson.

    Perhaps you could put together a series of timeboxed video reviews. Make them no more than 5 minutes each. Spend 30 minutes preparing beforehand so you can really focus on just a few concerns here and there. Be sure to highlight good or distinctive practices just as you recommend improvements.

    One thing that would be really cool in a review would be to actually try to run a couple of tests on supposedly well-covered code. Then make a few reasonable and unreasonable changes to the implementation and see what kinds of things they catch. You might explore simple refactorings, fault injection and feature addition. What happens to the tests? Do they still compile and run properly? Any false positives? How about false negatives? Permissive tests are often bad. Brittle tests are usually worse. It's a tough balance.

    BTW, you might be interested to know that MbUnit has two mechanisms for tolerating multiple asserts in a test.

    #1. Put [MultipleAsserts] on the test.

    #2. Use Assert.Multiple(() => { .... }); in the test body.

    In each case, the test will report assertion failures, keep running until they finish or throw some other kind of exception, then fail.

    This is good for AAA-style tests where the Action part yields a composite result that requires multiple Asserts to verify completely. Of course it's important that the test still remains focused on just one scenario in a context. We don't want multiple asserts on completely unrelated behaviors!
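    A rough sketch of what mechanism #2 looks like in practice, assuming MbUnit's `Assert.Multiple(Action)` as described above (the parser and `Person` type are hypothetical examples, not MbUnit APIs):

    ```csharp
    using MbUnit.Framework;

    [TestFixture]
    public class PersonParserTests
    {
        [Test]
        public void Parse_ReturnsNameAndAge()
        {
            // Arrange + Act: one action yielding a composite result.
            var person = PersonParser.Parse("Alice,42"); // hypothetical parser

            // Assert: both checks are evaluated and reported even if the
            // first fails; the test only fails once the block completes.
            Assert.Multiple(() =>
            {
                Assert.AreEqual("Alice", person.Name);
                Assert.AreEqual(42, person.Age);
            });
        }
    }
    ```

    The point being that the multiple asserts still describe one scenario - verifying one composite result - not several unrelated behaviors stacked into one test.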

  • Excellent video for a test newbie!

    I found this site from a link on Scott Hanselman's Facebook profile - did you get any feedback from the guys who wrote the code?

  • your naming of tests really sux man, you can't be serious...

    otherwise, nice video

  • Thanks for pointing out these issues. I am starting to work in a test-driven way now and have just downloaded the NerdDinner sample application; I would have made some stupid errors that I am now aware of. Thanks.

  • Not sure if it's exactly the same as what you meant, but VSTS testing does let you run one test multiple times with different parameters (a row test, I think you called it). Check out data-driven testing: you specify a data source (which can be a CSV file), and the test is called once for each row. This is all set up with attributes on the test and is reasonably straightforward to do.

  • Hi Roy,
    love your work.

    I miss the RowTests in Visual Studio too!!

    We did RowTests using data-driven tests: a DataSource attribute with an input file (our rows), and then reading TestContext.DataRow.

    [TestMethod]
    [DataSource("System.Data.OleDb", @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=.;Extended Properties='text;FMT=TabDelimited;HDR=YES'", @"righeInvalid.txt", DataAccessMethod.Sequential)]
    public void IsInValid_LoadRows()
    {
        var row = TestContext.DataRow; // called once per row in righeInvalid.txt
        //snip snip
    }

  • Keep it up! I loved it... informal, down-to-earth, just your opinions... then it is up to each of us to decide whether to agree or not.

    I am glad you mentioned that it was late and that you were talking quietly because your family was asleep... I was actually wondering if that was the case. I have seen a few presentations from you and have gotten a feel for your humor and how you think and speak, so I found this video entertaining and educational. I can understand why some here got irritated and annoyed... but that is their choice. I choose to see it in a different light, and I am very glad someone even bothers to make this type of video so that I don't have to read yet another blog post... there are enough blogs around anyway.

  • Nice review. Your naming conventions are great!!!
    I think I will buy your book.
