Archives

Archives / 2004 / November
  • Software Design Engineer/Test job openings on ASP.NET, IIS and Visual Web Developer

    A few weeks ago I wrote two separate blog posts about how we test products on my team, and the key role SDE/Ts play in the product cycle.  We now have a few new SDE/T positions open if you, or someone you know, are interested in submitting a resume.  The details on the positions can be found below.  Please send email to webjobs@microsoft.com if you are interested.

    Software Design Engineer / Test – Web Platform & Tools (ASP.NET, IIS, & Visual Web Developer - Visual Studio .NET)

    Are you passionate about building great software? Are you interested in helping to build and drive Microsoft’s web application server platform?

    The Web Platform & Tools Team is looking for highly motivated, exceptional testers who love to create applications and tools to ensure the highest quality in products. Primary responsibilities include developing, implementing, and executing automated test suites across multiple platforms using the .NET Framework. Additional responsibilities include providing feedback on product design, identifying user scenarios, establishing quality criteria, tracking code coverage, and working with other test and development teams. Qualifications should include a minimum of 3 years of software testing experience, including developing test applications and architecting and using automation tools/frameworks, plus knowledge of and experience with C/C++/C#, Java, and/or Visual Basic. Either a BA/BS degree in Computer Science/Engineering or 3 years of demonstrated programming experience is preferred.

    Lead Software Design Engineer/Test – Web Platform & Tools (ASP.NET, IIS, & Visual Web Developer - Visual Studio .NET)

    Are you passionate about building great software? Are you interested in helping to build and drive Microsoft’s web application server platform and web development tool? Do you enjoy leading and developing the careers of a team of technical testers?

    The Web Platform & Tools Team is looking for highly motivated, exceptional test leads skilled in managing a team responsible for creating applications and tools to ensure the highest quality in products. Primary responsibilities include managing a team of testers responsible for developing, implementing, and executing automated test suites across multiple platforms using the .NET Framework. Additional responsibilities include peer development, resource management, schedule coordination, providing feedback on product design, establishing quality criteria, and working with other test and development teams. Qualifications should include a minimum of 5 years of software testing experience, with at least 2 years of management experience.  Required skills include developing test applications, architecting and using automation tools/frameworks, and knowledge of and experience with C/C++/C#, Java, and/or Visual Basic. Either a BA/BS degree in Computer Science/Engineering or 5 years of demonstrated programming experience is preferred.

  • Dynamically Varying Master Pages By Browser Type

    One of the common scenarios I heard about at ApacheCon this week was dynamically modifying/optimizing a site's layout and style content based on browser version.  There was an interesting talk by someone who runs http://www.theregister.co.uk about how they do this today using SSIs and mod_rewrite throughout the site.  One of the big goals was to avoid exposing device-specific URLs to clients, and instead use SSI includes and URL rewriting for CSS to do content negotiation on the server-side (each SSI points to a single content file -- which is then intercepted by mod_rewrite and re-pointed at a device-specific content snippet).

    As I was sitting in the audience, I thought about how easy it would be to do this instead with ASP.NET 2.0 and Master Pages.  Instead of having to use multiple SSI include files to pull in content all over a page (for example: an SSI include for the header that lays out the top-level table, then a separate SSI for the footer, and maybe a few more for other content regions), you will instead be able to just create a master page with replaceable regions.

    For example, here is a simple pseudo-sample demonstrating a master page with two replaceable regions:

    ArticleMaster.master

    <html>
         <head>
              <!-- link to some CSS stylesheet -->
         </head>
         <body>
              <!-- header content (including tables for layout) -->

              <form runat="server">

                     <asp:contentplaceholder id="ArticleTitle" runat="server"/>

                     <table>
                            <tr>
                               <td>
                                  <asp:contentplaceholder id="ArticleContent" runat="server"/>
                               </td>
                            </tr>
                     </table>

              </form>
         </body>
    </html>

    As everyone who has looked at ASP.NET 2.0 knows, you can then easily build multiple .aspx pages that are based on this master:

    Article1.aspx:

        <%@ Page MasterPageFile="~/ArticleMaster.master" %>

        <asp:content contentplaceholderid="ArticleTitle" runat="server">
              Dynamically Varying Master Pages By Browser Type
        </asp:content>

        <asp:content contentplaceholderid="ArticleContent" runat="server">
              My article content...
        </asp:content>

    ASP.NET 2.0 automatically compiles the two files the first time they are hit (you can also do this before a browser hits the site by using the precompile utility).  As such, the files are never re-parsed on subsequent requests and execute lightning fast.
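
    For reference, a command-line precompile invocation looks roughly like this (this uses the aspnet_compiler.exe syntax from later Whidbey builds, and the site name and paths are made up purely for illustration):

        aspnet_compiler -v /ArticleSite -p C:\Sites\ArticleSite C:\Sites\ArticleSite_Precompiled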

    Programmatically Varying Master Pages

    What is less well known is that master pages -- in addition to being set statically as above -- can also be switched dynamically on the fly.  As such, you could have multiple versions of a master page on your site -- including ones optimized for different devices.  For example:

        ArticleMaster.master
        ArticleMaster_IE.master
        ArticleMaster_FireFox.master
        ArticleMaster_Safari.master
        ArticleMaster_Opera.master

    A page developer could then write code within the Page's PreInit event to update the Master template used for the page depending on what browser device hits it.  For example:

        void Page_PreInit(Object sender, EventArgs e) {

            // The master page must be selected here in PreInit --
            // MasterPageFile can no longer be changed once the page
            // has initialized
            if (Request.Browser.IsBrowser("IE")) {
                this.MasterPageFile = "~/ArticleMaster_IE.master";
            }
            else if (Request.Browser.IsBrowser("Mozilla")) {
                this.MasterPageFile = "~/ArticleMaster_FireFox.master";
            }
            else {
                this.MasterPageFile = "~/ArticleMaster.master";
            }
        }
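
    If a site has lots of pages that need this, one option (just a sketch of mine -- the class name is hypothetical, and nothing in the master pages feature requires it) is to move the logic into a common base page class so it isn't repeated in every page:

        using System;
        using System.Web.UI;

        // Hypothetical base class: any page that inherits from it picks
        // up the browser-specific master selection automatically
        public class BrowserAwarePage : Page {

            protected override void OnPreInit(EventArgs e) {
                base.OnPreInit(e);

                if (Request.Browser.IsBrowser("IE")) {
                    MasterPageFile = "~/ArticleMaster_IE.master";
                }
                else if (Request.Browser.IsBrowser("Mozilla")) {
                    MasterPageFile = "~/ArticleMaster_FireFox.master";
                }
                else {
                    MasterPageFile = "~/ArticleMaster.master";
                }
            }
        }

    Individual pages would then just inherit from BrowserAwarePage (via their code-behind class or the Inherits attribute) and drop the per-page Page_PreInit handler.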

    Declaratively Varying Master Pages

    A neat trick, though, is to use the new declarative device filter syntax in ASP.NET 2.0 to switch the master declaratively without having to write code at all.  Device filters are prefixes that can be used on all control properties and look like this:

        <asp:button id="Button1"
                    ie:text="Push me you IE user"
                    mozilla:text="Push me you FireFox user"
                    text="Push me -- you are running neither IE nor FireFox..."
                    runat="server"/>

    The ASP.NET parser will automatically generate a switch statement when compiling markup like this, and insert logic to set the "text" property only once depending on the client that hits the page.  Prefix names can be defined within the ASP.NET browser capabilities system, and you can easily add your own entries as new devices come out.  Note that we goofed and didn't have a FireFox entry defined for Beta1 -- we will be adding that though for Beta2.  In the meantime you can use the built-in "Mozilla" one (which will fire when FireFox hits the site), or just add a new FireFox entry yourself.
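
    For example, here is roughly what a FireFox entry could look like using the .browser file format (treat this as a sketch -- the exact schema and file location were still evolving between Beta1 and Beta2):

        <browsers>
            <browser id="FireFox" parentID="Mozilla">
                <identification>
                    <userAgent match="Firefox" />
                </identification>
            </browser>
        </browsers>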

    As I alluded to above, in addition to setting control properties via declarative filters, you can also set <%@ Page %> directive values.  This means that you can declaratively vary the master page used by a page by doing something like this with the article page:

        <%@ Page MasterPageFile="~/ArticleMaster.master"
                           ie:MasterPageFile="~/ArticleMaster_IE.master"
                           mozilla:MasterPageFile="~/ArticleMaster_FireFox.master" %>

        <asp:content id="ArticleTitle" runat="server">
              Dynamically Varying Master Pages By Browser Type
        </asp:content>

        <asp:content id="ArticleContent" runat="server">
              My article content...
        </asp:content>

    This will render the page with an optimized IE or FireFox master template when hit by one of those browsers.  Failing that, it will fall back to a browser-neutral version.

    Designers can then edit and create the master templates just like they do regular HTML (leaving replaceable regions for the parts to fill in).  Because the template can live in one file, and does not need to be separated out into dozens of smaller SSI include snippets, this should be significantly easier to do than before.

    I think this should provide a nice clean solution...

  • Clever Impression Tracking Technique

    I mentioned earlier that I was at ApacheCon this week.  One of the best talks I saw there was from Michael Radwin of Yahoo, on "Cache-Busting for Content Publishers".  His slides can be found here.

    As part of his talk, he walked through a very cool technique that Yahoo uses across their sites for impression/usage tracking on ads and other resources.  It is designed to maximize the accuracy of impression-tracking while minimizing bandwidth costs on the host.  It does this by faking out and then leveraging both intermediate and private proxy caches.  Below is a quick description of how it is done:

    Scenario:

    Assume you have advertisements stored as images on a server (ad0001.jpg, ad0002.jpg, ad0003.jpg, etc).  You then want to expose these advertisements in multiple places on multiple pages across a site, and accurately count how many times each image has been seen by visitors so that you can appropriately bill your advertisers per impression.

    Naive Implementation:

    One simple but sub-optimal implementation would be to run code on the server every time an <img src> tag is written into the page, updating a counter that tracks how many times that particular advertisement has been published.

    This approach works, but has the downside that you have to update some counter on each request.  This can be a performance problem when there is lock contention at the store location (for example: a single row in a database) -- although there are ways to code around this (by keeping a local cache in the web server and doing periodic flushes to a backing store, as sketched below).  The biggest performance issue is the fact that you always have to run code on the server to do the counting -- which means you can't use features like output caching to quickly send previously generated HTML back down to the client.
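
    Here is a minimal sketch of that local-cache workaround (my code, not from the post -- the names are hypothetical): impressions are counted in process memory, and a timer periodically flushes the totals to the backing store, so each request takes a short in-memory lock instead of a database hit:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public static class ImpressionCounter {

            static readonly Dictionary<string, int> counts = new Dictionary<string, int>();
            static readonly object sync = new object();
            static readonly Timer flushTimer = new Timer(Flush, null,
                TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));

            // Called from page code each time an ad image tag is rendered
            public static void Increment(string adId) {
                lock (sync) {
                    int current;
                    counts.TryGetValue(adId, out current);
                    counts[adId] = current + 1;
                }
            }

            // Runs on a timer thread: swap out the current totals and
            // push them to the backing store as one batched update
            static void Flush(object state) {
                Dictionary<string, int> snapshot;
                lock (sync) {
                    snapshot = new Dictionary<string, int>(counts);
                    counts.Clear();
                }
                // Write each (adId, count) delta in snapshot to the
                // database here -- one batched write instead of one row
                // lock per request
            }
        }

    Page code would then just call ImpressionCounter.Increment("ad0001") each time it renders that ad's image tag.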

    Less-Naive Implementation:

    A slightly better implementation would be to not count advertisement impressions within your page's server code at all -- and instead rely on log analysis.  By simply analyzing the logs, you can see how many requests there were for ad0001.jpg, and know that each one represents an ad impression.

    The benefit of this approach is that it works well with server-side output caching (or pre-generated page content), so the server load ends up being good.  The downside, though, is that you will end up under-counting the total number of real-world impressions.  The reason is that proxies will cache the image and avoid forwarding the HTTP request to the origin server if they already have the image in their cache.  For example: if browser A within a company hits Yahoo and is shown an impression of ad0001.jpg in a page -- the company's local proxy server will cache it.  If browser B in the same organization hits Yahoo and is given the same ad, the image will be fetched from the local proxy server without ever hitting Yahoo.  Because Yahoo is not hit on this image lookup, it isn't logged, and Yahoo can't bill the advertiser for it.

    One way to fix this is to explicitly mark the advertisement images as requiring revalidation by sending a "Cache-Control: must-revalidate" HTTP header with each image.  Properly written downlevel caches will then honor this setting and call back to the origin server to check the last-modified date.  You can then count the 304 Not Modified entries in the log files for that particular ad and add them to the count of fully-served responses to get a better sense of total impressions.
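
    To make this concrete, here is a minimal ASP.NET handler sketch of the revalidation approach (my illustration, not from the talk -- the handler name is made up and a real ad server would do more):

        using System;
        using System.IO;
        using System.Web;

        public class RevalidatingAdHandler : IHttpHandler {

            public void ProcessRequest(HttpContext context) {

                string path = context.Request.PhysicalPath;
                DateTime lastModified = File.GetLastWriteTime(path);

                // If the cache's copy is still current, answer 304 Not
                // Modified -- a countable log entry at almost zero cost
                string since = context.Request.Headers["If-Modified-Since"];
                DateTime sinceTime;
                if (since != null && DateTime.TryParse(since, out sinceTime) &&
                    lastModified <= sinceTime.AddSeconds(1)) {
                    context.Response.StatusCode = 304;
                    return;
                }

                // Otherwise serve the bytes, marked cacheable but
                // requiring revalidation (Cache-Control: must-revalidate)
                context.Response.ContentType = "image/jpeg";
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
                context.Response.Cache.SetLastModified(lastModified);
                context.Response.WriteFile(path);
            }

            public bool IsReusable {
                get { return true; }
            }
        }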

    Very Clever Implementation:

    The above implementation works when downlevel caches/proxies correctly honor the HTTP header setting and revalidate when they see another request for a cached URL.  The downside is that some caches/proxies don't do this -- and regardless of the setting never go back to the origin server to revalidate.  Content sites selling advertisements lose money in these cases -- since their customers are seeing advertisement impressions but the site doesn't get credit for them.

    Michael then walked us through a clever technique that Yahoo uses to get around this issue.  In a nutshell, they follow the approach below:

    1) Instead of rendering static <img src="ad0001.jpg"> tags in their HTML, they render some inline client-side script that dynamically constructs an image tag -- and does so with a randomly generated querystring value appended to the URL that guarantees it is unique.  For example:

    <script type="text/javascript">
        var r = Math.random();
        var t = new Date();
        document.write("<img width='109' height='52' " +
                       "src='http://ads.example.com/ad/foo/bar.gif?t=" +
                       t.getTime() + ";r=" + r + "'>");
    </script>

    <noscript>
        <img width="109" height="52"
             src="http://ads.example.com/ad/foo/bar.gif?js=0">
    </noscript>

    This code ensures that each visit to the HTML page generates a unique URL -- one that will avoid any cache hits in either a local browser cache or an intermediate proxy server.  As such, the browser will always end up hitting the server to request the image.  This guarantees a billable log entry that the content publisher can then use to charge an advertiser.

    2) What is clever about Yahoo's approach is that when a request for an advertisement image with a querystring is received by their server, they do not serve the image out.  Instead, they automatically send back an HTTP 302 status code (a redirect) that points back at the same image URL without the querystring.

    For example, a GET request for this URL:

        http://ads.example.com/ad/foo/bar.gif?js=343434344343

    would immediately get back a 302 redirect to this one:

        http://ads.example.com/ad/foo/bar.gif

    The browser will then automatically follow the redirect URL and fetch and display the advertisement image.

    3) When the non-querystring version of the image is requested, Yahoo also adds aggressive caching headers telling browsers and proxies to basically cache the file forever (specifically, they set an ETag, Cache-Control, and a 10-year Expires header).

    The reason for this is to cause the image to be automatically stored in intermediate proxies as well as in the local browser cache. 

    4) When another browser going through the same proxy server as a previous visitor hits Yahoo and is selected to see the same ad impression, the client-side JavaScript will again generate a unique URL to the image file.  This guarantees that the request bypasses the local cache and intermediate proxies, hits Yahoo again, and is redirected back to the image without the querystring.  Yahoo's logfiles will automatically record this request, which was answered with the 302.

    Instead of hitting Yahoo again to download the image without the querystring, though, the second browser will this time have the image served from the intermediate proxy (since the image had an aggressive caching value set during the first browser's visit to the page).  The benefit from the customer's perspective is that this improves the perceived responsiveness of the site (since they get the file from a closer location).  The big benefit from Yahoo's perspective is that they don't have to pay the bandwidth cost of serving out the image advertisement again (since the bytes aren't sent over their pipes for this second request).  Instead, they only pay the cost of the relatively small 302 response to the initial GET request.

    Yahoo then counts up the 302 redirect responses for an advertisement (normalizing the querystring in their log parser to just show the base filename), and has the exact number of impressions to bill an advertiser for.  Quite clever I thought.
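
    Just for fun, here is a minimal sketch of what the server-side half of this technique could look like as an ASP.NET HTTP handler (Yahoo obviously isn't running ASP.NET -- this is purely my illustration, with made-up names):

        using System;
        using System.Web;

        public class CacheBustedAdHandler : IHttpHandler {

            public void ProcessRequest(HttpContext context) {

                // Cache-busted request (unique querystring): this is the
                // billable impression in the logs -- bounce the browser
                // to the canonical, querystring-free URL with a 302
                if (context.Request.QueryString.Count > 0) {
                    context.Response.Redirect(context.Request.Path, false);
                    return;
                }

                // Canonical request: serve the image with aggressive
                // caching headers so browsers and proxies keep future
                // copies off our pipes
                context.Response.ContentType = "image/gif";
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetExpires(DateTime.Now.AddYears(10));
                context.Response.Cache.SetETag("\"ad-0001\"");  // hypothetical fixed ETag
                context.Response.WriteFile(context.Request.PhysicalPath);
            }

            public bool IsReusable {
                get { return true; }
            }
        }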

    What Happened to Site Counters in Whidbey?

    In ASP.NET Whidbey Beta1 we had a feature called "Site Counters".  It was designed to provide an easy way for developers to add page/image impression and link tracking, and it provided a really nice developer experience: you could just set a property on our AdRotator or navigation controls to automatically have usage of them update counters stored in a backend database.  A really easy-to-use developer model.

    The downside, though, was that as we started building real-world applications and getting customer feedback, we realized that although we had a really easy developer approach to usage counting, the implementation model we were using didn't take advantage of all the tricks and real-world best practices that sites like Yahoo and others have pioneered -- and it would likely have suffered in performance compared to other approaches.

    As a result of this, Site Counters is a feature that we've decided to take out of Whidbey and it will not be in Beta2.  Our team philosophy is that a half-baked feature can often be worse than not having the feature at all -- it is much better to postpone it and ensure it is super high quality in the future.  Site Counters will likely come back again in a future ASP.NET release, and this time automatically take advantage of approaches like the one above (and others) and deliver a feature that is both easy to use and provides best-in-breed performance.

    In the meantime, you can manually take advantage of the approach described in Michael's talk to make your own ad impression system work really well.

  • ApacheCon

    I'm glad to be near the end of what has been a fairly exhausting (but useful) week.  I spent most of it in Las Vegas at the ApacheCon conference (http://www.apachecon.com), where I spent 5 days hacking and playing with Apache.  The conference itself was OK -- although I was a little disappointed by the lack of depth in most of the talks, and the absence of slide handouts and/or up-to-date electronic versions of the decks.  The event was a good forcing function, though, for me to spend a lot of focused time with an Apache installation and really immerse myself in it.

    Miguel de Icaza gave a keynote on Mono on Tuesday which was pretty fun (he has a good speaking style that really works well for a technical crowd).  Much of it was a pitch on why the .NET Framework is cool -- which was fun to hear while sitting in the crowd at an open-source conference.  An interesting bit of trivia is that Nat Friedman (http://www.nat.org), who co-founded Ximian with Miguel, was a college intern on the IIS4 team way back in 1997 when I also worked on it. 

    Dmitry (who is the ASP.NET dev manager) and I caught up with Miguel the day before his keynote and got someone to take a group picture of us:

    [group photo]

    Dmitry is the one on the left with the ASP.NET hat.  I'm the one who is unfortunately squinting in the shot, and who looks like he is two feet taller than everyone else (note: I'm actually only 6'4").

    The last time I was in Vegas before this month was in January 2002, when our .NET Framework 1.0 ship party was at the MGM Grand.  Ironically, I happened to be there twice in one week this month -- I was there the previous weekend doing the keynote at ASP.NET Connections.  I flew back to Redmond for 3 days last week to catch up on work before flying out very late Friday night to attend the conference Saturday morning.  I then got back around 3:00am Thursday and have been in back-to-back-to-back meetings since then (including a very early morning video conference call with my Senior VP who is currently in Paris -- fun).

    Dmitry decided to drive to Vegas this time (and why not -- it is only a 2200 mile round-trip!?!).   I'm not sure where he is right now on the road back -- but he hopefully will show up early next week....

    As for me, I'm headed to bed...

    :-)

  • ASP.NET 2.0 and Visual Web Developer hit ZBB

    It has been a long, hard march the last few weeks, but we finished up our full test pass and hit ZBB this week.  We officially declared ZBB early Saturday morning (around 4am), when the fix for the last bug more than 48 hours old was checked in.

    We were a little worried about our ZBB date over the last few weeks, as the trend line suggested we might be a little late.  But the team responded well, and the pace of fixes accelerated dramatically to bring us back on schedule.  We fixed an eye-popping 1142 bugs during the last 3 weeks alone (an average fix rate of over 76 checkins per working day).

    All in all, this leaves us in very good shape for ASP.NET 2.0 and Visual Web Developer.  The only bugs still active in our bug system are ones found either Thursday or Friday -- we haven't postponed anything to be fixed beyond Beta2.  We are starting our big security push on Monday, during which we'll code review every single line of code in the product, hunting for possible security issues.  This process will take 4-6 weeks, as there are a large number of checks and balances in place to make sure everything gets done right (there is also a heck of a lot of code <g>).

    After this finishes, we'll start the final Beta2 push -- during which we'll slow the rate of checkins and raise the stress-quality bar.  We'll also start deploying pre-Beta2 bits on Microsoft internal and external web sites to find the real-world bugs that will ensure the readiness of the product for production scenarios.  All of this will still take some time (apologies for the delay), but I think it will definitely be worth the wait....

  • ASP.NET Validators now work Client-Side on Mozilla with Whidbey Beta2

    I was on a Microsoft Unplugged Q&A panel at ASP.NET Connections last night in Las Vegas, and someone asked about our plans for uplevel support for non-IE browsers.  I talked about how the new controls we have in Whidbey, like TreeView and Menu, use a common subset of JavaScript that works with IE, Mozilla, Safari, and others.

  • ScottGu OOF at ASP.NET Connections Next Week in Las Vegas

    I’m off to the ASP.NET Connections conference in Las Vegas this weekend and will be there Sunday->Tuesday. It should be a fun conference, with more than 1,400 people already signed up to attend. I’ll be doing the keynote talk Sunday night, then an ASP.NET 2.0 Master Pages/Themes talk on Monday afternoon, followed by a Microsoft Team Q&A session late Monday night.  Let me know if you will be in town for it – if so, you’ll have to stop by and say hi!