Archives

Archives / 2004
  • When (not) to override Equals?

    In .NET, you can override operators as well as the default implementation of the Equals method. While this looks like a nice feature (it is if you know what you're doing), you should be very careful because it can have unexpected repercussions. First, read this. Then this.
    One unexpected effect of overriding Equals is that if you do it, you should also override GetHashCode, if only because the Hashtable implementation relies on both being in sync for the objects used as the keys.
    Your implementation should respect three rules:

    1. Two objects for which Equals returns true should have the same hash code.
    2. The hashcode distribution for instances of a class should be random.
    3. If you get a hash code for an object and then modify the object's properties, the hash code should remain the same (just like the song).
    While the first requirement ensures consistency if your class instances are used as the key in a hashtable, the second ensures good performance of the hashtable.
    The third requirement has an annoying consequence: the properties that you use to compute the hash must be immutable (i.e., they must be set only from a constructor and be impossible to change afterwards).
    So what should you do if your Equals implementation involves mutable properties? Well, you could exclude these from the computation of the hash and only take into account the immutable ones, but by doing so, you're breaking requirement number 2.
    The answer is that you should actually never override Equals on a mutable type. You should instead create a ContentsEquals (or whatever name you may choose) method to compare the instances and let Equals do its default reference comparison. Don't touch GetHashCode in this case.
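    To make this concrete, here is a minimal sketch of that pattern on a hypothetical mutable Node class (all the names here are made up for illustration):

    public class Node {
      private readonly string _id;   // immutable: set once in the constructor
      private string _text;          // mutable

      public Node(string id) {
        _id = id;
      }

      public string Text {
        get { return _text; }
        set { _text = value; }
      }

      // Compare contents explicitly. Equals and GetHashCode keep their default
      // reference-based behavior, so instances remain safe to use as hashtable keys.
      public bool ContentsEquals(Node other) {
        if (other == null) return false;
        return _id == other._id && _text == other._text;
      }
    }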
     
    Update: It may seem reasonable to say that it's ok to override Equals and GetHashCode on mutable objects if you document clearly that once the object has been used as the key in a hashtable, it should not be changed and that if it is, unpredictable things can happen. The problem with that, though, is that it's not easily discoverable (documentation only). Thus, it is better to avoid overriding them altogether on mutable objects.

  • Black hole evaporation paradox?

    I just sent this letter to Scientific American. I'd be interested in any informed opinion on the matter.
     
    I’ve read the article about black hole computers with great interest, but there are still a few questions that I think remain unanswered.
     
    The article makes it quite clear how black holes could be memory devices with unique properties, but I didn’t quite understand what kind of logical operations they could perform on the data.
     
    But another, more fundamental question has been bugging me ever since I read the article. From what I remember learning about black holes, if you are an observer outside the black hole, you will see objects falling into the black hole in asymptotically slow motion. The light coming from them will have to overcome a greater and greater gravitational potential as the object approaches the horizon, losing energy along the way and shifting to the red end of the spectrum. From our vantage point, it seems like the object does not reach the horizon in a finite time.
    From a frame that moves with the object, though, it takes finite time to cross the horizon.
    This is all very well and consistent so far. Enter black hole evaporation.
    From our external vantage point, a sufficiently small black hole would evaporate over a finite period of time. So how do we reconcile this with the perception that objects never actually enter the horizon?
    It seems like what would really happen is that as the horizon shrinks over time, the incoming particles would never actually enter it.
    If this is true, and no matter ever enters it, would the black hole and the horizon exist at all?
    From the point of view of an incoming object, wouldn’t the horizon seem to recede exponentially fast and disappear before it is reached?
    If nothing ever enters the horizon, is it really a surprise that black hole evaporation conserves the amount of information?
    Does the rate of incoming matter modify the destiny of the black hole? If it grows faster than it evaporates, I suppose the scenario is modified, but how so?
    I know it is quite naïve to think in these terms and that a real response could only come from actual calculations, but still, I hope that you can give me an answer to what looks like a paradox to me. I don’t see how you can reconcile the perceptions of an external and a free-falling frame of reference if the black hole evaporates except if nothing ever enters the horizon.
     
    UPDATE: a recent paper presents a similar theory to solve the information paradox:

  • More on non-visual controls and the component tray

    Nikhil gives an excellent explanation of this and why data sources are controls (to summarize really quickly, they must be part of the page lifecycle).
    This also answers an ongoing discussion on TSS.NET about the SqlDataSource, on a subject similar to this old blog entry.

  • All abstractions are leaky. All. But one?

    There's been a lot of talking about leaky abstractions lately. An abstraction is leaky by definition: it is something simple that stands for something more complex (we'll see later on that this is not entirely true in the fascinating world of physics).
    These arguments make sense up to a certain point. And that point is determined by how much time the abstraction will gain you. The answer with ASP.NET is a lot of time, as anyone who's developed web applications with the technology knows.
    So the abstraction may be leaky, but it doesn't matter: the really important thing is that it's useful.
    Joel's point in his paper was really to explain that at some point you'll need to learn what the abstraction is really standing for because as you use the abstraction in more and more specialized and complex cases, the abstraction will leak more and more. That's true, and the value of an abstraction can more or less be determined by the amount of time you can work with it without having to worry about the complexity that it stands for. Depending on what kind of application you develop, this time can be pretty long with ASP.NET.
    Now, what about physics? Well, in physics, there are leaky abstractions, like for example thermodynamics, which nicely reduces the complexity of the chaotic microscopic kinetic energy of molecules to very few variables like pressure, temperature or volume. And the abstraction leaks if you start looking at too small a scale, or at a system outside of equilibrium. Still, it's one of the most useful abstractions ever devised: it basically enabled the industrial revolution.
    But there are more curious abstractions in physics. If we try to find the ultimate laws of nature, it seems like the closer we look, the simpler the world becomes. In other words, the layers of abstractions that we see in nature seem to become simpler as we go more fundamental. The less abstract a theory, the more leaky it seems, actually.
    Could it be that the universe is the ultimate abstraction, the only one that's not leaky?
    Well, the point is, the universe is no abstraction, it's reality. But if we're lucky and smart enough, we may someday find the only non-leaky abstraction, the one that maps one to one with the universe itself.

  • Why you shouldn't expose public properties from your pages

    We often have users asking us how they can access some variable that's in their page class from their user or custom controls.
    The answer is that your page class can expose public properties, and then any control can cast its Page property to your specific Page-inherited class and gain access to the new properties.
    But the second half of the answer is that you should really not do that even though it's possible.
    There is a double reason for that.
    The first is that it's your page that should orchestrate your controls (by accessing their properties and methods), not your controls orchestrating your page.
    And the second, which is very close, is that your controls should not depend on your page implementing special properties or methods or containing specific controls. Otherwise, you're breaking one of the most important qualities of WebControls, that is their reusability. Any control should have the ability to be dropped on any page and just work.
    Your user and custom controls should be components, that is, they should be independent, encapsulated and reusable entities. It's your page (or containing controls) only that should orchestrate the controls and glue them together. The glue should stay outside and should never ooze inside.
    A consequence of that is that your Page generally has no good reason to expose new public properties, because no one should have to consume them.
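    To illustrate, here's a hedged sketch of the coupling to avoid and the direction to prefer instead (MyPage, CurrentUserName, GetCurrentUserName and UserNameLabel are made-up names; the usual System.Web.UI namespaces are assumed to be imported):

    // Anti-pattern: the control reaches up into one specific page type,
    // so it breaks on any page that is not a MyPage.
    public class BadControl : WebControl {
      protected override void Render(HtmlTextWriter writer) {
        MyPage page = (MyPage)Page;             // tight coupling to one page class
        writer.Write(page.CurrentUserName);     // hypothetical page property
      }
    }

    // Preferred: the control exposes a property and the page (the glue)
    // pushes the value into it, for example from Page_Load:
    //   UserNameLabel1.UserName = GetCurrentUserName();
    public class UserNameLabel : WebControl {
      private string _userName;

      public string UserName {
        get { return _userName; }
        set { _userName = value; }
      }

      protected override void Render(HtmlTextWriter writer) {
        writer.Write(HttpUtility.HtmlEncode(_userName));
      }
    }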

  • How to split server styles

    If you've been developing custom WebControls, in some cases, you may have had to split a server-side style on two HTML elements. Usually we want to apply the border and similar properties to a container like a div or td, and the Font properties and ForeColor to a text element such as a link (because a link forces the color and text-decoration, for example).
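    Here's a minimal, hedged sketch of what that split can look like in a custom WebControl's Render method (Text and NavigateUrl are hypothetical properties of the control; the box properties go to the div, the font and color go to the link):

    protected override void Render(HtmlTextWriter writer) {
      // The container (a div here) gets the box-related properties.
      Style containerStyle = new Style();
      containerStyle.BorderStyle = BorderStyle;
      containerStyle.BorderWidth = BorderWidth;
      containerStyle.BorderColor = BorderColor;
      containerStyle.BackColor = BackColor;
      containerStyle.AddAttributesToRender(writer);   // applies to the next begin tag
      writer.RenderBeginTag(HtmlTextWriterTag.Div);

      // The text element (a link here) gets the font and fore color, so the
      // link's default color and text-decoration don't take over.
      Style textStyle = new Style();
      textStyle.Font.CopyFrom(Font);
      textStyle.ForeColor = ForeColor;
      textStyle.AddAttributesToRender(writer);
      writer.AddAttribute(HtmlTextWriterAttribute.Href, NavigateUrl);
      writer.RenderBeginTag(HtmlTextWriterTag.A);
      writer.Write(Text);
      writer.RenderEndTag();   // </a>

      writer.RenderEndTag();   // </div>
    }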

  • Session sharing between ASP and ASP.NET

    The question comes back every so often, so I thought I'd post about it.
     
    Almost all existing solutions are intrusive and need to modify the code of the ASP application, the ASP.NET application or both. All solutions incur a performance cost as the data has to be marshaled between the COM world of ASP and the .NET world of ASP.NET.
     
    First, there’s a solution in MSDN: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspp/html/converttoaspnet.asp but it necessitates modifications on both sides and needs a database, which could degrade the application’s performance.
     
    There are also a few commercial products, all in the $200 to $300 range:
     
    http://www.consonica.com/solutions/statestitch/ which just requires one include on top of each ASP page that uses the session. One drawback is that it doesn’t support COM objects in the session, except for Dictionary.
     
    http://www.sessionbridge.com/ which requires more code changes to all places where the ASP session is used.
     
    And then, there is my own approach, which is the only one that I know of that requires no source code change anywhere (except a few very particular cases), but it’s just a proof of concept, nothing I would use in a production environment (we have no performance data):
    http://www.dotnetguru.org/articles/us/ASP2ASPNET/ASP2ASPNET.html
     
    And a very similar attempt which may scale better:
     
    My own approach is shared source, so anyone is free to improve on it.

  • ViewState restoration does not call any setters


    // The setter below has side effects (it notifies the Owner), but when
    // ViewState is restored on postback, the saved value goes straight back
    // into the ViewState state bag: this setter is never called, so the
    // Owner is not notified during restoration.
    public bool Selected {
      get {
        object o = ViewState["Selected"];
        if (o == null) {
          return false;
        }
        return (bool)o;
      }
      set {
        ViewState["Selected"] = value;
        if (Owner == null) return;
        if (value) {
          Owner.SetSelectedNode(this);
        }
        else if (this == Owner.SelectedNode) {
          Owner.SetSelectedNode(null);
        }
      }
    }

  • A few things I remember about quantum mechanics

    A post on the ASP.NET forums recently went a little crazy by shifting from a perfectly normal question on how to get the response object from a class that doesn't derive  from Page or Control (to which the answer is to use HttpContext.Current.Response) to quantum mechanics and the multiverse theories.
    I happen to know a few things on quantum mechanics, dating back from my PhD, so I can shed some light on these subjects (or make them even more obscure, we'll see).
     
    Here are a few things that have been said in this thread and a few comments:
    "Light will act like a wave until observed, at which time it collapses to a point."
    It would be more precise to say a photon, or quantum of light, that is, the minimum quantity of light you can get.
     
    "wherever an elemental "decision" is made (whether the light went through the top or bottom hole of the twin-hole experiment; or whether Schrodinger's cat is alive or dead), the universe splits to accommodate both decisions."
    That was the original idea of the multiverse, but we'll see that there may be a much better and simple explanation.
     
    "Another "solution" to the riddle proposed by Schrodinger's cat is the idea that light travels backwards in time, just as it travels forward."
    For that to solve any problem, all particles would have to be able to travel back in time: light is not the only way to transmit information. As a matter of fact, virtual particles are able to travel faster than light, but there's no way to observe these directly, so they can't convey any information. As far as we know, no faster than light phenomenon can transport any information. Another way to say that is that no signal can travel faster than light. If it travels faster than light, it's not a signal. One example uses a pulsar as a beacon: the pulsar sends a jet of particles in some direction which rotates with the pulsar (like a beacon). Imagine now some enormous projection screen (interstellar gas clouds play this role very well) that is able to emit some light when the particle jet hits it. If the screen is far enough from the pulsar, the spot of light it projects on the screen can move well above the speed of light (its speed is the angular rotation speed of the pulsar times the distance from it to the screen). The explanation is that the spot of light you see at one point in time was not created by the same particles as a little later. In other words, what you see is not an object moving, what you see is a succession of different objects that give the illusion of movement (the real movement is perpendicular to the screen, whereas the one you think you see is parallel to it). A similar phenomenon gives the illusion that a particle can quantum-tunnel through a barrier faster than light. It's a little trickier to explain but in this case too, no faster than light signal can be transmitted.
     
    "I know that the multiverse theory has moved on from that, and rather than splitting universes there are now bubbling multiverses and virtual multiverses"
    True, now it's a completely different theory, based on string theory. It states that there is only one universe (which is the definition of the universe after all) that has different, causally disconnected regions in which the laws of physics are different. These new "bubbles" can appear when a region of an existing bubble tunnels into a state with a lower vacuum energy, which results in the rapid expansion of this bubble as the extra energy is transformed into space, so fast that it disconnects it from the bubble that formed it. There's an excellent article about that in the September issue of Scientific American.
     
    "I mean Schrodinger was trying to explain the role of the observer in deciding the quantum state of a particle. In his experiment he assumed that the only observer was the experimenter that opened the box - until the box was opened the particle was 'in' a state of quantum uncertainty. But, what I always say when someone mentions the experiment - what about the cat???!!! Surely, it knows whether it is alive or dead!"
    Absolutely, this is what makes Schrödinger's cat thought experiment completely bogus as it's usually told: the cat is an observer and is classical enough to collapse the particle's state. It's never half-dead, half-alive.
    But there are real Schrödinger's cats that actually fulfill exactly the original prediction. The difference is that they are not cats, but rather small lumps of matter. Scientists are now able to make these lumps bigger and bigger, but it will always be impossible to do the experiment with an actual cat.
    What happens when you measure a quantum phenomenon has been fascinating people since it was discovered, more than any other aspect of quantum physics. The reason is clearly that it is the only case in modern physics where pure chance seems to play a role: it looks nondeterministic. Of course, this has been hastily interpreted by many as the finger of God, or as what enables us to have free will. I'll get back to that as soon as I've presented a more modern theory of quantum measurement that seems to give very good results while making it all deterministic again. I can't find the references to the papers about this, so I'll rely on my memory here. If someone reads this and knows where to find the relevant papers, please drop me a note.
    The idea is that a measurement device is a quantum system (like everything) that has many degrees of freedom and that a measurement is actually a complex interaction with such an object. What happens is that this interaction causes the quantum object to collapse into a classical state. This theory is able to predict the time that it takes the object to collapse, and how complex an object has to be to cause the collapse. Experimental data seems to confirm this theory (I think the experiments were done at the École Normale Supérieure in Paris).
    So according to this theory, there is nothing strange or random in a measurement, it's just one quantum interaction like everything. In a way, the chaos of the state of the device replaces chance. And everything is deterministic again.
    Including the human brain.
    So where does that leave our freedom of choice? Well, we would have none, obviously, if we are made of quantum particles like the rest of the universe. But that's not a problem, the illusion of it is enough.

  • Sorry about the comments on old posts

    For some obscure reasons that have to do with spam but that I failed to understand, comments are not allowed anymore on posts older than 30 days on weblogs.asp.net blogs.
    This is very frustrating and goes against the very principle of blogs. I'm really sorry about that, but there's nothing I can do other than send internal mail to complain about it (which I already did). I just hope that this limitation is removed as soon as possible.
    For now, if you have comments, you can send them to me using the contact feature of the blog, and I'll store them until I can post them for you (yes, amazing as it may seem, even I can't comment on my own blog).

  • Declarative programming will bloom with ASP.NET 2.0 auto-compilation

    You may already know that, but ASP.NET 2.0 introduces a new Code directory (or Application_Code, the name is not final yet) that enables you to just deploy the source files of your libraries, and they will get compiled on-the-fly. What you may not know is that you can extend this by creating your own build providers.
    When I first learned about the extensibility of the auto-compilation, I immediately thought about how an Object/Relational mapping tool could take advantage of it and generate the DAL transparently on-the-fly from the XML mapping file. It would make it marvelously transparent and easy to use, update and manage. An additional bonus is that any change to the XML file would immediately result in Visual Studio Intellisense picking up the change and displaying the new types. Just perfect.
    It seems like I'm not the only one who thought about that: here's an article from Fritz Onion that explains exactly how to do this now, with the public beta of ASP.NET 2.0. Check it out, it works now.
    The possibilities are endless. For example, a business rule engine could use it, or a form generator.
    Play with it, invent great new applications, and if you find anything limiting you in doing so, just tell us about it. Now is the right time.
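    To give an idea of the extension point, here's a minimal, hedged sketch of a custom build provider (MappingBuildProvider, the .map extension and BuildDalFromMapping are all made up; the usual System.Web.Compilation, System.CodeDom and System.IO namespaces are assumed to be imported):

    // Registered in web.config under <compilation><buildProviders>:
    //   <add extension=".map" type="MappingBuildProvider" />
    public class MappingBuildProvider : BuildProvider {

      public override void GenerateCode(AssemblyBuilder assemblyBuilder) {
        // Read the mapping file that triggered this provider.
        string mapping;
        using (TextReader reader = OpenReader()) {
          mapping = reader.ReadToEnd();
        }
        // Build a CodeDom graph from the mapping and hand it to the compiler;
        // the generated types then show up in IntelliSense like any other code.
        CodeCompileUnit generatedCode = BuildDalFromMapping(mapping);
        assemblyBuilder.AddCodeCompileUnit(this, generatedCode);
      }

      private CodeCompileUnit BuildDalFromMapping(string mapping) {
        // Placeholder: a real implementation would emit one class per mapped entity.
        return new CodeCompileUnit();
      }
    }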

  • Optimize your images

    It is vitally important that we optimize all the images we ship as resources in the framework: these images will potentially be downloaded several times on each of the pages our users build using the controls that use them.

  • Die, NumLock, Die!

    Why do we still have this stupid NumLock key on modern keyboards? Who still uses it with all of the unlocked keys duplicated about half a centimeter to the left? Hello? Keyboard designers? Please get rid of it now. Everybody hates seeing the cursor go crazy when all they wanted to do was to type a number.
    Now, the people who design keyboards at Microsoft have recently come up with a new way to torture us: the F Lock key. They've decided that we needed new fixed function keys for common tasks such as save or print. This is all very well and I'm sure that having access to these functions with just one keystroke and without having to know complex ctrl key combinations is very useful to disabled persons. But why did they have to put these on our function keys?? Just add new keys, but don't replace useful keys that we're used to. Of course, the new key meanings are on by default and you need to hit F Lock to restore the old F keys. And naturally, every time you reboot, you have to hit it again.
     
    Die, F Lock, die!
     
    Actually, why not make these lock keys real switches if you really want to keep them? After all, they're a matter of personal preference that you just want to set once and forget about. So they could be real switches that you can't accidentally hit. That would be so much better, and it would get rid of the stupid problem that wireless keyboards have which is that they can't display the status of these keys for power consumption reasons.
    Well, in fact, they could also be software settings in the keyboard drivers for all I know.
     
    Finally, what's with the PrtScn key? Whenever I want to take a screenshot, I have to figure out some strange combination of F Lock, Ctrl, Alt and Shift to get the right result. And what does SysRq mean? Does any application still react to ScrLk?
     
    Last minute: To get your F keys back, try this: http://www.mvps.org/jtsang/flock.html

  • The case of the missing data source

    If you've used the beta version of ASP.NET 2.0 a little, the Menu and TreeView controls in particular, you may have noticed that the datasources they consume are different from those that GridView or FormView use. But of course, Menu and TreeView have to use hierarchical data sources, whereas other controls use tabular data sources.
    Tabular data sources that you get out of the box are mainly SqlDataSource and ObjectDataSource. Of course, you can also build your own data source (more on this in later posts). The nice thing is that with ObjectDataSource, you can potentially get data from any data store.
    Now, for hierarchical data sources, you've got XmlDataSource and you've got SiteMapDataSource, and... and that's it. So if you want to populate a Menu or TreeView from a database, you'll pretty much have to do it from code.
    Enter CompositeHierarchicalDataSource... This is a control I've developed in my free time (which is mainly composed of an aggregation of my serious work's compilation time); it composes several tabular data sources into one hierarchical data source using relations.
    Here's a simple page that displays a menu that takes its data from a composition of one ObjectDataSource and two AccessDataSources:

    <%@ Page Language="C#" debug="true" %>
    <%@ Register Namespace=MyControls.DataSources.CompositeHierarchicalDataSource TagPrefix=my %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">

    <head id="Head1" runat="server">
        <title>CompositeHierarchicalDataSource</title>
    </head>

    <body>
    <form id="form1" runat="server">
        <div>
            <my:CompositeHierarchicalDataSource runat=server ID=Composite1 RootViewName="Categories:DefaultView">
                <DataSources>
                    <asp:ObjectDataSource ID="Categories" Runat="server" TypeName="Categories"
                        SelectMethod="GetCategories" />
                    <asp:AccessDataSource ID="SubCategories" Runat="server" DataFile="~/data/things.mdb"
                        SelectCommand="SELECT [Id], [CategoryId], [Name] FROM [SubCategories]"/>
                    <asp:AccessDataSource ID="Things" Runat="server" DataFile="~/data/things.mdb"
                        SelectCommand="SELECT [Id], [SubCategoryId], [Name], [Description], [Url] FROM [Things]"/>
                </DataSources>
                <Relations>
                    <my:Relation ParentDataSourceId="Categories" ParentView="DefaultView" ParentColumns="value"
                        ChildDataSourceId="SubCategories" ChildView="DefaultView" ChildColumns="CategoryId"/>
                    <my:Relation ParentDataSourceId="SubCategories" ParentView="DefaultView" ParentColumns="Id"
                        ChildDataSourceId="Things" ChildView="DefaultView" ChildColumns="SubCategoryId"/>
                </Relations>
            </my:CompositeHierarchicalDataSource>

            <asp:Menu Runat=Server ID=myMenu DataSourceID=Composite1>
                <DataBindings>
                    <asp:MenuItemBinding DataMember="Categories:DefaultView" TextField="text" ValueField="value" />
                    <asp:MenuItemBinding DataMember="SubCategories:DefaultView" TextField="Name" ValueField="Id" />
                    <asp:MenuItemBinding DataMember="Things:DefaultView" TextField="Name" ValueField="Id"
                        ToolTipField="Description" NavigateUrlField="Url" />
                </DataBindings>
            </asp:Menu>
        </div>
    </form>
    </body>
    </html>
     
    The nice thing is that you can in principle use any tabular data source to build your hierarchical data source. In principle only: this is still a proof of concept and is currently limited to SqlDataSources, ObjectDataSources whose views return DataViews, and datasources whose views implement ITabularDataSourceView. Another problem it currently has is that it loads all of the data at one time. It would be nice to have lazy loading for populate on demand TreeView scenarios.
     
    It's shared source, so you're free to use and modify the code as you wish. Enjoy!
     
    In future posts, I'll explain how it works in detail, which should make a nice tutorial on how to develop your own hierarchical data source (hint of other missing data sources: LdapDataSource, FileSystemDataSource).
     
    The GotDotNet CodePlex workspace for this data source can be found here:

  • Number geeks!

    Robin Debreuil has a great post about how the world would be a better place if we had four fingers like the Simpsons (or virtually any cartoon character). He doesn't say anything about the yellow color or the rubbery hair, but it would sure look cool too.
    His article is amazingly well documented and thought through. You can see that this guy has been thinking about building a new numbering system for years. Very impressive. And very geeky too, in a hilarious kind of way.
    Unfortunately, his proposal for easier arithmetic has even less of a chance of being widely adopted than, say, the US adopting the hugely superior metric system or Swatch imposing its "internet time".
    So it is a total waste of energy, totally useless, impressively time-consuming and funny at the same time.
    And thus highly recommended reading.
     
    A few things worth noting and objections, though:
    - I would actually have loved a numeric system where 4233 sounds like "butthole sniff sniff". How boring would the solar system be if we didn't have Uranus?
    - A complete tutorial with exercises and a diploma at the end would be great (there are exercises at the end of the paper, but that's not enough: I want to learn)
    - As has been noted in the comments, the balanced ternary system is surprisingly good too. Any chance of mixing the two for a balanced ternary bioctal system? And for a better name than that?
    - I'll be saying bioctal too from now on instead of hexadecimal
    - You're completely mad to spend so much time on trying to improve everyone's life whereas you should be working on your SWF/C# project
    - If we forget about one of our fingers, how much easier is it to count on them? Anything you can't do with the decimal system?
    - Bioctal sounds like a French anti-acne medication, which is OK as geeks keep their acne quite late.

  • Yield and generics rock!

    public sealed class FilteredEnumerable<T> : IEnumerable<T>, IEnumerable {

      private IEnumerable<T> _enumerable;
      private Predicate<T> _filter;

      public FilteredEnumerable(IEnumerable<T> enumerable, Predicate<T> filter) : base() {
        _enumerable = enumerable;
        _filter = filter;
      }

      IEnumerator<T> IEnumerable<T>.GetEnumerator() {
        foreach (T item in _enumerable) {
          if (_filter == null || _filter(item)) {
            yield return item;
          }
        }
      }

      IEnumerator IEnumerable.GetEnumerator() {
        return (IEnumerator)(((IEnumerable<T>)this).GetEnumerator());
      }
    }
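    For the record, here's how it can be consumed, filtering a list with a C# 2.0 anonymous delegate (the numbers are just an example):

    List<int> numbers = new List<int>(new int[] { 1, 2, 3, 4, 5, 6 });
    FilteredEnumerable<int> evens =
      new FilteredEnumerable<int>(numbers, delegate(int n) { return n % 2 == 0; });
    foreach (int n in evens) {
      Console.WriteLine(n);   // 2, 4, 6
    }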

  • The ASP.NET 2.0 page lifecycle in detail

    Needless to say, this is a poster you can now find in nearly all offices here in the ASP.NET team at Microsoft. Thanks for the great work, Léo!
    Read it, print it, use it every day!
     
     
    UPDATE: updated the links to the new locations for these resources. The poster would probably need some updating in particular where callbacks are concerned but it's still very useful.

  • What level of control do you need over the rendered HTML?

    I'm answering a post from Dimitri Glazkov here. Dimitri tracked this back to my post about UI reusability. It's probably a good idea to read his post before you go on reading this if you want to understand what this is about.
     
    In an architect's ideal dreamworld, I'd say you're absolutely right, Dimitri. In the real world, though, I'd mitigate this.
    After all, that's what server controls are all about: abstracting the HTML rendering and substituting higher-level abstractions for it. The controls are not ethereal entities, and they need to have some level of control over their rendering to actually work. If you want to have complete control over the rendered HTML, the only thing you can do is output it yourself, and you're back to classic ASP (or PHP). So we should probably be somewhere between complete control and pages made of only server controls.
     
    I'm sure you're aware of this, but I'll say it anyway for your readers and mine who may not be as advanced as you are in ASP.NET.
     
    There are a few things you can do to control the rendering of ASP.NET controls:
    - Use CSS (works with any server-side web technology)
    - Use styles (and in particular their CssClass property to link them to CSS) (v1)
    - Use templates, which give you total control over the HTML that's rendered by some parts of the controls (usually the ones that are the most visual and are not vital for the control to actually work). Templates rule! (v1)
    - Know the control set: you can get finer or coarser control over the rendering just by choosing the right control. For example, DataGrid, DataList and Repeater are similar list controls that give you more and more control over the final rendering. (v1)
    - Develop your own controls, from scratch or by inheriting from an existing one. This way, you can override part or all of the rendering code. (v1)
    - Use themes and skins to isolate the general presentation of the site. Themes are more or less equivalent to server-side CSS: they act at the same level of abstraction as controls, and enable you to set any property (hint: even TEMPLATES) of any control, site-wide or based on a skin ID. Themes are very easy to write as they have the same syntax as a page. (v2)
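    For example (a hedged sketch; the theme name and property values are made up), a Default.skin file under App_Themes/Corporate could contain plain control markup without IDs, applied site-wide with <pages theme="Corporate" /> in web.config or per page with <%@ Page Theme="Corporate" %>:

    <asp:Button runat="server" BackColor="#336699" ForeColor="White" />
    <asp:GridView runat="server" SkinID="Report" GridLines="Horizontal" CellPadding="4" />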
     
    About adapters, you're right in mentioning that there is a yet unfulfilled potential there. But it may not be in their implementation so much as in their very use. They may be used for something other than just device adaptation. I'll try to blog on that if I have time to experiment with the concept a little more.
     
    Your point about the three roles in the designer is a good one and there may be things more or less along these lines in Orcas. But if you look at it as it is currently, we're already kind of there... You have the visual design view, for designers, you have the HTML view, for what you call the prototype, and you have codebehind for the actual plumbing of the page. Yes, the first two actually act on the same thing, but at a different abstraction level.
    I do not understand your third role, though: why would theme development be the role of an advanced developer? I would have given this role to the graphics designer. Well, at least, the designer can determine the general look of the page and a developer can transform that into a theme.

  • Don't redirect after setting a Session variable (or do it right)

    A problem I see over and over again on the ASP.NET forums is the following:
    In a login page, if the user and password have been validated, the page developer wants to redirect to the default page. To do this, he writes the following code:
    Session["Login"] = true;
    Response.Redirect("~/default.aspx");
    Well, this doesn't work. Can you see why? Yes, it's because of the way Redirect and session variables work.
    When you create a new session (that is, the first time you write to a Session variable), ASP.NET sets a volatile cookie on the client that contains the session token. On all subsequent requests, and as long as the server session and the client cookie have not expired, ASP.NET can look at this cookie and find the right session.
    Now, what Redirect does is send a special header to the client so that it asks the server for a different page than the one it was waiting for. Server-side, after sending this header, Redirect ends the response. This is a very violent thing to do: Response.End actually stops the execution of the page wherever it is, using a ThreadAbortException.
    What really happens here is that the session token gets lost in the battle.
    There are a few things you can do to solve this problem.
    First, in the case of the forms authentication, we already provide a special redirect method: FormsAuthentication.RedirectFromLoginPage. This method is great because, well, it works, and also because it will return the user to the page he was asking for in the first place, and not always default. This means that the user can bookmark protected pages on the site, among other things.
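    In code, that typically looks like this (ValidateUser and the variables are hypothetical; false means no persistent cookie):

    if (ValidateUser(userName, password)) {
      Session["Login"] = true;
      // Sends the user back to the page that was originally requested
      // (or default.aspx) without aborting the thread.
      FormsAuthentication.RedirectFromLoginPage(userName, false);
    }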
    Another thing you can do is use the overloaded version of Redirect:
    Response.Redirect("~/default.aspx", false);
    This does not abort the thread and thus conserves the session token. Actually, this overload is used internally by RedirectFromLoginPage. As a matter of fact, I would advise always using this overloaded version over the other, just to avoid the nasty effects of the exception. The non-overloaded version is really there to stay syntactically compatible with classic ASP.
    UPDATE: session loss problems can also result from a misconfigured application pool. For example, if the application pool your site is running in is configured as a web farm or a web garden (by setting the maximum number of worker processes to more than one), and if you're not using the session service or SQL sessions, incoming requests will unpredictably go to one of the worker processes, and if it's not the one the session was created on, the session is lost.
    The solution to this problem is either not to use a web garden if you don't need the performance boost, or to use one of the out-of-process session providers.
    Thanks to Frédéric Gareau for pointing that out.
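    For reference, here's a hedged web.config sketch of switching to the out-of-process state service (the address points to the local ASP.NET State Service; SQL Server sessions would use mode="SQLServer" instead):

    <sessionState mode="StateServer"
      stateConnectionString="tcpip=127.0.0.1:42424" />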
    UPDATE 2: Another thing that can cause similar problems is if your server has a name that contains underscores. Underscores are not allowed in host names by RFC 952 and may interfere with the ability to set cookies and thus to persist sessions.
    UPDATE 3: It appears like some bug fixes to Session have permanently fixed this problem. At least the one caused by the thread aborted redirect. Still, it is good practice to not abort the thread (and thus use the overload with the false parameter).

  • Troll Board

    If your post ended up here, it was off-topic, uninteresting, unoriginal, unargumented and / or not funny enough. In other words, congratulations, you're a troll.
    My own posts that are on this board were kept here to balance the trolling a little.
     
    Let the troll board begin:
     
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    http://www.microsoft.com/mscorp/facts
     
    That URL contains far more bullshit than the Oracle article..
    7/26/2004 5:59 AM | nofool
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Guys, before anyone posts anymore comments here please look at the bottom of this site and see what it's being served on (ASP.NET). That made it more clear to me why there is all of this MicroShaft propaganda on this site.
    7/26/2004 7:21 AM | Doesn't matter.
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Too funny! Quoting the Get The Facts pages as "proof" of TCO equality is a farce. Obviously, Microsoft funded the TCO study and surprise! it came out in their favour. Who runs Linux on a mainframe for file serving? In any case, you can run PHP on IIS. I know, because I do.
    7/26/2004 8:04 AM | General Protection Fault
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Just how cynical can you be! Security, to cite one example, has been a problem worth billions for Microsoft customers around the world FOR ABOUT A DECADE NOW. You call that prompt customer support?
    7/26/2004 3:58 PM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Anona: Just how much can you oversimplify this? Security has been a problem worth billions for EVERYONE for more than a decade, not only MS customers. It is also a problem for Apache customers, for Oracle customers, you name it.
    Just try the MS customer support. The response during the last virus/worm crises has been amazing. We've been helping countless customers to recover their machines and configure them so that they are properly secured.
    Yes, I call /that/ prompt customer support.
    7/26/2004 4:04 PM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Apesta!!, claro que van a defender a su inche ASP.NET, porque??, porque el sitio esta montado en Windows !! Guacala!.
     
    Y aunque les duela PHP5 es mejor!!! por donde le busquen!
     
    Viva PHP5!!!!!!!
    7/26/2004 4:33 PM | Anonimo
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "Security has been a problem worth billions for EVERYONE for more than a decade, not only MS customers."
     
    Utter nonsense. During the last decade, I or my Mac-using clients, for example, have NEVER had a virus, trojan, worm, spyware or adware problem. Not once. Did I say, not once? Contrast that to Windows user over the last decade. Are you telling me these are comparable situations? Please. Even your ex-CEO admitted your security problem. I'm not going to let you sweep it under the carpet.
     
    I didn't build Outlook or IE, you did. I didn't make the architectural choices that led to these abominable apps, you did. I didn't create the business model of "features before security", you did. I'm not the one who's trying the convince the computing public this is an acceptable/unavoidable state of affairs, you are.
     
    Frankly, the problem is not MS (you do what you do), it's the unbelievably mypoic IT drones for using such sloppy products.
     
    7/26/2004 4:33 PM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Anona: This is getting really tiring. I've read these arguments millions of times. We're not getting anywhere here. Of course there are more worms for Windows than there are for any other platform. It is the most common platform. Just check the numbers on Apache and IIS and you'll see that the perception of security is something completely different from the security itself.
    No, I did not write Outlook or IE. And please choose your words carefully. "Abominable"?
    For your information, I've been using Microsoft products for about 12 years. Outlook and IE have been my mailer and my browser for as long as I can remember (that is probably for as long as they existed), and I have NEVER had a virus, trojan, spyware or adware problem. Not Once. Did I say, not once? Same thing goes for my wife, who does not have any computer science education. Same thing for my mother, who is 65 and knows nothing about computers. Is my experience relevant? Probably as much as yours with your mac-using clients.
    Do not take individual experience for a generality. Things are not as simple as they seem to be in your head. Of course we need to improve on security because we are the leaders on this market, and that's what we're doing everyday.
     
    Please go post somewhere else.
    7/26/2004 4:46 PM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    your article is not better than the oracle's one -
     
    7/27/2004 12:25 AM | mattia
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    :)
     
    Article sucs! ORACLE sucs too.
     
    Incredible stupid article. :) They know nothing about ASP.NET!
     
    7/27/2004 1:11 AM | BlackTiger
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Bleroy:
     
    "I've been using Microsoft products for about 12 years. Outlook and IE have been my mailer and my browser for as long as I can remember (that is probably for as long as they existed), and I have NEVER had a virus, trojan, spyware or adware problem. Not Once. Did I say, not once? Same thing goes for my wife, who does not have any computer science education. Same thing for my mother, who is 65 and knows nothing about computers. Is my experience relevant? Probably as much as yours with your mac-using clients."
     
    A good quote. Perfectly sums up Microsoft's attitude towards security. And pretty much kills your credibility.
    7/27/2004 6:21 AM | Bob M.
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "This is getting really tiring"
     
    Yes, because you MS people parrot the same argument over and over again: don't blame us.
     
    "Of course there are more worms for Windows than there are for any other platform. It is the most common platform."
     
    How many worms are there for Mac OS? How many have there been in the last decade? Don't evade it, just answer it.
     
    The answer is not "fewer" it's "none."
     
    "And please choose your words carefully. 'Abominable'?"
     
    Any client/browser that has given so much grief to so many for so long couldn't be described otherwise.
     
    "Is my experience relevant? Probably as much as yours with your mac-using clients."
     
    So are you denying that there have been masive security problems with your OS/apps year after year? Has it come to that level of denial?
     
    "Do not take individual experience for a generality. Things are not as simple as they seem to be in your head."
     
    Let's see: Who's affected by the vast majority of security issues out there? MS users. It just doesn't get any simpler than that. What has MS done over the last decade to eradicate this? Not much.
     
    "Of course we need to improve on security because we are the leaders on this market, and that's what we're doing everyday.
     
    Let the record speak for itself.
    7/27/2004 10:46 AM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Mattia, Warren, etc.: if you have nothing more constructive to say than "your article suckz, php rulez, MS suckz", please go away.
     
    Bob: of course, you dropped the main part of the citation, which was that individual cases should not be taken for generalities. I'm only talking about verifiable things, you guys are talking about purely emotional things. I also said that we still had a lot of work to do, but of course, you didn't want to hear that, you only hear what conforts your system of beliefs.
    Just compare the number of security issues in IIS 5 and IIS6 and see how much progress we've made in just a few years on this huge code base. We're doing the same kind of work on Windows itself, and this will give XP SP2 and Longhorn.
     
    Anona: please do your homework before you post such preposterous nonsense. Open a web browser, go to Google, type "mac worm", click on the search button with your single-button mouse and click on the first thing that shows in the many answers:
    http://securityresponse.symantec.com/avcenter/venc/data/mac.simpsons@mm.html
    Oh, it's a worm, and it's for the Mac.
    Of course, a worm for MacOS won't get very far as there are so few macs. Like a virus that would target people with Vayron eyes.
    Why are there anti-viruses for the Mac by the way?
    I also did a search on Apple Mac OSX Server on http://www.securityfocus.com/bid/vendor/ and there are just too many vulnerabilities for me to bother counting. Do the same search on Windows Server 2003: there are 2.
    Of course I'm not denying that there have been a lot of problems, but like Bob, you don't want to listen to what I'm saying.
    Get me right this time: I'm not denying. What I'm saying is that we've already made a lot of progress (see the numbers for yourself: the record speaks for itself indeed) and that we're still working.
    You're saying that we haven't done "much" to solve security problems? How do you explain the numbers on security focus then?
    Get real. Get the facts.
    And go away.
    7/27/2004 11:19 AM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "Oh, it's a worm, and it's for the Mac."
     
    No kiddin'. The script can't use Entourage (or Mail.app for that matter) as a vector to send email without user permission, because MacBU people had the good sense to not allow that. Anything is theoretically possible, but in reality, how many sites did it affect? Symantec says, 0-2. Let's repeat that: 0-2 sites. End of story. This is the best you can come up with? Shame on you.
     
    "Of course, a worm for MacOS won't get very far as there are so few macs."
     
    There are more than 25 million Macs around. How many were affected by this worm, which supposely appeared on 0-2 sites? The Mac OS architecture and app policy is not pestilence-friendly like Windows. This is the best FUD you can come up with?
     
    "I'm not denying. What I'm saying is that we've already made a lot of progress..."
     
    When you start with such abysmal numbers you can only go up, I guess. The vast, vast majority of security problems in the last decade took place on Windows. It's still happening on Windows. And it wil still happen on Windows. That's a fact.
     
    "And go away."
     
    Why? The facts are interfering with your FUD?
    7/27/2004 11:56 AM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Anona: No, it's not the best I could come up with, I just took the first thing google gave me. I didn't took the time to look at it. The point is just that there ARE worms on the mac, there ARE vulnerabilities (some of them very serious, see securityfocus), and much more than on Windows Server 2003. And of course, no, there are not many worms because 25 million is just a ridiculously small number of machines to attack. It's much more efficient to target unprotected PCs, because yes, there are more unprotected PCs out there than there are Macs.
    Why is that? Not because Windows is unsafe in its current version: activate the built-in firewall, auto-update, and install an anti-virus, that's all there is to it. No, this is so because people don't patch their machines (yes, you have to patch any system, because security and attacks evolve, it's not a static thing, take integer overflows for example) and do stupid things. We have to educate our users as much as we have to make the system safer overall. Both are very important.
    You just won't listen. Is TWO an abysmal number? Just look at your own numbers. You're citing the Mac? Get a grip, just check the numbers, this is currently an unsafe system.
    Windows Server 2003 has had close to zero serious security problems. No other OS can show that kind of results (even FreeBSD 5).
     
    Go away because:
    1. This is my blog
    2. I don't want you here
    3. You're off-topic
    4. What you have to say has been said and answered a million times
    5. You're answering emotionally to verifiable facts
    6. I have better things to do than answer your messages (which I won't do any more from now on)
    7/27/2004 12:31 PM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "No, it's not the best I could come up with, I just took the first thing google gave me. I didn't took the time to look at it."
     
    In other words, you didn't do your homework, something you emotionally accused me of.
     
    "The point is just that there ARE worms on the mac..."
     
    Where?
     
    It's ironic that today, this very day, the Net is under attack from a MyDoom variant. Is this happening on the Mac?
     
    "It's much more efficient to target unprotected PCs.."
     
    I wonder why!! Is it because Microsoft has been shipping an unsecure-by-default OS called Windows for years?
     
    "Why is that? Not because Windows is unsafe in its current version: activate the built-in firewall, auto-update, and install an anti-virus, that's all there is to it."
     
    Make the user do the dirty work?
     
    "No, this is so because people don't patch their machines"
     
    Finally, finally, the ultimate excuse: blame the user!
     
    'Nuff said.
     
    7/27/2004 12:43 PM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Not blaming the user: I said that we had to educate the users, if you paid attention.
    The firewall and automatic patching are now activated by default.
    I don't have to do YOUR homework.
     
    Go away.
    7/27/2004 12:48 PM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "I said that we had to educate the users"
     
    Users don't need education, Microsoft does. Users haven't been shipping an unsecure-by-default OS/email cleint/browser, for years. Users don't need to be "educated" about the moronic security architecure of, say, ActiveX. Users haven't made those structural choices, you did.
     
    "The firewall and automatic patching are now activated by default."
     
    Thanks, for the admission of guilt. Unfortunately, this comes after having created untold numbers of unprotected PCs out there that are impacting untold millions of non-Windows users as well. We all have to suffer Microsoft's incompetence.
    7/27/2004 12:57 PM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Yes, users need education, because some choices are not obvious and keeping a system safe requires a little care from the user. Some things can't be automatic. For example, you still want to be able to install software on your computer. Some software that you may want to install may be dangerous. The system can warn you, but there is a point where it is your responsibility. Not rejecting the responsibility, just stating obvious stuff.
    We did release some code that had security problems, of course I'm not denying that, but so have absolutely every other software company in the world. We have an obligation to be better than anyone else, though, because we are the leaders. And that's precisely what we're doing, and the results we have show that we are successful at that. But you don't want to face the facts.
     
    Where are we today in terms of security when compared to the competition?
     
    Did I say all that already? Yes, but you won't listen.
    If you have nothing original to say, go away. Otherwise, be done with it and say it.
    7/27/2004 1:10 PM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "We did release some code that had security problems, of course I'm not denying that, but so have absolutely every other software company in the world."
     
    That's like a 500 lb person saying he had a few extra donuts and who has not.
     
    When was the last time tens of thousands of Mac machines around the world were shut down by a worm or a virus? This seems to happen with monthly regularity these days for Windows users. And you call this "some code that had security problems"?
     
    "We have an obligation to be better than anyone else, though, because we are the leaders."
     
    Leaders in what? Security? You are actually claiming leadership in security? Man, I thought 1984 was a fiction book!
    7/27/2004 1:23 PM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    Yes, we are currently leaders in OS market shares and in security techonlogy. Look at the freaking numbers and compare.
    I've already explained (as well as many other people) why a worm can't propagate efficiently on MacOS, but you won't listen.
    You're a troll, go away.
    7/27/2004 1:26 PM | Bertrand Le Roy
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "Yes, we are currently leaders in OS market shares and in security techonlogy. Look at the freaking numbers and compare."
     
    Yes, I'm looking at the number of PCs beset by security problems and comparing them to Mac machines. It turns out the "security technology" provide by the vendor (Microsoft) is shamefully ineffective in protecting the OS and the apps. After you put way all the FUD, mombo jumbo and blame-the-user stuff, the fact remains that Windows machines are less secure and more infected than any other platform, in absolute or in proportional numbers.
     
    That's some mighty leadership!
    7/27/2004 1:35 PM | Anona
    # re: Some comments on Oracle's comparison of PHP and ASP.NET
    "This is my technical / professional blog"
     
    There has got to be a better use of your time, rather than writing so much just about yourself and your opinions.
    7/27/2004 3:19 PM | Shaq
     
    Sure thing!
    Me

  • Some comments on Oracle's comparison of PHP and ASP.NET

    Oracle recently published an outrageous article in a rather strange attempt to convince people that PHP is the best platform to write web applications. Not ASP.NET, which is not surprising coming from Oracle, but not Java either, which is a little more puzzling.
     
    In this blog entry, I'm explaining what I thought when I read this paper. The disclaimer on the left applies, of course: these are my own opinions, and I'm not talking on behalf of my employer.
     
    This article is completely unreal. Arguing that PHP is preferable to ASP.NET is a very difficult exercise. I can imagine the marketing people at Oracle ordering this article and determining its conclusions even before it was written... It is even touching to see that the author can't hide all of ASP.NET's qualities nor all of PHP's problems.
    Let's read the article together and comment on it along the way.
     
First, the subtitle, "One developer's view of the pros and cons of the two most popular means of building web applications", should probably read more like "One PHP developer who has no clue what ASP.NET is reviews what he thinks are the pros and cons of the two most popular means of building web applications". I built applications for years with both systems before I was hired by Microsoft, so I can probably spot most of the many deliberate or inadvertent errors and inaccuracies in the text.
     
In the first paragraph, the author tries to convince us that PHP and ASP.NET fall into the same category of web platforms that "[embed] code into HTML pages with special tags that signal to a preprocessor that they contain code, and that it should do something with it". This is true of PHP, but not of ASP.NET, where the code can, but should not, be embedded in the HTML markup. Instead, a well-written page has a declarative or templated part (the aspx file) and some codebehind (or not) that orchestrates the controls and communicates with the other layers of the application, if there are any.
So whereas PHP follows the flow of the page and inserts dynamic text in some places, ASP.NET separates the code from the declarative markup and has a much richer page lifecycle. Most importantly, in ASP.NET, the execution flow is distinct from the markup's flow.
    This difference is absolutely fundamental and differentiates a platform that encourages spaghetti code from one that encourages good design and separation of concerns.
The author also tries, with this simple sentence, to have us believe that the "special tags" of PHP and ASP.NET are equivalent. Nothing could be more wrong: whereas PHP's tags merely delimit the server code from the rest of the markup, ASP.NET's tags are abstract representations of full-blown controls that embed complex behavior (treeviews, grids, etc.). One forces you to output raw HTML; the other lets you use familiar widgets like those you would expect from a desktop application framework. Using PHP to create web pages is a little like creating desktop applications with a tool that forces you to draw every control using dots and lines (just a little: I'm pushing the analogy, PHP is not THAT bad). Abstractions are good.
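To make the contrast concrete, here is a bare-bones sketch (hypothetical page and control names, using the v1 codebehind model; this is my illustration, not code from the article). The aspx file only declares markup and controls:
 
<%@ Page Language="C#" Inherits="HelloPage" %>
<html><body>
  <form runat="server">
    <asp:Label runat="server" id="GreetingLabel" />
  </form>
</body></html>
 
And the codebehind class orchestrates them, without a single angle bracket in sight:
 
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
 
// Hypothetical codebehind: the page above inherits from this class.
public class HelloPage : Page
{
    // Wired by the runtime to the <asp:Label> declared in the markup.
    protected Label GreetingLabel;
 
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // No HTML manipulation here; the lifecycle, not the markup, drives execution.
        GreetingLabel.Text = "Hello from the codebehind";
    }
}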
     
The second paragraph is very touching because the author explains to us why he is so biased. No comment.
     
    So what is ASP.NET to Oracle (except for a menace)?
     
    "ASP.NET works with scripted languages such as VBScript, JScript, Perlscript, and Python, as well as compiled languages such as VB, C#, C, Cobol, Smalltalk, and Lisp". Wrong. ASP.NET works only with compiled languages such as VB.NET and C#. There are .NET compiled versions of "scripting languages" like Perl, JScript or Python, though, and that's probably what caused this confusion.
     
    The next paragraph is more or less accurate, but let's note for later that the author is aware of the fact that the .NET class library contains classes that do "image manipulation". Later, he'll tell us that "with ASP, however, you're investing from the very beginning, and you're spending for add-on technologies—libraries for doing graphics manipulations, for instance".
     
    "in ASP.NET, integration with databases can be accomplished through ODBC". Technically, he's not lying, here, but he fails to mention that ODBC is just one of several ways to access a database, the one that should be used only if all other options are impossible. There are direct providers for Sql Server and Oracle, and third parties offer native providers for all major databases (including an Oracle provider for Oracle databases, in addition to the MS provider). Most importantly, these APIs derive from a common base, which makes it almost equivalent from a developer's point of view to develop against Sql Server or Oracle, or any other database. Whidbey also makes it a lot easier to make your code database-agnostic (like, I have to admit, Mono did before us). More on this later.
     
    "ASP.NET's strength lies clearly in its clean design and implementation. It is an object-oriented programmer's dream, with language flexibility, and with sophisticated object-oriented features supported. In that sense, it is truly interoperable with your programmers' existing skills. Another strength of ASP.NET is the development environment." Say no more. ASP.NET rules! Oracle says so!
     
But don't worry, the next silliness is already in view: "But what you gain in robustness, you pay for in efficiency. ASP.NET is expensive with respect to memory usage and execution time, which is due in large part to a longer code path."
Oh? Really? What backs up these gratuitous claims? Execution time? Can we have some pointers to check that? Because as far as I know, a web platform that is natively compiled and that has built-in page and fragment caching is very likely to be faster than a scripted, non-cached platform. Of course, you can compile PHP using a free tool, but it's an afterthought. And you have to pay for the page caching solution, whereas it comes for free with ASP.NET.
I can see where the memory thing comes from. It is true that if you open the task manager on a server that's running an ASP.NET web site, you could be a little frightened by the amount of memory ASP.NET uses, if you know nothing about servers. Guess what: memory that's not used is useless. The rational thing to do on a server is to use the memory you have (to cache stuff, for example). The quantity of memory ASP.NET uses can be configured in machine.config if you feel you can tweak it better than the default setting (which you usually can't; that's why it's the default setting).
    The performance of ASP.NET is certainly sufficient for "small traffic" sites such as Microsoft.com, MSN, match.com, etc...
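To be concrete about the caching claim: page and fragment caching in ASP.NET are declarative. A single directive at the top of a page (or of a user control, for fragment caching) is enough to have the rendered output served from cache; for example (a minimal sketch, duration in seconds):
 
<%@ OutputCache Duration="60" VaryByParam="None" %>
 
Nothing to buy and nothing to bolt on afterwards: VaryByParam controls how many cached versions are kept, and the runtime does the rest.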
     
    The "What is PHP" section is focused on database access, but fails to mention Sql Server as a possible database for PHP (this is an Oracle paper, after all).
    It tries to convince you that database abstraction is bad (just to tie you to Oracle, but you got that part yourself) because you so badly need these marvelous Oracle features: LOB, BLOB, CLOB and BFILE.
This coming from the same people who will explain later that OS independence is an absolute necessity...

    So let's summarize:
Oracle dependence: Goood!
Microsoft dependence: Baaaad!

Seriously, database independence is an important feature for many modern applications.
    Due to the uncoordinated development by different teams, the database access libraries in PHP have been notoriously inconsistent to the point where the code you'd write to access a MySql database is different from the code you'd write to access a PostgreSQL database. Not in the SQL queries, which is more or less normal, but in the actual PHP code! So other people have developed so-called database abstraction layers (dba, odbc, etc.)... which do not work with all databases, and are of course largely inconsistent with one another as well as with any specialized provider.
     
    In the "strengths and weaknesses", we only see weaknesses, except for platform independance (but not database independance), open-source development (if you happen to consider that as a strength), and "a smaller code path," whatever that means.
    He misses a few other important weaknesses, like the fact that its library is terribly messy, being a function library instead of a hierarchical class library like that of .NET or Java, and having horrible names (can you guess what readline_completion_function does? It "Registers a completion function". Yes, I know, that's not very much clearer, but this is the kind of documentation you get with PHP: no sample, no clear explanation).
The author then goes on to show us how great the new PHP5 is (when, truth be told, it barely gets to where Python was years ago). The code example is absolutely hilarious. Anyone writing this kind of code in a job interview with me would be politely but immediately shown the door. I have to show you:
     
    class blue {
     
      function openFile ($inFile) {
        if (file_exists ($inFile)) {
          # code to open the file here
        } else {
      throw new Exception ("Cannot open file: $inFile");
        }
      }
    }
     
    $blueObj = new blue ();
     
    try {
      $blueObj->openFile ('/home/shull/file.txt');
     
    } catch (Exception $myException) {
      echo $myException->getMessage ();
     
      # rest of exception handling code here
    }
     
    Do you really think you should throw an exception to test a perfectly normal application error condition? Shouldn't you throw and catch something more specific than Exception? Shouldn't openFile be static? This code sucks. Just write this instead:
     
    $fileName = '/home/shull/file.txt';
if (file_exists($fileName)) {
  # work with the file
    } else {
      echo "File: $fileName does not exist";
    }
     
If this is how you explain the benefits of OOP and structured exception handling to PHP users, we'll just get unmanageable and incomprehensible object-oriented spaghetti code instead of plain unmanageable spaghetti code.
     
    I'm skipping the "security comparison" FUD for now, I'll get back to it later. Let's go directly to the "database coding examples" section.
     
    "With ASP.NET, however, it's a little more complicated, because you have the option of any of a number of languages to choose from." How that makes it more complicated and how it has anything to do with database programming elude me completely.
    Let's look at the code sample. The PHP code does absolutely nothing except create and destroy a database connection (please note the "very elegant" error handling code, though). The destructor prints a useless message for no identifiable reason.
     
    class oracle_object {
      protected $theDB;
      protected $user;
      protected $pass;
      protected $db;
     
      function __construct($u, $p, $d) {
        $this->user = $u;
        $this->pass = $p;
        $this->db = $d;
      }
     
      function db_open () {
        $theDB  =  @OCILogon($this->user,  $this->pass,  $this->db);
        db_check_errors($php_errormsg);
      }
     
      function db_close() {
        @OCILogoff($theDB);
        db_check_errors($php_errormsg);
      }
     
      function __destruct () {
        print ("so long...");
      }
     
    }
     
    Many things can be said about this code: the fields are not encapsulated, and it is generally not a good idea to open a connection if you're not going to use it right away (because of connection pooling), so if you write a database access helper class, opening and closing the connection should be done just around the request to the database itself. At least in .NET where the connection pool is automatically managed.
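Here is, as a hedged C# sketch of my own (not code from the article), what "opening and closing just around the request" looks like with the same Microsoft Oracle provider and query as in the VB.NET sample quoted below:
 
using System.Data.OracleClient;
 
class CustomerHelper
{
    private readonly string connectionString;
 
    public CustomerHelper(string connectionString)
    {
        this.connectionString = connectionString;
    }
 
    public void WriteCustomers()
    {
        // The connection lives only as long as the query it serves;
        // connection pooling makes the repeated Open/Close cheap.
        using (OracleConnection connection = new OracleConnection(connectionString))
        using (OracleCommand command = new OracleCommand(
            "SELECT CUSTOMER_ID, NAME FROM DEMO.CUSTOMER", connection))
        {
            connection.Open();
            using (OracleDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    System.Console.WriteLine("{0}\t{1}", reader.GetInt32(0), reader.GetString(1));
            }
            // Disposing the connection returns it to the pool.
        }
    }
}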
And now, the VB.NET code from the article, which is supposed to be equivalent to the PHP class above:
     
    Imports System
    Imports System.Data
    Imports System.Data.OracleClient
    Imports Microsoft.VisualBasic
     
    Class Sample
     
      Public Shared Sub Main()
     
        Dim oraConn As OracleConnection = New OracleConnection("Data Source=MyOracleServer;Integrated Security=yes;")
     
        Dim oraCMD As OracleCommand = New OracleCommand("SELECT CUSTOMER_ID, NAME FROM DEMO.CUSTOMER", oraConn)
     
        oraConn.Open()
     
        Dim myReader As OracleDataReader = oraCMD.ExecuteReader()
     
        Do While (myReader.Read())
          Console.WriteLine(vbTab & "{0}" & vbTab & "{1}", myReader.GetInt32(0), myReader.GetString(1))
        Loop
     
        myReader.Close()
        oraConn.Close()
      End Sub
    End Class
     
Why are there skipped lines in there? To make the code seem longer? And who wouldn't notice that this code does a lot more than the PHP code? It opens a connection, queries the database, and outputs the results before closing the connection. So what does this prove? Absolutely nothing.
     
    It should be pointed out that displaying database data in a table in ASP.NET Whidbey is as simple as that:
    <asp:SqlDataSource runat="server" ID="myDataSource" DataSourceMode="DataReader"
      ConnectionString="<%$ ConnectionStrings:MyOracleConnectionString%>"
      SelectCommand="SELECT CUSTOMER_ID, NAME FROM DEMO.CUSTOMER" />
    <asp:GridView runat="server" ID="MyGridView" DataSourceID="myDataSource"/>
     
Of course, this is the quick and dirty solution, and you can substitute an ObjectDataSource for the SqlDataSource if you have properly defined your own DAL, business, and service layers.
     
    Now, let's "make a choice"... The author pretends to think that "[PHP's] only weakness is its lack of a pure and perfect OOP implementation". Err, see above. He then says "Though language constructs do help, ultimately, good coding is a matter of practice, execution, good habits, and discipline". Sure, but what if you are incapable of that? I'm not pointing to anybody... Oh well, yes I am.

We can now read a very informative (not!) table "summarizing" the weak and strong points of each platform. The criteria that have been chosen are completely arbitrary, as are the "values" in the table (what do $$, weak, or strong mean? Is it something I can measure? How much is $$?). We also note that ASP.NET security is rated "strong", whereas one of the main points against it, according to the author, is precisely security. Consistency, anyone?
     
Price. ASP.NET is free, and the TCO of Windows Web Server Edition compares favorably with that of a LAMP approach (see http://www.microsoft.com/mscorp/facts).
     
    Speed. I really don't know. I have yet to read a performance study that compares PHP and ASP.NET performance. If anyone knows one, I'd be happy to talk about it. The article does not point to such a study. PHP has a reputation for speed, as does ASP.NET.
    "Speed is not the only consideration. Memory usage is also important." See above? Why is that important? We won't find out from the article.
     
    Security. This is my favorite part. After all the usual FUD about IIS security, the author gives us a link to a site that proves him wrong. This is very nice of him. Let's follow the link to www.securityfocus.com and do less than 5 minutes research. First, let's do a search on IIS 6. The first article that comes out has this to say about IIS:
    "[IIS] provides a reliable and secure infrastructure to host web applications."
    Then, let's look for vulnerabilities: choose Microsoft / IIS / 6.0. Results:
1 (One!) cross-site scripting vulnerability in a web administration tool that's not even installed by default,
and three for ASP.NET.
     
    OK, let's do the same for Apache 2. Results:
25 (Twenty-five!) vulnerabilities, including DoS and buffer overflows
    Wow, that's a lot! Let's look for PHP 4 now... Results:
    19 (Nineteen!) vulnerabilities, including integer overflows, arbitrary file disclosures, cross-site scripting, etc.
     
Is this guy so stupid that he really thinks no one will click his link and verify what he claims? Or does he think his readers are stupid? In either case, I wouldn't give him a web site to develop...
     
    Cross-platform applicability. Sure, if that's really paramount to you, choose J2EE ;) at least for the moment...
     
    Open-source opportunity. Sure, if that's important to you. If consistency and accountability are more important, then I guess that's a different story.
     
    And of course, one thing you won't hear about in Oracle's article is developer productivity. ASP.NET is the platform that will make your web developers the most productive, because it manipulates higher level abstractions, it handles all the plumbing for you and it encourages reusable code. But Oracle doesn't want you to know about that.

  • Are the UI layers disposable or should they be as easy to maintain as other layers?

The discussion began in French on the www.dotnetguru.org web site, but was unfortunately deleted by the administrator of the site because of a few aggressive comments.
I wish to continue this discussion here.
I'll post my own reflections as soon as I have time to rewrite them or the DNG admin sends the deleted thread back to me.
Please feel free to post your own, and stay courteous. I'll delete all offensive comments, but only those.
     
Update 6/23/2004 19:00: Sami Jaber contributed to the debate through a blog entry. Thank you Sami (I would have liked to get my texts back, but I appreciate the effort). I'll try to answer his arguments:
Sami explains that the UI layers are less stable because the lifetimes of the technologies that support them are supposedly shorter than those of the other layers. He cites:
- In the Java world, Servlets -> JSP -> Struts -> JSF, that is, 4 (r)evolutions in about 6 years. Well, I won't argue about the instability of the Java world, but no one is forced to follow every new trend.
- POJO components (a relatively recently resurrected but obvious concept: make it simple) implemented 6 years ago have remained stable, except if they followed the EJB specifications (two evolutions). Sure, an object is an object, and if it has no external dependencies, there's no reason why it would have to change. But this is of course an asymptotic goal...
- On Windows, we had MFC, then WinForms, and now Avalon. Sure, and what was the longevity of these technologies? Well, MFC is not dead, but between it (1992) and WinForms (2001), 9 years passed. Avalon is not due before 2006, which gives WinForms at least 5 years of longevity (assuming everyone instantly migrates to Longhorn, which I'd like, but which is not very likely). Those are technology lifetimes that compare very well with the lifetimes of the technologies underlying the other layers.
- The same goes for Microsoft web technologies: ASP (around 1997, IIRC) lasted about 4 years before being replaced by ASP.NET.
- Sami argues that writing UI layers is very complex and that it is very difficult to achieve any kind of reusability. Well, I absolutely can't agree with that. First, I was a web developer for years before being hired by Microsoft, and reusability of UI components is one of the things I've been most successful at, through many advanced WebControls and the MagnitSite content management platform. WebControls are a major innovation that enables great reusability of UI elements, and Whidbey goes even further in that direction, reducing the boilerplate code needed to orchestrate the controls to almost nothing. Now, of course, you still have to write code for the specific interactions between your graphical components, but that is also the case for the other layers. It doesn't mean there is any reason why the UI would be less manageable.
- A comment on Sami's blog points out that an IT person who decided to migrate each application to each new trendy technology would be a fool. This is absolutely true, and the key is interoperability. I personally have NEVER migrated a UI to a new technology. All my classic ASP sites remained classic ASP, and I developed only new applications with ASP.NET. They were able to interoperate, though, and that is what really matters. On the other hand, I've had numerous migrations of data layers to new versions or to different databases.
     
    Comments anyone?

  • Do data source controls belong on the page?

I get a lot of feedback on this subject (see this post if you have time, for example). More and more developers are now finally getting the multi-layered application architecture concept, which is a great improvement over the situation we had even five years ago. So many of them, the first time they see data source controls on a page, go "WTF is this doing in my UI layer?", even though the ObjectDataSource is there to make them feel better about it.
    Well, first of all, in ASP.NET, the Page is not the UI layer exactly. It contains the UI (the Template View, that is, the CodeFront), but it also contains some form of controller or rather Page Controller (see Martin Fowler's Patterns of Enterprise Application Architecture). So it's actually more an application surface than a simple UI surface.
    But it is also wrong to see the CodeFront as the UI and the CodeBehind (or CodeBeside) as the controller. You should see it more as the declarative part and the procedural part of the same object.
So what did we have in v1? To bind a control to data, you had to do it from the procedural part of the page. If you were doing it quick and dirty, you instantiated a Connection and a Command or DataAdapter, and filled a DataSet or DataReader with it. Then, you attached this DataSet or DataReader as the data source of your controls and called DataBind. If you were doing multi-layered development, you instantiated objects and bound them to the controls in pretty much the same way. It should be noted at this point that if you wanted to prototype a quick-and-dirty page and later migrate it to a multi-layered page, you had to rewrite a large part of this boilerplate code. The designer made all this a little more confusing by displaying some of the procedurally defined components on the designer surface despite the fact that they were nowhere to be seen in the CodeFront markup.
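For reference, the quick-and-dirty flavor of that boilerplate looked more or less like this (a from-memory C# sketch with made-up names, not taken from any particular sample):
 
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI;
using System.Web.UI.WebControls;
 
// Hypothetical v1-style page: all the binding plumbing lives in procedural code.
public class CustomersPage : Page
{
    // Declared as an <asp:DataGrid> in the aspx file.
    protected DataGrid CustomerGrid;
 
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (!IsPostBack)
            BindCustomers();
    }
 
    private void BindCustomers()
    {
        SqlConnection connection = new SqlConnection(
            "server=(local);database=Demo;integrated security=SSPI");
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CUSTOMER_ID, NAME FROM DEMO.CUSTOMER", connection);
 
        DataSet data = new DataSet();
        adapter.Fill(data);              // Fill opens and closes the connection itself.
 
        CustomerGrid.DataSource = data;
        CustomerGrid.DataBind();
    }
}
 
Swap the DataSet for your own business objects and you get the multi-layered variant, but the surrounding plumbing stays pretty much the same, which is exactly the copied-everywhere code I'm talking about.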
    As framework developers, every time we see code that's copied all over any application with little variations, we have to ask ourselves if we couldn't make it declarative.
    And that's what data source controls are: a declarative way to bind controls to data. Is anyone shocked by the presence of jsp:useBean tags in a JSP page? Well, you shouldn't be any more shocked by the presence of a data source control in an ASP.NET page. On the other hand, what's wrong is procedural code in the declarative part, and you should avoid this as much as possible (it is IMHO a great design flaw in JSP to define procedural markup).
    By going from the procedural part to the declarative part, the data-binding code did not change layers, it just migrated to a different part of the same object.
    The end result is improved productivity as you don't have to rewrite all this boilerplate code. You will also quickly notice that the migration from quick-and-dirty SqlDataSource to an ObjectDataSource is really easy as there is no source-specific code. All the visual controls see is a data source, they don't have to know where the data came from. All you have to really change is the data source itself.
But the data source controls have additional advantages. My favorite is parameters. You can add parameters to any data source; they allow you to declaratively filter the source's data according to a query string value, a form field, a control value, or an arbitrary object value. Having a DropDownList filter the contents of a DataGrid has never been so easy: you can build such a page without writing a single line of code.
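As a sketch of what that looks like (made-up table, column, and control names; the exact syntax may differ slightly between Whidbey builds), the DropDownList-filters-a-grid page can be written entirely declaratively:
 
<asp:SqlDataSource runat="server" ID="CategoriesSource"
  ConnectionString="<%$ ConnectionStrings:MyConnectionString %>"
  SelectCommand="SELECT CategoryId, Name FROM Categories" />
<asp:DropDownList runat="server" ID="CategoryList" AutoPostBack="true"
  DataSourceID="CategoriesSource" DataTextField="Name" DataValueField="CategoryId" />
 
<asp:SqlDataSource runat="server" ID="ProductsSource"
  ConnectionString="<%$ ConnectionStrings:MyConnectionString %>"
  SelectCommand="SELECT ProductId, Name, Price FROM Products WHERE CategoryId = @CategoryId">
  <SelectParameters>
    <asp:ControlParameter Name="CategoryId" ControlID="CategoryList" PropertyName="SelectedValue" />
  </SelectParameters>
</asp:SqlDataSource>
<asp:GridView runat="server" ID="ProductsGrid" DataSourceID="ProductsSource" />
 
The ControlParameter is the interesting part: it feeds the selected value of the DropDownList into the second source's query, and the GridView simply rebinds on postback.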
    I'm currently writing a web site with Whidbey, and my goal is to have zero code in the web site project itself. It features declaratively interchangeable data stores and a fully skinnable UI. Having zero code in the web site is not a contrived exercise, it's actually promoting good design and the good news is that it's amazingly easy to do in ASP.NET v2.
    So I'll say it loud and clear: ASP.NET 2.0 promotes good design.

  • VB.NET has "Using"! Hurray!

    While looking for something completely different, I found this in the MSDN documentation for Whidbey. VB.NET now has Using, which was one of the many constructs that C# had and that were missing in VB.NET.
Let me remind you what Using is. If you're using a resource that needs to be disposed of, like a connection, a stream reader, or some weird unmanaged COM object, you typically have to write something like this:
Dim A As SomethingThatImplementsIDisposable
Try
  A = New SomethingThatImplementsIDisposable
  ' Do something with A
Finally
  If Not A Is Nothing Then
    A.Dispose()
  End If
End Try
    Well, to do the same thing in C#, you would do this:
    using (SomethingThatImplementsIDisposable A = new SomethingThatImplementsIDisposable()) {
      // Do something with A
    }
    And now, in VB.NET 2005, you can do this, which is pretty much the same thing as in C#, except for the curly brackets:
    Using A as new SomethingThatImplementsIDisposable
      'Do something with A
    End Using
This is very important because getting into the habit of using Using whenever possible not only makes your code simpler, it also makes it less error-prone. And unreleased resources are among the toughest bugs to spot, because the problem does not appear during development but a lot later, usually when the application goes into production (if you're careless enough not to do any stress testing before release...) or even much later. The resources do eventually get released, but only during garbage collection.
As a rule of thumb, when you see yourself writing A.Dispose(), you should ask yourself if you can replace it with a Using block, whether you develop in C# or VB.NET.

  • It's alive!

    The beta 1 of Whidbey can now be downloaded by anyone. At last, people will be able to experience first-hand what all the fuss is about.
    We're very excited about this new set of tools, and especially about the Express stuff that will enable non-professional developers to discover the platform and share the fun.
    We still have to figure out how to use BitTorrent as a deployment medium. Perhaps beta2? For this one, you'll have to be patient as a lot of other people are downloading the bits right now.
     
    Download one of the five Express Visual Studio versions and Sql Server Express from there...

  • Dude, where's my checkbox?

Thomas, one of our users and the author of a soon-to-be-released open-source survey tool, submitted this puzzle to me today.
He has a composite control with a TextBox and a CheckBox (checked by default) as its child controls. He was wondering why, in LoadPostData during a postback, his TextBox had its posted value available in the Text property, whereas the CheckBox was still in its default checked state.
Well, first of all, the new value of any control is not available during all stages of the page lifecycle. That's what the lifecycle is about, actually. In particular, there is no guarantee as to the state of any control during LoadPostData.
    Now, why does the TextBox already have its state restored, whereas the checkbox is still in its default state? What's different in these two controls that would make them behave differently?
If you debug into this, the first thing you notice is that LoadPostData is called first for the TextBox, then for the composite control, and finally for the CheckBox, no matter what the order of the controls in the control tree is.
    This can look strange, but the answer lies in the private Page.ProcessPostData method (Reflector is your friend if you really want to look at the source code). This method starts by scanning each field in the POST data. For each field, it looks for a control with the same unique ID and calls LoadPostData on it if it finds it.
    Then, and only then, it scans the list of controls that registered for postback treatment but have no POST field, and calls LoadPostData on them.
    Of course, an unchecked checkbox does not send a POST field at all, which is where it's different from a TextBox that returns an empty field if it's empty.
    And this explains why LoadPostData is called on the checkbox after it's called on our composite control, and the checked state of the checkbox is wrong at this point.
    So what can you trust from LoadPostData? Well, the method has a postCollection argument (some kind of copy of Request.Form) that is perfectly safe and clean to use, and this is where you should get your state data from.
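To make that concrete, here is a hedged sketch of a custom control that restores its own state from postCollection (control and property names are made up, and rendering is omitted for brevity):
 
using System.Collections.Specialized;
using System.Web.UI;
using System.Web.UI.WebControls;
 
// Hypothetical control implementing IPostBackDataHandler.
public class MyValueBox : WebControl, IPostBackDataHandler
{
    public string Value
    {
        get { object o = ViewState["Value"]; return (o == null) ? "" : (string)o; }
        set { ViewState["Value"] = value; }
    }
 
    public bool LoadPostData(string postDataKey, NameValueCollection postCollection)
    {
        // Read only from postCollection: sibling controls may not have had
        // their own LoadPostData called yet at this point in the lifecycle.
        string posted = postCollection[postDataKey];
        if (posted != null && posted != Value)
        {
            Value = posted;
            return true;    // Tells the page to call RaisePostDataChangedEvent later.
        }
        return false;
    }
 
    public void RaisePostDataChangedEvent()
    {
        // Raise a ValueChanged event here if the control exposes one.
    }
}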

  • Some Whidbey feedback

This is the transcript of an email conversation I had with Alister, a.k.a. SomeNewKid2, one of the most active users on the www.asp.net forums. He knows ASP.NET very well and was uneasy about the general spirit of what we’ve been doing for the next version. I hope I reassured him, and I’m sure this conversation can be very interesting for our other users who may have similar concerns. It’s quite long, but I think it’s worth reading.