Reflection: Is It Worth The Cost?
Back in March I wrote an article for www.TheServerSide.net about using validators in the middle tier. The article, Validators In The Middle-Tier, shows how declarative programming techniques let you write code such as the following (Download the Validation Framework Here):
public void SomeMethod( [RegExAttribute("[aeiou]", RegexOptions.None)] string someParameter )
{
    // Class to perform validation.
    MethodValidator validator = new MethodValidator(MethodBase.GetCurrentMethod(), someParameter);
    validator.Validate();
    // yada...yada...yada
}
The technique heavily leverages the .Net System.Reflection namespace. Since then, a number of people have asked me about the cost of using Reflection. The simple answer is that Reflection does cost you a performance hit. However, the benefit of a common validation scheme outweighs that cost. Declarative programming with attributes is a form of Design-By-Contract: by nailing down what your system considers acceptable input, you protect it from unanticipated data as well as from malicious attacks by people looking for holes in your system. To me, the cost of using Reflection is a drop in the bucket. Still, it is a fair question, and one we must always consider. For example, I don't advocate using the validation technique at every level of your software; I would concentrate on the main entry points into the system. Your lower levels should, for the most part, let exceptions bubble up to the top anyway.
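If you want to put a number on that performance hit for your own environment, a micro-benchmark along these lines will do it. The class and method names here are made up for illustration, and the actual timings will vary by machine and runtime version:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

// Rough micro-benchmark: a direct call versus MethodInfo.Invoke
// for the same method, repeated many times.
public class ReflectionCost
{
    public static int Add(int a, int b) { return a + b; }

    public static void Main()
    {
        const int iterations = 1000000;
        MethodInfo addMethod = typeof(ReflectionCost).GetMethod("Add");
        object[] args = new object[] { 1, 2 };

        Stopwatch direct = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            Add(1, 2);
        direct.Stop();

        Stopwatch reflected = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            addMethod.Invoke(null, args);
        reflected.Stop();

        Console.WriteLine("Direct calls:     {0} ms", direct.ElapsedMilliseconds);
        Console.WriteLine("Reflected calls:  {0} ms", reflected.ElapsedMilliseconds);
    }
}
```

Run something like this at your system's entry points and decide for yourself whether the difference matters against the work the method is actually doing; for a method that then goes off and hits a database, the Reflection overhead disappears into the noise.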
However, as architects and developers we are always making this trade-off. Much of the software we use today is a balance between maintainability and performance. You needn't look further than your existing .Net development environment for proof: .Net is slower than many of the languages that compile to machine code. WebServices are another prime example. A WebService call is considerably slower than a native .Net call, let alone a native C/C++ call. I get frustrated when people treat WebServices and SOA as the end-all answer to everything. SOA architectures have explicit boundaries, and the cost of crossing those boundaries is not small. However, SOA and WebServices let you cross platform boundaries and reuse business logic, so they definitely have a place in our world.
N-Tier development is another prime example of performance versus maintainability. Breaking up our software enables us to reuse it; however, if we wanted truly performant code, how much of this would we really do? Would we load another assembly in our BLL just to make a database call? Well, yes, we would, because we have become so accustomed to doing so; but if performance were our only objective, would we?
It is an age-old story: as machine speed increases and memory becomes cheaper, we move closer to tier purity, making performance sacrifices in our quest for it. Twelve years ago we were battling the 640K memory barrier in DOS and willing to make many compromises to minimize our use of the UMB (upper memory block). In the Java world today, people are battling over the static versus dynamic model for Aspect-Oriented Programming (static weaving is evaluated at compile time; dynamic at run-time). My guess is that in 5 years no one will really care. As for Reflection: I don't think we should use it willy-nilly, but if it helps, use it now.
-Mathew Nolton