The Shadowfax Architecture
As we approach the first Technical Preview of the MS .NET Reference Architecture codenamed “Shadowfax”, and now that we have a stable, less volatile code base, I will start sharing my ideas and insights about different aspects of the architecture.
Beyond all the SOA hype (there are tons of lines about that, and a good one in Clemens' blog), here we have a real-life, business-ready (or at least almost ready) implementation of a comprehensive, sound architecture. So if you are one of the followers of the patterns & practices people (aka the PAG group), with all of their guides and ready-to-use Application Blocks, I bet you have been wondering when these guys would finally cook up a terrific managed-code business architecture to use in an enterprise-wide solution.
Well, the time has come and Shadowfax will soon be out on the street. But wait: if you are thinking that this is a bunch of Application Blocks glued together into a nice academic model with some cool features, you are completely missing the point. Of course Application Blocks are heavily used, but there is a lot more to consider, and those are the things I will be posting about here. One of the interesting (IMO) points of this project is its origins.
In the first phases of the project, that is, the envisioning and design phases, the architects decided to gather the best experiences from successful .NET enterprise architecture implementations by MS partners and MS Consulting groups from all over the world. They took the best three (as far as I know, and one was our successful MBI) and, with all this knowledge in hand, started to design a brand new model to meet several goals, as we will see next. This last point is crucial because all of these proven architectures solve real customer requirements, and that is a paramount objective of the Shadowfax architecture.
Before going further, let me make clear that by now there is enough documentation material to fill probably hundreds of pages on this architecture, so a blog entry cannot come even close to giving the complete picture. I will touch on some topics very briefly and delve into deeper, fine-grained observations on others. Recall that you can find valuable information at the Shadowfax Workspace, as well as some interesting threaded discussions and user feedback.
Architectural Requirements and Considerations
The Shadowfax services framework architecture has been developed to meet a number of key architectural requirements and goals. Here are some of the main requirements:
· Enable separation between stable service interfaces and possibly volatile and unreliable service implementations.
· Make it possible for developers to keep handler-like logic, for example monitoring or auditing logic, separate from service implementation logic. This is carried out in the so-called Handlers that are executed in the pipeline. (Don’t worry if you get lost with these concepts; we will come back later with all the inner details.)
· Help developers build robust services that can be accessed by client applications through multiple transports, while handling the service requests in a single, consistent manner. Several transports are supported, such as Web Services, MSMQ, Remoting, and DCOM.
· Establish a level of indirection between service invocation and service implementation to buffer application changes. This is addressed by the “double pipeline” model.
Since I don’t want to dwell too long on the overall architecture description, I prefer to stop at my favorite topics and the subsystems that comprise the project.
Since I worked on the design and implementation of several blocks, I’m rather biased toward writing about them, so I will start by listing them (highlighted in bold) and leave their full description for future posts. I’ll also be writing on topics like performance issues and security considerations for each of these modules.
Starting from the client view, the first endpoint is the Service Interface. The components of the Shadowfax services framework help expose a service interface over multiple transports. A service request sent from the client may arrive over different transports such as a Web Services transport, a message queue transport or a .NET Remoting transport (even a DCOM transport is being considered).
The transport’s listener receives the request message (the request format may vary depending on the transport), puts it into an envelope called the context, and passes the context to an instance of what is being termed a Service Interface pipeline.
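To make that flow concrete, here is a minimal sketch of what a listener might look like. The Context and IPipeline types and the WebServiceListener class are my own illustrative stand-ins, not the actual Shadowfax types:

using System.Collections.Generic;

// Illustrative envelope: the listener normalizes transport-specific requests
// into a common context that travels through the pipeline.
public class Context
{
    public string ServiceName { get; set; }
    public byte[] RequestMessage { get; set; }
    public byte[] ResponseMessage { get; set; }
    public IDictionary<string, object> Properties { get; } =
        new Dictionary<string, object>();
}

public interface IPipeline
{
    void Execute(Context context);
}

// Hypothetical Web Services listener: receives the raw request, wraps it in
// a context, runs the Service Interface pipeline, and hands the response
// back to the transport.
public class WebServiceListener
{
    private readonly IPipeline _serviceInterfacePipeline;

    public WebServiceListener(IPipeline serviceInterfacePipeline)
    {
        _serviceInterfacePipeline = serviceInterfacePipeline;
    }

    public byte[] ProcessRequest(string serviceName, byte[] requestMessage)
    {
        var context = new Context
        {
            ServiceName = serviceName,
            RequestMessage = requestMessage
        };

        _serviceInterfacePipeline.Execute(context);
        return context.ResponseMessage;
    }
}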
In a nutshell, the pipeline consists of a sequence of message (request) pre-processing Handlers, followed by a Target, followed by a sequence of message (response) post-processing handlers.
A handler is a block of code that satisfies some cross-cutting concern such as authentication, logging or transaction support (in fact there are quite a few built-in handlers that come out of the box, more than a dozen at the time of this writing). Handlers come in three flavors: Atomic, StatefulAround and StatelessAround handlers. StatelessAround handlers are executed both before and after the target is executed. Atomic and StatefulAround handlers occur only once in the flow. The latter execute in a chain-like fashion where each handler calls the next one in the chain and waits for the response in a synchronous scenario.
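The following sketch (reusing the made-up Context and IPipeline types from above) shows one way the three handler flavors could be expressed; these interfaces are assumptions for illustration only, not the real Shadowfax contracts:

public interface IAtomicHandler
{
    // Runs exactly once at its position in the pipeline.
    void Execute(Context context);
}

public interface IStatelessAroundHandler
{
    void PreProcess(Context context);   // request (inbound) leg
    void PostProcess(Context context);  // response (outbound) leg
}

public interface IStatefulAroundHandler
{
    // Wraps the remainder of the chain: the handler calls the next link
    // itself and waits synchronously for the response, so it can keep
    // per-request state across the call.
    void Execute(Context context, IPipeline remainderOfChain);
}

// Example cross-cutting concern: a monitoring handler that times the call.
public class MonitoringHandler : IStatefulAroundHandler
{
    public void Execute(Context context, IPipeline remainderOfChain)
    {
        var watch = System.Diagnostics.Stopwatch.StartNew();
        remainderOfChain.Execute(context);
        watch.Stop();
        context.Properties["ElapsedMilliseconds"] = watch.ElapsedMilliseconds;
    }
}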
There are two pipeline implementations that provide great flexibility for physical/logical layer separation (typically an IIS server in the DMZ running the first pipeline and an internal application server running the second) and introduce a further "layer of indirection". However, if the application does not require a distributed scenario, a simpler approach may be used and the first pipeline alone will suffice to accomplish the target execution.
In a double pipeline scenario, the first pipeline, the Service Interface pipeline, is transport-specific and focuses on service boundary handlers such as authentication, monitoring and request message validation. Shadowfax is highly configurable, so a pipeline can be configured to process any group of aspects in any order. The target of the first pipeline is a port. The port is responsible for invoking the second pipeline, called the Service Implementation pipeline.
The service implementation pipeline is service-specific and focuses on business handlers such as raising business events, logging requests or demarcating transactions. The target of the second pipeline is responsible for invoking the service implementation referred to as the Business Action.
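As a rough illustration of the indirection the port provides (again using the made-up types from the earlier sketches, not the actual Shadowfax implementation):

public class Port
{
    private readonly IPipeline _serviceImplementationPipeline;

    public Port(IPipeline serviceImplementationPipeline)
    {
        _serviceImplementationPipeline = serviceImplementationPipeline;
    }

    // Invoked as the target of the Service Interface pipeline. In a
    // distributed deployment this call would cross a process or machine
    // boundary (for example via .NET Remoting); in the single-pipeline
    // case the port and the second pipeline can be skipped entirely.
    public void Execute(Context context)
    {
        _serviceImplementationPipeline.Execute(context);
    }
}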
A business action is the actual implementation of the requested service. It is either an internal business component, or it is a service agent which invokes some remote component. In either case, this is the ultimate target of the original service request. A business action is executed and in most cases produces a response.
The business action has three invocation mechanisms, implemented in a factory-pattern fashion: Context, Serialization and Explicit. The first is invoked through the IBusinessAction interface; the other two use reflection, passing parameters by deserializing the message or by reflecting them, respectively.
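Here is a rough sketch of what the Context style could look like. The IBusinessAction name comes from Shadowfax, but the method signature, the sample action and the invoker helper below are my own assumptions for illustration:

public interface IBusinessAction
{
    // "Context" invocation style: the action reads the request from the
    // context and writes its response back into it.
    void Execute(Context context);
}

// Example business action: here it simply echoes the request.
public class EchoBusinessAction : IBusinessAction
{
    public void Execute(Context context)
    {
        context.ResponseMessage = context.RequestMessage;
    }
}

// The Serialization and Explicit styles instead locate a method through
// reflection and bind its parameters, either by deserializing the request
// message or by passing already-typed values directly.
public static class ExplicitInvoker
{
    public static object Invoke(object action, string methodName,
                                object[] parameters)
    {
        var method = action.GetType().GetMethod(methodName);
        return method.Invoke(action, parameters);
    }
}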
Once a business action has completed, the second pipeline receives the response, applies outbound handlers, and returns the response to the first pipeline, also via the context. Then the first pipeline executes its remaining handlers, and the response is finally returned to the client application.
Another important subsystem is the configuration subsystem, which relies on the CMAB (Configuration Management Application Block). One of its beauties is that it supports “on-the-fly” updates, so a change in the configuration file is automatically propagated to all subsystems without stopping or restarting the application (unlike ASP.NET and its config-file management). It’s worth mentioning that these configuration settings control almost every aspect of the architecture, including the handlers, services, events and Business Actions.
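To give a feel for what “on-the-fly” refresh means, here is a greatly simplified, purely illustrative stand-in; it is not how CMAB is actually implemented:

using System.IO;

public class SelfRefreshingConfiguration
{
    private readonly string _path;
    private readonly FileSystemWatcher _watcher;
    private volatile string _rawSettings;

    public SelfRefreshingConfiguration(string path)
    {
        _path = Path.GetFullPath(path);
        _rawSettings = File.ReadAllText(_path);

        // When the file changes on disk, re-read it so running code picks
        // up the new settings without stopping or restarting the host.
        _watcher = new FileSystemWatcher(
            Path.GetDirectoryName(_path), Path.GetFileName(_path));
        _watcher.Changed += (sender, e) => _rawSettings = File.ReadAllText(_path);
        _watcher.EnableRaisingEvents = true;
    }

    // Callers always see the latest snapshot of the settings.
    public string RawSettings
    {
        get { return _rawSettings; }
    }
}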
So far we have seen a brief (yes, I said brief) overview of the main blocks (highlighted), and as I said before, I will hopefully be posting more details and comments in the near future.
Note: Some of the terms and names used throughout this document might undergo structural and naming changes before the final release.
See the next post of this series: Life in the Pipeline
The opinions expressed herein are personal and do not necessarily reflect the position of either my employer or Microsoft.