Share your interfaces - avoid static proxies.
The advantage of the static proxy approach is that it generates local code for us based on remote metadata (WSDL). Even if the service is somewhere out of our reach and control, all we need is for it to expose metadata for us to access it. We get a copy of the service contract and interfaces and we're good to go.
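For example, generating such a proxy is a single SvcUtil call (the address below is just a placeholder for a real metadata endpoint):

    svcutil http://localhost:8000/MyService?wsdl /out:MyServiceProxy.cs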
The problem with it is that we have to maintain that proxy. If the service changes, our proxy needs to adapt too. I'm not referring to versioning issues and new methods added, but to big changes in the service contract itself. This may not be an issue when accessing stable services, but it certainly happens a lot during development.
The second approach is much more limited - for it to work we need to have a reference to a shared assembly containing the contracts. It means our services and contracts are .NET classes using the WCF framework, rather than generic WS-* services that can have any implementation. This approach is only useful when we have access to our services' code or assemblies, so it's out of the question for public services.
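To make this concrete, here's a minimal sketch of the shared-contract setup, using the IMyService / MyService.Interfaces.dll names from the discussion below (the Echo operation is just for illustration):

    // MyService.Interfaces.dll - referenced by both client and server
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string Echo(string message);
    }

    // Server project - implements the shared contract directly
    public class MyService : IMyService
    {
        public string Echo(string message)
        {
            return message;
        }
    }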
In short, the dynamic proxy approach is only an option when we control both the client and the server within the scope of a single application (or a group of closely related applications).
But in this context, this is the best way to work. I'll stress the point again - if we meet all the criteria above for using dynamic proxies, we should use them without hesitation.
The amount of work that goes into maintaining static proxies - making sure the client and server copies of the contracts stay identical, and spending hours debugging mysterious errors caused by contract mismatches, all with perplexing error messages and little documentation - is simply not worth it.
I'll say it again - if you're writing an N-tier application that uses WCF for communication, have the contracts shared by both client and server and use the GenericProxy<T> class to access the service, rather than relying on generated proxies and SvcUtil. Trust me. Your deadline will thank you for it.
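For illustration, here's a minimal sketch of what a GenericProxy<T> built on ChannelFactory<T> can look like - a simplified version, not necessarily identical to the class discussed above:

    using System;
    using System.ServiceModel;

    public class GenericProxy<T> : IDisposable where T : class
    {
        private readonly ChannelFactory<T> _factory;
        private T _channel;

        // Reads the endpoint settings from the client's app.config.
        public GenericProxy(string endpointConfigurationName)
        {
            _factory = new ChannelFactory<T>(endpointConfigurationName);
        }

        public T Channel
        {
            get
            {
                if (_channel == null)
                    _channel = _factory.CreateChannel();
                return _channel;
            }
        }

        public void Dispose()
        {
            // Closing a faulted channel throws, so Abort it instead.
            var channel = _channel as ICommunicationObject;
            if (channel != null)
            {
                if (channel.State == CommunicationState.Faulted)
                    channel.Abort();
                else
                    channel.Close();
            }
            _factory.Close();
        }
    }

Using it against the shared contract then takes one line per call (the endpoint name here is whatever you've configured in app.config):

    using (var proxy = new GenericProxy<IMyService>("MyServiceEndpoint"))
    {
        string reply = proxy.Channel.Echo("hello");
    }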
Comments
ralph said
Avner, I don't quite get what you mean when you say, "The second approach is much more limited -- for it to work we need to have a reference to a shared assembly containing the contracts." If I'm reading you correctly, by "dynamic" you mean either that you're using ChannelFactories to create a proxy object that implements the contract, or that you're using MetadataResolver to create a proxy object that implements the contract, or some such thing. But if that's correct, then it does not matter how you obtained the contract at all. You can use dynamic assemblies when the contract was obtained through an earlier svcutil request. You do not depend on shared assemblies EVER unless the service does not share any metadata at all, in which case, yes, the service must provide you with an assembly. I guess what I'm saying is that clients only need a description of the contract to create a proxy. That description could be generated by svcutil, VS, or by managed code in the form of a shared assembly. Do I misunderstand? Cheers, Ralph
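(To illustrate Ralph's point, with a hypothetical binding and address: ChannelFactory<T> only needs a contract type and an endpoint, regardless of where the type came from.)

    using System.ServiceModel;

    public static class RalphsExample
    {
        public static void Call()
        {
            var binding = new BasicHttpBinding();
            var address = new EndpointAddress("http://localhost:8000/MyService");

            // IMyService could just as well be a svcutil-generated interface;
            // the factory doesn't care how the contract type was obtained.
            var factory = new ChannelFactory<IMyService>(binding, address);
            IMyService channel = factory.CreateChannel();
            channel.Echo("hello");
            factory.Close();
        }
    }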
AvnerK said
When you use SvcUtil to generate a proxy object, you're in effect duplicating code. If earlier I had the IMyService interface defined in MyService.Interfaces.dll, now I have it in my client's DLL as well. In the shared interface scenario, the code is written in only one place. Now suppose I changed something in IMyService. It doesn't have to be something major - let's say I added a new [ServiceKnownType] on an operation there. If both client and server use the same contract, they both automatically pick up the change. There's no manual step required to sync the two copies of the class. You may say updating the service reference isn't a big deal, and it isn't, but it's yet another manual step you have to take to make sure your code is synchronized. And with services supporting versioning, you may not notice the error until much later, when you have no idea what caused it.
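(A hypothetical example of the kind of change described here - with a shared contract assembly, both sides pick up the new attribute on the next build; with a generated proxy, the service reference has to be regenerated first. SpecialMessage is a made-up type.)

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        [ServiceKnownType(typeof(SpecialMessage))] // the newly added attribute
        object GetMessage();
    }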
Danny said
Hi, A question about WCF / remoting. Is there a way (a technical possibility) to dynamically switch between using local objects (classes) and remote objects hosted by WCF? The idea behind this is that a client can choose to execute long-running processes either locally or on a faster remote machine. I've seen that this is basically not a problem for plain method execution, but I also experienced that using constructors and setting properties on objects becomes "unavailable" in the WCF/Remoting model. So my question is, how can you make the FULL programming model of a .NET class available in a WCF/Remoting model so you can really switch from local to remote without changing any local code (like setting/getting properties)? Thanks for replying... grtz, Danny
Garry McGlennon said
Danny - I think you've missed how WCF works. It's not about running objects on the server, which I think DCOM did. It's a stateless (ok, you can make them stateful, but that's bad) service that executes code on the server and returns 'data', not code. In that way you can't set a property and have it execute on the server; you set all the properties of the data contract to be sent, then you send the whole thing at once. Services are about chunky talk, not chatty property talk. Btw, I agree with using the shared DLL if it's possible. It won't prevent you from having external clients that don't have/use the DLLs, btw. It's simply a convenience for YOUR client apps. If you also share the data contracts (the objects you're sending back and forth), then you also don't have those duplicated, and if there was any validation logic you don't have to recode that twice either. If a Java client uses your service it all works the same, but they would have to duplicate the validation logic if they wanted to validate BEFORE making the service call.
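(A sketch of the 'chunky' style described here, with hypothetical names: set all the properties on a data contract locally, then send the whole thing in one service call.)

    using System.Runtime.Serialization;

    [DataContract]
    public class OrderRequest
    {
        [DataMember] public int CustomerId { get; set; }
        [DataMember] public string ProductCode { get; set; }
        [DataMember] public int Quantity { get; set; }
    }

    // Client side: fill the contract locally, then one round trip.
    public static class OrderingClient
    {
        public static void Submit(IOrderService service) // IOrderService is hypothetical
        {
            var request = new OrderRequest
            {
                CustomerId = 42,
                ProductCode = "ABC-1",
                Quantity = 3
            };
            service.PlaceOrder(request); // one chunky call, no chatty property trips
        }
    }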