Capacity improvements package for .NET Remoting / WCF
This solution deals with transferring huge DataTables over WCF and .NET Remoting.
When transporting large DataTables between a server and a remote client, several issues arise that stem from .NET serialization.
Serialization of a large DataTable is memory-intensive.
A large enough DataTable will cause the client to receive a System.OutOfMemoryException or a System.InsufficientMemoryException.
These exceptions cannot be caught on the server side, as they occur deep inside the framework code that handles serialization and transport. If the DataTable is really large, the framework will tear the server process down outright; there is no way around that.
Another issue is throughput, which is mediocre and becomes a noticeable bottleneck as the table grows.
The solution at hand circumvents these problems by partitioning the DataTable into chunks and transferring the chunks in a multi-threaded fashion.
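The partitioning idea can be sketched language-agnostically. The snippet below is a minimal Python illustration (the actual package is written in .NET, and `partition_rows` is an illustrative name, not part of the package's API): by slicing the table into fixed-size chunks, no single serialization call ever has to hold the entire table in memory.

```python
def partition_rows(rows, chunk_size):
    # Yield the table's rows in fixed-size chunks so that no single
    # serialization call ever has to materialize the whole table.
    for start in range(0, len(rows), chunk_size):
        yield rows[start:start + chunk_size]

# Example: a 10-row "table" split into chunks of at most 4 rows.
chunks = list(partition_rows(list(range(10)), 4))
```

Each chunk can then be serialized and sent independently, which is what keeps the peak memory footprint bounded regardless of the total table size.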
This solution is comprised of two separate approaches:
1. .NET Remoting Transporter
2. WCF Transporter
In a nutshell, the server returns a "poker" object to the client, through which the client can make "pokes" (concurrent calls back to the server).
This approach enables a table of virtually unlimited size to be transferred between server and client.
It also cuts transfer time by a staggering 75%, i.e. an almost fourfold increase in throughput
(plain WCF can transfer a table at 3,154 KB/sec, whereas the current solution boosts throughput to 12,245 KB/sec),
due to the concurrent requests. (It acts much like a web accelerator.)
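The poker pattern can be illustrated with a minimal, language-agnostic sketch. This is Python pseudocode for the idea only (the real package is .NET; the names `Poker`, `poke`, and `client_download` are hypothetical, not the package's actual API): the server hands the client a handle, and the client issues concurrent calls against it, one per chunk, then stitches the chunks back together in order.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class Poker:
    # Server-side handle ("poker") through which the client pulls chunks.
    def __init__(self, table_rows, chunk_size):
        self._chunks = [table_rows[i:i + chunk_size]
                        for i in range(0, len(table_rows), chunk_size)]
        self._lock = threading.Lock()  # guard shared state across concurrent pokes

    @property
    def chunk_count(self):
        return len(self._chunks)

    def poke(self, index):
        # One "poke": a concurrent call back to the server returning one chunk.
        with self._lock:
            return self._chunks[index]

def client_download(poker, workers=4):
    # The client fetches all chunks concurrently; map() preserves chunk order,
    # so the table can be reassembled directly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = list(pool.map(poker.poke, range(poker.chunk_count)))
    return [row for chunk in chunks for row in chunk]
```

Because several pokes are in flight at once, the transfer pipeline stays full, which is where the throughput gain over a single large serialized payload comes from.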
Download package
Diagnostics Client sample code...
This package includes:
- A library that implements the ChunkTransporter, for both WCF and .NET Remoting
- A sample server application, demonstrating the transport of a large DataTable
- A host application, hosting a WCF service and a .NET Remoting server
- A sample command-line client application that prints out diagnostics