It appears there is a significant performance and scalability issue with the current architecture that would render it inefficient as a web-based SaaS implementation.
I hope this comment is taken as constructive: I think this is an awesome sample application that demonstrates a lot, and I'd like to see this performance issue overcome.
The first thing I noticed was that the web application wasn't snappy. Running locally on a quad-core 2.63 GHz machine with 4 GB of RAM, that was odd.
As each web request is made, the logic creates and explicitly disposes one or more ClientBase&lt;T&gt; objects to call the relevant services. This is done so that the web site logic can impersonate the user's credentials by reusing the user's token from the STS via the BrokeredSender.CustomClientCredentials object.
This allows the backend services to authorize the user themselves, independent of the web site.
WCF proxies must be recreated for each web request so that the credentials used are those of the current web site user. The problem is that creating and destroying WCF proxies is an expensive operation, and negotiating credentials each time is also slow, leading to an unresponsive web experience. Imagine if the pages then fire off little AJAX calls all the time, each resulting in a new HttpRequest.
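To make the cost concrete, here is a minimal Python sketch of the per-request pattern described above. The proxy class and its counter are hypothetical stand-ins for a WCF ClientBase&lt;T&gt;, not the actual LitwareHR code; the point is simply that every request pays the channel/credential setup again.

```python
# Hypothetical stand-in for a WCF ClientBase<T>. In the real code the
# constructor is where the secure channel is opened and the STS token
# is negotiated -- the expensive part.
class ExpensiveServiceProxy:
    setups = 0  # class-wide counter for how often the costly setup runs

    def __init__(self, user_token):
        ExpensiveServiceProxy.setups += 1  # real code pays the handshake here
        self.user_token = user_token

    def call(self, operation):
        return f"{operation} as {self.user_token}"


def handle_request(user_token, operation):
    """Current pattern: create a fresh proxy per request, dispose after."""
    proxy = ExpensiveServiceProxy(user_token)
    try:
        return proxy.call(operation)
    finally:
        del proxy  # explicit dispose in the real code


# Ten page hits (or AJAX calls) mean ten full handshakes:
for _ in range(10):
    handle_request("alice-token", "GetOrders")
print(ExpensiveServiceProxy.setups)  # 10 -- one expensive setup per request
```

A cached or pooled proxy would pay that setup once instead of once per request, which is what the options below explore.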
One option would be to cache the proxies, so the expensive work of creating a proxy and opening a secure channel is done once per web user. To do that, a proxy would need to be cached for each service and for each user, because the credentials are fixed to the proxy once it is open. That creates a scalability issue: a proxy must remain in memory for each user's session. So if there are 10,000 active users but only 100 concurrent connections, you would still need 10,000 proxies per service. Multiply that by the number of services in Litware (~13) and you get 130,000 proxies. That's a lot of resources!
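The back-of-the-envelope arithmetic can be written out directly. The figures are the assumed ones from above; the key observation is that the cache is keyed by (user, service), so it scales with session count, not concurrency:

```python
# Sizing the per-user proxy cache described above (assumed figures).
active_users = 10_000         # users with live sessions
concurrent_connections = 100  # only these are actually calling right now
services = 13                 # approximate service count in Litware

# Credentials are baked into a proxy once its channel is open, so the
# cache key must be (user, service). Concurrency doesn't help:
# session count drives the cache size.
cached_proxies = active_users * services
print(cached_proxies)  # 130000
```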
Another option would be for the web site to act as a trusted subsystem. The web site process would run under an account that has permission to call the required methods on the services, and the web site would be responsible for checking that the current user has permission to perform an action before calling the service method. With this approach the web site can keep an open proxy for each service. In the simple case there would be one proxy per service, i.e. 13 in total for Litware. For thread safety, each web request could take a proxy from a pool and return it when done.
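The take-and-return pool can be sketched very simply. This is a generic, hypothetical pool, not the sample's code; Python's `queue.Queue` gives the thread safety, and blocking on `get()` naturally throttles requests when every proxy is busy:

```python
import queue


class ProxyPool:
    """Minimal thread-safe pool: each web request borrows a proxy,
    calls through it, and returns it. Pool size caps open channels."""

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # pay channel setup once, up front

    def acquire(self):
        return self._pool.get()  # blocks if all proxies are in use

    def release(self, proxy):
        self._pool.put(proxy)


# Usage from a request handler (factory is a stand-in for opening a proxy):
pool = ProxyPool(factory=lambda: "open-proxy", size=4)
proxy = pool.acquire()
try:
    pass  # call the service through `proxy` here
finally:
    pool.release(proxy)  # always return it, even on failure
```

One such pool per service (13 for Litware) keeps the open-proxy count fixed regardless of how many users have sessions.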
The downside is that the services are accessed under the ASP.NET worker process account, which has permission to do a great deal. So there is a risk that a user could perform actions they should not be allowed to.
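That risk is why the trusted-subsystem option hinges on the site doing its own check before every privileged call. A minimal sketch, with hypothetical user names and permission strings:

```python
# Site-side authorization gate: the web site checks the current user's
# permission before calling the service through its own privileged proxy.
# Users and permission names here are made up for illustration.
PERMISSIONS = {
    "alice": {"Orders.Read"},
    "bob": {"Orders.Read", "Orders.Cancel"},
}


def call_service(user, required_permission, service_call):
    """Refuse the call unless the *end user* holds the permission,
    even though the process account could make the call regardless."""
    if required_permission not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} lacks {required_permission}")
    return service_call()


call_service("bob", "Orders.Cancel", lambda: "cancelled")  # allowed
# call_service("alice", "Orders.Cancel", ...) raises PermissionError
```

If this gate is missing or wrong anywhere, the backend services no longer catch it, since they only see the trusted process account.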
This is not an issue for the rich-client model, as one proxy can be created per service and kept open for the life of the application, similar to what the web site would do as a trusted subsystem.
So is this architecture fundamentally unusable for a SaaS web site?