A funny thing I noticed recently while having a discussion about parallelism is that everybody is convinced that Microsoft should solve this issue. I’m sure they are on the right track with the Parallel Framework, although they’re kind of late.
Haven’t they promised us that the CLR would do the spreading over available resources? That we shouldn’t even have to think about it? And now this vision is changing? Where are those optimised iterations? There is such an advanced and improved build process in .NET 3.5, so can’t the compiler figure out what can be done to optimise my whole AST (the abstract syntax tree of my code) at compile time? This has been done for C++ for many years now. But as Microsoft always said, we don’t have to bother about it, as they would do it in the JIT.
Bringing out parallelism as a BCL feature will introduce a lot of problems and questions for ISVs. I’d love to see the whole Sitecore framework run on several processors, but it will introduce a lot of challenges:
- Should we make our full library parallel?
- What about debugging and tracing? Should we make it configurable for that reason?
- Should you be aware of it, e.g. should you be able to decide, through the API, whether an action will be performed in parallel or not?
- What should be the best practices on implementing parallelism in your own custom code?
- On a more general level: are we allowed to use all of the resources of a webserver? Where can we limit these? Is this our responsibility or IIS’s?
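To make the last two questions concrete: most parallel APIs let the caller cap the degree of parallelism instead of grabbing every core on the box. Since the Parallel Framework APIs weren’t final at the time of writing, here is a minimal sketch of the idea in Java rather than .NET, using a dedicated `ForkJoinPool` as the cap; the method and class names are my own invention for illustration, and the sketch relies on the (implementation-specific) behaviour that a parallel stream submitted to a pool runs inside that pool:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class BoundedParallelism {
    public static long sumOfSquares(int n, int maxWorkers) throws Exception {
        // A dedicated pool caps how many threads this workload may use,
        // instead of competing with everything else on the server.
        ForkJoinPool pool = new ForkJoinPool(maxWorkers);
        try {
            return pool.submit(() ->
                IntStream.rangeClosed(1, n)
                         .parallel()
                         .mapToLong(i -> (long) i * i)
                         .sum()
            ).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Limit this computation to at most 2 worker threads.
        System.out.println(sumOfSquares(1000, 2)); // prints 333833500
    }
}
```

Whether that cap should live in the framework’s configuration, in the API surface, or in IIS is exactly the kind of decision the questions above are about.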
As you can see, we have thought about it. And I think that by the time .NET 4.0 is released and Sitecore introduces parallelism in the product, it should be supported by the underlying platform, but also from our side. Hopefully the PDC will bring some answers, as I’m quite sure Microsoft hasn’t thought about us so far…