Finally got around to reading this paper about crash-only software. What a great idea. Given that fatal errors requiring a component to be restarted are by their nature unanticipated and almost impossible to design out, why not make crash/reboot the default mode of operation? If a system is designed so that it can safely be crashed and restarted at any point, then the number of fail/recovery scenarios is reduced to one.
The paper takes a number of good ideas for building robust distributed systems, and turns them up to 11. Go read it.
One of my colleagues, Steve, spoke to me the other day about using SMTP for application integration.
It's something that I've thought about from time to time, and it just seems to make sense. Here is a protocol that has been around for years, is battle-tested, and is used by millions of systems all over the world to reliably transfer messages. In case you haven't guessed yet, SMTP is the protocol used for email. Why spend a fortune on enterprise message brokers when you could download Exim or Postfix (or Sendmail if you're a masochist)? Most Linux, BSD or Unix boxes will probably have one of these installed already.
SMTP is point-to-point, so what if we want pub-sub? Handily, there is a protocol called NNTP which has been around about as long as SMTP. This is the protocol that USENET (aka newsgroups) uses.
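To make the SMTP idea concrete, here's a minimal sketch in Python using the standard library's smtplib and email modules. The "APP-MSG" subject-line routing convention, the addresses and the payload are all hypothetical inventions for illustration; the only infrastructure assumed is a local MTA (Exim, Postfix, etc.) listening on port 25.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical convention: the Subject line carries a message type that the
# receiving application uses for routing; the body carries the payload.
def build_app_message(sender, recipient, msg_type, payload):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "APP-MSG: " + msg_type
    msg.set_content(payload)
    return msg

def send_via_local_mta(msg, host="localhost", port=25):
    # Hand the message to the local MTA, which then owns queuing, retries
    # and eventual delivery -- the 'reliable transfer' part we'd otherwise
    # be paying a message broker for.
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

The appeal is that the hard part (store-and-forward, retry, dead-lettering via bounce messages) is entirely the MTA's problem, and MTAs have had decades to get it right.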
I definitely think that there is much our industry could learn from technologies invented 20 or more years ago.
Michael Feathers tells it like it is:
You think your design is good? Pick a class, any class, and try to instantiate it in a test harness. I used to think that my earlier designs were good until I started to apply that test. We can talk about coupling and encapsulation and all those nice pretty things, but put your money where your mouth is.
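Feathers' litmus test is cheap to apply. Here's a hypothetical sketch in Python (the class and figures are made up for illustration): if the class under test can be constructed from plain values with no hidden collaborators, the test passes; if its constructor reaches for a database, the file system or a global registry, the test won't even run, and that failure is the design feedback.

```python
import unittest

# Hypothetical class under test. It passes Feathers' test because its
# constructor takes plain data and touches no database, file system or
# global state.
class InvoiceCalculator:
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, net):
        return net * (1 + self.tax_rate)

class InstantiationTest(unittest.TestCase):
    def test_can_construct_in_isolation(self):
        # The whole point: can we even get an instance without scaffolding?
        calc = InvoiceCalculator(tax_rate=0.2)
        self.assertAlmostEqual(calc.total(100), 120.0)
```

If the constructor instead demanded a live connection string or pulled configuration from a singleton, this two-line test would be the first thing to complain.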
Just read a Byte article dating back to 1995 that predicted that Constraint Logic Programming in Prolog would be the programming paradigm that would gain the most commercial significance over the next 5 years.
Hands up if you’ve heard of Constraint Logic Programming.
The slightly sad thing is that CLP does actually look very interesting, as it seeks to tackle the NP-complete class of problems that traditional programming solutions are generally bad at.
Genetic algorithms appear to be more in vogue for NP-complete problems at the moment, possibly because to implement a performant CLP system you generally need an implementation of Prolog, whereas it's possible to write a genetic algorithm in any current commodity OO language.
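As a toy illustration of that last point, here's a genetic algorithm sketched in plain Python against subset-sum, a classic NP-complete problem. The population size, mutation rate and truncation-selection scheme are arbitrary illustrative choices, not tuned recommendations.

```python
import random

def evolve(numbers, target, pop_size=60, generations=200, mutation=0.02):
    """Toy GA for subset-sum: find a subset of `numbers` whose sum is as
    close as possible to `target`. A chromosome is a bit list, one bit per
    number; fitness is the distance from the target (lower is better)."""
    n = len(numbers)

    def fitness(chrom):
        return abs(sum(x for x, bit in zip(numbers, chrom) if bit) - target)

    population = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)              # best first
        if fitness(population[0]) == 0:
            break                                 # exact subset found
        survivors = population[: pop_size // 2]   # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with small probability (mutation)
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)
```

Calling `evolve([12, 7, 19, 3, 33, 8], 40)` usually turns up the subset {7, 33} within a few generations. The point isn't that this toy beats a real CLP solver, but that it needed no Prolog runtime, just a couple of dozen lines of a commodity language.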
It's somewhat amusing that xcopy, a DOS utility that came into being about 20 years ago (around the same time as DOS first got directories), is now one of the recommended methods for deploying .NET applications. A commendable example of software reuse, and evidence that the simplest solution is often the correct one.
‘StupidSoft PointlessTool X has been installed. Your computer will now be rebooted. Press OK to reboot’.
What gives a piece of software the right to decide when I reboot my computer? It's such a nonsensical myth anyway. It simply isn't necessary to shut down the whole operating system just to install an application. How can Windows be taken seriously as a server platform while this sort of thing still goes on?
I’ve recently been working in the .NET environment, and have had occasion to go wandering the web looking for various tools and libraries to speed up the development effort. One thing that has become very clear is that there is a fundamental difference in mindset between the .NET and Java communities when it comes to tools. In general, it’s possible to find open source utilities and libraries for Java to accomplish most common tasks, whereas in .NET it seems far more likely that a similar search will arrive at the website of a commercial company with a product to sell.
It may simply be that there are more open source tools for the Java platform because it has been around longer. But it may also be the case that the Windows development community is larger, and more used to paying for its tools. There appears to be a second tier of companies who make their living primarily by making and selling tools to the developer community. It seems to me that this tier can only really be profitable if there are a sufficient number of organisations selling software to the end-user community, and if those organisations feel there is a cost-saving to be had from buying tools and libraries rather than building their own.
Assuming those two speculations are true, what does this mean for open source in the .NET world?
My take is this: any company that sells products to developers to make those developers’ projects more cost effective is probably under constant pressure to perform. If their product isn’t powerful enough, their customers can simply roll their own (which is usually not the case for end-users), and if the price is too high, the ‘buy vs build’ pendulum will swing against them.
Open source projects traditionally fall into this developer-tools space, and as such probably cause these commercial tool vendors a lot of concern. How do you compete on price when the alternative is free? On the other hand, you have a community that is used to buying its tools, and the tools are probably of pretty high quality, due to the competitive pressures I alluded to earlier. My gut feeling is that there is a higher barrier to entry for open source in the .NET world than the Java world.
Ultimately it comes down to the cost-benefit. If organisations developing end-user software can do so more cheaply by buying their tools, they will do so. The more complex and innovative the library, the less likely it is that an open source competitor will emerge. Commodity libraries that are closer to the buy-vs-build inflection point may well be supplanted by free alternatives, but this is simply market forces in action, and probably no bad thing.