Compile Time Outmoded

Much is made of the ability of statically typed languages to catch certain kinds of mistake at compile time instead of having things fail at runtime. If I try to write a call to a method that doesn’t exist, the compiler will tell me and refuse to compile my code. This is often held up as an advantage over dynamically typed languages such as Ruby or Smalltalk, which will fail at runtime if I mistype the name of a method I want to call.

It turns out that the lack of compile time checking is nowhere near as much of an issue as common sense suggests. One of the reasons for this is that dynamically typed languages are usually interpreted (or pseudo-interpreted), in the sense that there is no manual compilation step. What this means is that if, while developing, I get an unexpected ‘Object doesNotUnderstand: someMethad’ (in Smalltalk), it’s the work of a moment to go and correct the typo in the calling code.

It gets more interesting when you get errors that aren’t caught by the compiler (e.g. calling a method on a null reference). In Java or C# the defect is already ossified in the compiled bytecode, and to fix it I have to change the source and recompile it, plus everything that depends on it. On a big project this can be a time-consuming process. In Smalltalk, on the other hand, the error would be ‘UndefinedObject doesNotUnderstand: someMethod’, and it would be as easy to correct as the misspelled method name from the previous example.
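To make that concrete, here is a minimal Workspace snippet (`someMethod` is just a placeholder selector, as in the error above):

```smalltalk
"Evaluating this in a Workspace fails at runtime, not at any compile step:"
nil someMethod.
"→ UndefinedObject doesNotUnderstand: #someMethod"

"The debugger that pops up lets you correct or define the method on the
spot and proceed, with no separate recompile of dependent code."
```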

Seems to me that compile time checking is only important when you have the inconvenience of compile time in the first place.

Code Generation

Is code generation a sign that you are missing a layer of abstraction?

This doesn’t feel like a new thought, so I may have read it somewhere, or possibly even blogged it.

It seems to me that code generation is most often seen in statically typed languages such as Java, and even then the generated code could conceivably be replaced by an additional layer of abstraction that expressed its intent at a higher level.
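As one possible illustration of what that extra layer might look like in Smalltalk (every class and selector name here is invented for the sketch; the methods would be defined in a browser, shown below in the usual `Class >> selector` shorthand): instead of generating one accessor method per field, a single reflective method can serve them all.

```smalltalk
"A hypothetical Record class keeps its fields in a Dictionary.
Rather than generating an accessor per field, one override of
doesNotUnderstand: answers any matching unary message by lookup."

Object subclass: #Record
	instanceVariableNames: 'fields'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Sketch'.

Record >> initialize
	fields := Dictionary new.

Record >> doesNotUnderstand: aMessage
	"Treat an unknown unary selector as a field read."
	(aMessage arguments isEmpty
		and: [fields includesKey: aMessage selector])
			ifTrue: [^ fields at: aMessage selector].
	^ super doesNotUnderstand: aMessage
```

With something like this in place, `aRecord name` works for any field without any generated accessors; the intent lives at a higher level instead of in emitted boilerplate.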

Crypto Cash

Just completed ‘Crypto’ by Steven Levy. Excellent book, very accessible account of the development of public-key cryptography and all the security stuff we take for granted.

Made me think though, as I filled in my credit card details on yet another e-commerce site: why do I need to hand out my credit card details to every site I want to buy stuff from? The technology for digital cash already exists and is well understood by the crypto community.

It is technically feasible for a customer (me) to visit a trusted site (e.g. my credit card provider), request a secure token that represents a sum of money equal to or greater than the amount I want to spend, and hand that over to the merchant. The token is digitally signed to verify its authenticity and value. The transaction can then take place with the merchant needing no credit card information, or even personal details (although a delivery address might be useful).

Opportunities for fraud would be limited to (a) the amount the token is good for, and (b) a window of time before the merchant has handed it to the credit card company, at which point it is tagged as ‘used’ and worthless. Used tokens could be posted on a website for all the good they would do fraudsters, so high profile cases of hackers downloading a database full of credit card numbers would be a thing of the past.
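A very rough sketch of that flow, Smalltalk-flavoured, with every class and message name invented for illustration (real digital cash schemes, such as Chaum’s, involve blinding and considerably more machinery):

```smalltalk
"1. Customer asks the issuer for a signed, single-use token
    good for a stated amount."
| token |
token := CardProvider issueTokenFor: 25.

"2. Customer hands the token to the merchant instead of any
    card or personal details."
merchant acceptPaymentWith: token.

"3. Merchant redeems it with the issuer, which checks the
    signature and marks the token as used; a second redemption
    attempt fails."
CardProvider redeem: token.   "ok - token now marked 'used'"
CardProvider redeem: token.   "rejected - already spent"
```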

Of course, there is the one extra step of having to obtain the token first, which is a tiny inconvenience, and therefore far less preferable than having your credit card number stolen, which usually involves no effort whatsoever.

Adventures in Squeak

This weekend I spent a bit of time playing with Squeak (my non-commercial GemStone/S was missing a valid license key, so the plan to play with that went out the window).

Starting simply, I tried the following, based on some example code:

HTTPSocket httpShowPage: ''

Sure enough, up popped a window with the raw html of my homepage.

Wanting to try out multiple requests, I modified the code to this:

10 timesRepeat:[HTTPSocket httpShowPage: ''].

After a significant pause, 10 windows appeared one after the other. This was probably due to the 10 connections being single threaded and run one at a time. I then tried this:

10 timesRepeat:[[HTTPSocket httpShowPage: ''] fork].

It took about the same amount of time, but the windows started coming back quicker.

Being ambitious, I then tried the following:

100 timesRepeat:[[HTTPSocket httpShowPage: ''] fork].

And all hell broke loose: major UI corruption. It turns out that HTTPSocket>>httpShowPage: is not thread safe, and updating the UI from more than one thread is a bad idea. This is fair enough – the method is basically a helper that pops up a window with the contents of a URL.

The offending line was this one in HTTPSocket>>httpShowPage:

(StringHolder new contents: doc) openLabel: url.

Wanting a quick way to make it work for me, I went on a hunt for mechanisms to ensure thread safe UI updating and found this method on WorldState:

WorldState addDeferredUIMessage:

I changed the line in httpShowPage: like so:

WorldState addDeferredUIMessage:[(StringHolder new contents: doc) openLabel: url].

And all was well. Now the multiple forked requests to httpShowPage: would queue up their UI updates and play nicely together.

But that’s not quite the end of the story. I now had 100 windows open, all showing the raw html from my homepage, that I had to get rid of. I could have clicked on each one individually, but I’m a developer, and this is Squeak, so after much experimentation, I came up with this:

| holders darrens |
holders := SystemNavigation default allObjectsSelect:[:anObject | anObject class = SystemWindow].
darrens := holders select:[:each | each label = ''].
darrens do: [:each | each delete].

And presto, 100 windows deleted.

The moral being that a system that describes itself, is written in itself, and is open to change is hugely powerful, and very pleasing to work with.

Observations on technology selection

You’re the IT manager/director at a moderate-sized, fairly successful company. You have to decide which technology platform to standardise on, because standards are good, and having a unified strategy will save money. Economies of scale and all that. You’re seeing lots of articles about .Net and Java, and how they increase productivity and leverage industry best practice. Looks good. If it’s industry best practice then most organisations must be doing it, and you wouldn’t want to be at a competitive disadvantage. Not only that, but you will have a large pool of developers to recruit from, which is another big plus for risk reduction. You don’t want to select a technology platform that the majority of developers out there don’t have experience with.

You’re a developer looking to maintain your employability and keep your skills current. You keep an eye on the job listings and the technical press. You’re seeing lots of articles about .Net and Java, and how they leverage industry best practice and increase developer productivity. Looks good. If you can increase your productivity then you’ll be more attractive to employers, and less at risk of struggling to find work. Checking the job websites shows a healthy number of positions open in .Net and Java, which is another big plus for risk reduction. You don’t want to learn a technology platform that the majority of employers out there aren’t interested in.