Much is made of the ability of statically typed languages to catch certain kinds of mistake at compile time rather than letting them fail at runtime. If I write a call to a method that doesn’t exist, the compiler will tell me and refuse to compile my code. This is often held up as an advantage over dynamically typed languages such as Ruby or Smalltalk, which will only fail at runtime if I mistype the name of a method I want to call.
It turns out that the lack of compile-time checking is nowhere near as much of an issue as common sense suggests. One reason is that dynamically typed languages are usually interpreted (or pseudo-interpreted), in the sense that there is no manual compilation step. This means that if, while developing, I get an unexpected ‘Object doesNotUnderstand: someMethad’ (in Smalltalk), it’s the work of a moment to go and correct the typo in the calling code.
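The same thing in Ruby, as a minimal sketch (the `Greeter` class and the `gret` typo are invented for illustration): the mistyped call only fails at the moment it is actually made, and fixing the typo requires nothing more than editing the calling code.

```ruby
class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

g = Greeter.new

# A mistyped method name is not caught up front; it raises
# NoMethodError only when the call is actually executed.
begin
  g.gret("world")              # typo for #greet
rescue NoMethodError => e
  puts e.message               # e.g. undefined method `gret' for an instance of Greeter
end

# Once the typo is corrected, the same object works immediately;
# there is no separate compile step in between.
puts g.greet("world")
```

In an image-based environment like Smalltalk (or a Ruby REPL) this edit-and-retry loop happens in place, which is why the missing compile-time check rarely hurts in practice.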
It gets more interesting with errors that aren’t caught by the compiler (e.g. calling a method on a null reference). In Java or C# the defect is already ossified in the compiled bytecode, and to fix it I have to change the source and recompile it, plus everything that depends on it. On a big project this can be a time-consuming process. In Smalltalk, on the other hand, the error would be ‘UndefinedObject doesNotUnderstand: someMethod’, and it would be as easy to correct as the misspelled method name in the previous example.
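Ruby’s analogue of ‘UndefinedObject doesNotUnderstand:’ is a NoMethodError raised against nil, and it surfaces at exactly the same point in the workflow. A minimal sketch (the `user` variable is invented for illustration):

```ruby
# Calling a method on nil fails at the moment of the call,
# just like Smalltalk's UndefinedObject doesNotUnderstand:.
user = nil

begin
  user.name
rescue NoMethodError => e
  puts e.class     # NoMethodError -- a runtime error, not a compile failure
  puts e.message   # e.g. undefined method `name' for nil
end
```

Because the failure is just another runtime condition, fixing it is the same edit-and-retry motion as fixing a typo, rather than a change-recompile-redeploy cycle.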
Seems to me that compile-time checking is only important when you have the inconvenience of compile time in the first place.