Just say no…

What’s in a Game?

I haven’t even seen – let alone played – GTA3 for example, but my GBA still has me hooked, so I would disagree with that. Maybe I’m just a kid at heart…


[Russell Beattie Notebook]

Do yourself a favour, don’t buy GTA3. It’s horrifyingly addictive. Maybe it’s because it’s the first game I’ve bought in a while, and the technology has moved on somewhat. My last couple of game purchases were of the ‘big complex strategy’ variety that don’t require rapid reflexes, and thus don’t stimulate the adrenaline rush of action games.

It’s also the first ‘18’-rated game I’ve ever bought, and they really, really mean it. Don’t play it with your kids. Unless they’ve grown up and left home.

Of course, if you like losing 4 hours of your life every time you sit down to play a ‘quick game’ then go right ahead…

Hacker Humour

YKYBHTLW… You know you’ve been hacking too long when you come up with an idea for a silly Science Fiction/Horror short story, in which the universe implements reference-counting, and as soon as the number of ‘references’ (i.e. connections to other people) you maintain reaches zero, you are visited by The Garbage Collector. (52 Words) [The Fishbowl]

You know you’ve been hacking too long when this makes you laugh out loud.

Temporal Decoupling

The subject of messaging systems came up in a conversation last week, and some wheels started turning. I like the idea of messaging systems. It feels architecturally clean to have all your components abstracted away from each other, so that each one only sees the message bus, and doesn’t give a hoot where its messages are going to, or coming from. Even more specifically, each component can choose which messages it wants to get, and not be bothered with the rest. This again feels tidy.

Not only do messaging systems give you an architectural layer of abstraction, but you also get the thing that prompted this post: temporal decoupling. If you have a part of the system that isn’t running 24/7, you can either build ‘time-locks’ into the UI so that it can only be accessed when the system is up, or you could use a combination of an off-line cache and a store-and-forward message queue to allow the system to be used in limited fashion even when parts of it are temporarily down. The cache can be implemented as ‘just another subscriber’ to the message queue. Another bonus: adding message consumers is transparent to message producers, and vice versa. This allows a many-to-many relationship between components, such that multiple machines can look like one big virtual one to anything on the other side of the message queue.

Disadvantages? Two network hops (Producer-Queue-Consumer) where before there was only one. Added development complexity. System administration – now there’s a message queue component to look after as well. Point of failure – losing your message queue would not be funny. The store-and-forward approach would not work well for time-critical messages (e.g. stock market transactions). Better to report failure immediately than complete the transaction at some indeterminate point in the future, when market conditions may be wildly different.
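The store-and-forward idea can be sketched in a few lines of Java. This is a toy in-memory bus, not a real messaging product, and all the names (MessageBus, Subscriber, and so on) are invented for illustration: messages published while a topic has no live subscriber get queued, and are handed over when a subscriber finally attaches.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Components only ever see this interface and the bus - never each other.
interface Subscriber {
    void onMessage(String topic, String payload);
}

class MessageBus {
    private final Map<String, List<Subscriber>> subscribers = new HashMap<>();
    // Store-and-forward: messages for topics with no subscriber are held here.
    private final Map<String, List<String>> pending = new HashMap<>();

    void subscribe(String topic, Subscriber s) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(s);
        // Deliver anything that arrived while this component was down.
        for (String payload : pending.getOrDefault(topic, List.of())) {
            s.onMessage(topic, payload);
        }
        pending.remove(topic);
    }

    void publish(String topic, String payload) {
        List<Subscriber> subs = subscribers.get(topic);
        if (subs == null || subs.isEmpty()) {
            pending.computeIfAbsent(topic, t -> new ArrayList<>()).add(payload);
        } else {
            for (Subscriber s : subs) s.onMessage(topic, payload);
        }
    }
}

public class BusDemo {
    public static void main(String[] args) {
        MessageBus bus = new MessageBus();
        bus.publish("orders", "order-1");  // consumer not up yet: queued
        bus.subscribe("orders", (t, p) -> System.out.println("got " + p));
        bus.publish("orders", "order-2");  // consumer is up: delivered at once
    }
}
```

Note that the off-line cache described above would just be one more Subscriber registered on the same topics, which is exactly why ‘just another subscriber’ is the tidy way to think about it.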

Introduce Interface Refactor

Dr. Cedric In the House. Cedric has a new style on his blog (very nice) and an interesting new post on using interfaces. In it he makes two assertions: never supply an interface without a factory, and “new” should only appear in factories. He’s going to expand on the second one later, but the idea is that the first rule avoids this type of code:

IEmployee emp = new EmployeeImpl();

Woof, that’s ugly, ain’t it? Check out Cedric’s site for the details.
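Roughly, the rule works out like this in Java (the Employee/Employees names here are my own invention for illustration; see Cedric’s post for his actual treatment): the factory is the only place “new” appears, so client code never names the implementation class at all.

```java
// The interface gets the clean, unadorned name.
interface Employee {
    String name();
}

// The implementation stays out of sight; clients never construct it directly.
class SalariedEmployee implements Employee {
    private final String name;
    SalariedEmployee(String name) { this.name = name; }
    public String name() { return name; }
}

class Employees {
    // The only place "new" appears.
    static Employee create(String name) {
        return new SalariedEmployee(name);
    }
}

// Client code mentions only the interface and the factory:
//   Employee emp = Employees.create("Ada");
```

The payoff is that swapping the implementation (say, for a persistent or remote variant) touches one line inside the factory, not every call site.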

Now someone help me out, I’m starting to see posts about Test Driven Development (TDD) where they’re talking about creating lots of interfaces. Is this true? Tell me it isn’t so! I need some education on this topic…

-Russ [Russell Beattie Notebook]

Pure greenfield Test-Driven development means only doing things as the tests need them, so interfaces should only appear where they’re supposed to. However, pure greenfield development is something of a rare beast. A fair proportion of my time is spent doing Test-Driven Refactoring, where I’m trying to retrofit the attitude and practice of Test-First to lumps of intransigent legacy code. Interfaces are the programmer’s equivalent of a crowbar in these situations. You have to lever apart the coupling in order to fit a TestCase into the gap.
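Here is a minimal sketch of that crowbar at work, with invented names (Billing, Mailer, RecordingMailer - none of these come from a real codebase): extract an interface from the legacy collaborator, inject it, and the test can slide a recording fake into the gap.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical legacy scenario: Billing used to construct a concrete SMTP
// mailer internally. Extracting this interface is the crowbar.
interface Mailer {
    void send(String to, String body);
}

class Billing {
    private final Mailer mailer;
    Billing(Mailer mailer) { this.mailer = mailer; }  // injected, not new'd

    void invoice(String customer, int amount) {
        mailer.send(customer, "You owe " + amount);
    }
}

// The test case (or a helper it owns) implements the interface itself,
// recording calls instead of talking to a mail server.
class RecordingMailer implements Mailer {
    final List<String> sent = new ArrayList<>();
    public void send(String to, String body) { sent.add(to + ": " + body); }
}

public class BillingTest {
    public static void main(String[] args) {
        RecordingMailer mailer = new RecordingMailer();
        new Billing(mailer).invoice("acme", 100);
        System.out.println(mailer.sent);
    }
}
```

Once the interface exists, the coupling is pried apart: the TestCase controls everything on the far side of Mailer, and the legacy class can be refactored behind a green bar.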

Open Source Security

Safe and unsafe.

A report says that Open Source Software is more vulnerable than Closed Source:

Advocates of the open-source process often claim that their products are more secure thanks to the larger number of people poring over the code

This is one of the most widespread fallacies about Open Source code. The truth is that Open Source developers spend much more time writing code than reading it. And it makes sense, right? You are most likely contributing to an Open Source project to have some fun in your spare time, and what fun is there in trying to make sense of code written by an unknown developer living probably on a different continent than yours?

Since there is no fun in doing that, there needs to be an incentive, like money. The bottom line is that fixing security flaws in Open Source software can only happen if the project is backed by a company that is actually paying a salary to the members of the project, and if the said company has a clear interest in having this security hole plugged.

Short of having that, Open Source is just as unsafe as Closed Source is. [Otaku, Cedric’s weblog]

Yes. All source code has the potential for security flaws. The real differentiator for open source is the sheer speed with which flaws are tackled once discovered. It’s usually on the order of days. Compare the amount of time it takes the FreeBSD team to release an operating system patch once a hole is found with, say, your favourite proprietary desktop operating system.

Open source projects also tend to generate more loyalty and pride of workmanship from their developers, so a higher level of care tends to be taken over the work. Paraphrasing (poorly) from somewhere, you’re only as good as your last commit. When all the world can see your code, bad as well as good, would you not be a little more hesitant about releasing cruft?

Design Driven Development

Coding in the Small.

For me, TDD is a specification, design and documentation tool. From writing a test alone you can specify what you want, derive an intuitive interface and end up with a snippet of code as an example of how to use it. With this specification in place you also have a goal, the green bar – your tools will tell you when you’re done. And it’s hard to practise TDD without components that are all munged together, so loose coupling tends to appear. The fact that you get unit-tests for free is also a nice bonus which allows daring refactoring.

[Joe’s Jelly]

This is a revelation that crept up on me over time. Like (I’m sure) most developers stumbling across test-driven development, I initially fixated on the ‘test’ aspect of it. That’s reasonably natural: since everything stems from writing tests, it’s easy to assume the testing is the most important aspect of TDD. The fact is, test-first is an immensely powerful design tool that also happens to leave you with loosely coupled, highly tested code. Bonus.
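The test-as-specification idea can be shown in miniature. The example below is a stand-in of my own (a trivial bounded stack, not anything from Joe’s post): the test was notionally written first, so its method calls are the design, and the class exists only to turn the bar green.

```java
// The test below came first; these names were chosen while writing it.
class BoundedStack {
    private final int[] items;
    private int size;
    BoundedStack(int capacity) { items = new int[capacity]; }
    void push(int v) { items[size++] = v; }
    int pop() { return items[--size]; }
    boolean isEmpty() { return size == 0; }
}

public class BoundedStackTest {
    public static void main(String[] args) {
        BoundedStack stack = new BoundedStack(2);
        if (!stack.isEmpty()) throw new AssertionError("new stack not empty");
        stack.push(7);
        if (stack.pop() != 7) throw new AssertionError("pop != push");
        if (!stack.isEmpty()) throw new AssertionError("stack not drained");
        System.out.println("green bar");
    }
}
```

Notice what the test gave us for free: a specification (LIFO, starts empty), an intuitive interface (push/pop/isEmpty), and a worked usage example - exactly the three things Joe lists above.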

Interfaces considered Important

Coding Conventions – _I_ want to kill the Impl. This is actually a fun debate… I’ll wade in:

First, Cedric pontificated recently about using Hungarian notation in Java. You know, stuff like “lpszStringName”… You can just guess my thoughts: Oh. God. No. And this isn’t just because it’s a Microsoft thing, it’s because it’s damn ugly. And it doesn’t even make sense in Java… I think lpsz means something like “long pointer zero terminated string”. How does that even relate to a String object in Java? That’s generally what’s wrong with Hungarian notation, it just doesn’t fit most of the time (though I’ve seen people try).

Cedric then continues today (yesterday?) with some thoughts about interfaces – specifically using an I in front of interface names to differentiate between them and classes (e.g. “IWorkspace”). And THIS is where it gets interesting because Charles thinks that that’s about the most evil thing he’s heard in a while. Yeah! Let the beetle battle begin! (Sorry, too much Dr. Seuss lately…)

I detest, loathe, hate, and want to vomit on any class that ends with the letters “Impl”. UUUUGH! IBM stuff is super-famous for this. Almost everything that comes out of there has this convention going. It’s like someone at IBM skipped a chapter or two in the OO book and decided that classes were less important than interfaces, so the interfaces get all the readable names and the classes get a Impl stamped on their ass. It must have something to do with the IBM mentality. I don’t know. What I do know, however, is that classes are the principle objects in Java, not interfaces. Interfaces are handy, dandy and cool, but they’re there to help structure your classes and allow interoperation without multiple inheritance, NOT to be the prime way of programming. If you’re thinking “I’ll just program to interfaces and forget about those classes”, you need to come up to speed because I think that fad went out a couple years ago.

-Russ [Russell Beattie Notebook]

This is interesting, mostly because I used to agree, and now disagree (sorry Russ). I think interfaces are one of the most under-used and useful parts of Java. I certainly believe that if you do have a class and an interface that it implements, the interface gets priority, i.e. the unadorned name of the thing. The class is usually ThingImpl (or SimpleThing, PersistentThing, etc.), because if there is an interface, then you really, really should be programming against that, and not the implementation directly. I seem to spend a lot of my time retrofitting unit tests to legacy code, and interfaces are my most powerful weapon in this arena. Non-test-driven code is frequently tightly coupled and hard to pull apart. Making the legacy objects implement simple interfaces immediately gives me a point of leverage to pry apart the coupling, and to stub out parts of the system with Mocks, or make the test cases themselves implement the interfaces. This lets me very quickly build ‘scaffolding’ around the parts I’m testing, and makes refactoring a much less fraught affair.

Not to mention that well thought out use of interfaces makes it possible to do all sorts of cool stuff with your code, like adding dynamic proxies. Which, as we all know, are pretty much the gateway to AOP, developer nirvana, world peace etc. etc.
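For instance, once an interface is in place, java.lang.reflect.Proxy can layer cross-cutting behaviour over any implementation without touching it. A small sketch, with an invented example interface (Greeter) and a simple logging concern standing in for whatever aspect you fancy:

```java
import java.lang.reflect.Proxy;

// Invented interface for illustration; any interface works with Proxy.
interface Greeter {
    String greet(String name);
}

public class ProxyDemo {
    // Wrap any Greeter in a dynamic proxy that logs each call, then delegates.
    static Greeter withLogging(Greeter real) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, methodArgs) -> {
                    System.out.println("calling " + method.getName());
                    return method.invoke(real, methodArgs);
                });
    }

    public static void main(String[] args) {
        Greeter logged = withLogging(name -> "Hello, " + name);
        System.out.println(logged.greet("world"));
    }
}
```

The crucial point: this only works because callers hold a Greeter, not a GreeterImpl. Program against the concrete class and the proxy door (and with it this flavour of AOP) slams shut.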