The value of software

OSAF Post Feedbacks.

I disagree that there is no sense of value for software in this country.  I do agree that I seem to be buying less software than before, but I ask what factors might have caused this change?  Microsoft contributed, but there are other factors involved.

[Don Park’s Blog]

I do pay for software. I would pay for more software, if there were quality software out there worth paying for. I think I’ve said it before, but free / open-source software entering a market won’t ipso facto kill commercial software, but it certainly will raise the quality bar. If the only means for a commercial product to survive in a marketplace is to monopolise it, then it probably doesn’t deserve to survive. I bought The Bat, not because there are no free (or indeed pre-installed) email clients, but because it was sufficiently better to warrant its price. If a product takes a team of 50 a year to develop and costs tens of thousands per CPU, should it really have anything to fear from a competing OSS alternative developed in a few months by 3 or 4 people who’ve never met, working in their evenings and weekends? And if it does, does that say anything about its true ‘value’?

RSS Validation

Mark Pilgrim and Sam Ruby have released a nifty RSS Validator. I had a very quick go at running it through Jython, but it fell over as Jython doesn’t have the ‘select’ module. A Java port would be nice, but it’s unlikely I’ll have the time, so rather than make promises I can’t keep I thought I’d just flag it up and see if any of the other Java guys feels like taking up the challenge.

Oh yes, and I’m valid apparently. Although I do now feel like an extra in Gattaca.

Namespace collision

Someone else has a software blog entitled ‘Pushing the envelope’. Someone from Microsoft, to be precise. I knew it wasn’t a sparklingly original title, and when I first started I was going to call it ‘Random Thoughts’ – until I saw Rickard’s. I’d be annoyed, except the other PTE was there first, so my bad (although there only appear to be a couple of posts).

I’m pondering the merits of switching to a fully web-based system such as Movable Type, Roller, or miniblog, which will involve a URL change and all sorts of other inconveniences, so maybe I’ll change the title at the same time just to add to the fun. Darren’s Daily Diatribe, perhaps?

On the other hand, I’m no. 1 on Google for ‘pushing the envelope’, so maybe I can just pull rank (bad pun intended).

Delegating SAX Parsers vs Digester

Delegating SAX Parser Handler.
At work I’m working on refactoring / redesigning something that started as a cool idea. Basically, you register sub handlers to a root handler with the path you’re interested in getting messages for (like “/document/header/title” would get you the events for the document title). [Jason Carreira]

It does sound quite similar to digester which allows you to register interest in SAX events using absolute paths, relative paths, wildcards and so forth and apply Rules when the SAX events fire. Plus there’s default rules for all kinds of things like creating beans, setting bean properties, invoking methods. There’s a default object stack so it’s very handy for parsing XML config files and turning them into your domain objects.

Competition in open source can be healthy, though I do prefer reuse when it makes sense since it promotes a bigger user community which often results in better software. So I’d recommend evaluating digester first to see if the effort of starting your own project and supporting it is worth it.

[James Strachan’s Radio Weblog]
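
For anyone who hasn’t seen it, the rule registration looks something like this (a minimal sketch: the element names are invented, and Catalog and Book are assumed to be plain JavaBeans with the obvious setters and an addBook method):

    import java.io.File;
    import org.apache.commons.digester.Digester;

    // Sketch only: maps a hypothetical <catalog><book>...</book></catalog>
    // document onto equally hypothetical Catalog and Book beans.
    Digester digester = new Digester();
    digester.addObjectCreate("catalog", Catalog.class);              // <catalog> -> new Catalog on the stack
    digester.addObjectCreate("catalog/book", Book.class);            // <book> -> new Book on the stack
    digester.addBeanPropertySetter("catalog/book/title", "title");   // element text -> setTitle()
    digester.addBeanPropertySetter("catalog/book/author", "author"); // element text -> setAuthor()
    digester.addSetNext("catalog/book", "addBook");                  // </book> -> catalog.addBook(book)
    Catalog catalog = (Catalog) digester.parse(new File("catalog.xml"));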

I discovered Digester a few weeks ago and have found it very useful. Couple it with BeanUtils and you’ve got a great way of automagically populating your beans from an XML config. I have a Configuration object that contains a Map of the ‘digested’ name-value pairs and uses BeanUtils.copyProperties to set the fields on any Object it gets passed. All I have to do is obey the JavaBean naming conventions and it just works. For an additional check you could include the bean classname in the XML and have the Configuration object complain if it was passed an object of the wrong type.
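
Stripped right down, that Configuration object amounts to something like this (a simplified sketch; the names are mine and the real thing does a bit more checking):

    import java.util.Map;
    import org.apache.commons.beanutils.BeanUtils;

    // Simplified sketch of the Configuration object described above.
    public class Configuration {

        // name-value pairs pulled out of the XML by Digester
        private final Map values;

        public Configuration(Map values) {
            this.values = values;
        }

        // Copies matching entries onto whatever bean it is handed; BeanUtils
        // matches the Map keys against the bean's setXxx() methods, which is
        // why sticking to the JavaBean naming conventions is all that's needed.
        public void configure(Object bean) throws Exception {
            BeanUtils.copyProperties(bean, values);
        }
    }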

Fighting talk

In the script group, the Perl subjects may be more capable than the others, because the Perl language appears more than others to attract especially capable people.

[Lutz Prechelt, An Empirical Comparison of C, C++, Java, Perl, Python, Rexx, and Tcl]

Ready flamethrowers… Fire!

Could this perhaps be because it takes a better-than-average developer to wield Perl in anger without hurting themselves or others? I have something of a love-hate relationship with Perl. Its power and flexibility are undeniable, its syntax questionable. I have this nagging fear that the more I learn, the greater its seductive attraction will become, simply because it allows you to get away with almost anything.

That’s it. I’m not posting any more on this (although I reserve the right to change my mind). There are more important things in life than ‘my language is better than yours’ catfights. Put your energy into writing code instead.

Musing about Markup

With regard to Joe’s recent post about the deficiencies of XML, I have something of a counterpoint to offer. XML was invented as an attempt to unify and simplify data interchange between disparate systems. This had been attempted before, but the efforts never gained sufficient momentum to achieve general acceptance.

XML is a subset of SGML, which has been around for a number of years and is also the language from which HTML is derived. SGML itself is very complex, as it includes all sorts of mechanisms for defining domain-specific dialects (such as HTML and XML). XML was released on the back of the general and massive uptake of HTML, and was similar enough to HTML to be explained as ‘HTML that computers can understand’.

Part of the reason for XML’s success is the huge surge in popularity of the internet and its promise of global connectivity; part is due to its design. XML is simple and formal enough to be relatively easy to write parsers for, while being flexible enough to describe most types of data. Developers were also used to dealing with HTML-style markup. This combination of factors probably accounts for XML’s huge popularity. The biggest hurdle for any attempt to standardise on a data interchange format was always going to be garnering enough general support to make it the de facto standard.

There is always more than one way to do things, and XML may not be the prettiest or the best, but the details of its design are probably less important than the fact that it succeeded in its goal: a standard means of describing data that is easy to pass around between otherwise incompatible systems. Now that we have come to expect easy data exchange, we are free to explore improvements, but we wouldn’t be in this happy position were it not for XML.

Self referential meta blogging

Skimming over some of my old posts, I can usually spot which ones were written from home and which ones from work (it helps that I also remember writing them!). Generally speaking, the more ‘off-topic’ and emotive posts tended to be written from home. It seems that being at work causes me to put on my ‘professional’ hat, while at home I’m more likely to just bash out whatever’s on my mind at the time. Interesting.

More interesting matters

As promised…

XDoclet 1.2 beta looks good. I’ve been trying out the Castor and servlet tags today. It makes it a lot easier to evolve the design when you don’t have to keep changing your mapping file. It was the work of moments to throw together a couple of beans and collection objects and dump them out to XML. Nice.

Still not sure what to do with regard to persistence. I don’t need a relational database (or the hassle), just something quick’n’easy. I toyed with the idea of just storing the raw XML in Lucene, indexed on the various fields and attributes, but I need to look into the querying side a bit more to see how easy (or even possible) it would be to construct queries like ‘select * from documents where date is between 10-OCT-2002 and 20-OCT-2002’. I have a feeling this may be difficult, and probably isn’t the best use of a search engine anyway.
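
For what it’s worth, the sort of query I’d be hoping for might look something like this, assuming I index the date as a lexicographically sortable yyyyMMdd string (the field name, the encoding and the exact API call are all guesses on my part; I haven’t tried this yet):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.RangeQuery;

    // Assumes a 'date' field indexed as a sortable yyyyMMdd string, e.g. "20021015",
    // so that a term range over the field behaves like a date range.
    Query betweenDates = new RangeQuery(
            new Term("date", "20021010"),   // 10-OCT-2002
            new Term("date", "20021020"),   // 20-OCT-2002
            true);                          // inclusive at both ends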

Things to check out further:

Any other suggestions out there?

Bugs

It turns out my Radio problem is a known bug. Don’t know why I got so annoyed, other than the fact that most of the time when I break something out of curiosity the only person who suffers is me, and I can usually dig around the source until I know enough to fix it. Or simply roll back. It’s a little hard to hide it if you break your blog.

Found a fix on Google, so this will be my last rant on that subject. Back to more interesting matters.

Why I hate proprietary formats

I’ve broken Radio. All the so-called ‘dynamic’ links have frozen pointing to ‘www.darrenhobbs.com’ after I played around with the upstream-via-FTP option yesterday. I aborted that idea after finding out that turning on FTP switched off the normal upstreaming to my Radio account. Now I’ve found out that if you ever use the FTP option you apparently can’t ever go back. Thank you, UserLand.

This wouldn’t have annoyed me if I could go in and fix the problem, but I can’t find the reference that the macros are using, leading me to believe it’s buried somewhere in one of the mysterious .root files, which are, naturally, binary.

Now nobody new will be able to subscribe to my RSS feed until I hard-code the links back to what they should be.

And to think this morning I’d just about decided to stick with Radio for the time being, to minimise the inconvenience to my RSS subscribers. Thank you for making the decision for me, Radio.

Looks like I will be moving blog software after all.