Interoperability – The Second Front Opens

Often tired and frequently oversold, the struggle between the software industry’s giants, otherwise known as the Open Source/Linux debate, has over the summer opened a second front, interoperability, injecting fresh life into an argument of increasing strategic importance.

For a while now, the case for and against the introduction of Open Source/Linux solutions has centred on fundamental questions of reliability, security and Total Cost of Ownership (TCO).

A search for a ‘silver bullet’ argument in the analyst reports on any of the vendor websites proves fruitless; what one mostly finds is debate over why one side’s TCO benefit is greater than the other’s. After all, TCO and ROI (return on investment) depend less on the underlying operating system than on the applications and services that support the server platforms. Furthermore, it’s argued that TCO doesn’t examine the underlying flexibility of a technology, or any potential savings that accompany its implementation.

If the security and TCO argument isn’t conclusive, or appears to be moving in a new direction, then we have to look elsewhere if we’re going to have a better opportunity to “get the facts” or, just as importantly, judge how apparently opposing technologies can co-exist cost-effectively and productively if and when peace finally breaks out.

With the three main arguments of the OS/Linux offensive bogged down in the detail, where do we go next for some decisive action? In my view, towards the promise of interoperability and clarity about the true cost of acquisition; more importantly, towards understanding why these questions shouldn’t be neglected as a strategic consideration in the broader technology decision-making process.

With all of us now living in an increasingly connected age, interoperability is simply the ability of different IT networks, applications or components to exchange and use information, i.e., to “talk” to each other. This goal can be achieved through software that is “interoperable by design” (e.g., including XML technology in software to facilitate the easy exchange of data across different applications), through licensing and cross-licensing of proprietary technologies and essential intellectual property, through collaboration with partners, competitors and customers, and through the implementation of industry standards (including open standards and broadly accessible proprietary standards) in products and services.
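To make the “interoperable by design” point concrete, here is a minimal sketch of what XML-based data exchange between two otherwise unrelated applications looks like. The invoice record, field names and values are illustrative assumptions, not drawn from any of the products mentioned; the only thing the two sides need to share is the agreed document structure.

```python
import xml.etree.ElementTree as ET

# Application A: serialise a record to an agreed XML structure.
# (The "invoice" schema here is purely hypothetical.)
def export_invoice(invoice_id, amount, currency):
    root = ET.Element("invoice")
    ET.SubElement(root, "id").text = str(invoice_id)
    ET.SubElement(root, "amount").text = str(amount)
    ET.SubElement(root, "currency").text = currency
    return ET.tostring(root, encoding="unicode")

# Application B: a completely separate program can consume the
# same document, knowing only the agreed element names.
def import_invoice(xml_text):
    root = ET.fromstring(xml_text)
    return {
        "id": int(root.findtext("id")),
        "amount": float(root.findtext("amount")),
        "currency": root.findtext("currency"),
    }

doc = export_invoice(1042, 99.50, "GBP")
print(import_invoice(doc))
```

Neither side needs to know what operating system, language or vendor produced the other half; the shared, self-describing format does the “talking” for them, which is the essence of the argument above.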

Whether you happen to be Microsoft, IBM or Novell, this makes sense, as you may have read in an earlier Silicon story which mentioned the interoperability benefits of JBoss working with Windows. After all, in a future which operates from behind a cloud of Web and software services, a liberal ‘multicultural’ approach to software interoperability offers considerable benefits to all the players involved.

Where it all may go badly wrong, however, is when the interoperability that business and the public sector desire is ‘cobbled together’ or poorly engineered, because one solution or licensing model is mandated to the exclusion of another that may offer much greater flexibility and lower costs over time.

This is the main substance of the ‘second front’ argument, which is likely to occupy a great deal of editorial space over the next twelve months. Has business, and in particular the public sector, started along a path which, though recognising the benefits of interoperability, pays only lip-service to the sibling value of flexibility by promoting one technology over another, selecting a narrow, ‘single-source’ route for the delivery of interoperable products?

A catalogue of high-profile public sector project failures over the last twelve months may encourage everyone involved in planning large IT projects to think more pragmatically about how software and services can cooperate and be truly interoperable in a Rubik’s-cube world of emerging complexity. In looking to a future that accepts the co-existence of both Microsoft and Open Source/Linux as a fact of life, we might be better advised to explore the intrinsic flexibility and value-for-money of different solutions. The alternative is to continue along a narrow path which increasingly specifies ‘Open solutions’ that, in principle, deliver the opposite of what “Open” and interoperable should mean in the first place, simply by imposing one set of arguments or (proprietary/open) standards and technologies over another.

Or to quote Winston Churchill: “If we open a quarrel between past and present, we shall find that we have lost the future.”

