What is interoperability? (Part 1)

When two things are described as ‘interoperable’, it means they can work together. In the specific case of two computer systems, being interoperable relates to the ability to transmit and receive data between them, i.e. to communicate. But is it that simple?

I use the term ‘data’ here in its widest sense[1], being quantities, characters, or symbols on which data processing operations are performed. The kind of information this data represents depends on knowing an associated context that allows interpretation and the attachment of meaning. A sequence of numbers – 23.5, 23, 24, 24, 23.5, 26, 26, 25.5, etc. – is data. Had it been received from an automated weather station, you could be reasonably certain these numbers represent Celsius temperatures. Without a concrete observation context, though, you’re unable to determine whether they represent a temperature sequence or my dog’s fluctuating weight in kilograms[2]. Even knowing that they’re temperatures in degrees Celsius, or weights in kilograms, still doesn’t tell you anything about what they mean.
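To make the point concrete, here is a minimal sketch in Python. The names (`Observation` and its fields) are my own invention for illustration, not taken from any standard; the point is only that the same raw values become different information once a context is attached:

```python
# The same raw values are just data until a context is attached.
from dataclasses import dataclass

raw = [23.5, 23, 24, 24, 23.5, 26, 26, 25.5]  # data: no meaning yet

@dataclass
class Observation:
    value: float
    unit: str                # e.g. 'degC' or 'kg'
    observed_property: str   # what was measured
    feature: str             # what it was measured on

# Two equally valid interpretations of the same sequence:
temps = [Observation(v, "degC", "air temperature", "weather station #7") for v in raw]
weights = [Observation(v, "kg", "body mass", "my dog") for v in raw]

assert temps[0].value == weights[0].value  # same datum, different information
```

The data is identical in both lists; only the surrounding context distinguishes a temperature record from a weight record.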

Simple or syntactic interoperability

Essential to late 20th-century engineering of computer, defence and telecoms systems, interoperability has mostly been about interfaces, voltages, signal levels, bit-rates, data formats (representations, including character sets) and the signalling and messaging protocols that control data exchange. To say two systems are interoperable is to say they can exchange data by means of common, shared protocols; where formats or protocols differ, they are mediated via an interworking gateway. Nowadays, and when viewed from higher levels of design abstraction, this kind of interoperability is referred to as ‘syntactic interoperability’, i.e. the ability to work together based on common data formats and communications protocols. The terms data and information were used interchangeably, represented as bits and bytes in service / protocol data units on a communications network, without distinction between data, information, understanding and knowledge. Indeed, such a distinction was only just emerging in academic circles at the time[3].
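A small sketch of syntactic interoperability, assuming a JSON exchange with hypothetical field names of my own: sender and receiver agree only on the format and field layout, and the exchange succeeds without either side knowing what the values mean.

```python
# Syntactic interoperability: both ends agree on format and field layout only.
# Field names ('station', 'values') are hypothetical, not from any standard.
import json

message = json.dumps({"station": "WS-07", "values": [23.5, 23, 24]})  # sender encodes
decoded = json.loads(message)                                         # receiver decodes

assert decoded["values"][0] == 23.5  # the values survived the exchange intact
```

Nothing here says whether 23.5 is a temperature, a weight, or anything else – that question belongs to semantic interoperability.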

Semantic interoperability

More interesting is the notion of ‘semantic interoperability’. This is the idea that systems can work together based on an understanding of the meaning of what is being exchanged between them. Returning to the above example, imagine a network of automated weather stations, each sending its six-hourly observation updates in a standard format to the central weather bureau. There, a software program on the bureau’s forecasting computer interprets the received data and models the weather to produce a forecast. This is a rather asymmetric interoperability scenario, insofar as one side (the automated weather instruments) is essentially a dumb data generator, while the other (the weather bureau computer with its forecasting program) is the ‘intelligent’ half, interpreting and ascribing meaning to the observations data[4]. A more symmetric example of semantic interoperability might occur when two bureaux exchange information, each to refine the forecast of the other.

The key point here is that semantic interoperability involves possession of a shared and congruent understanding of the context, including the important assumptions, principles, facts, notions and relations existing within that context. In a more dynamic form, it involves possession of the capability to infer and build that understanding i.e., build the context from information exchanged.
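As a sketch of what a ‘shared and congruent understanding of the context’ might look like in code – the vocabulary and names below are invented for illustration, not drawn from any standard – a receiver can only attach meaning to a bare number by consulting definitions it shares with the sender:

```python
# A toy shared context: sender and receiver must hold the same definitions
# for interpretation to succeed. All names and ranges are illustrative only.
SHARED_CONTEXT = {
    "air temperature": {"unit": "degC", "plausible_range": (-90.0, 60.0)},
    "body mass":       {"unit": "kg",   "plausible_range": (0.0, 200.0)},
}

def interpret(observed_property: str, value: float) -> str:
    """Attach meaning to a bare number using the shared context."""
    ctx = SHARED_CONTEXT[observed_property]
    lo, hi = ctx["plausible_range"]
    status = "plausible" if lo <= value <= hi else "implausible"
    return f"{value} {ctx['unit']} is a {status} {observed_property} observation"

print(interpret("air temperature", 23.5))
# prints: 23.5 degC is a plausible air temperature observation
```

If the two parties held divergent copies of `SHARED_CONTEXT` – say, one assuming Celsius and the other Fahrenheit – they could still exchange bytes perfectly well, yet semantic interoperability would fail.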

By ‘understanding’ we mean: ‘intelligent, having the ability to make judgements or decisions based on knowledge of something’. Of course, machines only exhibit pseudo-intelligence or apparent understanding, in the sense that they’re not intelligent at all but merely processing sequences of instructions based on some encoded representation of facts, information, rules, etc. Algorithms embedded as software programs (and, increasingly in the postmodern era, deep learning methods) are necessary for such capabilities to exist. We see this as giving rise to the notion that multiple levels of semantic interoperability can exist: from straightforward and simple, based on pre-existing knowledge of the context, to complex and dynamic, where the context must be worked out from the available data. Hence the need to better understand the process of interpreting data, i.e. how we add meaning to data to obtain information from it.

Other kinds of interoperability

Both forms of interoperability explained so far, syntactic and semantic, are examples of interoperability in technical systems. But they also seem applicable in social (community), political (governance) and legal dimensions. In the social and political dimensions, we agree easily on the syntax – the language and grammar we use – the English language, for example. But differing cultural backgrounds and world views can often lead to tortuous misunderstandings.

Like technical interoperability, legal interoperability applies within specified contexts – communication between systems, use of data, use of software – is it allowed or not, and with what constraints? Determining the legal conditions of use should be possible and straightforward, not only for first-order items, i.e. the items to which the conditions are principally attached, but also for the creation and use of combinations and derivatives[5]. Having to seek authorisation from rights holders on a case-by-case basis acts as a barrier to legal use, and interoperability is thus impaired. Constraining legal use today, there are multiple examples where data are not ‘findable, accessible, interoperable and reusable’[6].

In the military domain, concepts of interoperability are strongly developed on multiple levels. The highest level is strategic and political and is concerned with achieving and maintaining strong coalitions and alliances. Operational and tactical interoperability involves multiple organisations (armies, etc.), their people and how they work together on battlefields. Like strategic / political interoperability, operational interoperability is mainly about dealing with human factors, but also includes organizational design, processes and procedures. Lastly, different weapons systems require technical interoperability, both in the pursuit of working together to achieve battle aims and for safety purposes to avoid friendly-fire incidents and collateral damage to civilians.

In exploring interoperability, the concept reveals itself to be more complex and multifaceted than one might first imagine.


[1] http://www.oed.com/view/Entry/296948, definition 2b: ‘Quantities, characters, or symbols on which operations are performed by a computer, considered collectively.’

[2] Observation objects (according to, for example, the OGC O&M standard or the Semantic Sensor Network ontology) capture context. They relate the observation value to time, to the sensor that made the observation, the property and the feature observed.

[3] https://en.wikipedia.org/wiki/DIKW_pyramid#History.

[4] The reality is that the forecasting program is not interpreting at all! It’s just executing a sequence of calculations representing a mathematical model of the weather system. Pre-programmed equations use received data values as their parameters.

[5] See for example considerations and recommendations arising from the RDA-CODATA Legal Interoperability Interest Group, 2016.

[6] FAIR principles; Wilkinson et al., 2016.
