What is interoperability? (Part 2)

In a companion article, “What is interoperability? (Part 1)”, I began explaining what is meant by interoperability between systems, a concept that reveals itself to be more complex and more faceted than one might first imagine. My interest in exploring this is to develop thinking for my work in the GLOBIS-B project, fostering global cooperation between providers of biodiversity research infrastructures to advance implementation support for Essential Biodiversity Variables. During the first half of this year, we’re aiming to develop a manifesto to steer global co-operation on informatics interoperability, and I’m looking for ways to express this in a concrete form that’s easy to work towards.

In this article, I look in more depth at the way interoperability is understood today in several communities relevant to this work. Let’s start with the ENVRI Community view of interoperability.


ENVRI Community view of interoperability

In the ENVRI Community cluster of research infrastructures (RIs) for environmental sciences[1], the importance of interoperability within and among disciplines has been highlighted[2]. In the long term, the ENVRI Community aims for interoperable datasets, software tools and workflows. Interoperability is important for conducting the multi-disciplinary, systems-oriented environmental science needed to respond to societal challenges such as climate change, natural disasters, and food and water security. The ENVRI Community talks about requiring infrastructure interoperability, such that users can benefit from the combined infrastructure facilities. This means users should be able to discover, retrieve and use environmental science datasets from RIs other than the one with which they are directly authenticated[3]. Similarly, users should be able to discover and make use of other resources, such as computational models and capacity, workflows, instruments, etc., just as if they were authenticated directly to the RI hosting those assets. Such infrastructure interoperability has elements of both the syntactic and semantic interoperability already discussed. Making the described scenario possible requires the exchange of data and the establishment of the surrounding context (i.e., purpose) of that exchange. Common standards for syntactic interoperability, dealing with data and meta-information formats and with the protocols for controlling exchanges, are needed. Beyond this common basis of communication, a mechanism for establishing the context, and thus for achieving infrastructure interoperability, is required. The presently proposed solution is to establish a super-catalogue of metadata (more accurately: meta-information for the different purposes of discovery, access, use, etc.) about the available resources and assets of each participating RI.
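The super-catalogue idea can be sketched in a few lines of code. This is a minimal illustration only, assuming hypothetical record fields and class names rather than any real ENVRI interface: each participating RI contributes purpose-tagged meta-information records, and a user discovers resources across all RIs from one place.

```python
# Minimal sketch of a cross-RI "super-catalogue" of meta-information.
# All names (CatalogueRecord, SuperCatalogue, the RIs and attributes)
# are illustrative assumptions, not a real ENVRI API.

class CatalogueRecord:
    def __init__(self, ri, resource_id, purpose, attributes):
        self.ri = ri                  # hosting research infrastructure
        self.resource_id = resource_id
        self.purpose = purpose        # e.g. "discovery", "access", "use"
        self.attributes = attributes  # purpose-specific meta-information

class SuperCatalogue:
    def __init__(self):
        self.records = []

    def harvest(self, record):
        """Each participating RI contributes (or is harvested for) records."""
        self.records.append(record)

    def find(self, purpose, **criteria):
        """Discover resources across all RIs, regardless of which RI the
        user authenticated with."""
        return [r for r in self.records
                if r.purpose == purpose
                and all(r.attributes.get(k) == v for k, v in criteria.items())]

catalogue = SuperCatalogue()
catalogue.harvest(CatalogueRecord("ICOS", "co2-flux-2015", "discovery",
                                  {"variable": "CO2 flux", "year": 2015}))
catalogue.harvest(CatalogueRecord("LifeWatch", "species-obs-eu", "discovery",
                                  {"variable": "species occurrence"}))

hits = catalogue.find("discovery", variable="CO2 flux")
print([r.resource_id for r in hits])  # → ['co2-flux-2015']
```

The point of the sketch is the separation by purpose: the same resource could carry distinct meta-information records for discovery and for access, matching the distinction drawn in the next paragraph.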

Meta-information should not be confused with the more common term metadata, that is, “data that describes data” (or some object). Metadata about a book, for example, includes its title, author, date of publication, subject, its ISBN, its dimensions, number of pages, and the language of the text, with the actual text content being the described data. These metadata serve as meta-information for discovering the book in a library; namely, for finding the library-specific index number that leads to its actual position on the library shelves. Once the index number is known, these metadata become irrelevant for the purpose of access; they are just data in the context of that purpose. The index number and the actual position on the library shelf are the meta-information for accessing the book. In turn, they become irrelevant for reading the book.[4] So, meta-information is metadata with a surrounding context.
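The book example can be made concrete. This sketch (with invented card-index and shelf data) shows how the same metadata is meta-information for one purpose and mere data for the next:

```python
# The same book metadata acts as meta-information only relative to a
# purpose. All data and function names here are invented for illustration.

book_metadata = {
    "title": "On the Origin of Species",
    "author": "Charles Darwin",
    "isbn": "978-0486450063",
    "language": "English",
}

def discover(card_index, metadata):
    """Purpose: discovery. The descriptive metadata is the meta-information
    that lets us find the library-specific index number."""
    return card_index.get(metadata["isbn"])

def access(shelves, index_number):
    """Purpose: access. Only the index number matters now; the descriptive
    metadata has become irrelevant (mere data) for this purpose."""
    return shelves.get(index_number)

card_index = {"978-0486450063": "QH365.O2"}
shelves = {"QH365.O2": "<the actual text of the book>"}

index_no = discover(card_index, book_metadata)
print(access(shelves, index_no))  # → <the actual text of the book>
```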

Interoperability for Essential Biodiversity Variables

Essential Biodiversity Variables (EBV)[5] and other essential variables are good examples of a global use case that drives the need for co-operation on interoperability between data and services offered by infrastructures[6]. Not only must EBV data products be complementary and comparable across space, time and taxonomy; it must also be possible to produce them using any research e-infrastructure[7] with common, standardised workflows operating on primary data that can be stored in and across multiple infrastructures.

Considering interoperability for the ecological and biodiversity sciences in the CReATIVE-B project, we concluded that ‘interoperable service logic’[8] is necessary for achieving interoperation between infrastructures. In service-oriented systems, service logic is the collection of atomic (individual) functional components, each of which performs specified actions or activities of the system. A service that looks up the name of a species in a database and returns information about all available synonyms and the species’ position in the taxonomic hierarchy is a functional component. A service that executes a workflow (perhaps including looking up species names) on behalf of a user is a functional component. By interoperable service logic, we mean that services are compatible with one another. That is, they have compatible inputs and outputs, where the meanings of those inputs and outputs are the same (or very similar) from one service to the next, i.e., they share common semantics; not only in terms of data and metadata meaning but also in terms of workflow and provenance specifications and engines. Service logic interoperability is one of the most expensive scenarios in terms of human resources, as it means either working towards agreeing and adopting common standards for services, or bridging with technology (e.g., by mapping gateways or brokering), both in terms of the syntax of communications and the semantics of operations. This kind of interoperability requires a deep understanding of how service logics operate both within and among infrastructures, of their data structures, and of their higher-level communication protocols.
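The two functional components named above can be sketched to show what “compatible inputs and outputs with shared semantics” means in practice. This is a toy illustration assuming an invented `ScientificName` record and an in-memory demo database, not any real taxonomic service:

```python
# Sketch of 'interoperable service logic': two atomic services agree on
# the shape AND meaning of what they exchange (a ScientificName record),
# so one can be composed inside the other. All interfaces are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ScientificName:
    canonical: str                                 # agreed meaning: canonical name
    synonyms: list = field(default_factory=list)   # agreed meaning: known synonyms
    lineage: list = field(default_factory=list)    # taxonomic hierarchy, root first

def name_lookup_service(name: str) -> ScientificName:
    """Atomic functional component: resolve a species name."""
    demo_db = {
        "Puma concolor": ScientificName(
            "Puma concolor",
            synonyms=["Felis concolor"],
            lineage=["Animalia", "Chordata", "Mammalia", "Carnivora"]),
    }
    return demo_db[name]

def workflow_service(names: list) -> dict:
    """Atomic functional component: a workflow that reuses the lookup
    service, relying on the shared semantics of its output type."""
    return {n: name_lookup_service(n).synonyms for n in names}

print(workflow_service(["Puma concolor"]))  # → {'Puma concolor': ['Felis concolor']}
```

If the two services disagreed on what `synonyms` meant (say, common names versus taxonomic synonyms), they would still be syntactically compatible yet not semantically interoperable, which is exactly the gap the paragraph above describes.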

GEO System of Systems (GEOSS) approach to interoperability

GEOSS interoperability is based on a brokered system-of-systems approach, whereby dedicated mediation and transformation components (the brokers) act between the interoperating systems. The novelty of the approach is in the principle of offering interoperability arrangements “focusing on the modularity of interdisciplinary concepts rather than just on the technical interoperability of systems”[9].

The brokering approach relaxes the requirement (typically seen in federation approaches) for a common data model and exchange protocol. It uses third-party brokering components, with their own internal data models, that adapt to the published interfaces and data models of the individual contributing systems. Additional functionality elevates the broker to acting on interoperability at the level of interdisciplinary concepts by providing both automated and user-assisted query expansion through a collection of carefully selected controlled vocabularies, thesauri, gazetteers and ontologies. This approach, it is claimed, can accommodate the “specificity of each discipline and the need to make explicit the tacit body of knowledge that underpins it”[10] in a way that federation approaches are unable to. Users of systems in one discipline can more easily exploit those of a different domain because semantics are brokered as well as syntax (formats and protocols). As well as mediating the specific meaning within a domain, attempts are made at mediating meaning more abstractly across domains. In the Levels of Conceptual Interoperability Model (LCIM) proposed by Turnitsa (2005)[11], this is said to equate to level 6, i.e., conceptual interoperability. Nevertheless, it is not clear that this approach also accommodates levels 4 (pragmatic interoperability) and 5 (dynamic interoperability) of the LCIM.
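The brokering pattern can be sketched as follows. This is a deliberately simplified illustration, assuming invented system names, vocabulary terms and adapter functions; a real GEOSS broker (e.g., the GEO DAB) is far more elaborate:

```python
# Toy sketch of the brokered system-of-systems pattern: the broker keeps
# its own internal model, adapts to each contributing system's published
# interface, and expands queries via a controlled vocabulary. All systems,
# terms and mappings here are invented for illustration.

class Broker:
    def __init__(self):
        self.adapters = {}     # system name -> adapter function
        self.vocabulary = {}   # term -> equivalent/related terms

    def register(self, system_name, adapter):
        """Adapters translate the broker's query into each system's native
        interface; no common model is imposed on the systems themselves."""
        self.adapters[system_name] = adapter

    def expand(self, term):
        """Semantic mediation: query expansion via a controlled vocabulary."""
        return {term, *self.vocabulary.get(term, [])}

    def search(self, term):
        terms = self.expand(term)
        results = []
        for adapter in self.adapters.values():
            for t in terms:
                results.extend(adapter(t))
        return sorted(set(results))

broker = Broker()
broker.vocabulary = {"precipitation": ["rainfall"]}
# Each system keeps its own data model and terminology; the adapter hides
# the difference from the broker's users.
broker.register("hydrology_ri", lambda t: ["gauge-42"] if t == "rainfall" else [])
broker.register("climate_ri", lambda t: ["grid-7"] if t == "precipitation" else [])

print(broker.search("precipitation"))  # → ['gauge-42', 'grid-7']
```

Note how the hydrology system, which only understands “rainfall”, still answers a “precipitation” query: the semantic mediation happens entirely in the broker, which is the contrast with federation that the paragraph above draws.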

Interoperation of autonomous systems for complex situations

Concepts of interoperability are well developed in the realm of modelling and simulation, particularly in the context of collaborating autonomous systems (i.e., robots, automatons, etc.). Such systems must automatically resolve complex situations. Here, not only is it necessary to exchange data meaningfully and use each other’s services; it is also necessary that such systems can come to a consistent interpretation of the ‘truth’ of a situation before they proceed to act in it. This characteristic (i.e., the alignment of activity, based on purposeful abstractions of reality) is described as ‘composability’[12] and is a complement at the logical level to interoperability at the physical level.

Consistent, tangible elements remain elusive

I’ve explained in summary several different approaches to interoperability, some more nuanced than others. There are probably other approaches too. Certainly, the field of modelling and simulation (M&S) has its own views when it comes to establishing frameworks for interoperable models. Standards, of course, remain a way of establishing the dimensions and parameters of interoperability. But, as remarked by Leonardo Candela (CNR, Italy): “Interoperability is not a feature that can be achieved in absolute terms; it is always a relative concept one can claim to have resolved in a given context”[13]. So, if interoperability is not an absolute but is relative to something, how can we measure it? It is a benefit accruing from the implementation of multiple features by two systems. How can we assess a claim that interoperability has been achieved? How do we know what we must build into our systems to obtain the benefit? What more must we put in place to make this clear for systems architects and designers?


[1] http://envri.eu/

[2] Environmental Research Infrastructures Strategy for 2030 (ERIS2030), https://figshare.com/articles/ERIS_Environmental_Research_Infrastructures_Strategy_for_2030/2067537.

[3] Authentication is the act of establishing the identity and validity of a person wanting to make use of resources. It is distinct from authorization, which is concerned with establishing the person’s rights and giving permission to use resources.

[4] EC FP7 LifeWatch Preparatory Phase project, grant agreement number 211372, Deliverable 5.1.3 Data & Modelling Tool Structures – Reference Model, 8 January 2010, http://orca.cf.ac.uk/56502/.

[5] http://geobon.org/essential-biodiversity-variables/what-are-ebvs/.

[6] Words of Donald Hobern, Executive Secretary, Global Biodiversity Information Facility (GBIF).

[7] Within reasonable expectation, meaning any appropriate and properly equipped institutional, national or regional level research e- / cyber-infrastructure.

[8] As opposed to interoperability at the level of either infrastructural components or at the application level. EC FP7 funded project 261323 CReATIVE-B, deliverable D3.2 section 5 pages 25-28 refers (http://orca.cf.ac.uk/92562/).

[9] Craglia M., Nativi S. (2018). Mind the Gap: Big Data vs. Interoperability and Reproducibility of Science. In: Mathieu PP., Aubrecht C. (eds) Earth Observation Open Science and Innovation. ISSI Scientific Report Series, vol 15. Springer, Cham. doi: https://doi.org/10.1007/978-3-319-65633-5_6.

[10] Nativi et al. (2013). http://ieeexplore.ieee.org/document/6506981/.

[11] https://en.wikipedia.org/wiki/Conceptual_interoperability.

[12] Note that this is a different meaning of the term ‘composable’ from the one normally meant, i.e., the ability to combine and recombine components for different purposes. However, from the nature of the characteristic as alignment of activity, it is possible to see how the two uses of the term are similar.

[13] https://www.rd-alliance.org/group/virtual-research-environment-ig-vre-ig/post/need-vre-manifesto-digital-library-manifesto.
