CETISFIS Sept10 structure and semantics notes

Summarizing a discussion from the Future of Interoperability Standards September 2010 meeting.

This group comprised Dan Rehak, Erland Øverby, Steve Jay, Bahareh Heravi, Bill Olivier, Tore Hoel and Phil Barker (note-taker).

We started by characterising existing standards as monolithic and large, which was generally seen as problematic.

Large (and by implication, complex) standards were seen as causing problems for implementers and developers. Monolithic standards were seen as problematic if implementers are not allowed to pick and mix elements of the standard, or to make application profiles, and still be said to conform to it to some degree. We seem to have recreated a granularity problem analogous to the one we had with learning resources in the CD-ROM era.

One issue relating to implementing only a subset of a standard is that of an application or system telling another application or system with which it wants to work which profile, or which part of the standard, it can deal with. Different solutions will be applicable depending on the relative roles of the two implementations (for example, an assessment authoring tool that has only implemented part of QTI will not have problems providing instances to an item bank system that has implemented all of the standard, whereas an assessment player with only a partial implementation may have problems when receiving items containing elements of the spec that it doesn't understand). Our conclusion was that the degree of automation one could expect would depend on the maturity of the standard, and that highly sophisticated approaches providing a highly automated solution may be an unnecessary complexity in the early stages of a specification.
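As a purely illustrative sketch of this kind of profile negotiation (the profile structure, element names and check below are hypothetical, not drawn from QTI or any published profile), a consuming system might declare which parts of a specification it supports, and a producing system might check an instance against that declaration before sending it:

 # Hypothetical sketch: a consuming system (e.g. an assessment player) publishes
 # a declaration of the parts of a specification it implements, and a producer
 # checks an item against it before delivery. All names are illustrative only.
 
 PLAYER_PROFILE = {
     "spec": "example-assessment-spec",   # hypothetical specification identifier
     "version": "2.1",
     "supported_elements": {"choiceInteraction", "textEntryInteraction"},
 }
 
 def unsupported_elements(item_elements, profile):
     """Return the elements of an item that the target profile does not cover."""
     return set(item_elements) - profile["supported_elements"]
 
 # A producing system (e.g. an authoring tool or item bank) can warn before delivery:
 item = {"id": "item-42", "elements": ["choiceInteraction", "drawingInteraction"]}
 missing = unsupported_elements(item["elements"], PLAYER_PROFILE)
 if missing:
     print(f"Item {item['id']} uses elements the player has not implemented: {missing}")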

We wish to emphasise the problematic nature of some IP licensing approaches: if the default licence for a specification does not allow derivative works (we believe this is the case with W3C, IMS, and OASIS) then legal uncertainty or licence negotiations become an unwelcome additional burden to those who wish to publish an application profile.

No one spoke fully in favour of large or monolithic standards; however, it was recognised that picking and choosing from smaller standards and profiles is not fully understood. If the choice is between working with one large standard or lots of small standards, the overall complexity of the task may actually increase. Various approaches are available for mapping syntax from one standard to another, but there is also the (not insignificant) problem of mapping semantics from one standard to another.

One approach to tackling the problem of fitting together many standards is to work with an agreed domain model. For example, the set of IMS specs begins to provide a conceptual map of the learning domain, and one can work (to some extent) with multiple IMS specs. We do have to be careful to allow extensibility and variants on the concept map, since no one conception of education will fit all contexts.

Picking up on the earlier point about interaction between the maturity of a specification and its complexity, and using HTML as an example, we considered how a simple standard might be successful and how successful implementation of the simple version might lead to acceptance of progressively more complex versions providing extensions or refinements. Something similar was tried with Learning Design (conformance levels A, B, C). It was noted that the concept of levels of conformance can cause a lack of clarity.

On a similar tack, from the implementer's point of view, allowing pragmatic implementation is better than insisting on strict implementation. In some situations a standard is part of the infrastructure and needs to work reliably and invisibly for most of those who rely on it (cf. TCP/IP); these tend to be very static standards. However, we are in a field which is highly dynamic and where precise conformance should not inhibit innovation.


Are we starting in the right place? Our history may be a problem. All current LET standards come from prior work that pre-dates the web. All are based on the idea that data is record-based, stored centrally, and exchanged between centralized stores. Is this a suitable place to be? Do we need to re-think the model, to think more about connexions than stores, and to conceptualize the web as activity-based rather than document-based?

This would lead to different standards, with data obtained through messages and negotiation rather than as records, possibly bringing in data from diverse and distributed services, and with content provided through embedded players as streams rather than downloaded to applications. (Note: the semantic web and widgets are examples of these approaches.)
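As a rough sketch of that contrast (the service endpoints and fields below are invented for illustration, not part of any existing specification), an application might assemble the data it needs on demand from several distributed services rather than copying a single central record between systems:

 # Illustrative sketch only: rather than exchanging one central record, a client
 # assembles what it needs by messaging several distributed services. The
 # service URLs and response fields below are invented.
 
 import json
 import urllib.request
 
 SERVICES = {
     "profile":  "https://people.example.org/learners/{id}",
     "activity": "https://activity.example.org/streams/{id}",
     "content":  "https://content.example.org/recommendations/{id}",
 }
 
 def fetch(url):
     """Fetch and decode a JSON response from one service."""
     with urllib.request.urlopen(url) as response:
         return json.load(response)
 
 def assemble_view(learner_id):
     """Build a composite view from whichever services respond."""
     view = {}
     for name, template in SERVICES.items():
         try:
             view[name] = fetch(template.format(id=learner_id))
         except OSError:
             view[name] = None  # a missing service degrades the view, not the whole system
     return view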

Dan described a model for content showing how content could be described by metadata, while the use of content in a particular context was associated with paradata. This model is a semantic model of nouns, of the things in the system; one would also need a model for the verbs (activities) and the actors.
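A minimal sketch of the noun side of such a model is given below (the class names and fields are our own illustration of the metadata/paradata distinction, not Dan's actual model), with the paradata record pointing at the actors and verbs that a fuller model would also need to describe:

 # Illustrative sketch of the "nouns": content described by metadata, and its
 # use in context captured as paradata. Class names and fields are invented.
 
 from dataclasses import dataclass, field
 from typing import List
 
 @dataclass
 class Metadata:
     """Describes the content itself, independent of any particular use."""
     title: str
     subject: str
     format: str
 
 @dataclass
 class Paradata:
     """Describes one use of the content in a particular context."""
     context: str    # e.g. a course, community or activity setting
     actor_id: str   # the actor involved (a fuller model would describe actors)
     verb: str       # the activity: viewed, rated, remixed, ...
 
 @dataclass
 class Content:
     identifier: str
     metadata: Metadata
     paradata: List[Paradata] = field(default_factory=list)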

Bill commented on work from Prof Yu in China, who is developing a learning cell model which links networks of content to networks of people (with a semantic model). He is looking towards actors as central to education, leading to personalized systems rather than the enterprise systems which have previously been the focus of much standardization work.

What is to be done? Do we change the process of making standards or just the type of standard? Tore argued that we should change the way we start projects. Currently, each standards and specification effort has an implied or explicit model of what outputs should be produced, who should be involved and how the work should be carried out. It would be better to start each piece of work with a discussion of what these should be. There also needs to be a decision at the outset on how strict the standard needs to be. In general, there needs to be more discussion and understanding of good practice in making standards, and of what makes a standard good as a technical artifact.


Conceptual modelling is important in understanding the relationships relevant to the standard and in facilitating the standardization process as a dialogue between domain and technical experts. There was some discussion of whether the model should emerge from the standardization work or be predefined; but this is probably not so important, as in a real standardization process you have limited time to produce a standard. If at the end of it you haven't even produced a domain model, then you should not be producing a standard. If you produce a domain model at the end of month 6 which is proven wrong in month 18, then again you should not be producing the standard.