CETIS FIS Sept10 structure+semantics notes

Characterised existing specs as monolithic and large. Dan: if you can't understand it within 2 hours or implement it within a day, the spec is too complex. A problem for adoptability if you cannot pick and mix elements of a spec, or make application profiles which select from the standard. This recreates the granularity problem that we had with learning resources.

No one is fully in favour of large specs, but picking and choosing from smaller specs and profiles is not fully understood. We don't really know how to put small specs from different orgs together: sometimes we can't even agree on what semantic modelling approach to use. "Micro schemas" which can be imported where needed; the mechanism will depend on whether the users are humans or machines. SGML architectures allowed semantics to be maintained even if a name was changed. But semantics across specs... sometimes we don't even get the syntax in common.

The set of IMS specs begins to provide a conceptual map of the learning domain; one can work (to some extent) with multiple IMS specs. Do have to allow extensibility, to allow variants on the concept map. All becomes part of a larger modular system. How do you tell other systems what profile, what part of a spec you have implemented? Tell the provider what elements you require? The degree of automation will depend on the maturity of the spec. IP implication of profiles: cannot make derivative works of W3C, IMS, OASIS specs.

Pathways (thinking about HTTP/HTML): progressive implementation, start simple and work on more complex features as adoption takes up. A series of mini-profiles? As done for Learning Design (but could be simpler than level A).

What does this mean for conformance? Need modular conformance. In CMI/SCORM the idea of levels has been very problematic. Pragmatic implementation is better than strict implementation: "write strict (or aim to), read lax" as a paradigm. Most of what we are talking about is highly dynamic; strict conformance is not going to happen (cf. TCP/IP as a high-conformance, stable infrastructure standard, while HTML is low-conformance).
Need to be careful to allow innovation.
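The "write strict, read lax" paradigm above can be sketched in a few lines. This is a minimal illustration, not any spec's actual rules: the date field and the tolerated variants are hypothetical, chosen only to show one canonical output form paired with a forgiving parser.

```python
import re
from datetime import date

def write_date(d: date) -> str:
    """Write strictly: always emit the single canonical ISO form."""
    return d.isoformat()

def read_date(text: str) -> date:
    """Read laxly: also accept looser variants we might receive
    from other implementations (variants are hypothetical)."""
    text = text.strip()
    # Canonical form first: YYYY-MM-DD
    m = re.fullmatch(r"(\d{4})-(\d{2})-(\d{2})", text)
    if not m:
        # Tolerate '/' or '.' separators and single-digit day/month
        m = re.fullmatch(r"(\d{4})[/.](\d{1,2})[/.](\d{1,2})", text)
    if not m:
        raise ValueError(f"unreadable date: {text!r}")
    y, mo, dy = (int(g) for g in m.groups())
    return date(y, mo, dy)

print(write_date(date(2010, 9, 14)))  # -> 2010-09-14
print(read_date("2010/9/14"))         # tolerated variant
```

The asymmetry is the point: conformance is enforced on what you produce, while consumption degrades gracefully, which suits the dynamic, low-conformance setting described above.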

Are we starting in the right place? Our history is a problem: all our specs come from prior work that pre-dates the web, all based on the idea that data was record-based, stored centrally and exchanged between centralised stores. Is this a suitable place to be? Do we need to re-think the model? It is more about connections than stores. We need different data models, not record-based ones. An activity-based web rather than a document-based web? Messages not records; streams not downloads. More dynamic: we can disaggregate records. Race to the top. Don't package assessment; access and use services.

How do we share specs from different domains? Even within a single org like IMS, people can use the same terms with different meanings.

Model for content from Dan: Content describedBy metadata; Content usedIn context; Context associated with paradata. A semantic model of nouns (we also need verbs and actors). IMS stds: content, activities, ?

Prof Yu in China is developing a learning cell model which links networks of content to networks of people (with a semantic model), looking towards actors as central to education: personalized systems rather than enterprise systems. Semantics of content or semantics of networks? Pushing chunks of content aligns with a knowledge-transfer model of education; education is about interaction (with people) and context.
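Dan's noun model is small enough to write down directly. A sketch, with the three relations taken from the notes; the plain-tuple triple representation is just an illustrative choice (an RDF vocabulary would be the more formal route).

```python
# Dan's noun model as subject-predicate-object triples.
# Node and predicate names follow the notes; verbs and actors,
# noted as missing from the model, are missing here too.
triples = {
    ("Content", "describedBy",    "Metadata"),
    ("Content", "usedIn",         "Context"),
    ("Context", "associatedWith", "Paradata"),
}

def related_to(node, graph):
    """Everything one hop away from a node, in either direction."""
    out = set()
    for s, _, o in graph:
        if s == node:
            out.add(o)
        if o == node:
            out.add(s)
    return out

print(related_to("Context", triples))  # Content and Paradata
```

Even this toy graph shows the shape of the conceptual map: specs from different domains could interoperate by agreeing on (or mapping between) such nodes and predicates rather than on whole monolithic schemas.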

Implicit models vs explicit models; pre-defined explicit model vs emerging. Feature declaration, e.g. Open Social (when do you need to know what features are available?).

What is to be done? Change the process of doing standards, or the type of standard? Change the way we start projects: the discussion on what the output should be (large, small, modular, semantic, structural) tends to be implied by what has happened before. Need to decide from the outset how "hard" the standard needs to be. Do we start with a standard or start with a working service? Very forgiving implementations: will the standard, the content or the application be blamed when problems happen? We like the way HTML was developed; how often do we follow that approach in LET? A "must understand" feature in content: if it is not understood, the user experience will be invalid.

A standard way to build standards. Good practice, procedural rather than prescriptive. Conceptual modelling is important: emergent or predefined? Aim: understanding of relationships. Standardization is a dialogue between domain and technical experts. Probably not so important: you have limited time to produce a standard; if at the end of it you haven't even produced a domain model then you should not be producing a standard, so cancel it. If you produce a domain model at the end of month 6 which is proven wrong in month 18, cancel the standard.

From the last meeting, relatively little was said about how to improve the technical quality of standards. Are the right stakeholders involved in the process? In the real world we don't get enough input; the default vote is yes. If we treat a standard as a technical artefact: what do we judge it by, what constructs do we need? We say small standards are better: how do we know that?
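The feature-declaration and "must understand" ideas above combine naturally: a consumer declares what it implements, content declares which of its features are mandatory to understand, and the two are checked before use (the same spirit as SOAP's mustUnderstand attribute). A sketch under assumed names; the feature labels and the tuple format are hypothetical, not from any actual spec.

```python
def unmet_requirements(declared_features, content_requirements):
    """Return the 'must understand' requirements the consumer cannot
    meet; an empty list means the content can be used safely.
    Feature names here are hypothetical."""
    return [req for req, must_understand in content_requirements
            if must_understand and req not in declared_features]

# Features this consumer declares it implements.
consumer = {"core", "metadata-v2"}

# Content requirements as (feature, mustUnderstand) pairs; optional
# features may be silently ignored by consumers that lack them.
content = [
    ("core", True),
    ("metadata-v2", True),
    ("adaptive-sequencing", False),
]

missing = unmet_requirements(consumer, content)
print("ok" if not missing else f"reject: {missing}")
```

This keeps conformance modular: a consumer advertises a profile rather than claiming a whole monolithic spec, and content fails fast only on features whose absence would make the user experience invalid.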