Accessibility SIG Meeting 24th July 2007


16th Accessibility SIG Meeting

Introduction
The 16th CETIS-TechDis Accessibility SIG Meeting was held on Tuesday 24th July, 2007 at the University of Wolverhampton.

This meeting followed the more traditional SIG meeting format and consisted of presentations and demonstrations.


JAWS Dropping: Ever Mending, Never Ending! A DIY Approach to Accessibility
by Shane Sutherland and Emelye Evans, University of Wolverhampton.

PebblePad is an ePortfolio system built in Flash, which allows users to collect items relating to their studies in one place, including links to any student evidence. It is used at all levels of education from primary schools to professional learning and can also be accessed via the web. Shane gave an introduction to PebblePad and described some of the design considerations that were taken into account when developing the application. Although it is described as an ePortfolio system, it also takes on some of the characteristics of a personal learning space as it allows users to share items, and to obtain feedback and support. The application uses the IMS ePortfolio Specification, RSS (Really Simple Syndication), stylesheets, etc, but the aim was to avoid users having to worry about any such complexity. The software had to be easy to use and personal.

Accessibility and usability were key elements of the software and were built in from the very beginning. There are some preferences which can be set by the user, including the ability to change the colour of menu backgrounds and labels. The personalisation options help to make the space feel as though it belongs to the user. Pebble Learning wanted to make the application fun and interesting to use. However, the challenge was to make it accessible and usable, which did cause some problems for the developers!

The developers first started designing with Flash 7, but using Flash was the cause of several problems. For example, although the JAWS screen reader was being updated on a regular basis, Flash was not keeping up. Also, Flash is not really designed to be accessible for complex systems. Although the latest versions of Flash are more and more accessible, universities tend to be slower to update their versions. JAWS tends to have problems with some Flash components, such as those components that make up web forms, as Macromedia/Adobe have not made these features very accessible. The developers tried hard to make PebblePad work with JAWS and although TechDis was approached, they were unable to help as Pebble Learning is classed as a commercial company.

As the application development progressed, it was getting harder to control how JAWS interacted with the application. In the end, it was decided that it might be easier to build an avatar called Mia to act as the text-to-speech delivery mechanism. Moving away from JAWS has given the developers more control, but it has also taken longer to design the code. Avatars generally come with some text-to-speech components, and non-visually impaired tutors working with visually impaired students have found that actually seeing the avatar talk provides a useful cue. People with dyslexia might also find the avatar's ability to read back text useful. The developers built their own avatar as it was too expensive to buy one off-the-shelf.

Emelye then continued the presentation (her notes are available in Word format) and described how she had spent a couple of years wrestling with JAWS, and that it only took a week or so to build the avatar, so that no assistive technology is required by users to interact with the software. Pebble Learning is a small company, but even so, they have taken the forward step of employing a full-time accessibility developer (Emelye), which although costly initially, should improve the product for everyone.

Microsoft's SAPI (Speech Application Programming Interface) was set up on a server to take care of the dynamic text-to-speech requirements. The avatar's lips are animated and synchronised to the MP3s. Long pieces of text have to be comma-separated to ensure that the text is sent in chunks to the server. The alphabet is already pre-loaded, so that users will not experience any delays when entering text into free-form fields. Of course, this means that it is not quite as fast as a standard screen reader. At present, the avatar comes with the free Microsoft voice, with the opportunity to change the pitch and speed. Although this voice is a little robotic, other voices could be used and it is anticipated that a default voice will be chosen by each institution.
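The comma-based chunking described above can be sketched as follows. This is a speculative illustration, not PebblePad's actual code: the function name and length limit are assumptions made for the example.

```python
# Hypothetical sketch of comma-based chunking: long text is split at
# commas so that each small piece can be sent to a server-side
# text-to-speech service separately, rather than as one large request.

def chunk_text(text, max_len=120):
    """Split text at commas into chunks no longer than max_len characters."""
    chunks, current = [], ""
    for part in text.split(","):
        part = part.strip()
        if not part:
            continue
        # Re-join neighbouring comma-separated pieces while they still fit.
        candidate = f"{current}, {part}" if current else part
        if len(candidate) <= max_len:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = part
    if current:
        chunks.append(current)
    return chunks

print(chunk_text("Welcome to your portfolio, here you can collect items, "
                 "share them with tutors, and request feedback", max_len=60))
```

Sending small chunks in this way means speech playback can begin before the whole passage has been synthesised, which is why long pieces of text have to be comma-separated.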

Visually impaired users tend to have customised shortcuts and initially, it was found that PebblePad conflicted with them. The only solution has been to ask assistive technology users to turn off their assistive technology before using PebblePad. However, users can press CTRL-F3 to access any shortcuts relating to the application. This shortcut is the only one that users have to remember. It is possible that user-defined shortcuts may be allowed in a future version of the product. The avatar will also describe the areas on the screen and will give as much information as possible, so that users can build up a mental image of the screen (this feature is still under development). It was decided that the more information that could be given to the user, the better. However, one of the disadvantages is that if the user wants to move to another software application, such as Word or a browser, at the same time as using PebblePad, the user will need to start up any assistive technology again. The developers are looking into this issue and are working with the RNCB (Royal National College for the Blind).


Development in Standards and their Application to User Focused Usability
by Andy Heath, Open University.

Andy Heath gave a presentation about the various accessibility specifications and standards that are available to developers.

W3C's (World Wide Web Consortium) WCAG (Web Content Accessibility Guidelines) focuses on universal accessibility, but there can be conflicts in the recommendations, so that if it were implemented to the letter, the web resource may not work for all the people all of the time. In any case, it's hard to imagine all of the circumstances in which a web-based resource would be used. Nevertheless, although universal design doesn't work for everyone, it does have some value, and should be used in conjunction with resource adaptation, whereby resources are matched at the time of delivery with a user's preferences. There will be times when alternative versions of resources will need to be developed for different needs. Therefore, there is a need to label resources with information relating to their format and to then match them with what the user needs and is able to access. There are several possible stages to matching resources with users:

 * Label the resource with its accessibility properties - e.g. visual, aural, tactile, text (note: text can be quite complex as it is represented and used in different ways, cf. reading levels, etc);
 * Provide alternative versions of the resource - e.g. an aural version of a visual resource, etc;
 * Develop a standard machine-readable format of the user's requirements (preferences);
 * Match the user's requirements with the available resources.
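These stages can be illustrated with a minimal sketch. The data structures, resource names, and function name below are invented for illustration; they are not taken from TILE, ACCLIP, or any specification.

```python
# Illustrative matching sketch: resources are labelled with the
# modalities they use, alternatives are linked to them, and a
# machine-readable set of user preferences (here, the set of
# modalities the user can access) is matched against the labels.

resources = {
    "lecture-video": {"modalities": {"visual", "aural"},
                      "alternatives": ["lecture-transcript"]},
    "lecture-transcript": {"modalities": {"text"},
                           "alternatives": []},
}

def select_resource(resource_id, usable_modalities, catalogue):
    """Return resource_id if the user can access it, else a usable alternative."""
    resource = catalogue[resource_id]
    if resource["modalities"] <= usable_modalities:
        return resource_id
    for alt in resource["alternatives"]:
        if catalogue[alt]["modalities"] <= usable_modalities:
            return alt
    return None  # no accessible version available

# A user who cannot use aural content is given the transcript instead.
print(select_resource("lecture-video", {"visual", "text"}, resources))
```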

Despite all this criticism of the WCAG approach, it is still vital, as it is important to make resources as accessible as possible in the first place. One example is TILE (The Inclusive Learning Exchange), which allows a user to describe their preferences and access resources that match those preferences.

The IMS ACCLIP (Accessibility for Learner Information Package) Specification provides a means of capturing content requirements and user preferences for display and control. Both the ACCLIP and ACCMD (AccessForAll Metadata) Specifications will be brought in line with the ISO Standard (see next paragraph). A charter for this work is in the process of being developed. It will include bindings to REST (Representational State Transfer), RDF (Resource Description Framework), and XML (eXtensible Mark-up Language)/WSDL (Web Services Description Language). The work will take about two years and will include demonstration and implementation examples.

Although the IMS Content Packaging Specification (which describes how resources can be contained and organised) originally only had a format for exchanging content between learning management systems, it now includes a facility to describe alternative resources. Version 1.2 of the Specification allows alternative resources to be contained in a content package along with any metadata. Alternative resources can also contain further alternative resources.

The EU4ALL Project will have a service oriented architecture that will match the ACCLIP and ACCMD with the CC/PP (Composite Capabilities/Preference Profiles) work to ensure a blended provision of learning resources.

The ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) JTC1 (Joint Technical Committee 1) SC36 (SubCommittee 36) Individualized Adaptability and Accessibility in E-learning, Education and Training Standard is currently at draft level but should be made public by the end of 2007. There will also be new additions to the ISO work, including:


 * A means of describing events and places;
 * Blended alternatives to online content (e.g. the provision of notes in a large font several weeks in advance of a lecture);
 * Language equivalents - e.g. equivalents to metaphors and idioms in various languages.

The Fluid Project, led by the University of Toronto, will be implementing the W3C's ARIA (Accessible Rich Internet Applications) work and developing new ACCLIP and ACCMD metadata for the resulting interfaces. ARIA enables the role of widgets in web pages that communicate with a server to be recognised, and any changes in their state to be ascertained, which will improve accessibility.


The EU4ALL Project
by Chris Douce, Open University.

Chris Douce gave an overview of the EU4ALL Project, of which the Open University is one of nine European partners. The project is funded under the Sixth Framework Programme and is led by UNED (Universidad Nacional de Educación a Distancia) in Spain. The Open University part of the project is currently housed in the Institute of Educational Technology. Chris is part of the development team.

The EU4ALL Project aims to construct a standards-based accessible architecture that will support the needs of individuals within HE (Higher Education). There is an emphasis on lifelong learning in order to take into account Europe's ageing population.

The key efforts will be:


 * Embedding of accessibility into learning environments, both cultural and technical (e.g. Blackboard, Moodle, etc);
 * Customisation/personalisation of the user experience at different levels. There is often confusion over the granularity of personalisation.  The project will attempt personalisation at the media, activity and learning object levels;
 * Investigation into the process of user needs provision;
 * Development and exploration of international standards;
 * Creation of an open SOA (Service Oriented Architecture).

There are seven phases of activity and the project is currently at the second phase (Phase 1: Requirements Gathering, Phase 2: Architecture). The development team is currently working on an integration architecture, aspects of which will be informed by interviews that the Open University is currently conducting with members of the accessibility community. The implementation will resemble the ELF (E-Learning Framework) - there will be basic and user services, it will be implemented using OpenACS (Open Architecture Community System) and will be evaluated using real users. There will be a range of evaluations. Some will be small scale (10-12 users), whilst others will be larger (over 100 users) and be co-ordinated by the Open University and UNED. The aim is also to create:


 * A set of pedagogical guidelines, which will help people design for VLEs (Virtual Learning Environments);
 * Psychological guidelines;
 * Best practices in online assessment;
 * A report on accessibility issues in ePortfolios;
 * Recommendations of the most relevant standards.

The project aims to connect everything together. The development will attempt to ensure that standards can be successfully implemented. It can be challenging to unpick the words and to understand the author's intentions - the interpretation the developer places on a line of text may not be the one the author intended. There is also a problem with understanding standards - they are often developed by vendors, so what the vendors want is put in the standards rather than what the user wants.

The ACCMD Specification and ACCLIP services are not well developed, which makes interoperability difficult. For example, who is actually responsible for running the ACCLIP service? It might be necessary to have it all in one place - e.g. on a dedicated website or on a USB (Universal Serial Bus) device, although the average person may not necessarily carry one around with them all the time!

In summary, the aim is to develop an open standards-based architecture of services. It will contribute to e-learning, international learning technology, and the development of European policy.


Putting the User at the Heart of the W3C Process
by Jonathan Chetwynd, Peepo.

Jonathan gave a presentation about how the W3C specifications do not always reflect user needs. An MP3 and transcript of Jonathan's presentation are available.

Peepo was launched in 2001 and lasted for two years. Most of the work centred around the assessment of issues that people with cognitive difficulties might have around accessing web-based data. Peepo allows an icon to be enlarged to full screen size, and the images can be tabbed through for people who have visual impairments or who cannot process complicated information. Some people may only want to see ten links at a time, for example. It has been predominantly developed using CSS (Cascading Style Sheets) and SVG (Scalable Vector Graphics) with some RDF. However, some screen readers don't seem to work with SVG, particularly as any sound can't be turned off at present.

A demonstration of iSketch was given. This is a sketching game similar to Pictionary, which plays a sound when the word has been correctly guessed. People guess what is being drawn by typing in a chat window. It is a simple interface for young children to use, and they can learn to associate a graphic with a particular sound. It's often difficult to find simple tools to use.

Jaiku is a social networking website, which can pull in information from Twitter, blogs and del.icio.us feeds. Twitter is simple to use and produces a feed that can be easily passed on to friends. However, writing skills are required and these sites are of little use to people who prefer graphical information.

Amaya is a W3C browser which uses SVG and aims to be accessible. Individual parts of a diagram on a web page can be selected and pasted into a new document. This can only be done in Amaya. As people may want to copy and paste symbols, there should be some sort of specification to cover this, but at present, it doesn't exist. Big companies sponsor the W3C's work, and so they tend to have more influence. So are the specifications designed for the user or for vendors?

The SVG specification doesn't make any allowances for keyboard access or tabbing. Nevertheless, both the Mozilla and Opera browsers have implemented keyboard access despite there being no reference to such a function in the specification.

Luis von Ahn has an interesting captioned video on captchas and has designed a game where two people have to agree on the same description of a random image without using certain words. Points are awarded for agreeing on the same word - it is an enjoyable way of metatagging.

It might be useful if Google searches could provide Flesch-Kincaid indexes, which measure word length, the number of long words, and the number of words in a sentence (some long words which are in common parlance are allowed).
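As a rough illustration, the Flesch-Kincaid grade level can be computed as below. The naive vowel-group syllable counter is an assumption made for the sketch; real implementations use pronunciation dictionaries and more careful rules.

```python
# Sketch of the Flesch-Kincaid grade-level formula:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# Syllables are approximated here as runs of vowels in each word.
import re

def count_syllables(word):
    """Approximate syllable count as the number of vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Estimate the US school grade level needed to read the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

print(flesch_kincaid_grade("The cat sat on the mat. It was warm."))
```

Short sentences of short words score low (easy to read), while long sentences full of polysyllabic words push the grade level up.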


Demonstration of University of Wolverhampton's "Student Development and Enablement Centre"
by Nick Musgrove and Richard Homfray, University of Wolverhampton.

We then had a visit to the CETL (Centre of Excellence in Teaching and Learning) funded SDEC (Student Development and Enablement Centre) managed by the School of Applied Sciences.

The centre has around half a dozen workstations, each of which has twin monitors, so that one can be used for navigation, for example, and the other to magnify the content. It offers an exclusive, inclusive face to those students who are eligible: any student registered with the special needs tutor can use the facility at any time, and it is only available for those students who need it. These students are encouraged to use the room, although it is only available to students of the School of Applied Sciences.

At present, there aren't any students who use a screen reader, but the latest Supernova is available via an interceptor file. Dragon Dictate dictation software is available on two machines, and there are also scanners. DAISY (Digital Accessible Information SYstem) software is on every machine and content can be converted into DAISY. There is a projector and screen so that small group workshops can be held.

Equipment and software were chosen because there might be a use for them; in other words, future needs have been anticipated. The room is above a large computer lab, but anything happening in that lab, such as lectures, can be seen and heard upstairs, as lecturers have radio mikes and can hear the students in the Enablement Centre via walkie-talkies. This was seen as the cheapest solution. In fact, most of the solutions have been fairly cheap and simple.

End of Accessibility SIG Meeting 24th July 2007