Summary

 
On this page: Detailed information about the eSIGN project

Project summary

Further details

Project summary

The eSIGN project aims to provide sign language on websites. The project is working on local Government websites in Germany, the Netherlands and the UK.

The project is based on virtual signing technology, which uses virtual humans (or 'Avatars'). This technology is still in development, but virtual signing can be better than using videos of real human signers in some situations. With virtual signing it is possible to change small sections of the signing quickly and easily (to update the site, for example) without having to record the whole section again. Virtual signing is also quicker to download than video and takes up far less space on Internet servers.

The system being used and developed for the eSIGN project is based on databases (or 'lexicons') of signs. This means that when an individual sign has been created for one section of signing, it can be used again in other sections. As these databases of signs grow, it becomes easier and quicker to put signing onto websites. In the future it will therefore be possible to provide sign language on more and more websites.

Deaf people who are qualified sign language experts are working on the project to create the signed content, and many users will be involved in evaluating the quality of the signing throughout the project. Virtual signing could become an important access method for Deaf people as access to information in general becomes more technology-based.

Further details

Project aim

The eSIGN project aims to provide important Government information in sign language, using Avatar technology. Sign language is the first language of many Deaf people, and their ability to understand written language may be poor. As such, it is very important for this group to have access to information in their first language, sign language.

Introduction to virtual signing

Virtual signing works by sending commands from a website to animation software installed on a user's PC. This software includes the Avatar itself. Customised avatars can be produced, allowing companies and organisations to display a character that conforms to their corporate image.
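As a rough illustration of this command flow, the sketch below models the website delivering a short list of compact signing commands to a local player. The names used here (SigningCommand, AvatarPlayer) are assumptions made for the sketch, not the actual eSIGN interfaces.

from dataclasses import dataclass

@dataclass
class SigningCommand:
    """A compact instruction sent from the website to the local player."""
    sign_id: str       # identifier of a sign already known to the player
    duration_ms: int   # suggested playback duration

class AvatarPlayer:
    """Stands in for the animation software installed on the user's PC."""
    def play(self, commands):
        for cmd in commands:
            # A real player would animate the Avatar here; this sketch
            # only reports what it was asked to do.
            print(f"Animating sign '{cmd.sign_id}' for {cmd.duration_ms} ms")

# The website only needs to deliver a short list of commands, which is far
# smaller than a video clip of the same signed content.
page_content = [SigningCommand("HELLO", 800), SigningCommand("WELCOME", 900)]
AvatarPlayer().play(page_content)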

Signed content can be placed on the Internet by recording videos of human signers. However, virtual signing has many advantages over providing videos of human signing. Firstly, the information sent from the website to the Avatar is far more compact than the information that must be sent when downloading video clips. This means that high-quality signing can be provided over low-bandwidth Internet connections, something that is not possible with video. In addition, the Avatar is a true 3D model, so not only is the image quality high compared to video, but the character can be rotated by the user to provide the optimum view.

Producing video clips of human signing for websites is expensive, and when any detail of the content changes the clip must be re-recorded. This does not fit well with the normal model of information provision on the web, where the ease and speed of updates is a fundamental feature. Virtual signing technology allows small components of the signed content to be changed without the need to re-create the entire clip. The use of an Avatar also means that output remains consistent (i.e. played through the same virtual human character) even when many different people work on the creation and modification of content over time. Currently, as the technology is still being developed, the creation of virtual signing content is relatively slow compared to recording sign language on video. However, as virtual signing technology develops further, content production will become faster and cheaper than video production.

Another key advantage of virtual signing technology is the ability to produce signed output by blending together sequences of signs to make new phrases on demand. This gives it the potential to be integrated with web content management systems, again increasing the viability of including signing on websites. An example of this is train timetables, where content management systems look up train time information from a database and provide specific information on request from users. A virtual signing system could be created alongside this, using the same data source as a reference to create signed information; see the sketch below. As the signing system would run alongside the 'standard' system, it would not require updating each time the timetables change.
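A minimal sketch of that timetable example follows, assuming a hypothetical lexicon of pre-built signs keyed by gloss; the data and function names are illustrative and are not taken from the eSIGN system.

# Shared data source, used by both the text pages and the signing system.
TIMETABLE = {("Norwich", "London"): "10:45"}

# Hypothetical store of pre-built signs, keyed by gloss.
SIGN_LEXICON = {"TRAIN": "...", "NORWICH": "...", "LONDON": "...",
                "DEPART": "...", "10:45": "..."}

def signed_answer(origin, destination):
    """Build a signed phrase on demand from the same data the text system uses."""
    departure = TIMETABLE[(origin, destination)]
    glosses = ["TRAIN", origin.upper(), destination.upper(), "DEPART", departure]
    # Each gloss is pulled from the lexicon and the signs are blended in order,
    # so the signed output follows the timetable automatically when it changes.
    return [SIGN_LEXICON[g] for g in glosses]

print(signed_answer("Norwich", "London"))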

Creating virtual signed content

There are two ways to create virtual signed content: motion capture and synthetic signing. Motion capture works by using a combination of technologies, such as motion capture gloves and position markers, to capture detailed movement of signing components from a human signer. This information can then be stored, manipulated, and sent to the Avatar for playback.
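The capture-store-replay pipeline might be pictured roughly as below; the frame structure and joint names are made-up simplifications, and real motion capture data is far richer than this.

from dataclasses import dataclass

@dataclass
class Frame:
    time_ms: int
    joints: dict  # e.g. {"right_wrist": (x, y, z)} from gloves and markers

def capture_session():
    """Stands in for data arriving from the motion capture equipment."""
    return [Frame(0, {"right_wrist": (0.0, 1.0, 0.2)}),
            Frame(40, {"right_wrist": (0.1, 1.1, 0.2)})]

def slow_down(frames, factor=2):
    """One example of manipulating stored data before playback."""
    return [Frame(f.time_ms * factor, f.joints) for f in frames]

recorded = capture_session()    # capture
stored = list(recorded)         # store
playback = slow_down(stored)    # manipulate, then send to the Avatar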

Synthetic signing works by sending motion commands, in the form of written codes, which the Avatar then animates. In eSIGN we use various coding systems to describe the different components of signing (manual components, facial expressions, mouth patterns, and head and body movements), which are then animated by the Avatar. The animation system has a pre-programmed model of how each signing component should be animated, and these models have also been developed and refined during the eSIGN project.
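The sketch below shows the general idea of describing a sign component by component before handing it to the animation system; the field names and values are invented for illustration and are not the notation actually used in eSIGN.

# An illustrative coded sign; each signing component is described separately.
coded_sign = {
    "gloss": "THANK-YOU",
    "manual": "flat hand at chin, forward movement",  # hand shape, location, movement
    "facial": "smile",
    "mouth": "thank you",                             # mouth pattern
    "head_body": "slight nod",
}

def animate(sign):
    """Hand each component to the pre-programmed animation model in turn."""
    for component in ("manual", "facial", "mouth", "head_body"):
        print(f"{sign['gloss']}: animate {component} -> {sign[component]}")

animate(coded_sign)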

Signs created using the synthetic signing approach generally appear less natural than signs produced from motion capture data. However, synthetic signing has some clear advantages over motion capture when providing content for websites. The main advantage is the ease with which signs can be blended together. It is possible to blend together 'chunks' of motion-captured signing, but with synthetic signing it is possible to create individual signs and automatically blend these together to create phrases of sign language.

This means that once an individual sign has been created for one phrase, it can be stored in a lexicon, and reused with little or no modification. As more signs are stored and the lexicon grows in size, more and more signed phrases can be built by simply pulling in signs from the database, meaning that the time needed to create content is reduced.
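A minimal sketch of that lexicon reuse follows; the storage format and function names are assumptions made for illustration.

# gloss -> stored sign definition (the storage format here is an assumption)
lexicon = {}

def add_sign(gloss, definition):
    """Once a sign has been created for one phrase, it stays available for all others."""
    lexicon[gloss] = definition

def build_phrase(glosses):
    """Blend existing signs into a new phrase; only missing signs need new work."""
    missing = [g for g in glosses if g not in lexicon]
    if missing:
        raise KeyError(f"Signs still to be created: {missing}")
    return [lexicon[g] for g in glosses]

add_sign("COUNCIL", "coded sign data ...")
add_sign("MEETING", "coded sign data ...")
add_sign("TUESDAY", "coded sign data ...")
print(build_phrase(["COUNCIL", "MEETING", "TUESDAY"]))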

The future for virtual signing

Skilled personnel are required to ensure the future of virtual signing. In order to add new signs to a lexicon, individuals with a good knowledge of sign language linguistics are required, and they must also be trained in the notation systems used to describe sign components.

Taking signs out of the lexicon to create sequences of sign language requires translation skills (for example English to British Sign Language) and good knowledge of the target sign language. The person who undertakes this role may be a bilingual Deaf person, a relay interpreter or a hearing interpreter, for example.

Initially, the creation of signed content is time-consuming and therefore not economically viable. However, as more signs are created and saved in the lexicon, it becomes possible to create signed sequences quickly and efficiently. In Germany, the Institute of German Sign Language at the University of Hamburg has a considerable lexicon of German Sign Language (DGS) signs, while the Netherlands and the UK both started the eSIGN project without a lexicon. For these two countries, further lexicon development will be required before it becomes feasible to produce content economically.

It is envisaged that virtual signing will continue to develop into a viable solution for the provision of sign language on the Internet. As lexicons are built up, it will be possible for competent sign language translators to create signed sequences very quickly and cheaply. Content providers will therefore have the option of providing translations or explanations of information in sign language, making this a realistic and viable business solution. Information could be presented in a wide variety of locations: on the Internet, on public display systems, and in support of face-to-face transactions between hearing and Deaf people.

 

Download a summary of the eSIGN project Partnership (PowerPoint presentation, 332K)


         

Maintained by Judy Tryggvason (jt@cmp.uea.ac.uk)