SIMILAR

The European Research Taskforce Creating Human-Machine Interfaces SIMILAR to humans

 
  eNTERFACE 2005


  Objectives  
 

The long-term objective of SIMILAR is to merge European research in the human-machine interface field
into a single European taskforce.
The immediate objectives are:
1) to integrate the traditionally separated research communities in visual interaction, interactive speech,
interactive haptics, learning, and human-computer interaction (HCI) across Europe;
2) to integrate national and regional research communities in the above fields into a larger community;
3) to promote standards, solutions and the dissemination of knowledge in multimodal interaction; and
4) eventually, to lead to the establishment of a single ‘European virtual research centre in multimodal interaction’
with strong industrial support.

 

  Consortium  
 

Université Catholique de Louvain-la-Neuve, Belgium
Aristotle Univ. of Thessaloniki, Greece
Ecole Polytechnique Fédérale de Lausanne, Switzerland
Institut Eurécom, France
Faculté Polytechnique de Mons, Belgium
CReATIVe-I3S Laboratory, University of Nice-Sophia Antipolis and CNRS, France
National Institute for Research and Development in Informatics (ICI), Bucharest, Romania
Imperial College London, United Kingdom
Institut National de Recherche en Informatique et en Automatique, France
Consiglio Nazionale delle Ricerche-ISTI, Italy
University of Linköping, Sweden
Universidad de Valladolid-LPI-UVA, Spain
University of Southern Denmark-NISLAB, Denmark
University College Dublin, Ireland
University of Las Palmas de Gran Canaria, Spain
University of Geneva-CVML, Switzerland
Universitat Politècnica de Catalunya, Spain
Zentrum für Graphische Datenverarbeitung e.V., Germany

 
   
  Link  
  www.intuition-eunetwork.net

The eNTERFACE workshops aim at establishing a tradition of collaborative, localized research and development work by gathering, in a single place, a team of leading professionals in multimodal interfaces together with students (both graduate and undergraduate) to work on a prespecified list of challenges for four complete weeks.
The workshop is held on a yearly basis, in July-August, and is organized around several research projects dealing with multimodal human-machine interface design. It is thus radically different from traditional scientific workshops, in which specialists meet for only a few days to discuss state-of-the-art problems but do not really work together.
The senior researchers are mainly members of the SIMILAR Network of Excellence, together with other university professors and industrial and governmental researchers presently working in widely dispersed locations. A small number (usually 7-9) of undergraduates are also selected through an open Call for Participation, on the basis of outstanding academic promise.
About thirty graduate students familiar with the field are selected according to their demonstrated performance.

Members of the ITI/AVRLab team were selected in both the 2005 and 2006 editions of the eNTERFACE summer workshops to contribute to the following projects:

A Multimodal (Gesture+Speech) Interface for 3D Model Search and Retrieval Integrated in a Virtual Assembly Application:


Development of a multimodal interface for content-based search of 3D objects based on sketches. This user interface integrates graphical, gesture and speech modalities to help users sketch the outline of the 3D object they want to retrieve from a large database. Finally, the system is integrated into a virtual assembly application, where the user can assemble a machine from its spare parts using only speech and specific gestures (a minimal fusion sketch follows below).

(July-August 2005, Mons, Belgium)
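To make the late-fusion idea behind this interface concrete, here is a minimal, hypothetical Python sketch: gesture strokes and speech keywords accumulate into a shared query, and a spoken "search" command triggers retrieval. All names (FusionController, query_3d_database, the command vocabulary) are illustrative assumptions, not the project's actual code.

from dataclasses import dataclass, field

@dataclass
class SearchQuery:
    strokes: list = field(default_factory=list)  # 2D outline points from gesture tracking
    keyword: str = ""                            # object class supplied by speech recognition

class FusionController:
    """Accumulates gesture and speech events until a retrieval query can be issued."""

    def __init__(self):
        self.query = SearchQuery()

    def on_gesture(self, points):
        # Append the recognized stroke to the sketched outline.
        self.query.strokes.extend(points)

    def on_speech(self, transcript):
        # The spoken command "search" triggers retrieval; any other
        # word is treated as a descriptive keyword for the query.
        if transcript == "search":
            return self.issue_query()
        self.query.keyword = transcript
        return None

    def issue_query(self):
        results = query_3d_database(self.query)  # retrieval backend (assumed)
        self.query = SearchQuery()               # reset for the next sketch
        return results

def query_3d_database(query):
    # Placeholder: a real system would match the sketched outline against
    # shape descriptors extracted from the 3D model database.
    return ["model matching '%s' (%d stroke points)" % (query.keyword, len(query.strokes))]

controller = FusionController()
controller.on_gesture([(0, 0), (10, 5), (20, 0)])
controller.on_speech("chair")
print(controller.on_speech("search"))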

Multimodal tools and interfaces for the intercommunication between visually impaired and “deaf and mute” people:


Development of a multimodal interface combining visual, aural and haptic interaction with gesture and speech recognition, speech synthesis, and sign language recognition and synthesis, in order to enable communication between people with different kinds of disabilities (a sketch of the underlying modality-translation idea follows below).

(July-August 2006, Dubrovnik, Croatia)
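The core of such a system can be pictured as a translation pipeline: each utterance is analysed into a common text form and then re-synthesized in a modality the receiver can perceive. The following minimal Python sketch illustrates this routing; the recognizer and synthesizer stubs are assumptions standing in for the real analysis and synthesis components.

def speech_to_text(audio):
    return audio  # stand-in for a speech recognizer

def sign_to_text(video):
    return video  # stand-in for a sign language recognizer

def text_to_speech(text):
    return "[spoken] " + text  # stand-in for a speech synthesizer

def text_to_sign(text):
    return "[signing avatar] " + text  # stand-in for sign language synthesis

RECOGNIZERS = {"speech": speech_to_text, "sign": sign_to_text}
SYNTHESIZERS = {"audio": text_to_speech, "visual": text_to_sign}

def relay(message, input_modality, output_modality):
    """Route one utterance from the sender's modality to the receiver's modality."""
    text = RECOGNIZERS[input_modality](message)  # analysis stage
    return SYNTHESIZERS[output_modality](text)   # synthesis stage

# A visually impaired sender speaks; the deaf receiver sees a signing avatar.
print(relay("where is the key?", "speech", "visual"))
# A deaf sender signs; the blind receiver hears synthesized speech.
print(relay("under the table", "sign", "audio"))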


Multimodal user interface for the communication of the disabled:

Upgrade of the treasure hunting game application that is jointly played by blind and deaf-and-mute users. The integration of multimodal interfaces into a game application serves as both entertainment and a pleasant educational tool for the users. The proposed application integrates haptic, audio and visual output with computer vision, sign language analysis and synthesis, and speech recognition and synthesis, in order to provide an interactive environment where blind and deaf-and-mute users can collaborate to play the treasure hunting game (a per-player rendering sketch follows below).
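One way to see how both players can follow the same game is to render every game event through each player's usable output channels. This hypothetical Python sketch shows the idea; the channel names, player profiles and renderers are illustrative assumptions, not the actual game code.

# Each output channel gets a renderer; each player has a profile of channels.
RENDERERS = {
    "audio":  lambda event: "[speech/audio] " + event,
    "haptic": lambda event: "[force feedback] " + event,
    "visual": lambda event: "[on-screen scene] " + event,
    "sign":   lambda event: "[signing avatar] " + event,
}

PLAYERS = {
    "blind player":         ["audio", "haptic"],
    "deaf-and-mute player": ["visual", "sign"],
}

def broadcast(event):
    """Deliver one game event to every player over the channels they can perceive."""
    for player, channels in PLAYERS.items():
        for channel in channels:
            print(player + " <- " + RENDERERS[channel](event))

broadcast("A clue is hidden near the fountain.")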

Game Demonstration Video Files