Research activities on Modelling and Automatic Processing of Sign Languages at LISN (aka LIMSI)

Update : 3 November 2023

Permanent researchers : Annelies Braffort, Michael Filhol, Michèle Gouiffès
Permanent computer graphics artist : Cyril Verrecchia
PhD students : Camille Challant, Julie Halbout, Yanis Ouakrim, Paritosh Sharma
Post-doc : Emmanuella Martinod
Engineers : Thomas Von Ascheberg, Julie Halbout
External collaborator : Claire Danet

Sign Languages (SL) are natural languages used in Deaf communities; French Sign Language (LSF) is the one used in France. They are visuo-gestural languages: a person expresses themselves in SL using numerous bodily components (hands and arms, but also facial expressions, gaze, torso, etc.), and their interlocutor perceives the message through the visual channel. The SL linguistic system exploits these specific channels: a large amount of information is expressed simultaneously and organised spatially, and iconicity plays a central role. To date, SL have no standard writing or graphic system for transcription. They are still poorly described and under-resourced (very few reference works, limited signbanks, partial knowledge of grammar, few resources in general). Computer modelling of SL therefore requires designing representations with little available data, while pre-existing models, which are essentially linear, were developed for written or spoken languages and do not cover all aspects of SL.
Our research questions cover the following aspects: How can sign languages be analysed, represented and processed? How can we take into account the linguistic specificities linked to their visuo-gestural nature (multilinearity, spatialisation, iconicity)? What types of approach are possible with little French Sign Language (LSF) data?
Through national and international projects and collaborations, we produce linguistic resources and tackle problems of analysis, representation and processing of LSF in an interdisciplinary way, with points of view from several fields of computer science (NLP, signal processing, computer vision, computer graphics), as well as from the sciences of language, movement and perception.
The corpora produced or co-produced by LISN are accessible to public research on the Ortolang website.

Demos and science popularisation

  • Video that explains our research projects
  • Video showing the prototype of the French/LSF computer-assisted translation software developed as part of M. Kaczmarek's PhD
  • Quiz-LSF : a serious game to raise awareness of iconicity in the LSF lexicon
  • Quiz-photo : a serious game to raise awareness of iconicity in sign language discourse

PhDs

  • Ongoing
    • Modelling
      • 2021-2024 : Camille Challant, Formal representation and grammatical constraints for French sign language link. Supervisor M. Filhol. ED STIC de l'Université Paris-Saclay.
    • Animation, generation
      • 2021-2024 : Paritosh Sharma, Sign language synthesis using an AZee-based animation system with decreasing granularity link. Supervisor M. Filhol. ED STIC de l'Université Paris-Saclay.
    • Analysis, recognition, translation
      • 2021-2024 : Yanis Ouakrim (AMI, Gipsa-Lab), Recognition of continuous SL to create a gestural server link. Supervisors M. Gouiffès (AMI) and D. Beautemps (Gipsa-Lab), co-supervisors A. Braffort and T. Hueber (Gipsa-Lab), Grenoble-Alpes University.
      • 2023-2026 : Julie Halbout, Computer vision and natural language processing for the enrichment of French sign language resources, co-supervisors M. Gouiffès (AMI) and A. Braffort. Paris-Saclay University.
  • Recently defended (a video translation or interpretation in LSF is sometimes available)
    • 2023 : Hannah Bull, Learning Sign Language from Subtitles link. Video soon available
    • 2022 : Marion Kaczmarek, (in French) Spécification d'un logiciel de traduction assistée par ordinateur à destination des langues signées link. Video soon available
    • 2021 : Félix Bigand, Extracting human characteristics from motion using machine learning : the case of identity in Sign Language link. Video HERE
    • 2021 : Michael Filhol, (in French) HDR. Modélisation, traitement automatique et outillage logiciel des langues des signes link
    • 2020 : Valentin Belissen, From Sign Recognition to Automatic Sign Language Understanding : Addressing the Non-Conventionalized Units link. Video HERE

Projects and collaborations

  • Ongoing
    • 2021-2024 : Easier - European Horizon 2020 project
      • Coordination for LISN: M. Filhol.
      • The objective is to design, develop and validate a complete multilingual translation system for borderless communication between deaf and hearing individuals, as well as a platform to support the creation of sign language content.
      • People involved at LISN: 2 permanent staff (M. Filhol, A. Braffort), 1 visiting researcher (J. McDonald), 1 research engineer (T. Von Ascheberg)
    • 2020-2024 : Gestural Server - BPI France PSPC project.
      • Coordination for LISN: A. Braffort.
      • The objective of the Gestural Server project is to provide deaf sign language users with the equivalent of a voice server for hearing people, in partnership with the Gipsa-lab laboratory and the IVèS and 4DViews companies. LISN is involved in both the automatic recognition and the automatic generation parts.
      • Partners: the IVèS and 4DViews companies, and the LISN and Gipsa-Lab laboratories.
      • People involved at LISN: 4 permanent staff (A. Braffort, M. Filhol, M. Gouiffès, C. Verrecchia), 3 PhD students (Y. Ouakrim, P. Sharma, H. Bull) and 2 post-docs (C. Danet, E. Martinod).
    • Collaboration with the ASL Avatar project at DePaul University in Chicago
      • People involved at LISN: M. Filhol and A. Braffort
      • People involved at DePaul: J. McDonald and R. Wolfe
      • Studies on motion analysis and sign language synthesis: generation of 3D animations from the linguistic representations developed in the team, and motion analysis from the team's mocap corpora for modelling and synthesis.
  • Recently completed
    • 2018-2021 : Rosetta - PIA Major digital challenges project
      • Coordination for LISN: A. Braffort
      • Development of an AI-based automatic generator of multilingual subtitles for television programmes and internet video content for the deaf and hard of hearing, and of an automatic system for rendering French Sign Language (LSF) as an animation performed by a 3D signing avatar.
      • Partners: Systran, MocapLab and MFP (Multimédia France Productions/subsidiary of France Télévision), LISN and CHArt-LUTIN laboratories.
      • People involved at LISN: 5 permanent staff (A. Braffort, M. Filhol, M. Gouiffès, E. Prigent, F. Yvon), 3 PhD students (F. Bigand, F. Buet, M. Kaczmarek), 3 post-docs (V. Belissen, C. Danet, E. Martinod).