Project Scope

Sign Languages (SLs), the mother tongues of deaf people, are an important part of European and world cultural diversity. In Europe there are 30 official SLs and more than 750,000 SL users, but only about 12,000 interpreters. This shortage undermines the right to equal education and health services, and can even endanger the lives of deaf people.

DeepSignNet makes significant contributions to automated visual SL recognition (SLR). We address: (a) inference of the appropriate machine-learning model size, to limit the number of parameters to learn, and (b) integration of prior linguistic constraints and non-manual cues.

To this end, we will collaborate with a world-leading group, the Computational Bioimaging and Modeling (CBIM) Center of Rutgers University – New Jersey, which is one of the leaders in computer vision and has extensive know-how in SL recognition.

Project Innovation and Originality

Tools and algorithms to infer model structure, rather than selecting it via cross-validation.

We propose to infer the number of components needed to represent the input via nonparametric Bayesian methods.
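As an illustrative sketch of this idea (not the project's actual pipeline), a Dirichlet-process mixture lets the data determine how many components carry weight: one caps the component count generously and the nonparametric prior drives the weights of superfluous components toward zero. The example below uses scikit-learn's `BayesianGaussianMixture`; the data and thresholds are hypothetical.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D data drawn from 3 well-separated Gaussians
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(200, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

# Dirichlet-process mixture: cap at 10 components and let the
# nonparametric prior shrink the weights of unneeded ones
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1e-2,
    max_iter=500,
    random_state=0,
).fit(X)

# Count the components that actually carry weight; on this toy
# data the fit typically recovers the 3 true clusters
effective = int(np.sum(dpgmm.weights_ > 1e-2))
print(effective)
```

The same principle, applied to the hidden structure of a recognition model rather than a toy mixture, replaces a cross-validation sweep over model sizes with a single Bayesian fit.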

Integration of prior information to limit the need for training data.

These priors may reflect linguistic constraints (e.g., the hand shapes permitted at the beginning or end of a manual sign).

Development of an end-to-end system.

The input will be continuous signing. No artificial markers will be used and no hand-crafted feature engineering will be required. Recognition will be performed “in the wild.”
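End-to-end continuous recognition is commonly trained with alignment-free objectives such as CTC, where the network emits a score per video frame and the decoder collapses those frame-level outputs into a sign sequence. As a small illustration of that final step (the labels and scores are invented), greedy CTC decoding merges repeated predictions and drops the blank symbol:

```python
import numpy as np

BLANK = 0  # index reserved for the CTC "blank" label

def ctc_greedy_decode(frame_logits):
    """Collapse per-frame class scores into a label sequence:
    argmax per frame, merge consecutive repeats, drop blanks."""
    best = np.argmax(frame_logits, axis=1)
    decoded, prev = [], None
    for label in best:
        if label != prev and label != BLANK:
            decoded.append(int(label))
        prev = label
    return decoded

# Hypothetical per-frame scores over {blank, sign A=1, sign B=2}
logits = np.array([
    [0.10, 0.80, 0.10],   # sign A
    [0.10, 0.80, 0.10],   # sign A again (repeat, merged away)
    [0.90, 0.05, 0.05],   # blank separating the two signs
    [0.10, 0.10, 0.80],   # sign B
])
print(ctc_greedy_decode(logits))  # [1, 2]
```

Because the objective never requires frame-level annotations of where one sign ends and the next begins, the whole system can be trained directly on continuous, unsegmented video.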

This synergistic innovation will bring the partners to the forefront of the area, well beyond the present state of the art.