Seven workshops will be organized at FG’24. Details of each workshop can be found below:

Synthetic Data for Face and Gesture Analysis

Organizers: Deepak Kumar Jain (Dalian University of Technology), Pourya Shamsolmoali (East China Normal University), Fadi Boutros (Fraunhofer IGD), Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD and TU Darmstadt), Vitomir Struc (University of Ljubljana)

Abstract: Recent advancements in generative models within the realms of computer vision and artificial intelligence have revolutionized the way researchers approach data-driven tasks. The advent of sophisticated generative models, such as generative adversarial networks (GANs), variational autoencoders (VAEs) and, more recently, diffusion models, has empowered practitioners to create synthetic data that closely mirrors real-world scenarios. These models enable the generation of high-fidelity images and sequences, laying the foundation for groundbreaking applications in face and gesture analysis. The significance of these generative models lies in their ability to produce synthetic data that is remarkably realistic, thereby mitigating challenges associated with data scarcity and privacy concerns. As a result, the utilization of synthetic data has become increasingly prevalent in various research domains, offering a versatile and ethical alternative for training and testing machine learning algorithms. This workshop aims to delve into the diverse applications of synthetic data in the realm of face and gesture analysis. Participants will explore how synthetic datasets have been instrumental in training facial recognition systems, enhancing emotion detection models, and refining gesture recognition algorithms. The workshop will showcase exemplary use cases where the integration of synthetic data has not only overcome data limitations but has also fostered the development of more robust and accurate models.


Program (May 27 Morning, Room 2):

9:00 – 9:05: Opening Session

9:05 – 10:00: Keynote talk: Prof. Rama Chellappa

10:00 – 10:45: Session 1 – Applications of Synthetic Data

A Study of Video-based Human Representation for American Sign Language Alphabet Generation; Fei Xu; Lipisha Chaudhary; Lu Dong; Srirangaraj Setlur; Venu Govindaraju; Ifeoma Nwogu

Training Against Disguises: Addressing and Mitigating Bias in Facial Emotion Recognition with Synthetic Data; Aadith Sukumar; Aditya Desai; Peeyush Singhal; Sai Gokhale; Deepak Kumar Jain; Rahee Walambe; Ketan V Kotecha

DiCTI: Diffusion-based Clothing Designer via Text-guided Input; Ajda Lampe; Julija Stopar; Deepak Kumar Jain; Shinichiro Omachi; Peter Peer; Vitomir Štruc

10:45 – 11:00: Coffee Break

11:00 – 12:15: Session 2 – Generation and Detection of Synthetic Data

Towards Inclusive Face Recognition Through Synthetic Ethnicity Alteration; Praveen Kumar Chandaliya; Kiran Raja; Raghavendra Ramachandra; Zahid Akhtar; Christoph Busch

Massively Annotated Datasets for Assessment of Synthetic and Real Data in Face Recognition; Pedro C. Neto; Rafael M Mamede; Carolina Albuquerque; Tiago FS Gonçalves; Ana F. Sequeira

Analyzing the Feature Extractor Networks for Face Image Synthesis; Erdi Sarıtaş; Hazim Kemal Ekenel

INDIFACE: Illuminating India’s Deepfake Landscape with a Comprehensive Dataset; Kartik Kuckreja; Ximi Hoque; Nishit Nilesh Poddar; Shukesh G Reddy; Abhinav Dhall; Abhijit Das

Real, fake and synthetic faces – does the coin have three sides?; Shahzeb Naeem; Ramzi Al-Sharawi; Muhammad Riyyan Khan; Usman Tariq; Abhinav Dhall; Hasan Al-Nashash

12:15 – 12:20: Closing Session

Advancements in Facial Expression Analysis and Synthesis: Past, Present, and Future

Organizers: Itir Onal Ertugrul (Utrecht University), Laszlo A Jeni (Carnegie Mellon University)

Abstract: This workshop aims to bring together computer scientists, psychologists and behavioral scientists who have been working on the automated analysis and synthesis of facial expressions and their application in several domains, including the assessment of pain, mental health, personality, and emotion, among others. With invited talks by distinguished researchers in the field, we aim to shed light on the past, present, and future of face analysis and synthesis. The workshop will conclude with a dynamic panel discussion, featuring interdisciplinary researchers and their valuable insights into the multidimensional aspects of facial expression analysis and synthesis.


Program (May 27 Afternoon, Room 2):

14:00 – 15:30 Sessions

15:30 – 16:00 Coffee break

16:00 – 18:00 Sessions

First International Workshop on Responsible Face Image Processing (ReFIP 2024)

Organizers: Andrea Atzori (University of Cagliari), Fadi Boutros (Fraunhofer IGD), Lucia Cascone (University of Salerno), Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD and TU Darmstadt), Mirko Marras (University of Cagliari), Ruben Tolosana (Universidad Autónoma de Madrid), Ruben Vera-Rodriguez (Universidad Autónoma de Madrid)

Abstract: The consideration of ethical dimensions beyond mere accuracy is increasingly important in both industrial and academic spheres, given the pervasive influence of facial image processing systems in our daily lives. Despite this attention, crucial aspects such as fairness, accountability, transparency, and privacy remain under-explored in the domain of facial image processing. To gain a better understanding of these aspects, our workshop on responsible face image processing (ReFIP) aims to gather high-quality, impactful, and original research in this emerging field, providing a shared platform for researchers and practitioners. This workshop seeks to go beyond domain-generic studies in the literature, fostering a deeper understanding of the ethical aspects of facial image processing and generating vivid exchanges within the community.


Program (May 31 Morning, Room 1):

09:00 – 10:30


10:30 – 11:00

Coffee break

11:00 – 13:30


2nd Workshop on Learning with Few or without Annotated Face, Body and Gesture Data (LFA-FG2024)

Organizers: Maxime Devanne (Université de Haute Alsace), Mohamed Daoudi (IMT Nord Europe/CRIStAL (UMR 9189)), Germain Forestier (University of Haute Alsace), Jonathan Weber (University of Haute Alsace), Stefano Berretti (University of Florence, Italy)

Abstract: For more than a decade, deep learning has been successfully employed for vision-based face, body and gesture analysis, at both static and dynamic granularities. This is largely due to the development of effective deep architectures and the release of sizeable datasets.

However, one of the main limitations of deep learning is that it requires large-scale annotated datasets to train efficient models. Gathering such face, body or gesture data and annotating it can be time-consuming and laborious. This is particularly true in areas where domain experts are required, such as the medical domain, where crowdsourcing may not be suitable.

In addition, currently available face and/or gesture datasets cover a limited set of categories, which makes adapting trained models to novel categories far from straightforward. Finally, while most of the available datasets focus on classification problems with discretized labels, many scenarios require continuous annotations, which significantly complicates the annotation process.

The goal of this 2nd edition of the workshop is to explore approaches that overcome such limitations by investigating ways to learn from few annotated data, to transfer knowledge from similar domains or problems, to generate new data, or to benefit from the community to gather novel large-scale annotated datasets.


Program (May 31 Afternoon, Room 1):

14:00 – 14:10

Opening session

14:10 – 14:30

Gait Recognition from Highly Compressed Videos

Andrei Niculae, Andy Catruna, Adrian Cosma, Daniel Rosner, Emilian Radoi

14:30 – 14:50

Aligning Actions and Walking to LLM-Generated Textual Descriptions

Radu Chivereanu, Adrian Cosma, Andy Catruna, Razvan Rughinis, Emilian Radoi

14:50 – 15:10

Exploring Radar Capabilities to Support Gesture-Based Interaction in Smart Environments

Gonçalo Aguiar, Ana P. Rocha, Samuel Silva, António Teixeira

15:10 – 15:30

Interactive Visualization and Dexterity Analysis of Human Movement: AIMove Platform

Brenda Elizabeth Olivas Padilla, Sotiris Manitsaris, Alina Glushkova

15:30 – 16:00

Coffee break

16:00 – 16:20

ENTIRe-ID: An Extensive and Diverse Dataset for Person Re-Identification

Serdar Yıldız, Ahmet Nezih Kasim

16:20 – 16:40

IMEmo: An Interpersonal Relation Multi-Emotion Dataset

Hajer Guerdelli, Claudio Ferrari, Stefano Berretti, Alberto Del Bimbo

16:40 – 17:00

Self-supervised Variational Contrastive Learning with Applications to Face Understanding

Mehmet Can Yavuz, Berrin Yanikoglu


Closing session

The Second Workshop on Privacy-aware and Acceptable Video-based Assistive Technologies

Organizers: Sara Colantonio (Institute of Information Science and Technologies of the National Research Council of Italy), Francisco Flórez-Revuelta (University of Alicante), Martin Kampel (Vienna University of Technology, Computer Vision Lab)

Abstract: The quest for responsible research is a cornerstone of an ethically, legally and socially aware approach to the development of assistive technologies. As technology advances – driven by rapidly evolving innovations in modern information and communication technologies – it penetrates private domains and interacts with personal, private, and intimate activities. Any technology development must therefore be carefully designed and balanced against societal, cultural and individual values and norms.

Assistive technologies based on computer vision, multimedia data processing and understanding, and machine intelligence offer several advantages in terms of unobtrusiveness and information richness. Indeed, camera sensors are far less obtrusive than the wearable sensors that may hinder people’s activities. Currently, video-based applications are effective in recognising and monitoring facial expressions, activities, movements, and overall conditions of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). However, cameras are often perceived as the most intrusive technology from the viewpoint of the privacy of the monitored individuals, owing to the richness of the information they convey and the intimate settings in which they may be deployed. Therefore, solutions that ensure privacy preservation by context and by design, and that meet high legal and ethical standards, are in high demand.

This workshop aims to create a forum for contributions presenting and discussing image- and video-based applications for active assisted living as well as initiatives proposing ethical and privacy-aware solutions.

The workshop is supported by the visuAAL Marie Skłodowska-Curie Innovative Training Network and the GoodBrother COST Action, which aims to bridge the gap between users’ requirements and the safe and secure use of video-based AAL.


Program (May 31 Morning, Room 2):

09:00 – 10:30


10:30 – 11:00

Coffee break

11:00 – 13:30


SkatingVerse: Segmentation and Assessment of Continuous Video in Figure Skating Competition and the 1st SkatingVerse Workshop & Challenge

Organizers: Jian Zhao (Institute of North Electronic Equipment), Lei Jin (Beijing University of Posts and Telecommunications), Zheng Zhu (Tsinghua University), Yinglei Teng (Beijing University of Posts and Telecommunications), Jiaojiao Zhao (University of Amsterdam), Sadaf Gulshad (University of Amsterdam), Zheng Wang (Wuhan University), Bo Zhao (Bank of Montreal), Xiangbo Shu (Nanjing University of Science and Technology), Xuecheng Nie (Meitu Inc.), Xiaojie Jin (Bytedance Inc. USA), Xiaodan Liang (Sun Yat-sen University), Yunchao Wei (UTS), Jianshu Li (Ant Group), Shin’ichi Satoh (National Institute of Informatics), Yandong Guo (AI^2 Robotics), Cewu Lu (Shanghai Jiao Tong University), Junliang Xing (Tsinghua University), Shen Jane (Pensees Technology)

Abstract: Human action understanding in computer vision focuses on locating, classifying, and assessing human actions in videos. However, current tasks are inadequate for practical applications such as fine-grained action segmentation and assessment. To address this, we constructed a dataset comprising 1,687 continuous videos from figure skating competitions, encouraging the development of algorithms that can accurately analyze each action. We chose figure skating because of its difficulty, the presence of challenging actions, and the availability of fine-grained labels. This workshop encourages participants to submit contributions, surveys, and case studies that address human action perception and understanding problems.


Program (May 31 Afternoon, Room 2):

14:00 – 15:30


15:30 – 16:00

Coffee break

16:00 – 18:00


Fourth Workshop on Applied Multimodal Affect Recognition (AMAR)

Organizers: Shaun Canavan (University of South Florida), Tempestt Neal (USF), Marvin Andujar (University of South Florida), Saurabh Hinduja (University of Pittsburgh), Lijun Yin (State University of New York at Binghamton)

Abstract: Novel applications of affective computing have emerged in recent years in domains ranging from health care to the 5th-generation mobile network. Many of these have found improved emotion classification performance when fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, text, etc.). Multimodal affect recognition has the potential to revolutionize the way various industries and sectors utilize information gained from recognizing a person’s emotional state, particularly considering the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile device cameras). Multimodal classification methods have proven highly effective at minimizing misclassification error in practice and in dynamic conditions. Further, multimodal classification models tend to be more stable over time than those relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition. To continue the field’s trend from lab to practice and encourage new applications of affective computing, this workshop will provide a forum for researchers to exchange ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices. The workshop will also address the ethical use of novel applications of affective computing in real-world scenarios, discussing topics including, but not limited to, privacy, manipulation of users, and public fears and misconceptions regarding affective computing.


Program (May 31 Morning, Room 3):

09:15 – 9:30

Welcome and Opening Remarks from Organizers

9:30 – 10:30


10:30 – 10:45

Workshop Paper Presentation

10:45 – 11:00

Coffee Break

11:00 – 11:15

Workshop Paper Presentation

11:15 – 12:15


12:15 – 12:30

Closing Remarks