Seven workshops will be organized at FG’24. They will be held on either May 27 or May 31, 2024, at the same venue as the FG 2024 main conference (the exact program will be announced closer to the conference). Workshop papers will be published with the main conference proceedings. Please visit the individual workshop pages for submission dates and links.

Fourth Workshop on Applied Multimodal Affect Recognition (AMAR)

Organizers: Shaun Canavan (University of South Florida), Tempestt Neal (University of South Florida), Marvin Andujar (University of South Florida), Saurabh Hinduja (University of Pittsburgh), Lijun Yin (State University of New York at Binghamton)

Abstract: Novel applications of affective computing have emerged in recent years in domains ranging from health care to 5G mobile networks. Many of these applications achieve improved emotion classification performance by fusing multiple sources of data (e.g., audio, video, brain activity, face, thermal, physiological, environmental, positional, and text). Multimodal affect recognition has the potential to revolutionize the way various industries and sectors use information about a person’s emotional state, particularly given the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile device cameras). Multimodal classification methods have proven highly effective at minimizing misclassification error in practice and under dynamic conditions. Further, multimodal classification models tend to be more stable over time than those relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition. To continue the field’s lab-to-practice trend and encourage new applications of affective computing, this workshop will provide a forum for researchers to exchange ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices. The workshop will also address the ethical use of novel affective computing applications in real-world scenarios, discussing topics including, but not limited to, privacy, manipulation of users, and public fears and misconceptions regarding affective computing.

Website: https://cse.usf.edu/~tjneal/AMAR2024

SkatingVerse: Segmentation and Assessment of Continuous Video in Figure Skating Competition and the 1st SkatingVerse Workshop & Challenge

Organizers: Jian Zhao (Institute of North Electronic Equipment), Lei Jin (Beijing University of Posts and Telecommunications), Zheng Zhu (Tsinghua University), Yinglei Teng (Beijing University of Posts and Telecommunications), Jiaojiao Zhao (University of Amsterdam), Sadaf Gulshad (University of Amsterdam), Zheng Wang (Wuhan University), Bo Zhao (Bank of Montreal), Xiangbo Shu (Nanjing University of Science and Technology), Xuecheng Nie (Meitu Inc.), Xiaojie Jin (Bytedance Inc. USA), Xiaodan Liang (Sun Yat-sen University), Yunchao Wei (UTS), Jianshu Li (Ant Group), Shin’ichi Satoh (National Institute of Informatics), Yandong Guo (AI^2 Robotics), Cewu Lu (Shanghai Jiao Tong University), Junliang Xing (Tsinghua University), Shen Jane (Pensees Technology)

Abstract: Human action understanding in computer vision focuses on locating, classifying, and assessing human actions in videos. However, current tasks fall short of practical needs such as fine-grained action segmentation and assessment. To address this, we construct a dataset comprising 1,687 continuous videos from figure skating competitions, encouraging the development of algorithms that can accurately analyze each action. We chose figure skating because of its difficulty, its challenging actions, and the availability of fine-grained labels. This workshop encourages participants to submit contributions, surveys, and case studies that address human action perception and understanding problems.

Website: https://skatingverse.github.io/

2nd Workshop on Learning with Few or without Annotated Face, Body and Gesture Data (LFA-FG2024)

Organizers: Maxime Devanne (Université de Haute-Alsace), Mohamed Daoudi (IMT Nord Europe/CRIStAL (UMR 9189)), Germain Forestier (Université de Haute-Alsace), Jonathan Weber (Université de Haute-Alsace), Stefano Berretti (University of Florence)

Abstract:

For more than a decade, Deep Learning has been successfully employed for vision-based face, body and gesture analysis, at both static and dynamic granularities. This is largely due to the development of effective deep architectures and the release of sizeable datasets.

However, one of the main limitations of Deep Learning is that it requires large-scale annotated datasets to train efficient models. Gathering such face, body or gesture data and annotating it can be time-consuming and laborious. This is particularly true in areas that require domain experts, such as medicine, where crowdsourcing may not be suitable.

In addition, currently available face and/or gesture datasets cover a limited set of categories, which makes adapting trained models to novel categories far from straightforward. Finally, while most available datasets focus on classification problems with discretized labels, many scenarios require continuous annotations, which significantly complicates the annotation process.

The goal of this 2nd edition of the workshop is to explore approaches that overcome these limitations: learning from few annotated samples, transferring knowledge from similar domains or problems, generating new data, or drawing on the community to gather novel large-scale annotated datasets.

Website: https://sites.google.com/view/lfa-fg2024/home 

Advancements in Facial Expression Analysis and Synthesis: Past, Present, and Future

Organizers: Itir Onal Ertugrul (Utrecht University), Laszlo A Jeni (Carnegie Mellon University)

Abstract: This workshop aims to bring together computer scientists, psychologists, and behavioral scientists working on the automated analysis and synthesis of facial expressions and their application in domains including the assessment of pain, mental health, personality, and emotion. Through invited talks by distinguished researchers in the field, we aim to shed light on the past, present, and future of face analysis and synthesis. The workshop will conclude with a dynamic panel discussion, featuring interdisciplinary researchers and their valuable insights into the multidimensional aspects of facial expression analysis and synthesis.

Website: https://sites.google.com/view/afeas-24/home 

The Second Workshop on Privacy-aware and Acceptable Video-based Assistive Technologies

Organizers: Sara Colantonio (Institute of Information Science and Technologies of the National Research Council of Italy), Francisco Flórez-Revuelta (University of Alicante), Martin Kampel (Vienna University of Technology, Computer Vision Lab)

Abstract: The quest for responsible research is a cornerstone of an ethical, legal, and socially aware approach to the development of assistive technologies. As technology advances, driven by rapid innovation in modern information and communication technologies, it penetrates private domains and interacts with personal, private, and intimate activities. Any technology development must therefore be carefully designed and balanced against societal, cultural, and individual values and norms.

Assistive technologies based on computer vision, multimedia data processing and understanding, and machine intelligence offer several advantages in terms of unobtrusiveness and information richness. Camera sensors are far less obtrusive than wearable sensors, which may hinder people’s activities. Video-based applications are currently effective at recognising and monitoring facial expressions, activities, movements, and the overall condition of assisted individuals, as well as at assessing their vital parameters (e.g., heart rate, respiratory rate). However, cameras are often perceived as the most privacy-intrusive technology by the monitored individuals, owing to the richness of the information they convey and the intimate settings in which they may be deployed. Solutions that preserve privacy by context and by design, and that meet high legal and ethical standards, are therefore in high demand.

This workshop aims to create a forum for contributions presenting and discussing image- and video-based applications for active assisted living (AAL), as well as initiatives proposing ethical and privacy-aware solutions.

The workshop is supported by the visuAAL Marie Skłodowska-Curie Innovative Training Network and the GoodBrother COST Action, both of which aim to bridge the gap between users’ requirements and the safe and secure use of video-based AAL.

Website: https://goodbrother.eu/conferences/privaal2024/

Synthetic Data for Face and Gesture Analysis

Organizers: Deepak Kumar Jain (Dalian University of Technology), Pourya Shamsolmoali (East China Normal University), Fadi Boutros (Fraunhofer IGD), Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD and TU Darmstadt), Vitomir Struc (University of Ljubljana)

Abstract: Recent advancements in generative models within computer vision and artificial intelligence have revolutionized the way researchers approach data-driven tasks. The advent of sophisticated generative models, such as generative adversarial networks (GANs), variational autoencoders (VAEs) and, more recently, diffusion models, has empowered practitioners to create synthetic data that closely mirrors real-world scenarios. These models enable the generation of high-fidelity images and sequences, laying the foundation for groundbreaking applications in face and gesture analysis. Their significance lies in their ability to produce remarkably realistic synthetic data, thereby mitigating the challenges associated with data scarcity and privacy concerns. As a result, the use of synthetic data has become increasingly prevalent across research domains, offering a versatile and ethical alternative for training and testing machine learning algorithms. This workshop delves into the diverse applications of synthetic data in face and gesture analysis. Participants will explore how synthetic datasets have been instrumental in training facial recognition systems, enhancing emotion detection models, and refining gesture recognition algorithms. The workshop will showcase exemplary use cases where the integration of synthetic data has not only overcome data limitations but also fostered the development of more robust and accurate models.

Website: https://sites.google.com/view/sd-fga2024/ 

First International Workshop on Responsible Face Image Processing (ReFIP 2024)

Organizers: Andrea Atzori (University of Cagliari), Fadi Boutros (Fraunhofer IGD), Lucia Cascone (University of Salerno), Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD and TU Darmstadt), Mirko Marras (University of Cagliari), Ruben Tolosana (Universidad Autónoma de Madrid), Ruben Vera-Rodriguez (Universidad Autónoma de Madrid)

Abstract: Given the pervasive influence of facial image processing systems in our daily lives, the consideration of ethical dimensions beyond mere accuracy is increasingly important in both industry and academia. Despite this attention, crucial aspects such as fairness, accountability, transparency, and privacy remain under-explored in facial image processing. To deepen the understanding of these aspects, our workshop on responsible face image processing (ReFIP) aims to gather high-quality, impactful, and original research in this emerging field, providing a shared platform for researchers and practitioners. The workshop seeks to go beyond domain-generic studies in the literature, fostering a deeper understanding of the ethical aspects of facial image processing and stimulating lively community exchanges.

Website: https://responsiblefaceimageprocessing.github.io/fg2024/