The Third Facial Micro-Expressions Grand Challenge (MEGC): New Learning Methods for Spotting and Recognition

Download Call for Papers (pdf version).

Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously when a person experiences an emotion but attempts to suppress or repress the facial expression, most likely in a high-stakes environment. MEs are therefore very brief, generally lasting no more than 500 milliseconds (ms), and this short duration is the telltale sign that distinguishes them from ordinary facial expressions. Computational analysis and automation of micro-expression tasks is an emerging area of face research, with strong interest appearing as recently as 2014. Only recently has the availability of a few spontaneously induced facial micro-expression datasets provided the impetus to advance the field further from the computational aspect. Two facial macro- and micro-expression databases contain long videos: CAS(ME)², with 98 sequences at 30 fps, and the SAMM Long Videos dataset, with 147 sequences at 200 fps. While much research has been done on short videos, there have been few attempts to spot micro-expressions in long videos. This workshop is organized with the aim of promoting interaction between researchers and scholars within this niche area of research, as well as those from the broader areas of computer vision and psychology.


This workshop has two main agendas:

  1. To organize the Third Grand Challenge for facial micro-expression research: spotting macro- and micro-expressions in long videos.
  2. To solicit original works that address the variety of challenges in ME research, including but not limited to:
    • ME spotting/detection by using self-supervised learning
    • ME recognition by using self-supervised learning
    • ME feature representation and computational analysis
    • Unified ME spot-and-recognize schemes
    • Deep learning techniques for ME spotting and recognition
    • ME data analysis and synthesis
    • New datasets for MEs
    • Psychology of MEs

Important Dates

Submission deadline: 31 Jan 2020

Notification: 14 Feb 2020

Camera-ready: 28 Feb 2020

Submission

Submission website: https://cmt3.research.microsoft.com/MEGC2020

Workshop paper format should adhere to the paper submission guidelines for FG2020: https://fg2020.org/instructions-of-paper-submission-for-review/

Organisers

Su-Jing Wang

Chinese Academy of Sciences, China, wangsujing@psych.ac.cn

Moi Hoon Yap

Manchester Metropolitan University, UK, m.yap@mmu.ac.uk

John See

Multimedia University, Malaysia, johnsee@mmu.edu.my

Xiaopeng Hong

Xi’an Jiaotong University, China, hongxiaopeng@xjtu.edu.cn

Xiaobai Li

University of Oulu, Finland, Xiaobai.Li@oulu.fi


Program Chair

Jingting Li

Chinese Academy of Sciences, China, lijt@psych.ac.cn

Student Volunteer

Ying He

Chinese Academy of Sciences, China, heyingyouxiang@qq.com

Advisory Panel


Xiaolan Fu,
Chinese Academy of Sciences, China


Guoying Zhao,
University of Oulu, Finland


Keynote Speaker

Dr. Rama Chellappa
Title: Facial expression recognition under occlusion and alignment-free conditions


Program schedule

Download Workshop Full Program.
All the accepted papers are available on IEEE Computer Society Digital Library.
Date and Time: 16 November 2020 (Monday), whole-day session (Buenos Aires time)

Opening from the Chairs
SAMM and CAS micro-expression dataset updates
Spotting Challenge Papers
Invited Talk: Dr. Rama Chellappa
Break
Regular Papers
Awards and Summary from the Chairs
Panel discussion and brainstorming for the next steps
Closing

Program Committee:

Ruiping Wang, Institute of Computing Technology, Chinese Academy of Sciences, China
Wen-Jing Yan, JD Digits, China
Hongying Meng, Brunel University, UK
Zhen Cui, Nanjing University of Science and Technology, China
Sze Teng Liong, Feng Chia University, Taiwan
Adrian Keith Davison, University of Manchester, UK
Daniel Leightley, King’s College London, UK
Walied Merghani, Sudan University of Science and Technology, Sudan
Choon-Ching Ng, PRDCSG, Singapore
Tong Chen, Southwest University, China

Review