Human Activity Recognition has been researched in thousands of papers so far, with mobile and environmental sensors in the ubiquitous/pervasive computing domain, and with cameras in the computer vision domain. Human Behavior Analysis has also been explored for long-term health care, rehabilitation, emotion recognition, human interaction, and more. However, many research challenges remain in realistic settings, such as complex and ambiguous activities and behaviors, optimal sensor combinations, (deep) machine learning, data collection, platform systems, and killer applications.
In this conference, we bring these research domains together as ABC: Activity and Behavior Computing, and provide an open and confluent venue for discussing the various aspects and the future of ABC.
Imperial College London / UK and University of Augsburg / Germany
Multimodal Sentiment Analysis usually comes with massive cravings for labelled data. And this data is best digested by deep nets well prepared by expert chefs who know their job all too well. For each modality flavour, such as audio, text, video, or even physiology, a different recipe is best suited, and this is best topped by some fancy, well-fitting multimodal fusion layers or techniques. This makes one wonder if there is any short-cut to perfect Multimodal Sentiment Analysis, ideally saturating with little portions of data and preparable even by the domain layman. In other words: can we solve sentiment analysis with minimal labelling effort and some black-box AI that takes it all from there, even if the data is largely heterogeneous in terms of involved modalities? This talk invites the audience to explore such an avenue through the steps of self-learning representations; coupling analysis and synthesis of sentiments for data augmentation; and autonomous learning that is reinforced, cross-modal, and self-supervised at scale. It will be garnished with insights into recent challenges organised by the presenter, including MuSe and Interspeech ComParE. If all goes well, we shall soon arrive at instant multimodal sentiment analysis that fully satisfies.
Björn W. Schuller received his diploma, doctoral degree, habilitation, and Adjunct Teaching Professorship in Machine Intelligence and Signal Processing, all in EE/IT, from TUM in Munich/Germany. He is Full Professor of Artificial Intelligence and Head of GLAM - the Group on Language, Audio, & Music - at Imperial College London/UK, Full Professor and Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg/Germany, co-founding CEO and current CSO of audEERING - an Audio Intelligence company based near Munich and in Berlin/Germany, Guest Professor at Southeast University in Nanjing/China, and permanent Visiting Professor at HIT/China, amongst other professorships and affiliations. Previous positions include Full Professor at the University of Passau/Germany, Key Researcher at Joanneum Research in Graz/Austria, and researcher at CNRS-LIMSI in Orsay/France. He is a Fellow of the IEEE and Golden Core Awardee of the IEEE Computer Society, Fellow of the BCS, Fellow of the ISCA, President-Emeritus of the AAAC, and Senior Member of the ACM. He has (co-)authored 1,000+ publications (40k+ citations, h-index = 91), is Field Chief Editor of Frontiers in Digital Health, and was Editor-in-Chief of the IEEE Transactions on Affective Computing, amongst manifold further commitments and services to the community. His 30+ awards include having been honoured as one of 40 extraordinary scientists under the age of 40 by the WEF in 2015. First-in-the-field Affective Computing and Sentiment Analysis challenges such as AVEC, Interspeech ComParE, and MuSe have been initiated by him and by now organised more than 25 times overall. He is an ERC Starting and DFG Reinhart-Koselleck Grantee, and a consultant for companies such as Barclays, GN, Huawei, Informetis, and Samsung.
Osaka Prefecture University / Japan
Reading is a fundamental activity for learning languages. By “reading” a learner’s reading behavior, we can assess the learner’s level of knowledge and mental state. The result of this “reading” can be used to improve learners’ behavior by means of various actuators, i.e., methods of giving feedback. In general, effective actuators depend on the individual learner and can thus be found by analyzing the learning experience. In this talk, I introduce recent results of a project called “experiential supplements,” focusing on its application to language learning. By analyzing human experiences, we obtain pieces of information called experiential supplements, which make it easier to follow other learners’ successful learning experiences. As a result, learners can learn a language more efficiently and effectively.
Koichi Kise received B.E., M.E., and Ph.D. degrees in communication engineering from Osaka University, Osaka, Japan, in 1986, 1988, and 1991, respectively. From 2000 to 2001, he was a visiting researcher at the German Research Center for Artificial Intelligence (DFKI), Germany. He is now a professor in the Department of Computer Science and Intelligent Systems, Osaka Prefecture University, Japan. Together with Prof. Andreas Dengel of DFKI, he founded the Institute of Document Analysis and Knowledge Science (IDAKS) at Osaka Prefecture University in 2008 and now serves as its director. He has received best paper awards at three major international conferences in the field of document analysis: ICDAR (international conf. on document analysis and recognition, in 2007 and 2013), DAS (document analysis systems, in 2010), and ICFHR (international conf. on frontiers in handwriting recognition, in 2010). He was the chair of IAPR TC11 (reading systems, 2012-2016) and a member of the IAPR conferences and meetings committee. He has been an Editor-in-Chief of the International Journal of Document Analysis and Recognition. He has also served international conferences in roles including general chair of ICDAR2017, track chair of the document analysis track of ICPR (2012, 2018), and program co-chair of ICDAR2013, ICDAR2015, ACPR2013, and ACPR2015. His research interests are in the areas of document analysis, human behavior analysis, and learning augmentation.
The paper abstracts are available from HERE.
Full paper PDFs are available to registered participants and will be published by Springer soon.
You can import the program into your calendar, in your timezone, using the [+] below.
Please register using the form below.