Justin Baker gives a keynote lecture, "Sensing Psychosis: Deep Phenotyping of Neuropsychiatric Disorders," at the Affective Computing and Intelligent Interaction (ACII) 2023 conference.
Traditional psychiatric care relies on subjective assessments, hindering progress in personalized treatments. However, pervasive computing offers unprecedented opportunities to develop dynamic models of mental illness by quantifying individual behavior over time and applying latent construct models. By transcending the precision-personalization dichotomy, we can revolutionize therapeutic discovery through unobtrusive, quantitative behavioral phenotyping. This presentation explores the integration of affective computing in severe mental illnesses such as depression, bipolar disorder, and schizophrenia. Affective computing enhances our understanding of illness fluctuations, contextual factors, and treatment interventions, enabling the identification of causal relationships and targeted interventions for specific neural circuits. By employing single-case experimental designs, we demonstrate the potential of affective computing to reshape psychiatric research and clinical practice. This technological integration paves the way for a closed-loop, personalized approach that optimizes care for individuals seeking treatment.
Louis-Philippe Morency gives a keynote lecture, "What is Multimodal?," at the Affective Computing and Intelligent Interaction (ACII) 2023 conference.
Our experience of the world is multimodal – we see objects, hear sounds, feel texture, smell odors, and taste flavors. In recent years, a broad and impactful body of research has emerged in artificial intelligence under the umbrella of multimodal, characterized by the use of multiple modalities. As we formalize a long-term research vision for multimodal research, it is important to reflect on its foundational principles and core technical challenges. What is multimodal? Answering this question is complicated by the multidisciplinary nature of the problem, which spans many domains and research fields. Two key principles have driven many multimodal innovations: the heterogeneity of, and the interconnections between, multiple modalities. Historical and recent progress will be synthesized in a research-oriented taxonomy centered around six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification. The talk will conclude with open questions and unsolved challenges essential for a long-term research vision in multimodal research.