Xiang ‘Anthony’ Chen

Registration link: Register Here

Date: 16 July 2024, Tuesday

Time: 11:00am – 12:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2), Level 4, Seminar Room 4-2

Talk title: No More UI Wrappers! Rethinking HCI's Role in the Era of Generative AI

About the talk: There is a recent surge of HCI work that feels like derivatives of popular generative AI, such as adding UI wrappers to off-the-shelf LLMs and text-to-image models. In this talk, I draw on my own research experience to rethink HCI's role in the era of generative AI. Specifically, I argue that HCI should challenge itself with three important sets of problems. First, while generative AI (e.g., ChatGPT) performs fairly well in most general use cases, it remains unclear whether and how it can support experts' domain-specific workflows, such as drug discovery. Second, one fundamental problem of current generative AI interfaces is the limitation of expressing and interpreting human intent in language alone. Third, as generative AI becomes increasingly capable, HCI should contribute to the long-standing problem of value alignment---specifically, how to enhance human autonomy and AI fairness.

About the speaker: Xiang ‘Anthony’ Chen is an Associate Professor in UCLA's Departments of Electrical & Computer Engineering and Computer Science. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University. Anthony's area of expertise is Human-Computer Interaction (HCI). His research takes a human-centered approach to design, build, and study interactive AI systems that align with human values, assimilate human intents, and augment human abilities, supported by an NSF CAREER Award, an ONR YIP Award, a Google Research Scholar Award, an Intel Rising Star Award, a Hellman Fellowship, an NSF CRII Award, and an Adobe Ph.D. Fellowship. Anthony's work has resulted in 60+ publications, with three best paper awards and four honorable mentions at top-tier HCI conferences.


Yuhan Luo

Registration link: Register Here

Date: 8 July 2024, Monday

Time: 1:30pm – 2:30pm (Tentative)

Venue: School of Computing & Information Systems 2 (SCIS 2), Level 3, Seminar Room 3-2

Talk title: Scaling up Personalized Health Support in Everyday Life

About the talk: Designing effective health support tools, such as food recommendation systems, fitness planners, and mental health companions, warrants a holistic understanding of individuals' situations, encompassing their daily activities, health conditions, lifestyles, and social environments. However, achieving this level of personalization is challenging. In part, this is because collecting and analyzing various kinds of personal health data is resource-consuming. Moreover, individuals without domain expertise often face challenges in interpreting these data and understanding their implications, not to mention making informed decisions to improve their health. The recent surge of Large Language Models (LLMs), with their remarkable text and image generation abilities, holds promise for overcoming these barriers. By gathering various health-related data from users, LLMs can generate relevant health summaries and recommendations without extensive additional training. Despite this promise, it is unclear how researchers and developers can make the best use of LLMs for personalized health support, given emerging concerns about privacy, transparency, and hallucination. In this project, we bring together experts in computer science and human-computer interaction with health professionals to scale up personalized health support leveraging LLMs in several health contexts: diet, exercise, and mental health support. Specifically, we aim to (1) give users the agency to customize their ideal health companion, (2) evaluate the effectiveness of LLM-powered health support systems through field studies, and (3) address emerging issues with LLM-generated content regarding hallucination and transparency.

About the speaker: Yuhan's research seeks to enhance individuals' everyday health and well-being by unlocking the potential of ubiquitous computing technologies. She builds multimodal systems (e.g., speech interfaces, chatbots) to support self-tracking, designs interventions to encourage healthy behaviors, and explores opportunities for utilizing personal data in healthcare contexts. Yuhan received her Ph.D. in Information Studies from the University of Maryland in 2022, her MS in Information Science and Technology from Penn State in 2017, and her BEng in Computer Science from Southeast University in 2015. She was a UX research intern at Meta in 2020 and Google in 2019.


Xian Xu

Date: 17 May 2024

Time: 2pm - 3pm

Venue: School of Economics/School of Computing & Information Systems 1, Level 4, Seminar Room 4-4

Talk title: Data-Driven Storytelling: Cinematic Guidelines for Data Videos

About the talk: Storytelling is a skill that humans have developed throughout their evolutionary journey to meet the needs of communication. Crafting a compelling data story demands more attention and study in the big data era. As an emerging form of data storytelling, data videos combine the art of storytelling with cinematic audio-visual elements in a time-based narrative medium that effectively delivers data insights to the general public. However, little is known about how to create an attractive and impressive data video.

This talk presents how interdisciplinary study can integrate cinematic arts and data visualization to help create cinematic data videos. The aim is to assist data designers in conveying data insights to the general public in an efficient and intuitive manner. I hope that this interdisciplinary study will inspire more engaging and effective storytelling techniques for data videos, as well as for other storytelling mediums.

About the speaker: Dr. Xian Xu is a Research Assistant Professor in the Division of Emerging Interdisciplinary Areas (EMIA) at The Hong Kong University of Science and Technology (HKUST). She received her PhD through the Individualized Interdisciplinary Program (Computational Media and Arts) at the HKUST VisLab. She received her MFA in Studies of Drama (Film and TV) and BA in Directing (Film and TV Producing) from the Central Academy of Drama, China. She is a five-time national scholarship recipient and was named an outstanding graduate student of Beijing. She completed research visits at the University of Oxford and the University of Cambridge, and is currently an associate in Cambridge Digital Humanities at the University of Cambridge. Her research interests and artworks mainly focus on the interdisciplinary study of data-driven storytelling, cinematic arts, education in VR and the Metaverse, and human-computer interaction. Her work has been published in ACM CHI, ACM Multimedia, ACM WWW, IEEE VIS, and IEEE VR.


Johannes Schöning

Registration Link: Register here

Date: 9 Feb 2024

Time: 12:00-1:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2), Level 4, Seminar Room 4-1

Talk title: The Importance of Human-Computer Interaction Perspectives for Next-Generation Spatial User Interfaces: Why Homer Simpson is right!

About the talk: Catastrophic incidents associated with GPS devices and other personal navigation technologies are all too common: a tourist drives his rental car across a beach and directly into the Atlantic Ocean; a person in Belgium intending to drive to a nearby train station ends up in Croatia; a family travelling on a dirt road gets stranded for four days in the Australian outback. Often, we blame these accidents on human error. Still, as HCI researchers, we have a deep understanding that humans make mistakes, and it is our responsibility to analyze the failures and improve the technical design to minimize the chances of human error. In my talk, I give an overview of how we design, develop, and evaluate the next generation of such spatial user interfaces through the lens of human-computer interaction (HCI). I will outline our approaches to helping people navigate, perceive, and interact with space.

About the speaker: Johannes Schöning is a professor of computer science at the University of St. Gallen, Switzerland. His group's research aims to empower individuals and communities with the information they need to make better data-driven decisions, by developing novel user interfaces together with them. The group seeks a deeper understanding of the interplay between rapidly advancing technologies and how digital interfaces can empower users in their rich set of activities, focusing on a broad range of use cases from geographic information science, public health, and medical contexts to extreme conditions such as space missions. The group works in interdisciplinary teams to create novel insights, using rigorous methods from AI, computer graphics, and cognitive psychology, and committing to both theoretical and practice-based inquiry. He has a particular interest in the application of user-centred design methodologies as well as mixed-methods approaches.


Scott Bateman

Registration Link: Register here

Date: 13 Dec 2023

Time: 10:00am - 11:00am

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2), Level 4, Seminar Room 4-3

Talk title: Shaping Human Performance for a Better (HCI) Future: Explorations in Games, Visualization, Coding, and Mixed Reality

About the talk: We use tools every day: to make things easier, to make things more fun, to help us remember, and to help us achieve what would not otherwise be possible. The fact that technology can help us achieve new things is not particularly insightful. However, the idea that we are still finding new ways to display and interact with abstract information, that interactive computer systems continue to fuel much of our research innovation, and that interactive technology evolves what it means to be human, is fascinating. In this talk, Scott Bateman will survey his research over the past 15 years, describing his interest in understanding the many ways in which interactive systems influence our ability to work, play, and live our lives. He will also consider how emerging mixed reality technology presents a double-edged sword. On the one hand, it exposes countless new ways to expand the human experience in incredibly engaging ways. However, in the near future, where imagination is the only limit on what we can do in MR, HCI researchers need to take action to understand the impact this might have on us and our ability to perform in all aspects of our lives. We have the opportunity (and responsibility) to help define how the human experience will evolve for the better.

About the speaker: Scott Bateman is an Associate Professor of Computer Science at the University of New Brunswick on Canada's Atlantic coast, Director of the Spectral Spatial Computing Research Centre, and founding co-director of the Human-Computer Interaction Lab. He holds a PhD in Computer Science from the University of Saskatchewan. His work in Human-Computer Interaction is motivated by understanding how the design of technology can influence and affect our ability to perform at our best. This work has spanned computer-supported cooperative work, games and play, health technologies, and most recently mixed reality technology.


Toby Jia-Jun Li

Date: 11 Dec 2023, Monday

Time: 2pm-3pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2), Level 4, Meeting Room 4-2

Talk title: Beyond “Thin Wrappers” in Human-AI Co-Creation

About the talk: Recent advances in AI and ML, particularly large language models (LLMs), have paved the way for automation and user assistance in numerous creativity tasks. However, existing interfaces in those domains are largely “thin wrappers” of LLMs that struggle with user intent ambiguities, objective uncertainties, and evolving user goals that change with task progression. This talk explores effective strategies to facilitate human-AI collaboration in these challenging contexts. Using case studies from argumentative writing planning, ideation support, and qualitative text analysis, I will outline the methodologies and interaction strategies we’ve adopted to mitigate these issues. A focal point of the discussion will be the development of intermediate representations of tasks and corresponding data, promoting interactive clarification, disambiguation, and collaborative planning.

About the speaker: Toby Jia-Jun Li is an Assistant Professor in the Department of Computer Science and Engineering at the University of Notre Dame. He directs the SaNDwich Lab, where he and his students use human-centered methods to design, build, and study interactive systems that empower individuals to create, configure, and extend AI-powered computing systems. His recent work seeks to address the societal challenges in the future of work through a bottom-up human-AI collaborative approach that helps individual workers automate and augment their tasks with AI systems. He has published at premier academic venues across HCI, NLP, and systems (e.g., CHI, UIST, CSCW, ACL, EMNLP, MobiSys, VL/HCC) and has won four best-paper-type awards at these venues. He received a Ph.D. in Human-Computer Interaction from Carnegie Mellon University in 2021 and a B.S. in Computer Science from the University of Minnesota in 2015. His work has been supported by NSF, the Google Research Scholar Program, the AnalytiXIN Initiative, Yahoo! through the InMind project, and J.P. Morgan.


Janghee Cho

Registration Link: Register here

Date: 8 Dec 2023

Time: 11:00am - 12:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2) Level 4, Seminar Room 4-2

Talk title: Design for Sustainable Life in the Work-From-Home Era

About the talk: Navigating the complexities of the contemporary human experience is precarious, marked by latent but pervasive anxiety and uncertainty. In this talk, I draw on a reflective design approach that emphasizes the value of human agency and meaning-making processes to discuss design implications for technologies that could help people (re)establish a sense of normalcy in their everyday lives. Specifically, the focus centers on recent projects that investigate the role of data-driven technology in addressing well-being issues within remote and hybrid work settings, where individuals grapple with blurred boundaries between home and work.

About the speaker: Janghee Cho is an Assistant Professor in the Division of Industrial Design at National University of Singapore (NUS). As an HCI researcher, he focuses on wellbeing, the future of work, and reflective design. His research explores the role of digital technologies in promoting sustainable living, drawing on methods and theories from design and social sciences. Before joining NUS, he earned his PhD from the Department of Information Science at the University of Colorado Boulder in the US.


Ting-Hao (Kenneth) Huang

Registration Link: Register here

Date: 7 Dec, 2023

Time: 3:00pm-4:00pm (Tentative)

Venue: School of Computing & Information Systems 1 (SCIS 1), Level 3, Seminar Room 3-1

Talk title: Help People Write Papers and Stories in the Era of Large Language Models

About the talk: Writing stands at the center of human communication and creativity, and it is not easy: academics struggle to make their papers clear and effective, while novelists encounter writer's block. This talk covers two lines of research conducted at Penn State's Crowd-AI Lab, focusing on helping people write academic papers, particularly figure captions, and short stories. In our first set of projects, we introduced SciCap, the first large-scale dataset of real-world scientific figure captions from scholarly articles. Following this, we developed methods to automatically generate and assess captions for these scientific figures. The second set of projects explored how creative writers can choose and integrate suggestions from online crowd workers, story plot prediction models, and Large Language Models (LLMs) in realistic writing scenarios. In the era of LLMs, a key question is how we develop systems and technologies that enable humans to communicate and express themselves more easily, effectively, and authentically.

About the speaker: Dr. Ting-Hao (Kenneth) Huang is an Assistant Professor at the Pennsylvania State University's College of Information Sciences and Technology. Specializing in the intersection of human-computer interaction (HCI) and natural language processing (NLP), he focuses on developing intelligent systems that are practical, robust, and beneficial for complex human tasks. His research contributions have been published across HCI, NLP, and AI conferences such as CHI, IUI, ACL, NAACL, EMNLP, HCOMP, and AAAI, earning paper awards at INLG, CHI, and IUI. Actively involved in academia, he recently co-chaired HCOMP 2022's Works-in-Progress Papers and Demonstration Track and co-organized the In2Writing workshop at CHI 2023. Dr. Huang completed his Ph.D. in Computer Science at Carnegie Mellon University in 2018.


Joonsuk Park

Link: Register here

Date: 6 Dec, 2023

Time: 11:00am-12:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2) Level 4, Seminar Room 4-2

Talk title: Evaluative Argument Mining and Its Applications

About the talk: The ease of internet access has significantly increased the number of user comments authored by inexperienced writers. Despite the potential usefulness of such comments, readers face the daunting task of sifting through copious amounts of uninformative content to extract relevant information. One popular approach to deal with this problem is to build systems that can help readers by recommending helpful comments or summarizing available information. We, however, consider the problem from the perspective of commenters: Can we build a system that can guide commenters to write “better” comments? Such an approach would enhance the overall quality of textual content available online and complement existing solutions for reader assistance. In this talk, I will present the core components of an automated system to assist commenters in constructing better-structured arguments in their comments. These include: (1) a theoretical argumentation model to capture the evaluability of arguments in the online setting, (2) a classifier for determining an appropriate type of support---reason or evidence---for propositions comprising user comments, and (3) a classifier for identifying support relations present in user comments. I will also discuss how this system can be applied in practice, for example in assistive commenting interfaces and recommendation systems.

About the speaker: Joonsuk Park is an Assistant Professor in the Department of Computer Science at the University of Richmond, where he is also affiliated with the Linguistics Program. His research primarily focuses on theoretical and empirical methods to assist human communication from the perspectives of logicality, factuality, and ethicality. Specific areas of research include argument mining, fact verification, and implicit toxicity detection. Currently, he is a consultant at NAVER AI Lab and a co-organizer of the 10th Workshop on Argument Mining (ArgMining). Previously, he was a faculty in the Department of Computer Science at Williams College and a visiting scholar at NAVER AI Lab.


Wang Yun

Link: Register here

Date: 22 Sept, 2023

Time: 1:00pm-2:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2) Level 3, Seminar Room 3-10

Talk title: Human-AI Collaborative Creation of Visual Storytelling

About the talk: Visual storytelling is a powerful form of communication that uses visual elements, such as charts, infographics, images, animations, and videos, to create narratives. However, creating engaging visual stories is a challenging task, especially for non-experts, as it requires a deep understanding of the content and logic, the visual design principles, and the narrative structures. Moreover, it often takes a lot of time and effort to produce high-quality visual stories. In this talk, the speaker will present her work on how human-AI collaboration can enhance the process and outcome of visual storytelling. She will show how to generate expressive and informative visualizations, compose visualizations and infographics into stories, and transform visualizations into more engaging animations. She will also examine how humans and AIs can collaborate in the creative process of visual storytelling, and how such collaboration can augment both human creativity and AI capabilities. Finally, she will discuss the future research opportunities and challenges in this domain.

About the speaker: Dr. Yun Wang is a senior researcher in the Data, Knowledge, Intelligence (DKI) Area at Microsoft Research Asia. Her research interests lie at the intersection of human-computer interaction, information visualization, artificial intelligence, and data science. Her work aims to facilitate human-data interaction, human-AI collaboration, and visual storytelling with novel techniques, tools, and systems. Yun's recent research focuses on enhancing human communication with visualizations infused with AI techniques. She has developed techniques and systems for creating visual stories in diverse forms, such as infographics, interactive web pages, motion graphics, and animated videos. She has also explored how to simplify and improve the data interaction workflows between humans and AI for analysis, ideation, authoring, and storytelling. She envisions a future where humans and AI can co-create engaging and informative visual stories. She has published over 40 papers in high-impact venues such as VIS, CHI, UIST, TVCG, and CG&A, and serves as a reviewer and program committee member for a variety of venues. She holds a Ph.D. in computer science and engineering from HKUST, and a joint B.Eng. in software engineering and B.Sc. in computer science from Fudan and UCD.


Zhang Tengxiang

Link: Register here

Date: 8 Sept, 2023

Time: 11:00am-12:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2) Level 4, Seminar Room 4-2

Talk title: Merging Digital and Physical Realities: A Human-centered Approach

About the talk: The convergence of the digital and physical realms presents a complex yet fascinating challenge. This talk will introduce a human-centered approach to this intersection, with a focus on the interactions among on-body wearables, off-body devices, and humans. The discussion will highlight the potential of next-generation wearables and ubiquitous devices, particularly smart head-worn devices and backscatter tags, as unique tools for unified representations of digital and physical resources. The talk will conclude with a discussion of the future human-computer interaction technologies from an interdisciplinary perspective.

About the speaker: Dr. Zhang Tengxiang is an Associate Research Scientist at the Institute of Computing Technology, Chinese Academy of Sciences. Dr. Zhang's research lies on the technical side of ubiquitous computing and human-computer interaction. He builds smart wearables (e.g., glasses, rings), develops sensing algorithms (e.g., for gestures and facial actions), and designs interaction interfaces (e.g., with AR/MR) to understand and merge the tagged physical world, the digital metaverse, and humans. This effort has led to various patents and publications at top venues including CHI and IMWUT. He received his Ph.D. with Honors in Computer Science from Tsinghua University in 2019, and his master's degree from the ECE department of the University of Texas at Austin in 2013. He is currently a Visiting Scholar in the Biomedical Engineering Department at NUS, working on skin electronics.


Online Panel: What I Wish I Knew About CHI Submission Preparation 5 Years Ago

Link: (event ended)

Date: 11 Aug, 2023

Venue: Online event

About the panel: This week, we are running a panel about CHI submission preparation. Join us to pick up advice from Shengdong ZHAO (NUS), FOONG Pin Sym (NUS), and Simon PERRAULT (SUTD).


Maria Wolters

Link: Register here

Date: 7 March, 2023

Time: 1:00-2:00pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2), Level 3, Seminar Room 3-9

Talk title: Introducing the Experience Sampling Method into Clinical Practice - What Makes It Acceptable?

About the talk: In this talk, I will present initial findings from stakeholder requirements gathering done for the IMMERSE project. In the EU project IMMERSE, we will implement a solution in clinical practice that allows patients with mental health conditions to document their mental health throughout their day in various situations. This is called the Experience Sampling Method (ESM). Patients' ESM data is then shared, in the form of a dashboard, with the health professional who treats them. The goal of IMMERSE is to find out, using a Randomised Clinical Trial approach, whether ESM can improve shared decision making and treatment efficacy in clinical practice. The IMMERSE trial will run in four countries: Belgium, Germany, Scotland, and Slovakia. I will summarise the strategy used for gathering stakeholder requirements in IMMERSE and discuss factors that affect patients' and clinicians' readiness to engage with an ESM solution, as determined through a survey of over 400 patients in four European countries.

About the speaker: Dr Maria Wolters is the incoming Research Group leader for the group SOC (digital participation) at the German research institute OFFIS and Reader (associate professor) in Design Informatics at the University of Edinburgh. She has published over 90 peer reviewed papers in Human-Computer Interaction, eHealth, and Computational Linguistics.

Maria is interested in digital inclusion. Around 10% of the population will be excluded from online-only services due to lack of access to technology, a badly designed user experience, lack of interest, or lack of trust. This results in systemic gaps and biases in data-driven systems to support health and social care. Maria is looking at ways to mitigate this by designing solutions that span digital and physical, online and in person.


Jude Yew

Staff User Experience Researcher, Google

Date: 15 February, 2023

Time: 3:30-4:30 pm

Venue: School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2) Level 4, Seminar Room 4-1

About the talk: Building an Operating System (OS) is a large endeavor and requires us to think about the developer's experience in ways that differ from consumer product experiences. Unlike interfaces that cater to individual users, there is no one universal developer archetype to cater for. Operating systems have to take into account the collective and collaborative needs of the teams that maintain different parts of the OS, for instance their different development contexts and the tools they use, which lead to different sets of requirements for different sets of users.

In this presentation, the speaker will share some of the recent work he has carried out to better account for Fuchsia's diverse sets of users, and the methods he and his team of UX researchers use to ensure that their platform is simple to use, easy to understand, and effective in enabling developers to realize their goals.

About the speaker: Jude is currently a Staff User Experience Researcher at Google working on Fuchsia, Google’s next generation Operating System. His current work focuses on improving the developer’s experiences on Fuchsia via tooling and workflows. Specifically, he is pursuing a data-driven and mixed-methods approach, combining data science and interviews, to help inform teams about the developer’s experiences using their product.

Jude was formerly a tenure-track faculty member at the Department of Communications and New Media, NUS, where his research focused on harnessing collective intelligence, collaboration, and cooperation for prosocial good. He received his PhD and master's degrees from the University of Michigan and has been awarded grants from the National Science Foundation, the National Heritage Board, and the Rackham School of Graduate Studies.


Anusha Withana

Senior Lecturer, University of Sydney

Date: 17 January 2023, Tuesday

Time: 1:00-2:00pm

Venue: Meeting Room 5.1, Level 5, School of Computing & Information Systems 1, Singapore Management University, 80 Stamford Road, Singapore 178902

About the talk: Wearable technologies, especially sensing technologies, are of critical importance in a wide variety of applications, including disability management, aged care, physical rehabilitation, and sports. Despite the growing need, adherence to these technologies is still low. Research finds that a major factor in technology abandonment is the poor fit between the user's abilities and the system's characteristics. This is not a surprise, considering that the challenges faced by people in these application areas manifest in dramatically different ways in individuals and change over time.

This talk explores an approach to creating wearable technologies by replacing mass production, i.e., “designed for many”, with personal fabrication, i.e., “designed for me”. By combining the understanding and modelling of user activities with novel fabrication technologies such as 3D and 2D functional printing, “designed for me” aims to create highly customisable wearable devices, from on-skin interfaces to personalized accessories.

About the speaker: Anusha Withana is an ARC DECRA fellow and a senior lecturer (Asst. Prof.) at the School of Computer Science, the University of Sydney, where he leads the AID-LAB. He received his Master's and Ph.D. from Keio University, Japan, and was a postdoc at the Max Planck Institute for Informatics and Saarland University before joining the University of Sydney. He works in the research field of human-computer interaction (HCI), mainly focusing on creating personalized enabling technologies, where technology blends and harmonizes with users and the environment, leveraging the natural affordances of the context. His research has been published in top-tier HCI conferences and journals such as ACM SIGCHI/UIST. He has won numerous awards, including the Most Innovative Engineers in Australia award 2020 and the Most Promising Technology Award at Innovfest unBound 2016, and his research has been featured in leading media outlets such as CNN, Discovery TV, SBS, The Straits Times, Gizmodo, and Engadget.