HCI Reading Group: 7 June 2024

Posted by Kotaro on 7 June 2024

At SMU, we have a weekly HCI reading group. We pick a paper of interest, read it before the meeting, and share our thoughts on it during the meeting. Participants typically include faculty members, students, engineers, and postdocs. (If you are interested, join the reading group's Slack to get updates.)

Today, we read: Jonathan Zong, Isabella Pedraza Pineros, Mengzhu (Katie) Chen, Daniel Hajas, and Arvind Satyanarayan (2024). Umwelt: Accessible Structured Editing of Multi-Modal Data Representations.

I (Kotaro) attended the presentation for this paper at CHI 2024 and found the work fascinating, so I picked it for today. Below is a summary of today's discussion, generated by ChatGPT from the meeting notes with some edits by me!

Summary of Today’s Discussion

During our reading group meeting, we delved into this newly published paper, which explores an accessible method for authoring data representations such as visualizations and sonifications. We were all excited about the work. For some of us, this was the first time reading an accessibility-related paper, so it was a great learning opportunity. A few thoughts:

Some of us were curious about how we might incorporate haptics or other modalities in the same way the authors integrated visualization and sonification. Participants also wondered about the scope of data types the approach can cover; it is probably hard to devise a similar authoring method for sonifying images such as satellite imagery. The conversation further touched on the need to understand various sonification methods (e.g., instead of a simple tone with varying pitch, could music be used?).

In terms of study and analysis methods, the group raised concerns about whether the study's sample size was sufficient for robust conclusions. There were questions about how the paper defines and measures the system's "expressiveness." The inclusion of co-authors as target users was also debated. The group appreciated the paper's detailed justification of design decisions but felt that the scenario section was overly long and possibly misplaced. Additionally, some wished for clearer explanations of how the sonification system benefits blind users and helps them interpret complex datasets.