Editor’s Note: This is a guest post from Amy Koike, whom I came to know through her work on the Sprout robot. Amy writes about her impressions from the Human-Robot Interaction (HRI) Conference in March 2025. Amy has been a regular presenter at this conference over the years, including her work on Sprout, which won a best-paper award in 2024.
Who am I?
Hello, thank you for visiting my very first article on Substack!
My name is Amy Koike, and I am currently pursuing a Ph.D. in Computer Sciences at the University of Wisconsin-Madison in the United States. My research focuses on designing the expressivity of robots—typically by manipulating a robot’s form factors (e.g., Soft Expressive Robots [1] and Fluid Expressions for Robots [2]).
Recently, my interests have expanded into studying “robots in the wild”—exploring how people naturally react to and interact with robots in everyday environments. I'm especially curious about how we can design robots to blend seamlessly into society, be used frequently, provide better services, and, of course, be loved!
At this year’s Human-Robot Interaction (HRI) 2025 conference, I presented a paper related to ‘robots in the wild’ [3], where we investigated what motivates people to approach a guidance robot in a shopping mall in Japan [4].
In the rest of this post, I’ll share some of my experiences at HRI’25!
General Info about HRI’25
This year, the conference was held in Melbourne, Australia. It was my first time visiting, and I absolutely fell in love with the city. Melbourne is beautiful and well-organized, and I was especially impressed by the public transportation system—trams, trains, and buses were all very accessible and incredibly clean.
The conference venue was the Melbourne Convention and Exhibition Centre, located along the Yarra River. I really enjoyed walking by the river and taking in the scenery around the venue. One of my lab members also liked walking there—we even bumped into each other near the river after the conference one day!
According to the opening ceremony, 550 people from around the world attended HRI this year. The full paper acceptance rate was 25% (100 papers accepted out of 400 submissions).
One big change from previous years was the switch from a single-track to a dual-track format for full paper talks. Honestly, I was a bit sad about this—I loved seeing everyone together in one room, sharing inspiration across different sub-fields, even if the research wasn’t directly related to my own.
That said, I really liked the setup where demos, industry booths, and poster sessions were placed in the coffee break room. It naturally encouraged human-to-human interaction—I saw so many people staying between sessions, watching demos, chatting by posters, and actively discussing research. (As an introvert who sometimes struggles to start conversations, this setup really helped me move into discussions more smoothly... 😂)
Interesting Papers and Thoughts
In terms of trends, one major highlight of this year’s conference was the increased exploration of Large Language Models (LLMs) in both full papers and Late-Breaking Reports (LBRs). The use cases varied—some papers focused on using LLMs for conversational systems, while others explored how LLMs could support designing expressivity (e.g., context-aware robot facial expressions [5], robot personality [6]).
I definitely feel that the rise of LLMs is pushing HRI research into a new stage. Personally, I’m hoping we’ll see more field deployments powered by LLMs. Since LLMs can enable spontaneous, improvised conversations, they open up exciting opportunities to study off-script human-robot interactions across diverse contexts and populations.
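To make this concrete, here is a minimal sketch of what an LLM-backed dialogue loop for a mall guidance robot could look like. This is my own hypothetical example using the OpenAI Python SDK; the systems in the papers above may use entirely different models, prompts, and safety layers.

```python
# A hypothetical LLM-backed dialogue loop for a guidance robot.
# Model choice, prompt, and structure are illustrative assumptions,
# not the architecture of any system cited in this post.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a way-finding robot in a shopping mall. "
    "Answer questions about store locations briefly and politely. "
    "If asked something off-topic, respond playfully, then steer back "
    "to way-finding."
)

def respond(history: list[dict], user_utterance: str) -> str:
    """Append the visitor's utterance, query the LLM, and return the reply."""
    history.append({"role": "user", "content": user_utterance})
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
    )
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Off-script questions like this are exactly what field deployments surface.
dialogue: list[dict] = []
print(respond(dialogue, "Do you know where my boss is?"))
```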
As a robot design researcher interested in form factors, I am especially intrigued by a paper titled “MetaMorph -- A Metamodelling Approach For Robot Morphology” [7]. This paper proposes a new framework for systematically describing and comparing robot morphologies using a graph-based representation. This approach has potential applications not only in quantifying visual similarity between robots but also in enhancing accessibility—for example, by helping visually impaired users understand robot forms through structured descriptions.
In the field of HRI, the impact of a robot’s form factor on human perception is undeniable. However, since different labs often use different robots, it sometimes becomes difficult to generalize findings across studies. I believe the MetaMorph framework offers a promising solution to these challenges. I’m also hoping the authors will eventually provide public tools to make their framework operational—similar to the ABOT database. A tool like that could be valuable for researchers across the HRI and robotics design communities.
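For intuition, here is a toy sketch of the general idea in Python using networkx: a robot’s parts form a directed graph, and graph edit distance serves as one crude proxy for morphological similarity. The part names and graph structures are my own inventions; the paper’s metamodel is far richer than this.

```python
# A toy illustration of describing robot morphology as a graph,
# loosely inspired by the MetaMorph idea [7]. Everything below is
# a made-up example, not the paper's actual metamodel.
import networkx as nx

def make_robot(name: str, parts: list[tuple[str, str]]) -> nx.DiGraph:
    """Build a directed graph whose edges connect parent parts to child parts."""
    g = nx.DiGraph(name=name)
    g.add_edges_from(parts)
    return g

# Two hypothetical robots described by their part hierarchies.
humanoid = make_robot("humanoid", [
    ("torso", "head"), ("torso", "left_arm"), ("torso", "right_arm"),
    ("torso", "left_leg"), ("torso", "right_leg"),
])
wheeled_guide = make_robot("wheeled_guide", [
    ("base", "torso"), ("torso", "head"), ("torso", "screen"),
])

# Graph edit distance as one crude proxy for morphological similarity.
print(nx.graph_edit_distance(humanoid, wheeled_guide))
```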
Another interesting concept I encountered at HRI’25 was captured in the title of a paper: “'A Robot's Life is Over When People Give Up': Socio-Technical Infrastructure for Sustaining Consumer Robots” [8]. The paper proposes that robots continue to “live” as long as people care for them, and that this care is upheld by a rich socio-technical infrastructure involving people, language, tools, rituals, and emotion. It argues that the design of future robots should go beyond functionality and incorporate these human-centered elements to foster meaningful, lasting relationships.
That said, this paper made me reflect on a dilemma I often think about: many users inevitably lose interest in a robot over time—especially social home robots. So how can we design robots that sustain emotional relevance and avoid becoming obsolete in users’ lives? 🤔💭
A Paper I Presented
The paper I presented was titled “What Drives You to Interact?: The Role of User Motivation for a Robot in the Wild.” In this work, we explored why and how people approach a robot in the wild, and whether there’s an interaction between these two aspects. Our core motivation was to inform better robot behavioral design—understanding why people initiate interaction can help identify the types of motivations that drive engagement, while understanding how they approach can give robots clues to distinguish between those motivation types in real time.
While previous HRI research has recognized that interactions in the wild differ from lab settings, we wanted to dive deeper: How do different motivations lead to different types of interaction? And how can we leverage those insights in future robot design? To answer these questions, we conducted a qualitative analysis of interactions between a guidance robot and people in a real-world field deployment.
Ultimately, we identified four types of motivations that drive people to approach a robot in the wild: Function, Experiment, Curiosity, and Education. Each motivation led to distinct interaction patterns (a toy sketch of how a robot might pick up on these cues follows the list):
Function-motivated users saw the robot as a tool to “use,” so their interactions were typically brief and task-focused.
Experimenters approached the robot with the intent to test its capabilities, often asking multiple questions, some playful or even challenging—like “Do you know where my boss is?” (Keep in mind, this robot was deployed in a shopping mall to provide way-finding assistance, so it clearly wasn't designed to answer personal questions!)
Curiosity-driven users were less interested in the robot’s functionality and more drawn to the robot itself. These users would often observe the robot quietly or engage in non-task-related conversations, asking things like “What’s your name?” or “Don’t you feel lonely here?”
Education-motivated users were typically adult-child pairs, where the adult initiated the interaction to encourage the child to engage with the robot—turning it into an educational moment.
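Purely as an illustration of that “real-time clues” idea, here is a hypothetical rule-of-thumb mapping from approach cues to the four motivation types. The cues, rules, and threshold below are my own invention; our paper reports qualitative patterns, not a classifier.

```python
# Hypothetical mapping from observable approach cues to motivation types.
# These rules are illustrative only and do not come from the paper.
from dataclasses import dataclass

@dataclass
class ApproachCues:
    adult_child_pair: bool     # an adult bringing a child to the robot
    asked_task_question: bool  # e.g., "Where is the food court?"
    question_count: int        # questions asked during the exchange
    observed_quietly: bool     # lingered and watched without a task in mind

def guess_motivation(cues: ApproachCues) -> str:
    """Return a best guess among Function, Experiment, Curiosity, Education."""
    if cues.adult_child_pair:
        return "Education"   # adult scaffolds the child's interaction
    if cues.observed_quietly and not cues.asked_task_question:
        return "Curiosity"   # drawn to the robot itself, not its function
    if cues.asked_task_question and cues.question_count <= 2:
        return "Function"    # brief, task-focused use
    return "Experiment"      # many or playful questions probing capability

print(guess_motivation(ApproachCues(False, True, 1, False)))  # -> Function
```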
If you’re interested in learning more, please feel free to download and read our paper [3]!
Presenting at HRI’25
I’d like to wrap up this post with a short reflection on my experience presenting at the conference.
The first thing I remember is just how intimidating the setup was. As you can see in the photo below, the room looked like a mini football stadium 😂. I still remember my hands shaking during the talk. (Oh—and now that I think about it, the podium was way too tall for me, so I had to stand on my tiptoes the whole time! 😂)
Intimidating—but also incredibly exciting. At last year’s HRI in Colorado, I had to give my talk online due to a winter storm, so this year felt like a dream finally coming true. I was so grateful that many HRI researchers paid attention to my talk and engaged with it.
One thing I learned from presenting this year is that giving your talk on the first day of the conference has a big advantage—it creates more opportunities to meet people who already recognize you or your work. I had several people come up to me later in the week saying things like, “I saw your presentation yesterday!” That really helped me have deeper conversations and made the rest of the conference feel extra productive.
Closing Thoughts💭
This article ended up quite long—but thank you so much for reading! 😊 Human-Robot Interaction (HRI) is my favorite conference because it brings together such a wonderful, interdisciplinary community of robot researchers. I’m really happy I got to share some of my experiences with you.
I hope you were able to feel some of the “HRI vibes” through this post! If you have any questions or thoughts, please feel free to reach out—I love chatting about all things HRI.
[1] Amy Koike, Michael Wehner, and Bilge Mutlu. 2024. Sprout: Designing Expressivity for Robots Using Fiber-Embedded Actuator. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24). Association for Computing Machinery, New York, NY, USA, 403–412. https://doi.org/10.1145/3610977.3634983
[2] Amy Koike and Bilge Mutlu. 2023. Exploring the Design Space of Extra-Linguistic Expression for Robots. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (DIS '23). Association for Computing Machinery, New York, NY, USA, 2689–2706. https://doi.org/10.1145/3563657.3595968
[3] Amy Koike, Yuki Okafuji, Kenya Hoshimure, and Jun Baba. 2025. What Drives You to Interact?: The Role of User Motivation for a Robot in the Wild. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI '25). IEEE Press, 183–192. https://dl.acm.org/doi/10.5555/3721488.3721514
[4] This project was the product of my internship at the AI Lab, CyberAgent, Japan. The AI Lab is a leading industrial R&D group that actively engages in various research areas such as economics, machine learning, computer vision, and more. Last summer, I had the opportunity to work with amazing research scientists and engineers who are developing and deploying autonomous conversational agents in real-world environments.
[5] Victor Nikhil Antony, Maia Stiber, and Chien-Ming Huang. 2025. Xpress: A System For Dynamic, Context-Aware Robot Facial Expressions using Language Models. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI '25). IEEE Press, 958–967. https://dl.acm.org/doi/10.5555/3721488.3721605
[6] Alex Wuqi Zhang, Clark Kovacs, Liberto de Pablo, Justin Zhang, Maggie Bai, Sooyeon Jeong, and Sarah Sebo. 2025. Exploring Robot Personality Traits and Their Influence on User Affect and Experience. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI '25). IEEE Press, 968–977. https://dl.acm.org/doi/10.5555/3721488.3721606
[7] Rachel Ringe, Robin Nolte, Nima Zargham, Robert Porzel, and Rainer Malaka. 2025. MetaMorph -- A Metamodelling Approach For Robot Morphology. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI '25). IEEE Press, 627–636. https://dl.acm.org/doi/10.5555/3721488.3721566
[8] Waki Kamino, Selma Šabanović, and Malte F. Jung. 2025. 'A Robot's Life is Over When People Give Up': Socio-Technical Infrastructure for Sustaining Consumer Robots. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI '25). IEEE Press, 142–151. https://dl.acm.org/doi/10.5555/3721488.3721510