Designing Trust Between Humans and Delivery Robots

Graduate study on design choices that can foster trust through everyday encounters between humans and robots in shared environments.

Download the full report here!

Work accepted to the CHI 2026 Conference in Barcelona.

MY ROLE

I conducted this independent study at the Institute of Design, Illinois Tech, as a part of my graduate program. I framed the challenge, conducted the research, and developed the trust framework.

THE TEAM

Shreya Mathur

Advised by Ruth Schmidt

SKILLS

Human-Robot Interaction

Behavioral Design

Systems Thinking

TIMELINE

Aug 2025 - Dec 2025
14 Weeks

Project Overview

This project began with a simple provocation: what happens when the user is no longer human? As autonomous products like delivery robots move into everyday life, they now share public space with people, navigating the same sidewalks. While these robots often spark curiosity, they can just as easily cause frustration, revealing an emerging human–robot relationship that warrants closer attention.


Much of the focus on autonomy has centered on technical performance, leaving the relational layer of how people interpret robotic behavior and develop trust largely unexplored. This research resulted in a framework that operationalizes trust by breaking it down into three human experience constructs, shaping coexistence through everyday interactions.

HUMAN-ROBOT INTERACTION X BEHAVIORAL DESIGN X SYSTEMS THINKING

Project Impact

While grounded in delivery robots, this framework applies to any context where autonomous systems enter spaces people already inhabit. The principles translate directly to service robots in hospitals, warehouses, and campuses, as well as consumer-facing products like robotic pets or home assistants.

Sidewalk robots need more than intelligence.

Delivery robots navigate sidewalks, negotiate pedestrian traffic, and make decisions about routes and timing, all without human intervention in the moment. They've been around long enough that seeing them isn't entirely foreign anymore. But we're at an inflection point.


With companies like DoorDash launching their own delivery robot (Dot) as recently as September 2025, and UberEats continually expanding its robot partnerships in various cities, we are transitioning from pilot programs to defining infrastructure. Which means we need to examine our relationship with these robots. What does trust look like in this context? How can we design interactions that work for everyone?

The gap that remains.

Technical autonomy is progressing rapidly, with companies like Starship and Kiwibot making robots smarter and more capable. However, the relational layer of how humans interpret robotic behavior and develop trust remains largely unexplored.


This imbalance highlights a critical design gap in understanding how to foster trust, empathy, and coexistence. This study seeks to address that gap.

Existing knowledge shaped the study

To understand trust in human-robot interactions, two complementary perspectives were needed: what formal research tells us about how these relationships should work, and what's actually happening on sidewalks right now.

The literature review and technological reports grounded the study in established frameworks and provided theoretical foundations for understanding what makes autonomous systems legible and trustworthy. Social media analysis captured unfiltered, real-time reactions from people encountering delivery robots in their daily lives. Platforms like Reddit, TikTok, and Instagram revealed spontaneous emotional responses, emerging social norms, and the actual behaviors people exhibit when they share space with robots.

Together, these methods allowed me to examine trust both as a design construct and as a lived social phenomenon.

01. Social Media Analysis

02. Literature Review

03. Technology Reports

PHOTO BY JIP VIA CC BY SA 4.0

KEY INSIGHTS FROM DESK RESEARCH:

  • Predictability and transparency in a robot's actions are essential to cultivating user trust.

  • Anthropomorphic design influences human response and must be balanced carefully.

  • Delivery robots that explicitly signal intent feel safer to pedestrians.

  • Humans can develop empathetic responses toward robots, pointing to an emerging relational bond.

  • Resistance indicates that technical readiness doesn't guarantee social acceptance.

  • Safety, accessibility, and equity concerns persist.

Core Experience Constructs

Across insights, moments of ease, confusion, and discomfort consistently clustered around similar breakdowns in understanding intent, coordination, and emotional response. Three distinct dimensions of human experience emerged from analyzing these interactions:

💻

Collaborative Ease: Am I ready to work with it?

Knowing the robot will adapt and make interaction easy.

🧠

Intent Clarity: Do I understand what it’s doing?

Understanding the robot’s purpose, goal, and next move.

👾

Emotional Comfort: Do I feel safe around it?

Feeling safe, at ease, and positive in the robot's presence.

RESEARCH GOAL

The goal of this work is to explore how trust emerges from everyday encounters between humans and robots in shared environments, and what design choices can foster relationships that sustain coexistence.

Rather than treating trust as an abstract goal, this work asks how designers can deliberately shape it.

What robot behavior patterns create positive human experiences across the dimensions of Collaborative Ease, Intent Clarity, and Emotional Comfort?

To ground the three experience constructs in lived responses, I conducted a questionnaire study with 30 participants, aged 25–40, across cities including Chicago, New York, Bangalore, San Francisco, Toronto, Madrid, and Delhi. Conducted from November 12 to November 24, 2025, the study captured how people interpret and respond to delivery robot behaviors across a range of encounters.

Translating the insights into a framework

Rather than serving as a prescriptive checklist, the framework functions as a diagnostic and generative tool. It supports both the evaluation of existing behaviors and the exploration of new interaction strategies during early-stage design, testing, and deployment. By making these relationships explicit, the framework helps teams design not just what robots do, but how they are socially experienced.

01. Participants prefer robots to fade into the background of public space.

02. 70% of participants prefer robots that show distress signals in failure moments.

03. Trust is highly sensitive to the predictability and directional intent of the robot.

04. 56% of participants say clear intention is the primary driver of comfort.

05. A robot's emotiveness works best when it is situational, not constant.

06. 64% of participants prefer calm signaling over expressive personality for comfort.

About the Framework

This framework operationalizes trust by breaking it down into three human experience constructs—Collaborative Ease, Intent Clarity, and Emotional Comfort.

Each construct is translated into actionable design principles, which are then expressed through specific, observable design behaviors in real sidewalk encounters.
Rather than treating trust as an abstract outcome, the framework works from trust to experience, and from experience to design action, ultimately creating a clear line of sight between what users feel and what designers build.

The design manifestations in the outermost layer ensure the framework is not only theoretical, but directly implementable across contexts.

CONSTRUCT #1

Collaborative Ease

Collaborative Ease describes how effortlessly humans and robots coordinate in shared space. It reflects whether a robot reduces cognitive and physical effort, or adds friction to everyday movement.

CONSTRUCT #2

Intent Clarity

Intent Clarity addresses how transparently a robot communicates its purpose and direction. Predictable movement, visible status, and operational consistency reduce uncertainty in shared environments.

CONSTRUCT #3

Emotional Comfort

Emotional Comfort centers on how safe and at ease people feel in the presence of a robot. It emerges when robots communicate restraint, respect boundaries, and express emotion only when appropriate.

Using the framework in practice

This framework is intended for designers and engineers working on autonomous systems that operate alongside people. UX and HRI designers can use it to shape robot behaviors, communication methods, and expressive interfaces; product teams can use it to define behavior specifications, prioritize features, and generate new ideas.

It can also support company leadership and deployment teams in making strategic decisions about where, when, and how any anthropomorphized or autonomous technology is introduced into public environments, anticipating public perception, acceptance, and long-term coexistence.

To sum up,

The framework functions both as a design tool for exploring new interaction strategies and as an evaluative lens for assessing whether existing robot behaviors support trust in real-world public contexts.
