Using co-design to learn about family caregivers
by Adam Felts
How can we uncover what people want? One way is to have them invent it themselves.
Recently, the AgeLab published a paper describing what a very particular group, family caregivers, might want out of tools powered by artificial intelligence. But the method we used was an unusual one: a co-design workshop.
Co-design is a research method that involves intended users and other stakeholders in a design process. For example, when planning and designing a new park in a neighborhood, a co-designer would include the neighborhood’s residents in the design process, to better ensure, among other things, that the city doesn’t end up with a park that no one wants to use.
But co-design can also be used simply for uncovering people’s experiences, values, and desires, without any immediate aim for application.
For our co-design study on family caregiving and AI, we recruited participants who were either family caregivers or older adults, or both. The participants, who were divided into groups, were tasked with creating a prototype of an AI-powered technology that solved a caregiving challenge.
However, the goal of the workshop was never to advance these prototypes toward production. Instead, the participants’ creations were analyzed as reflections of their attitudes and experiences.
(We adapted this study from an earlier project we conducted in collaboration with the MIT Media Lab.)
Out of six groups of participants, we had six prototypes to analyze. In addition to having participants describe their prototype, we also used generative AI in the moment to create a visual representation of their ideas (some of them are reproduced below).
Four of the six ideas coalesced around two major themes.
Behavioral management tools
Two of our workshop groups came up with applications designed to manage the behavior of the care recipient – that is, monitoring their behavior and providing suggestions and reminders to them. Think of a wearable device that reminds a person to take their daily medications, or suggests that they go for a walk or call a friend on the phone.
The groups’ gravitation toward this sort of application tells us three things:
First, it tells us that managing the behavior of a care recipient – that is, making sure they’re doing what they’re supposed to do, and persuading them to do it – is a difficult task for family caregivers. From accompanying research we’ve learned that it’s time consuming, emotionally draining, and puts strain on the relationship.
Second, it tells us that existing tools that could do this job are not very effective. One of our participants made her complaints about those tools explicit: “Alexa has so much potential, it sucks. ElliQ [a voice assistant for older adults] also sucks. They seem to think that getting older equates to being five years old. Even with dementia, my mom is not an idiot.”
Third, it tells us that our participants believed that AI could be effective for creating tools that assist with behavioral management. They noted the ability of AI chatbots to produce natural language in an empathetic tone – sometimes more empathetically and more patiently than a human.
Information sharing tools
Two more groups came up with ideas for tools that, broadly speaking, are intended to share and relay information, especially medical information.
The first was an AI assistant that would play the role of a doctor, a “second opinion,” with the benefits of always being available to the caregiver and, in the words of the participants, lacking bias.
The second was a platform like a medical portal, but personalized so that the information is catered toward whoever is viewing it – doctor, caregiver, or care recipient – each of whom, of course, possesses different informational needs.
What these groups’ creations primarily surface is complaints. That is, their ideas appeared to be inspired by negative experiences with existing tools, professionals, and institutions.
The desire for an automated second opinion reflects frustrations with the limited availability and at times stunted communication skills of doctors. Medical training, infamously, makes doctors worse at communicating rather than better. Yet participants never wanted to remove the human professional from the equation, instead hoping that AI could supplement the expertise of a doctor.
The desire for a hub of personalized health information reflects frustrations with how medical information in patient portals is presented in a highly technical way, interpretable only by the (again scarce) doctor.
Taken together, these two ideas reflect how valuable it is to be able to access, interpret, and share medical information and advice. What’s more, good advice is a scarce and precious resource. The idea that AI could serve as an expert medical interpreter and advisor was a tantalizing possibility for our participants.
Why co-design?
Could these discoveries have been made using more conventional methods? That is, instead of subjecting our participants to this very involved process of designing AI prototypes, could we have just asked them to relate some complaints they had about caregiving and how they would like them to be addressed?
The AgeLab, of course, has asked these very sorts of questions of caregivers in prior studies. But the answers we have gotten through such conventional methods were not as deep. Asking participants to think like designers from the outset puts them in the position of thinking in terms of solutions, as well as in terms of systems: what accounts for the difficulties I’ve run into as a caregiver? How could they feasibly be addressed? These considerations are rolled into the process from the very beginning, rather than needing to be elicited step by step by a researcher.
In other words, the co-design method frames the topic in a particular way and directs participants in a particular fashion. Both the strength and the weakness of co-design as a social science method lie in this framing and direction. Its strength, again, lies in placing participants in an active and creative frame of mind.
Its weakness, by the same token, is that it demands that its participants provide a solution – in this case, a solution that uses AI. There is less room for participants to explore the idea that this technology may not be the solution that they want, that they may want some other solution entirely, or even that a “solution” is not something they want at all.