The As-If Machine: Walking in Someone Else’s Shoes

By Kelsey Keeves

As AI tools become increasingly powerful and widely available, University of Michigan researchers are applying their capabilities in new and innovative ways to tackle some of the world’s most challenging problems.

One example of this work is an AI known as the As-If Machine, developed through a partnership between Ceren Budak, associate professor of information, and Stephanie Preston, professor of psychology. Budak and Preston are building a generative artificial intelligence agent, similar to other large language models like ChatGPT, that responds to users by co-writing scenarios designed to help them build empathy for people dealing with complex societal challenges very different from their own lives.

“Life experiences shape the way that we perceive the world, our behaviors and our support for policies, but our experiences are limited,” Budak said. “You can only live one life. Using this sort of generative AI technology, we can hopefully get people to engage with different life experiences to increase their empathy for marginalized groups.”

Large language models make it possible to create agents that respond more completely to specific prompts. The researchers are developing four such agents that work together as a cohesive tool; individual users will co-write with it to create scenarios that deepen their understanding of, and empathy around, particular issues.

One agent will create plausible scenarios based on user input, while another will forecast the possible effects of those scenarios. A third will shape the material into a cohesive story arc, and a fourth will coordinate the other agents so they can interactively co-write with the user.
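To make that division of labor concrete, here is a minimal sketch of how four such agents might be chained together. The roles, prompts and the complete() placeholder are illustrative assumptions, not the team’s actual implementation.

```python
# Hypothetical sketch of a four-agent co-writing pipeline.
# The prompts, roles, and the complete() placeholder are assumptions
# for illustration, not the As-If Machine's actual code.

def complete(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a call to any chat-completion API."""
    raise NotImplementedError("connect this to an LLM provider")

def scenario_agent(user_profile: str) -> str:
    # Creates a plausible scenario grounded in the user's own circumstances.
    return complete("Write a plausible scenario based on this profile.", user_profile)

def forecast_agent(scenario: str) -> str:
    # Forecasts possible downstream effects of that scenario.
    return complete("Forecast realistic consequences of this scenario.", scenario)

def story_arc_agent(scenario: str, effects: str) -> str:
    # Weaves the scenario and its effects into a cohesive story arc.
    return complete("Combine these into a cohesive story arc.",
                    f"Scenario: {scenario}\nEffects: {effects}")

def cowriting_agent(arc: str, user_text: str) -> str:
    # Coordinates interactive co-writing, adapting to what the user wrote.
    return complete("Suggest the next passage, adapting to the user's writing.",
                    f"Story arc: {arc}\nUser's latest text: {user_text}")
```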

To ground the system in reality and counter the possibility of AI hallucinations, Budak and Preston are using retrieval-augmented generation, or RAG. RAG supplies the agents with relevant documents as context, constraining the situations that the generative AI tools can create. Based on researchers’ input, the RAG system will be customized to different scenarios and goals.
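As a rough illustration of the RAG idea, and not the project’s actual retrieval stack, the sketch below scores a small set of made-up documents against a query with TF-IDF similarity and prepends the best matches to the prompt, constraining what the generator can draw on.

```python
# Minimal retrieval-augmented generation sketch using TF-IDF similarity.
# The document set and prompt format are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up grounding documents; a real deployment would use researcher-curated sources.
documents = [
    "Report on household medical debt and delayed care.",
    "Interview excerpts about housing instability and eviction.",
    "Survey data on food insecurity among part-time workers.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def grounded_prompt(task: str) -> str:
    """Constrain generation by prepending retrieved context to the prompt."""
    context = "\n".join(retrieve(task))
    return f"Use only the following context:\n{context}\n\nTask: {task}"
```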

The project, conceptualized by Josh Ashkinaze, a Ph.D. student in the School of Information, recently received a Propelling Original Data Science grant from the Michigan Institute for Data & AI in Society. The team aims to develop a production-ready web application that will be useful to researchers in a wide range of fields.

“Many big problems can feel distant until they hit us. We hope that we will be able to use AI to bridge social divides and make long-term risks feel concrete,” Ashkinaze said. “The goal is to immerse users in specific counterfactual narratives to make others’ experiences feel less distant.”

The tool is built on narrative transportation theory, a psychological model holding that readers feel more connected to a narrative, and remember its details better, when they become fully immersed in the material, feeling transported into the situation they are learning about. The researchers will apply this concept on a broad scale and assess whether users are more likely to connect with the characters and struggles in AI-generated stories tailored to the readers’ own experiences.

In the past, researchers in psychology and related fields tested similar concepts by asking participants to listen to a recorded interview, read a testimonial or even write a story as if they were a person affected by the issue being studied. Those approaches are less reliable because it is difficult to be certain whether participants are paying full attention to the materials.

“This project is an amazing opportunity because we can complement one another’s skills so well,” Preston said. “AI is a growing field in cognitive psychology, but I personally have limited skills in that department. Our partnership allows me to imagine the tool, figure out where it will be theoretically interesting from a psychological perspective, determine where the pitfalls and promises are going to be and work with Ceren and Josh to build it. So it’s really a perfect collaboration.”

Participants will begin the program by answering questions about their living situation, employment, daily routines or anything else relevant to the scenario being tested. Researchers will be able to create custom questionnaires tailored to each experiment and to how its scenarios could realistically unfold under different circumstances. The AI agents will use the responses to create a custom scenario based on the user’s specific circumstances.

Participants will then respond to the AI-generated story with their own experiences. As they type, the AI writing assistant will provide real-time suggestions that advance the narrative while adapting to the participant’s real-life story.
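One way to picture that flow, with made-up intake fields and a stubbed suggestion function standing in for the real agents:

```python
# Illustrative flow: intake questionnaire -> tailored opening scenario ->
# real-time suggestions as the participant writes. Field names and the
# suggest() stub are assumptions, not the actual tool.
from dataclasses import dataclass

@dataclass
class IntakeResponse:
    living_situation: str
    employment: str
    daily_routine: str

def build_scenario(intake: IntakeResponse) -> str:
    # The AI agents would tailor the opening to the participant's answers.
    return (f"You live in {intake.living_situation}, you work as "
            f"{intake.employment}, and your days revolve around "
            f"{intake.daily_routine}. Then, one morning...")

def suggest(story_so_far: str) -> str:
    # Stub for the AI writing assistant's next-passage suggestion.
    return "[a suggested continuation would appear here]"

def cowrite(intake: IntakeResponse) -> str:
    story = build_scenario(intake)
    while True:
        user_text = input("Add to the story (press Enter to finish): ")
        if not user_text:
            return story
        story += " " + user_text
        print("Suggestion:", suggest(story))
```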

For example, if a user with a history of serious medical issues were interacting with a version of the tool designed to build awareness and empathy around poverty, the AI agents would work with the participant to co-create a story focused on the hardships endured by those struggling to pay off medical debt.

“The As-If Machine is an engine of experiments rather than a single experiment. Researchers can change just a few lines in a file, with little to no coding required, to create a new scenario and new questions,” Ashkinaze said. “This means that beyond the interventions we are studying, any researcher can deploy this system to test new ones. We hope that we will be able to use AI to bridge social divides, make long-term risks feel concrete and increase trust in science.”
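That “few lines in a file” workflow might look something like the hypothetical scenario configuration sketched below; the field names and format are assumptions meant only to convey the idea, not the As-If Machine’s actual schema.

```python
# Hypothetical scenario configuration a researcher might edit to launch a
# new experiment with little to no coding. The schema is an assumption,
# not the As-If Machine's actual format.
import json

SCENARIO_CONFIG = {
    "scenario_name": "medical_debt_and_poverty",
    "intake_questions": [
        "Describe your living situation.",
        "What is your employment status?",
        "What does a typical day look like for you?",
    ],
    "grounding_documents": ["medical_debt_report.txt", "poverty_interviews.txt"],
    "empathy_target": "people struggling to pay off medical debt",
}

def save_config(path: str) -> None:
    """Write the scenario definition so a web application could load it."""
    with open(path, "w") as f:
        json.dump(SCENARIO_CONFIG, f, indent=2)
```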