Learning From a Simulated World

Michael Yankoski, a postdoctoral fellow at the Davis Institute for Artificial Intelligence, wants to use AI to help solve complex human problems

Michael Yankoski, a postdoctoral fellow at the Davis Institute for Artificial Intelligence, teaches a class at the Davis Science Center. Yankoski specializes in AI and disinformation and their impact on politics and society.
By Abigail Curtis | Photography by Ashley L. Conti
June 12, 2024

Lots of people love to play the fast-paced party card game Codenames, in which players compete by guessing words based on clues given by a teammate. 

At Colby, however, one particular group of game players may raise eyebrows. That’s because they are simulated people living in a computer world created by Michael Yankoski, a postdoctoral fellow at the Davis Institute for Artificial Intelligence, with the help of student researchers. Ultimately, the simulations will provide insights about how information is shared and how messages are received in the real world. 

“Watching computer simulations play games with one another is a really fascinating moment,” Yankoski said. “The students are having a ton of fun with it. I love working with Colby students on this project. It’s an innovative space to be working on a project with some potentially significant implications.” 

Yankoski, whose research focuses on the social and political implications of AI, misinformation, and disinformation, came to the College last fall from the University of Notre Dame in South Bend, Ind. The simulation project highlights something important to him: the way that artificial intelligence can play a role in furthering the understanding of and the approach to complex human problems. 

He believes that is a good match for the mission of the Davis Institute for Artificial Intelligence, the first cross-disciplinary institute for AI at a liberal arts college. 

“I think the Davis Institute is a really important institute to exist at a location like Colby because being able to think about the societal, political, and economic implications of artificial intelligence really does require the broad view that a liberal arts college and liberal arts environment provides,” he said. “It helps address these really complicated questions from a multiplicity of trajectories and a multiplicity of frameworks.”  

A needle in a haystack

As technology advances and content produced by generative AI becomes cheaper, more believable, and ubiquitous, it can be hard to distinguish between what is real and what isn’t. Such advances make the work of tracking and identifying misinformation (false or inaccurate information) and disinformation (false information deliberately intended to mislead) even more critical.

It’s work that Yankoski knows well. At Notre Dame, he helped develop an AI system that focused on the detection of disinformation and misinformation trends in social media. He was also coauthor of a recently published paper documenting how, in the run-up to the Russian invasion of Ukraine, the Russian government used social media to distribute propaganda, including manipulated visual images, to influence public opinion. 

Michael Yankoski, a postdoctoral fellow at the Davis Institute for Artificial Intelligence, works on researching the social and political implications of AI, misinformation, and disinformation.

“If truth is a needle in a haystack, generative AI has the potential to make the haystack larger,” Yankoski said. “It does influence the amount of work that we all then have to do in order to try and find reputable sources, trustworthy sources, accurate reporting, all these kinds of things.” 

Social media users worried about falling prey to misinformation and disinformation might find it helpful to pay attention to their emotional response to different posts, he said. If they find themselves becoming frustrated, angry, or even irate, it’s likely worth taking a closer look at the content they’re seeing. 

That doesn’t mean any content that causes frustration is therefore disinformation or misinformation, Yankoski clarified. But being aware that social media algorithms often optimize for emotionally activating content is an important, even foundational, aspect of media literacy.

“A lot of disinformation is designed to solicit that kind of response,” he said. “This is just basic media literacy education, but if you notice yourself becoming highly activated by the content that you’re consuming, stop and pause and ask why that might be.” 

Asking big questions

Yankoski came into this work in a roundabout way. After graduating from college, he spent more than a decade in the philanthropic field as an author and public speaker before deciding to return to academia. He did so because of a growing belief that individual generosity and philanthropy weren’t going to be enough to contend with vast, structural issues such as inequity and climate change. 

“We need good policy, we need thoughtful regulations, we need a clearer awareness of what’s happening. We need rigorous ways of better understanding root causes and systemic processes,” he said. “So this sort of multidisciplinary approach that allowed the asking of questions and a more thorough engagement with the complexities is what I was really hungry for and wanted. It’s why I came to value the academic and research perspective so much.” 

At Colby, one of Michael Yankoski’s big projects is building a simulated world called Comp-HuSim, an artificial intelligence project that uses large language models to build complex human behavior simulations—or “sims.”

He enrolled in a doctoral program in ethics and peace studies at the Kroc Institute for International Peace Studies at the University of Notre Dame. After receiving his degree, he worked as a research associate in the university’s Department of Computer Science and Engineering. 

Yankoski draws on his experience to inform his work at Colby. He’s teaching computing ethics, which looks at the broad societal implications of computing technology, including bias in artificial intelligence, corporate practices, and more. 

“These are enormous systems that have incredible, and perhaps unprecedented, power to shape human societies,” he said. 

Building a world

He’s also working with a group of students on his own research: building a simulated world he calls Comp-HuSim. It’s an artificial intelligence project that uses large language models to build complex human behavior simulations—or “sims.” Through a software process, each sim has its own personality, demographics, backstory, occupation, memories, intentions, and relationship network. Yankoski and the students are now also building a social media network inside the simulated world so the sims can communicate with one another. 
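To make that description concrete, here is a minimal sketch of how one such sim might be represented in code. It is purely illustrative and assumes nothing about Comp-HuSim’s actual implementation; the Sim class, its field names, and the persona_prompt helper are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Sim:
    """One simulated person. Field names are illustrative, not Comp-HuSim's actual schema."""
    name: str
    occupation: str
    demographics: dict
    personality: str
    backstory: str
    memories: list = field(default_factory=list)
    intentions: list = field(default_factory=list)
    relationships: list = field(default_factory=list)  # names of other sims in the network

    def persona_prompt(self) -> str:
        """Assemble the sim's profile into a prompt that could be handed to a large language model."""
        return (
            f"You are {self.name}, a {self.occupation}. "
            f"Personality: {self.personality}. Backstory: {self.backstory}. "
            f"Recent memories: {'; '.join(self.memories) or 'none yet'}."
        )

# Example: a single sim that could be asked to react to a post on the simulated network.
alex = Sim(
    name="Alex",
    occupation="school nurse",
    demographics={"age": 42, "region": "rural Maine"},
    personality="skeptical but community-minded",
    backstory="Has worked in the same school district for 15 years.",
    memories=["Saw a neighbor share a misleading post about flu shots."],
    intentions=["Keep students healthy this winter."],
    relationships=["Jordan", "Priya"],
)
print(alex.persona_prompt())
```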

The project’s goal is to observe how information flows between the simulated people and to use those observations to learn more about how information and messaging work in reality. For example, Yankoski said, if the National Institutes of Health wants to issue messages in response to a pandemic, Comp-HuSim could act as a sort of sandbox for trying out different types of messages and getting a sense of how different messaging might affect the sims. 
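The “sandbox” idea can likewise be sketched in a few lines. The loop below is a hypothetical illustration, not the project’s method: sim_reaction stands in for a real language-model call, and the message variants and response labels are invented.

```python
import random

# Hypothetical message variants a public-health agency might want to compare.
MESSAGE_VARIANTS = [
    "Vaccines are available at your local pharmacy starting Monday.",
    "Protect your family: free vaccines at every county clinic this week.",
    "9 out of 10 of your neighbors have already gotten their shot.",
]

def sim_reaction(persona_prompt: str, message: str) -> str:
    """Stand-in for a language-model call: in a real run the persona and message
    would be sent to an LLM; here we just return a random label."""
    return random.choice(["shares", "ignores", "pushes back"])

def run_sandbox(personas: list[str], variants: list[str]) -> dict:
    """Tally how a population of sims responds to each message variant."""
    results = {v: {"shares": 0, "ignores": 0, "pushes back": 0} for v in variants}
    for variant in variants:
        for persona in personas:
            results[variant][sim_reaction(persona, variant)] += 1
    return results

if __name__ == "__main__":
    population = [f"You are sim #{i} with a distinct backstory." for i in range(100)]
    for variant, tally in run_sandbox(population, MESSAGE_VARIANTS).items():
        print(variant, tally)
```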

In the wake of the Covid-19 pandemic, it’s clear that the question of how messaging works isn’t just an academic one. Humans witnessed firsthand how messages about everything from handwashing to social distancing to vaccinations were critical, and sometimes not well received by the intended audience. Such simulations will never replace other disciplinary approaches to studying how messaging works in human populations, but they can help by adding additional perspectives, he said. 

“There are so many unpredictable complexities in the real world,” Yankoski said. “[We think] there are ways that we can build models and simulations that, because they run on silicon inside of a computer simulation, you could iterate through a million different versions of messaging and see which ones are most likely to have the impact that you’re looking for.”
