Human nature also manifests in AI

New research shows how artificial intelligence models replicate social behavior.

Betsy Loeff

If you think artificial intelligence tools like ChatGPT and Gemini sound human, this shouldn't surprise you: In social network settings, they behave like humans, too.

That's what Assistant Professor of Information Systems Marios Papachristou discovered when he teamed with Yuan Yuan from the University of California, Davis, to see if collectives of large language model (LLM) agents would exhibit the same behavior seen among humans in social networks. LLMs do act like people in network settings, and Papachristou sees enormous potential for business research teams to leverage this finding in the future.

What are LLMs? They're a form of artificial intelligence that's trained on massive amounts of information and leverages mathematical models to understand and generate human language. They don't think; they predict. Asked what an LLM was, ChatGPT noted, "At its core, an LLM answers the question, 'Given everything so far, what is the most likely next word or idea?'"

Papachristou and Yuan examined whether the predictive power of LLMs could simulate human behavior in network formation, that is, in how individuals choose their connections. The team's experiments were designed to see if LLMs demonstrate preferential attachment, triadic closure, and homophily, three traits seen in the way humans choose network associates.

Social studies

Have you ever looked at your social media network and felt that your friends have more connections than you do? If so, you're not alone. This occurs in part because of preferential attachment, a phenomenon demonstrated in actual social networks. "Preferential attachment means that generally people have tendencies to form connections with people who already have lots of connections," Papachristou says. For example, he points to the way many people follow the accounts of famous people on social media platforms. A few people end up with numerous connections, while most have fewer.
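For readers who want to see the mechanism in action, a minimal Python sketch of the classic preferential-attachment growth rule (the Barabási-Albert model) follows. It illustrates the phenomenon the study tested; it is not the researchers' code.

```python
import random

def grow_network(n_nodes, seed=0):
    """Grow a network in which newcomers favor already-popular nodes."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}  # start from a single linked pair
    for new in range(2, n_nodes):
        nodes = list(adj)
        weights = [len(adj[v]) for v in nodes]  # degree-weighted choice
        target = rng.choices(nodes, weights=weights, k=1)[0]
        adj[new] = {target}
        adj[target].add(new)
    return adj

g = grow_network(1000)
degrees = sorted((len(nbrs) for nbrs in g.values()), reverse=True)
print("top-5 degrees:", degrees[:5])            # a few hubs dominate
print("median degree:", degrees[len(degrees) // 2])
```

Because every newcomer favors well-connected nodes, a handful of hubs accumulate most of the links, which is why your friends so often seem to have more connections than you do.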

To test preferential attachment and other traits, the researchers fed three LLMs — GPT, Claude, and Llama — information about the individual they represented and others within the network. For the first experiment, the research team simulated network growth by creating an empty network and adding hypothetical individuals to it, with each new addition receiving information about the other individuals (nodes) in the network. Like their human counterparts, the AI agents were more likely to connect with already popular members of the network.
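The paper's exact prompts are not reproduced in this article, but the setup can be sketched roughly as follows. Here, ask_llm is a hypothetical placeholder for a call to GPT, Claude, or Llama, and the prompt wording is invented for illustration.

```python
def ask_llm(prompt):
    """Placeholder for a chat call to GPT, Claude, or Llama."""
    raise NotImplementedError("plug in your LLM client here")

def add_agent(adj, new):
    """Show the newcomer the current roster and let the LLM pick a link."""
    roster = "\n".join(
        f"- person {v}: {len(nbrs)} existing connections"
        for v, nbrs in adj.items()
    )
    prompt = (
        f"You are person {new} joining a social network.\n"
        f"Current members:\n{roster}\n"
        "Reply with only the number of the one person you want to connect to."
    )
    target = int(ask_llm(prompt).strip())
    adj.setdefault(new, set()).add(target)
    adj.setdefault(target, set()).add(new)
```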

The next experiment tested whether LLMs would exhibit triadic closure, a network phenomenon in which people are more likely to connect if they share a mutual friend. The team used two groups within a network, each comprising simulated individuals (network nodes) powered by LLMs. Across 10 simulations, the researchers found that the LLMs consistently formed links with nodes that shared common neighbors.
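One simple way to check for triadic closure in a simulated network is to compare how often pairs with a mutual friend end up connected versus pairs without one. The sketch below assumes the same adjacency-dictionary representation used earlier; it is an illustration, not the study's analysis code.

```python
from itertools import combinations

def closure_rates(adj):
    """Compare link rates for pairs with vs. without a mutual friend."""
    linked_shared = linked_none = shared = none = 0
    for u, v in combinations(adj, 2):
        has_mutual = bool((adj[u] & adj[v]) - {u, v})
        connected = v in adj[u]
        if has_mutual:
            shared += 1
            linked_shared += connected
        else:
            none += 1
            linked_none += connected
    return (linked_shared / max(shared, 1),
            linked_none / max(none, 1))

# Triadic closure shows up as the first rate exceeding the second.
```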

A third study the researchers conducted looked at homophily. "This is the tendency of people to connect with others who are similar to them," explains Papachristou. Shared interests, hobbies, beliefs, or demographics are among the characteristics that make people more likely to connect with one another.

For this experiment, the researchers randomly assigned each LLM agent one of three hobbies, favorite colors, or locations within the U.S. Each LLM was then tasked with connecting with up to five other agents that might or might not share these hobbies, color preferences, or locales. Across all simulations, the LLMs exhibited homophily. When the researchers threw in an additional attribute, lucky numbers, it did not drive as many homophilic connections; hobbies and favorite colors had an influence, and overall, similarity was the strongest predictor of which connections were made.
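A rough homophily check can be sketched the same way: assign attributes before the agents link, bias link formation toward shared attributes, and compare the share of same-attribute edges with the chance baseline. The hobby names and probabilities below are invented for illustration.

```python
import random
from itertools import combinations

HOBBIES = ["hiking", "chess", "painting"]   # invented attribute values
rng = random.Random(0)

n = 200
hobby = {v: rng.choice(HOBBIES) for v in range(n)}  # assigned before linking

edges = set()
for u, v in combinations(range(n), 2):
    p = 0.06 if hobby[u] == hobby[v] else 0.02      # mild same-hobby bias
    if rng.random() < p:
        edges.add((u, v))

same = sum(hobby[u] == hobby[v] for u, v in edges)
print("same-hobby edge share:", same / len(edges))  # well above the 1/3 baseline
```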

Alongside these individual-level drivers of network connection decisions, the team examined higher-level network characteristics, including community structure and the small-world effect. "Communities usually arise due to homophily and triadic closure," Papachristou says. "The small-world phenomenon says that between you and someone else, there are only a few intermediate friends." Pointing to the six-degrees-of-separation theory and Stanley Milgram's "small-world" experiment, he adds, "With anyone on Facebook, you are at most six steps away. The Watts-Strogatz model, a well-known network science model, illustrates this concept. We tried to see if we could replicate the results, and we also found evidence of this theory."
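The Watts-Strogatz model itself is easy to explore with the networkx library: rewiring a small fraction of the edges in a ring lattice keeps clustering high while collapsing average path lengths, producing the short chains Milgram observed. A minimal sketch, with illustrative parameters:

```python
import networkx as nx

# Start from a ring lattice of 1,000 nodes (each linked to 10 neighbors)
# and rewire 10% of the edges; the result stays clustered but compact.
G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

print("average shortest path:", nx.average_shortest_path_length(G))
print("average clustering:", nx.average_clustering(G))
```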

Of men and machines

To validate their findings, the research team conducted the same experiments using real-world data from three sources: a collection of Facebook accounts from 100 colleges and universities, a year's worth of nationwide call records in Andorra from July 2015 to June 2016, and a company’s dataset of employee calls and texts.

"We gave the LLM agents personas based on real data and asked them to make connections," Papachristou says. "We wanted to see if the effects we found before arose and with what priority." The real-world data reflected the same results as earlier experiments. Questions about connection priorities indicated that LLMs prioritized homophily, followed by triadic closure and preferential attachment.

Finally, the researchers administered the same survey to both human and LLM participants and again asked about the priorities that drive connection decisions. This experiment examined network-formation choices in two social contexts: a social network where participants assumed the role of college students and a company network where survey takers assumed the role of employees making connections within the company.

In this final experiment, humans made the same choices LLMs made, and they ranked their reasons for those choices almost identically. In social networks, both humans and LLMs tended to favor homophily. In company networks, by contrast, both groups tended to connect with people on higher rungs of the corporate ladder, resulting in heterophily.

One difference between LLMs and humans stood out: LLMs were more consistent than humans in their individual decisions. "This could be concerning for language models because you don't always want that much homogeneity in behavior. The spectrum of human preferences is generally more diverse," Papachristou says.

Still, he sees enormous potential for researchers — both academic and corporate — in this study's findings. "Think about LLMs for use in 'in silico' experiments," Papachristou says. "Instead of experimenting in a lab with real people, you do it with 'artificial' people."

He views LLMs as a sandbox for testing algorithms, sparing researchers the lengthy process of testing with real study participants. He adds that it's essential to acknowledge the potential drawbacks of LLMs for specific types of research. For instance, the consistency of LLMs could be helpful for problem-solving but not for market research or consumer behavior studies. "Maybe in that case we'd need to personalize the LLMs a little more," Papachristou says.

Ultimately, he believes LLMs may someday enable companies to test algorithms, offerings, and policies with simulated individuals rather than real ones, easing concerns about cost, time, and privacy. "There's a tremendous opportunity for companies testing out their algorithms without having to test them in the real world," he adds. Recommendation algorithms, hiring algorithms, routing or allocation algorithms — the list goes on and on.

"Our results are pretty encouraging," Papachristou says. "LLMs are the best proxy for human behavior that we have so far."
