Researchers Made a Social Media Platform Where Every User Was AI. The Bots Ended Up at War

Social platforms like Facebook and X exacerbate political and social polarization, but they don’t create it. A recent study by researchers at the University of Amsterdam in the Netherlands placed AI chatbots in a bare-bones social media structure to see how they would interact and found that, even without the invisible hand of a recommendation algorithm, the bots tended to organize themselves around their pre-assigned affiliations and self-sort into echo chambers.

The study, a preprint of which was recently published on arXiv, took 500 AI chatbots powered by OpenAI’s large language model GPT-4o mini and assigned each one a specific persona. The bots were then turned loose on a simple social media platform with no ads and no discovery or recommendation algorithms serving posts into users’ feeds, and were tasked with interacting with one another and with the content on the platform. Across five experiments, each involving 10,000 actions, the bots tended to follow other users who shared their political beliefs. The study also found that the users who posted the most partisan content attracted the most followers and reposts.
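To make the setup concrete, here is a minimal sketch of that kind of agent-based simulation in Python. It is not the researchers’ code: the binary leanings, the `homophily` heuristic, and the follow-only action loop are all illustrative stand-ins for the GPT-4o mini personas and the model-driven decisions the study actually used.

```python
import random

# Illustrative stand-in for the study's setup: each "user" gets a fixed
# political leaning instead of a full LLM-driven persona, and a simple
# homophily heuristic replaces the model's decision about whom to follow.
LEANINGS = ["left", "right"]

def make_agents(n=500):
    return [{"id": i, "leaning": random.choice(LEANINGS), "following": set()}
            for i in range(n)]

def follow_step(agents, homophily=0.8):
    """One action: a random agent follows someone, preferring a user who
    shares its leaning with probability `homophily` (a hypothetical bias
    term, not anything taken from the paper)."""
    agent = random.choice(agents)
    if random.random() < homophily:
        pool = [a for a in agents if a["leaning"] == agent["leaning"] and a is not agent]
    else:
        pool = [a for a in agents if a is not agent]
    pool = pool or [a for a in agents if a is not agent]  # guard against an empty pool
    agent["following"].add(random.choice(pool)["id"])

def cross_party_share(agents):
    """Fraction of follow edges that cross political lines."""
    leaning = {a["id"]: a["leaning"] for a in agents}
    edges = [(a["leaning"], leaning[f]) for a in agents for f in a["following"]]
    return sum(src != dst for src, dst in edges) / len(edges) if edges else 0.0

if __name__ == "__main__":
    random.seed(0)
    agents = make_agents()
    for _ in range(10_000):  # mirrors the 10,000 actions per experiment
        follow_step(agents)
    print(f"cross-party follow share: {cross_party_share(agents):.1%}")
```

Run repeatedly with different `homophily` values, a loop like this makes the echo-chamber effect easy to see: the higher the bias toward like-minded accounts, the smaller the share of follow edges that cross political lines.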

The findings don’t exactly speak well of us, considering the chatbots were designed to replicate how humans interact. Of course, none of this is truly independent of the algorithm’s influence. The bots were trained on human interaction that has been shaped for decades by how we behave online in an algorithm-dominated world. They are emulating the already poison-brained versions of ourselves, and it’s not clear how we come back from that.

To combat the self-selecting polarization, the researchers tried a handful of interventions, including offering a chronological feed, devaluing viral content, hiding follower and repost counts, hiding user profiles, and amplifying opposing views. (The researchers had success with that last one in a previous study, where it produced high engagement and low toxicity on a simulated social platform.) None of the interventions made much difference, with no single one shifting the engagement given to partisan accounts by more than 6%. In the simulation that hid user bios, the partisan divide actually got worse, and extreme posts drew even more attention.
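The article doesn’t reproduce the paper’s measurement code, but the bookkeeping behind a claim like “no more than a 6% shift” is easy to sketch: run the same simulation with and without an intervention and compare what share of engagement the most partisan accounts capture. Everything below, from the `partisanship` scores to the toy engagement numbers, is hypothetical and only shows how such a comparison would be computed.

```python
import random

def engagement_share_to_partisans(engagement, partisanship, top_frac=0.1):
    """Share of total engagement captured by the most partisan accounts.
    Both inputs map account id -> a score; the scores here are toy
    stand-ins for whatever the study actually measured."""
    ranked = sorted(partisanship, key=partisanship.get, reverse=True)
    top = set(ranked[: max(1, int(len(ranked) * top_frac))])
    total = sum(engagement.values())
    return sum(engagement[a] for a in top) / total if total else 0.0

if __name__ == "__main__":
    random.seed(1)
    accounts = range(500)
    partisanship = {a: random.random() for a in accounts}  # toy scores
    # Toy engagement skewed toward partisan accounts, echoing the pattern
    # the study observed...
    baseline = {a: random.expovariate(1.0) * (1 + 2.0 * partisanship[a])
                for a in accounts}
    # ...and a hypothetical intervention (say, hiding follower counts)
    # that only mildly dampens that skew:
    treated = {a: random.expovariate(1.0) * (1 + 1.8 * partisanship[a])
               for a in accounts}
    b = engagement_share_to_partisans(baseline, partisanship)
    t = engagement_share_to_partisans(treated, partisanship)
    print(f"shift in partisan engagement share: {abs(b - t) * 100:.1f} points")
```

Under a comparison like this, an intervention that moves the partisan accounts’ engagement share by only a few percentage points, as the study reports, barely registers.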

It seems social media as a structure may simply be untenable for humans to navigate without reinforcing our worst instincts and behaviors. Social media is a funhouse mirror for humanity; it reflects us, but in the most distorted of ways. It’s not clear there are lenses strong enough to correct how we see each other online.
