Tuesday, October 27, 2020


Let's say you want to watch a news clip about Confederate monuments. You search YouTube and choose a video from what appears to be a randomly generated list of results. When the video ends, YouTube autoplays another video and recommends dozens more. They're likely the sort of thing you might actually watch, because that list is generated by algorithms that process your YouTube viewing history; the videos, tweets, and posts you've liked on social media; and your behavior elsewhere on the web. Seems helpful, right? What's the problem?

The problem, explains Brian Ekdale, associate professor in the UI School of Journalism and Mass Communication and PI on a new $1 million grant from the Minerva Research Initiative, is that “social media algorithms tend to reinforce our personal biases. There’s a big difference between scrolling through a news feed online—which is targeted at you, specifically—and picking up a copy of the Des Moines Register, where there’s a mix of content that both reinforces your preexisting beliefs and challenges them.” If we don’t encounter content that challenges our biases, our beliefs are likely to become entrenched and, depending on our own unique psychological and cultural makeup, may become extremist. We might perform symbolic violence against those who hold different opinions, like squabbling online and using discriminatory language to disparage other groups—or we might even end up supporting or performing physically violent acts, from destroying property and stealing political yard signs to kidnapping elected officials.  

But which people or communities are most susceptible to radicalization, and how do social media algorithms take advantage of this vulnerability? "There isn't much research that connects tech variables to extremist tendencies," says Ekdale. That's where Ekdale and his co-investigators come in. Funded by the Minerva Research Initiative, the social science research arm of the Department of Defense, Ekdale; Tim Havens (Communication Studies and African American Studies, UI); Rishab Nithyanand (Computer Science, UI); Andrew High (Communication Studies, Pennsylvania State University); and Raven Maragh-Lloyd (Communication Studies, Gonzaga University) will use qualitative, quantitative, and computational research methodologies to investigate the psychological attributes that make a person vulnerable to radicalization and how U.S. users of various social media platforms respond to personalization and radical content. Ultimately, the group plans to develop techniques for predicting which online communities are likely to adopt extremist ideologies.

Last-Minute Windfall

Brian Ekdale

Shortly after Ekdale submitted his grant application to the Minerva Research Initiative, the Trump Administration proposed eliminating all Department of Defense social science research funding. Assuming his project wouldn’t be funded, Ekdale began submitting his application elsewhere, only to hear seven months later that Congress opted to preserve Minerva’s funding and that his own project was one of four that would likely be funded. (Read the press release.)

"I went from forgetting about the application to utter joy in the span of an hour," laughs Ekdale. "Luckily, Congress recognizes that social science research—understanding social phenomena, the ways people behave, the ways they communicate with each other and with machines—affects national security." In addition to the four new projects, Minerva has in recent years funded projects on Russian disinformation and propaganda campaigns; Africa's "youth bulge" and its effect on national security; and forecasting armies' will to fight. (For more, visit Minerva's blog, The Owl in the Olive Tree.)

“Black Box” Algorithms

Though you can find general information online about how social media personalization algorithms work, you won't find the algorithms themselves, says Ekdale; they're proprietary software. No one outside a coterie of software engineers can view them—not even researchers. "And even if we had access to the code, it wouldn't be that helpful," he says. "Some of these algorithms are constantly refined through machine learning, so they're complex and not at all static." More useful are algorithm audits, tests that show how users experience algorithmic personalization while browsing the web and using social media. So far, Ekdale's group has conducted two audits to determine whether a person's web-browsing history affects which articles are displayed when they visit Google News. (It does.)
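The group's audit tooling isn't described here, but the measurement step of such an audit can be sketched in miniature: collect the headlines Google News serves to a browser profile that has accumulated a particular browsing history, collect the headlines served to a fresh control profile, and quantify how much the two lists diverge. In the short Python sketch below, the headline lists are placeholders and the function name is invented for illustration; this is not the team's actual code.

    # Minimal sketch of the measurement step of an algorithm audit: compare the
    # headlines served to a profile with an accumulated browsing history against
    # those served to a fresh, history-free control profile. The headline lists
    # are hypothetical placeholders; a real audit would collect them with
    # automated browsers run under controlled conditions.

    def jaccard_similarity(a, b):
        """Share of headlines common to both result lists (0 = disjoint, 1 = identical)."""
        set_a, set_b = set(a), set(b)
        if not set_a and not set_b:
            return 1.0
        return len(set_a & set_b) / len(set_a | set_b)

    # Hypothetical results returned to each profile on the same day.
    history_profile = ["Headline A", "Headline B", "Headline C", "Headline D"]
    control_profile = ["Headline C", "Headline E", "Headline F", "Headline G"]

    overlap = jaccard_similarity(history_profile, control_profile)
    personalized_only = set(history_profile) - set(control_profile)

    print(f"Overlap between profiles: {overlap:.2f}")
    print(f"Shown only to the history-laden profile: {sorted(personalized_only)}")

A low overlap between otherwise identical profiles is the kind of evidence that lets auditors conclude personalization is at work without ever seeing the underlying code.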

Social media sites use the same basic strategy, says Ekdale. “Social media companies are working for their own commercial good instead of the public good,” so they want to keep you on their sites for as long as possible—and that means managing your experience, showing you only what they “think” you want to see, from people they “think” you care most about, to posts containing ideas they “think” you’re likely to agree with. If there were no algorithms to prioritize content, you’d be shown all of the posts being generated by all of your connections at any one time and would likely be overwhelmed and lose interest.

And what do social media algorithms "think" you want to see? Content that matches the type of content you've read, looked at, or searched for before—content that, in effect, reinforces your personal biases. That behavioral data is collected from a wide variety of sources, especially if you use your Google or Facebook credentials to log in to other sites. While this algorithmic targeting can be surprising, even funny—"Facebook thinks I'm interested in hot dogs for some reason," Ekdale laughs—it has serious implications for democracy, domestic peace, and national security.
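The platforms' real ranking systems are proprietary and, as Ekdale notes, constantly retrained, but the basic dynamic he describes can be illustrated with a toy Python sketch: score each candidate post by how much the user has engaged with its topics in the past, so that familiar viewpoints rise to the top of the feed. The topic labels, engagement counts, and scoring rule below are invented for illustration; this is not any platform's actual algorithm.

    from collections import Counter

    # Toy illustration (not any platform's real algorithm): rank candidate posts
    # by how much the user has engaged with their topics in the past. Posts on
    # familiar topics float to the top, which is the bias-reinforcing dynamic
    # described above.

    # Hypothetical engagement history: topic -> number of past likes/clicks/views.
    engagement_history = Counter({"confederate_monuments": 12, "hot_dogs": 3, "local_news": 1})

    candidate_posts = [
        {"id": 1, "topics": ["confederate_monuments"]},
        {"id": 2, "topics": ["city_budget", "local_news"]},
        {"id": 3, "topics": ["hot_dogs"]},
        {"id": 4, "topics": ["opposing_viewpoint", "confederate_monuments"]},
    ]

    def predicted_interest(post):
        """Sum of the user's past engagement with each of the post's topics."""
        return sum(engagement_history[topic] for topic in post["topics"])

    # Without ranking, the user would see every post in arrival order (the
    # "firehose"); with ranking, familiar topics dominate the top of the feed.
    ranked_feed = sorted(candidate_posts, key=predicted_interest, reverse=True)
    for post in ranked_feed:
        print(post["id"], predicted_interest(post))

Even in this tiny example, a post tagged only with unfamiliar topics never outranks one on a topic the user already engages with; that entrenchment is what Ekdale's group wants to measure at scale.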

“What are the consequences of social media companies making the decisions about what and what not to show you?” Ekdale and his group are asking. “When you use social media, what are you not seeing? Points of view that might differ from your own? Content from particular religious, racial, ethnic, or other identity groups that differ from your own? How does this affect how you view the world and others?” These are especially timely and important questions, given our ultra-polarized political climate.

Timeline

Right now, the group is in the design stage. In early 2021, they plan to survey 1,500 people about their uses of technology and their political views on a handful of issues. They will then recruit from those respondents approximately 150 who report being very politically engaged and install on their devices a plug-in that will track their online behavior. In early 2022, the 150 will complete the initial survey again, and the team will determine how much the subjects' political opinions changed during the year, if at all. The researchers will then conduct in-depth interviews with a subset of the respondents. If they find that a subject's political opinions have become radical, the team will investigate how that change occurred by reviewing the plug-in's record of the person's web browsing history as well as their responses to the two surveys and the interview questions.
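The grant's survey instruments aren't described beyond this outline, but the core wave-to-wave comparison might look something like the minimal Python sketch below, which assumes each participant rates the same issues on a 1-to-7 scale in both waves; the participant IDs, issue names, and flagging threshold are hypothetical.

    # Hypothetical sketch of the wave-one vs. wave-two comparison described above:
    # participants rate the same issues in early 2021 and again in early 2022,
    # and the largest shifts are flagged for follow-up interviews. The 1-7 scale,
    # issue names, and threshold are assumptions, not the study's actual design.

    wave_1 = {  # participant_id -> {issue: rating on a 1-7 scale}
        "p001": {"issue_a": 4, "issue_b": 3},
        "p002": {"issue_a": 2, "issue_b": 6},
    }
    wave_2 = {
        "p001": {"issue_a": 7, "issue_b": 1},
        "p002": {"issue_a": 2, "issue_b": 6},
    }

    def total_shift(before, after):
        """Sum of absolute rating changes across all issues for one participant."""
        return sum(abs(after[issue] - before[issue]) for issue in before)

    SHIFT_THRESHOLD = 4  # hypothetical cutoff for "opinions changed substantially"

    for pid in wave_1:
        shift = total_shift(wave_1[pid], wave_2[pid])
        status = "flag for interview" if shift >= SHIFT_THRESHOLD else "stable"
        print(pid, shift, status)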

“We don’t think it’s wrong to have passionate views,” explains Ekdale. “A passionate view doesn’t make someone an extremist or a radical. It’s only when those views are accompanied by a dehumanizing of the other or an intent to carry out violent action against people who disagree that someone becomes ‘radical,’ at least for the purposes of our study.”

The group's ultimate goal, he says, is to develop software that can identify and even predict radicalization on social media, whether it's radicalization of an individual or of an online community like a subreddit or Facebook group. "We want to know what the transition into fringe behavior on a social media platform looks like," he says, "and what Phases 1, 2, and 3 of an online community's life look like. Is there a trajectory of radicalization?"

High Stakes

Right now, says Ekdale, U.S. law is set up so that social media companies—which insist that they're "tech" rather than media companies—can't be held legally accountable for what users post on their sites the way media companies can. Newspapers, for instance, are legally responsible for what appears in their pages and on their websites. But, says Ekdale, no outside body currently provides the ethical or legal oversight that would hold social media companies accountable. According to Section 230 of the Communications Decency Act of 1996, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Ekdale is hopeful that his research group can provide data to other researchers that can help determine what federal oversight could look like. “We want to be very mindful as we design these tools,” he notes. “We want to make sure that what we’re creating will produce more good than ill.”

The Evolution of a Reading Group

The Obermann Center was central to the origin of the research group, which comprises faculty from three universities and three different disciplines. In 2015, Tim Havens convened the Algorithms and Social Media Obermann Working Group after becoming interested in Netflix’s personalization algorithm. This first group was informal—“a let’s-get-together-around-Obermann’s-big-wooden-table-and-talk-about-ideas group”—and very interdisciplinary, including faculty from communication studies, journalism, math, and computer science. The group read articles from diverse fields, from theoretical math to cultural studies. “The group really brought together multiple worlds that don’t talk to each other very well and exist almost independently of each other,” Ekdale says. Eventually a half dozen participants became the Personalization Algorithms and Bias in Social Media Working Group, directed by Havens and Zubair Shafiq, then a member of the UI Computer Science faculty. The group included PIs High, then a member of the UI Communication Studies faculty, and Maragh-Lloyd, then a PhD candidate in that department.

More recently, with Nithyanand replacing Shafiq, the group began designing its current project. "I'll be honest: it took much longer than it would have if the group had been composed only of my journalism colleagues," says Ekdale. "There were a lot of translation issues, making sure that the communication and journalism members understood how the computer scientists thought about the subject and vice versa." Because of those difficulties, though, Ekdale says he now has a much deeper understanding of the computational sciences, and he's noticed that Nithyanand and his students have started asking questions about media and social science in the course of their computer science work.

Indeed, says Nithyanand: "For too long, computer scientists have been working in isolation on technology that impacts our socio-political well-being. Recent developments have shown how this can result in unintended negative harms. My students and I have been fortunate to be able to work with Brian and other media scholars via the Obermann Center to do research that does not propagate such harms. I'm hopeful that this collaboration leaves a lasting impact on the thought processes of my research students as they go out into the real world and start building products that impact us all."

Working Group meeting virtually

The working group currently comprises the grant's PIs and UI graduate students Ryan Stoldt (Journalism and Mass Communication) and John Thiede (Computer Science). "We really like working together," notes Ekdale. "One of the highlights of my week has been jumping onto a Skype call with these folks. We've gone through professional and personal hardships together that have created a bond that is much deeper than a typical research collaboration." Theirs is a joyful interdisciplinarity, one that respectfully checks and challenges members' perspectives, encourages new ideas, and ensures that the members continue to grow.

Further Reading