
Experts explore Twitter and its role in public conversations

University of Miami communication specialists analyze the susceptibility of Twitter and other social media platforms to society’s ills—racism, rage, and violence—and what might be done to improve digital media as a resource.
[Photo: An iPhone screen displaying Donald Trump's Twitter feed]

When Twitter founder Jack Dorsey launched the online social networking site in March 2006, his team envisioned a noble purpose for the enterprise: to serve the public conversation and stimulate shared learning and solutions for some of the world’s most complex problems.

Yet today, far from being a hub for healthy exchange, Twitter seems to function more often as a lightning rod for vitriol and a mirror of the schisms in society. Twitter and other social media platforms and their executives have come under increasing criticism from both sides of the political spectrum: from progressives who clamor for the platforms to restrict the harassment, hate speech, and misinformation that proliferate, and from conservatives, including President Donald Trump, who claim a bias against conservative ideas.

Alyse Lancaster, associate professor in the University of Miami School of Communication and chair of the Department of Strategic Communication, teaches Social Media Strategies, a course in which debating the pros and cons of social media is fundamental.

“The primary advantage of social media sites is that they allow for the proliferation of all kinds of information,” Lancaster said. “It gives people the opportunity to see alternative points of view and challenge their preconceived notions about what they believe is right or wrong.”

“Unfortunately, that tends to be social media’s biggest negative as well,” she continued. “People feel a lot more comfortable saying things on social media than they would saying them in person, and this has led folks who are so inclined to feel safe making comments that are racist, bigoted, and even bordering on inciting violence. This has only exacerbated the deep divide that already exists in America.”

Karin Wilkins, dean of the School of Communication and a leading scholar on global communication and political engagement, pointed to the inequities of access to these platforms that inhibit a more robust public debate.

“The idea that these digital media platforms offer an opportunity for public discourse and civic engagement fits with the idealism in our culture,” Wilkins said. “Not all of us have access though, or the means and familiarity to leverage this resource to its fullest advantage. We need to begin then with the recognition that there are serious inequities that prohibit this from being a shared and equitable space for debate.”

The dean urged the companies to do more to protect the health of the discourse and debate.

“The policies of these organizations have a role to play in the production and distribution of content, recognizing the particularities of the algorithms that privilege some content over others,” she said. “There is indeed potential for online discussion to accelerate problematic and discriminatory sentiments and accentuate hate speech, whether from politicians or citizens. We need to encourage a balance between creating spaces for authentic, not manipulated, voices and debate against stimulating a violent and abusive rhetoric.”

The dean recognized, though, that financial interests inhibit the companies from doing more to monitor their users.

“Media platforms exist within organizational structures, with financial imperatives,” Wilkins noted. “When media companies serve a profit incentive, as many of the digital media companies do, we need to recognize the importance of commercial interests in structuring the possibilities for engagement.” She added that regulatory policies also have an important role in creating these structures.

Section 230 of the Communications Decency Act, enacted in 1996, has been cited as a critical regulation governing these social media platforms.

The legislation essentially provides broad protection for online intermediaries—platforms such as Twitter, Facebook, and others—for the third-party content that they host and publish.

It was specifically this act that Trump targeted when he signed an executive order on May 28.

The president is well-known for using social media—especially Twitter—to disseminate his opinions and policy inclinations.

Many say the platforms’ hands-off treatment of the president and other world leaders has granted them far too much leeway—often in violation of the platforms’ own policies.

In response to a couple of the president’s recent tweets alleging voter fraud through voting by mail, Twitter took the step of adding fact-checking labels to the tweets on May 26.

Angered by the move, the president signed the order urging that under certain conditions, “websites of any size should lose their protections under Section 230,” and that the Federal Trade Commission should investigate the sites “for deceptive advertising based on their terms of service,” with other possible consequences, as reported by The Verge.

University of Miami School of Law professor Mary Anne Franks, in an article in the May 30 issue of The Atlantic, criticized the president’s “utter incoherence” regarding his spat with the social media site.

Franks pointed to a range of ironies and irregularities in the president’s decision to advance the executive order, but said “the most profound” is that “[the order] doesn’t address the core problem with the federal law that scholars and advocates have highlighted for years—namely, how its immunity provision not only fails to encourage online intermediaries to address harmful content but rewards them for indifference.”

“Trump’s order does not acknowledge the ways that this immunity has allowed online intermediaries to ignore, encourage, and profit from abuses—harassment, privacy invasion, deadly misinformation—directed at vulnerable groups, especially women and people of color,” Franks wrote in her article.

In a June 2019 TED conversation titled “How Twitter Needs to Change,” Twitter founder Dorsey shared his concern for how far the platform has strayed from its original intent.

“It’s a pretty terrible situation when you want to learn something about the world and you end up spending the majority of your time reporting abuse and receiving harassment,” Dorsey said. An Amnesty International report found that women of color are the most victimized and harassed on Twitter and other social media sites.  

He shared that machine learning has relieved some of the burden from the victim to report alleged abuses or harassment and that Twitter, in its leadership and on its teams, has been intentional about ensuring representation from the communities it serves in order to “build empathy on what people are experiencing.” 

Lancaster, whose doctoral research focused in part on health communications and on how mass media can be used to promote healthy behaviors and discourage unhealthy ones, said that hate speech, as much as we may dislike it, is legal.

“We teach our students that the best way to combat hate speech is with positive speech in the other direction,” Lancaster noted. “Social media sites have allowed for this exchange of ideas to happen.”

Wilkins highlighted that “a critical part of our communication education involves critical analysis of the sources, texts, and structures of media, as well as informed engagement in the production and circulation of our own narratives.

“The future of civil and active political engagement depends on our ability to improve digital media as a resource, rather than surrendering as a regret,” Wilkins said.