

Mitigating Information Disorder Risks Requires Whole of Society Approach

Misinformation and disinformation are problems that cannot be completely solved, but their risks can be mitigated, according to the Aspen Institute’s Commission on Information Disorder.

In its sweeping new report published Monday, the commissioners wrote that “information disorder is a crisis that exacerbates all other crises. When bad information becomes as prevalent, persuasive, and persistent as good information, it creates a chain reaction of harm.”

The report noted that elevating truthful content is not enough to combat bad information, because there are psychological and technological incentives to promote and spread content that drives an emotional reaction—“a culture of enragement, not engagement,” said journalist Katie Couric, a co-chair of the commission and one of the report’s authors, during a panel discussion on its findings.

The commission members analyzed the multidimensional issues of information disorder—which the report defined as the broad societal challenges associated with misinformation, disinformation, and malinformation—over the course of six months, looking primarily at U.S. issues and interventions.

“In a free society, a certain amount of misinformation will always exist; our task is not to eradicate every half-truth or willful misrepresentation of the facts—a task that is not only impossible but even undesirable in a free society,” the report said. “We sought instead to identify structural interventions that will illuminate the problem of information disorder, we explored the financial motivations that incentivize both platforms and bad actors, and we looked to identify other interventions that will mitigate some of the greatest harms caused by relatively narrow classes of mis- and disinformation—threats like those to public health, election integrity, and the targeting of underrepresented communities.”

The commission focused on three priorities: increasing transparency and understanding, building trust, and reducing harms. Along those themes, commissioners produced 15 non-partisan recommendations designed to address bad-faith actors and consider disproportionate harms to different communities.

For transparency’s sake, the commission recommends that platforms allow researchers and journalists more leeway to examine their content moderation policies, ad data, viral post activity and sources, and more. Currently, platforms can use their terms of service agreements to block attempts to research political ads or other activity. Some of these changes will likely require acts of Congress to establish “safe harbor” rules for researchers.

When it comes to trust, the recommendations are a little fuzzier. For instance, the commissioners recommended steps to “improve U.S. election security and restore voter confidence with improved education, transparency, and resiliency,” and “endorse efforts that focus on exposing how historical and current imbalances of power, access, and equity are manufactured and propagated further with mis- and disinformation—and on promoting community-led solutions to forging social bonds.” This category also focused on the challenge of the media landscape itself: local journalism has declined significantly in recent years, even though it remains most Americans’ source for news. The commission recommended investing in local media and diversifying newsrooms to bring in different perspectives.

Like the notion of building trust as a whole, these recommendations are significantly easier said than done. But the commission wrote that good-faith efforts from platforms, regulators, and consumers alike to push them forward can make progress on long-term goals around combating information disorder.

The commission’s six recommendations for reducing harms are a little more direct. The report recommended that the U.S. federal government establish a comprehensive strategic approach to countering disinformation, including a centralized national response strategy. It also advocated for the creation of an independent organization that develops systematic misinformation countermeasures.

“This is a whole-of-society problem,” said Chris Krebs, former director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and one of the co-commissioners. “Yes, civil society has a role, media has a role, the private sector has a role, academia, and of course government has a role here. Really what we need more than anything, I believe, is clarity of mission, clarity of purpose, as well as a clear, integrated, holistic strategy.”

Krebs added that “We’re not asking the government to step up and certainly not establish some ministry of truth or ministry of propaganda or speech, but the government has a role just like in any other national security related issue. And obviously the way our adversaries are exploiting seams and division and sowing doubt and mistrust in public institutions, we have an emerging national security issue. So, I think it’s incumbent upon any administration—the prior, the current, or any future administrations—to be thinking about information disorder and disinformation strategically.”

Regarding superspreaders—actors who frequently create or amplify mis- and disinformation—the report recommended holding them to account through platform moderation, post assessments, and potential removal. The commission provided multiple framework options for escalating responses, including removing incentives for spreading viral information—such as demonetizing those posts.

The report also recommended two changes to Section 230 of the Communications Decency Act of 1996: withdraw platform immunity for content that is promoted through paid advertising and post promotion, and remove immunity as it relates to the implementation of product features, recommendation engines, and design. Section 230 was originally intended to provide immunity to platforms that carry personal speech and to encourage responsible moderation, but since its passage it has also been used to shield companies from liability for non-speech practices.

“Nearly 25 years later, it’s a good time to update the regulations in ways that preserve what is good about online collaboration and user-generated content, but address some of the unforeseen developments,” the report said.
