Main

To the extent that a “wealth of information creates a poverty of attention” (p. 41)1, people have never been as cognitively impoverished as they are today. Major web platforms such as Google and Facebook serve as hubs, distributors and curators2; their algorithms are indispensable for navigating the vast digital landscape and for enabling bottom-up participation in the production and distribution of information. Technology companies exploit this all-important role in pursuit of the most precious resource in the online marketplace: human attention. Employing algorithms that learn people’s behavioural patterns3,4,5,6, such companies target their users with advertisements and design users’ information and choice environments7. The relationship between platforms and people is profoundly asymmetric: platforms have deep knowledge of users’ behaviour, whereas users know little about how their data is collected, how it is exploited for commercial or political purposes, or how it and the data of others are used to shape their online experience.

These asymmetries in Big Tech’s business model have created an opaque information ecology that undermines not only user autonomy but also the transparent exchange on which democratic societies are built8,9. Several problematic social phenomena pervade the internet, such as the spread of false information10,11,12,13,14—which includes disinformation (intentionally fabricated falsehoods) and misinformation (falsehoods created without intent, for example, poorly researched content or biased reporting)—or attitudinal and emotional polarization15,16 (for example, polarization of elites17, partisan sorting18 and polarization with respect to controversial topics19,20). Some disinformation and misinformation involve public health and safety; some of it undermines processes of self-governance.

We argue that the behavioural sciences should play a key role in informing and designing systematic responses to such threats. The role of behavioural science is not only to advance active scientific debates on the causes and reach of false information21,22, but also to inform the design of effective responses to these threats.

Why behavioural sciences are crucial for shaping the online ecosystem

More than any traditional media, online media permit and encourage active behaviours31 such as information search, interaction and choice. These behaviours are highly contingent on environmental and social structures and cues32. Even seemingly minor aspects of the design of digital environments can shape individual actions and scale up to notable changes in collective behaviours. For instance, curtailing the number of times a message can be forwarded on WhatsApp (thereby slowing large cascades of messages) may have been a successful response to the spread of misinformation in Brazil and India33.

To a substantial degree, social media and search engines have taken on the role of intermediary gatekeepers between readers and publishers. Today, more than half (55%) of global internet users turn to either social media or search engines to access news articles2. One implication of this seismic shift is that a small number of global corporations and Silicon Valley CEOs have significant responsibility for curating the general population’s information34 and, by implication, for interpreting discussions of major policy questions and protecting civic freedoms. Facebook’s recent decision to declare politicians’ ads off-limits to its third-party fact checkers illustrates how corporate decisions can affect citizens’ information ecology and the interpretation of fundamental rights, such as freedom of speech. The current situation, in which political content and news diets are curated by opaque and largely unaccountable third parties, is considered unacceptable by a majority of the public35,36, who continue to be concerned about their ability to discern online what is true and what is false2 and who rate accuracy as a very important attribute for social media sharing42.

These observations point to three problems with leaving the response to platform self-regulation or to content regulation alone. The first is that corporate curation is neither transparent nor accountable, which highlights the need for industry-independent behavioural research to ensure transparency for users and to avoid opportunistic responses by those who are regulated. The second is that the speed and adaptability of technology and its users exceed those of regulation that directly targets online content: if uninformed by behavioural science, any regulation that addresses only the symptoms and not the actual human–platform interaction could be quickly circumvented. The third is the risk of censorship inherent in regulations that target content; the behavioural sciences can reduce that risk as well. Rather than deleting or flagging posts on the basis of judgements about their content, we focus here on how to redesign digital environments so as to provide a better sense of context and to encourage and empower people to make critical decisions for themselves43,44,45.

Our aim is to enlist two streams of research that illustrate the promise of behavioural sciences. The first examines the informational cues that are available online31 and asks which can help users gauge the epistemic quality of content or the trustworthiness of the social context from which it originated. The second stream concerns the use of meaningful and predictive cues in behavioural interventions. Interventions can take the form of nudging46, which alters the environment or choice architecture so as to draw users’ attention to these cues, or boosting47, which teaches users to search for them on their own, thereby helping them become more resistant to false information and manipulation, especially but not only in the long run.

Digital cues and behavioural interventions for human-centred online environments

The online world has the potential to provide digital cues that can help people assess the epistemic quality of content48,49,50—the potential of self-contained units of information (here we focus on online articles and social media posts) to contribute to true beliefs, knowledge and understanding—and the public’s attitudes to societal issues51,52. We classify those cues as endogenous or exogenous53.

Endogenous cues refer to the content itself, such as the plot or the actors and their relations. Modern search engines use natural-language-processing tools that analyse content54. Such tools have considerable virtues and promise, but current results rarely afford nuanced interpretations55. For example, these methods cannot reliably distinguish between facts and opinions, nor can they detect irony, humour or sarcasm56. They also have difficulty differentiating between extremist content and counter-extremist messages57, because both types of messages tend to be tagged with similar keywords. A more general shortcoming of current endogenous cues of epistemic quality is that their evaluation requires background knowledge of the issue in question, which often makes them non-transparent and potentially prone to abuse for censorship purposes.
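To make the keyword-overlap limitation concrete, the toy example below is a minimal sketch with invented keywords and texts; it is deliberately far simpler than real natural-language-processing systems and is not any platform’s actual classifier, but it illustrates why content that mentions extremism and content that counters it can be flagged alike.

```python
# Minimal, hypothetical sketch: a naive keyword-based filter cannot tell
# extremist content from counter-extremist messaging, because both share
# the same vocabulary. Keywords and example texts are invented for illustration.

FLAGGED_KEYWORDS = {"radical", "extremism", "recruitment"}

def keyword_flag(text: str) -> bool:
    """Flag a text if it contains any keyword from the watch list."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAGGED_KEYWORDS)

extremist_post = "Join our radical movement - recruitment starts now!"
counter_message = "How to recognize extremism and resist online recruitment."

# Both texts trigger the same flag, illustrating why endogenous (content-based)
# cues alone are hard to interpret without background knowledge.
print(keyword_flag(extremist_post))   # True
print(keyword_flag(counter_message))  # True
```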

By contrast, exogenous cues are easier to harness as indicators of epistemic quality. They refer to the context of information rather than the content, are relatively easy to quantify and can be interpreted intuitively. A famous example of the use of exogenous cues is Google’s PageRank algorithm, which takes centrality as a key indicator of quality. Well-connected websites appear higher up in search results, irrespective of their content. Exogenous cues can indicate how well a piece of information is embedded in existing knowledge or the public discourse.
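As a concrete illustration of an exogenous cue, the following toy implementation ranks sites by PageRank-style centrality. It is a textbook power-iteration sketch over an invented link graph, not Google’s production algorithm, and is meant only to show how a score can be derived from the context (the link structure) rather than the content.

```python
# Simplified sketch of centrality as an exogenous cue, in the spirit of PageRank.
# Textbook power iteration on a toy, invented link graph.

def pagerank(links: dict[str, list[str]], damping: float = 0.85, iterations: int = 50) -> dict[str, float]:
    """Compute PageRank-style scores from a site -> outgoing-links mapping."""
    sites = list(links)
    n = len(sites)
    rank = {s: 1.0 / n for s in sites}
    for _ in range(iterations):
        new_rank = {s: (1.0 - damping) / n for s in sites}
        for site, outgoing in links.items():
            targets = outgoing or sites          # dangling nodes spread their rank evenly
            share = damping * rank[site] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "well-connected.example": ["niche-blog.example", "news.example"],
    "news.example": ["well-connected.example"],
    "niche-blog.example": ["well-connected.example"],
}
# The heavily linked-to site scores highest, irrespective of its content.
print(pagerank(toy_web))
```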

From here on we focus on exogenous cues and how they can be enlisted by nudging46 and boosting47. Let us emphasize that a single measure will not reach everyone in a heterogeneous population with diverse motives and behaviours. We therefore propose a range of measures that differ in their scope and in the level of user engagement required. Nudging interventions shape behaviour primarily through the design of choice architectures and typically require little active user engagement. Boosting interventions, in contrast, focus on creating and promoting cognitive and motivational competences, either by directly targeting competences (for example, through external tools) or indirectly by enlisting the choice environment. They require some level of user engagement and motivation. Both nudging and boosting have been shown to be effective in various domains, including health58,59 and finances60. Recent empirical results from research on people’s ability to detect false news indicate that informational literacy can also be boosted61. Initial results also point to the effectiveness of simple nudging interventions that remind people to think about accuracy before sharing content103. Going a step further, adding prominent hyperlinks to vetted reference sources for important concepts in a text could encourage a reader to gain context by perusing multiple sources—a strategy used by professional fact checkers104.

Nudges can also communicate additional information about what others are doing, thereby invoking the steering power of descriptive social norms105. For instance, contextualizing the number of likes by expressing it relative to the total number of readers (for example, ‘4,287 of 1.5 million readers liked this article’) might counteract the false-consensus effects that a number presented without context (‘4,287 people liked this article’) may otherwise engender. Transparent numerical formats have already been shown to improve statistical literacy in the medical domain106. Similarly, displaying the total number of readers and their average reading time in relation to the potential total readership could help users evaluate the content’s epistemic quality: if only a tiny fraction of the potential readership has actually read an article, and most of those readers spent just a few seconds on it, the article might be clickbait. Many other cues, including ones that reach into the history of a piece of content, could be presented to promote epistemic value on social media. Figure 2a shows a nudging intervention that integrates several exogenous cues into a social media news feed.
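The snippet below sketches how such a contextualized presentation of engagement counts might be produced; the wording and function names are our own illustrative assumptions, not any platform’s interface or API.

```python
# Hedged sketch: presenting engagement counts relative to total reach,
# as suggested in the text, instead of as bare absolute numbers.

def contextualised_likes(likes: int, total_readers: int) -> str:
    """Express likes against the total readership rather than in isolation."""
    share = likes / total_readers
    return (f"{likes:,} of {total_readers / 1_000_000:.1f} million readers "
            f"liked this article ({share:.2%})")

def bare_likes(likes: int) -> str:
    return f"{likes:,} people liked this article"

print(bare_likes(4287))                       # 4,287 people liked this article
print(contextualised_likes(4287, 1_500_000))  # 4,287 of 1.5 million readers liked this article (0.29%)
```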

Fig. 2: Nudging interventions that modify online environments.

a, Examples of exogenous cues and how they could appear alongside a social media post. b, Example of a transparently organized news feed on social media. Types of content are clearly distinguished, sorting criteria and their values are shown with every post, and users can adjust weightings.

Similarly, users could be discouraged from sharing low-quality information, without resorting to censorship, by introducing ‘sludge’ or ‘friction’—for instance, by making the act of sharing slightly more effortful107. In this case, sharing low-quality content might require an additional mouse click to confirm in a pop-up warning message, which could also indicate which of the above cues are missing or have critical values.
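A minimal sketch of such a friction mechanism follows; the cue names, thresholds and warning text are hypothetical and serve only to illustrate the logic of adding one extra confirmation step when cues are missing.

```python
# Illustrative sketch only: deciding when to add 'friction' before sharing.
# The cue names are assumptions, not a platform's real data model.

REQUIRED_CUES = ("original_source", "references", "author_named")

def sharing_friction(post_cues: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return whether a confirmation pop-up is warranted and which cues are missing."""
    missing = [cue for cue in REQUIRED_CUES if not post_cues.get(cue, False)]
    return (len(missing) > 0, missing)

needs_popup, missing = sharing_friction({"original_source": True, "references": False})
if needs_popup:
    # A real interface would render a warning dialog here; we only print the message.
    print(f"Before sharing, note that this post lacks: {', '.join(missing)}. Share anyway?")
```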

Another type of nudge targets how content is arranged in browsers. The way a social media news feed sorts content is crucial in shaping how much attention is devoted to particular posts. Indeed, news feeds have become one of the most sophisticated algorithmically driven choice architectures of online platforms7,108. Transparent sorting algorithms for news feeds (such as the algorithm used by Reddit) that show the factors that determine how posts are sorted can help people understand why they see certain content; at the very least this nudging intervention would make the design of the feed’s architecture more transparent. Relatedly, platforms that clearly differentiate between types of content (for example, ads, news, or posts by friends) can make news feeds more transparent and clearer (Fig. 2b).
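The sketch below illustrates what a transparent, user-adjustable sorting rule could look like, in the spirit of Fig. 2b; the factor names and weights are illustrative assumptions, and real news-feed algorithms draw on far richer (and usually opaque) signals. The same machinery would also support the self-nudging described in the next section.

```python
# Sketch of a transparent, user-adjustable news-feed sorting rule.
# Factors and weights are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    recency: float         # 0..1, newer is higher
    friend_signal: float   # 0..1, engagement by the user's friends
    source_quality: float  # 0..1, e.g. derived from exogenous cues

def score(post: Post, weights: dict[str, float]) -> float:
    """Weighted sum of the sorting factors; shown next to each post for transparency."""
    return (weights["recency"] * post.recency
            + weights["friends"] * post.friend_signal
            + weights["quality"] * post.source_quality)

# Users acting as their own choice architects could raise the quality weight.
user_weights = {"recency": 0.2, "friends": 0.3, "quality": 0.5}

feed = [
    Post("Breaking rumour", recency=0.9, friend_signal=0.8, source_quality=0.2),
    Post("In-depth report", recency=0.5, friend_signal=0.4, source_quality=0.9),
]
for post in sorted(feed, key=lambda p: score(p, user_weights), reverse=True):
    print(f"{post.title}: {score(post, user_weights):.2f}")
```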

Boosting interventions to foster user competences

Boosting seeks to empower people in the longer term by helping them build the competences they need to navigate situations autonomously (for a conceptual map of boosting interventions online, see also ref. 109). These interventions can be integrated directly into the environment itself or be available in an app or browser add-on. Unlike some nudging interventions, boosting interventions will ideally remain effective even when they are no longer present in the environment, because they have become routinized and have instilled a lasting competence in the user.

The competence of acting as one’s own choice architect, or self-nudging, can be boosted110. For instance, when users can customize how their news feed is designed and sorted (Fig. 2b), they can become their own choice architects and regain some informational autonomy. Users could, for example, be enabled or encouraged to design information ecologies for themselves that are tailored toward high epistemic quality, making sources of low epistemic quality less accessible. Such boosting interventions would require changes to the online environment (for example, transparent sorting algorithms or clear layouts; see the previous section and Fig. 2b) and the provision of epistemic cues.

Another competence that could be boosted to help users deal more expertly with information they encounter online is the ability to make inferences about the reliability of information based on the social context from which it originates111. The structure and details of the entire cascade of individuals who have previously shared an article on social media have been shown to serve as proxies for epistemic quality112. More specifically, the sharing cascade contains metrics such as the depth and breadth of dissemination by others, with deep and narrow cascades indicating extreme or niche topics and breadth indicating widely discussed issues113. A boosting intervention could present this information (Fig. 3a) by displaying the full history of a post, including the original source, the friends and public users who disseminated it, and the timing of the process (showing, for example, whether the information is old news that has been repeatedly and artificially amplified). Cascade statistics may take some practice to read and interpret, and users may need to see a number of cascades before they learn to recognize informative patterns.
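The toy example below sketches how the depth and breadth of a sharing cascade could be computed from a simple share tree; the tree, its labels and the metric definitions are invented for illustration and are not taken from the cited studies.

```python
# Minimal sketch of the cascade metrics mentioned above (depth and breadth),
# computed on a toy share tree by level-order traversal.

from collections import deque

# Each key is a sharer; values are the accounts that re-shared from them.
cascade = {
    "original_source": ["user_a", "user_b"],
    "user_a": ["user_c"],
    "user_b": [],
    "user_c": ["user_d"],
    "user_d": [],
}

def cascade_metrics(tree: dict[str, list[str]], root: str) -> tuple[int, int]:
    """Return (depth, maximum breadth) of a sharing cascade."""
    depth, breadth = 0, 0
    level = deque([root])
    while level:
        breadth = max(breadth, len(level))
        next_level = deque()
        for node in level:
            next_level.extend(tree.get(node, []))
        if next_level:
            depth += 1
        level = next_level
    return depth, breadth

# A deep, narrow cascade can hint at niche or extreme content; a broad one
# at widely discussed issues (see the text above).
print(cascade_metrics(cascade, "original_source"))  # (3, 2)
```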

Fig. 3: Illustrations of boosting interventions as they could appear within an online environment or as external tools.

a, Visualization of a sharing cascade. Alongside metrics such as the depth and breadth of a cascade, a pop-up window on social media could provide a simple visualization of the sharing cascade that shows who shared the content before it reached the user (if their profiles are public) and when. b, A fast-and-frugal decision tree as an example of a boosting intervention. A pop-up or an external tool can show a fast-and-frugal decision tree alongside an online article that helps the reader check criteria for evaluating the article’s reliability; the criteria were adapted from professional fact checkers and primarily point to checking external information90.

Yet another competence required for distinguishing between sources of high and low quality is the ability to read laterally104. Lateral reading, a skill developed by professional fact checkers, entails looking for information on sites other than the original source in order to evaluate its credibility (for example, ‘who is behind this website?’ and ‘what is the evidence for its claims?’) rather than relying on the information the website provides about itself. This competence can be boosted with simple decision aids such as fast-and-frugal decision trees114,115. Employed in a wide range of areas (for example, medicine, finance, law and management), fast-and-frugal decision trees can guide the user to scrutinize relevant cues. For example, users can respond to prompts in a pop-up window (for example, ‘are references provided?’), with each answer leading either to an immediate decision (for example, ‘unreliable’) or to the next cue until a final judgement about content reliability is reached (for example, ‘reliable’; Fig. 3b)116. Decision trees can also enhance the transparency of third-party decisions. If reliability is judged by third-party fact checkers or via an automated process, users could opt to see the decision tree and follow the path that led to the decision, thereby gaining insight that will be useful in the long term. Eventually, fast-and-frugal decision trees may help people establish a habit of checking epistemic cues when reading content, even in the absence of a pop-up window suggesting they do so47.
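The sketch below illustrates the logic of such a fast-and-frugal tree: each cue is checked in turn, a ‘no’ answer exits immediately, and only passing every cue yields a ‘reliable’ judgement. The specific cues, their order and the exit decisions are illustrative assumptions loosely based on the examples in the text and Fig. 3b, not a validated fact-checking instrument.

```python
# Hedged sketch of a fast-and-frugal decision tree for judging an article's
# reliability. Cues and exit decisions are illustrative assumptions.

QUESTIONS = [
    # (question, decision if the answer is 'no')
    ("Is the author or organisation behind the article identifiable?", "unreliable"),
    ("Are references or sources provided for the main claims?", "unreliable"),
    ("Do independent sites corroborate the claims (lateral reading)?", "unreliable"),
]

def fast_and_frugal(answers: list[bool]) -> str:
    """Walk the tree: each 'no' exits immediately; passing every cue ends in 'reliable'."""
    for (question, exit_decision), answer in zip(QUESTIONS, answers):
        if not answer:
            return f"{exit_decision} ('{question}' answered no)"
    return "reliable"

# Example: the article names its author but gives no references.
print(fast_and_frugal([True, False, True]))
```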

Finally, the competence of understanding what makes intentionally false information so alluring (for example, novelty and the element of surprise) can be boosted by mental inoculation techniques. Being informed about manipulative methods before encountering them online enables an individual to detect parasitic imitations of trustworthy sources and other sinister tactics117,118. Making people aware of such strategies or of their own personal vulnerabilities leaves them better able to identify and resist manipulation. For instance, having people take on the role of a malicious influencer in a computer game has been demonstrated to improve their ability to spot and resist misinformation61,119. This inoculation technique can be used in a range of contexts online; for example, learning about the target group of an advertisement can increase people’s ability to detect advertising strategies.

Conclusion

Any attempt to regulate or manage the digital world must begin with the understanding that online communication is already regulated, to some extent by public policy and laws but primarily by search engines and recommender systems whose goals and parameters may not be publicly known, let alone subject to public scrutiny. The current online environment has given rise to opaque and asymmetric relationships between users and platforms, and it is reasonable to question whether the industry will take sufficient action on its own to foster an ecosystem that values and promotes truth. The interventions we propose are aimed primarily at empowering individuals to make informed and autonomous decisions in the online ecosystem and, through their own behaviour, to foster and reinforce truth. The interventions are partly conceptualized on the basis of existing empirical findings. However, not all interventions have been tested in the specific context in which they may be deployed. It follows that some of the interventions that we have recommended, and others designed to promote the same goals, should be subject to further empirical testing. Current results identify some interventions as effective37,119 while also indicating that others are less promising120. Both sets of results will inform the design of more effective interventions.

In our view, the future task for scientists is to design interventions that meet at least three selection criteria. They must be transparent and trustworthy to the public; standardizable within certain categories of content; and, importantly, hard to game by bad-faith actors or those with vested interests contrary to those of users or society as a whole. We also emphasize the importance of examining a wide spectrum of interventions, from nudges to boosts, to reach different types of people, who have heterogeneous preferences, motivations and online behaviours. These interventions will not completely prevent manipulation or active dissemination of false information, but they will help users recognize when malicious tactics are at work. They will also permit producers of quality information to differentiate themselves from less trustworthy sources. Behavioural interventions in the online ecology can not only inform government regulations, but also signal a platform’s commitment to truth, epistemic quality and trustworthiness. Platforms can indicate their commitment to these values by providing their users with exogenous cues and boosting and nudging interventions, and users can choose to avoid platforms that do not offer them these features.

For this dynamic to gain momentum, it is not necessary that all or even the majority of users engage with nudging or boosting interventions. As the first Wikipedia contributors demonstrated, a critical mass may suffice for positive effects to scale up into major improvements. Such a dynamic may also counteract a possible drawback of the proposed interventions, namely a widening of information gaps between users if only empowered consumers are able to recognize quality information. If a critical mass is created, nudging and boosting interventions might well help to mitigate gaps currently arising from disparities in education or in the ability to pay for quality content. In light of the high stakes—for health, safety and self-governance itself—we err on the side of adopting interventions that empower as many people as possible.