Discussion about this post

Przemyslaw Grabowicz:

Manoel, I greatly appreciate your feedback, since it makes me realize that I should write more about our Science eLetter and its implications. Once I find time, I'll put together a proper overall response at UncommonGood.substack.com. Meanwhile, let me quickly respond to the "three key arguments against Bagchi et al. (2024)" that you've taken from the response eLetter by Guess et al.

First, I don't think the paper by Guess et al. is internally valid. Consider the following. If a paper computes a causal estimate based on an experiment, but the control condition is meaningfully changed during the experiment, specifically in a way that affects the target causal estimand, and the paper neither reveals that change nor accounts for it, then the change could produce any desirable value of the causal estimate without revealing anything about how it happened. In other words, a causal claim must define exactly what the control and treatment conditions are. If it doesn't, then the causal claim may be invalid whenever the description misses something meaningful that's related to the estimand. If no change was described, the default assumption should be that there was no meaningful change during the experiment. However, during the experiment of Guess et al. a meaningful change was introduced...
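To make the point concrete, here is a toy simulation, with entirely made-up numbers that are not the study's data, of how an unreported mid-experiment change to the control condition can move a causal estimate even when the treatment itself never changes:

```python
# Toy illustration (hypothetical rates, not Guess et al.'s data): if the
# control condition is quietly changed partway through an experiment, the
# measured treatment-vs-control difference moves even though nothing about
# the treatment itself changed.
import random

random.seed(0)

def outcome(base_rate):
    """Share of misinformation seen by one simulated user."""
    return max(0.0, random.gauss(base_rate, 0.005))

n = 10_000
treatment = [outcome(0.026) for _ in range(n)]

# Scenario A: the control condition stays fixed for the whole study.
control_fixed = [outcome(0.029) for _ in range(n)]

# Scenario B: halfway through, an unreported platform-wide intervention
# lowers the control group's exposure too.
control_changed = [outcome(0.029) for _ in range(n // 2)] + \
                  [outcome(0.024) for _ in range(n // 2)]

avg = lambda xs: sum(xs) / len(xs)
print(f"Estimated effect, fixed control:   {avg(treatment) - avg(control_fixed):+.4f}")
print(f"Estimated effect, changed control: {avg(treatment) - avg(control_changed):+.4f}")
```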

Second, you write that:

> Guess et al. (2023) data, they found little change in the number of unreliable sources pre- vs. post-study period. In other words, in their control group (where the recommender algorithm is enabled), they don’t observe this drop in the fraction of untrustworthy content.

Ok, so let's see what exactly Guess et al. write in their response eLetter. I quote:

> Over the 90 days prior to the treatment, untrustworthy sources represented 2.9% of all content seen by participants in the Algorithmic Feed (control) group – during the study period, this dropped only modestly to 2.6%.

So, according to their own measure, there was a drop in the fraction of misinformation from 2.9% to 2.6%, which is about a 10.3% relative drop (0.3/2.9), whereas we reported a 24% drop. Note, however, that only about half of their treatment period overlaps with the period of Facebook's emergency interventions. If it overlapped entirely, then instead of a roughly 10% drop we would likely observe a roughly 21% drop. That starts to be quite close to the 24% drop we measured using a different dataset and a different notion of misinformation.
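For transparency, here is the back-of-the-envelope arithmetic written out; the only assumption beyond the quoted figures is the roughly 50% overlap stated above, extrapolated linearly:

```python
# Back-of-the-envelope arithmetic for the drops discussed above.
pre, post = 2.9, 2.6  # % of viewed content from untrustworthy sources (control group)

relative_drop = (pre - post) / pre  # 0.3 / 2.9 ~ 0.103, i.e. ~10.3%

# Assumption from the text (not an official figure): only about half of the
# treatment period overlaps with Facebook's emergency interventions, so a
# linear extrapolation to full overlap roughly doubles the observed drop.
overlap = 0.5
extrapolated_drop = relative_drop / overlap  # ~0.207, i.e. ~21%

print(f"Observed relative drop: {relative_drop:.1%}")              # 10.3%
print(f"Extrapolated full-overlap drop: {extrapolated_drop:.1%}")  # 20.7%
```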

Third, you're right that the evidence used by Bagchi et al. (2024) is not causal. However, in our eLetter we haven't made any causal statements. Instead, we're pointing out that Guess et al. made causal statements without properly describing the control condition of their experiment. That said, we also provided potential explanations for the drop in the fraction of misinformation in users' news feeds. These explanations align with the reasons why the emergency measures were introduced, which were provided both officially by Facebook representatives [1] and unofficially by Facebook employees and a whistleblower, Frances Haugen [2, 3].

[1] https://www.nytimes.com/2020/12/16/technology/facebook-reverses-postelection-algorithm-changes-that-boosted-news-from-authoritative-sources.html

[2] https://www.wsj.com/articles/the-facebook-files-11631713039

[3] https://www.washingtonpost.com/documents/5bfed332-d350-47c0-8562-0137a4435c68.pdf

Przemyslaw Grabowicz:

Thank you, Manoel, for your interest in our Science eLetter and for your comments. I appreciate them greatly. I attach my responses to your Substack post here, since these exchanges may lead to a broader discussion about our eLetter and the original paper by Guess et al.

First, I'd like to clarify that the "debunk" framing originates from the University of Massachusetts Amherst's press release, not from Dublin, and it doesn't originate from me, although it does appear in both releases. It may sound like an exaggeration, since Science hasn't issued a correction. For this reason, I've already requested that it be removed from the title of University College Dublin's press release.

Second, I also don't believe that the entire paper by Guess et al. is debunked, but it did omit crucial information. Without revealing *any* information about the 63 break-glass measures, the paper could arrive at any desired conclusion, since its result depends on those unrevealed emergency interventions, and nobody would know what the conclusion really means.

Third, yes, we've read the "fairly precise description of algorithmic changes enacted by Facebook". However, when I talked with co-authors from Meta, they said that these dates are based on unofficial leaks and may be incorrect. That's why we are careful about the wording in our Science eLetter.

Finally, you write "This is false!" Would you mind clarifying what exactly is false in the statement "Our results show that social media companies can mitigate the spread of misinformation by modifying their algorithms but may not have financial incentives to do so"?
