In May 2021, the world watched in horror as Israeli police moved to evict Palestinian residents of Jerusalem’s Sheikh Jarrah neighborhood over their fierce resistance. Meanwhile, another fight was raging: one over narrative power. As journalists, citizen activists, and human rights organizations attempted to document Israel’s brutal crackdown, many found their communications subject to overzealous content moderation. Key social media posts were removed from influential platforms, including Twitter, Facebook, and Instagram, precisely when those posts were most crucial. The effect of this censorship, many contended, was to dramatically stifle the already marginalized voices of Palestinians, who hoped to show a global audience their lived reality under a violent occupation. The censorship followed a familiar pattern: digital rights organizations such as Access Now and 7amleh have for years produced reports meticulously documenting the suppression of Palestinian content by social media companies. This pattern of targeted censorship traces its roots largely to the U.S. response to the 9/11 attacks and the ensuing buildup of a national security state designed to track and flag any potentially dangerous terrorist activity. One outcome of that intensive buildup was to systematize the kind of discrimination that paved the way for today’s content moderation dragnet, in which Palestinians often find themselves caught.
With the advent of the War on Terror, the U.S. Treasury Department created the Specially Designated Nationals and Blocked Persons (SDN) list to track “global terrorists.” Established via executive order by President George W. Bush, the list was compiled to impede funding for individuals and organizations designated as terrorists by blocking their assets. Under 18 U.S. Code § 2339A, “Providing Material Support to Terrorists,” material support or resources is defined to include any property, tangible or intangible. Because these laws were written before the spread of social media, it is unclear whether posts can reasonably be censored for providing intangible “material support” to terrorists. The vagueness of the material support clause and the expansiveness of the SDN list have led Facebook, YouTube, Twitter, and even Zoom to adopt an overly broad definition of “terrorist content,” one that censors and discriminates against Palestinians, as well as Muslims and Arabic speakers in general.
In October 2021, the Intercept revealed Facebook’s list of “Dangerous Individuals and Organizations” (DIO). This list informs Facebook’s Community Standards and aims to prevent real-world harm by deplatforming “organizations or individuals that proclaim a violent mission or are engaged in violence.” According to the Intercept, the list is “a clear embodiment of American anxieties, political concerns, and foreign policy values since 9/11.” Additionally, Human Rights Watch reports that Facebook relies on the roster of organizations that the U.S. has designated as “foreign terrorist organizations” to inform its DIO list. That roster includes political movements with armed wings, such as the Popular Front for the Liberation of Palestine and Hamas. Facebook’s policy, however, seems to call for removing praise or support for all major Palestinian political movements, even when those postings do not explicitly advocate violence, as evidenced by its removal of posts advocating for the Boycott, Divestment, and Sanctions (BDS) movement. Fear of legal liability leads Facebook to broadly suppress Palestinian expression and content, even when it cannot clearly be tied to organizations on the DIO list or to violence more generally. In one instance, Facebook removed references to the Al-Aqsa Mosque in Jerusalem because it associated “Al-Aqsa” with the Al-Aqsa Martyrs’ Brigade, which is on Facebook’s DIO list. In a similar incident, a post sharing a news article about a recently issued threat by the Izz al-Din al-Qassam Brigades was removed due to the organization’s inclusion on the DIO list.
Often, Facebook removes content on the basis of “incitement” rather than the DIO list. Incitement laws in Israel are vague and often leveled at Palestinian political speech. As of 2015, 470 Palestinians had been arrested for “incitement” over their Facebook posts, including the poet Dareen Tatour for a poem posted on the platform. While these incitement laws do not exist in the United States, American social media companies comply with 90% of the hundreds of thousands of content removal requests made by the Israeli Cyber Unit. Companies comply for the same reason they are hypervigilant about content with even the faintest connection to the SDN list: they fear legal liability. In 2015, Facebook was hit with a $1 billion lawsuit claiming the platform “facilitated and encouraged violence” against Israelis. While the case was dropped in 2017, Facebook has since deepened its relationship with the Israeli government and pledged to tackle Palestinian incitement. Nor are such concerns limited to Facebook: in 2020, Zoom canceled an event hosted by San Francisco State University that featured Palestinian militant Leila Khaled, citing similar fears of legal liability. The company released the following statement: “Zoom is committed to supporting the open exchange of ideas and conversations, subject to certain limitations contained in our Terms of Service, including those related to user compliance with applicable U.S. export control, sanctions, and anti-terrorism laws.”
The social media content of Palestinian Americans is also subject to scrutiny by law enforcement on the basis of antiterrorism laws. In 2018, the Intercept reported that Palestinian Americans were receiving home visits from the FBI because of their social media posts. In one instance, a law student was interviewed by the New Jersey Joint Terrorism Task Force about his pro-Palestinian posts. Those posts had been included in a profile of him created by Canary Mission, a right-wing, pro-Israel website that has been revealed to be a source for the FBI and other American law enforcement agencies engaged in counterterrorism. Canary Mission is also used by Israel, especially by Israeli border agents, when running checks on Palestinian Americans attempting to visit the occupied territories. These cases further reveal how constantly Palestinian content is surveilled: if it is not removed by social media companies, it is documented by far-right organizations and law enforcement and cited as evidence of potential national security risks.
It is clear that a major obstacle to a democratized social media space is the War on Terror framework, which continues to define the Palestinian struggle to engage in free speech online. As long as the SDN list and the threat of legal repercussions for vague claims of material support for terrorism persist, tech companies have no incentive to change their moderation policies and make room for Palestinian voices. Dismantling these digital barriers requires critically confronting the disastrous legacy of the War on Terror and its legal relics. Yesterday’s national security imperatives cannot be the basis for legislating the unpredictable online world of 2022 and beyond; stakeholders should watch carefully to see how governments and other actors grapple with this reality.
Nooran Alhamdan is a graduate research fellow for MEI’s Cyber Program for the 2021-22 academic year. She previously served as a graduate research fellow for MEI’s Program on Palestine and Palestinian-Israeli Affairs. She is currently pursuing her Master’s in Arab Studies at the Edmund A. Walsh School of Foreign Service at Georgetown University. The views expressed in this piece are her own.
Photo by AHMAD GHARABLI/AFP via Getty Images