Against that backdrop, Facebook’s researchers interviewed over two dozen users and found some underlying issues potentially complicating efforts to rein in misinformation in India.
“Users were explicit about their motivations to support their political parties,” the researchers wrote in an internal research report seen by CNN. “They were also skeptical of experts as trusted sources. Experts were seen as vulnerable to suspicious goals and motivations.”
One person interviewed by the researchers was quoted as saying: “As a supporter you believe whatever your side says.” Another interviewee, referencing India’s popular but controversial Prime Minister Narendra Modi, said: “If I get 50 Modi notifications, I’ll share them all.”
Facebook also faced two fundamental problems in India that it did not have in the United States, where the company is based: understanding the country's many local languages and overcoming distrust of the platform as an outsider.
“We faced serious language issues,” the researchers wrote, adding that the users they interviewed mostly had their Facebook profiles set to English, “despite acknowledging how much it hinders their understanding and influences their trust.”
Some Indian users interviewed by researchers also said they didn’t trust Facebook to serve them accurate information about local matters. “Facebook was seen as a large international company who would be relatively slow to communicate the best information related to regional news,” the researchers wrote.
Facebook spokesperson Andy Stone told CNN Business that the study was “part of a broader effort” to understand how Indian users reacted to misinformation warning labels on content flagged by Facebook’s third-party fact checkers.
“This work informed a change we made,” Stone said. “In October 2019 in the US and then expanded globally shortly thereafter, we began applying more prominent labels.”
Stone said Facebook doesn’t break out content review data by country, but he said the company has over 15,000 people reviewing content worldwide, “including in 20 Indian languages.” The company currently partners with 10 independent fact-checking organizations in India, he added.
Warnings about hate speech and misinformation in Facebook’s biggest market
But the country’s sheer size and diversity, along with an uptick in anti-Muslim sentiment under Modi’s right-wing Hindu nationalist government, have magnified Facebook’s struggles to keep people safe and served as a prime example of its missteps in more volatile developing countries.
For example, Facebook researchers released an internal report earlier this year on the Indian state of Assam, produced in partnership with local researchers from the organization Global Voices ahead of state elections in April. It flagged concerns with "ethnic, religious and linguistic fear-mongering" directed toward "targets perceived as 'Bengali immigrants'" crossing over the border from neighboring Bangladesh.
The local researchers found posts on Facebook against Bengali speakers in Assam with “many racist comments, including some calling for Hindu Bengalis to be sent ‘back’ to Bangladesh or killed.”
“Bengali-speaking Muslims face the worst of it in Assam,” the local researchers said.
Facebook researchers reported further anti-Muslim hate speech and misinformation across India. Other documents noted “a number of dehumanizing posts” that compared Muslims to “pigs” and “dogs” and false claims that the “Quran calls for men to rape their female family members.”
The company faced issues with language on those posts as well, with researchers noting that “our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned.”
“An Indian Test User’s Descent Into a Sea of Polarizing, Nationalistic Messages”
Facebook’s efforts around the 2019 election appeared to largely pay off. In a May 2019 note, Facebook researchers hailed the “40 teams and close to 300 people” who ensured a “surprisingly quiet, uneventful election period.”
Facebook implemented two “break glass measures” to stop misinformation and took down over 65,000 pieces of content for violating the platform’s voter suppression policies, according to the note. But researchers also noted some gaps, including on Instagram, which didn’t have a misinformation reporting category at the time and was not supported by Facebook’s fact-checking tool.
One February 2019 research note, titled "An Indian Test User's Descent Into a Sea of Polarizing, Nationalistic Messages," detailed a test account set up by Facebook researchers that followed the company's recommended pages and groups. Within three weeks, the account's feed became filled with "a near constant barrage of polarizing nationalist content, misinformation, and violence and gore."
Many of the groups had benign names, but researchers said they began sharing harmful content and misinformation, particularly against citizens of India's neighbor and rival Pakistan, after a February 14 terror attack in the disputed Kashmir region claimed by both countries.
“I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life total,” one of the researchers wrote.
“As there are a limited number of politicians, I find it inconceivable that we don’t have even basic key word detection set up to catch this sort of thing,” one employee commented. “After all cannot be proud as a company if we continue to let such barbarism flourish on our network.”