    India: Facebook struggles in its battle against fake news

    “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life,” a Facebook researcher in India wrote in 2019 after following recommendations by the social network’s algorithms for three weeks.

    The researcher’s report was part of a cache of internal documents called The Facebook Papers, obtained recently by the New York Times and other US publications. They show the social media giant struggling to tame an avalanche of fake news, hate speech, and inflammatory content, including “celebrations of violence”, in India, the network’s biggest market.

    This was made worse, the New York Times reported, by a failure to deploy enough resources across India’s 22 officially recognised languages and by a lack of cultural sensitivity.

    A Facebook spokesperson told me that the findings had led the company to undertake a “deeper, more rigorous analysis” of its recommendation systems in India and contributed to “product changes to improve them”.

    So, is a lack of resources hobbling Facebook’s efforts to fight fake news and inflammatory material in India? Facebook has partnered with 10 fact-checking organisations locally. Items flagged across the social network are fact-checked in English and 11 Indian languages, making it one of the company’s largest fact-checking networks after the US.

    But the reality is more complex. Fact-checking organisations working with Facebook in India say they cross-check and tag suspicious news and posts flagged by users. The network is then expected to suppress the distribution of such posts.

    “We really do not have any moral or legal authority on what Facebook does after we tag a news or a post,” a senior official of a fact-checking organisation told me.

    Image caption: Prime Minister Modi and Facebook boss Mark Zuckerberg in 2015

    Also, fact-checking is only one part of Facebook’s efforts to counter misinformation. The problem in India is much bigger: hate speech is rife, bots and fake accounts linked to India’s political parties and leaders abound, and user pages and large groups brim with inflammatory material targeting Muslims and other minorities. Disinformation here is an organised and carefully run operation. Elections and “events” such as natural calamities and the coronavirus pandemic usually trigger outbreaks of fake news.

    Also, the fact that Facebook does not fact-check opinion and speech posted by politicians, on grounds of “free expression and respect for the democratic process”, is not always helpful. “A large part of the misinformation on social media in India is generated by politicians of the ruling party. They have the largest clout, but Facebook doesn’t fact-check them,” says Pratik Sinha, co-founder of Alt News, an independent fact-checking site.

    So, the latest revelations do not come as a surprise to most fact-checkers and rights activists in India. “We have known this all along. No social media platform is above blame,” says Mr Sinha.

    With a surfeit of hate speech, trolling and attacks on minorities and women, Indian Twitter is a polarised and dark place. WhatsApp, the Facebook-owned messaging service, remains the largest carrier of fake news and hoaxes in India, its biggest market. YouTube, owned by Google, hosts a lot of fake news and controversial content, but does not attract the same amount of attention. For example, live videos up to 12 hours long on the site fanned conspiracy theories about the death of Bollywood actor Sushant Singh Rajput last year. (The police later ruled that Rajput died by suicide.)

    Image caption: Many inflammatory videos on the 2019 Delhi riots were posted on Facebook

    The problem with Facebook lies elsewhere. With 340 million users, India is its biggest market. It is a general-purpose social media platform that gives users individual pages and lets them form groups. “The wide range of features make it more vulnerable to all kinds of misinformation and hate speech,” says Mr Sinha.

    The overwhelming bulk of hate speech and misinformation on the social network is expected to be caught by its internal AI engines and content moderators around the world. Facebook says it has spent more than $13bn on teams and technology and hired more than 40,000 people to work on safety and security issues since 2016. More than 15,000 people review content in more than 70 languages, including 20 Indian languages, a spokesperson told me.

    When users report hate speech, automated “classifiers”, models trained on human-annotated examples of different kinds of speech, vet the reports before selected ones reach human moderators, who are often third-party contractors. “If these classifiers were good enough they would catch a lot more hate speech, with fewer false positives. But they clearly aren’t,” says Mr Sinha.
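    To make the idea of a report-vetting classifier concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn. It is not Facebook’s actual system: the training examples, the confidence threshold, and the vet_report helper are all invented for illustration.

```python
# Illustrative sketch only: a toy report-vetting pipeline, NOT Facebook's
# real classifiers. Assumes a small hand-labelled training set and a
# confidence threshold for routing reports to human moderators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-annotated examples (1 = hate speech, 0 = benign).
train_texts = [
    "they should all be driven out of the country",
    "what a lovely festival photo",
    "these people deserve to be attacked",
    "congratulations on the new job",
]
train_labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

def vet_report(text, threshold=0.6):
    """Score a user-reported post; only confident hits reach moderators."""
    score = classifier.predict_proba([text])[0][1]  # probability of hate speech
    if score >= threshold:
        return "escalate to human moderator", score
    return "deprioritise", score

print(vet_report("drive them out of the country"))
```

    In this toy setup, a classifier trained on too few or poorly chosen examples misses real hate speech and flags harmless posts, which is essentially Mr Sinha’s complaint about false positives at a vastly larger scale.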

    A Facebook spokesperson told me that the firm had “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali”.

    “As a result, we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05%. Hate speech against marginalised groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” the spokesperson said.

    Then there are allegations that Facebook favours the governing party. In a series of articles in 2018, journalists Cyril Sam and Paranjoy Guha Thakurta wrote about the platform’s “dominant position in India with more than a little help from friends of Prime Minister Narendra Modi and the BJP”, among other things. (The articles also looked at the Congress party’s own “relations with Facebook”.) “A business model predicated on virality makes Facebook an ally of ruling governments,” says Mr Guha Thakurta, co-author of The Real Face of Facebook in India.

    Many believe a large part of the blame must broadly lie with the social network’s algorithms, which decide what shows up when you search for a subject and push users to join groups, watch videos and explore new pages.

    Alan Rusbridger, a journalist and member of Facebook’s oversight board, has said it “is well known that the algorithms reward emotional content that polarises communities because that makes it more addictive”. In other words, the network’s algorithms allow “fringe content to reach the mainstream”, as Roddy Lindsay, a former data scientist at Facebook, says.

    “This ensures that these feeds will continue promoting the most titillating, inflammatory content, and it creates an impossible task for content moderators, who struggle to police problematic viral content in hundreds of languages, countries and political contexts,” notes Mr Lindsay.
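    The mechanism Mr Rusbridger and Mr Lindsay describe can be illustrated with a small sketch. The following Python snippet is not Facebook’s News Feed algorithm; the posts, the signals, and the weights are invented, and serve only to show how a feed that optimises for engagement can rank an inflammatory post above a calmer one.

```python
# Purely illustrative: a minimal engagement-weighted feed ranking.
# Posts, signals and weights are invented for this sketch.
posts = [
    {"id": "calm-news",    "likes": 120, "shares": 10,  "angry_reactions": 2,   "comments": 15},
    {"id": "inflammatory", "likes": 80,  "shares": 300, "angry_reactions": 450, "comments": 600},
]

def engagement_score(post):
    # Hypothetical weights: shares and heated comment threads count far more
    # than passive likes, so emotional content rises to the top of the feed.
    return (1.0 * post["likes"]
            + 5.0 * post["shares"]
            + 3.0 * post["angry_reactions"]
            + 2.0 * post["comments"])

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], engagement_score(post))
# The inflammatory post outranks the calm one despite having fewer likes.
```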

    In the end, as Frances Haugen, the Facebook product-manager-turned-whistleblower, says: “We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from.”

    BBC.com
