‘There is a huge risk of misinformation and disinformation, and targeted campaigns that could potentially happen against candidates or to push certain narratives.’
With assembly elections around the corner, the largest social-media platforms have begun adopting new mechanisms to crack down on targeted campaigns, the influence of fake accounts, and misleading content in India, their largest user base in the world.
The growing use of generative artificial intelligence (AI) and its advanced tools that make it difficult to recognise fake content has multiplied challenges in content moderation for platforms such as Google, X (formerly known as Twitter), and Meta-owned Facebook.
The companies are addressing the issue with newer policies, expanded fact-checking partnerships, and the promotion of credible sources of information.
Video streaming giant YouTube has said it will roll out a watch page for news in India, across 11 languages, in the last quarter of 2023.
The watch page is aimed at putting together a diverse and ‘credible’ range of news sources on YouTube for viewers who want to go beyond headlines and explore a story, recommending content across video on demand, livestreams, podcasts, and Shorts.
“We spend a lot of energy on surfacing authoritative sources of information and you have probably seen during Covid times the kind of information we used from health agencies, etc. Secondly, the definition of reality is getting challenged in today’s world of generative AI and so we are focusing on how we can expand fact-checking (capabilities),” Saikat Mitra, vice-president, head of trust & safety for the Asia-Pacific region, Google, told Business Standard.
Google’s policies mandate election advertisements containing synthetic media to display appropriate labels.
Between April and June this year, YouTube removed more than 2 million videos in India for violating its policies.
More than one in three YouTube users searches for news on the platform.
Mitra said Google had made a commitment that all images generated by its AI platform would be visibly watermarked.
Meta says it treats elections across the world in a similar manner.
The company has long dealt with misinformation, safety, human rights, and cybersecurity issues through its temporary Elections Operation Center.
In India, the company has one of its largest fact-checking networks, with 11 independent certified fact-checkers verifying content in 15 Indian languages.
‘When a fact-checker rates a piece of content as false, we reduce its distribution, notify people who share the content — or who have previously shared it — that the information is false or misleading, and we add a warning label that links to the fact-checker’s article disproving the claim,’ Meta said in a blog.
The developments have picked up pace after increasing scrutiny by lawmakers.
The ministry of electronics and information technology recently sent a memorandum to several digital platforms, asking them to submit an ‘action taken’ note on dispelling ‘fake’ news and unlawful content within 10 days.
The directive followed questioning of these platforms by the parliamentary standing committee on IT.
Separately, the INDIA bloc of 28 Opposition parties wrote to Meta CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai, flagging the “culpability” of social-media platforms.
“Earlier there was the issue of advertising and who’s advertising on whose behalf. Now increasingly there is a huge risk of misinformation and disinformation, and certain targeted campaigns that could potentially happen against certain political candidates or to push certain narratives. And that is something that is a real concern,” said Rohit Kumar, co-founder at public policy firm TQH Consulting.
Experts also flag the need for platforms to maintain their internal fact-checking capabilities with human reviewers. Meta and X have laid off several people from their India teams in the past year.
“Incorporating human moderators into the equation is essential. While technology is advancing at a rapid pace, the human touch in discerning nuances and cultural contexts is irreplaceable. Therefore, investing in building the capacity of these moderators, offering them regular training and updated tools, is paramount,” said Shruti Shreya, senior programme manager, The Dialogue, a Delhi-based think-tank.