Government Asked Social Media Platforms To Take Necessary Steps To Remove Deep Fake Imagery To Create A Safer Platform For Users.
Under the IT Rules, 2021, the government has requested that social media platforms including Facebook, Instagram, WhatsApp, YouTube, and Twitter take "all reasonable and feasible means" to remove or disable access to deepfake imagery.
Social media is like a pocket world, offering easy ways to communicate, but it can also become a nightmare. Imagine that one day a friend sends you a video showing your face and body, yet you do not remember ever shooting such a video.
For years, fake videos have been made with simple video-editing programs. A little scrutiny is usually enough to spot these superficial fakes, so people do not take them too seriously.
But it is getting harder and harder to tell genuine videos from fake ones, thanks to an artificial-intelligence technique known as the deepfake.
Most of these fake images and videos are posted on social media sites such as YouTube, Facebook, and Twitter, and the government is consistently taking steps to curb the misuse of deepfakes and improve the safety and security of these platforms.
Under the IT Rules, 2021, the government has asked these platforms to take "all reasonable and feasible means" to remove such deepfake content or disable access to it.
These platforms must comply within 24 hours of receiving a complaint from a user, the Ministry of Electronics and Information Technology (MeitY) said in an advisory, according to reports.
Whether people focus on TikTok, Twitter, Facebook, or another platform entirely, social media remains a powerful way to inform and update the public and to engage with an audience on a deeper level.
But given the increasing number of cyberattacks in India, cyber safety is a serious problem.
Deepfake technology has recently been in the news. Deepfakes are among the most recent advances in computer imagery, produced when artificial intelligence (AI) is trained to swap one person's likeness for another's in a recorded video.
Citing Rule 3(2)(b) of the IT Rules, the MeitY advisory stated that this content may constitute impersonation in electronic form, including artificially morphed images of a specific individual.
The ministry has been notified of deepfakes by several agencies, including some in the Ministry of Home Affairs (MHA), according to a MeitY official cited in the report. MeitY has consequently asked the companies to look into the matter.
The ministry expects a prompt response from the companies and will then invite them to discuss how to curb deepfakes.
In the email sent to the chief compliance officers of these platforms, MeitY said it had received reports of AI-generated deepfakes being used to mislead people with doctored content.
Furthermore, intermediaries have been advised to put in place appropriate techniques and processes for identifying content that may violate their user agreements or the governing rules, and to act promptly within the timelines prescribed by the IT Rules.
The rules require an intermediary to remove unlawful information or disable access to it within 36 hours of receiving a court order or a notification from the government or one of its authorized agencies, and within 24 hours when the complaint comes from an individual or a person authorized by that individual.
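For illustration only, the two takedown windows can be captured in a few lines of code. The function name and source labels below are hypothetical, not taken from the advisory:

```python
from datetime import datetime, timedelta

# Hypothetical helper mapping the two takedown windows described in the IT Rules, 2021:
# 36 hours for court orders or government notices, 24 hours for individual complaints.
TAKEDOWN_WINDOWS = {
    "court_or_government": timedelta(hours=36),
    "individual_complaint": timedelta(hours=24),
}

def takedown_deadline(received_at: datetime, source: str) -> datetime:
    """Return the latest time by which the content must be removed or disabled."""
    return received_at + TAKEDOWN_WINDOWS[source]

received = datetime(2023, 2, 1, 10, 0)
print(takedown_deadline(received, "individual_complaint"))  # 2023-02-02 10:00:00
```

The point of the sketch is simply that the clock starts at receipt of the complaint or order, and the permitted window depends on who sent it.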
MeitY also stated that intermediaries must inform users at least once a year that, in the event of non-compliance with their terms and conditions or user agreement, the platform has the right to immediately terminate access privileges or remove the non-compliant content.
One of the five areas with a major impact on user safety, according to homegrown microblogging site Koo, is impersonating famous people.
This could have severe consequences for public figures and well-known personalities. Malicious deepfakes can seriously harm careers and lives; anyone with malicious intent could use them to pose as others and exploit their friends, families, and coworkers. Phony videos of world leaders could even be used to instigate international conflict.
How do deepfakes work on social media?
While the ability to automatically swap faces to produce convincing, realistic-looking synthetic video has some intriguing, benign applications (such as in gaming and film), it is undoubtedly a risky technology with some unsettling uses. One of the first real-world uses of deepfakes was to create fake pornography.
The forgeries are made by one model using a database of sample videos, while a second model tries to determine whether a given video is fake. When the second model can no longer tell real from fake, the deepfake is likely convincing enough to fool a human viewer. This method is called a Generative Adversarial Network (GAN).
GANs perform better when given a large data set to work with. That is why politicians and Hollywood stars frequently appear in deepfake videos: the GAN can draw on an extensive library of footage to produce incredibly lifelike fakes.
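The adversarial loop described above can be sketched in miniature. The toy example below is not a real GAN (there are no neural networks), but it shows the same dynamic under invented, simplified assumptions: a "generator" with a single parameter produces fake 1-D samples, a "discriminator" learns what real data looks like, and the generator keeps adjusting until its fakes score as well as the real thing:

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # "real" data: numbers drawn from a distribution centered at 5

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

class Discriminator:
    """Learns a running estimate of the real data's mean; scores samples by closeness to it."""
    def __init__(self):
        self.estimate = 0.0
        self.n = 0
    def train_on_real(self, x):
        self.n += 1
        self.estimate += (x - self.estimate) / self.n  # incremental running mean
    def score(self, x):
        return -abs(x - self.estimate)  # higher score = looks more "real"

class Generator:
    """Produces fakes around a single parameter mu; nudges mu to raise its score."""
    def __init__(self):
        self.mu = 0.0
    def train(self, disc, lr=0.1):
        # Move mu a small step in whichever direction the discriminator scores higher.
        if disc.score(self.mu + lr) > disc.score(self.mu - lr):
            self.mu += lr
        else:
            self.mu -= lr

disc, gen = Discriminator(), Generator()
for _ in range(500):
    disc.train_on_real(real_sample())  # discriminator learns what "real" looks like
    gen.train(disc)                    # generator adjusts to fool it

print(round(gen.mu, 1))  # the generator's fakes now center near the real mean of 5
```

A real GAN replaces both toy models with deep networks and the 1-D numbers with images or video frames, but the back-and-forth structure, where the forger improves until the detector can no longer tell the difference, is the same.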
Although deepfakes will only get more realistic over time as the techniques advance, we are not defenseless against them. A number of companies, some of them startups, are working on techniques for detecting deepfakes.
Currently, poorly made deepfakes can still be detected by eye. The absence of human cues such as blinking, and oddities such as incorrectly angled shadows, are giveaways that are usually easy to spot.
Nevertheless, as the technology advances and GAN algorithms improve, it may soon be impossible to determine whether a video is genuine. The generative half of the GAN, which produces the forgeries, keeps getting better with training; improving the model is the whole point of machine learning.
It will eventually surpass our ability to distinguish the real from the fake. In fact, some experts estimate it will be only six months to a year before digitally altered videos become indistinguishable from authentic ones.
Because of this, efforts to develop AI-based defenses against deepfakes continue, and these countermeasures must advance along with the technology. Facebook and Microsoft, together with other companies and several respected American universities, have formed a coalition behind the Deepfake Detection Challenge (DFDC).
Indian law against deepfakes
No Indian legislation specifically addresses deepfake cybercrime, but several existing laws can be combined to tackle it. If you or someone you love is the victim of a deepfake crime, these legal provisions may help.
According to chartered accountant Ankur Agarwal, if a communication device or computer resource is used with mala fide intent for cheating by personation, the offender may face up to three years in prison and/or a fine of up to Rs 1 lakh.
Section 66E of the IT Act also applies. Deepfake crimes violate this section because capturing, publishing, or transmitting a person's images in mass media invades their privacy. According to Agarwal, the offence is punishable by up to three years in prison or a fine of up to Rs 2 lakh.
To report any cybercrime, including cyber fraud, one can contact the National Cyber Crime Reporting Portal (helpline number 1930) or the local police station. An online complaint can also be filed at cybercrime.gov.in.
Edited and proofread by Nikita Sharma