The challenge of deepfakes: Identifying and addressing them

In India, while current laws provide certain avenues to address deepfake issues, the absence of a precise legal definition hinders specific prosecution efforts.

In response to the escalating concerns surrounding deepfake content on social media platforms, the Ministry of Electronics and IT (MeitY) in India has taken proactive measures. Notably, advisories have been issued to swiftly remove misleading videos, exemplified by one featuring actor Rashmika Mandanna’s face superimposed on another woman’s body, within a 24-hour timeframe.

This article explores the profound implications of deepfakes, which extend beyond spreading fake news to influencing elections and fabricating convincingly fake evidence. The effectiveness of current regulations in addressing the multifaceted challenges posed by deepfake technology is questioned. Proposed amendments highlighted in the discussion include reinforcing privacy and data protection laws, imposing limitations on freedom of expression, and establishing proactive rules governing the distribution and use of deepfake technologies.

As deepfakes increasingly blur the lines between reality and fiction, the article emphasizes the critical importance of these regulatory considerations in safeguarding society, preserving democracy, and upholding the rule of law.

Deepfake: Meaning and Interpretations

The term “deepfake” surfaced in 2017 on Reddit, where users began superimposing celebrities’ faces onto other people’s bodies, particularly in adult content. The article adopts a broad definition of deepfake, encompassing various manipulations aligned with popular understanding. In this context, a deepfake typically involves creating a video using advanced technical means to portray an individual saying or doing something they never said or did.

Detecting such manipulations proves challenging, with applications extending to realistic-looking videos generated without high-tech means, high-quality videos featuring non-existent individuals, fake audio or text fragments, and manipulated satellite signals. This inclusive perspective is crucial for legal and policy considerations, emphasizing outcomes over specific technical methods.

The gravity of the deepfake issue was underscored in 2019 when criminals used AI-generated audio to impersonate a CEO’s voice over the phone, deceiving an executive into making a $243,000 unauthorized bank transfer. This incident prompted heightened vigilance and precautionary measures within financial institutions, even as fraudsters continued to refine their techniques.

In 2021, criminals executed a sophisticated scheme exploiting knowledge about an impending company acquisition. A bank manager was deceived into transferring a staggering $35 million to a fraudulent account by strategically timing the attack with the company’s expected wire transfer for the acquisition. This instance underscores the evolving threat landscape and emphasizes the pressing need for enhanced cybersecurity measures to counter deepfake attacks in the financial sector.

Laws Against Deepfakes in India and Worldwide

In India, the legal framework against deepfakes involves Section 66E and Section 66D of the Information Technology Act of 2000. Section 66E addresses privacy violations arising from the capture, publication, or transmission of a person’s images through deepfake means. Such offenses are punishable by imprisonment of up to three years, a fine of up to ₹2 lakh, or both. Section 66D prosecutes individuals using communication devices or computer resources with malicious intent, leading to impersonation or cheating. An offense under this provision carries a penalty of up to three years’ imprisonment and a fine of up to ₹1 lakh.

Additionally, the Indian Copyright Act of 1957, particularly Section 51, provides copyright protection against unauthorized use of works, allowing copyright owners to take legal action.

Despite lacking specific deepfake legislation, the Ministry of Information and Broadcasting issued an advisory on January 9, 2023, urging media organizations to label manipulated content and exercise caution.

In the global context, concerns about AI manipulation led to the Bletchley Declaration, signed by 28 nations. Approaches to AI regulation vary worldwide, with the US opting for stricter oversight. President Joe Biden’s recent executive order mandates companies to share AI safety test results with the US government, emphasizing extensive testing before public release.

India has also outlined potential regulatory frameworks, suggesting a risk matrix and proposing a statutory authority. Tech giants like Alphabet, Meta, and OpenAI are taking steps such as watermarking to combat deepfakes. India, a pivotal player in AI’s global development, must contribute to shaping the regulatory landscape while balancing innovation with regulatory concerns.

Copyright Concerns

In its December 2019 publication, the World Intellectual Property Organization (WIPO) navigates the intricate landscape of deepfake content. The document, centered on intellectual property rights, poses two pivotal questions:

  1. When a deepfake is created from copyrightable material, to whom should the copyright in the deepfake belong?
  2. Should there be a system of equitable remuneration for individuals whose likenesses and “performances” are employed in a deepfake?

WIPO acknowledges that deepfakes raise complexities surpassing conventional copyright infringement, extending to violations of human rights, privacy, and personal data protection. This prompts a critical examination of whether copyright should be granted to deepfake imagery at all. WIPO suggests that if deepfake content significantly contradicts the subject’s life, it should not receive copyright protection.

In response to these queries, WIPO proposes that, where deepfake content is eligible for copyright, the copyright should belong to the creator rather than the person depicted. This proposal rests on the fact that the depicted person neither intervened in nor consented to the creation process, and it emphasizes acknowledging the creative input of the individual responsible for the deepfake.

WIPO asserts that copyright, in itself, is not an optimal tool against deepfakes due to the absence of copyright interest for victims. Instead, victims are encouraged to turn to the right of personal data protection. Citing Article 5(1) of the EU General Data Protection Regulation (GDPR), which mandates accurate and up-to-date personal data, WIPO recommends the prompt erasure or rectification of irrelevant, inaccurate, or false deepfake content.

Moreover, even if deepfake content is accurate, victims can leverage the “right to be forgotten” under Article 17 of GDPR, allowing the erasure of personal data without undue delay. This dual approach involving personal data protection rights is positioned as a more effective strategy in combating the multi-faceted challenges posed by deepfake content. WIPO thus emphasizes the need for a comprehensive approach that goes beyond traditional copyright frameworks to safeguard individuals from the adverse impacts of deepfake technology.

A Question of Evidence

Deepfake technology poses significant challenges in legal proceedings, particularly in criminal cases, with potential repercussions on individuals’ personal and professional lives. Because most legal systems lack mechanisms to authenticate evidence, the onus falls on the defendant or opposing party to contest manipulation, effectively privatizing a systemic problem. To address this, a proposed rule could mandate the authentication of evidence, possibly through entities like the Directorate of Forensic Science Services, before court admission, albeit with associated economic costs.

The evolving nature of deepfake technology compounds the challenges for automated detection systems, which struggle to keep pace, particularly in the face of contextual complexities. This poses a significant threat to legal proceedings, potentially prolonging trials and heightening the risk of false assumptions.

Beyond the immediate legal ramifications, deepfakes exacerbate issues such as slut-shaming and revenge porn, presenting serious consequences for individuals’ reputations and self-image. The intricate challenges demand comprehensive legal frameworks to address evolving threats and safeguard individuals from potential harm.

The increasing introduction of deepfakes as fake evidence in courts raises significant concerns for the rule of law, manifesting in several ways:

  1. Prolonged Trials: Trials are likely to extend as parties can claim evidence fabrication.
  2. Risk of Accepting Forged Evidence: Deepfakes heighten the risk that fabricated evidence is admitted and treated as genuine.
