THE JUDGEMENT IN FALANA V META PLATFORMS – A LIABILITY CREATED BY ARTIFICIAL INTELLIGENCE
In an era where digital content shapes public perception, informs economic decisions, and influences political consciousness, the boundary between reality and fabrication has become increasingly fragile. Artificial intelligence has made possible what was once confined to science fiction: the deepfake. In their most basic form, deepfakes are highly convincing synthetic audio, images, or videos that portray real people saying or doing things they never said or did, with disturbing realism.
While some may consider deepfakes amusing or harmless, their implications extend far beyond entertainment. Deepfakes pose serious threats that undermine trust, distort truth, and jeopardise personal privacy. They can cause significant financial, emotional, and reputational harm to individuals. More critically, they have the potential to distort public discourse, disrupt democratic processes, and facilitate fraud on a large scale.
In Nigeria, the suit instituted by Mr. Femi Falana, SAN, against Meta Platforms represents a defining moment in the intersection between artificial intelligence and established legal doctrine. The dispute arose following the circulation of a deepfake-style video on Facebook in which Falana was falsely portrayed as narrating personal health experiences, including claims that he had suffered from prostatitis for over sixteen years. The video appropriated his likeness, voice, and professional credibility without authorisation, thereby exposing him to reputational injury and public embarrassment.
Aggrieved by the circulation of the fabricated video online, Mr. Falana instituted legal proceedings to vindicate his rights and seek redress. Although the substratum of the action was clearly predicated on the deployment of deepfake technology, the court's adjudication did not treat deepfakes as a sui generis legal wrong. This is largely because there is no specific law regulating deepfakes in Nigeria; the Court's position was therefore not accidental.
Accordingly, rather than addressing artificial intelligence as the tool used to create the falsification in question, the Court focused solely on the wrong complained of against the defendant as the platform on which the fabricated content was published. This is because the claimant framed his grievance within existing law, namely the constitutional right to privacy under Section 37 of the Constitution of the Federal Republic of Nigeria 1999 (as amended) and the Data Protection Act. The creator of the deepfake and the tools used to produce it (artificial intelligence) therefore remained in the Court's blind spot.
In effect, liability was constructed not because Nigerian law specifically prohibits deepfakes, but because the conduct fell within the prohibitions of established privacy and data protection frameworks. The technological mischief that triggered the litigation remained legally peripheral. Artificial intelligence created the harm, but the remedy was sourced from laws that were never drafted with synthetic media in contemplation.
This structural gap becomes more apparent when viewed alongside the 2024 incident involving Seun Okinbaloye of Channels Television. In that instance, AI-generated videos circulated widely on social media, falsely depicting him endorsing certain products and investment platforms. The fabricated content exploited his credibility to mislead members of the public and potentially expose unsuspecting viewers to financial risk.
Like the Falana scenario, the misuse of identity was technologically sophisticated and reputationally damaging. Yet again, any available remedies would necessarily be found in defamation principles, consumer protection statutes, or cybercrime provisions rather than in legislation specifically designed to regulate deepfakes.
Both episodes illustrate an emerging reality: Nigeria is experiencing the social and legal consequences of deepfake technology without possessing a dedicated regulatory framework to address it. While existing statutes provide indirect protection, they operate reactively and incidentally. They neither define deepfakes nor establish tailored standards for platform responsibility, evidentiary authentication, or proportionate sanctions for AI-driven identity manipulation.
Deepfakes present challenges that traditional legal categories struggle to accommodate. The speed and scale of digital dissemination further amplify the harm before victims can respond. The anonymity of creators complicates enforcement. Cross-border platform governance raises jurisdictional issues.
Perhaps most concerning is the impact on evidentiary reliability. As manipulated media becomes increasingly sophisticated, public confidence in the authenticity of audio-visual materials may diminish. Where the credibility of such evidence is in doubt, the integrity of judicial proceedings and the stability of democratic institutions are correspondingly placed at risk.
The judgment in Falana v Meta Platforms, yet to be tested at the appellate level, demonstrates that Nigerian courts are capable of adapting existing legal doctrines to contemporary harms. However, judicial adaptation is not a substitute for legislative foresight. Artificial intelligence has created a new species of injury, one that blends technological manipulation with reputational, economic, and constitutional consequences. The law must therefore evolve in a deliberate and structured manner.
A forward-looking legislative response would define deepfakes with precision, delineate civil and criminal liabilities for malicious deployment, impose due diligence obligations on digital platforms, and create expedited remedies for victims. Such regulation would not stifle technological innovation; rather, it would provide clarity and a deterrent force in a rapidly digitalising society.
The Falana and Okinbaloye scenarios are not isolated. They are signals of a broader reality in which identity can be fabricated and weaponised. If artificial intelligence can manufacture credibility, then the legal system must strengthen the safeguards that protect authenticity. To keep pace with the present state of society, Nigeria must move beyond peripheral reliance on privacy and data protection laws and enact specific, focused legislation that confronts the use of artificial intelligence directly. Only then can liability truly match the realities created by artificial intelligence.
Ajibola Bello, Esq.
Deputy Managing Partner and Head of Corporate Department
Franklyn C. Chukwenenye, Esq.
Associate, Technology, Media & Telecommunication (TMT)
