When artificial intelligence (AI) tools first emerged as more than just science fiction, few foresaw just how quickly they would permeate everyday life. Today, AI is no longer a niche fascination of technologists or a luxury for elite corporations. It has become a mainstream utility, embedded in classrooms, offices, and creative studios alike. Students rely on it for research, professionals use it to automate workflows, and content creators employ it to churn out videos, images, and stories at remarkable speed.
But behind this dazzling productivity lurks an unsettling question: how do we, as a society, handle the growing influence of AI—legally, ethically, and culturally?
A Revolution in Productivity — and Complexity
What makes AI both seductive and disorienting is its power to generate content that is often indistinguishable from human creation. On social media, short-form videos and reels created by AI tools now dominate feeds. Many are so realistic that they blur the line between authentic and synthetic, leaving viewers—and often even the creators themselves—unsure of where human input ends and machine output begins.

This extraordinary capacity, however, brings with it equally extraordinary questions—about authorship, ownership, privacy, and even dignity. AI challenges not just how we produce content but also how we attribute value to it, and how we protect those whose work or likenesses may have been exploited in the process.
The Law Struggles to Keep Up
In Pakistan, as in many jurisdictions, the legal frameworks currently in place were never designed to address situations where machines autonomously create content. Intellectual property law, privacy protections, and even criminal statutes assume that humans—not algorithms—are responsible for both creation and harm.
Consider the Copyright Ordinance of 1962. Under Section 13, the “author” of a work is the first owner of the copyright, while Section 2(d) defines the author in explicitly human terms—a composer, a photographer, a writer. But when an AI system generates an image or a video in response to a user’s prompt, who is the author? The user, who merely wrote the prompt? Or the AI model, which synthesised data from millions of sources?

This ambiguity is not merely academic. AI systems are trained on massive datasets often scraped indiscriminately from the internet—including copyrighted works. When such systems generate content, they may inadvertently (or perhaps inevitably) incorporate fragments of pre-existing creative works. This raises the spectre of mass, unlicensed appropriation.
A telling example arose when AI tools began generating artwork unmistakably in the style of Studio Ghibli, the celebrated Japanese animation studio. While the outputs were technically “new,” they leaned heavily on the studio’s distinctive creative identity, a fact that did not escape the notice of artists and legal scholars.
The Threat to Privacy and Dignity
Beyond intellectual property, AI also poses serious threats to privacy and personal dignity. Public figures—politicians, actors, athletes—have already become unwilling subjects of AI-generated deepfakes. But even ordinary people, whose photographs are scattered across social media, are vulnerable.
In Pakistan, the Constitution’s Article 14 enshrines the right to dignity, while the Prevention of Electronic Crimes Act (2016) prohibits unauthorised use of personal data and images, particularly when intended to harm or defame. Yet these laws were crafted in an era before deepfakes and generative AI. Today, tracing the source of AI-generated defamatory content can be nearly impossible.

The risk is not just reputational but psychological. Victims of AI-driven misinformation or harassment often struggle to find redress, especially when law enforcement agencies lack the training and resources to handle such sophisticated technologies.
The Policy Response — Still Nascent
In April 2023, Pakistan established a National Task Force on AI to chart a ten-year roadmap for integrating AI into governance, business, education, and healthcare. A draft National AI Policy followed shortly thereafter. These initiatives represent a welcome acknowledgment of the stakes involved.
But much remains to be done. There is no standalone data protection law, and existing intellectual property laws are silent on machine authorship. Regulators, courts, and legislators face the daunting task of updating legal and institutional frameworks while keeping pace with the relentless evolution of technology.
A Call for Deliberate Integration
Ultimately, AI is not just a technological revolution; it is a legal, cultural, and ethical frontier. To treat it as merely another tool is to underestimate its transformative—and potentially disruptive—power.
For students and professionals, AI offers unparalleled efficiency but risks eroding their own skills and judgment if overused. For creators, it provides inspiration and acceleration but threatens their livelihoods when their styles and ideas are appropriated without consent. And for society as a whole, it demands a careful balance: fostering innovation while safeguarding fairness, accountability, and respect for human agency.
What is needed now is not panic, nor blind enthusiasm, but thoughtful dialogue—between policymakers, platform operators, users, and creators—to ensure that AI enhances rather than undermines the values we hold dear.
Final Thoughts
Machines may now write, draw, and compose. But they do so standing on the shoulders of human ingenuity—often without acknowledgment. Whether they trample that ingenuity or elevate it depends on how we choose to regulate, use, and even resist them.
The promise of AI is vast. But so is its capacity for harm. As the line between authentic and synthetic continues to blur, we would do well to remember: technology does not absolve us of responsibility.