Canard on Trial: Parliamentary committee to crack down on fake news in 2025, pushes for stricter laws  

The country’s rumour mill has just hit a speed limit. India’s Parliamentary Committee on Communications & Information Technology has branded fake news a serious and urgent threat to public order and democracy. The committee is advocating for a swift rewiring of India’s information ecosystem, with sharper penalties, airtight accountability, and fact-checking with teeth—not as a slogan but as a system.

The Standing Committee has recommended a hard reset of the country’s misinformation playbook: amend penal provisions, raise fines, and make fact-checking and an internal ombudsman mandatory across print, TV, and digital newsrooms. The draft report, prepared under BJP MP Nishikant Dubey, also pushes the government to design legal and tech solutions for AI-generated fakes, with rules on labels, accountability, and inter-ministerial coordination. Critics argue that the cure mustn’t kill free speech, while supporters contend that democracy can’t function in a fog of lies.

Social media has emerged as the primary purveyor of misinformation and fake news in India, and deepfake concerns are rising with it. A 2024 study by the Indian School of Business (ISB) and CyberPeace found that India’s fears about fake news and deepfakes aren’t paranoia but trendlines. Even as the union government develops indigenous detection tools and toolkits, the volume of digital deceit continues to rise.

What’s getting faked most? Politics, by a mile—46 per cent of the phoney stories tracked. Then come general issues (33.6 per cent) and religion (16.8 per cent). Add those up and you come to 96.4 per cent: well over nine in 10 cases. In short, the fibs that spread farthest are the ones tied to ballots, beliefs, and the commonplace.

Where do they spread? Mostly where we scroll. Social platforms account for 77.4 per cent of misinformation cases, while mainstream media accounts for about 23 per cent. Among the social culprits, Twitter (61 per cent) and Facebook (34 per cent) lead the league tables, with the remainder dribbling across other social networks.

Fixing this isn’t a one-button update. Yes, we need niftier tech to flag the ersatz—but that’s only part of the antidote. Pair tools with media literacy, cleaner reporting pathways, and norms that reward responsible online behaviour. Think of it as a full-stack defence for the public square: code, classrooms, and common sense working in concert.

WHAT’S ON THE TABLE
In a draft report adopted this week, the Standing Committee on Communications & Information Technology recommends a triad of changes that would touch every newsroom and platform in the country:

AMEND THE PENAL PLAYBOOK: Tougher provisions and higher fines—especially for repeat offenders—to make fabrication costly, not just embarrassing.

MAKE VERIFICATION MANDATORY: Every print, digital, and broadcast outlet should maintain a formal fact-checking mechanism and appoint an internal ombudsman empowered to halt questionable stories before they go live.

PIN DOWN RESPONSIBILITY: Editors and content heads for editorial calls, owners and publishers for systemic failures, and intermediaries/platforms for algorithmic amplification of known falsehoods.

The committee—chaired by BJP MP Nishikant Dubey—adopted the report unanimously, an unusual moment of cross-party alignment on an issue that typically dissolves into partisan fog.

COALITION VS. COPY-PASTE LIE
The panel urges a collaborative model: government agencies working in harness with private outlets and independent fact-checkers. The approach emphasizes shared standards, shared signals, and faster escalations when disinformation surges—especially during elections or crises.

One recommendation to the Ministry of Information & Broadcasting is unambiguous: make fact-checking workflows and ombuds offices compulsory across the industry. The draft also flags the Electronics & Information Technology Ministry, signalling this won’t be a single-ministry clean-up—it’s a whole-of-government refurbish.

THE DUBEY POST AND THE DETERRENCE DOCTRINE
After an allegedly inaccurate post targeted the ruling alliance, Dubey took to X to underscore the point: stringent measures are coming—punishments included. The report’s frame is classic deterrence theory: raise the cost of lying until it’s no longer an attractive business model.

AI: POWERFUL ASSISTANT, POOR EDITOR
The committee doesn’t treat artificial intelligence as a magic wand; it treats it as a toolkit. It calls for:

Human-overseen use of AI to detect manipulated media, cloned voices, and synthetic documents.

Exploration of licensing requirements for AI content creators, plus mandatory labels for AI-generated videos and imagery.

A national task force—lean but specialized—drawing representatives from I&B, External Affairs, MeitY, and legal experts to combat cross-border misinformation and coordinate responses when lies cross jurisdictions. In other words: let machines flag; let humans decide.

DEFINE THE DISEASE, THEN PRESCRIBE THE CURE
A recurring headache in any anti-misinfo regime is definitional mush. The committee notes that current descriptions of “fake news” and “misinformation” are ambiguous, and presses for clearer, narrower definitions inserted into existing rules for print, electronic, and digital media—with constitutional guardrails intact. The brief is to combat falsehoods without infringing upon free speech and individual rights.

SECTION 79 AND THE ALGORITHMIC MEGAPHONE
Stakeholders voiced familiar concerns about safe harbour under Section 79 of the IT Act. The critique: engagement-driven feeds often supercharge sensational content, giving disinformation a free escalator ride. The committee stops short of junking safe harbour but wants tighter accountability where platforms knowingly amplify proven fakery.

GLOBAL CRIB NOTES, LOCAL GUARDRAILS
New Delhi doesn’t need to reinvent every wheel. The draft suggests studying international best practices—for instance, France’s election-season misinformation law—while tailoring them to India’s federal realities and scale. Expect more inter-ministerial coordination at home and multilateral cooperation abroad, because virality ignores borders even when laws don’t.

NOT JUST STICKS: TEACH THE SCROLL
Penalties deter; literacy inoculates. The panel nudges ministries to:

Build a time-bound grievance redress system with digital tracking.

Develop a media-literacy curriculum spanning a student’s education arc—paired with training for teachers, librarians, and instructors.

Report progress on Press Council of India suggestions to integrate media studies into school syllabi.

Run public-awareness campaigns so citizens know how to spot a doctored clip before it sows panic.

WHO ANSWERS TO WHOM
The accountability maps the committee sketches are refreshingly specific:

Editors & Content Leads: Editorial judgments and verification lapses.

Owners & Publishers: Institutional systems that allowed the lapse.

Platforms & Intermediaries: Distribution choices, algorithmic boosts, and failure to act on verified flags.

It also proposes an independent regulatory body to standardize practices, ensure audits are conducted honestly, and impose higher penalties on repeat offenders.

THE MORNING AFTER THE RUMOUR MILL
By the time the committee’s recommendations began to circulate, newsroom Slack channels were already buzzing with a familiar, jittery question: “Okay—but what counts as a ‘mandatory’ fact-check?” Editors parsed screenshots; legal teams annotated PDFs; one producer muttered that “truth is becoming a compliance category.” In the background, every communication professional in Lutyens’ Delhi tried to decipher what “internal ombudsman” would mean on Monday mornings and Friday nights.

The core message, stripped of the jargon: Parliament’s tech-and-media sentinels want higher legal stakes for publishing falsehoods and built-in editorial guardrails—before the lie goes viral, not after. Multiple outlets report the panel called fake news a “serious threat” to public order and the democratic process and urged changes to penal laws alongside mandatory fact-checking frameworks and in-house ombudsmen.

THE BIG THREE: PENALTIES, PIPELINES, AND PEOPLE

PENAL PROVISIONS WITH SHARPER TEETH: The draft says the current toolkit isn’t deterring serial offenders. It presses for amendments to penal provisions and stiffer fines, with an emphasis on accountability for editorial decisions—not just the nameless “sources” who nudge stories off the rails. What that means in practice will depend on which statutes are touched and how due process is protected, but the direction of travel is unmistakable: raise the cost of reckless publishing.

MANDATORY FACT-CHECKING AS A SYSTEM, NOT A SLOGAN: At the heart of the overhaul is a compulsory fact-checking mechanism—not a post-mortem apology but a documented workflow: claim intake → source verification → second-level review → editor sign-off → audit trail. Think pre-flight checks for stories. The panel also wants an internal ombudsman in every organization—someone empowered, resourced, and annoying in a healthy way. Your newsroom needs a person whose full-time job is to say: “Where’s the evidence?” and “Are we about to sue ourselves?”
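
As a rough illustration of what that documented workflow could look like in code, here is a minimal sketch: the stage names mirror the pipeline above, but everything else (class names, fields, actors) is an invented assumption, not anything the draft prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Stages mirroring the draft's pipeline:
# claim intake -> source verification -> second-level review -> editor sign-off
class Stage(Enum):
    INTAKE = "intake"
    SOURCE_VERIFICATION = "source verification"
    SECOND_REVIEW = "second-level review"
    EDITOR_SIGNOFF = "editor sign-off"
    PUBLISHED = "published"

@dataclass
class Claim:
    text: str
    stage: Stage = Stage.INTAKE
    audit_trail: list = field(default_factory=list)  # every transition is logged

    def advance(self, actor: str, note: str) -> None:
        """Move the claim one stage forward and append an audit entry."""
        order = list(Stage)
        nxt = order[order.index(self.stage) + 1]
        self.audit_trail.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "from": self.stage.value,
            "to": nxt.value,
            "note": note,
        })
        self.stage = nxt

claim = Claim("Minister announced scheme X at the rally")
claim.advance("reporter.a", "claim logged from rally feed")
claim.advance("desk.b", "two independent sources on record")
claim.advance("reviewer.c", "primary document cross-checked")
claim.advance("editor.d", "signed off for publication")
assert claim.stage is Stage.PUBLISHED and len(claim.audit_trail) == 4
```

The audit list is the whole point: every “who decided what, and when” survives the news cycle, which is exactly the traceability the committee is betting on.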

PEOPLE WHO CAN PARSE PIXELS FROM POISON
The committee explicitly calls out AI-driven fakery—the face swaps, cloned voices, counterfeit paperwork, and “it-looks-real” screen recordings that flood timelines. It urges the government to develop legal and technical solutions—including labels for synthetic media, transparent responsibility chains, and coordination between I&B and MeitY—and cautions that, while AI can flag suspect content at scale, final fact-checks must be human-led. That’s not techno-panic; it’s process hygiene.
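
What “AI can flag at scale, but final calls stay human” might look like as a gate, purely as a sketch: the detector, the threshold, and the queue are all placeholder assumptions, not a specified mechanism.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    item_id: str
    score: float   # model's manipulation-likelihood estimate, 0 to 1
    reason: str

def ai_screen(items, detector, threshold=0.7):
    """Machines flag: anything at or above the threshold joins a human
    review queue. `detector` is any callable returning (score, reason)."""
    review_queue = []
    for item_id, media in items:
        score, reason = detector(media)
        if score >= threshold:
            review_queue.append(Flag(item_id, score, reason))
    # Humans decide: nothing is labelled "fake" automatically here.
    return review_queue

# Toy detector standing in for a real deepfake model (an assumption).
def toy_detector(media):
    return (0.9, "compression artefacts") if "suspect" in media else (0.1, "clean")

queue = ai_screen([("a1", "suspect clip"), ("a2", "clean clip")], toy_detector)
assert [f.item_id for f in queue] == ["a1"]
```

Note what the function refuses to do: the model never issues a verdict; high scorers simply jump the human queue.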

THE HOW QUESTION: BLUEPRINTS, NOT VIBES
A mandatory system needs more than a decree—it needs architecture. Here’s what a compliant newsroom could look like (a code sketch follows the list):

FRONT-DOOR FILTERS: Every fact entering the CMS carries a status tag—unverified / partially verified / verified. Unverified claims are sandboxed; they don’t publish without escalation.

TWO-SOURCE RULE, DOCUMENTED EXCEPTIONS: For high-risk assertions (crime, public health, elections), two independent sources—or one source + primary document—are required. Exceptions need editor-level approval with a written rationale.

FORENSIC TOOLKITS: Reverse-image searches, deepfake detectors, checksum tools for documents, and phone forensics for call-record verifications—all logged.

RED-TEAM DESK: A small internal group that tries to break the story before the public does: What’s the weakest link? What would opponents challenge?

OMBUDSMAN TRIGGERS: Complaints above a severity threshold auto-route to the ombudsman, who can freeze updates to a story until a review concludes.

AUDIT TRAILS: If it wasn’t logged, assume it didn’t happen. That’s harsh—but regulators and courts tend to agree.

None of this dictates the editorial line. It demands evidence discipline.
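
To make those gates concrete, here is a minimal sketch: the status names track the front-door tags above, while the severity scale, threshold, and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class ClaimStatus(Enum):          # front-door filter tags
    UNVERIFIED = "unverified"
    PARTIALLY_VERIFIED = "partially verified"
    VERIFIED = "verified"

@dataclass
class Story:
    slug: str
    high_risk: bool                       # crime, public health, elections
    sources: list = field(default_factory=list)
    documents: list = field(default_factory=list)
    status: ClaimStatus = ClaimStatus.UNVERIFIED
    frozen: bool = False                  # ombudsman can freeze updates

    def apply_two_source_rule(self) -> None:
        """Two independent sources, or one source plus a primary document,
        for high-risk assertions; one source suffices in this toy otherwise."""
        if len(self.sources) >= 2 or (self.sources and self.documents):
            self.status = ClaimStatus.VERIFIED
        elif self.sources:
            self.status = (ClaimStatus.PARTIALLY_VERIFIED
                           if self.high_risk else ClaimStatus.VERIFIED)

    def publishable(self) -> bool:
        # Unverified claims stay sandboxed; frozen stories wait for review.
        return self.status is ClaimStatus.VERIFIED and not self.frozen

SEVERITY_THRESHOLD = 7  # hypothetical 0-10 scale for complaint routing

def route_complaint(story: Story, severity: int) -> str:
    """Complaints above the threshold auto-route to the ombudsman,
    who freezes updates until the review concludes."""
    if severity >= SEVERITY_THRESHOLD:
        story.frozen = True
        return "ombudsman"
    return "desk"

s = Story("poll-eve-audio", high_risk=True, sources=["eyewitness"])
s.apply_two_source_rule()
assert not s.publishable()            # one source on a high-risk story: sandboxed
s.documents.append("primary transcript")
s.apply_two_source_rule()
assert s.publishable()
assert route_complaint(s, severity=9) == "ombudsman" and s.frozen
```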

WHAT ABOUT PLATFORMS AND CREATORS?
The recommendations zero in on media organizations, but the practical spillover is apparent:

Platforms will face mounting pressure to label and throttle disputed content, preserve evidence, and escalate faster on government or ombudsman notices—especially when AI is involved.

Creators and influencers who repeatedly push falsehoods may find themselves in contractual crosshairs: demonetized, deprioritized, or bound by stricter terms that mirror newsroom standards.

Adtech could be the surprise enforcement lever. If agencies require proof of fact-checking for campaign adjacency, the market will punish sloppy publishers long before a judge does.

Expect this to become a shared-burden regime: publishers first, platforms next, intermediaries last—but nobody gets to shrug.

WHAT IF THE FACT-CHECKER IS WRONG?
Good question. Fact-checking is a method, not a deity. The committee’s model—mandatory mechanisms + internal ombudsman—is really a bet on traceability: if you can show your work, you can fix your errors faster and defend your decisions better. That matters when penalties rise. Of course, an ombudsman is only as credible as their independence and teeth. Expect fierce debates about who appoints them, how they can be removed, and what happens when they clash with the editor-in-chief. (Short answer: keep those meeting rooms soundproof.)

THE CONSTITUTIONAL TIGHTROPE
India’s free-speech jurisprudence allows reasonable restrictions—public order, defamation, incitement—but history also warns against over-breadth and selective enforcement. The committee’s language frames fake news as a public-order threat and calls for more substantial penalties and required checks. The line between curbing harm and chilling dissent will depend on drafting finesse: clear definitions, narrow tailoring, due-process pathways, and transparent oversight. Courts will want to see harms answered with proportionate tools, not cudgels. (That “AI can flag; humans must decide” clause is a clever nod to proportionality.)

THE AI PROBLEM YOU CAN’T OUTSOURCE TO AI
Generative models are confident artists—fluent, fast, and occasionally wrong with panache. The panel’s insistence that AI shouldn’t be the final arbiter of truth is more than conservative prudence; it’s a recognition that context beats pattern-matching. AI can spot the recycled flood image, the tell-tale compression artefacts, and the checksum mismatch. But it can’t (yet) weigh two eyewitnesses with conflicting motives or spot the off-record nudge from a source with skin in the game. That’s human work—and the recommendation keeps it human.

WHY THIS PUSH, AND WHY NOW?
Because the bill for unchecked disinformation has arrived—itemized. The report cites harms to public order, democratic processes, individual reputations, market integrity (hello, stock manipulation), and the credibility of the press itself. In a world where a single faked audio can tank a brand, roil an election, or whip up a mob, the committee’s answer is simple: stop treating accuracy as optional and start treating it as infrastructure.

THE FINE PRINT THAT WILL MAKE OR BREAK IT
Grand designs fail on small details. Watch for:

DEFINITIONS: Narrow, testable, and context-aware—so satire isn’t policed like sabotage.

DUE PROCESS: Clear appeals and transparent corrections so honest errors aren’t punished like intent to deceive.

OMBUDSMAN INDEPENDENCE: Appointment, tenure, and veto power must be strong enough to matter and visible enough to build trust.

SMALL-NEWSROOM VIABILITY: Tooling and audits must be affordable; otherwise, reforms entrench giants and smother local voices.

DATA GOVERNANCE: How long to keep verification logs, who can access them, and how whistleblower protections work inside newsrooms.

THE ROAD TO THE HOUSE—AND BEYOND
The Dubey-led panel has submitted the report to the Lok Sabha Speaker and expects a parliamentary tabling next session. From there: consultations, notifications, and—inevitably—courtroom tests. If the centre can translate recommendations into clear SOPs instead of foggy mandates, compliance won’t feel like trench warfare.

India’s information sphere is getting a pre-flight checklist. If the proposals land as intended, lies get slower, truth gets faster, and audiences get a fighting chance. Done clumsily, the regime chills reporting and hands bad actors a new set of loopholes. Done well, it does something rare in the age of virality: it rewards craftsmanship over outrage.

Until then, consider this the new house rule for anyone publishing to the public square: verify like your reputation depends on it—because it does.

THE ELECTION-SEASON TEST
The real stress test will arrive in the run-up to major polls—when rumour mills shift to turbo and deepfakes grow fangs. The panel’s recommendations aim to keep the information machine governable: stop the obvious hoaxes quickly, label AI fabrications clearly, and ensure that controversial but legitimate reporting isn’t stifled by vague rules. Implementation notes to watch for (sketched in code after the list):

RAPID-RESPONSE SLAs: How quickly must a flagged claim be checked?

ESCALATION LADDERS: When does a dispute jump from desk to ombudsman to regulator?

APPEAL PATHS: How do publishers contest a foul call without the fix taking longer than the news cycle?

DATA RETENTION: How long must verification logs be preserved—and who can access them?
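
In machine-readable form, those four notes might reduce to a small policy object. Every number below is a placeholder; the committee has set none of these values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ElectionSeasonPolicy:
    """Placeholder values only; none of these figures come from the draft."""
    check_sla_hours: int          # how fast a flagged claim must be checked
    escalation_ladder: tuple      # desk -> ombudsman -> regulator
    appeal_window_hours: int      # time to contest a foul call
    log_retention_days: int       # how long verification logs are kept

POLICY = ElectionSeasonPolicy(
    check_sla_hours=4,
    escalation_ladder=("desk", "ombudsman", "regulator"),
    appeal_window_hours=24,
    log_retention_days=365,
)

def next_escalation(current: str) -> str | None:
    """Who hears the dispute next, or None if the ladder is exhausted."""
    ladder = POLICY.escalation_ladder
    i = ladder.index(current)
    return ladder[i + 1] if i + 1 < len(ladder) else None

assert next_escalation("desk") == "ombudsman"
assert next_escalation("regulator") is None
```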

FIVE FRICTION POINTS YOU’LL HEAR IN THE NEXT 30 DAYS

DEFINITION DRIFT: What is “fake”? False facts? Misleading framings? Satire gone wrong? The narrower the definition, the fairer the enforcement.

OMBUDSMAN INDEPENDENCE: Who appoints, who pays, and how insulated are they from management’s bad hair days?

CROSS-BORDER CONTENT: When narratives are born on a foreign platform and laundered through domestic groups, whose accountability—and under which law?

CHILLING EFFECTS: Will small outlets spike investigative stories rather than risk rupee-draining litigation?

AI LABELING STANDARDS: Watermarks? Cryptographic provenance? Platform-level badges? Labels that users actually notice.
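
For a feel of the “cryptographic provenance” option at its barest, here is the content-hash idea that sits underneath standards such as C2PA, as a sketch only; the byte strings are stand-ins for real media files.

```python
import hashlib

def content_fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of the media file: any edit, however small, changes
    the fingerprint, so a published hash lets anyone check whether the
    clip they received is the clip that was signed off."""
    return hashlib.sha256(media_bytes).hexdigest()

original = b"raw video bytes"
tampered = b"raw video bytes, one frame swapped"

assert content_fingerprint(original) == content_fingerprint(original)
assert content_fingerprint(original) != content_fingerprint(tampered)
```

A hash alone proves nothing about who made the file; real provenance schemes pair it with a signature chain, which is where the labeling debate gets hard.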

A NEWSROOM SURVIVAL KIT (STEAL THIS)
MAKE A RISK MAP: Elections, health, law-and-order, interfaith tensions—build stricter pre-publication gates for these beats.

CREATE A CLAIMS BOARD: One Notion page (or equivalent) where every contested claim lives until verified or spiked (see the sketch after this list).

TRAIN FOR FORGERY: Every six weeks, run one drill on deepfake audio, one on doctored PDFs, and one on synthetic screenshots.

WRITE YOUR CORRECTIONS PLAYBOOK: Speed, prominence, and transparency codified.

EMPOWER THE OMBUDSMAN: Give them read/write access everywhere and veto power on high-risk posts. If they’re ornamental, you’re not compliant.
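
The claims board from step two needn’t be fancy; a shared registry with an explicit lifecycle does the job. A minimal sketch, with invented field names:

```python
from enum import Enum

class Outcome(Enum):
    OPEN = "open"
    VERIFIED = "verified"
    SPIKED = "spiked"

claims_board: dict[str, dict] = {}   # one shared page, many watchers

def log_claim(claim: str, beat: str, owner: str) -> None:
    """Every contested claim gets an entry the moment it surfaces."""
    claims_board[claim] = {"beat": beat, "owner": owner, "outcome": Outcome.OPEN}

def resolve(claim: str, outcome: Outcome, evidence: str) -> None:
    """Claims leave the board only as VERIFIED or SPIKED, never by silence."""
    entry = claims_board[claim]
    entry["outcome"] = outcome
    entry["evidence"] = evidence

log_claim("candidate audio from rally", beat="elections", owner="desk.a")
resolve("candidate audio from rally", Outcome.SPIKED,
        evidence="waveform splice; no original source produced")
assert claims_board["candidate audio from rally"]["outcome"] is Outcome.SPIKED
```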

WILL THIS ACTUALLY WORK?
Against low-effort misinformation—the recycled flood photo, the mislabeled protest clip—yes. Against bespoke disinformation—the coordinated campaign with polished assets and patient operators—it’s more complicated. But even there, documented processes raise the cost of deception, and clear penalties reshape incentives. If the system proves even half as nimble as the rumours it aims to defang, public trust should tick upward. If it turns into a paperwork maze, audiences will simply route around it.

The committee’s most valuable sentence might be the quietest one: AI can help, but human editors must decide. That’s not nostalgia. It’s a design choice.

THE ROAD FROM RECOMMENDATION TO REGULATION
A draft report is not a law. From here, ministries will draft, consult, redraft, and (inevitably) litigate. Expect phased obligations, model SOPs, and sector-specific advisories. The smartest move the government can make is to publish sample workflows and reference toolkits that small outlets can adopt off the shelf, then certify them via third-party audits rather than micromanaging content.

Do that, and the ecosystem upgrades without turning every editor into a clerk.

BOTTOM LINE
The committee isn’t trying to outlaw error; it’s trying to outlaw indifference. Mandatory fact-checking and in-house ombudsmen won’t eliminate bad information, but they will raise the floor—and, with sharper penalties, raise the stakes for those who game the system. If the drafting is careful and the oversight credible, India could end up with a rare thing: a speech-preserving, harm-reducing framework for the age of generative everything. If not, we’ll have built a costly filing cabinet.
