AI-Powered Troll Farms and the Battle for Truth in the Philippines
AI is changing the game for troll farms in the Philippines, making disinformation faster, cheaper, and harder to detect. As the 2025 elections approach, the lines between truth and deception blur. How do we fight back?


Scroll long enough on social media, and something feels off.
A flood of comments, eerily similar, swamping every post about politics. New accounts pushing the same ideas, some with barely a week of activity before they explode into engagement. A once-civil discussion spirals into chaos, with faceless profiles stirring outrage in perfect synchronization.
It’s not an accident. It’s not organic. It’s engineered.
The Philippines, with its 92.5 million social media users, has become fertile ground for disinformation. What started as small troll operations—paid groups posting propaganda for political gain—has evolved into something far more dangerous.
Troll farms now have AI on their side.
AI has made disinformation faster, cheaper, and harder to detect. Bots pose as real users, AI-generated articles spread falsehoods at scale, and deepfakes fabricate events with unsettling realism. The result? A war on truth where deception spreads faster than facts—often unnoticed by those caught in the middle.
This isn’t just about elections. It’s about control—over public opinion, over history, over what people believe.
And because we’re not paying attention, it’s happening again.
The Rise of Troll Farms in the Philippines
Troll farms don’t operate for free. They exist because someone funds them.
Politicians, businesses, and powerful figures pay for influence. They need an army—one that can flood social media with support, silence critics, and manipulate public opinion.
And in the Philippines, that army is cheap.
The country’s low labor costs make it easy to hire operators who manage fake accounts, infiltrate discussions, and push pre-written disinformation. A single troll can run dozens of accounts, pretending to be different people, each one programmed to steer conversations in a certain direction.
But trolls alone can only do so much.
When Rodrigo Duterte’s administration used troll farms to drown out critics and manufacture public approval, the strategy was simple: Make online dissent invisible and amplify voices that supported them. That model has since been adopted by other political groups, evolving from paid commenters to automated disinformation networks.
And now, AI has taken it to the next level.
What once required thousands of real people can now be done with bots—cheaper, faster, and on a much larger scale. Fake engagement. Fake support. Fake outrage.
And with the 2025 elections approaching, the people funding these troll farms are already preparing for war.
How AI is Supercharging Troll Farms
Troll farms used to rely on human operators, each controlling multiple fake accounts. It was messy, time-consuming, and limited by manpower.
Now, AI does the heavy lifting.
Disinformation has become automated, scalable, and dangerously convincing. With AI, a single operator can flood social media with thousands of posts that look like they came from real people. Comments feel authentic. Fake news articles read like professional journalism. Even videos and voices can be faked with stunning accuracy.
Here’s how AI is reshaping disinformation in the Philippines:
AI-Powered Bots That Mimic Real Users
Bots no longer sound robotic. They engage in discussions, reply to comments, and even create posts that feel human (RAND).
Some are designed to argue, others to agree, but their goal is the same—control the conversation (Heinz College - Carnegie Mellon University).
Once enough bots interact, they create the illusion of majority opinion—making lies look like popular beliefs (CISA).
Generative AI Writing Fake News at Scale
AI tools like ChatGPT can mass-produce disinformation with human-like fluency (NATO).
Fake accounts use AI-generated posts to flood social media, making real information harder to find (International Journal of Social Sciences).
These AI-generated posts target specific groups, creating content that manipulates their beliefs and emotions (Fraud Blocker).
Deepfakes and Synthetic Media
AI can generate fake videos and audio recordings, making people say things they never did (East Asia Forum).
In politics, deepfakes can be used to spread scandals, fake endorsements, or discredit critics (Modern War Institute).
One recent example: a deepfake audio of President Marcos Jr. ordering a military attack—a fabricated event that fooled many (ISEAS-Yusof Ishak Institute).
Social Media Manipulation Strategies
AI doesn’t just create content—it ensures it spreads. Here’s how troll farms use AI to bury dissenting voices and dominate online spaces:
✔ Click/Like Farming – AI-powered bots flood posts with likes and shares to artificially inflate engagement. A paid influencer's post can suddenly "go viral" overnight, making it seem widely accepted when in reality, it’s a staged operation (Turing Institute).
✔ Hashtag Hijacking – Trending topics get hijacked as bots insert disinformation into high-traffic conversations. A political discussion about government policies can suddenly be flooded with off-topic but highly engaging propaganda, diverting attention away from real issues (Channel News Asia).
✔ Repost Networks – Coordinated accounts instantly repost from a central source, flooding platforms with identical content. This tricks algorithms into promoting the false information to more users, burying organic discussions (The Diplomat).
✔ Mass Reporting to Silence Critics – AI-powered bots mass-report posts from journalists, activists, or dissenting voices. Social media platforms, relying on automated moderation, take down these posts or suspend accounts—effectively silencing opposition (Reuters Institute).
✔ Manipulating Comments to Control Perception – AI can flood the comments section of viral posts with manufactured public opinion. If a government official gets criticized online, an army of bots will flood the comments with praise and counterarguments, shifting perception in their favor (Human Rights Pulse).
Troll farms don’t just spread lies. They erase the truth by flooding the internet with noise.
And with AI on their side, it’s happening faster than ever.
Who Troll Farms Target and What They Push
AI-powered troll farms don’t just spread lies randomly. Every post, comment, and manipulated trend has a target and a purpose.
They aim for people who pose a threat. They amplify issues that divide society. And they rewrite history and current events to shape public perception.
Here’s who they attack—and what they want people to believe.
Who’s in the Crosshairs?
Political Opponents: Discredit them, fabricate scandals, and suppress their reach (ISEAS-Yusof Ishak Institute).
Journalists & Activists: Silence investigations and smear reputations (Reuters Institute).
Religious & Ethnic Groups: Exploit divisions and stir conflict (RAND).
Academics & Historians: Bury facts that challenge the official version of events (The Diplomat).
AI-powered troll farms excel at targeting individuals, whether it’s a politician on the rise or a journalist uncovering corruption. They don’t just spread negative posts—they weaponize deepfakes, AI-generated articles, and manipulated statistics to make attacks seem more credible.
For example, after investigative reports exposed government-linked corruption, journalists in the Philippines faced waves of AI-driven harassment—fake posts, doctored screenshots, and coordinated smear campaigns (Human Rights Pulse).
Once the target is discredited, the next step is to control the discussion.
What Disinformation Is Being Pushed?
✔ Historical Revisionism: Troll farms rewrite history by erasing uncomfortable truths or over-glorifying past administrations. AI-generated content is used to downplay human rights abuses or inflate economic successes, particularly with the Marcos era (ISEAS-Yusof Ishak Institute).
✔ Political Propaganda: Whether it’s amplifying a leader’s “achievements” or spreading baseless corruption claims about an opponent, AI-driven disinformation floods social media in favor of those funding the troll farms (Fraud Blocker).
✔ Exploiting Social Issues: Fear is a powerful tool. AI-generated posts inflate crime statistics, fabricate violent incidents, or push divisive rhetoric about social problems to justify crackdowns or authoritarian policies (The Diplomat).
AI-Powered Lies Shape Public Opinion
AI makes disinformation look real. And when enough people believe it, it becomes reality.
This is why troll farms focus on controlling the conversation—not just by spreading false information but by burying the truth under AI-generated noise.
With the 2025 elections ahead, these tactics aren’t slowing down. They’re getting more sophisticated.
And without awareness, millions of Filipinos won’t even realize they’re being manipulated.
How to Detect AI-Powered Troll Farms
AI-powered troll farms are designed to blend in. They use fake profiles, bot networks, and mass-produced content to manipulate online spaces without looking obvious.
But they leave clues.
Here’s how to spot AI-generated disinformation and the accounts spreading it.
Suspicious Account Characteristics
Recently created accounts: Many troll accounts appear just before elections or major political events.
AI-generated or stolen profile pictures: Many use AI-created faces or stolen photos to seem real.
Suspicious following patterns: They follow and are followed by accounts that interact only with each other, forming coordinated networks.
Impersonation of real people: Some fake accounts pose as journalists, activists, or regular users to infiltrate discussions.
Paid verification marks: Some accounts purchase blue checkmarks to gain credibility and avoid content moderation filters.
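Taken together, these signals lend themselves to simple automated screening. The sketch below is a hypothetical heuristic, not a production detector: it just counts how many of the red flags above an account trips. The field names and thresholds (30 days, an 80% mutual-follow ratio, a 10:1 following ratio) are illustrative assumptions, not established cutoffs.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    created: date        # account creation date
    followers: int
    following: int
    mutual_ratio: float  # fraction of follows reciprocated inside one tight cluster
    paid_badge: bool     # purchased verification mark

def red_flag_count(acct: Account, today: date) -> int:
    """Count how many of the suspicious characteristics above the account shows."""
    flags = 0
    age_days = (today - acct.created).days
    if age_days < 30:                                 # recently created
        flags += 1
    if acct.mutual_ratio > 0.8:                       # closed, self-reinforcing follow network
        flags += 1
    if acct.following > 10 * max(acct.followers, 1):  # aggressive mass-following
        flags += 1
    if acct.paid_badge and age_days < 90:             # bought credibility on a fresh account
        flags += 1
    return flags
```

An account tripping three or four flags is worth a closer manual look; a single flag proves nothing, since plenty of legitimate new users exist.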
Content Red Flags
Repetitive messaging: Troll accounts post the same talking points across multiple platforms.
Emotionally charged language: The goal is to trigger anger, fear, or outrage, not inform.
Amplification of known propagandists: They frequently share and retweet specific disinformation sources.
Unusual posting frequency: Bots post at unnatural speeds, sometimes hundreds of times per day.
Identical or generic content: Some AI-generated posts feel unnatural, overly polished, or repetitive.
Inconsistent language patterns: AI is improving, but some posts still have awkward phrasing or unnatural grammar.
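The "identical or generic content" cue in particular can be checked mechanically. Here is a minimal sketch that flags near-duplicate posts by word-shingle overlap (Jaccard similarity); the 0.7 threshold is an assumption and would need tuning against real data:

```python
import re

def shingles(text: str, k: int = 3) -> set:
    """Break a post into overlapping k-word sequences, ignoring case and punctuation."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    if len(words) < k:
        return {tuple(words)}
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def near_duplicates(posts: list, threshold: float = 0.7) -> list:
    """Return index pairs of posts whose wording is suspiciously similar."""
    sigs = [shingles(p) for p in posts]
    return [(i, j)
            for i in range(len(posts))
            for j in range(i + 1, len(posts))
            if jaccard(sigs[i], sigs[j]) >= threshold]
```

Trivial rewording (punctuation, casing, a dash here and there) does not fool the check, which is exactly the kind of variation copy-paste troll accounts rely on.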
Behavioral Cues of Troll Activity
Coordinated posting: Multiple accounts post the same message within minutes of each other.
Sudden shifts in discussions: A topic starts trending, and trolls flood the comments with distractions or counter-narratives.
Targeting influential individuals: Journalists, academics, and politicians face sudden waves of harassment from suspicious accounts.
Hashtag hijacking: Trolls flood trending hashtags with off-topic disinformation to drown out organic discussions.
Mass reporting campaigns: Trolls coordinate to report specific posts, leading to temporary or permanent bans for real users.
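Several of these cues, especially coordinated posting, leave measurable traces in timestamps. The following hypothetical sketch flags any message posted by several distinct accounts inside a short window; the five-minute window and three-account minimum are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_bursts(posts, window=timedelta(minutes=5), min_accounts=3):
    """posts: iterable of (account, text, timestamp) tuples.
    Returns the texts posted by >= min_accounts distinct accounts
    within any single window-long burst."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((ts, account))

    flagged = []
    for text, events in by_text.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            burst = {acct for ts, acct in events[i:] if ts - start <= window}
            if len(burst) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

Researchers who map troll networks use far more robust versions of this idea (fuzzy text matching, sliding windows over millions of posts), but the underlying signal is the same: real people do not post identical sentences within minutes of each other.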
How to Protect Yourself From AI-Powered Disinformation
Question the source: Investigate where the information is coming from. Check if it’s being repeated by credible news outlets.
Seek multiple perspectives: Don’t rely on a single post or comment thread—compare different sources before forming an opinion.
Use fact-checking resources: Platforms like Tsek.ph and Rappler’s fact-checking service help verify information in real time.
Watch for AI-generated patterns: If a post feels too polished, check if other accounts are posting the exact same thing. AI-generated disinformation often lacks personal details or originality.
Don’t engage with obvious trolls: Many troll accounts thrive on outrage and engagement—blocking or reporting them denies them visibility.
Troll Farms Count on You Not Noticing
AI-powered disinformation works because it looks real. Troll farms flood social media with content that feels organic, sounds familiar, and blends in with real discussions.
But the more people recognize their tactics, the less effective they become.
Filipinos aren’t powerless against AI-powered lies—but awareness is the first step.
(SOURCES: Rappler, GMA Network, Fraud Blocker, RAND, Heinz College - Carnegie Mellon University, Reuters Institute, The Diplomat, CISA, Channel News Asia, Human Rights Pulse, ISEAS-Yusof Ishak Institute.)
How to Counter AI-Powered Troll Farms
AI-powered troll farms are escalating faster than efforts to stop them.
Fake accounts are created faster than they’re banned. Deepfakes spread before fact-checkers can debunk them. And as AI continues to improve, detecting fabricated content is becoming harder by the day.
But while disinformation is evolving, so are the efforts to fight back.
Governments, journalists, and fact-checking organizations are working to expose and limit the damage, but progress remains slow. At the same time, ordinary Filipinos must take responsibility—learning how to identify and resist AI-powered manipulation.
Here’s what’s being done—and what needs to change.
Detection and Monitoring Efforts
Disinformation thrives because it hides in plain sight. Detecting AI-generated propaganda requires tools that can analyze patterns, identify bot behavior, and trace the origins of false narratives.
Some of the most active efforts in the Philippines include:
Rappler’s AI-Powered Disinformation Tracking: Since the Duterte administration, Rappler has used AI tools to map disinformation networks, uncovering how troll farms manipulate online discussions. Their research has shown that troll accounts don’t just spread false information—they coordinate their attacks to make dissenting voices seem unpopular or “wrong” (Rappler).
Tsek.ph’s Fact-Checking Coalition: A collaboration of Philippine fact-checkers, Tsek.ph provides real-time verification of viral claims. Its researchers work to debunk deepfakes, manipulated videos, and AI-generated misinformation, but they face a critical challenge: by the time a false claim is corrected, millions have already seen and believed it (Tsek.ph).
Vera Files – A Philippine fact-checking organization that actively debunks disinformation, tracks false narratives, and investigates coordinated troll operations. It works with social media platforms to flag misleading content and provide verified information.
Philippine Center for Investigative Journalism (PCIJ) – An independent media organization known for exposing corruption and digital propaganda. It conducts in-depth investigations into how troll farms and AI-driven disinformation influence public perception and political narratives.
Facebook’s Takedown of Troll Networks: Meta has removed hundreds of troll farm accounts linked to Philippine disinformation campaigns. However, these takedowns are often reactive—by the time Facebook acts, new accounts have already replaced the old ones (GMA Network).
AI-Powered Detection Systems: Researchers worldwide are developing AI that can spot deepfakes, repetitive AI-generated content, and bot-driven manipulation. But AI-generated disinformation is always evolving, and detection tools are struggling to keep up (RAND).
Despite these efforts, the battle is uneven. The challenge isn’t just spotting AI-generated disinformation—it’s stopping it before it spreads.
Legislative and Platform Interventions
Even as governments and tech companies propose regulations, enforcement remains inconsistent—and in some cases, ineffective.
Social Media Crackdowns That Fall Short: Facebook and X (formerly Twitter) have removed thousands of fake accounts, yet disinformation continues to spread unchecked. The issue isn’t just individual trolls—entire networks are operating in the shadows, adapting as quickly as platforms remove them (Channel News Asia).
Proposed Social Media Regulations: Some lawmakers in the Philippines have pushed for mandatory ID verification for social media users, arguing it would prevent anonymous troll accounts. However, critics warn that this could endanger activists and whistleblowers who rely on anonymity for protection (The Diplomat).
Governments Pressuring Big Tech: The Philippine government has joined other nations in calling on Meta, TikTok, and YouTube to strengthen content moderation. Yet social media platforms still prioritize engagement over safety, allowing sensational, high-engagement falsehoods to spread faster than verified news (Reuters Institute).
Meta’s Abandonment of Fact-Checking: In 2024, Meta scaled back its fact-checking operations, removing funding from partnerships designed to combat disinformation. The result? AI-generated propaganda now faces even less resistance on Facebook. Troll farms exploit this gap, knowing their content is now less likely to be flagged or removed (GMA Network).
Meta’s decision to step away from fact-checking is a gift to AI-powered disinformation. Without strong enforcement, Facebook remains the main battleground for troll farms, allowing AI-generated propaganda to spread unchecked.
Unless social media platforms change their profit model—prioritizing truth over engagement—troll farms will continue to thrive.
Digital Literacy and Public Awareness
Filipinos are on their own when it comes to protecting themselves from AI-driven disinformation. Governments and platforms aren’t stopping it—which means the best defense is education.
Some steps being taken to improve digital literacy include:
Media Literacy in Schools: Universities and NGOs are working to train young Filipinos to recognize disinformation tactics. This includes spotting AI-generated content, understanding how troll farms operate, and fact-checking viral claims (ISEAS-Yusof Ishak Institute).
Fact-Checking Initiatives: Independent organizations like Tsek.ph and Rappler’s fact-checking division provide free resources to verify information. However, these initiatives often struggle with reach—fact-checks are seen by far fewer people than the original falsehood (Rappler).
Community-Based Awareness Campaigns: Some organizations are working at the grassroots level, teaching people in rural areas about AI-generated fake news and disinformation traps (Human Rights Pulse).
The real challenge isn’t just fighting disinformation—it’s making the truth reach people before the lies take hold.
False information triggers emotional reactions, making it more shareable. Even when a claim is debunked, people often remember the lie, not the correction.
Challenges in Fighting AI-Powered Disinformation
Even with all these efforts, the battle remains one-sided. AI-powered troll farms continue to grow because:
✔ AI Evolves Too Fast: New AI tools create convincing fake content, making detection harder.
✔ Platforms Profit from Engagement: Social media companies benefit financially from high-engagement posts—even if they’re false.
✔ Legal Loopholes Exist: Cybercrime laws are outdated, unable to fully address AI-driven disinformation campaigns.
✔ Distrust in Fact-Checking: Troll farms have discredited journalists, making some Filipinos mistrust legitimate debunking efforts.
The reality is simple: Troll farms are outpacing the defenses against them.
Fighting Back Starts with Awareness
Governments can pass new laws, and tech companies can develop better detection systems, but these efforts will always be reactive. By the time disinformation is flagged or removed, millions have already seen and believed it.
The best defense against AI-powered propaganda isn’t stricter policies or improved moderation—it’s critical thinking.
Filipinos need to question what they see online, especially when a post triggers outrage, confirms personal biases, or presents an argument that feels too perfectly written to be organic. Troll farms don’t just manufacture lies; they shape conversations, drown out dissenting voices, and manipulate emotions to steer public perception.
With the 2025 elections approaching, the fight against AI-powered disinformation is only going to intensify. The question is whether Filipinos will recognize the tactics being used against them—or fall for the same deception once again.
Unmasking the AI Arsenal of Troll Farms: A Deep Dive into Disinformation Technology
AI-powered disinformation isn’t just spreading faster—it’s becoming harder to detect.
Troll farms aren’t just relying on human labor anymore. They’ve automated their operations with AI-powered tools that generate fake content, manipulate engagement, and create deepfakes that distort reality.
To fight back, we need to understand their weapons.
This section reveals the AI software and techniques used by troll farms—how they work, how they evade detection, and why these tools are getting more powerful by the day.
The AI Tools Behind Troll Operations
Troll farms don’t just spread lies—they use sophisticated AI software to create, amplify, and manipulate online discussions. These tools fall into three key categories:
1. AI-Powered Content Generation
Fake News & AI-Written Propaganda: Language models can produce entire news articles, political rants, or misleading social media posts that sound authentic.
Synthetic Profile Creation: AI-generated images can create fake people with realistic faces, making troll accounts seem legitimate.
Deepfake Technology: AI tools can generate fake videos and manipulated images, making fabricated scenarios look real.
2. Automated Engagement & Algorithm Manipulation
AI-Controlled Bots: Troll farms use software that automatically likes, shares, and comments on posts, making fake news seem more popular than it actually is.
Mass Reporting & Hashtag Hijacking: Coordinated AI bots report legitimate accounts to silence critics and flood trending topics with misleading content.
3. AI-Generated Deepfakes & Video Manipulation
AI-Generated Audio & Video Manipulation: Some AI tools can clone voices, swap faces, and create realistic fake videos, making it difficult to tell real from fake.
Troll farms integrate these tools seamlessly, expanding their operations and ensuring that fabricated propaganda spreads widely while evading detection.
The Software That Powers Disinformation
Some of the AI-powered tools used by troll farms include:
Content Generation
Faker.js – Generates realistic fake names, addresses, and other data, allowing trolls to create believable fake identities.
Mockaroo – Produces large volumes of fake profile information, useful for populating troll accounts.
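Tools like these need no AI at all to be dangerous: bulk identity generation is trivial even with a standard library. Below is a stdlib-only Python sketch of the same idea, with all names, handles, and bios invented purely for illustration (real tools like Faker.js draw from far larger, locale-aware data sets):

```python
import random

# Invented sample pools for illustration only.
FIRST = ["Maria", "Jose", "Ana", "Juan", "Liza", "Marco"]
LAST = ["Santos", "Reyes", "Cruz", "Garcia", "Mendoza", "Torres"]
BIOS = ["Proud Pinoy", "Coffee lover", "Truth seeker", "OFW, missing home"]

def fake_identity(rng: random.Random) -> dict:
    """Assemble one plausible-looking synthetic profile."""
    first, last = rng.choice(FIRST), rng.choice(LAST)
    return {
        "name": f"{first} {last}",
        "handle": f"{first.lower()}{last.lower()}{rng.randint(100, 999)}",
        "bio": rng.choice(BIOS),
    }

rng = random.Random(42)  # seeded so the batch is reproducible
batch = [fake_identity(rng) for _ in range(500)]  # 500 "people" in milliseconds
```

Pair each identity with an AI-generated face and a language model for its posts, and the account is ready to deploy. The defense side uses the same insight in reverse: profile fields drawn from a suspiciously small pool are themselves a detection signal.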
Automated Engagement
TrollWall.ai – Automates content moderation and engagement, making it easy for troll farms to flood discussions while hiding opposing views.
Thryv – An AI-powered platform that automates mass interactions, helping trolls manage large numbers of fake accounts at once.
Deepfake Creation
DeepSwap – AI-powered face-swapping technology that makes fake videos appear realistic.
Synthesia – Converts text into AI-generated videos, allowing trolls to create fake video statements from politicians, influencers, and journalists.
DeepFaceLab – An open-source tool that gives full control over deepfake creation, making it a powerful weapon for disinformation.
These tools were originally developed for legitimate purposes, but bad actors have weaponized them for propaganda, political warfare, and large-scale online manipulation. AI now enables them to generate vast amounts of misleading content, impersonate real people, and artificially amplify their influence, making their campaigns more convincing and harder to trace than ever before.
How Troll Farms Combine These Tools
Troll farms don’t just use one AI tool—they coordinate multiple AI systems to execute large-scale disinformation campaigns.
Step-by-Step Example of a Troll Farm Operation
Creating Fake Identities: Trolls use Faker.js and Mockaroo to generate hundreds of fake social media accounts with realistic names and profile pictures.
Generating Fake Content: AI-powered language models create convincing fake news articles, social media posts, and political rants.
Automating Engagement: TrollWall.ai and Thryv coordinate fake likes, shares, and comments, making disinformation appear widely accepted.
Weaponizing Deepfakes: DeepSwap or Synthesia generate fake videos of political figures, further distorting public perception.
Suppressing Opponents: Bots mass-report critics, getting their accounts banned while dominating discussions with pro-troll propaganda.
This multi-layered approach allows troll farms to manipulate public discourse on an industrial scale—all while making it appear organic.
Why This Makes AI-Generated Disinformation So Dangerous
The combination of fake content, AI-generated engagement, and deepfakes makes disinformation campaigns more effective than ever.
AI-powered troll farms can now:
✔ Scale Their Operations – A handful of trolls can control thousands of fake accounts with minimal effort.
✔ Create More Convincing Lies – AI makes false information sound professional, well-researched, and believable.
✔ Mimic Human Behavior – AI-generated comments are indistinguishable from real conversations, making it hard to spot trolls.
✔ Evade Detection – Troll accounts rotate profiles, copy human speech patterns, and spread disinformation in waves to avoid platform bans.
Social media algorithms reward engagement, meaning AI-powered troll farms can force false information to trend, misleading millions before fact-checkers intervene.
Who’s Behind These Operations?
Troll farms rarely operate alone—they are often funded by governments, political groups, or private entities with vested interests. Some are state-sponsored, using AI-powered propaganda to shift public perception, silence critics, and manipulate elections both domestically and internationally. Others operate as private enterprises, offering disinformation as a service to corporations, politicians, and interest groups looking to distort reality in their favor.
Because AI tools are widely accessible, even small groups with limited resources can now engage in coordinated online manipulation. Open-source deepfake technology, automated engagement software, and AI-generated content mean that even a handful of individuals can launch large-scale disinformation campaigns that appear organic and credible.
This raises an urgent concern:
If disinformation is this powerful now, what will happen as AI technology becomes even more advanced and seamlessly integrated into online discourse?
The Future of AI-Driven Troll Farms
AI is changing the landscape of information warfare. With each advancement, disinformation becomes more sophisticated, more convincing, and more dangerous.
If nothing is done, AI-powered propaganda will:
Erase the line between truth and fabrication.
Make public perception easy to manipulate.
Turn elections into battles of technology, not ideas.
The war on truth is no longer about who has the best arguments—it’s about who controls the AI that shapes reality.
The only way to fight back? Awareness, regulation, and a commitment to critical thinking.
Because in a world where AI-generated disinformation can look indistinguishable from the truth, believing everything we see is no longer an option.
The 2025 Elections: What’s at Stake?
Every election in the Philippines has been shaped by disinformation, but 2025 will be different.
AI-powered troll farms won’t just flood social media with propaganda—they will refine, personalize, and automate deception on a scale never seen before. Fake posts will look more credible. Deepfake videos will be harder to detect. AI-generated engagement will make manipulated opinions seem like overwhelming public sentiment.
For voters, this means distinguishing truth from lies will be harder than ever.
The question isn’t whether AI will be used to manipulate the elections—it’s how much damage it will do before people recognize what’s happening.
A Testing Ground for AI-Driven Manipulation
Troll farms have already perfected their tactics, using AI to:
Manufacture Public Perception: AI-generated engagement—likes, shares, and comments—can make one-sided political views appear widely accepted, drowning out real discussions.
Destroy Opponents: Fake scandals, deepfake videos, and AI-generated articles can discredit candidates in ways that are nearly impossible to refute.
Hijack Trending Topics: AI bots will flood election-related hashtags, ensuring that only specific viewpoints dominate the conversation.
Suppress Dissenting Opinions: Coordinated mass reporting will get critics banned, making it harder for people to challenge misleading claims.
AI doesn’t need to convince everyone—it just needs to create enough confusion and division to make voters doubt what’s real.
And that’s how disinformation wins.
The Cost of Letting AI-Generated Lies Take Over
When disinformation runs unchecked, elections lose their legitimacy. People vote based on false information, and democracy becomes a tool for those who control the narrative.
If troll farms succeed in manipulating 2025, expect:
✔ A Leadership Built on Lies – If candidates win because of AI-generated deception, public trust in governance will collapse.
✔ A More Divided Nation – AI-powered manipulation doesn’t just target elections—it widens social and political divides, making unity even harder to achieve.
✔ A Future Where Truth Doesn’t Matter – If AI-driven disinformation keeps evolving, future elections will be decided by whoever has the best technology—not the best leadership.
These aren’t just hypothetical risks. They are the logical outcome of a system where AI and troll farms are allowed to shape public opinion without consequences.
What Filipinos Must Do to Defend the Elections
The 2025 elections will be a test—not just for politicians, but for the voters themselves. The only way to fight back is through awareness, vigilance, and critical thinking.
Here’s what needs to happen:
Voters must be more skeptical than ever. If a claim is too shocking, too perfect, or too convenient, it must be questioned.
Fact-checking must become a habit. Verifying before sharing is no longer optional—it’s a responsibility.
Candidates must call out AI-powered disinformation. Staying silent while AI-generated propaganda spreads is the same as endorsing it.
Filipinos must refuse to let AI dictate their vote. Elections should be decided by real people making informed choices, not by bots and deepfake campaigns.
This is the first major election where AI will play a decisive role.
And the outcome will shape not just 2025, but the future of elections in the Philippines.
If Filipinos fail to recognize the threat, AI-driven disinformation will only get worse.