The Fake News Trap: When the Display Looks Real but the Truth Is Hidden

In this Morning Coffee Thoughts post, I reflect on the Philippine congressional hearings targeting vloggers accused of spreading fake news—and why the bigger threat might not be the content creators, but the unchecked power behind the hearings.

There’s a new coffee shop in Cabanatuan I passed by the other day. Clean, inviting, and filled with the scent of freshly ground beans. At the center of the shop were sacks of coffee on display—light brown, dusty, earthy. I bent down to take in the aroma, hoping to guess the roast. The owner smiled and told me not to bother—those beans were spoiled and only for display. The real beans, the ones worth brewing, were kept behind the counter. Fresh, unopened, and only ground when someone buys.

I chuckled to myself. Fooled by the display.

It reminded me of a meme I saw online a few mornings ago. It made me laugh at first—clever, sharp, and timed to perfection. But a quick check revealed it was entirely false. Not satire, not mistaken. Just fake. A display made to fool. And I fell for it, even if only for a few seconds.

I don’t share memes anymore, not without checking. Not after 2016. But moments like these still get to me. How many others laughed and reposted that meme without knowing it was fabricated? How many others believed?

What started as harmless jokes or half-truths on timelines has turned into something deeper—something darker. We’re no longer just talking about fake news as noise. It’s turned into a system, a strategy, a weapon. And now, lawmakers want to regulate it. They want to call it out. But in doing so, they’ve also summoned vloggers, influencers, and ordinary content creators—accusing some of spreading disinformation, and threatening others with contempt for not showing up.

Suddenly, the question isn’t just what’s true? It’s who gets to decide?

And that’s where the danger lies.

Disinformation in the Philippines: A Deepening Divide

It’s hard to tell what’s real online these days. Sometimes I scroll past a post, and even if it sounds off, I catch myself nodding. Then I look again—and realize it’s not even true. I’d like to think I know better, but it gets harder each year.

Fake news didn’t just sneak into our feeds—it took root. It’s been around since 2014, but something shifted when people started using it not just to mislead, but to win elections. We all saw it. Stories shared without question, videos edited just enough to look believable, headlines that confirmed what people already wanted to think.

And now, we’re divided—not just by opinions, but by the things we think are facts.

Jay Ruiz from the Presidential Communications Office said it himself. We’re being pitted against each other online (PCO). I’ve felt it. I’ve seen how some people stopped talking to each other after elections. You bring up one topic, and suddenly there’s tension where there used to be laughter. That’s not just about politics. That’s what disinformation does—it rewires how we see each other.

So when Congress decided to investigate fake news, I wasn’t surprised. They formed a tri-committee and called in vloggers who’ve been accused of spreading disinformation. But it didn’t end there. Some didn’t show up. Now they’re being threatened with contempt and possible detention.

And here we are again. Another divide. This time, between protecting people from lies—and protecting their right to speak.

The Tightrope Walk: Where Free Speech Meets Regulation

I’m not a lawyer. I don’t memorize court rulings or follow the fine print of contempt charges. But I’ve watched enough hearings to know that when lawmakers start pressing someone in public, things can shift fast. Especially when you're not the one with the microphone.

About 40 vloggers and influencers were invited to appear before Congress to explain their posts. Most didn’t show up. Only three attended the first hearing, and the rest were warned—they could be cited for contempt if they continued to ignore the summons.

Eventually, some of them came. And one by one, the apologies started.

I saw clips of the hearing where a few vloggers broke down in tears. One of them, Mark Lopez, admitted on record that he had peddled fake news. But what struck me wasn’t just the admission—it was the way they spoke. It didn’t feel like they were trying to clarify anything. It felt like they just didn’t want to get detained. Like they were trying to make it out of the room without becoming an example.

Now just to be clear—I know who these vloggers are. DDS (diehard Duterte supporters), hardcore, loud, and yes, peddlers of disinformation. That part’s not up for debate. But this article isn’t about defending them.

It’s about asking: Is this really how we want to regulate speech?

Because right now, they’re the ones being called in. But what happens when a different administration is in power? What happens when it’s another vlogger—someone on the opposite side of the political fence—facing that same kind of pressure?

That’s the risk when rules bend for one group and not the other. Today, the targets are convenient. Tomorrow, it could be someone we agree with.

So I went back and checked the Constitution. Article III, Section 4 says no law shall be passed abridging the freedom of speech, of expression, or of the press. It’s one of the clearest protections we have. And it exists even for those who say things we don’t agree with.

But when a hearing becomes less about truth and more about fear, I wonder if we’re really confronting fake news—or just making people too scared to say anything at all.

What Is Fake News, Really?

We throw the term around so casually now—fake news. It’s become a shortcut in arguments, a punchline in memes, a way to shut someone up without actually saying anything meaningful. But if we’re going to talk about it, especially in the context of congressional hearings and public pressure, then we need to pause and ask: what does it even mean?

Not all wrong information is the same. I had to look into it myself.

Disinformation is when someone spreads false information on purpose. They know it’s wrong, and they post it anyway to mislead, to provoke, or to manipulate.

Misinformation, on the other hand, happens when people share something that isn’t true—but they don’t know it’s false. Maybe they saw it from someone they trust. Maybe it looked real enough. There’s no intent to deceive—but the damage can still happen.

Then there’s misleading content. This one’s tricky. It might be technically true, but presented in a way that removes context. A quote taken from the middle of a speech. A screenshot that leaves out the rest of the story. It's not always about lying—it’s about shaping perception.

The problem is, all of these different things now get labeled under one phrase: fake news. And instead of helping us figure things out, it’s often used to dismiss people we disagree with. Someone calls out the government? Fake news. Someone posts a correction online? Fake news. It’s become a weapon—not just against lies, but against dissent.

That’s why I remembered the Anti-False Content Bill from a few years ago. It tried to regulate disinformation, but the definitions were so broad that almost anything could be punished. Political opinions. Satire. Commentary. It raised real fears that it could be used to silence critics, not protect the public.

And that’s the danger when we stop being careful with words. When fake news stops meaning “a lie” and starts meaning “anything that makes us uncomfortable,” we lose more than just clarity. We lose the space to speak honestly—even when we’re wrong.

Christian Esguerra’s “Journalistic Truth”: A Better Standard

I don’t know Christian Esguerra personally, but I’ve followed his work enough to say this: he’s one of the few voices in media who still sounds like he’s thinking things through, not just reacting.

In his podcast Facts First, he talks about what he calls “journalistic truth.” And no, it’s not about being all-knowing or pretending to have the final answer. It's more grounded than that. It’s about starting with the facts we do have, being transparent about the ones we don’t, and refusing to take shortcuts—even when the algorithm wants you to.

That kind of discipline feels rare now.

We live in a time where outrage spreads faster than clarity. Posts go viral not because they’re helpful, but because they’re loud. Sometimes, I scroll past clips with dramatic music, spliced edits, and bold fonts—just to find out later they left out the most important part. What Christian pushes for isn’t perfection. It’s responsibility. And that sits differently with me.

He reminds people that truth isn’t something you win—it’s something you work toward. It's not about who speaks first or loudest. It’s about who’s willing to check, to listen, to revise when something new comes up.

And maybe that’s the middle ground we’re missing. Between government overreach and careless posting, maybe what we really need is this quieter kind of standard. One that doesn’t come with threats or applause—just an honest effort to get it right.

Because you don’t have to be a journalist to tell the truth. You just have to care about it.

What the World Is Doing: Global Lessons in Moderation

Sometimes, when things get messy here, I catch myself wondering—are other countries doing any better? Or are we all just trying to figure this out one bad post at a time?

Turns out, everyone’s been experimenting. Some with education. Others with AI. And some… well, they’ve gone a bit too far.

In Sweden, they decided to start early. Kids aged 10 to 18 are taught how to spot fake photos, dissect memes, and double-check headlines. It’s part of their school system now, not just an after-school campaign. And it’s working. One study from the Carnegie Endowment’s EU Media Literacy Index showed a 33% improvement in recognizing manipulated images among Swedish students. That’s what happens when you treat fake news like a literacy issue—not a legal one.

Then there’s Brazil, where elections have been messy and loud. They created something called the “Fake News Radar”—an AI tool that monitored 200 million posts per day. Meta and Google worked with them. And it helped: election-related violence dropped by 19%. But it came at a price. About 83,000 accounts were suspended. That’s a lot of voices silenced, even if some of them needed to be. Makes you wonder: where’s the line between protection and control?

In the European Union, they passed a law called the Digital Services Act. It requires platforms like TikTok and Facebook to audit how their algorithms work—especially when it comes to what content gets pushed in front of users. After the audits, TikTok’s EU branch saw a 47% drop in disinformation exposure. But again, there’s the other side. Smaller platforms are struggling to keep up with the compliance costs. Fighting fake news, it seems, is also expensive.

And then there’s India. Their approach feels the heaviest. The government wants encrypted apps like WhatsApp to trace the origin of forwarded messages. They even proposed a government-run fact-checking unit. That didn’t sit well with everyone—lawsuits followed. During the 2023 farmer protests, over 1,200 accounts were blocked for allegedly spreading “logistics misinformation.” That’s not a small number. And it's hard not to feel uneasy when governments can silence that many people with one decision.

But as I was reading through these efforts, something bothered me. In all our talk here about regulating vloggers, summoning influencers, and filing contempt charges—where’s Facebook? Where’s YouTube? Where are the platforms where this disinformation actually spreads?

In Brazil, Meta was at the table. In the EU, tech giants were forced to open up their systems. But here? We keep focusing on users at the bottom of the chain. Meanwhile, the companies that profit from virality and engagement—the ones who allow fake news to trend in the first place—stay quiet.

Why not call them into a congressional hearing? Ask them what they’re doing about algorithmic amplification, moderation gaps, and fact-checking enforcement? If we’re serious about solving the problem, then we need to look at the machines that carry the lies—not just the people who post them.

Because if we don’t, we’re just repeating a pattern we’ve seen before.

At that same House hearing, vlogger Krizette Laureta Chu admitted under oath that she and other Filipino influencers had been sent to China for a seminar—one that was funded by the Chinese government and organized by the Chinese Communist Party’s international liaison office. She said it was about communication strategies and nothing more. But when lawmakers raised concerns, you could feel the shift in the room. It was no longer just about individual responsibility—it was about reach, influence, and whose hand might be guiding what we see online.

And yet, we’re still going after the easiest targets.

It feels a lot like the drug war under Rodrigo Duterte. We went after the small fish—the street-level users, the ones with no power and no protection. Meanwhile, the big suppliers, the syndicates, the names at the top? They were left alone. Some of them even got appointments.

Now we’re seeing the same thing in this so-called war against fake news. The vloggers and content creators may be noisy and reckless, but they’re not the source. They're just the easiest ones to drag into a hearing. The real disinformation engines—the troll networks, the tech platforms, the ones who benefit financially—are untouched. Unbothered. Unnamed.

Looking at all this, one thing stands out—nobody has it all figured out. As the Carnegie report reminds us, there’s no “silver bullet” for combating disinformation. Every solution comes with a trade-off. More control here might mean less freedom there. More safety for some could lead to more silence for others. These global efforts show that solutions are possible, but none are perfect—and some come at a steep cost.

The Role of Technology: Help or Hazard?

There was a time I thought social media gave us a seat at the table. That it leveled the playing field. That the quiet ones finally had a voice. But somewhere along the way, it changed. And maybe it wasn’t so much the people who changed—it was the system behind what we see.

We like to think algorithms just organize our feeds, but they do more than that. They decide what gets seen. What gets ignored. What gets repeated. And they’re not designed for truth—they’re designed for attention.

Behind every post, reel, video, or headline is a machine trained to keep you looking. Keep you scrolling. Not because it’s good for you, or for democracy, but because the longer you stay, the more ads you see. The more outrage you click on, the more of it you get. That's not a glitch. That’s the design.
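
To make that concrete, here is a deliberately tiny sketch in Python. It is not any platform’s actual code; the post fields and the weights are invented for illustration. What matters is what the score rewards, and what it never even looks at.

```python
# Illustrative toy only: not any real platform's ranking code.
# The fields and weights are hypothetical, chosen to make one point:
# every term in the score measures attention, and none measures accuracy.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float         # how likely you are to tap it
    predicted_watch_seconds: float  # how long the model expects you to stay
    predicted_shares: float         # how likely you are to repost it
    is_accurate: bool               # known to fact-checkers, never consulted below

def engagement_score(post: Post) -> float:
    # Rewards clicks, watch time, and shares. Truth does not appear anywhere.
    return (
        2.0 * post.predicted_clicks
        + post.predicted_watch_seconds / 60
        + 3.0 * post.predicted_shares
    )

feed = [
    Post("Calm, sourced explainer", 0.2, 45, 0.05, True),
    Post("Outrage bait with a fabricated quote", 0.6, 90, 0.40, False),
]

# The fabricated post ranks first, simply because it performs better.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.title)
```

The fabricated post wins the ranking, not because anyone decided it should, but because nothing in the objective ever asks whether it’s true.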

Ethicist Tristan Harris once said something that stayed with me: “You have to appeal to the algorithm to get elected… it controls what we do.” That’s chilling, but it’s true. Political strategy today isn’t just about ground campaigns or debate performances—it’s about engagement metrics. The algorithm has primacy over values.

And it's not just about what gets amplified—it’s about how we’re slowly being pulled into smaller and smaller corners. Filter bubbles. Echo chambers. Personalized feeds that wrap us in what we already believe. We start to think our view is the majority, because that’s all we see. Different perspectives become distant, even threatening.

It doesn’t help that we’re wired to pay more attention to fear, outrage, and moral triggers. That instinct used to help us survive in small tribes. Now, algorithms exploit it to boost watch time. And what spreads fastest in that kind of system? Not the truth. Not nuance. But lies dressed as headlines. A widely cited MIT study found that false news on Twitter reached people about six times faster than the truth. Not by accident, but because it performs better.

And foreign actors have figured that out, too. Russia. China. Iran. Even private contractors. They've learned how to game the system—mass posting, fake accounts, synced behavior—to hijack what trends and shape what people believe. Generative AI makes it even easier. You can now deploy thousands of fake identities at scale, all saying the same thing, all looking like real people.

We saw this play out in the Cambridge Analytica scandal. Psychological profiles built from Facebook data harvested without users’ consent, used to send perfectly tailored political ads to people based on their fears and insecurities. That wasn’t just unethical. It was algorithmically engineered manipulation.

And here’s the kicker—not everyone is affected equally. Studies have shown that far-right groups, for instance, get disproportionately high returns on small ad budgets. Some voices are algorithmically favored. Others are buried. That’s not free speech. That’s engineered inequality.

All of this makes one thing painfully clear: the architecture of these platforms no longer serves democracy. If anything, it undermines it.

But the story doesn’t have to end here.

Researchers at Stanford have shown that feed algorithms can be redesigned to reduce division instead of deepening it. The technology can be refocused to prioritize societal values instead of raw engagement. The problem isn’t that algorithms exist. It’s what we’ve trained them to reward.
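
To stay with the earlier toy example, and only as an illustration of that idea (this is not the Stanford team’s method; the numbers and weights are as hypothetical as before), here is what refocusing the objective might look like in miniature: keep the engagement signal, but finally let accuracy into the score.

```python
# Still purely illustrative, reusing the two toy posts and their
# engagement-only scores from the sketch above; not any real system's method.

posts = [
    # (title, engagement-only score, accurate?)
    ("Calm, sourced explainer", 1.3, True),
    ("Outrage bait with a fabricated quote", 3.9, False),
]

def value_aware_score(engagement: float, accurate: bool) -> float:
    # Keep rewarding engagement, but add the one term the old score never had.
    return engagement + (1.5 if accurate else -3.0)

for title, engagement, accurate in sorted(
    posts, key=lambda p: value_aware_score(p[1], p[2]), reverse=True
):
    print(round(value_aware_score(engagement, accurate), 2), title)
```

Same posts, same engagement numbers; only the objective changed, and the honest post now ranks first.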

It’s going to take more than just tech fixes. It’ll take regulation. Transparency. Digital literacy. Real accountability. We have to treat these systems the way we treat any democratic institution—with scrutiny, checks, and safeguards. Because they don’t just shape our feeds anymore. They shape how we think. How we vote. What we believe is real.

We built these platforms. Now we have to decide whether they keep serving shareholders—or start serving the people.

A Call for Discernment – What Ordinary People Can Do

I’ve been thinking about this a lot lately. It’s easy to feel small in all this—between congressional hearings, troll farms, billion-dollar platforms, and AI-powered fake accounts flooding the internet. Where does that leave us? The ones who just want to read the news, share something thoughtful, maybe scroll to unwind after work?

I don’t have grand solutions. But I do believe there’s still power in small, personal decisions. And that starts with discernment.

Here are a few things ordinary people like us can do:

  • Pause before sharing. Ask: Is this helpful? Is it verified? Or is it just emotionally charged content designed to provoke?

  • Be okay with being wrong. If you shared something inaccurate, say so. That alone is a quiet rebellion against the culture of pretending.

  • Make space for opposing views. Not every disagreement needs a reaction. Sometimes, just reading is enough.

  • Choose thoughtfulness over virality. Don’t measure your worth by likes. Say what matters, even if it doesn’t trend.

  • Practice digital kindness. Correct people gently. Call in, not just call out.

  • Talk to the people around you. A short conversation explaining why something is misleading can be more powerful than a trending post.

  • Support media and creators who take the hard path. The ones who fact-check, who explain, who don’t just echo what’s popular.

  • Demand better systems. Write. Vote. Campaign. Ask for accountability from the platforms themselves—not just from the users stuck in the middle of a broken design.

Discernment won’t fix everything. But it’s a start. It’s what we can do right now, from wherever we are. We may not be able to change the algorithm overnight. But we can stop letting it change us.

Conclusion: When Power Redefines Truth

After that coffee shop visit, I went home with a small pack of kapeng barako. Nothing fancy—just strong, dark, and familiar. The kind you don’t sip for flavor notes. You drink it because it wakes you up. Keeps you steady. Reminds you that some things don’t need to be complicated to be real.

That’s the frame I carry with me when I go online. Because I know—every time I scroll, read, or react—I’m moving inside a confined space. A space shaped by algorithms, narrowed by past clicks, and constantly reinforced by what I already think. An echo chamber I didn’t build, but one I live in by default.

So when I come across new information, I’ve made it a habit to pause. To fact-check. Not because I’m extra careful. But because it’s really not that hard. And in a world where truth is either diluted or weaponized, that pause matters.

Which brings me back to the heart of this blog.

This isn’t just about fake news, or vloggers, or algorithms. It’s about what happens when power decides it can define truth for everyone else. When a congressional hearing is no longer about dialogue, but about humiliation. When content creators, however flawed, are dragged into a room—not to explain, but to be made an example of. And when that power isn’t questioned, just clapped for.

Today it was DDS vloggers. Tomorrow it could be anyone who posts something a powerful person doesn’t like. That’s not democracy. That’s a warning.

And just like social media platforms quietly shape what we believe, power does too—through what it chooses to question, and what it conveniently leaves alone.

We’ve touched on many things—troll farms, foreign influence, tech platforms, education, even the limits of legislation. But underneath it all is one question: who gets to decide what truth is—and what happens when they misuse that authority?

That’s why this isn’t just a moment to point fingers at vloggers. It’s a moment to ask who’s sitting behind the counter of power. And what kind of future they’re preparing to serve us—quietly, while we’re distracted by what’s on display.

So we don’t just need laws. We need eyes open. We need ordinary people paying attention—even to the quiet stuff behind the counter.