My Facebook Feed Thinks I’m a DDS Supporter — and How I’m Fighting Back

After the original Morning Coffee Thoughts page got hacked, my new Facebook feed started mistaking criticism for fandom. This post takes a serious look at how our feeds are shaped: how platforms track, guess, and decide what we see, and what we can do to stop the system from promoting propaganda. A guide to taking back control, cleaning up your Facebook feed, and blocking the DDS.


Mornings are for writing. It’s when my head’s clear enough to put thoughts into words before the noise of the day kicks in.

But lately, even before I start drafting, my Facebook feed looks like a landfill of recycled propaganda. The same faces, the same recycled lines, the same DDS pages that never seem to die.

It’s strange because I had to start over. The original Morning Coffee Thoughts page got hacked, so I built a new one from scratch. Now the platform doesn’t seem to recognize me.

Since I often write about the garbage the Dutertes left behind, the system seems to think I’m one of their followers. It keeps feeding me their content like it’s doing me a favor.

I’ve known for a while that there’s an unseen filter shaping what reaches us online. But this is the first time I’ve taken a serious, deep look at how it works—and how I can train it to stop confusing criticism with support.

Because every scroll, every pause, every click teaches it something about who we are. And the more I write, the more I realize: it’s not just the content that shapes people. The system shaping the content needs watching too.

Understanding How Social Media Builds Your Feed

The feed you see isn’t random. It’s built piece by piece by a system that studies what catches your attention. Every post, every pause, every reaction feeds into that invisible process that decides what shows up next.

It begins with what’s called inventory — all the posts available to you. That includes updates from friends, pages you follow, groups you’ve joined, and even public posts you’ve never interacted with before (SocialBee, Hootsuite).

Then comes the signals phase. The platform looks at hundreds of data points: who posted it, what kind of content it is, when it was shared, and how you’ve interacted with similar posts in the past (Planable).

After that, the system makes predictions. It tries to guess how you’ll behave — whether you’ll like, share, comment, or watch a video till the end. Each prediction helps it learn what to prioritize.

Finally, everything gets a relevance score. The higher the score, the more likely the post is to appear at the top of your feed (Buffer).
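To make those four steps concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the signal names, the weights, and the numbers. Real ranking systems combine hundreds of learned signals, but the shape of the computation is the same: score every candidate post, then sort.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float       # how often you interact with this author (0 to 1)
    content_type_fit: float      # how well the format matches your habits (0 to 1)
    recency: float               # newer posts score higher (0 to 1)
    predicted_engagement: float  # the model's guess you'll react, comment, or share

# Hypothetical weights; real platforms learn these from billions of interactions.
WEIGHTS = {
    "author_affinity": 0.30,
    "content_type_fit": 0.15,
    "recency": 0.15,
    "predicted_engagement": 0.40,
}

def relevance_score(post: Post) -> float:
    """Combine the signals into the single score that orders the feed."""
    return (
        WEIGHTS["author_affinity"] * post.author_affinity
        + WEIGHTS["content_type_fit"] * post.content_type_fit
        + WEIGHTS["recency"] * post.recency
        + WEIGHTS["predicted_engagement"] * post.predicted_engagement
    )

inventory = [
    Post(0.9, 0.7, 0.2, 0.30),  # an older post from a close friend
    Post(0.1, 0.8, 0.9, 0.95),  # fresh rage bait from a page you never followed
]
feed = sorted(inventory, key=relevance_score, reverse=True)
```

Run it and the rage bait outranks the close friend's post, purely because the model predicts more engagement from it. That is the dynamic the rest of this piece pushes back against.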

The cycle repeats every time you scroll. It’s fast, constant, and fine-tuned to keep you from leaving. Platforms will always say their systems are neutral, but neutrality in this context is tricky. When engagement becomes the measure of relevance, whoever stirs the most reactions ends up dominating the feed (Knight Columbia).

That’s why the same posts, faces, and political pages keep resurfacing, even when you’ve moved on. The system isn’t trying to understand you — it’s trying to keep you scrolling.

How the System Tracks Your Behavior

The moment you open an app, it starts watching. Not in a paranoid way — just quietly, constantly, and with precision. Everything you do tells it something.

It begins with what you click, read, and watch. Each reaction — like, comment, share, or even just a few seconds of viewing — becomes a clue to your interests (Tencent Cloud, Wired). On TikTok, for example, your viewing time, likes, follows, and even the kind of videos you linger on form a detailed behavioral profile (Sage Journals).

Then comes dwell time — how long your eyes stay on a post before you move on. It’s one of the strongest signals that the system measures. Even without liking or commenting, just staring at something for a few seconds tells it you’re interested (The Social Content Factory, Prominence Global, arXiv).

It also pays attention to how you scroll. Every pause, flick, and hesitation becomes a datapoint. Slowing down on a certain post tells it to find more of the same. Speeding past something tells it to show less of that in the future (Amplitude, Rose & Gold).
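Here is a minimal sketch of what client-side dwell tracking can look like. The event names are hypothetical; the point is that nothing here requires a like, a comment, or any deliberate act at all.

```python
import time

dwell_log: dict[str, float] = {}       # total seconds each post spent on screen
_visible_since: dict[str, float] = {}

def on_post_visible(post_id: str) -> None:
    """Hypothetical callback: a post scrolls into the viewport."""
    _visible_since[post_id] = time.monotonic()

def on_post_hidden(post_id: str) -> None:
    """Hypothetical callback: the post scrolls out; the elapsed time is the signal."""
    started = _visible_since.pop(post_id, None)
    if started is not None:
        dwell_log[post_id] = dwell_log.get(post_id, 0.0) + time.monotonic() - started

on_post_visible("post_123")
time.sleep(2)               # you pause on the post, even just to wince at it
on_post_hidden("post_123")
print(dwell_log)            # {'post_123': ~2.0}: two seconds logged as "interest"
```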

The scariest part is that you don’t even have to engage for it to learn from you. Just looking counts. It’s a strange feeling — knowing your silence online is still being studied.

The Technical Side: What Your Smartphone Sensors Reveal

Phones today do more than track what you click. They can also sense how you look, move, and react. It sounds excessive, but it’s already part of how most modern devices work.

Some apps can estimate where your eyes are focused using just the front camera. Studies have shown that even without extra hardware, your phone can follow your eye movement with surprising accuracy (Nature, PubMed Central).

Then there’s facial expression analysis — systems that detect micro-expressions to guess what you’re feeling while scrolling. It’s not just theoretical anymore; researchers have mapped emotional responses in real time using facial landmarks captured through the camera (JMIR, UniBW).

Every tap and swipe also leaves a trace. Touch behavior — the way you scroll, the rhythm of your typing, even how firmly you press — can reveal patterns unique to you (PubMed Central, Kostakos.org).
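Even the timing alone is enough to sketch a fingerprint. Here is a toy illustration, with made-up tap timestamps, of how the rhythm of your touches reduces to a small, surprisingly stable profile.

```python
from statistics import mean, stdev

# Made-up timestamps (in seconds) of consecutive taps while typing or scrolling.
tap_times = [0.00, 0.18, 0.35, 0.61, 0.74, 0.95, 1.20]

# Inter-tap intervals: the rhythm of your touches.
intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]

# A crude behavioral fingerprint: your average rhythm and how much it wobbles.
profile = {
    "mean_interval": round(mean(intervals), 3),
    "interval_jitter": round(stdev(intervals), 3),
}
print(profile)  # values like these stay fairly consistent for the same person
```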

And behind all that, your phone is quietly counting how long you spend on each screen. Some apps even log total time per post or page to calculate how long your attention stays in one place (Medical Xpress).

It’s easy to forget how much these small movements add up. Every second, your phone is building a portrait of how you think, react, and linger. Not through what you say — but through what your body unconsciously gives away.

The “Always Listening” Microphone: Eavesdropping and Ad Targeting

Phones today don’t just watch. They listen too — even when you’re not talking to them.

Voice assistants like “Hey Siri” or “OK Google” keep the microphone active so they can respond anytime. The problem begins when other apps also have microphone access. If you’ve granted permission, they can pick up snippets of audio even when the app isn’t open (Norton, Surfshark, Spiralytics).

Ever notice how you mention a product in conversation, then suddenly see ads for it? That isn’t coincidence. Short audio segments can be analyzed for keywords and linked to your profile for ad targeting. The recordings aren’t always stored, but the extracted data remains (New Atlas).

The fix is simple, though not automatic. Check which apps have microphone access. Deny the ones that don’t need it to function. Most people forget these settings exist — and that’s exactly why background listening goes unnoticed.
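If you want to go beyond the Settings screen, here is one way to audit this on Android, a minimal sketch assuming USB debugging is enabled and the adb tool is installed. It lists the user-installed apps whose package info mentions the microphone permission; on an iPhone, the equivalent review lives under Settings > Privacy & Security > Microphone.

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb command against the connected phone and return its output."""
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

# List third-party (user-installed) packages.
packages = [
    line.split(":", 1)[1].strip()
    for line in adb("shell", "pm", "list", "packages", "-3").splitlines()
    if line.startswith("package:")
]

# Print every app whose package info mentions the microphone permission,
# whether merely requested or actually granted.
for pkg in packages:
    if "android.permission.RECORD_AUDIO" in adb("shell", "dumpsys", "package", pkg):
        print(pkg)
```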

Once I reviewed mine, I realized dozens of apps I never use still had permission to hear me. It was unsettling, but also liberating to finally switch them off.

Our devices already collect so much. We can’t stop every form of tracking, but we can draw a line where it matters: what enters through the mic should stay private.

Platform Policies, Moderation, and Legal Remedies

There’s another layer to all this — one that decides which voices get amplified and which quietly fade away.

When a post or page is reported as spam or misinformation by enough people, Facebook and similar platforms may limit its visibility. It can be flagged, hidden from timelines, or placed under review without the user ever knowing it happened. These mass reports, whether coordinated or not, often dictate what the public gets to see (Internet Governance Forum).

The process sounds fair on paper, but it isn’t always. Some groups use reporting tools to silence critics or independent writers. I’ve seen people lose their pages overnight — not because they spread lies, but because their truths upset the wrong crowd.

Still, there are real protections in place for those who experience digital harassment or defamation. In the Philippines, repeated online abuse and organized misinformation campaigns can fall under cybercrime and privacy laws. Victims can seek legal support or consult experts who specialize in online accountability (Respicio.ph).

Social media often feels lawless, but that’s not entirely true. The same tools used to manipulate visibility can also be used to defend it — if you know your rights and how to respond.


Step-by-Step: Blocking and Filtering on Major Platforms

After learning how these systems read us, I started cleaning up my own feed. It’s not dramatic work — just quiet, repetitive clicks that make your space a little lighter each time.

Facebook

Go to any post and tap the three dots, then choose Hide post or Snooze [account].
To stop seeing a page altogether, open it and hit Block.
You can also fine-tune your feed under Settings > News Feed Preferences. From there, unfollow, snooze, or prioritize who you actually want to hear from. (Facebook Help Center via SocialBee)

Instagram

Tap the three dots on a post and select Not Interested or Report.
For persistent accounts, go to their profile, tap the three dots again, and hit Block. (Instagram Support via Hootsuite)

TikTok

Long-press any video and choose Not Interested or Report.
If a user keeps showing up on your “For You” page, open their profile, tap the three dots, and select Block. (TikTok Safety Center via Wired)

Each platform updates these settings often, so it helps to check their help pages every few months.

Small actions add up. Blocking one account doesn’t feel like much, but over time your feed begins to breathe again.

The Tolerance Trap: Why Scrolling Past Content Backfires

There was a time when I thought ignoring DDS content was enough.
Scroll fast, don’t engage, and maybe it’ll stop showing up. I was wrong.

Every time you pause on a post, even for a heartbeat, the system interprets it as interest. It doesn’t care if you’re disgusted or amused—it only measures time and attention. That’s how hate pages keep slipping back into your feed.

Dwell time, scroll speed, and passive viewing all feed the same machine that curates your timeline. The longer you hover, the stronger the signal that you “like” that kind of content (Metricool, Dash Social, Brandwatch).

Blocking, muting, or marking posts as “Not Interested” sends the opposite signal. It teaches the system to treat those posts as noise.
Arguing in the comments does the worst kind of damage—it tells the platform the post is engaging, no matter what you said.
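To see why the explicit actions matter, here is a toy model of that feedback loop. The update rules and magnitudes are invented, but the asymmetry is the real lesson: a comment is the loudest "yes" you can send, while Not Interested and Block are the only clear "no."

```python
# Toy model: explicit feedback adjusting a per-topic interest weight (0 to 1).
interest = {"dds_propaganda": 0.50}

SIGNALS = {
    "dwell": +0.05,           # pausing on a post, even in disgust
    "comment": +0.15,         # arguing in the comments: the strongest "yes"
    "not_interested": -0.20,  # explicit negative feedback
    "block": -0.50,           # removes the source and downweights the topic
}

def record(topic: str, signal: str) -> None:
    interest[topic] = min(1.0, max(0.0, interest[topic] + SIGNALS[signal]))

record("dds_propaganda", "comment")         # 0.65: the feed serves you more
record("dds_propaganda", "not_interested")  # 0.45
record("dds_propaganda", "block")           # clamped to 0.00: treated as noise
print(interest)
```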

Silence, when it means scrolling past, isn’t neutrality. It’s fuel.
And that’s how the algorithm keeps the garbage floating to the top.

The Blocking Strategy: Taking Active Control

After learning how the system reacts to every move we make, the next step is simple but deliberate: stop feeding it.

Block, hide, or report any account or page that spreads propaganda or disinformation.
Unfollow and mute contacts whose posts drain your focus.
Use keyword filters when the option exists; a small sketch of the idea follows this list.
And most importantly, never comment, argue, or react to bait posts. Each response tells the system that the content deserves to stay on your feed (SocialBu).
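Where a platform or browser extension exposes keyword muting, the mechanism underneath is roughly this. A minimal sketch, with placeholder terms and placeholder posts:

```python
import re

# Placeholder mute list; real tools let you edit this in a settings panel.
MUTED = [r"\bdds\b", r"\btroll farm\b", r"\brevisionis\w+"]
pattern = re.compile("|".join(MUTED), re.IGNORECASE)

posts = [
    "Morning coffee and a clear head.",
    "Another DDS page recycling the same old lines.",
]

visible = [p for p in posts if not pattern.search(p)]
print(visible)  # only the first post survives the filter
```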

If you’re part of an online group or advocacy page, take it further. Encourage mass blocking and mass reporting when coordinated disinformation campaigns appear. These actions push the system to lower the visibility of those pages faster.

Reclaiming control isn’t a one-time act. It’s a habit. The more consistently you block and filter, the more your timeline learns what deserves to be seen.

The Limits of User Control

Even with all the blocking, filtering, and cleaning up, there’s only so much we can control. These systems were designed to keep us engaged, not to protect our peace.

DDS content can easily reappear under new names or recycled pages. Sometimes, even when a post gets mass-reported, it slips through because it doesn’t technically break any platform rule. And then there are sponsored posts — paid visibility that overrides whatever you tell the feed you don’t want to see (SocialBu).

Browser tools can help a little. Extensions that filter keywords or mute pages can reduce noise, though they can’t erase it completely (TechLockdown).

There’s no such thing as a perfectly clean timeline. Platforms change constantly, and what works today might not work next month. But cleaning up isn’t just a technical act — it’s a form of self-defense.

When you start filtering what you let in, you’re not escaping reality. You’re choosing which parts of it deserve your time.

Privacy, Ethics, and Mindful Tech Use

At some point, it stops feeling like technology and starts feeling like intrusion.
Every scroll, tap, or glance becomes another entry in a data file you’ll never see.

The truth is, constant tracking and background sensors raise real questions about consent. Many of us gave permission years ago without reading the fine print, and now those quiet permissions have turned into a permanent spotlight. Data brokers and ad systems trade in what we read, what we buy, and how long we stare at a screen (FS Poster).

Privacy laws exist, but enforcement is patchy. What’s legal in one country might be abuse in another. It helps to know your rights, review app permissions, and use browser tools that reduce tracking. The goal isn’t paranoia—it’s awareness.

There’s also the mental side of all this. Doom-scrolling through noise and hostility takes a toll. Studies have linked constant exposure to polarizing content with higher stress and anxiety (PMC, Information Matters). The mind wasn’t built to handle endless outrage.

Learning to scroll with intention matters. Curate your feed like you curate your surroundings. You can’t filter every ad or lie, but you can decide what deserves your time and attention. The line between awareness and exhaustion is thin—cross it too often and you start mistaking noise for truth.

Collective, National Action and Advocacy

Cleaning up your own feed helps, but the fight doesn’t end there.
What we see online isn’t shaped by one person alone — it’s the product of millions of clicks feeding the same system.

When coordinated misinformation campaigns flood timelines, individual blocking isn’t enough. That’s where collective action matters. Educate friends, family, and coworkers about how engagement works. Encourage them to report troll pages and propaganda networks when they appear. When whole communities move in sync, the system starts to shift.

Schools can also play a part. Digital literacy should be as basic as reading and writing — teaching young Filipinos how to question what they see, check sources, and recognize manipulation. It’s not paranoia; it’s survival in an environment designed to capture attention, not tell the truth.

At a larger scale, we need stronger policies. Lawmakers and regulators should demand transparency about how these feeds are built and moderated. Platforms must be held accountable for how their systems shape public opinion and political discourse (Knight Columbia).

It’s time to treat digital misinformation the same way we treat pollution — something that doesn’t just exist online but spills over into real life, poisoning how people think, vote, and decide.

Empower Yourself and Your Community

Awareness only matters if it turns into action.
Each block, unfollow, or report teaches the system what kind of content doesn’t belong in your space.

Show others how to do it. Not through lectures or long debates, but through quiet example.
When people see your feed free of trolls and noise, they’ll start asking how you did it — and that’s where the real teaching begins.

Be mindful of what you engage with. Every second you spend on a post, even in silence, tells the system to show you more of it. Guard your attention like something valuable. Protect it from the things designed to make you angry, tired, or numb.

We can’t rely on platforms to clean up the mess. Change begins when users stop rewarding the noise with attention.

So the next time a DDS post appears on your screen, don’t argue, don’t react, and don’t stay.
Block it. Report it. Move on.

When enough people do this, the system learns a new pattern — one that favors awareness over outrage.

The internet reflects what we feed it.
Let it mirror a country that values truth.
And yes, block the DDS.