
23 Feb 2026

Recce: A Healthier Social Media

Challenges of building a social media feed in an age of silos.

Tags: ai, social media, feed, ethics, algorithm


Recce: A Healthier Platform

Social platforms like X, Facebook, Instagram, and TikTok have turned the feed into the product. It decides what gets seen, what gets ignored, and what becomes culturally “true” for the people inside it.

And yet, two failures keep repeating.

First, rage bait wins. Posts that trigger anger or humiliation reliably generate fast replies and quote posts. The system reads that as relevance and gives it more reach, which creates more reactions, which earns even more reach.

Second, echo chambers form by default. People follow the voices they like, mute what irritates them, and the feed learns to show “more of you” until a user’s world narrows into a familiar tunnel.

When we started building Recce, we did not want to inherit those incentives. Our mission is to help people discover what to watch and to help creators receive honest feedback without being harassed. That forced a tough design question early:

Can a feed still feel alive without training itself to reward outrage and siloed thinking?

We think it can, but only if you treat the feed as a system with values, not just a system that maximises engagement.

What X’s published recommendation code shows about how modern feeds work

In 2023, X published an engineering overview of its recommendation pipeline and released code for key parts of how the Home timeline is constructed.

Their public description is surprisingly clear: the system pulls candidates from multiple sources, ranks them with a machine learning model, then applies heuristics and filters before assembling the final timeline.

Here is the core pipeline, visually, as most large scale feeds operate today.

flowchart TD
  A[User opens the app] --> B[Candidate sourcing]
  B --> C[Ranking model scores each post]
  C --> D[Heuristics and filters]
  D --> E[Timeline is constructed and served]

  B --- B1[In network sources]
  B --- B2[Out of network discovery sources]
  D --- D1[Blocked or muted checks]
  D --- D2[Duplicate removal]
  D --- D3[Safety and quality rules]
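The pipeline above can be sketched in a few lines. This is a minimal illustration, not X's actual code: the source functions, the dummy scoring, and the filter logic are all stand-ins for the real candidate sources, the ranking model, and the heuristics layer.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    author_id: int
    score: float = 0.0

# Hypothetical stand-ins for real candidate sources.
def in_network_candidates(user_id):
    return [Post(id=1, author_id=10), Post(id=2, author_id=11)]

def out_of_network_candidates(user_id):
    return [Post(id=3, author_id=42)]

def rank(posts):
    # A real system calls an ML model here; we assign a dummy score.
    for p in posts:
        p.score = 1.0 / p.id
    return sorted(posts, key=lambda p: p.score, reverse=True)

def apply_filters(posts, muted_authors):
    # Blocked/muted checks and duplicate removal, applied after ranking.
    seen, out = set(), []
    for p in posts:
        if p.author_id in muted_authors or p.id in seen:
            continue
        seen.add(p.id)
        out.append(p)
    return out

def build_timeline(user_id, muted_authors):
    candidates = in_network_candidates(user_id) + out_of_network_candidates(user_id)
    return apply_filters(rank(candidates), muted_authors)

timeline = build_timeline(user_id=7, muted_authors={11})
print([p.id for p in timeline])  # [1, 3]
```

The structural point is the ordering: sourcing widens the pool, ranking scores it, and filters prune it, so the values of the system live partly in the model and partly in the rules applied after it.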

A second useful detail comes from the X machine learning repo documentation: the “Heavy Ranker” is described as a model that ranks candidates after retrieval, and it is followed by filtering heuristics.

That architecture reveals the hidden truth of feeds.

If your strongest objective is predicted engagement, the model learns the quickest path to engagement. Anger is often quicker than nuance. Conflict often beats quiet usefulness. So even without anyone saying “promote negativity,” the system can drift there because it is rewarded there.

This is why people often experience the platform as if it is becoming more hostile, especially after major product or leadership shifts, including the era associated with Elon Musk. The incentives shape the vibe.

Recce’s shift: from engagement maximising to positive reinforcement

Most feeds effectively optimise a single outcome: “how likely is this to drive interaction.”

Recce is taking a different route. We are building the feed around positive reinforcement learning; in plain terms, we shape what the system treats as success so that posts with positive influence and discovery value rise.

Not because positivity is trendy, but because it aligns with the point of our product. People come to discover films and shows worth watching, and to exchange opinions in a way that helps others choose.

So we designed the feed around a different loop.

flowchart LR
  A[Post is shown] --> B[User actions]
  B --> C[Signal interpretation]
  C --> D[Ranking updates]
  D --> E[More reach for valuable posts]
  E --> A

  B --- B1[Watchlist add]
  B --- B2[Click into title details]
  B --- B3[Mark watched]
  B --- B4[Share that leads to downstream intent]
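One way to read the loop above is as a value estimate per post that gets nudged toward the reward of each observed action. The sketch below is an assumption about how such an update could look, using an exponential-moving-average step; the action names and reward weights are illustrative, not Recce's production values.

```python
# Hypothetical reward weights for the actions in the loop above.
ACTION_REWARD = {
    "watchlist_add": 1.0,
    "click_details": 0.4,
    "mark_watched": 1.5,
    "share_with_intent": 0.8,
    "impression_only": 0.0,
}

def update_post_value(current_value: float, action: str, lr: float = 0.1) -> float:
    """Nudge a post's value estimate toward the observed reward (EMA step)."""
    reward = ACTION_REWARD.get(action, 0.0)
    return current_value + lr * (reward - current_value)

value = 0.0
for action in ["impression_only", "click_details", "watchlist_add"]:
    value = update_post_value(value, action)
```

A post that only collects impressions stays near zero; posts that drive watchlist adds and watched marks accumulate value, which is exactly the reach signal the diagram feeds back to ranking.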

The goal is not to ban negative reviews. Honest critique is part of culture. The goal is to stop the feed from rewarding negativity as a shortcut to growth.

A harsh but thoughtful review can help someone avoid wasting time. A cruel post that attacks people for liking something should not get free amplification simply because it triggered replies.

The technical heart: signal design that rewards value, not heat

Feeds become what they measure.

So instead of treating “a reply” as automatically good, we separate signals into buckets that reflect our mission.

Signals we want to reward

  • Watchlist adds and saves
  • Click through to title pages, trailers, cast, and reviews
  • Actions that imply real intent like marking watched
  • Replies and quotes that are constructive and useful
  • Shares that lead to meaningful downstream engagement, not just drive by spread

Signals we don't want to encourage

  • Personal attacks and harassment patterns
  • Pile on behaviour, where a thread becomes a target
  • High reaction volume that correlates with low discovery value, such as lots of argument but few saves
  • Regret signals like hides, mutes, blocks, and fast scrolling away after exposure
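Separating signals into buckets could look like the sketch below. The event names and the flat ±1.0 contributions are assumptions for illustration; a real system would weight each signal individually.

```python
# Illustrative signal buckets mirroring the lists above.
REWARD_SIGNALS = {"watchlist_add", "title_click", "mark_watched", "helpful_reply"}
REGRET_SIGNALS = {"hide", "mute", "block", "fast_scroll_away"}

def score_event(event_type: str) -> float:
    """Map a raw event to a signed contribution."""
    if event_type in REWARD_SIGNALS:
        return 1.0
    if event_type in REGRET_SIGNALS:
        return -1.0
    return 0.0  # neutral events (e.g. a plain reply) carry no weight by themselves

events = ["watchlist_add", "hide", "reply", "mark_watched"]
net = sum(score_event(e) for e in events)
```

The key design choice is that a reply is neutral by default: volume alone proves nothing, and it only becomes a reward signal once it is classified as helpful.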

Under the hood, this pushes you toward multi signal scoring rather than a single engagement score.

flowchart TD
  A[Candidate post] --> B[Predict discovery outcomes]
  A --> C[Predict conversation outcomes]
  A --> D[Predict regret and risk outcomes]

  B --> B1[P watchlist add]
  B --> B2[P watched]
  C --> C1[P helpful reply]
  C --> C2[P respectful disagreement]
  D --> D1[P regret]
  D --> D2[P toxic exchange]

  B1 --> E[Final score]
  B2 --> E
  C1 --> E
  C2 --> E
  D1 --> E
  D2 --> E
  E --> F[Ranked feed]

This is where the platform’s values become engineering. If discovery and helpfulness are heavily weighted, then the system learns to elevate posts that help people decide what to watch, not posts that start fights.
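Concretely, "values become engineering" in the weights. Here is a minimal sketch of the final-score combination from the diagram, with made-up weights chosen so that discovery dominates and regret and toxicity subtract heavily; the prediction keys and numbers are assumptions, not Recce's real parameters.

```python
# Illustrative weights: discovery outcomes dominate, risk outcomes subtract.
WEIGHTS = {
    "p_watchlist_add": 4.0,
    "p_watched": 3.0,
    "p_helpful_reply": 2.0,
    "p_respectful_disagreement": 1.0,
    "p_regret": -5.0,
    "p_toxic_exchange": -8.0,
}

def final_score(predictions: dict) -> float:
    """Combine per-outcome probabilities into one ranking score."""
    return sum(WEIGHTS[k] * predictions.get(k, 0.0) for k in WEIGHTS)

helpful = {"p_watchlist_add": 0.2, "p_helpful_reply": 0.3, "p_regret": 0.02}
heated  = {"p_helpful_reply": 0.05, "p_toxic_exchange": 0.4, "p_regret": 0.3}

print(final_score(helpful) > final_score(heated))  # True
```

With these weights, a post that sparks a heated exchange but few saves scores below a quietly useful one, even if the heated post would win on raw reply count.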

Breaking echo chambers without forcing controversy

Echo chambers form when three forces align: human preference for familiarity, easy unfollow and mute tools, and a ranking system that optimises for similarity because similarity keeps people engaged.

Research debates the extent and prevalence of echo chambers across contexts, but it is widely accepted that algorithmic curation can reduce exposure variety for many users, especially when personalisation is strong and friction is low.

Recce’s approach is to break silos through discovery that still respects intent. Since we are an entertainment product, we can widen a user’s world through genre, era, region, language, and creator diversity without turning every session into a political argument.

Practically, that looks like controlled exploration: a portion of feed slots reserved for adjacent content that is not identical to what the user always sees.

flowchart LR
  A[User taste profile] --> B[Core candidates]
  A --> C[Adjacent candidates]
  A --> D[Exploration candidates]

  B --> E[Feed slots: majority]
  C --> F[Feed slots: some]
  D --> G[Feed slots: small, controlled]

  F --> H[Measure pleasant surprise]
  G --> H
  H --> A

This is the difference between broadening and provoking. You expand discovery without making the user feel like the system is trying to pick a fight.
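The slot allocation above can be sketched as a simple quota split. The shares and the interleaving shuffle are illustrative defaults, not production values.

```python
import random

def allocate_slots(core, adjacent, exploration, n_slots=10,
                   adjacent_share=0.2, exploration_share=0.1, seed=None):
    """Fill a feed page: mostly core, some adjacent, a small controlled
    exploration quota. Shares are illustrative, not production values."""
    rng = random.Random(seed)
    n_explore = max(1, int(n_slots * exploration_share))
    n_adjacent = int(n_slots * adjacent_share)
    n_core = n_slots - n_adjacent - n_explore
    page = core[:n_core] + adjacent[:n_adjacent] + exploration[:n_explore]
    rng.shuffle(page)  # interleave so exploration isn't bunched at the end
    return page

core = [f"core_{i}" for i in range(20)]
adjacent = [f"adj_{i}" for i in range(5)]
exploration = [f"exp_{i}" for i in range(5)]
page = allocate_slots(core, adjacent, exploration, seed=0)
```

The "measure pleasant surprise" feedback in the diagram would then tune `adjacent_share` and `exploration_share` per user, widening the quota for people who respond well to it.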

The hardest part: keeping the atmosphere alive while allowing full expression

Every platform says it wants healthy conversation. The hard part is that people also want humour, intensity, and strong opinions. If you overcorrect, you create a sterile museum. People leave.

So the craft is not “remove conflict.” The craft is distinguishing critique from cruelty and engagement from value.

This is where the shaping layer matters most, just like in X’s published pipeline where heuristics and filters come after ranking.

flowchart TD
  A[Ranked list from model] --> B[Conversation health checks]
  B --> C[Reach controls]
  C --> D[Final feed]

  B --- B1[Harassment detection]
  B --- B2[Pile on pattern detection]
  B --- B3[Quality thresholds for replies]
  C --- C1[Slow spread of hostile threads]
  C --- C2[Prevent repeated exposure to the same argument]

This is also where creators benefit. If criticism travels because it is substantive, and harassment is throttled because it is harmful, creators can receive honest feedback without being exposed to a mob dynamic.
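A shaping layer like this typically dampens reach rather than deleting content outright, so critique survives while hostility spreads slowly. The sketch below is an assumed illustration of that idea; the field names and thresholds are hypothetical, and real harassment detection would be a model, not two hard-coded cutoffs.

```python
def reach_multiplier(post) -> float:
    """Post-ranking reach control: dampen rather than delete, so a harsh
    review keeps its reach while a pile-on spreads slowly.
    Thresholds are illustrative only."""
    m = 1.0
    if post["harassment_score"] > 0.8:
        return 0.0          # clear harassment gets no amplification
    if post["harassment_score"] > 0.5:
        m *= 0.3            # borderline hostility spreads slowly
    if post["pile_on_ratio"] > 0.6:
        m *= 0.5            # many repliers converging on one target
    return m

harsh_review = {"harassment_score": 0.2, "pile_on_ratio": 0.1}
mob_thread   = {"harassment_score": 0.6, "pile_on_ratio": 0.7}
print(reach_multiplier(harsh_review))  # 1.0
print(reach_multiplier(mob_thread))    # 0.15
```

Note the asymmetry: the harsh review is untouched, while the hostile thread is throttled, not removed. That is the distinction between critique and cruelty expressed as a reach control.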

Why does this matter even while the algorithm is early?

Recce’s feed is still early. We will add signals, tune weights, retrain models, and keep learning. But the key tenets are already set.

  • Positive influence should win
  • Discovery should expand people, not trap them
  • Critique should be possible without cruelty
  • Creators should hear feedback without being harassed

The feed is not neutral. It is a set of choices that become incentives.

Recce is making those choices up front, because once a platform discovers that outrage is a growth lever, it becomes painfully hard to stop pulling it.

We are still evolving, and we intend to be transparent about how we work with our community. If you have suggestions or ideas to feed into the algorithm, please reach out; we would be happy to hear them.