A Radical Redefinition of Blocking on X
In what is quickly becoming a hallmark of Elon Musk’s management style—equal parts libertarian idealism and platform reengineering—X (formerly Twitter) has implemented a quietly explosive policy change: users you block can now still see your public posts. They can’t reply, retweet, like, or DM you, but they can still view your timeline, silently.
According to a report from The Block, this shift marks a departure from the platform’s long-standing behavior, under which blocking meant digital invisibility. Now, the wall between blocker and blocked is more like a chain-link fence. And while the intent may be to promote open dialogue or limit what some critics call “speech silencing,” the implications reach far beyond mere user settings.

What’s the Rationale Behind the Change?
Elon Musk has been clear about his vision: X should be a “global town square” for expression. And to Musk, blocking—especially of public content—is antithetical to that mission. The policy shift dovetails with prior decisions to remove headlines from article links, loosen content moderation standards, and reinstate previously banned accounts. It’s all part of a broader strategy that emphasizes visibility over discretion, expression over safety.
But the block feature has never been about censorship. It has always been a last resort for users looking to protect their personal boundaries. In practice, it served as a signal: “I do not want to interact with you—and I don’t want you watching me either.”
Now, that boundary has been blurred.
Legal and Regulatory Red Flags
This isn’t just a user experience issue—it could also be a compliance problem.
Both Apple’s App Store Review Guidelines and Google Play’s policies require apps that host user-generated content to provide effective blocking tools that prevent harassment and protect privacy. Allowing blocked users to retain visibility into your posts could, arguably, undermine the effectiveness of that protection. It may not be long before this lands on the radar of app store reviewers or triggers complaints under various regional data protection frameworks.
In some jurisdictions, particularly in Europe, GDPR protections could be invoked if users feel the platform is not allowing them sufficient control over how their content is accessed. Even in the U.S., where privacy law is more fragmented, this policy could create downstream risk if harassment escalates and victims argue that the platform enabled it.
Safety, Harassment, and the Real-World Consequences
Let’s put this in human terms.
Imagine you’ve blocked a user for repeated harassment. Under the old model, you could post freely without worrying they were lingering on your timeline. Now, they can read your posts, take screenshots, and potentially use that content elsewhere—without your knowledge, and without the protections blocking once provided.
For users in vulnerable positions—journalists, whistleblowers, activists, or anyone escaping abusive situations—this isn’t just a tech quirk. It’s a rollback of a digital safety net. In fact, many of the strongest critiques are coming from digital safety experts and advocacy groups who warn that visibility without interaction can still be a vehicle for stalking, doxxing, and coordinated abuse.
A Transparency Argument, But At What Cost?
Musk’s defenders argue that public content is, well, public—and that no one should expect privacy on a platform designed for broadcasting. And to some extent, that’s true. But the crux of the issue isn’t privacy; it’s control. The ability to block someone was not about hiding your posts from the world—it was about denying one specific person the power to use your content against you in real time.
If a user posts publicly but chooses to block someone, they’re drawing a line. This change erodes that line under the guise of transparency and egalitarianism. But in reality, it creates a loophole for abusers and trolls—one that’s already being exploited.
A Legal Gray Zone (for Now)
No lawsuits have emerged yet. No regulators have sent formal warnings. But that doesn’t mean this new policy exists in a vacuum.
Consumer-facing platforms are under increasing pressure from regulators and app stores to address user safety more proactively. Meta and TikTok have faced FTC scrutiny for mishandling minors’ data. Discord and Reddit have had to overhaul how they deal with harassment. It’s hard to imagine that X’s move won’t eventually catch the attention of the same watchdogs.
Moreover, if this policy change is perceived as enabling harassment—particularly in cases where someone had previously relied on the block feature to establish digital distance—X could find itself in the crosshairs of civil litigation, especially if harm can be demonstrated.
The Bottom Line: A Dangerous Kind of Visibility
In a world that’s already algorithmically loud, the block button was one of the few things users could press to reclaim control. Now, even that has lost some of its weight.
This change isn’t just a UX adjustment—it’s a philosophical shift that reveals how X is reimagining the relationship between platform, content, and user control. Whether regulators or users accept that reimagining remains to be seen.
At Montague Law, we track the evolving intersections of technology, privacy, and digital rights—because even a seemingly minor product tweak can become a fulcrum for legal consequence. Whether you’re building platforms, publishing content, or navigating user safety obligations at scale, these are the changes you can’t afford to ignore.
👉 Talk to our team if you want help navigating product policies or regulatory risk in the tech ecosystem.