I do confirm that I explicitly tested this with my super unused Facebook account, just stating that I was testing restrictions on talking about Linux. The text was: """I don't often (or ever) post anything on Facebook, but when I do, it's to check if they really, as announced on hckrnews, are restricting discussing Linux. So here's a few links to trigger that: https://www.qubes-os.org/downloads/ ... https://www.debian.org/releases/stable/"""
and indeed within seconds I got the following warning: """ We removed your post The post may use misleading links or content to trick people to visit, or stay on, a website. """. This is one massive wow considering how much Facebook runs on Linux.
A user who never posts anything suddenly posting a message containing URLs might in itself be a signal that something is weird. It would be an interesting test to post something not Linux-related and see how that fares.
Clearly there's a need for some kind of bad-url blocker. You don't want compromised accounts (or clueless people) sharing nefarious links to trusted friends.
And clearly blocking distrowatch etc is bizarre overreach. And probably not intended behaviour -- it just makes no sense.
The web exists just fine. Using Facebook as a front end to the web is a terrible idea though.
But are you not somewhat agreeing with the point that you're implicitly arguing against: "[This isn't a problem] if I [am] only seeing updates from the people I actually know and explicitly connected to on the social graph. The current problem exists because the content is chosen algorithmically."
The size of a total network is irrelevant until you start randomly connecting nodes.
At the moment "no one" is on mastodon. The folk there are the few, and are likely a self-selecting group that are resistant to spam or scams. Therefore you don't see (much) spam or scams there.
Of course, should it become popular (side note: it won't) such that my mom and her friends are on it, then the spammers and scammers will come too. And since my mom is in my social graph, a lot of that will become visible to me.
Enjoy Mastodon now. The quality is high because the group is small and the barrier to entry is high. Hope it never catches on, because all "forums" become crap when the eternal September arrives.
Mastodon is perfect for affirming your worldview and strengthening your social bubble, because instance rules are intolerant of opinions outside the accepted range.
You are correct that since nostr is censorship resistant, you can't really prevent someone from posting something, but you can prevent being exposed to it on your side.
If it's a single nostr account (npub) sending you something you don't want, then you can block or mute them (the blocking is done in your app on your device). If they try attacking you at scale, then you can rely on web of trust (i.e. only allow content from people you actually follow, and 2nd degree) - this is now often the default.
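A minimal sketch of that web-of-trust idea, assuming a simplified event and follow-list shape (real nostr clients derive the follow graph from published follow lists; all names and data here are hypothetical):

```python
# Toy web-of-trust filter: only show events from pubkeys we follow
# directly, or that are followed by someone we follow (2nd degree).

def build_allowed_set(my_follows, follow_graph):
    """Union of direct follows and their follows (2nd degree)."""
    allowed = set(my_follows)
    for pubkey in my_follows:
        allowed.update(follow_graph.get(pubkey, ()))
    return allowed

def filter_events(events, allowed):
    """Keep only events whose author is inside the web of trust."""
    return [e for e in events if e["pubkey"] in allowed]

# Hypothetical follow graph: Alice follows Carol, Bob follows nobody.
follow_graph = {
    "npub_alice": {"npub_carol"},
    "npub_bob": set(),
}
my_follows = {"npub_alice", "npub_bob"}
allowed = build_allowed_set(my_follows, follow_graph)

events = [
    {"pubkey": "npub_alice", "content": "hi"},
    {"pubkey": "npub_carol", "content": "2nd degree"},
    {"pubkey": "npub_mallory", "content": "spam"},
]
visible = filter_events(events, allowed)  # Mallory is filtered out
```

The key design point is that the filtering happens entirely client-side: the relay still carries Mallory's events, but your app never renders them.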
That works for our own account to avoid seeing the texts, it doesn't prevent the troll from still posting replies to our posts.
With that said, that is an exotic situation. I'm a big fan of nostr overall; all my recent hobby projects used npub and nsec. That combination is simple yet really powerful. No more emails, no more servers, no more passwords.
Yet. There are lots of signs that spam is coming to Mastodon, and there is real concern among a fair number of people who are there. Anyone with a lot of followers will be tagged often by spam (if you tag someone, all their followers will see your post).
As someone who uses Mastodon I can assure you that spammers do target mastodon. So far it is only a few though and so human moderators are able to keep up. I doubt that will last long.
Would a less draconian solution then not be to hide the link, requiring the user to click through a [This link has been hidden due to linking to [potential malware/sexually explicit content/graphically violent content/audio of a loud Brazilian orgasm/an image that has nothing to do with goats/etc]. Type "I understand" here ________ to reveal the link.]?
You get the benefits of striving to warn users, without the downsides of it being abusive, or seen as abusive.
It’s not a bad option, and there may be some research that suggests this will reduce friction between mod teams and users.
If I were to build this… well, first I would have to ensure no link shorteners, then I would need a list of known tropes and memes, and a way to add to that list over time.
This should get me about 30% of the way there. Next, even if I ignore adversaries, I would still have to contend with links which have never been seen before.
So for these links, someone would have to be the sacrificial lamb and go through it to see what’s on the other side. Ideally this would be someone on the mod team, but there can never be enough mods to handle volume.
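The first steps described above — refusing shorteners, consulting a known-bad list, and flagging never-seen links for a human — might look roughly like this (the domain lists are hypothetical placeholders, not real data):

```python
from urllib.parse import urlparse

# Hypothetical deny-lists; a real system would maintain these over time.
SHORTENER_DOMAINS = {"bit.ly", "t.co", "tinyurl.com"}
KNOWN_BAD_DOMAINS = {"malware.example"}

def classify_link(url):
    """Return 'reject', 'bad', or 'unknown' per the strategy sketched
    above: refuse shorteners outright (you can't see through them),
    block known-bad domains, and flag never-before-seen domains for
    human review."""
    host = urlparse(url).hostname or ""
    if host in SHORTENER_DOMAINS:
        return "reject"
    if host in KNOWN_BAD_DOMAINS:
        return "bad"
    return "unknown"  # someone still has to look at these
```

The "unknown" bucket is exactly where the sacrificial-lamb problem below kicks in: no list covers a link nobody has seen yet.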
I guess we're at the mod coverage problem - take volunteer mods; it's very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there's a page of reports.
That is IF you get reports. People click on a malware infection, but aren’t aware of it, so they don’t report. Or they encounter goats, and just quit the site, without caring to report.
I’m actually pulling my punches here, because many issues, eg. adversarial behavior, just nullify any action you take. People could decide to say that you are applying the label incorrectly, and that the label itself is censorship.
This also assumes that you can get engineering resources applied - and it’s amazing if you can get their attention. All the grizzled T&S folk I know, develop very good mediating and diplomatic skills to just survive.
That's why I really do urge people to get into mod teams, so that the work gets understood by normal people. The internet is banging into the hard limits of our older free-speech ideas, and people are constantly taking advantage of blind spots amongst the citizenry.
> I guess we're at the mod coverage problem - take volunteer mods; it's very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there's a page of reports.
When I consider my colleagues who work in the same department, they have very different preferred work hours (one colleague would even love to work from 11 pm to 7 am - and then go to sleep - if he were allowed to). If you ensure that you have both larks and night owls on your (volunteer) moderation team, this problem should be mitigated.
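The lark/night-owl idea can even be checked mechanically: given each volunteer's awake window, compute which UTC hours nobody covers. A toy sketch (the shift data is made up for illustration):

```python
def coverage_gaps(mod_shifts):
    """mod_shifts: list of (start_hour, end_hour) awake windows in UTC;
    a window may wrap past midnight. Returns the uncovered hours."""
    covered = set()
    for start, end in mod_shifts:
        hour = start
        while hour != end:
            covered.add(hour)
            hour = (hour + 1) % 24
    return sorted(set(range(24)) - covered)

# Hypothetical team: a lark (06-14 UTC) and a night owl (22-06 UTC).
gaps = coverage_gaps([(6, 14), (22, 6)])  # afternoon/evening uncovered
```

Even with complementary chronotypes, a two-person team still leaves an eight-hour hole - which is the coverage problem in miniature.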
Then this comes back to size of the network. HN for example is small enough that we have just a few moderators here and it works.
But once the network grows to a large size it requires a lot of moderators and you start running into problems of moderation quality over large groups of people.
I admit that ensuring consistent moderation quality is a harder problem than moderation coverage (the sleep-pattern ;-) problem).
Nevertheless, I do believe that at least partial solutions for this problem exist, and a lot of problems concerning moderation quality are, in my opinion, actually self-inflicted by the companies:
I see the central issue as this: the companies have deeply inconsistent goals about what they do and don't want on their websites. Also, even where there is some consistency, they commonly don't clearly communicate these boundaries to the users (often for "political" or reputational reasons).
Keeping this in mind, I claim that all of the following strategies can work (but each one will infuriate at least one specific group of users, whom you will thus indirectly pressure to leave your platform), and all have (successfully) been used by various platforms:
1. Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right"). This will, of course, infuriate users who are on the "free speech" side. Also people who have a "currently politically accepted" stance on the controversial topic will be angry that they are not allowed to post about their "right" opinion on this topic, which is a central part of their life.
2. Only allow arguments for one side of some controversial topics ("taking a stance"): this will infuriate people who are in the other camp, or are on the free speech side. Also consider that for a lot of highly controversial topics, which side is "right" can change every few years "when the political wind changes direction". The infuriated users likely won't come back.
3. Mostly allow free speech, but strongly moderate comments where people post severe insults. This needs moderators who are highly trusted by the users. Very commonly, moderators are more tolerant of insults from one side than from the other (or consider comments that are insulting, but within their Overton window, to be acceptable). As a platform, you have to give such moderators clear warnings, or even get rid of them.
While this (if done correctly) will pacify many people who are on the "free speech" side, be aware that 3 likely leads to a platform with "more heated" and "controversial" discussions, which people who are more on the "sensitive" and "nice" side likely won't like. Also advertisers are often not fond of an environment where there are "heated" and "controversial" discussions (even if the users of the platform actually like these).
>Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right").
Yup. One of my favored options, if you are running your own community. There are some topics that just increase conflict and are unresolvable without very active referee work. (Religion, Politics, Sex, Identity)
2) This is fine? Ah, you are considering a platform like Meta, which has to give space to everyone. Don't know on this one; too many conflicting ways this can go.
3) One thing not discussed enough is how moderating affects mods. Your experience is alien to what most users go through, since you see the 1-3% of crap others don't. Mental health is a genuine issue for mods, with PTSD being a real risk if you are on one of the gore/child-porn queues.
These options are, to a degree, discussed and being considered. At the risk of being a broken record: more "normal" users need to see the other side of community running.
There are MANY issues with the layman's idea of free speech; it's hitting real problems when it comes to online spaces and the free-for-all meeting of minds we have going on.
There are some amazing things that come out of it, like people learning entirely new dance moves, foods, or ideas. The dark parts need actual engagement, and need more people in threads like this who can chime in with their experiences and get others down into the weeds and problem-solving.
I really believe that we will have to come up with a new agreement on what is "ok" when it comes to speech, and part of that is going to be realizing that we want free speech because it enables a fair marketplace of ideas. Or something else. I would rather it happen ground up rather than top down.
"Then this comes back to size of the network. HN for example is small enough that we have just a few moderators here and it works.
But once the network grows to a large size it requires a lot of moderators and you start running into problems of moderation quality over large groups of people."
As you said, consistent moderation is different from coverage. Coverage will matter for smaller teams.
There's a better alternative to all of these solutions in terms of consistency: COPE was released recently, and it's basically a lightweight LLM trained on applying policy to content. In theory that can be used to handle all the consistency and coverage issues. It's beta though, and needs to be tested en masse.
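A policy classifier of the kind described exposes, in essence, a (policy, content) → decision interface. Below is a purely illustrative toy stand-in - keyword matching rather than an LLM, and not COPE's actual API; every name and the policy itself are hypothetical:

```python
# Toy stand-in for an LLM policy classifier: same interface shape,
# trivial implementation. A real system would use a trained model
# rather than substring matching.

def apply_policy(policy_keywords, content):
    """Return 'flag' if any policy keyword appears in the content
    (case-insensitive), else 'allow'."""
    text = content.lower()
    if any(keyword in text for keyword in policy_keywords):
        return "flag"
    return "allow"

policy = {"buy followers", "crypto giveaway"}  # hypothetical policy
decision = apply_policy(policy, "Free CRYPTO giveaway, click now!")
```

The appeal for consistency is that the same function is applied to every post, at every hour - no mod sleep schedules and no per-mod Overton windows.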
Lord do I wish that were true. The main reason I left Facebook was less the algorithmic content I was getting from strangers, and more the political bile that my increasingly fanatical extended family and past acquaintances chose to write.
You should know that this sort of rhetoric is both
a) silly, because... it's not true. Spam, phishing attempts, illegal content - all of this should be removed.
b) more damaging to whatever you're advocating for than you realize. You want a free web? So do I. But I'm not going to go around saying stuff like "all users should be able to post any URL at any time" and calling moderation actions "utterly despicable"
I'd be curious if it's blocked if someone links just debian.org . I can definitely see a [totally overzealous] "security filter" blocking Qubes, but Debian is one of the most popular Linux distros in the world, so that would be especially ridiculous.
> 6. No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
nonfree according to OSI and several other organizations. If you have strong feelings that direct you in such a way, there's no reason to hold their opinion in sacred regard. Multiple philosophies can coexist. The DFSG and the FSF's schools of thought for instance are often in conflict and yet the world keeps on spinning.
Your custom license built with your own philosophy will still interoperate just fine with many common open source licenses, and as a bonus for some, will ward off corporations with cautious lawyers who don't like unknown software licenses.
Companies that actively decay society for profit? PS: Companies that support a change away from a law-based society also violate the license, by virtue of it being based on laws and rules.
If your domain links to content that AVs flag as malware, it gets blocked on FB. Distrowatch is likely uniquely susceptible to this because they're constantly linking to novel, 3rd-party tarballs (via the "Latest Packages" column).
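The mechanism described - blocking a whole domain because one of its outbound links is AV-flagged - amounts to a transitive check. Everything below (function name, URLs, the verdict set) is a hypothetical illustration, not Facebook's actual pipeline:

```python
# Sketch of transitive domain blocking: a domain gets blocked if any
# page on it links to a file an AV scanner has flagged. `flagged_urls`
# stands in for the AV verdicts; all data here is made up.

def domain_blocked(outbound_links, flagged_urls):
    """True if any outbound link from the domain is AV-flagged."""
    return any(link in flagged_urls for link in outbound_links)

flagged_urls = {"https://mirror.example/privoxy-4.0.0.tar.gz"}
distrowatch_links = [
    "https://www.debian.org/",
    "https://mirror.example/privoxy-4.0.0.tar.gz",  # "Latest Packages"
]
blocked = domain_blocked(distrowatch_links, flagged_urls)
```

Under this model a single false-positive tarball is enough to poison the whole domain - which would explain why a site that constantly links to fresh third-party tarballs is unusually exposed.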
Right, a proxy focused on privacy and removing ads. Of course that's "malware" to Facebook, a site recommending devilry such as this must be silenced at all cost...
It's either intentional, which would be puzzling and unsettling, or it's a bug which has gone unnoticed. In any case, it is proof that big tech is in no shape to take on the responsibility of moderating discourse on the internet. This reminds me of the bug that falls into a typewriter at the beginning of the movie "Brazil", causing a spelling error and the arrest and execution of a random innocent person. Granted, this type of automated banning without any ability to involve a real human is not costing any lives (yet), but I am increasingly worried about how big tech is becoming a Kafkaesque lawnmower. It is one thing to deliberately censor speech that you do not like; another is to design a system where innocent and important speech is silently censored and no one in charge even notices.
> It's either intentional, which would be puzzling and unsettling, or it's a bug which has gone unnoticed.
I've long believed that a large part of technological evil comes from bugs which were introduced innocuously, but intentionally not fixed.
Like, your ISP wouldn't intentionally design a system to steal your money, but they would build a low-quality billing system and then prioritise fixing systematic bugs that cause errors in the customer's favour, while leaving the ones that cause overbilling.
This could easily be the same on Facebook - this got swept up in a false positive and then someone decided it's not a good one to fix.
There's a rumor that an unnamed ISP did exactly that - overcharged a large portion of its customers due to a software bug, then decided not to fix the issue, instead relying on customers to call support and have the charge fixed.
Distrowatch was blocked for linking to an AV-flagged privoxy 4.0.0 tarball. The same kind of anti-malware blocking you'd expect for a mass-market, non-technical audience. Nothing to do with "speech" or Linux in general.
I guess the filtering is at the level of:
"My 11-year-old son keeps talking about this Linux thing with his computer. What is Linux? Is it a hacking tool? Should I be worried?"
Who knows? The article says "I've tried to appeal the ban and was told the next day that Linux-related material is staying on the cybersecurity filter." -- presumably we could ask Distrowatch to share the exact wording of the response they got back, but the fact FB apparently responded in such a way suggests it wasn't a filter specific to Distrowatch.
Maybe! We're all just speculating about the degree of accuracy here. I messaged them on Mastodon to see if they will clarify the text. Will post back if I hear from them.
On another note, SourceForge just removes the malware flag - but did they actually check anything, or just go with the provided explanation without any concrete details? If I hijacked some software and got caught, I'd act nonchalantly like this as well and hope it'd blow over without anyone noticing.
Nimda was Windows malware from 2001. It seems unlikely that it would be a meaningful attack vector for a compromised privoxy in 2025. But again, I have not investigated it.
Thank you for providing this; the headline seemed a little clickbaity. Even far less technical companies run some things on Linux, so it seems weird they'd ban Linux talk in general.
> Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats".
That's quite the statement to make without any source to back it up; I wonder what the evidence for this is.
I assumed that part was conjecture. However, if you define "internal policy makers" broadly, then from the user's perspective it's provably true from the result.
I get that it is worded as if people in a boardroom made a decision after a debate. However, an overworked admin or an AI moderator could just as easily be lumped together as "internal policy makers" from the user's perspective.
They are the source. A journo could write an article and mention distrowatch as where they got their information from. If you don't trust them - great, you can do your own research.
> I wonder what the evidence for it is
Maybe "Any posts mentioning DistroWatch and multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed" and "We've been hearing all week from readers who say they can no longer post about Linux on Facebook or share links to DistroWatch. Some people have reported their accounts have been locked or limited for posting about Linux"
What do you think evidence consists of if not that?
The evidence shows that Facebook is blocking Linux-related posts, while the initial "policy makers decided" claim is significantly stronger and is not supported by anything. A much more obvious explanation is that some buggy ML classifier added the DistroWatch website to the spam list, which triggers automated enforcement without any policy maker involvement.
The purpose of a system is what it does. If this behavior is happening because nobody with authority cares to do anything about it, that's also a decision. I never understand why people rush to make excuses for these huge companies awash in resources with no real accountability or customer support.
I'm obviously not claiming that Facebook moderation is perfect but it's a pretty big stretch to go from "Facebook does a bad job of reducing false positives" to "Facebook purposefully bans Linux discussions".
> I never understand why people rush to make excuses for these huge companies awash in resources with no real accountability or customer support
Because if nobody pushes back against the hyperbole then it just becomes a competition of who can make up the most exaggerated claim in order to attract the most attention.
Would that people would make the same effort to push back against PR departments, which in the case of social media companies often end up enabling the industrialized production and distribution of hyperbole.
If "some buggy ML classifier" is allowed to make decisions that trigger broad enforcement, that classifier is, for all intents and purposes, a policy maker. The claim made by the article is somewhat broad relative to the evidence presented, but whether policy decisions are automated or not doesn't really matter.
In the past I would have agreed with this statement, but nowadays I would assume an organization's actions are their policy until they state and act otherwise.
They have a screenshot of Facebook reviewing the post and deciding not to restore it, so I guess it isn’t just a buggy ML classifier (although it could be a buggy ML classifier combined with a human that doesn’t feel able to overturn it).
What you just did is a fallacy. That's fine, but it needs to be asked: what sort of "Nazi content" did you report?
If it was a user calling Trump a Nazi, then it should have been removed, and their moderation failed.
If it just espouses Nazi ideology or rhetoric, that's free speech in the US.
That's just how it is. It's part of this country. I have to listen to both the throaty, greasy growl of the white supremacist and the piercing howl of the victims wounded by words.
edit to add additional context:
There's a difference between someone "posting" "Nazi" content on Facebook and here on HN, for example. On FB they figure you're seeing it because of your actions: your friends, a group you joined, etc. If it's a friend posting on their wall, your moderation task is easy: block the friend, unfriend, talk to the friend, call them out. Regardless of your decision, FB doesn't have any obligation or, I would argue, right to step in and moderate in those circumstances. If it's in a group, the moderators of the group have to decide if it represents the group. If it does and you disagree, leave the group.
Someone spouting Nazi nonsense on HN is spouting it into a megaphone on the street corner, as it were. I have to read the content, even if I didn't actively follow that user or "join" that group.
There are different moderation strategies. Merely invoking "Nazi" as the boogeyman to back up your point is fallacious.
It's too easy to hide behind a computer to avoid responsibility. "It's not my fault, the computer did it!" is a bad excuse. Computers don't have agency, but people do. Anything a computer someone owns does is that person's fault. One had the choice not to boot it. One had the choice not to buy it.
As a member of that crowd, you're misrepresenting the argument. It is absolutely censorship when a private company does it, but they have the right to do so; it is not illegal. But they also cannot force me to use their platform, I have the right not to use it.
I don't have a problem with the censorship here on HN, so I post here. I do have a problem with the censorship on Meta properties (aside from being offended by their product design and general aims as an organization), so I don't have accounts with them or view content on their properties. I also have the right to criticize them for their censorship, but not the right to prevent anyone else from using it if they want.
Why would he bring up what he views as hypocrisy among members of this community? Because they espouse the view that it is not censorship when a private entity censors one viewpoint (something they disagree with), but stay silent (viewed as tacitly agreeing) when there is outrage over the removal of viewpoints those members agree with.
IMO, it adds more to the conversation than all the comments that dog-piled with "It's not censorship because it's not the government."
>What would a definition of censorship be that includes private entities?
Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient". Censorship can be conducted by governments and private institutions.
Censorship, the suppression of words, images, or ideas that are "offensive," happens whenever some people succeed in imposing their personal political or moral values on others. Censorship can be carried out by the government as well as private pressure groups. https://www.aclu.org/documents/what-censorship
Censorship, the changing or the suppression or prohibition of speech or writing that is deemed subversive of the common good. It occurs in all manifestations of authority to some degree, but in modern times it has been of special importance in its relation to government and the rule of law.
https://www.britannica.com/topic/censorship
I would ask you if you can link to a definition of censorship that only calls out the government? Aside from XKCD's terrible comic. https://xkcd.com/1357/
> What would a definition of censorship be that includes private entities? Can you link to one?
Merriam-Webster defines censorship [0] sense 1(a) as "the institution, system, or practice of censoring" and sense 1(b) as "the actions or practices of censors". Neither definition includes an explicit requirement that it must be done by the government as opposed to a private entity, although we also have to look at their definitions of "censoring" and "censors". Their example for sense 1(a) does mention the government ("They oppose government censorship") – but I don't think we should read examples as limiting the scope of the definition, plus the very phrase "government censorship" suggests there may also be "non-government censorship".
For "censor" (noun), their sense (1) is "a person who supervises conduct and morals" – it doesn't say such a person can only belong to the government. It then says "such as" (which I read as implying that the following subsenses shouldn't be considered exhaustive), sense (1)(a) "an official who examines materials (such as publications or films) for objectionable matter" – an "official" needn't be government – indeed, their definition of "official" [2] gives two examples, a "government official" and a "company official", clearly indicating that officials can be either public or private. Their example for censor noun sense (1)(a) mentions "Government censors..." – but again, examples don't limit the scope of the definition, and qualifying them as "government" implies there may be others lacking that qualification.
For "censor" as a verb, Merriam-Webster gives two senses, "to examine in order to suppress (see suppress sense 2) or delete anything considered objectionable" (example: "censor the news"), and "to suppress or delete as objectionable" (example: "censor out indecent passages"). Neither gives any hint of being limited to the government. Let me give my own example of the verb "censor" being used, quite naturally, in a sense in which the government is not directly involved: "The Standards and Practices department of NBC censored one of Jack Paar's jokes on the February 10, 1960, episode of The Tonight Show", from the Wikipedia article "Broadcast Standards and Practices". [3] Now, you might argue that NBC was forced into censorship by the FCC – possibly, but I'm not sure if the FCC would have objected to the specific joke in question, and NBC had (and still does have) their own commercial motivations for censorship separate from whatever legal requirements the FCC imposed on them.
Similarly, Wiktionary's definition of "censorship" starts with "The use of state or group power to control freedom of expression or press..." [4]. The fact it says "state or group" as opposed to just "state" implies that non-governmental actors can engage in censorship per their definition.
Wiktionary's definition of the noun "censor" includes "An official responsible for the removal or suppression of objectionable material (for example, if obscene or likely to incite violence) or sensitive content in books, films, correspondence, and other media" [5] – it never says the official has to be a government official, and their example sense is "The headmaster was an even stricter censor of his boarding pupils’ correspondence than the enemy censors had been of his own when the country was occupied" – which could very easily be about a private school rather than a government-run one.
I should also point out that the Catholic Church has officials called "censors". To quote the 1908 Catholic Encyclopaedia article "Censorship of Books" [6], "Pius X in the Encyclical 'Pascendi Dominici gregis' of 8 September, 1907 (Acta S. Sedis, XL, 645), expressly orders all bishops to appoint as censors qualified theologians, to whom the censorship of books appertains ex officio." And the Catholic Church still employs "censors" to this day, [7] although their role has shrunk greatly – generally they are theologians (most commonly priests, although I believe laypersons are eligible for appointment) to whom a bishop delegates the review of certain publications (primarily religious education curricula) and who then makes a recommendation to the bishop as to whether to approve the publication or demand changes to it. Obviously if the Catholic Church has "censors", the concept includes private bodies, since the Catholic Church is a private body almost everywhere (Vatican City and the Holy See excluded).
I thoroughly dislike Facebook as much as the next person, but none of what you quoted constitutes evidence for a ban on discussing Linux on the platform.
Reading the post, it sounds like this may rather be because of incorrect categorization of DistroWatch and links to it than an outright ban on Linux discussion. So yet another issue with Facebook's content moderation methods.
Yes; the scope of censorship over discussing Linux at all vs the scope of censorship of linking to Distro Watch is vastly different.
If Facebook was removing links to a pro-Catholic website for some reason but still allowed the discussion of Catholicism, Catholic Church groups, etc., you would be daft to claim that Facebook is banning all Catholics and discussion thereof.
"A bad thing is happening and the evidence of it happening is that I said it's happening."
By the way, I love DistroWatch and do think FB is messing with their posts. But there's no evidence to show if it's a new policy, a glitch in the moderation or an internal screw up.
Probably this: "I've tried to appeal the ban and was told the next day that Linux-related material is staying on the cybersecurity filter." (from the OP) .. Of course, it would have helped if the post author quoted FB's response so we could judge that for ourselves.
I can't speak for anyone else, it just seems that statement is a very specific accusation with nothing backing it up. I'm curious, that's all. It is very much possible that there's some evidence of policy makers discussing this, or even a public statement; nothing to do with "proving a negative".
It is obviously allowed to discuss Linux. There is plenty of discussion about Linux on Facebook, including some about the recent "ban".
My guess is that some automated scanner found something wrong about the linked page. Maybe there is some link to a "hacking"-oriented distro, maybe some torrents, some dubious comment, etc... Probably a false positive, it happens.
Meta is one of the biggest contributors to free software in the world. They certainly don’t believe that it’s equivalent to piracy. If your guess is indeed what happened, it will be corrected by higher-ups soon.
It is perfectly possible that someone at a lower level, especially a non-technical person, would believe that. Moderators are not going to be highly paid and skilled people.
It has to get to the attention of higher ups.
The one time I reported a comment to FB it was horrible racism (it said "do not interbreed with [group x] because they are [evil - not sure of exact wording]"), and I got a reply saying that it did not violate community standards.
But at this point, in 2025, it's perfectly reasonable for GAFAMs (and other Russian/Chinese/USian infocoms) to be blocked (ideally at the state level).
And particularly in the context of work primarily about communication or computing: having an official Xitter account as a journalist, or a GitHub account as a software developer, is like a doctor promoting a brand of cigarettes or opiates - a violation of professional deontology.
We are obligated to have an external auditor run PCI DSS penetration testing and network segmentation testing every year.
Their second request (after a network diagram) is always to create an EC2 instance running Kali.
Which, honestly, confuses me a bit -- all of the packages are available in AL or Ubuntu, so why do they care? I don't know, and I guess I don't care enough to ask. Just give me the attestation document please. :)
My assumption is it's for reducing the number of things they need to configure, and therefore troubleshoot.
It's easy to say "The newest Kali release is the distro the org will use" instead of "Use whatever Linux flavor you want and here's an install script that may or may not work or break depending on your distro and/or distro's version".
Them spending time troubleshooting a setup that's out-of-spec is still time billed, so it's better for their customers for everything to roll smoothly too. They also just want to execute their job well, not spend time debugging script / build issues.
In my experience, not all of the packages in the Kali repo are available in the Ubuntu (or other regular distro) repos. Lots of specific pentesting tools can be installed with just `apt install ...` in Kali, which makes it a lot more convenient when you need to do pentesting.
It is believable if you've experienced anything to do with moderation on Facebook. It's a dystopian experience that defies any ordinary expectation of normalcy.
Reminds me of when they do 'firewall updates' at work and many of the common open-source repositories, hosting sites, etc. are blocked.
I understand that some malicious software may use things like curl, but it's also annoying to have to re-create the same ticket and submit it to internal IT; and then if someone working on the ticket hasn't done this before, they close it, and we have to have a meeting about why we need access to that site...
The inverse isn't tolerated. If you're a software developer, you get tested for IT knowledge with phishing emails. Yet in IT it's perfectly normal to have an ignorance of the core needs of the developers - and computing itself - that results in reduced productivity or shadow IT systems.
It's not an exaggeration to say I've experienced it at every employer I've had.
I was on a penetration testing team at a large corp that doesn't specialize in cybersecurity and I downloaded Metasploit and about 15 minutes later an IT person came up to my desk to talk about the malware I just downloaded. I had to walk him to my manager to get him to understand what it was and why it was okay for me to download it.
Their OS is based on CentOS Stream, I think they're one of the very few major organizations that stuck with CentOS post-Stream and did not switch to something else entirely.
Untrue; it's purist startup people and some ISVs who believe that Alma or Rocky are somehow "better".
Meta runs 10M+ CentOS 9 Stream boxes migrating to 10 eventually.
Cent has shorter security update availability latency and they're shipped more consistently. The benefit with Rocky and Alma is double the lifecycle time and arguably better governance, unfortunately though they're both tiny operations that suffer from a narrow bus factor, are always playing catch-up, drifting away from RHEL compatibility, and are the definition of fragmentation.
If you need RHEL-ish for servers, use CentOS Stream. It's not great for desktop. Use Fedora or something more LTS for that.
> Untrue; it's purist startup people and some ISVs who believe that Alma or Rocky are somehow "better".
It's anyone who appreciates the value of stability in server software. In my personal opinion, that value is quite high and far too quickly cast aside by others in the industry.
I am one of those people who agree with you. On my main family computer we run Alma Linux with flatpaks for the main accounts.
I use guix to get up to date tools for development stuff.
(On my laptop I run aeon desktop and guix. I really do think that model is the future. Right now I am hoping to be able to run aeon desktop but with the opensuse slowroll packages which would give me all the benefits of aeon but without the constant updates).
Didn't Zuck recently announce that he's getting rid of fact checkers, on the pretext that the parties hired to do fact checking are biased and introduce censorship and unfair false positives that get accounts shut down?
Was it just a cost reduction: fact checking takes effort and those checkers have to be paid? With the result being situations like this?
There is no such thing as unbiased information. So FWIW, I think fact checking is really just a fight for censorship. Official lies and half truths instead of lies from everywhere intermixed with truths.
There are so many ways to do it wrong, even if you tag info as true or fake and in principle do it with good intentions. For example, it was the case that certain information was tagged as fake, and when a correction was requested, the administrators "could not do anything" (Spain cases researched by Joan Planas, who made the requests himself personally to the biggest official agency in Spain, called Newtral, which is intimately tied to the Socialist Party in Spain... really, the name makes me laugh; let us call war peace, etc., like in 1984). But they were way faster in doing it in the other direction, or often found excuses to clearly favor certain interests.
Now put this in the context of an election... uh... complicated topic, but we all minimally awake people know what this is about...
Your point doesn't hold together because it seems to be conflating fact checking with bias elimination.
They are obviously different and mostly separate.
A presentation of facts can be biased.
E.g. a news agency can have a characteristic political slant, yet not make up facts to suit that narrative.
When a bias is severe, such that it leads to behaviors like concealing important facts in order to manipulate the correct understanding of a situation, then fact checking can find a problem with it.
We have repeatedly found fake news in the fact checking, as well as "official truths", in the case of Spain, and I am pretty sure the pattern is replicated in other places. The funds that bought the newspapers, etc. in Spain are the same all around Europe.
They might not be the same, but they are interrelated, since this is a fight to monopolize the truth, and bias and lies are what you end up seeing. Many times they say sorry and get away with it, but they are not sorry: they are working for some interests.
Look at what happened with Biden's son in Ukraine: the story totally disappeared before an election. Why? Why did it not get through and go viral? I do not trust these agencies at all. Yes, for some irrelevant info they might be OK, but we all know who they work for.
Remember the leaks Musk released when he bought Twitter, including the email exchanges about what to censor. Only a fool would believe those agencies at this point.
1+1=2 has a correct interpretation in any base 3 and higher.
How we know that it wasn't to be calculated in binary is that the digit 2 occurs.
We have to have a reason to suspect that it was intended to be binary, otherwise we are inventing an inconsistency that isn't there in order to find a false or not-well-formed interpretation.
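The digit argument can be checked mechanically; a tiny Python sketch (the helper name is mine) makes the point that "1+1=2" parses in any base from 3 up, but not in binary:

```python
def valid_in_base(digits, base):
    """True if every character in `digits` is a valid digit in `base`."""
    try:
        [int(d, base) for d in digits]
        return True
    except ValueError:
        return False

assert valid_in_base("112", 3)      # 1+1=2 parses in base 3...
assert valid_in_base("112", 10)     # ...and base 10
assert not valid_in_base("112", 2)  # but "2" is not a binary digit
```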
I was going to say something, but the other two replies illustrate things well enough, especially the one about what information to hide or show. Other factors: where a headline goes, how fast information is corrected, what the protocol to correct is, and whether that protocol has a neutral appearance while favoring some parties more than others.
In fact I believe neutrality does not exist as such. No problem with it, objective information and multiple sources with their biases are ok to get an idea as long as facts are shown. But an official truth? Come on, what is that? It is dangerously similar to a dictatorship to have the monopoly of truth.
I imagine something about that caused certain lists to be populated in certain ways, and no linux user cares enough about Facebook to help them correct the problem.
I cared a tiny bit. I even went out and bought a phone so that I could "prove I was a real person" or whatever to try to make a FB account. Account creation failed, my IP was banned, and I just blocked every FB domain and haven't looked back.
Yeah, I was really surprised by this. Last year, I reported a number of people who were trying to scam me (via Messenger messages related to Marketplace listings). Not only did Facebook not see anything wrong with the accounts and scammy messages, I was flagged for sending useless reports.
it (used to) in aggregate trigger human review (i.e. if many people report the same post). However, the humans who reviewed it were underpaid, overworked, and unlikely to have any context, so the output was not necessarily better than the automated system...
Their filters are comically bad. I belong to a Selectric typewriter enthusiasts group and we keep having to re-word things so they don't go into a black hole. Typewriter parts like "operational shaft" or "type ball", or even brand names of gun cleaners and lubricants that are popular with typewriter folks, will cause a post not to appear.
I think they're wrong about the policy. It's more likely that the policy is "let's run the moderation bots unattended to save costs" and is actually site agnostic.
Well, my confidence in the owner of this company is as high as... so I am not surprised that if he is paid (I have no idea whether that is the case in this very situation), he will do what the money dictates without any consideration whatsoever. Did anyone see the ridiculous change he made after years of selling (at least in Europe) fact checking and following censorship, and after the earlier scandal of selling data to influence an election? I do not expect anything nice from this leadership. That is why I stopped using Facebook years ago as much as I could.
I'm not convinced this is intentional. I think their auto-moderation stuff is just buggy lately. To illustrate part of why I say that:
Yesterday I tried to submit a link to a Youtube video of the Testament song "Native Blood". Nothing terribly controversial about that, and I'm nearly 100% sure I've posted that song before with no problems. But it kept getting denied with some "link not allowed because blah, blah" error.
So is "Native Blood" banned on FB? Well, I tried a link to a different video of the same song, and was able to submit it just fine. This feels like a bug to me, and I wouldn't be surprised if similar bugs were interfering with other people trying to post stuff.
Granted that's just speculation so take this for what it's worth.
Maybe it is about time that we stop relying on closed gardens, censored and managed on a whim, and start reclaiming our internet and freedom back, publishing in open platforms?
I'd argue that automated ""AI""-driven moderation is actually more sinister than a human being deciding it. Censorship and control over communication by automated processes should be held to a very high standard (and probably regulated, I'd think). From IBM in 1979: "A computer can never be held accountable, therefore a computer must never make a management decision." ( https://web.archive.org/web/20221216204215/https://twitter.c... )
Yeah, these days it's basically the opposite: since a computer making a decision means we (in the C-suite) can't be held accountable, ALL decisions should be made by computer.
haha, I was thinking something along those lines as I was typing the prior msg! "oh, machine algo, not our fault. we'll try to fix it in the future" >_>
I agree, overzealousness sounds like the most likely reason for this.
> Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats".
The author gives no evidence to back up this claim.
> The author gives no evidence to back up this claim.
How can one provide evidence that something is not being displayed on a website? Isn't this, like, a formal fallacy, or something?
> We've been hearing all week from readers who say they can no longer post about Linux on Facebook or share links to DistroWatch. Some people have reported their accounts have been locked or limited for posting about Linux.
You've implied it's impossible to give such evidence and then you've immediately proved yourself wrong by giving it.
But anyway, they're not asking for evidence that something isn't being displayed. They're asking for evidence that 'Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats"'.
That sounds like a distinction without a difference. It doesn’t seem to meaningfully refute the point; it’s just hung up on the semantics of “policy-maker”. Who cares that the policy-maker is an algorithm?
This is the trouble with automation. It's clear this isn't a malicious post, it just matched some keywords their moderation bot identified as such.
I think a lot of the censorship problems would be resolved if they just shut the bots off and relied on user flagging. Does that require a lot more people? Sure. But the long-run result would be far more people would use and trust these networks (covering the revenue of hiring moderators). I know I'd be a lot happier if there was a thinking human deciding my fate than a random script that only a few people know the inner-workings of.
As-is, it seems like a lot of these social networks are just shooting themselves in the foot just to avoid costs and get a false sense of control over the problem.
Um, no. I don't want to see pics of NSFL gore before the userbase has had a chance to remove them. Which is what most moderators spend time removing from FB, to the point where it psychologically traumatizes them.
You don't have to. That's actually a place where automation could help. You could use image detection to auto-tag content with what the model thinks it contains, then have a list of sensitive tags that are automatically blurred out in the feed (and let users customize the list as they see fit).
If it's something trending towards illegal, toss it into an "emergency" queue for moderators to hand-verify and don't make it visible until it's been checked.
So in your example, if someone uploads war imagery, it would be tagged as "war," "violence," "gore" and be auto-blurred for users. That doesn't mean the post or account needs to be outright nuked, just treated differently from SFW stuff.
Automation + human intervention, yes. In the setup I described, worst case scenario something gets blurred out that's benign, but it doesn't create a press/support nightmare for Meta.
Considering they've open-sourced one of their image detection APIs [1], I'd imagine it's more a problem of accuracy and implementation at scale than a serious technical hurdle.
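A minimal sketch of the tag-then-route scheme described above, assuming a hypothetical classifier supplies the tags (the tag names, thresholds, and routing rules here are illustrative assumptions, not Meta's actual policy):

```python
# Route a post based on tags predicted by some image classifier.
# The tag sets below are illustrative; a real system would use
# model confidence scores and a much larger taxonomy.

SENSITIVE_TAGS = {"gore", "violence", "war"}   # shown, but blurred
EMERGENCY_TAGS = {"csam", "terrorism"}         # held for human review

def route_post(tags):
    """Decide how a post is displayed based on its predicted tags."""
    if tags & EMERGENCY_TAGS:
        return "hold_for_review"   # invisible until a moderator checks it
    if tags & SENSITIVE_TAGS:
        return "blur"              # visible, blurred until clicked
    return "show"

assert route_post({"cat", "sofa"}) == "show"
assert route_post({"war", "violence"}) == "blur"
assert route_post({"terrorism", "war"}) == "hold_for_review"
```

The worst case under this routing is a benign post getting blurred, which is recoverable, rather than an account getting nuked outright.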
Those are subjective classifications, and so will differ between each person. And models are pre-trained to recognize these classifications.
Since you mentioned war, I'm reminded of the Black Mirror episode "Men Against Fire", where an army of soldiers have eye implants that cause them to visually see enemy soldiers as monsters. (My point being this is effectively what Facebook can do.)
I'm not watching a 20 minute video on the topic, but there is a user in an HN comment[1] stating links to debian.org and qubes-os.org were removed by facebook.
Thus Facebook is not censoring Linux discussions or Linux content, as DistroWatch claimed; it is blocking links to what Facebook deems malicious (correctly or incorrectly), something a lot of software does these days.
This is what the yanks call "a complete nothing-burger".
I think the complaint is that it's not really a "comment", so much as it's a link to Bryan's own 20 minute video talking about it. It comes off as an annoying bit of self-promotion.
Though I will admit that Bryan is just a deeply unlikable human who is generally under-informed-at-best on any given subject that he's talking about, so people might be looking at it more cynically than if someone else posted it.
The cost of pissing off devs is so high; why can't companies just knuckle under and stop attacking ad-blocking browsers like Firefox, or dev operating systems? Why would you want to enter that world of pain, acquiring a ton of adversaries, while balancing on a stack of Swiss cheese and duct tape?
What is going wrong in those decision maker heads.
I thought Zuckerberg was removing fact checkers and platform censoring. I'm thoroughly confused. But maybe since Zuckerberg's death the company changed direction again.
I'm genuinely surprised that people were using facebook of all things to discuss Linux distros.
The idea of having to wade through AI generated pictures of Shrimp Jesus and my mad uncle posting about his latest attempts to turn lead into gold (yes, really) to find out about new distros to try seems very alien to me.
It's entertaining in the abstract but fairly depressing when he's telling you in person that he's spending his children's inheritance on turning lead slightly yellow. Still, on the bright side, he seems to have stopped talking about the "globalists" so much.
Also, turning lead into gold is easy: Just break all the protons off to get Hydrogen and maybe Helium, then compress it back so you get a star to form, and wait for it to go nova. Or, if you're in a hurry, you can compress your Hydrogen more and if you kind of jiggle it just the right way then you should get some gold along with other heavy elements.
Yeah, I can understand. I'm fortunate not to have many uncles and aunts who were old enough to use Facebook, and my parents were fairly tech-antagonistic. I did get to see a little of what you are referring to when some of my coworkers added me on FB and started sharing political content.
I still prefer that to all of the fake AI-slop message boards and meme/video culture that seems to have replaced it on FB.
Tech obviously isn't a strong suit, but elsewhere Facebook does have corners with good/entertaining/useful small communities. They have good SNR and are more personal than Reddit.
The secret is to train your feed by bookmarking the groups and linking to them directly instead of accepting whatever flailing nonsense the algo decides to default to.
Having said that - I hope everyone has worked out by now that when you have a "free speech" culture based on covert curation and moderation of contentious issues, it's not just going to be about porn and trans people.
Non-mainstream (i.e. non-consumer) tech is going to be labelled bad-think and suppressed too.
I assume Facebook doesn't want anything posted on FB that can't be turned into a racist diatribe. There's not a whole lot of racism potential in Kernel tuning.
Maybe you could squeeze in anti-Finnish rant about Linus, but it would be minimal
Welcome to 2020 Facebook, except their coverage of valid topics to ban and censor has expanded more broadly now. This might have been avoided had more of its users sent a message 4-5 years ago that social media censorship isn't acceptable in a society that prides itself on free speech.
Perhaps they've become closer buddies with MICROS~1. I wouldn't be surprised if they did this in exchange for "AI" compute, i.e. that losing the Linux audience is worth less than being seen favourably by elder oligarchs.
Sure they do. They really, really don't want government agencies and non-techies to realise that there is a better option for most everyday computer tasks.
No one cares about privacy and freedom more than Linux users. What is even the point of using Crapbook? Everyone in the Linux community is either hanging out on IRC or Matrix, or has self-hosted forums.
What? Google is a linux user - I doubt they care about privacy or freedom. Same with facebook - that company uses linux a lot while actively opposing privacy.
Lots of people use linux because it's a good OS, irrespective of privacy concerns (see the occasional flareup about some software or another automatically shipping off bug reports - some people don't care, others are incredibly concerned).
My wife was temporarily banned for a photo of a marble statue.
My mother receives invitations to groups that share photos of migrants drowned in the Mediterranean.
Don't use Facebook, and certainly don't depend on it.
Edit: Recently, a lot of associations working to prevent HIV, sexually-transmitted diseases and family planning have been progressively de-listed, or their content blocked and their accounts banned, all over the world on all META platforms. This is the true face of freedom of expression according to META and its “community rules”.
I tried to make a post with the https://species.wikimedia.org/ link, and I get "Your content couldn't be shared, because this link goes against our Community Standards".
Being generous, it could be there's NSFW imagery in there? I can't be arsed to dig into a mountain of scientifically named links, but you can find troves of pr0n among other things in Wikimedia if you know where to look.
(I'm not sure why my comment is now collapsed by default. It doesn't seem to be flagged, and has a score of 15.)
I tried again, and this time I get "Posts that look like spam are blocked", and a similar message if I try to leave a link in a comment.
I wonder if spammers have been vandalizing Wikispecies and posting the links, but unlike Wikipedia the editors of Wikispecies struggle to remove the spam in time? The project has hundreds of thousands of pages, but the vast majority would have very little content or oversight. It could be the Wiki project with the worst pages-per-editor ratio.
I guess if they blocked *.wikimedia.org to get at commons.wikimedia.org that could make sense. However all those images are also accessible via an en.wikipedia.org url.
Thank you for actually spelling porn. This whole thing around altering spelling to avoid blocking which I presume comes out of other apps has gotten to be quite annoying.
It absolutely came from censorship. IRC chat rooms and PHPBB message boards with blacklists of words that would get starred out. Hoping it wasn't implemented with substring match so typing "shell" didn't come out "s****".
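The substring-vs-word-boundary failure mode described above (the classic Scunthorpe problem) is easy to demonstrate; both functions here are illustrative toys, not any real platform's filter:

```python
import re

def censor_substring(text, banned):
    # Naive substring replacement: the buggy approach that
    # turns innocent words like "shell" into "s****".
    for word in banned:
        text = text.replace(word, "*" * len(word))
    return text

def censor_words(text, banned):
    # Word-boundary matching (\b) only stars out standalone words,
    # leaving words that merely contain a banned substring intact.
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, banned)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: "*" * len(m.group()), text)

assert censor_substring("shell", ["hell"]) == "s****"
assert censor_words("go to hell, shell user", ["hell"]) == "go to ****, shell user"
```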
Meta barely changed their moderation policy. The community standard docs which list every violation are still extremely long and cover a large swath of speech https://transparency.meta.com/policies/community-standards/, to which they only added 2 bullet point exceptions (and eventually the future addition of community notes)
> speech standards should be determined by public opinion and not by reason, evidence and a scientific mindset.
Yes this is largely a debate between a top-down technocratic worldview vs democratic/meritocratic one. The point is FB is still very much on the former highly centralized expert-defined guideline/automated system side while only making small moves in the other direction with community notes. Maybe they'll keep going in that direction but what they say vs do is an important distinction.
How are you evaluating this? Are you including the truth of the Facebook post, whether moderators correctly/accurately act upon the flagging, whether users choose to stick on the platform after seeing the content, whether users stop believing in any objective truth, or something else?
Community notes only does fact-checking, but moderation has the ability to reduce the activity of bad actors. They serve 2 different purposes from where I stand.
To be clear, the people who believe speech standards should be determined by public opinion are as incorrect as, say, flat earthers.
I don't have a huge problem with community notes per se. I do have a huge problem with blatantly unequal standards just because large parts of the public have morally rotten views.
Speech standards have never been set by "reason, evidence and a scientific mindset". The people who are complaining now that the shoe is on the other foot were quite happy when it was their side setting the rules.
Objective standards would be best, but subjective standards that you pretend are objective are far worse than subjective standards that are honest about it.
Speech standards should be determined by public opinion, science has never had a seat at the table in the West. If anything Communism was the pro-science approach, typically centrally planned societies love science and technocrats - they put a lot of effort into working out a true and optimal way and it didn't work very well. The body count can be staggering.
The moment we start talking about speech standards being set by "science" you get a lot of people who are pretending that their thing is scientific. Ditto reason and evidence.
The win for free speech is setting up a situation where people who are actually motivated by science, reason and evidence can still say their piece without threatening the powerful actors in the community. And limiting the blast radius of the damage when they get things wrong despite being technically correct. But principles of free speech go far beyond what is true, correct and reasonable.
> science has never had a seat at the table in the West.
Other than science being the entire reason the US was able to corner the fascists in WW2. Let alone all the scientific breakthroughs in the last few decades coming from the West. Heck, before WWII, the automobile?
1. There was an entire sentence, taking the second part without the first ("Speech standards should be determined by public opinion") removes essential context.
2. The fascists were Westerners (and leaders in science/technology, for that matter, the US didn't beat them with more technology).
I still disagree, science has had a seat at the table in the West especially around speech. Speech was either locked down using control of technologies or speech was empowered using proliferation of technologies.
For the Japanese, the war was shortened. But by the time of the bomb they were doomed; they could not replace their losses like the Americans could.
The Germans were beaten mostly by the Soviets. They (the Germans) were overwhelmed, and they too could not replace their losses like the Soviets could, especially in manpower.
> Speech standards should be determined by public opinion
To confirm, you are making a normative "ought" statement here, not just a descriptive "is" statement?
> science has never had a seat at the table in the West.
This is a strange idea to me. As a simple example, vaccinations are mandatory for a reason. The unfreedom there is clearly justified.
> If anything Communism was the pro-science approach, typically centrally planned societies love science and technocrats - they put a lot of effort into working out a true and optimal way and it didn't work very well. The body count can be staggering.
What James Scott called high modernism is indeed bad. The problem was not the fact that science was used, but the fact that the models used weren't complex enough to describe local conditions, and that politically motivated models (e.g. Lysenkoism) gained prominence. Science was also used in other parts of the world to much better effect, such as vaccines and HIV medications.
> The moment we start talking about speech standards being set by "science" you get a lot of people who are pretending that their thing is scientific. Ditto reason and evidence.
True, and yet some of those people are more correct than others. This is challenging, but it is not a challenge we can run away from.
> The win for free speech is setting up a situation where people who are actually motivated by science, reason and evidence can still say their piece without threatening the powerful actors in the community. And limiting the blast radius of the damage when they get things wrong despite being technically correct. But principles of free speech go far beyond what is true, correct and reasonable.
I think people not applying reason is far, far worse of a problem today than people applying it.
I've turned the flags off now. It's not a very good thread, though—mostly jokes and generic reactions, which is what happens when an article contains little information, but the information it does contain is provocative. (Edit: the comments got somewhat better through the afternoon.)
These little scandals nearly always turn out to be glitchy ephemera in the Black Box of $BigCo, rather than a policy or plan. I imagine that's the case here too. Why would Facebook ban discussion of the operating system it runs on, after 20+ years?
(Btw: @dang doesn't work - if you want reliable message delivery you need to email hn@ycombinator.com)
I flagged it when it first showed up because “Facebook ban on discussing Linux” is obviously bullshit, it took me half a minute to confirm The Linux Foundation was posting about Linux as recently as an hour ago.
I can believe DistroWatch the website got blocked by Facebook for whatever reason and I can sympathize, but exaggerating it to something obviously false doesn’t do them any favors. I think the title needs to be changed if it’s allowed to stay up.
If I'm reading right, the same Facebook that announced a week or so ago that they were scaling back all moderation and validation around online safety is now putting a blanket ban on users discussing such a fundamental aspect of modern technology that Facebook itself runs on it?
If this is a genuine policy, I'm at a complete loss to understand Facebook's stance on anything.
Distrowatch has taken the observation that distrowatch URLs are blocked and really hyperbolized that into the broader and incorrect claim that discussion of Linux is banned. It isn't.
the "free speech" was a promise to promote right wing speech. do not mistake it for ideology.
banning left wing activism, either acknowledging the genocide in Gaza or apparently now promoting free (less surveilled) software is against what the authoritarians want so it is banned.
this is all consistent if you see it through that lens
dang's probably right that its a glitch- but I honestly believe Linux is Free as in Freedom, which is opposed by both parties but primarily the radical authoritarians in charge right now
Linux is free software, and software freedom is communist. It's also the brainchild of a Finn, and every red-blooded American knows that Europeans are all commies.
Real patriots use good ol' American operating systems, like Oracle Solaris™.
I thought they were going to go full free speech. /s
Seriously, if you haven’t already, sign up for a Mastodon account. This is the motivation you need. Encourage some friends and family members to connect with you there.
This is an obvious mistake, it's obvious Facebook isn't deliberately banning Linux posts, it's obvious their moderation is incorrectly flagging some posts for some reason, it'll get fixed. It could have been an interesting story and discussion about problems with false positives and automated moderating, or about the lack of human contact at Facebook scale, but instead it's just passionate screeds from too easily excitable posters.
they go together. once you have a walled garden, the temptation to moderate/censor it is too large. censorship was practically impossible in the old internet before social media.
It was also largely unnecessary because folks hadn't normalized acting like wild animals in online spaces, tools for automating acting like a wild animal online were lacking, and reach was extremely limited so there was little financial incentive for private interests to engage with the space in any way. All of which takes a back seat to folks more or less agreeing that online is where bullshit lived and only an embarrassing rube would take any of it seriously. The great irony here being the amount of bullshit online has only increased decade over decade yet weirdly at some point folks started taking it seriously, with utterly predictable results.
https://web.archive.org/web/20250127020059/https://distrowat...
I can confirm that I explicitly tested this with my essentially unused Facebook account, stating outright that I was testing restrictions on talking about Linux. The text was: """I don't often (or ever) post anything on Facebook, but when I do, it's to check if they really, as announced on hckrnews, are restricting discussing Linux. So here's a few links to trigger that: https://www.qubes-os.org/downloads/ ... https://www.debian.org/releases/stable/""" and indeed, within seconds, I got the following warning: """We removed your post. The post may use misleading links or content to trick people to visit, or stay on, a website.""" This is one massive wow, considering how much Facebook runs on Linux.
A user who never posts anything suddenly posting a message containing URLs might in itself be a signal that something is weird. It would be an interesting test to post something not Linux-related and see how that fares.
No, this is a supported/common format on Facebook.
Source: I work building an SMM tool, and Facebook Link posts constantly need our attention
> A user who never posts anything suddenly posting a message containing urls might in itself be a signal that something is weird.
...on a social media site designed to aggregate URLs?
Facebook is not designed to aggregate URLs and heavily penalizes external content
Any user ever posting URLs should never ever be removed. The Web should be allowed to exist. This is utterly despicable behavior.
Clearly there is content that would be unacceptable to post. Anything patently illegal, for example.
Or websites that look exactly like paypal, whose URLs begin in paypal.com (followed by a dot, not a slash), but that are, in fact, not Paypal.
I think that's a much more pressing concern.
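To illustrate the lookalike-domain trick (a minimal sketch, not production code): the registered domain is determined by the end of the hostname, not the beginning, so `paypal.com.evil.example` is actually a subdomain of `evil.example`. The hostnames here are hypothetical examples, and the last-two-labels heuristic is a crude stand-in; real code should consult the Public Suffix List (e.g. via the tldextract package).

```python
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Crude stand-in: treat the last two hostname labels as the
    registered domain (ignores multi-label suffixes like co.uk)."""
    host = (urlparse(url).hostname or "").lower()
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

def looks_like_paypal_phish(url: str) -> bool:
    """Hostname mentions paypal.com but isn't actually under paypal.com."""
    host = (urlparse(url).hostname or "").lower()
    return "paypal.com" in host and registered_domain(url) != "paypal.com"

print(looks_like_paypal_phish("https://paypal.com.evil.example/login"))  # True
print(looks_like_paypal_phish("https://www.paypal.com/signin"))          # False
```

The same parsing step is why a human glancing at the URL gets fooled while a trivial program does not: the eye reads left to right, but DNS resolves right to left.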
the parent said: to block things that are illegal. phishing is illegal.
Insanity. Absolutely. Maybe.
Clearly there's a need for some kind of bad-url blocker. You don't want compromised accounts (or clueless people) sharing nefarious links to trusted friends.
And clearly blocking DistroWatch etc. is bizarre overreach. And probably not intended behaviour -- it just makes no sense.
The web exists just fine. Using Facebook as a front end to the web is a terrible idea though.
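For what it's worth, the core of such a bad-url blocker is just a normalized hostname match against a blocklist. A minimal sketch, with purely hypothetical blocklist entries (real systems pull these from feeds such as Google Safe Browsing or AV vendors):

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries for illustration only.
BLOCKED_DOMAINS = {"evil.example", "scam.test"}

def is_blocked(url: str) -> bool:
    # Normalize: lowercase, strip any trailing dot from the hostname.
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    # Match the domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://login.evil.example/verify"))        # True
print(is_blocked("https://www.debian.org/releases/stable/"))  # False
```

The hard part isn't this check; it's keeping the blocklist accurate, which is exactly where the DistroWatch-style false positives come from.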
I once posted a Youtube comment with a link. Got removed without notice. At first I thought it was the uploader, but no ...
Yeah, anything with a link gets silently removed.
Wikipedia links seem to be an exception, maybe that’s special-cased.
Amusingly, new YouTube channels can't even put links in the descriptions of their own videos.
They really dislike this whole hypertext thing.
They really want Xanadu's for-profit linking.
Imagine how bad that could have been, had it happened - extrapolating from the current state of the web.
The internet would look like the spam folder of a compromised email address. No thanks.
Mastodon does not restrict the posting of URLs and it does not look like “the spam folder of a compromised email address” at all.
Mastodon isn't in charge of moderation though, that's up to the individual instances.
Also, Mastodon is tiny, and spam is a numbers game.
But are you not somewhat agreeing with the point that you're implicitly arguing against: "[This isn't a problem] if I [am] only seeing updates from the people I actually know and explicitly connected to on the social graph. The current problem exists because the content is chosen algorithmically."
The size of a total network is irrelevant until you start randomly connecting nodes.
At the moment "no one" is on mastodon. The folk there are the few, and are likely a self-selecting group that are resistant to spam or scams. Therefore you don't see (much) spam or scams there.
Of course should it become popular (side note; it wont) such that my mom and her friends are on it, then the spammers and scammers will come too. And since my mom is in my social graph a lot of that will become visible to me.
Enjoy Mastodon now. The quality is high because the group is small and the barrier to entry is high. Hope it never catches on, because all "forums" become crap when the eternal September arrives.
Mastodon is perfect for affirming your worldview and strengthening your social bubble, because instance rules are intolerant of many kinds of opinions.
There is always NOSTR. Over there you follow however you wish without such artificial walled gardens.
Tip: If someone is trolling you, they can still reply to your posts with no way for you to stop them. No perfect solution exists, I guess.
You are correct that since nostr is censorship resistant, you can't really prevent someone from posting something, but you can prevent being exposed to it on your side. If it's a single nostr account (npub) sending you something you don't want, then you can block or mute them (the blocking is done in your app on your device). If they try attacking you at scale, then you can rely on web of trust (i.e. only allow content from people you actually follow, and 2nd degree) - this is now often the default.
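The web-of-trust filtering described above (only show content from people you follow, plus 2nd degree) can be sketched in a few lines. The follow graph below is hypothetical toy data; real nostr clients typically build it from kind-3 contact-list events (NIP-02):

```python
# Toy follow graph: pubkey -> set of pubkeys they follow (hypothetical data).
follows = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
    "bob":   {"dave"},
    "carol": {"mallory"},
}

def trusted(me: str, graph: dict) -> set:
    """Everyone I follow, plus everyone they follow (2nd degree)."""
    first = graph.get(me, set())
    second = set()
    for p in first:
        second |= graph.get(p, set())
    return first | second

def should_show(author: str, me: str, graph: dict) -> bool:
    return author in trusted(me, graph)

print(should_show("carol", "me", follows))    # True: 2nd degree via alice
print(should_show("mallory", "me", follows))  # False: 3rd degree
```

Because the filtering runs entirely client-side, a spammer can still publish to relays; they just never make it into your view.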
That works for our own account to avoid seeing the texts, it doesn't prevent the troll from still posting replies to our posts.
With that said, that is an exotic situation. I'm a big fan of NOSTR overall; all my recent hobby projects used npub and nsec. The simplicity and power of that combination is remarkable. No more emails, no more servers, no more passwords.
Because everyone knows, Twitter and Facebook have never arbitrarily enforced moderation on political topics they consider distasteful.
Yet. There are lots of signs that spam is coming to Mastodon, and there is real concern from a fair number of people who are there. Anyone with a lot of followers will be tagged often by spam (if you tag someone, all their followers will see your post).
The simplest explanation for this would be that spammers are not targeting Mastodon.
As someone who uses Mastodon I can assure you that spammers do target mastodon. So far it is only a few though and so human moderators are able to keep up. I doubt that will last long.
Mastodon looks like a barely used social network instead.
Not if I'm only seeing updates from the people I actually know and explicitly connected to on the social graph.
The current problem exists because the content is chosen algorithmically
No. Even then. You may know assholes. User accounts may be compromised. Users may have different tolerances for gore than you do.
Not gotchas, I’m not arguing for the sake of it, but these are pretty common situations.
I always urge people to volunteer as mods for a bit.
At least you may see a different way to approach things, or else be able to better articulate the reasons a rule can’t be followed.
Would a less draconian solution not then be to hide the link, requiring the user to click through: [This link has been hidden because it links to [potential malware/sexually explicit content/graphically violent content/audio of a loud Brazilian orgasm/an image that has nothing to do with goats/etc.] Type "I understand" here ________ to reveal the link.]?
You get the benefits of striving to warn users, without the downsides of it being abusive, or seen as abusive.
It’s not a bad option, and there may be some research that suggests this will reduce friction between mod teams and users.
If I were to build this… well first I would have to ensure no link shorteners, then I would need a list of known tropes and memes, and a way to add them to the list over time.
This should get me about 30% of the way there. Next, even if I ignore adversaries, I would still have to contend with links which have never been seen before.
So for these links, someone would have to be the sacrificial lamb and go through it to see what’s on the other side. Ideally this would be someone on the mod team, but there can never be enough mods to handle volume.
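The pipeline described so far (refuse shorteners, interstitial for known-bad domains, human review for never-seen links) might look roughly like this. All list contents are hypothetical placeholders; in practice they are maintained operationally over time:

```python
from urllib.parse import urlparse

# Hypothetical lists for illustration only.
SHORTENERS  = {"bit.ly", "t.co", "tinyurl.com"}
KNOWN_BAD   = {"evil.example"}
SEEN_BEFORE = {"debian.org", "qubes-os.org"}

def classify_link(url: str) -> str:
    host = (urlparse(url).hostname or "").lower()
    domain = ".".join(host.split(".")[-2:])  # crude registered-domain guess
    if domain in SHORTENERS:
        return "reject"   # can't see through shorteners, so refuse them
    if domain in KNOWN_BAD:
        return "hide"     # show the click-through interstitial
    if domain not in SEEN_BEFORE:
        return "review"   # someone has to eyeball novel links
    return "allow"

print(classify_link("https://bit.ly/abc"))       # reject
print(classify_link("https://www.debian.org/"))  # allow
```

The "review" branch is exactly the scaling problem discussed next: every novel domain lands in a queue that some human has to work through.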
I guess we’re at the mod coverage problem - take volunteer mods; it’s very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there’s a page of reports.
That is IF you get reports. People click on a malware infection, but aren’t aware of it, so they don’t report. Or they encounter goats, and just quit the site, without caring to report.
I’m actually pulling my punches here, because many issues, e.g. adversarial behavior, just nullify any action you take. People could decide to say that you are applying the label incorrectly, and that the label itself is censorship.
This also assumes that you can get engineering resources applied - and it’s amazing if you can get their attention. All the grizzled T&S folk I know, develop very good mediating and diplomatic skills to just survive.
That's why I really do urge people to get onto mod teams, so that the work gets understood by normal people. The internet is banging into the hard limits of our older free-speech ideas, and people are constantly taking advantage of blind spots among the citizenry.
> I guess we’re at the mod coverage problem - take volunteer mods; it’s very common for mods to be asleep when a goat-related link is shared. When you get online 8 hours later, there’s a page of reports.
When I consider my colleagues who work in the same department: they have very different preferred work schedules (one colleague would even love to work from 11 pm to 7 am, and then sleep, if he were allowed to). If you ensure that you have both larks and night owls on your (voluntary) moderation team, this problem should be mitigated.
Then this comes back to size of the network. HN for example is small enough that we have just a few moderators here and it works.
But once the network grows to a large size it requires a lot of moderators and you start running into problems of moderation quality over large groups of people.
This is a difficult and unsolved problem.
I admit that ensuring consistent moderation quality is a harder problem than moderation coverage (the sleep-pattern ;-) problem).
Nevertheless, I do believe that there do exist at least partial solutions for this problem, and a lot of problems concerning moderation quality are in my opinion actually self-inflicted by the companies:
I see the central issue as this: companies have deeply inconsistent goals about what they do and don't want on their websites. Also, even if there is some consistency, they commonly don't clearly communicate these boundaries to the users (often for "political" or reputational reasons).
Keeping this in mind, I claim that all of the following strategies can work (but also each one will infuriate at least one specific group of users, which you will thus indirectly pressure to leave your platform), and have (successfully) been used by various platforms:
1. Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right"). This will, of course, infuriate users who are on the "free speech" side. Also people who have a "currently politically accepted" stance on the controversial topic will be angry that they are not allowed to post about their "right" opinion on this topic, which is a central part of their life.
2. Only allow arguments for one side of some controversial topics ("taking a stance"): this will infuriate people who are in the other camp, or are on the free speech side. Also consider that for a lot of highly controversial topics, which side is "right" can change every few years "when the political wind changes direction". The infuriated users likely won't come back.
3. Mostly allow free speech, but strongly moderate comments where people post severe insults. This needs moderators who are highly trusted by the users. Very commonly, moderators are more tolerant of insults from one side than from the other (or consider comments that are insulting, but within their Overton window, to be acceptable). As a platform, you have to give such moderators clear warnings, or even get rid of them.
While this (if done correctly) will pacify many people who are on the "free speech" side, be aware that 3 likely leads to a platform with "more heated" and "controversial" discussions, which people who are more on the "sensitive" and "nice" side likely won't like. Also advertisers are often not fond of an environment where there are "heated" and "controversial" discussions (even if the users of the platform actually like these).
>Simply ban discussions of some well-defined topics that tend to stir up controversies and heated discussion (even though "one side may be clearly right").
Yup. One of my favored options, if you are running your own community. There are some topics that just increase conflict and are unresolvable without very active referee work. (Religion, Politics, Sex, Identity)
2) This is fine? Ah, you are considering a platform like Meta, which has to give space to everyone. Don't know on this one; too many conflicting ways this can go.
3) One thing not discussed enough is how moderating affects mods. Your experience is alien to what most users go through, since you see the 1-3% of crap others don't see. Mental health is a genuine issue for mods, with PTSD being a real risk if you are on one of the gore/child-porn queues.
These options are, to a degree, discussed and being considered. At the cost of being a broken record: more "normal" users need to see the other side of community running.
There are MANY issues with the layman idea of free speech; it's hitting real issues when it comes to online spaces and the free-for-all meeting of minds we have going on.
There are some amazing things that come out of it, like people learning entirely new dance moves, food or ideas. The dark parts need actual engagement, and need more people in threads like this who can chime in with their experiences, and get others down into the weeds and problem solving.
I really believe that we will have to come up with a new agreement on what is "ok" when it comes to speech, and part of it is going to be realizing that we want free speech because it enables a fair marketplace of ideas. Or something else. I would rather it happen from the ground up than top down.
> Ah, you are considering a platform like Meta, who has to give space to everyone.
This is what I at least focused on since
- Facebook is the platform that the discussed article is about
- in https://news.ycombinator.com/item?id=42852441 pixl97 wrote:
"Then this comes back to size of the network. HN for example is small enough that we have just a few moderators here and it works.
But once the network grows to a large size it requires a lot of moderators and you start running into problems of moderation quality over large groups of people."
As you said, consistent moderation is different from coverage. Coverage will matter for smaller teams.
There’s a better alternative for all of these solutions in terms of consistency: COPE was released recently, and it’s basically a lightweight LLM trained on applying policy to content. In theory it can be used to handle all the consistency and coverage issues. It’s beta though, and needs to be tested en masse.
Eh.. let me find a link. https://huggingface.co/zentropi-ai/cope-a-9b?ref=everythingi...
I’ve had a chance to play with it. It has potential, and even being 70% good is a great thing here.
It doesn't resolve the free speech issue, but it can work towards consistency and clarity on the rules.
I will admit I’ve strayed from the original point at this stage though
You would be surprised at the amount of crap that exists and the amount of malware that gets posted to FB.
Lord do I wish that were true. The main reason I left Facebook was less the algorithmic content I was getting from strangers, and more the political bile that my increasingly fanatical extended family and past acquaintances chose to write.
When you browse without a Pihole and a blocker, it does.
...have you seen the internet in the last 30 years? That's exactly what remains.
You should know that this sort of rhetoric is both
a) silly, because... it's not true. Spam, phishing attempts, illegal content - all of this should be removed.
b) more damaging to whatever you're advocating for than you realize. You want a free web? So do I. But I'm not going to go around saying stuff like "all users should be able to post any URL at any time" and calling moderation actions "utterly despicable"
I just tried the same two URLs. I also got a message saying the post was removed.
I'd be curious if it's blocked if someone links just debian.org . I can definitely see a [totally overzealous] "security filter" blocking Qubes, but Debian is one of the most popular Linux distros in the world, so that would be especially ridiculous.
Relicense the kernel with license that prevents usage for dystopiadistros?
That's non-free. Quoting from https://opensource.org/osd
> 6. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
Non-free according to the OSI and several other organizations. If you have strong feelings that direct you in such a way, there's no reason to hold their opinion in sacred regard. Multiple philosophies can coexist. The DFSG and the FSF's schools of thought, for instance, are often in conflict, and yet the world keeps on spinning.
Your custom license built with your own philosophy will still interoperate just fine with many common open source licenses, and as a bonus for some, will ward off corporations with cautious lawyers who don't like unknown software licenses.
Non-open, you mean. OSI never tried to contribute to the free software movement.
Indeed. One of the most important freedoms you grant to others by using an Open Source license is the freedom to do something you might not like.
What is a "dystopiadistro"? It's not that I don't know the individual words, but combined? What is "dystopiadistro" supposed to mean?
Companies that actively decay society for profit? PS: Companies that support change away from a law-based society also violate the license, by virtue of it being based on laws and rules.
Relicensing needs a CLA.
Opposite anecdata point: posted a link to DistroWatch and mentioned Linux without issue.
Giving you another confirmation, mate. Zuck removed my post about Qubes OS.
Facebook has been blocking distrowatch at least part of the time for three years now, see https://news.ycombinator.com/item?id=29529312
I've been perplexed for years; I wonder if it went unnoticed all this time, or if they reverted and then reimplemented the ban.
It's probably a recurrence of the same issue.
If your domain links to content that AVs flag as malware, it gets blocked on FB. Distrowatch is likely uniquely susceptible to this because they're constantly linking to novel, 3rd-party tarballs (via the "Latest Packages" column).
In this case, it was the Privoxy 4.0.0 release from the 18th. You can see it linked in this Jan 19 snapshot of the site: https://web.archive.org/web/20250119125004/https://distrowat...
Right, a proxy focused on privacy and removing ads. Of course that's "malware" to Facebook, a site recommending devilry such as this must be silenced at all cost...
Buddy. Not everything is a conspiracy theory.
https://www.virustotal.com/gui/file/c08e2ba0049307017bf9d8a6...
It's either intentional, which would be puzzling and unsettling, or it's a bug which has gone unnoticed. In any case, it is proof that big tech is in no shape to take on the responsibility of moderating discourse on the internet. This reminds me of the bug that falls into a typewriter at the beginning of the movie "Brazil", which causes a spelling error and the arrest and execution of a random innocent person. Granted, this type of automated banning without any ability to involve a real human is not costing any lives (yet), but I am increasingly worried about how big tech is becoming a Kafkaesque lawnmower. One thing is to deliberately censor speech that you do not like; another is to design a system where innocent and important speech is silently censored and no one in charge even notices.
> It's either intentional, which would be puzzling and unsettling, or it's a bug which has gone unnoticed.
I've long believed that a large part of technological evil comes from bugs which were introduced innocuously, but intentionally not fixed.
Like, your ISP wouldn't intentionally design a system to steal your money, but they would build a low-quality billing system and then prioritise fixing systematic bugs that cause errors in the customer's favour, while leaving the ones that cause overbilling.
This could easily be the same on Facebook - this got swept up in a false positive and then someone decided it's not a good one to fix.
There's a rumor that an unnamed ISP did exactly that: overcharged a large portion of its customers due to a software bug, then decided not to fix the issue, instead relying on customers to call support and have the charge fixed.
This has been going on for years. There are no humans to review your ban appeal. Tech companies don't want to spend money on customer service.
And what are you going to do about it? Get into a lawyer slap fight with a foreign trillion dollar corporation?
Distrowatch was blocked for linking to an AV-flagged privoxy 4.0.0 tarball. The same kind of anti-malware blocking you'd expect for a mass-market, non-technical audience. Nothing to do with "speech" or Linux in general.
Some context: https://sourceforge.net/p/forge/site-support/26448/
Well, that doesn't explain why someone else in this discussion had their post removed, as there was no mention of distrowatch: https://news.ycombinator.com/item?id=42840143
I guess the filtering is at the level of: "My 11-year-old son keeps talking about this Linux thing with his computer. What is Linux? Is it a hacking tool? Should I be worried?"
Probably entirely unrelated to the distrowatch thing.
Who knows? The article says "I've tried to appeal the ban and was told the next day that Linux-related material is staying on the cybersecurity filter." -- presumably we could ask Distrowatch to share the exact wording of the response they got back, but the fact FB apparently responded in such a way suggests it wasn't a filter specific to Distrowatch.
I think the Distrowatch author is just mis-summarizing the interaction. https://news.ycombinator.com/item?id=42848244
Maybe! We're all just speculating about the degree of accuracy here. I messaged them on Mastodon to see if they will clarify the text. Will post back if I hear from them.
On another note, Sourceforge just removed the malware flag, but did they actually check anything, or did they just go with the provided explanation without any concrete details? If I hijacked some software and got caught, I'd act nonchalantly like this as well and hope it'd blow over without anyone noticing.
As far as I know, they didn't check anything. (And neither have I -- no comment on whether this is an AV true positive or false positive.)
Here's VirusTotal on the tarball (note Chrome blocks its download, for the same reason): https://www.virustotal.com/gui/file/c08e2ba0049307017bf9d8a6...
Nimda was Windows malware from 2001. It seems unlikely that it would be a meaningful attack vector for a compromised Privoxy in 2025. But again, I have not investigated it.
I had my post removed a couple of weeks ago for linking to aeon desktop (immutable opensuse).
Thank you for providing this; it seemed a little clickbaity. Even far less technical companies run some things on Linux, so it seems weird they'd ban Linux talk in general.
> Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats".
That's quite the statement to make without any source to back it up; I wonder what the evidence for this is.
I assumed that part was conjecture. However, if you define “internal policy makers” broadly, from the user's perspective, then it's provably true from the result.
I get that it is worded as if people in a boardroom made a decision after having a debate. However, an overworked admin or an AI moderator could just as easily be lumped together as “internal policy makers” from the user's perspective.
They are the source. A journo could write an article and mention distrowatch as where they got their information from. If you don't trust them - great, you can do your own research.
> I wonder what the evidence for it is
Maybe "Any posts mentioning DistroWatch and multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed" and "We've been hearing all week from readers who say they can no longer post about Linux on Facebook or share links to DistroWatch. Some people have reported their accounts have been locked or limited for posting about Linux"
What do you think evidence consists of if not that?
The evidence shows that Facebook is blocking Linux-related posts, while the initial "policy makers decided" claim is significantly stronger and is not supported by anything. A much more obvious explanation is that some buggy ML classifier added the distrowatch website to a spam list, which triggers automated enforcement without any policy maker involvement.
The purpose of a system is what it does. If this behavior is happening because nobody with authority cares to do anything about it, that's also a decision. I never understand why people rush to make excuses for these huge companies awash in resources with no real accountability or customer support.
Sorta agree, but it's useful to distinguish proximate cause vs ultimate cause nonetheless.
I'm obviously not claiming that Facebook moderation is perfect but it's a pretty big stretch to go from "Facebook does a bad job of reducing false positives" to "Facebook purposefully bans Linux discussions".
> I never understand why people rush to make excuses for these huge companies awash in resources with no real accountability or customer support
Because if nobody pushes back against the hyperbole then it just becomes a competition of who can make up the most exaggerated claim in order to attract the most attention.
Would that people would make the same effort to push back against PR departments, which in the case of social media companies often end up enabling the industrialized production and distribution of hyperbole.
Where can I downvote the PR department?
Wherever you see someone repeating their talking points.
If "some buggy ML classifier" is allowed to make decisions that trigger broad enforcement, that classifier is, for all intents and purposes, a policy maker. The claim made by the article is somewhat broad relative to the evidence presented, but whether policy decisions are automated or not doesn't really matter.
This is a horrible butchering of language. You know that "policy maker" means person in everyday usage, stop being obtuse.
In the past I would have agreed with this statement, but nowadays I would assume an organization's actions are their policy until they state and act otherwise.
Humans made a policy that said the computer system could do this, so while GP might be inaccurate, you’re not right either.
That's only if humans are properly in charge of the system. With lots of moderation tools, they aren't.
then in this case, the policy maker is the person that empowered the AI.
doesn’t change the fact that the AI is seemingly being given final authority over policy decisions.
They have a screenshot of Facebook reviewing the post and deciding not to restore it, so I guess it isn’t just a buggy ML classifier (although it could be a buggy ML classifier combined with a human that doesn’t feel able to overturn it).
I don't think they actually ever review anything.
I've reported nazi content a number of times and it never violated the policy.
What you just did is a fallacy. That's fine, but it needs to be asked: what sort of "Nazi content" did you report?
If it was a user calling Trump a Nazi, then it should have been removed, and their moderation failed.
If it just espouses Nazi ideology or rhetoric, that's free speech in the US.
That's just how it is. It's part of this country. I have to listen to both the throaty, greasy growl of the white supremacist and the piercing howl of the victims wounded by words.
Edit to add additional context: there's a difference between someone "posting" "nazi" content on Facebook and here on HN, for example. On FB they figure you're seeing it because of your actions: your friends, a group you joined, etc. If it's a friend posting on their wall, your moderation task is easy: block the friend, unfriend, talk to the friend, call them out. Regardless of your decision, FB doesn't have any obligation or, I would argue, right to step in and moderate in those circumstances. If it's in a group, the moderators of the group have to decide if it represents the group. If it does and you disagree, leave the group.
Someone spouting nazi nonsense on HN is spouting it into a megaphone on the streetcorner, as it were. I have to read the content, even if i didn't actively follow that user or "join" that group.
There are different moderation strategies. Merely invoking "nazi" as the bogeyman to back up your point is fallacious.
It's too easy to hide behind a computer to avoid responsibility. "It's not my fault, the computer did it!" is a bad excuse. Computers don't have agency, but people do. Anything a computer someone owns does is their fault. They had the choice not to boot it. They had the choice not to buy it.
The evidence only shows that fb is blocking distrowatch links
And it's doing so because of, or as a consequence of their policies.
If it's a consequence of a 'buggy ML classifier', well, it's FB's policy to use one for censorship.
You can't launder accountability with an 'It's AI' black box.
Three claims are there:
- Facebook is censoring this content
- They decided Linux is malware
- They label groups associated with Linux as "cybersecurity threats"
The first one they seem to give evidence for; the second two seem to be assumptions.
> - Facebook is censoring this content
I’m surprised we haven’t yet heard from the “it isn’t censorship if a private company is doing it” crowd in this conversation
As a member of that crowd, you're misrepresenting the argument. It is absolutely censorship when a private company does it, but they have the right to do so; it is not illegal. But they also cannot force me to use their platform, I have the right not to use it.
I don't have a problem with the censorship here on HN, so I post here. I do have a problem with the censorship on Meta properties (aside from being offended by their product design and general aims as an organization), so I don't have accounts with them or view content on their properties. I also have the right to criticize them for their censorship, but not the right to prevent anyone else from using it if they want.
I’m not misrepresenting the argument because you are not a member of the crowd I was talking about.
There are people here who literally argue “it isn’t censorship because a private company did it”. Here’s a random example of a recent such comment: https://news.ycombinator.com/item?id=42787234 - other examples: https://news.ycombinator.com/item?id=42664998 https://news.ycombinator.com/item?id=41385109
There are really three separate issues:
(a) can something a private entity decides to do, without any government pressure to do it, count as “censorship”? This is a definitional question.
(b) is such private censorship illegal (in whatever jurisdiction)? This is a factual question of what the law actually is.
(c) should such private censorship be illegal (in whatever circumstances)? This is a public policy question of what the law ought to be.
You are talking about (b), whereas I was talking about (a).
Why do you bring up that you are surprised? Doesn't seem to add to the conversation.
What would a definition of censorship be that includes private entities? Can you link to one?
Why would he bring up what he views as hypocrisy among members of this community, who espouse the view that it is not censorship when a private entity censors one viewpoint (something they disagree with) but stay silent (viewed as tacitly agreeing) when there is outrage over the removal of viewpoints those members agree with?
IMO, it adds more to the conversation than all the comments that dog-piled with "It's not censorship because it's not the government".
>What would a definition of censorship be that includes private entities?
Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient". Censorship can be conducted by governments and private institutions.
https://en.wikipedia.org/wiki/Censorship
Censorship, the suppression of words, images, or ideas that are "offensive," happens whenever some people succeed in imposing their personal political or moral values on others. Censorship can be carried out by the government as well as private pressure groups. https://www.aclu.org/documents/what-censorship
Censorship, the changing or the suppression or prohibition of speech or writing that is deemed subversive of the common good. It occurs in all manifestations of authority to some degree, but in modern times it has been of special importance in its relation to government and the rule of law. https://www.britannica.com/topic/censorship
I would ask you if you can link to a definition of censorship that only calls out the government? Aside from XKCD's terrible comic. https://xkcd.com/1357/
> What would a definition of censorship be that includes private entities? Can you link to one?
Merriam-Webster defines censorship [0] sense 1(a) as "the institution, system, or practice of censoring" and sense 1(b) as "the actions or practices of censors". Neither definition includes an explicit requirement that it must be done by the government as opposed to a private entity, although we also have to look at their definitions of "censoring" and "censors". Their example for sense 1(a) does mention the government ("They oppose government censorship") – but I don't think we should read examples as limiting the scope of the definition, plus the very phrase "government censorship" suggests there may also be "non-government censorship".
For "censor" (noun), their sense (1) is "a person who supervises conduct and morals" – it doesn't say such a person can only belong to the government. It then says "such as" (which I read as implying that the following subsenses shouldn't be considered exhaustive), sense (1)(a) "an official who examines materials (such as publications or films) for objectionable matter" – an "official" needn't be government – indeed, their definition of "official" [2] gives two examples, a "government officials" and a "company official", clearly indicating that officials can be either public or private. Their example for censor noun sense (1)(a) mentions "Government censors..." – but again, examples don't limit the scope of the definition, and qualifying them as "government" implies there may be others lacking that qualification.
For "censor" as a verb, Merriam-Webster gives two senses, "to examine in order to suppress (see suppress sense 2) or delete anything considered objectionable" (example: "censor the news"), and "to suppress or delete as objectionable" (example: "censor out indecent passages"). Neither gives any hint of being limited to the government. Let me give my own example of the verb "censor" being used, quite naturally, in a sense in which the government is not directly involved: "The Standards and Practices department of NBC censored one of Jack Paar's jokes on the February 10, 1960, episode of The Tonight Show", from the Wikipedia article "Broadcast Standards and Practices". [3] Now, you might argue that NBC was forced into censorship by the FCC – possibly, but I'm not sure if the FCC would have objected to the specific joke in question, and NBC had (and still does have) their own commercial motivations for censorship separate from whatever legal requirements the FCC imposed on them.
Similarly, Wiktionary's definition of "censorship" starts with "The use of state or group power to control freedom of expression or press..." [4]. The fact it says "state or group" as opposed to just "state" implies that non-governmental actors can engage in censorship per their definition.
Wiktionary's definition of the noun "censor" includes "An official responsible for the removal or suppression of objectionable material (for example, if obscene or likely to incite violence) or sensitive content in books, films, correspondence, and other media" [5] – it never says the official has to be a government official, and their example sense is "The headmaster was an even stricter censor of his boarding pupils’ correspondence than the enemy censors had been of his own when the country was occupied" – which could very easily be about a private school rather than a government-run one.
I should also point out that the Catholic Church has officials called "censors". To quote the 1908 Catholic Encyclopaedia article "Censorship of Books" [6], "Pius X in the Encyclical 'Pascendi Dominici gregis' of 8 September, 1907 (Acta S. Sedis, XL, 645), expressly orders all bishops to appoint as censors qualified theologians, to whom the censorship of books appertains ex officio." And the Catholic Church still employs "censors" to this day, [7] although their role has shrunk greatly – generally they are theologians (most commonly priests, although I believe laypersons are eligible for appointment) to whom a bishop delegates the review of certain publications (primarily religious education curricula) and who then makes a recommendation to the bishop as to whether to approve the publication or demand changes to it. Obviously if the Catholic Church has "censors", the concept includes private bodies, since the Catholic Church is a private body almost everywhere (Vatican City and the Holy See excluded).
[0] https://www.merriam-webster.com/dictionary/censorship
[1] https://www.merriam-webster.com/dictionary/censoring
[2] https://www.merriam-webster.com/dictionary/official
[3] https://en.wikipedia.org/wiki/Broadcast_Standards_and_Practi...
[4] https://en.wiktionary.org/wiki/censorship
[5] https://en.wiktionary.org/wiki/censor
[6] https://www.newadvent.org/cathen/03519d.htm
[7] see 1983 CIC Canon 830, https://www.vatican.va/archive/cod-iuris-canonici/eng/docume...
Here's a great recent, very dense video (in French), almost exclusively about non-state censorship:
https://m.youtube.com/watch?v=B6GWoJTDttU
While I agree with you that this is off-topic, I am happy not to have to see that argument. And you were the one to bring it up here.
There is a screenshot in the article of the appeal, confirming (1) and (2). (3) follows logically.
No, this doesn't show (2). It shows that Distrowatch specifically was considered malware -- not Linux in general.
https://news.ycombinator.com/item?id=42847474
To be fair, the screenshots are data points, but more would be needed to support the generalization being made.
Confirms that one time, one moderator labeled it that, not that "Facebook decided" it.
I thoroughly dislike Facebook as much as the next person, but none of what you quoted constitutes evidence for a ban on discussing Linux on the platform.
Reading the post, it sounds like this may rather be because of incorrect categorization of DistroWatch and links to it than an outright ban on Linux discussion. So yet another issue with Facebook's content moderation methods.
Does the distinction matter?
Yes; the scope of censorship over discussing Linux at all vs the scope of censorship of linking to Distro Watch is vastly different.
If Facebook was removing links to a pro-Catholic website for some reason but still allowed the discussion of Catholicism, Catholic Church groups, etc., you would be daft to claim that Facebook is banning all Catholics and discussion thereof.
That's circular logic and none of it is evidence.
"A bad thing is happening and the evidence of it happening is that I said it's happening."
By the way, I love DistroWatch and do think FB is messing with their posts. But there's no evidence to show if it's a new policy, a glitch in the moderation or an internal screw up.
That's not circular. They are citing sources. The evidence is the direct experience of the sources.
If you don't believe them, that's a different objection.
And glitch policies are policies if they're getting enforced.
Probably this: "I've tried to appeal the ban and was told the next day that Linux-related material is staying on the cybersecurity filter." (from the OP). Of course, it would have helped if the post author had quoted FB's response so we could judge it for ourselves.
The evidence is that Facebook is blocking this content.
[flagged]
I can't speak for anyone else, it just seems that statement is a very specific accusation with nothing backing it up. I'm curious, that's all. It is very much possible that there's some evidence of policy makers discussing this, or even a public statement; nothing to do with "proving a negative".
Surely this is entirely the opposite of proving a negative? It's a direct, testable, provable claim.
How is this asking to prove a negative?
Ok, what's the true story?
It is obviously allowed to discuss Linux. There is plenty of discussion about Linux on Facebook, including some about the recent "ban".
My guess is that some automated scanner found something wrong about the linked page. Maybe there is some link to a "hacking"-oriented distro, maybe some torrents, some dubious comment, etc... Probably a false positive, it happens.
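As a hedged illustration of that guess (all domains here are made up, and this is in no way Facebook's actual logic), a simple link scanner that flags a whole page if any single outbound link matches a blocklist will mark an otherwise legitimate page as malicious because of one stray link in a comment or torrent list:

```python
# Hypothetical sketch of an automated link scanner: a page is flagged
# if ANY outbound link matches a blocklist, so one dubious link is
# enough to mark the entire page as "malicious" -- a false positive.
from urllib.parse import urlsplit

BLOCKLIST = {"evil-torrents.example", "haxx-tools.example"}  # made-up domains

def domain_of(url: str) -> str:
    """Extract the host part of a URL."""
    return (urlsplit(url).hostname or "").lower()

def is_flagged(outbound_links) -> bool:
    # Flagged if a single link is "bad", regardless of page context.
    return any(domain_of(u) in BLOCKLIST for u in outbound_links)

page_links = [
    "https://www.debian.org/releases/stable/",
    "https://evil-torrents.example/latest",  # e.g. left in a user comment
]
print(is_flagged(page_links))  # True: the whole page is now "malicious"
```

Under a rule like this, a distro-listing site only needs one bad outbound link anywhere on the page for everything linking to it to get caught.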
Probably some jobsworth decided that free software = piracy.
I knew a company that leapt to the same conclusion regarding GitHub.
Meta is one of the biggest contributors to free software in the world. They certainly don’t believe that it’s equivalent to piracy. If your guess is indeed what happened, it will be corrected by higher-ups soon.
It is perfectly possible that someone at a lower level, especially a non-technical person, would believe that. Moderators are not going to be highly paid and skilled people.
It has to get to the attention of higher ups.
The one time I reported a comment to FB, it was horrible racism (it said "do not interbreed with [group x] because they are [evil; not sure of the exact wording]"), and I got a reply saying that it did not violate community standards.
I thought Facebook fired all the human moderators?
But at this point, in 2025, it's perfectly reasonable for GAFAMs (and other Russian/Chinese/USian infocoms) to be blocked (ideally at the state level).
And particularly in the context of work primarily about communication or computing: having an official Xitter account as a journalist, or a GitHub account as a software developer, is like a doctor promoting a brand of cigarettes or opiates; it is a violation of professional deontology.
The pic accompanying mentions openKylin. Kylin is China's Unix, formerly based on FreeBSD, now Linux/Ubuntu.
I presume that it is used for launching hacks, but even so discussion should not be banned.
Just makes me wonder if DistroWatch is telling the whole story.
“ Just makes me wonder if DistroWatch is telling the whole story.”
Nobody outside of Facebook can possibly know the whole story. Hell, most people within Facebook can’t know, either.
Are you suspecting that distrowatch knows more about the context than they are letting on?
They know more than us, by definition. They could do more analysis and not be so dramatic. I'm not alleging anything nefarious.
But how else are you going to get the clicks?
Kali is one example. That said, Kali is not a bad thing.
We are obligated to have an external auditor run PCI DSS penetration testing and network segmentation testing every year.
Their second request (after a network diagram) is always to create an EC2 instance running Kali.
Which, honestly, confuses me a bit -- all of the packages are available in AL or Ubuntu, so why do they care? I don't know, and I guess I don't care enough to ask. Just give me the attestation document please. :)
My assumption is it's for reducing the number of things they need to configure, and therefore troubleshoot.
It's easy to say "The newest Kali release is the distro the org will use" instead of "Use whatever Linux flavor you want and here's an install script that may or may not work or break depending on your distro and/or distro's version".
Them spending time troubleshooting a setup that's out-of-spec is still time billed, so it's better for their customers for everything to roll smoothly too. They also just want to execute their job well, not spend time debugging script / build issues.
From my experience, not all of the packages in the Kali repo will be in the Ubuntu (or other regular distro) repos. Lots of specific pentesting tools can be installed with just `apt install ...` in Kali, which makes it a lot more convenient when you need to do pentesting.
Out of the box experience and some extra scripts :-)
Think about all the time saved not having to do sudo or su.
Kali has actually used a non-root user as default for a while now.
Anyways, if you don't run `sudo -s` as your first command in a shell - are you really hacking?
They don't know how to use Linux; they just know Kali.
More like compiling a bunch of github projects written by hackers is a pain in the ass, so “make me an ec2 with Kali” is more cost effective
Fair point
pic mentions openKylin, I suppose Kylin is a bit like Kali?
Likewise, discussion should be allowed.
The actual title of this story is literally not believable if you take the most generic meaning of discussion and Linux.
I'd go even further: I don't believe that anyone could believe that the title is believable.
It is believable if you've experienced anything to do with moderation on Facebook. It's a dystopian experience that defies any ordinary expectation of normalcy.
Somewhat ironic given that actual linux packages are mirrored there.
http://www.fedora.mirror.facebook.net/
Reminds me of when they do 'firewall updates' at work, and many of the common open-source repositories/hosting etc are blocked.
I understand that some malicious software may use things like curl, but it's also annoying to have to re-create the same ticket and submit it to internal IT; and then, if whoever is working on the ticket hasn't done this before, they close it, and we have to have a meeting about why we need access to that site...
The inverse isn't tolerated. If you're a software developer, you get tested for IT knowledge with phishing emails. Yet in IT it's perfectly normal to have an ignorance of the core needs of the developers - and computing itself - that results in reduced productivity or shadow IT systems.
It's not an exaggeration to say I've experienced it at every employer I've had.
I was on a penetration testing team at a large corp that doesn't specialize in cybersecurity and I downloaded Metasploit and about 15 minutes later an IT person came up to my desk to talk about the malware I just downloaded. I had to walk him to my manager to get him to understand what it was and why it was okay for me to download it.
Remember the old saying, "it's easier to ask for forgiveness than permission".
Was reading a news article the other day that described wget as a "hacking tool" and about rolled my eyes into the back of my head.
Last I checked (2008) Facebook Linux was indeed a Fedora derivative.
Their OS is based on CentOS Stream, I think they're one of the very few major organizations that stuck with CentOS post-Stream and did not switch to something else entirely.
Untrue; it's purist startup people and some ISVs who believe that Alma or Rocky are somehow "better".
Meta runs 10M+ CentOS Stream 9 boxes, migrating to 10 eventually.
Cent has shorter security update availability latency and they're shipped more consistently. The benefit with Rocky and Alma is double the lifecycle time and arguably better governance, unfortunately though they're both tiny operations that suffer from a narrow bus factor, are always playing catch-up, drifting away from RHEL compatibility, and are the definition of fragmentation.
If you need RHEL-ish for servers, use CentOS Stream. It's not great for desktop. Use Fedora or something more LTS for that.
> Untrue, it's purist startup people and some ISVs who believe that Alma or Rocky are the somehow "better".
It's anyone who appreciates the value of stability in server software. In my personal opinion, that value is quite high and far too quickly cast aside by others in the industry.
I am one of those people who agree with you. On my main family computer we run Alma Linux with flatpaks for the main accounts.
I use guix to get up to date tools for development stuff.
(On my laptop I run aeon desktop and guix. I really do think that model is the future. Right now I am hoping to be able to run aeon desktop but with the opensuse slowroll packages which would give me all the benefits of aeon but without the constant updates).
I think it's unsurprising that the company that coined "Move fast and break things" was fine using Stream.
> drifting away from RHEL compatibility
Any source for that claim? I am testing software on Rocky and never got complaints from users that run it on RHEL.
I think in 2014 era it was centos
Fedora and CentOS are both used.
Woah, Facebook is hosting malware.
Seriously though, I'm curious (have no account): are you able to post that link on Facebook?
Didn't Zuck recently announce that he's getting rid of fact checkers, on the pretext that the parties hired to do fact checking are biased and introduce censorship and unfair false positives that get accounts shut down?
Was it just a cost reduction: fact checking takes effort and those checkers have to be paid? With the result being situations like this?
Yes, which makes this claim more extraordinary. (And to be fair, I don't think there's extraordinary evidence presented here.)
It doesn't; it makes the ban more likely if anything. See how on certain topics the censorship immediately increased as Musk took over Twitter.
Kowtowing to the king
Their phrasing was "mainstream discourse" wouldn't be censored.
I guess Linux needs to go mainstream first.
I'm pretty sure we're entering the year of the Linux desktop :)
There is no such thing as unbiased information. So FWIW, I think fact checking is really just a fight for censorship. Official lies and half truths instead of lies from everywhere intermixed with truths.
There are so many ways to do it wrong, even if you tag info as true or fake and in principle do it with good intentions. For example, certain information was tagged as fake, and when a correction was requested the administrators "could not do anything" (cases in Spain researched by Joan Planas, who made requests himself to the biggest official agency in Spain, called Newtral, which is intimately tied to the Socialist Party in Spain... really, the name makes me laugh; let us call war peace, etc., like in 1984). But they were way faster at acting in the other direction, and often found excuses to clearly favor certain interests.
Now put this in the context of an election... uh... complicated topic, but we all minimally awake people know what this is about...
Your point doesn't hold together because it seems to be conflating fact checking with bias elimination.
They are obviously different and mostly separate.
A presentation of facts can be biased.
E.g. a news agency can have a characteristic political slant, yet not make up facts to suit that narrative.
When a bias is severe, such that it leads to behaviors like concealing important facts in order to manipulate the correct understanding of a situation, then fact checking can find a problem with it.
We have repeatedly found fake news in the fact checking, as well as official "truths", in the case of Spain, and I am pretty sure the pattern is replicated in other places. The funds that bought the newspapers etc. in Spain are all the same around Europe.
They might not be the same, but they are interrelated, since this is a fight to monopolize the truth, and bias and lies are what you end up seeing. Many times they say sorry and get away with it, but they are not really sorry: they are working for some interests.
Look at what happened with Biden's son in Ukraine: the story totally disappeared before an election, for example. Why? Why did it not get through and go viral? I do not trust these agencies at all. They are everything but truth-seeking. Yes, for some irrelevant info they might be OK, but we all know who they work for.
Remember the leaks that Musk published when he bought Twitter, including the mail exchanges about what to censor. Only a fool would believe those agencies at this point.
Not to say fake news do not exist though.
What's the bias on "1+1=2"?
Bias towards base-10 numbering? How did you know 1+1 wasn't meant to be calculated in binary?
1+1=2 has a correct interpretation in any base 3 and higher.
How we know that it wasn't to be calculated in binary is that the digit 2 occurs.
We have to have a reason to suspect that it was intended to be binary, otherwise we are inventing an inconsistency that isn't there in order to find a false or not-well-formed interpretation.
the decision to include the information or not include it in the discussion in the first place, regardless of whether it's objective information
I was going to say something, but the other two replies illustrate things well enough, especially the one about which information to hide or show. Other examples: where a headline goes, how fast information is corrected, what the correction protocol is, and whether that protocol has a neutral appearance while favoring some more than others.
In fact I believe neutrality does not exist as such. No problem with it, objective information and multiple sources with their biases are ok to get an idea as long as facts are shown. But an official truth? Come on, what is that? It is dangerously similar to a dictatorship to have the monopoly of truth.
> Was it just a cost reduction
No, it was clearly an attempt to court Trump, unfortunately 'not enough ass kissing, yet' according to the trump team.
Both. Clearly both.
I recall a headline from (checks notes) 2014. Linux users are extremists according to the NSA (http://www.linuxjournal.com/content/nsa-linux-journal-extrem...).
I imagine something about that caused certain lists to be populated in certain ways, and no linux user cares enough about Facebook to help them correct the problem.
I cared tiny bit. I even went out and bought a phone so that I could "prove I was a real person" or whatever to try to make a FB account. Account creation failed, my IP was banned, and I just blocked every FB domain and haven't looked back.
Ah shit I guess they’ve been storing my browsing history then…
Interesting. The same Facebook refuses to take action on reports submitted about fake accounts created to spam people, and about harassment videos.
Yeah, I was really surprised by this. Last year, I reported a number of people who were trying to scam me (via Messenger messages related to Marketplace listings). Not only did Facebook not see anything wrong with the accounts and scammy messages, I was flagged for sending useless reports.
I suspect that the report button doesn't actually cause any human to be involved in the process.
it (used to) in aggregate trigger human review (i.e. if many people report the same post). However, the humans who reviewed it were underpaid, overworked, and unlikely to have any context, so the output was not necessarily better than the automated system...
Their filters are comically bad. I belong to a Selectric Typewriter enthusiasts group, and we keep having to re-word things so they don't go into a black hole. Typewriter parts like "operational shaft" or "type ball", or even brand names of gun cleaners and lubricants that are popular with typewriter folks, will cause a post not to appear.
Oh just stop it you, I feel my loins tingling.
I think they're wrong about the policy. It's more likely that the policy is "let's run the moderation bots unattended to save costs" and is actually site agnostic.
It's just some "AI" hallucinating.
Seems antithetical to Zuckerberg's recent "More Speech and Fewer Mistakes" announcement.
To be fair whenever something radically changes the risk of a regression is higher than normal.
Well, my confidence in the owner of this company is about as low as it gets, so I am not surprised that, if he is paid (I have no idea whether that is the case in this very situation), he will do what the money dictates without any consideration whatsoever. Did anyone see the ridiculous change he made after years of selling (at least in Europe) fact checking, following censorship and teaming up, plus the earlier scandal of selling data to influence an election? I do not expect anything nice from this leadership. That is why I stopped using Facebook years ago, as much as I could.
So this is what Zuck meant when he said Meta was "getting back to its roots"? (And I thought he was talking about reviving Facemash)
I'm not convinced this is intentional. I think their auto-moderation stuff is just buggy lately. To illustrate part of why I say that:
Yesterday I tried to submit a link to a Youtube video of the Testament song "Native Blood". Nothing terribly controversial about that, and I'm nearly 100% sure I've posted that song before with no problems. But it kept getting denied with some "link not allowed because blah, blah" error.
So is "Native Blood" banned on FB? Well, I tried a link to a different video of the same song, and was able to submit it just fine. This feels like a bug to me, and I wouldn't be surprised if similar bugs were interfering with other people trying to post stuff.
Granted that's just speculation so take this for what it's worth.
Maybe it is about time that we stop relying on closed gardens, censored and managed on a whim, and start reclaiming our internet and freedom back, publishing in open platforms?
Open platforms are still subject to all of this, the only thing they give is that you don't need to create an account to see the contents of a link.
Avoid platforms altogether.
Like the others have mentioned, I don't think this is anything more sinister than AI moderation gone wild.
I'd argue that automated ""AI""-driven moderation is actually more sinister than a human being deciding it. Censorship and control over communication by automated processes should be held to a very high standard (and probably regulated, I'd think). From IBM in 1979: "A computer can never be held accountable, therefore a computer must never make a management decision." ( https://web.archive.org/web/20221216204215/https://twitter.c... )
Yeah, these days it's basically the opposite: since a computer making a decision means we (in the C-suite) can't be held accountable, ALL decisions should be made by computer..
haha, I was thinking something along those lines as I was typing the prior msg! "oh, machine algo, not our fault. we'll try to fix it in the future" >_>
I wonder how thinly veiled those decisions can be and still fly under a court's radar [0].
[0] https://www.smbc-comics.com/comic/algo?ht-comment-id=1201119...
Surely that's the result of a rogue moderator's overreach.
I attempted to post the distrowatch link to my feed and it was blocked as 'spam'.
That seems pretty automated to me.
Their "moderators" are bots, not humans, so it seems that the bots have "decided" that Linux-related links are malware or something.
Or some overly optimistic attempt at AI moderation.
I agree, overzealousness sounds like the most likely reason for this.
> Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats".
The author gives no evidence to back up on this claim.
> The author gives no evidence to back up on this claim.
How can one provide evidence that something is not being displayed on a website? Isn't this, like, a formal fallacy, or something?
> We've been hearing all week from readers who say they can no longer post about Linux on Facebook or share links to DistroWatch. Some people have reported their accounts have been locked or limited for posting about Linux.
You've implied it's impossible to give such evidence and then you've immediately proved yourself wrong by giving it.
But anyway, they're not asking for evidence that something isn't being displayed. They're asking for evidence that 'Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats"'.
> I agree, overzealousness sounds like the most likely reason for this.
Who was overzealous if not one or more internal policy makers?
Machine learning algorithms? Someone hacked Facebook to block Linux? So, there are other options besides overzealous policy makers.
That sounds like a distinction without a difference. It doesn’t seem to meaningfully refute the point; it’s just hung up on the semantics of “policy-maker”. Who cares that the policy-maker is an algorithm?
This is the trouble with automation. It's clear this isn't a malicious post, it just matched some keywords their moderation bot identified as such.
I think a lot of the censorship problems would be resolved if they just shut the bots off and relied on user flagging. Does that require a lot more people? Sure. But the long-run result would be far more people would use and trust these networks (covering the revenue of hiring moderators). I know I'd be a lot happier if there was a thinking human deciding my fate than a random script that only a few people know the inner-workings of.
As-is, it seems like a lot of these social networks are just shooting themselves in the foot just to avoid costs and get a false sense of control over the problem.
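A minimal sketch of how the keyword-matching moderation described above misfires (a made-up filter with made-up keywords, not Facebook's actual code): naive substring matching catches benign posts whose text merely contains a banned string, the classic "Scunthorpe problem".

```python
# Hypothetical moderation bot: naive substring keyword matching.
# Benign posts get blocked whenever their text happens to contain
# a banned substring.

BANNED_SUBSTRINGS = ["malware", "hack"]  # made-up moderation keywords

def is_blocked(post: str) -> bool:
    """Block the post if any banned substring occurs anywhere in it."""
    text = post.lower()
    return any(word in text for word in BANNED_SUBSTRINGS)

print(is_blocked("How do I remove malware?"))          # True: intended hit
print(is_blocked("Great life hack for typewriters!"))  # True: false positive
print(is_blocked("Installing Debian stable"))          # False
```

With no human in the loop, the second case is indistinguishable from the first, which is exactly the behavior the typewriter-group anecdote elsewhere in this thread describes.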
Um, no. I don't want to see pics of NSFL gore before the userbase has had a chance to remove them. Which is what most moderators spend time removing from FB, to the point where it psychologically traumatizes them.
You don't have to. That's actually a place where automation could help. You could just use image detection and auto-tag stuff as to what you think it contains. Then, have a list of sensitive tags that are automatically blurred out in the feed (and let users customize the list as they see fit).
If it's something trending towards illegal, toss it into an "emergency" queue for moderators to hand-verify and don't make it visible until it's been checked.
So in your example, if someone uploads war imagery, it would be tagged as "war," "violence," "gore" and be auto-blurred for users. That doesn't mean the post or account needs to be outright nuked, just treated differently from SFW stuff.
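The tag-then-route idea above can be sketched as follows (a hypothetical design, with invented tag names; in practice the tags would come from an image classifier rather than being supplied directly):

```python
# Sketch of the proposed tag-based moderation routing: sensitive
# content is blurred rather than removed; likely-illegal content is
# held invisible until a human reviews it. All names are hypothetical.

SENSITIVE_TAGS = {"gore", "violence", "war"}  # auto-blurred in the feed
ILLEGAL_TAGS = {"csam", "snuff"}              # held for human review

def route_post(tags):
    """Decide how a post is displayed based on its detected tags."""
    tags = set(tags)
    if tags & ILLEGAL_TAGS:
        return "hold_for_review"  # invisible until a human checks it
    if tags & SENSITIVE_TAGS:
        return "show_blurred"     # visible but blurred; user can opt in
    return "show"

print(route_post(["war", "violence"]))  # show_blurred
print(route_post(["cat", "meme"]))      # show
```

The design choice here is that a false positive degrades gracefully: a benign post gets blurred instead of nuked, and only the rare review-queue case requires human attention.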
You assume automation solves the problem. If it did, Facebook wouldn't be hiring close to 100,000 humans to inspect content.
Automation + human intervention, yes. In the setup I described, worst case scenario something gets blurred out that's benign, but it doesn't create a press/support nightmare for Meta.
Considering they've open sourced one of their image detection API [1], I'd imagine it's more a problem of accuracy and implementation at scale than a serious technical hurdle.
[1] https://github.com/facebookresearch/detectron2
Those are subjective classifications, and so will differ between each person. And models are pre-trained to recognize these classifications.
Since you mentioned war, I'm reminded of Black Mirror episode "Men Against Fire", where an army of soldiers have eye implants that cause them to visually see enemy soldiers as unsightly. (My point being this is effectively what Facebook can do.)
Is there really no legal way to go after gore posters (in spaces banning gore)?
There should be - after all, this is akin to graffiti, which is typically fined.
What is not acceptable is a platform creating a paralegal environment.
No, Facebook is Not Censoring "Linux", Only "DistroWatch".
https://www.youtube.com/watch?v=xOdMTS6XVu4
I'm not watching a 20 minute video on the topic, but there is a user in an HN comment[1] stating links to debian.org and qubes-os.org were removed by facebook.
[1] https://news.ycombinator.com/item?id=42840143
Thus Facebook is not censoring Linux discussions or Linux content, contrary to what DistroWatch claimed; it blocks linking to what Facebook deems malicious links (correctly or incorrectly), something a lot of software does these days.
This is what the yanks call "a complete nothing-burger".
It's a shame that this is one of the only accurate top-level comments and it's downvoted to hell.
I think the complaint is that it's not really a "comment", so much as it's a link to Bryan's own 20 minute video talking about it. It comes off as an annoying bit of self-promotion.
Though I will admit that Bryan is just a deeply unlikable human who is generally under-informed-at-best on any given subject that he's talking about, so people might be looking at it more cynically than if someone else posted it.
Fair enough.
[flagged]
[flagged]
No personal attacks, please.
https://news.ycombinator.com/newsguidelines.html
I didn’t say I felt bad.
The cost of pissing off devs is so high, why can't companies just back off: stop attacking ad-blocking browsers like Firefox, or dev operating systems. Why would you want to enter that world of pain, taking on a ton of adversaries, while balancing on a stack of swiss cheese and duct tape? What is going wrong in those decision makers' heads?
Didn't Mark Zuckerberg say he would reduce censorship on FB just a few weeks ago?
Facebook is just a website. Move on!
Facebook is a cyber security threat.
I thought Zuckerberg was removing fact checkers and platform censorship. I'm thoroughly confused. But maybe since Zuckerberg's death the company changed directions again.
What am I missing here? Why would anyone go to Facebook to discuss Linux?
Can't sell Linux users AI.
Inability to market directly is the antithesis of Facebook and its ilk.
Linux gives users control. That is the very last thing anyone in power wants anyone else to have.
Wonder if someone used "qubes" as a way to work around a ban on "pubes" and the filter thought it was porn?
There's no excuse for Facebook's behavior, but... Who savvy enough to use Linux also uses Facebook?
The ones who have family and friends that use it.
At which point you should start protecting your loved ones by having a conversation about this not being acceptable behavior?
Platforms seem to get a lot more leeway than drug abuse (alcohol, smoking...) for some reason?
Hahaha, I was going to make a similar retort but couldn't be bothered.
Fortunately, my parents rejected Facebook from the start; and they're online plenty.
We're all going to have to start having the same conversations about LinkedIn, AKA Facebook Pro.
I'm genuinely surprised that people were using facebook of all things to discuss Linux distros.
The idea of having to wade through AI generated pictures of Shrimp Jesus and my mad uncle posting about his latest attempts to turn lead into gold (yes, really) to find out about new distros to try seems very alien to me.
I'm sure lead can technically be turned into gold, or anything else for that matter, with enough energy.
Several groups actually have both intentionally and unintentionally.
In most cases they're pretty radioactive isotopes of gold. But IMO that just makes it feel even more like alchemy. The gold is cursed.
Your electricity bill might be greater than the value of the produced gold though.
You can indeed turn lead into gold; it will indeed be more expensive than it is worth; also the gold will be radioactive.
Gold is sold by weight, so those extra neutrons are pure profit.
Unfortunately all the heavy isotopes of gold are unstable and will decay in a few months.
Shhh, don't tell that to my buyer!
That sounds like the buyer's problem though.
I think they turned some platinum atoms into gold. In a particle accelerator.
I want to know more about your uncle. I'm not on Facebook, but that would make me consider joining...
It's entertaining in the abstract but fairly depressing when he's telling you in person that he's spending his children's inheritance on turning lead slightly yellow. Still, on the bright side, he seems to have stopped talking about the "globalists" so much.
Globalists are history now, Trump will take care of them :)
(sob) Shrimp Jesus is real! (sniffle)
Also, turning lead into gold is easy: Just break all the protons off to get Hydrogen and maybe Helium, then compress it back so you get a star to form, and wait for it to go nova. Or, if you're in a hurry, you can compress your Hydrogen more and if you kind of jiggle it just the right way then you should get some gold along with other heavy elements.
Your uncle sounds like a lot more fun than the latest javascript build system.
Imagine being confident enough to believe and document that. Crazy? Maybe, but a kind of crazy one can appreciate.
I know you're only half serious, but ...
The problem isn't when one uncle is doing this. The problem is when the bulk of the content you see on FB is as crazy as this.
I mean, if you like purchasing the National Enquirer and flipping through it, then by all means, this is for you.
Yeah, I can understand. I'm fortunate not to have many uncles and aunts who were old enough to use Facebook, and my parents were fairly tech-antagonistic. I did get to see a little of what you are referring to when some of my coworkers added me on FB and started sharing political content.
I still prefer that to all of the fake AI-slop message boards and meme/video culture that seems to have replaced it on FB.
Tech obviously isn't a strong suit, but elsewhere Facebook does have corners with good/entertaining/useful small communities. They have good SNR and are more personal than Reddit.
The secret is to train your feed by bookmarking the groups and linking to them directly instead of accepting whatever flailing nonsense the algo decides to default to.
Having said that - I hope everyone has worked out by now that when you have a "free speech" culture based on covert curation and moderation of contentious issues, it's not just going to be about porn and trans people.
Non-mainstream (i.e. non-consumer) tech is going to be labelled bad-think and suppressed too.
Interesting, lets just see what Facebook runs on...
I am getting no response on that link.
Distrowatch seems to be under heavy load (probably because of this news story)
It makes sense
I assume Facebook doesn't want anything posted on FB that can't be turned into a racist diatribe. There's not a whole lot of racism potential in Kernel tuning.
Maybe you could squeeze in an anti-Finnish rant about Linus, but it would be minimal.
For your information, X and Meta are not social networks but identity-tracing networks.
Hand their database to a search-and-destroy bot and you will see how many survive.
Good luck!
> Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware
I'm glad someone finally said it.
It's been known for over 20 years: https://homestarrunner.com/sbemails/118-virus
I get blocked anytime I share my github project.
After the ethnic cleansing of the Linux maintainers list, I would say I'm not impressed.
Only morons use garbage like Twitter or Facebook anyway.
Welcome to 2020 Facebook, except their coverage of valid topics to ban and censor has expanded more broadly now. This might've been avoided had more of its users sent a message 4-5 years ago that social media censorship isn't acceptable in a society that prides itself on free speech.
Lol. So glad they let literal lies and propaganda go with community notes, but block something useful. Garbage as always.
discuss GNU instead... or maybe microcontrollers
Perhaps they've become closer buddies with MICROS~1. I wouldn't be surprised if they did this in exchange for "AI" compute, i.e. that losing the Linux audience is worth less than being seen favourably by elder oligarchs.
MS doesn't care about Linux any more like they did in the 90's and 00's.
Sure they do. They really, really don't want government agencies and non-techies to realise that there is a better option for most everyday computer tasks.
Another great reason to not use Facebook or any other social media.
No one cares about privacy and freedom more than Linux users. What is even the point of using crapbook? Everyone in the Linux community is either hanging out on IRC or Matrix, or on self-hosted forums.
to talk to people who are not yet using linux?
What? Google is a linux user - I doubt they care about privacy or freedom. Same with facebook - that company uses linux a lot while actively opposing privacy.
Lots of people use linux because it's a good OS, irrespective of privacy concerns (see the occasional flareup about some software or another automatically shipping off bug reports - some people don't care, others are incredibly concerned).
My wife was temporarily banned for a photo of a marble statue. My mother receives invitations to groups that share photos of migrants drowned in the Mediterranean. Don't use Facebook, and certainly don't depend on it.
Edit: Recently, a lot of associations working on HIV prevention, sexually transmitted diseases, and family planning have been progressively de-listed, or had their content blocked and their accounts banned, all over the world on all Meta platforms. This is the true face of freedom of expression according to Meta and its "community rules".
Meta censorship of abortion pill content (French): https://www.radiofrance.fr/franceinter/podcasts/veille-sanit...
Facebook seems to be going really heavy handed with moderation. They also blocked some wikimedia sites like https://meta.wikimedia.org and https://species.wikimedia.org
Define Blocked? I just posted a https://meta.wikimedia.org/ link without issue.
I tried to make a post with the https://species.wikimedia.org/ link, and I get "Your content couldn't be shared, because this link goes against our Community Standards".
Being generous, it could be there's NSFW imagery in there? I can't be arsed to dig into a mountain of scientifically named links, but you can find troves of pr0n among other things in Wikimedia if you know where to look.
(I'm not sure why my comment is now collapsed by default. It doesn't seem to be flagged, and has a score of 15.)
I tried again, and this time I get "Posts that look like spam are blocked", and a similar message if I try to leave a link in a comment.
I wonder if spammers have been vandalizing Wikispecies and posting the links, but unlike Wikipedia the editors of Wikispecies struggle to remove the spam in time? The project has hundreds of thousands of pages, but the vast majority would have very little content or oversight. It could be the Wiki project with the worst pages-per-editor ratio.
There isn't pornography, or at least only indirectly — Wikispecies doesn't host any images itself (says https://en.wikipedia.org/wiki/Wikispecies ).
I guess if they blocked *.wikimedia.org to get at commons.wikimedia.org that could make sense. However all those images are also accessible via an en.wikipedia.org url.
I mean you can find porn on Google and you don't have to look that hard
Thank you for actually spelling porn. This whole thing of altering spelling to avoid blocking, which I presume comes out of other apps, has gotten to be quite annoying.
"Pr0n" is classic internet slang that predates onerous censorship let alone "apps".
Right, the form to deal with onerous censorship is “corn”.
I can’t believe we have grown ass adults posting “seggs” on social media and bleeping it out in audio.
People shape themselves so much around algorithms.
It's a little depressing to think about how much better modern LLMs are at preventing "harmful" content than past systems.
What the fork?
Was totally unaware thanks!
It absolutely came from censorship. IRC chat rooms and PHPBB message boards with blacklists of words that would get starred out. Hoping it wasn't implemented with substring match so typing "shell" didn't come out "s****".
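The substring pitfall described above is easy to demonstrate. A toy filter with a made-up one-word blacklist, contrasting naive substring replacement against word-boundary matching:

```python
import re

BLACKLIST = ["hell"]  # hypothetical banned word, for illustration

def star_out_substring(text: str) -> str:
    # Naive substring match: also censors inside innocent words.
    for word in BLACKLIST:
        text = text.replace(word, "*" * len(word))
    return text

def star_out_words(text: str) -> str:
    # Word-boundary match avoids the "shell" problem.
    for word in BLACKLIST:
        text = re.sub(rf"\b{re.escape(word)}\b", "*" * len(word), text)
    return text

print(star_out_substring("run it in a shell"))  # run it in a s****
print(star_out_words("run it in a shell"))      # run it in a shell
```

The boundary version still stars out the blacklisted word on its own, while leaving "shell" (and "Scunthorpe", famously) alone.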
Based on this comment https://www.facebook.com/story.php?story_fbid=90140663553077... (can i even directly link to fb. Idk)
Some people were complaining about meta.wikimedia.org, but species seems the main one. And only the main site; the mobile site is fine.
Thanks, added to https://phabricator.wikimedia.org/T341665#10499534
[flagged]
“Our greatest ethical imperative is to create our own life's meaning, while protecting the freedom of others to do the same.”
— Simone de Beauvoir
Meta barely changed their moderation policy. The community-standards docs, which list every violation, are still extremely long and cover a large swath of speech (https://transparency.meta.com/policies/community-standards/), to which they only added two bullet-point exceptions (and, eventually, community notes).
The exceptions are not minor. Some groups can be called mentally ill, but other similarly situated groups cannot.
It's a capitulation to the idea that speech standards should be determined by public opinion and not by reason, evidence and a scientific mindset.
> speech standards should be determined by public opinion and not by reason, evidence and a scientific mindset.
Yes this is largely a debate between a top-down technocratic worldview vs democratic/meritocratic one. The point is FB is still very much on the former highly centralized expert-defined guideline/automated system side while only making small moves in the other direction with community notes. Maybe they'll keep going in that direction but what they say vs do is an important distinction.
And it still works less well than community moderation.
“Works less well” is a subjective claim.
How are you evaluating this? Are you including the truth of the Facebook post, whether moderators correctly/accurately act upon the flagging, whether users choose to stick on the platform after seeing the content, whether users stop believing in any objective truth, or something else?
Community notes only does fact-checking, but moderation has the ability to reduce the activity of bad actors. They serve 2 different purposes from where I stand.
Let me make a correction please:
> a top-down technocratic worldview vs democratic/meritocratic one.
You mean to say: "a top-down technocratic worldview vs majoritarian one."
Majoritarian != Democracy
To be clear, the people who believe speech standards should be determined by public opinion are as incorrect as, say, flat earthers.
I don't have a huge problem with community notes per se. I do have a huge problem with blatantly unequal standards just because large parts of the public have morally rotten views.
Speech standards have never been set by "reason, evidence and a scientific mindset". The people who are complaining now that the shoe is on the other foot were quite happy when it was their side setting the rules.
Objective standards would be best, but subjective standards that you pretend are objective are far worse than subjective standards that are honest about it.
I think what Meta did is objectively bad.
Speech standards should be determined by public opinion; science has never had a seat at the table in the West. If anything, Communism was the pro-science approach: centrally planned societies typically love science and technocrats - they put a lot of effort into working out a true and optimal way, and it didn't work very well. The body count can be staggering.
The moment we start talking about speech standards being set by "science" you get a lot of people who are pretending that their thing is scientific. Ditto reason and evidence.
The win for free speech is setting up a situation where people who are actually motivated by science, reason and evidence can still say their piece without threatening the powerful actors in the community. And limiting the blast radius of the damage when they get things wrong despite being technically correct. But principles of free speech go far beyond what is true, correct and reasonable.
> science has never had a seat at the table in the West.
Other than science being the entire reason the US was able to corner the fascists in WW2. Let alone all the scientific breakthroughs in the last few decades coming from the West. Heck, before WWII, the automobile?
Perhaps you meant it wasn't primarily embraced.
1. There was an entire sentence, taking the second part without the first ("Speech standards should be determined by public opinion") removes essential context.
2. The fascists were Westerners (and leaders in science/technology, for that matter, the US didn't beat them with more technology).
I still disagree, science has had a seat at the table in the West especially around speech. Speech was either locked down using control of technologies or speech was empowered using proliferation of technologies.
> science being the entire reason the US were able to corner the fascists in WW2
Not to my knowledge
Economic heft had a lot to do with it as did the weight of numbers
I love science, BTW. But it is not the source of all knowledge.
We took their scientists, broke their codes, and built a bomb that took out two cities. That requires science in my book.
For the Japanese, the war was shortened. But by the time of the bomb they were doomed; they could not replace their losses like the Americans could.
The Germans were beaten mostly by the Soviets. They (the Germans) were overwhelmed, and they too could not replace their losses like the Soviets could, especially in manpower.
And none of their forces would have been so drastically depleted without science.
> Speech standards should be determined by public opinion
To confirm, you are making a normative "ought" statement here, not just a descriptive "is" statement?
> science has never had a seat at the table in the West.
This is a strange idea to me. As a simple example, vaccinations are mandatory for a reason. The unfreedom there is clearly justified.
> If anything Communism was the pro-science approach, typically centrally planned societies love science and technocrats - they put a lot of effort into working out a true and optimal way and it didn't work very well. The body count can be staggering.
What James Scott called high modernism is indeed bad. The problem was not the fact that science was used, but the fact that the models used weren't complex enough to describe local conditions, and that politically motivated models (e.g. Lysenkoism) gained prominence. Science was also used in other parts of the world to much better effect, such as vaccines and HIV medications.
> The moment we start talking about speech standards being set by "science" you get a lot of people who are pretending that their thing is scientific. Ditto reason and evidence.
True, and yet some of those people are more correct than others. This is challenging, but it is not a challenge we can run away from.
> The win for free speech is setting up a situation where people who are actually motivated by science, reason and evidence can still say their piece without threatening the powerful actors in the community. And limiting the blast radius of the damage when they get things wrong despite being technically correct. But principles of free speech go far beyond what is true, correct and reasonable.
I think people not applying reason is far, far worse of a problem today than people applying it.
@dang: I do not know why this is flagged, but I think it's a significant development and it shouldn't be.
Even LWN is covering it.
https://lwn.net/Articles/1006328/
I've turned the flags off now. It's not a very good thread, though—mostly jokes and generic reactions, which is what happens when an article contains little information, but the information it does contain is provocative. (Edit: the comments got somewhat better through the afternoon.)
These little scandals nearly always turn out to be glitchy ephemera in the black box of $BigCo, rather than a policy or plan. I imagine that's the case here too. Why would Facebook ban discussion of the operating system it runs on, after 20+ years?
(Btw: @dang doesn't work - if you want reliable message delivery you need to email hn@ycombinator.com)
I flagged it when it first showed up because “Facebook ban on discussing Linux” is obviously bullshit, it took me half a minute to confirm The Linux Foundation was posting about Linux as recently as an hour ago.
I can believe DistroWatch the website got blocked by Facebook for whatever reason and I can sympathize, but exaggerating it to something obviously false doesn’t do them any favors. I think the title needs to be changed if it’s allowed to stay up.
It's not just distrowatch that is blocked, so it's not "obviously bullshit": https://news.ycombinator.com/item?id=42840143
That’s still not remotely a “ban on discussing Linux”. “Some Linux-related links or discussions are blocked on Facebook” may be a reasonable title.
Also, according to a recent comment https://news.ycombinator.com/item?id=42847474 it’s not at all clear if these blocks are related.
Thank you very much!
TBH I didn't think the @ thing would work - I was just hoping you'd notice. I have been meaning to email you, though.
Ultimately, @dang seems to have worked just fine, so well that it worked even without working.
[flagged]
I get quite a few of those
Anybody here have connections at Meta? Seems like this should be fixed.
I have written to their PR department. TBH I am not expecting a useful answer.
Good luck connecting to a human at Meta. Even the CEO is a cyborg.
:-)
But also :-(
Lord Astor was right.
This should not be flagged.
Post itself is a little light on evidence, but there are people here already who've tried to post Linuxey things, and have seen it in action.
I notice a lot of topics being flagged recently.
I would ask flaggers to simply skip those posts and let people who are interested in discussing those topics have their discussion.
Shutting down other people's conversations is a disturbing trend and it is giving HN more of a one-sided echo-chamber feel.
Tech censorship, especially during the pandemic, made some giddy with power that they thought would extend to everything forever.
If I'm reading right, the same Facebook that announced a week or so ago that they were scaling back all moderation and validation around online safety is now putting a blanket ban on users discussing such a fundamental aspect of modern technology that Facebook itself runs on it?
If this is a genuine policy, I'm at a complete loss to understand Facebook's stance on anything.
Distrowatch has taken the observation that distrowatch URLs are blocked and really hyperbolized that into the broader and incorrect claim that discussion of Linux is banned. It isn't.
The "free speech" was a promise to promote right-wing speech. Do not mistake it for ideology.
Banning left-wing activism, whether acknowledging the genocide in Gaza or, apparently, now promoting free (less surveilled) software, is against what the authoritarians want, so it is banned.
This is all consistent if you see it through that lens.
But it's not consistent because Linux is not aligned with either side of US politics. This doesn't address OP's confusion.
dang's probably right that it's a glitch, but I honestly believe Linux is Free as in Freedom, which is opposed by both parties but primarily the radical authoritarians in charge right now.
Linux is free software, and software freedom is communist. It's also the brainchild of a Finn, and every red-blooded American knows that Europeans are all commies.
Real patriots use good ol' American operating systems, like Oracle Solaris™.
I thought they were going to go full free speech. /s
Seriously, if you haven’t already, sign up for a Mastodon account. This is the motivation you need. Encourage some friends and family members to connect with you there.
Why is this submission flagged?
HN moderation outsourced to FB? /joke
<https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...>
Look at the quality of the posts.
This is an obvious mistake, it's obvious Facebook isn't deliberately banning Linux posts, it's obvious their moderation is incorrectly flagging some posts for some reason, it'll get fixed. It could have been an interesting story and discussion about problems with false positives and automated moderating, or about the lack of human contact at Facebook scale, but instead it's just passionate screeds from too easily excitable posters.
(I didn't flag it, btw.)
People flag stuff.
Dump Facebook.
My hypothesis is that they're now censoring things that seem "lefty".
They have been censoring lefty things for ages… It's well known how you could be openly Nazi but never openly communist.
Bot or ML gone wrong and it misunderstood the mention of Linux when associated with bad things and just equated them?
this is why content moderation is a really bad idea. the false positives are going to dominate any moderation you do.
I'd assert that using private walled gardens as primary distribution channels is the root bad idea here.
they go together. once you have a walled garden, the temptation to moderate/censor it is too large. censorship was practically impossible in the old internet before social media.
It was also largely unnecessary because folks hadn't normalized acting like wild animals in online spaces, tools for automating acting like a wild animal online were lacking, and reach was extremely limited so there was little financial incentive for private interests to engage with the space in any way. All of which takes a back seat to folks more or less agreeing that online is where bullshit lived and only an embarrassing rube would take any of it seriously. The great irony here being the amount of bullshit online has only increased decade over decade yet weirdly at some point folks started taking it seriously, with utterly predictable results.
I didn't notice. Maybe because I have a long list of facebook domains in my hosts.deny.