Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.
A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cockfight. That's an awfully specific thing for an AI to attempt to properly understand.
Then there is quoting, satire, fair use, and a million other niche, very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.
Well, that's because it's not really a true AI. It's just a bunch of math formulas that follow a rule set made by the admins. Hence, no thinking, and no ability to tell the difference between a cockfight and a zoo, ever.
Plus, it is brittle: paint the cock metallic gray and the algorithm thinks it is a robot fight.
I find it darkly hilarious that a company with literally ~$30 billion in annual revenue is so fixated on reducing the extremely small fraction of personnel and cost required to keep it from becoming even more of a toxic hellhole than it already is. I'm so sad that Facebook needs to employ actual people, and give them actual money, to make so much more money. F&#k Facebook and Zuckerberg; everyone should delete their Facebook accounts in protest, just to stick it to that awful sociopath.
The people who are employed to catch the bad stuff are affected by the bad stuff they catch. People who see death, child porn, Nazi propaganda, etc. all day long often end up with mental health problems. Sure, somebody has to do it, but it would be better if that somebody were a computer that wouldn't end up with nightmares at night.
Except the computer can't do it and probably never will be able to, hence the current situation.
The ultimate solution to this problem is for the human race to evolve beyond the petty, squabbling, vacuous, argumentative, easily-manipulated, trolling creatures we currently are. No progress on that so far.
Fuck off for telling me how I'm supposed to think.
Case in point. You're exactly what I'm talking about.
DOWNVOTED
Oh no, please don't...
A person who makes a joke aimed at people who are normally like that, and who feels sad that sarcasm never works on the internet unless a big flashing neon sign labels it as sarcasm (thereby spoiling the joke)?
Yep.
Friends of mine think that Facebook is persecuting them when innocent content gets blocked, and I have to tell them that Facebook's moderation algorithm is just getting confused.
I didn't know that it sucked West Nile-carrying mosquito-infested stagnant pond water, though. Sounds like they need to re-engineer it from the ground up.
EDIT: My solution to Facebook for this isn't more AI. I don't think they can ever get that right based on the godawful UX of their Creator Tools.
That is true. But it is also true that FB's algorithms can effectively block content based on basic keyword matching and user reports. The issue is that these basic filters certainly do not seem to be enabled in the vast groups hosting extremist ideas, or in posts from users with some degree of connectivity in the network. Just check what happens if you report a casual user trying to sell something or posting something mildly inappropriate. Now try doing the same on content stemming from "influencer" groups and see what happens (the BBC had this experience when reporting some major groups illegally selling land plots in the Amazon forest).
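To make "basic keyword matching and user reports" concrete, here's a minimal sketch in Python; the word list, threshold, and function are invented for illustration, not anything from Facebook's actual pipeline.

```python
# Hypothetical illustration of a "basic filter": keyword matching plus a
# user-report threshold. Nothing here reflects Facebook's real system.
BANNED_KEYWORDS = {"slur1", "slur2"}   # placeholder terms
REPORT_THRESHOLD = 5                   # hypothetical cutoff

def should_block(text: str, report_count: int) -> bool:
    words = set(text.lower().split())
    if words & BANNED_KEYWORDS:        # exact word match against a blocklist
        return True
    return report_count >= REPORT_THRESHOLD

print(should_block("buy my slur1 merch", 0))    # True: keyword hit
print(should_block("totally innocent post", 7)) # True: enough reports
print(should_block("dog-whistle phrasing", 2))  # False: slips through
```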
Yes, people have this view that AI is much more advanced than the reality - cf. the Level 5 self-driving vehicles that are due any day now.
"Is there any incentive or motivation for Facebook to remove Hate Speech?"

The motivation is for more hate speech, because hate speech drives engagement!
Executives find the idea of an AI that can do this sort of thing extremely appealing because the only alternative is paying lots and lots of people to do the moderation, so they succumb to wishful thinking, expecting the AI solution to succeed because they *want* it to succeed.
The only way moderation of this sort will work for the foreseeable future to any level of success is to use actual human eyeballs attached to actual human brains.
Still, Facebook’s leadership has been more concerned with taking down too many posts, company insiders told WSJ. As a result, they said, engineers are now more likely to train models that avoid false positives, letting more hate speech slip through undetected.
Those aren't the same things, are they? Reducing false positives would allow through more legit speech that was previously marked as hate speech in error. Their particular approach could still mean that their adjustments also reduce true positives, but I'm not sure we can assume that.
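For intuition on why the two tend to move together: a single score threshold usually controls both error types. A toy sketch with invented scores:

```python
# Toy classifier scores: higher = "more likely hate speech".
# Labels: 1 = actually hate speech, 0 = legitimate. All numbers invented.
scores = [0.95, 0.90, 0.80, 0.75, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0   ]

def confusion(threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp, fp

for t in (0.5, 0.7, 0.85):
    tp, fp = confusion(t)
    print(f"threshold={t}: true positives={tp}, false positives={fp}")
# Raising the threshold removes false positives (good posts stay up)
# but can also drop true positives (hate speech gets let through).
```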
Yeah, it really seems like they are failing at this.
I've seen a group about 1:87 "HO scale" model trains go away because people are apparently using derogatory slurs in the posts (HO as in someone who sleeps with random people for fun) when talking about the HO scale models.
Another model train group (general, any scale) is freaking out because they are being told that if they don't stop using slurs/hate language (again, HO scale trains) in some posts, they will be removed.
I've seen a Star Trek group in full panic because people are apparently terrorists for discussing the omnipotent alien named "Q" (Star Trek: The Next Generation, Star Trek: Voyager... maybe others).
I've seen people post stuff like "we should eliminate people who infringe on any so-called rights so they can never do it again," but when I point out that murdering people over their perceived "rights" is bad, I'm removed for encouraging violence, because I called it what it is instead of trying to dance around the truth. (This is usually people talking about not wearing masks, wanting to have parties, and encouraging "stopping" anyone who says they need to wear one or shouldn't have gatherings.) When I report the people who dance around it, I'm told they aren't violating any standards.
The system is beyond broken. They allow violent, racist, derogatory, etc. content - you just have to say it without the keywords.
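The failure mode in all of these reads like context-free keyword matching (the classic "Scunthorpe problem"). A minimal sketch with an invented blocklist, showing how "HO scale" trips a filter while reworded hostility sails through:

```python
import re

# Invented blocklist; a real system's list would be far larger.
BLOCKLIST = ["ho", "eliminate"]

def flags(text: str) -> list[str]:
    """Return blocklist words found in text, ignoring all context."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w in BLOCKLIST]

print(flags("Anyone selling HO scale boxcars?"))        # ['ho']: innocent, flagged
print(flags("We should make those people disappear."))  # []: hostile, missed
```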
Their human moderators can't figure out quoting, satire, fair use, etc. either.

Well, content moderation is definitely seen as a pure expense and outsourced to the lowest bidder. So you have some non-native speaker in a third-world country, someone who passed a bare-minimum proficiency test and is paid by the click, deciding in seconds whether something is a violation or not. And you have to dredge the sewers of human existence every day; I'd rather work first-tier call-center support than that. So you're not exactly getting the best of the best.
The solution to FB is less FB. A lot less FB...
I'm really not entirely sure what the upside of Facebook is these days.
Yeah, I follow a bunch of model-related groups where people post images of their nice models of WWII airplanes. And guess what some of the German planes have... yup, small swastikas on the tail fin. So now most groups require everyone to blur the swastika on a model plane, as the post can otherwise get deleted...
There doesn't seem to be any context check in their moderating/banning.
"And if they do care, there's nothing they can do about it."

"They trust me — dumb fucks." - Mark Zuckerberg, Facebook CEO
QED
They're generally the same thing. When you evaluate a model, you plot a curve of (true acceptance rate, false acceptance rate), or in more human terms: if I want to block X% of the bad posts, how many good posts will be blocked too as collateral damage?
[image: ROC curve]
Most of the time you evaluate AUC (area under the curve) - the higher you get, the better, and very often the better model is stronger at all thresholds. For very strong models, they often measure TAR @ FAR = 1e-N for a particular N, i.e., what's my true acceptance rate if I accept 1-in-1,000 errors, or 1-in-1,000,000 errors?
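As a concrete sketch of those metrics (toy labels and scores; scikit-learn is assumed here as one convenient tool):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Toy data: 1 = bad post, 0 = good post; scores from some hypothetical model.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.65, 0.4, 0.3, 0.55, 0.2])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # the (FAR, TAR) curve
print("AUC:", roc_auc_score(y_true, y_score))

# TAR @ FAR <= 0.25: best true-acceptance rate while keeping false positives
# at or below 25% (a loose budget here; strong systems use 1e-3, 1e-6, ...).
budget = 0.25
tar_at_far = tpr[fpr <= budget].max()
print(f"TAR @ FAR<={budget}:", tar_at_far)
```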
You're absolutely right: it's everyone else's fault you're not good at sarcasm, and you're definitely not an asshole about it.
Small thing to consider with that chart, though: I believe that's only the case when you expect comparable amounts of each kind of content. If bad content is a low % of the overall (one hopes, with social media, but I don't know of course), then the random classifier would actually have a far worse impact in terms of false positives.
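Putting invented numbers on that base-rate effect: even a small false-positive rate swamps the true positives when bad content is rare.

```python
# Invented numbers: 1,000,000 posts, 0.1% actually bad.
total_posts = 1_000_000
prevalence = 0.001
bad = int(total_posts * prevalence)      # 1,000 bad posts
good = total_posts - bad                 # 999,000 good posts

tpr = 0.95   # assume the model catches 95% of bad posts
fpr = 0.01   # and wrongly flags just 1% of good posts

true_positives = tpr * bad               # 950 bad posts blocked
false_positives = fpr * good             # 9,990 good posts blocked
precision = true_positives / (true_positives + false_positives)

print(f"blocked bad posts:  {true_positives:.0f}")
print(f"blocked good posts: {false_positives:.0f}")
print(f"precision: {precision:.1%}")     # ~8.7%: most blocked posts were fine
```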
So, let them hire people to do it.
Insane, you say? Just why is it unreasonable to expect companies to hire enough people to control their company? Facebook has over 1.5 billion members and makes tens of billions of dollars a year. If they need to go out and hire a million people to moderate FB, then so be it.
If a company grows to the point where it cannot be managed, that is corporate malpractice, and that is the case with FB now. We need to make them be better, and if that bankrupts them in the process? Well, someone else will come along with the replacement.
"Unfortunately a receiver operating characteristic curve relies on a clear and well-defined 'true positive' and 'true negative'. It generally doesn't work for nondichotomous continuous variables, and it can be horribly misleading if there isn't a gold standard for what is actually positive or negative (or the standard is subjectively dependent). For hate speech, both of those things are true, with the 'gold standard' testers having variable criteria themselves."

Well, you can't evaluate whether an algorithm is right or wrong without deciding what your desired outcome is. And by "correct" in this context I mean the binary variable of what Facebook wants to block, so as not to get lost in an ideological debate about what constitutes hate speech or what they should and shouldn't block. Even if people don't agree, you could hold some sort of majority vote to get a measuring stick to measure by.
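A minimal sketch of that majority-vote idea (invented labels; a real annotation pipeline would also track inter-rater agreement more carefully):

```python
from collections import Counter

# Each post gets independent labels from several raters (invented data):
# 1 = hate speech, 0 = fine. The majority label becomes the "gold standard".
ratings = {
    "post_a": [1, 1, 0],   # 2/3 raters say hate speech -> labeled 1
    "post_b": [0, 0, 1],   # majority says fine         -> labeled 0
    "post_c": [1, 0, 1],
}

def majority_label(votes: list[int]) -> int:
    return Counter(votes).most_common(1)[0][0]

gold = {post: majority_label(votes) for post, votes in ratings.items()}
print(gold)  # {'post_a': 1, 'post_b': 0, 'post_c': 1}

# Disagreement rate hints at how shaky the "gold standard" really is.
split = sum(1 for v in ratings.values() if len(set(v)) > 1) / len(ratings)
print(f"posts with rater disagreement: {split:.0%}")
```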
This...
Or maybe robot porn? Is that allowed on FB?
Tim De Chant said:
"That year, Facebook made it a goal to 'reduce $ cost of total hate review capacity by 15%,' one document says."

I see. The goal of the moderation is to lower costs, not to lower hate.
Yoda":3rf90dom said:Hate leads to suffering.
Right, that word is clearly sexual/nudity according to their system...

"I have a hard time believing they can't design a more effective system, given some of the advanced AI today."
But they're *NOT* using AI for the content moderation. They just say they are. So let's stop it, OK?
They're using simple word-matching, bolstered by some conditions that a human puts in place when enough people complain (or the media makes them look silly) that the word "breast" isn't "hate speech," etc.
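A minimal sketch of that word-matching-plus-hand-patched-conditions pattern (all words and carve-outs invented for illustration):

```python
# Invented word lists illustrating "word matching plus hand-patched conditions".
FLAGGED_WORDS = {"breast", "ho"}
# Exceptions bolted on after enough complaints (or bad press):
ALLOWED_PHRASES = {"breast cancer", "chicken breast", "ho scale"}

def is_violation(text: str) -> bool:
    lowered = text.lower()
    for phrase in ALLOWED_PHRASES:     # manual carve-outs are checked first
        lowered = lowered.replace(phrase, "")
    return any(word in lowered.split() for word in FLAGGED_WORDS)

print(is_violation("Support breast cancer research"))  # False: carve-out applies
print(is_violation("Check out my breast pics"))        # True: bare word match
print(is_violation("New HO scale locomotives!"))       # False: patched after complaints
```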