Facebook AI moderator confused videos of mass shootings and car washes

Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

Well that's because it's not really a true AI. It's just a bunch of math formulas that follow a rule set made by the admins. Hence, no thinking, and no ability to tell the difference between a cock fight and a zoo, ever.

Plus, it is brittle: paint the cock metallic gray and the algorithm thinks it is a robot fight.

I think you just thought up a more ethical solution to cock fights - robot cock fights.

C'mon down to the Robo Rooster Ranch!
 
Upvote
10 (10 / 0)

CraigJ ✅

Ars Legatus Legionis
27,007
Subscriptor
I think most people grossly overestimate the abilities of AI and ML, including, apparently, most of the companies deploying it. We've done lots of work with AI and ML for an application we've been building for the last three years, and I can tell you from first-hand experience that neural nets are time-consuming to train and fairly easy to corrupt. The broader they become in scope, the less accurate they become.

Artificial Intelligence isn't "intelligent" at all - it's a bunch of pattern matching algorithms. Sophisticated ones to be sure, but just algorithms.

AI can be very good at telling you that a thing is a class of thing within a defined training set. For example, say you want to identify fasteners in your industrial plant: AI can tell you that something is a bolt, nut, washer, or screw, and that can be very reliable. Expecting AI to tell you which specific screw it is - thread pitch, head type, gauge, etc. - is a much more difficult task, and you're going to be hard-pressed to get over 90% accuracy reliably, and that's if you control for ambient light conditions and other variables.
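
A toy illustration of that scope effect, hypothetical and not based on any real fastener data: scikit-learn's bundled digits dataset stands in for plant imagery, with four "narrow" classes playing the role of bolt/nut/washer/screw and all ten classes playing the role of the fine-grained task. The same simple model tends to lose accuracy as the label set broadens.

```python
# Sketch of "broader scope, lower accuracy" on sklearn's digits dataset.
# Hypothetical stand-in for fastener images, not real plant data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

def accuracy_for(classes):
    """Train and test on only the given digit classes; return test accuracy."""
    mask = np.isin(y, classes)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[mask], y[mask], test_size=0.3, random_state=0, stratify=y[mask])
    return LogisticRegression(max_iter=5000).fit(X_tr, y_tr).score(X_te, y_te)

print("narrow scope (4 classes):", accuracy_for([0, 1, 2, 3]))    # typically near-perfect
print("broad scope (10 classes):", accuracy_for(list(range(10))))  # typically lower
```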

AI is a vastly overrated class of technologies at this time - it's good at a lot of things, but it's pretty mediocre to downright crappy at most things.
 
Upvote
25 (26 / -1)

DriveBy

Ars Tribunus Militum
1,853
I find it darkly hilarious that a company with ~$30 billion in annual revenue is so fixated on reducing the extremely small fraction of personnel and cost required to keep it from becoming even more of a toxic hellhole than it already is. I'm so sad that Facebook needs to employ actual people, and give them actual money, to make so much more money. F&#k Facebook and Zuckerberg; everyone should delete their Facebook accounts in protest, just to stick it to that awful sociopath.

The people who are employed to catch the bad stuff are affected by the bad stuff they catch. People who see death, child porn, Nazi propaganda, etc. all day long often end up with mental health problems. Sure, somebody has to do it, but it would be better if that somebody were a computer that wouldn't end up with nightmares when it goes to sleep at night.

Except the computer can't do it and probably never will be able to, hence the current situation.

The ultimate solution to this problem is for the human race to evolve beyond the petty, squabbling, vacuous, argumentative, easily-manipulated, trolling creatures we currently are. No progress on that so far.

Fuck off for telling me how I'm supposed to think.

Case in point. You're exactly what I'm talking about.

DOWNVOTED

Oh no, please don't...

A person who makes a joke aimed at people who are normally like that, and feels sad that sarcasm never works on the internet unless a big flashing neon sign tells people it is (thereby spoiling the joke)?

Yep.

You keep proving my point about humans and online communication. And frankly, your sarcasm sucks.
 
Upvote
27 (27 / 0)

CraigJ ✅

Ars Legatus Legionis
27,007
Subscriptor
Friends of mine think that Facebook is persecuting them when innocent content gets blocked and I have to tell them that Facebook's moderation algorithm is just getting confused.

I didn't know that it sucked West Nile-carrying mosquito-infested stagnant pond water though. Sounds like they need to re-engineer it from the ground up.

EDIT: My solution to Facebook for this isn't more AI. I don't think they can ever get that right based on the godawful UX of their Creator Tools.

The solution to FB is less FB. A lot less FB...
 
Upvote
20 (20 / 0)
Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

Yes, people have this view that AI is much more advanced than it really is - cf. Level 5 self-driving vehicles, due any day now.
That is true. But it is also true that FB's algorithms can effectively block content based on basic keyword matching and user reports. The issue is that these basic filters certainly do not seem to be enabled in the vast groups hosting extremist ideas, or in posts from users with some degree of connectivity in the network. Just check what happens if you report a casual user trying to sell something or posting something mildly inappropriate. Now try doing the same on content coming out of "influencer" groups and see what happens (the BBC had this experience when reporting some major groups illegally selling plots of land in the Amazon rainforest).

Filtering out millions of daily posts is a challenge that requires sci-fi AI capabilities. However, the root cause of this problem is mostly not technological, but an explicit decision by the FB execs on how content is selectively allowed. The fact is that if FB enabled basic filtering of content from groups spreading anything from conspiracy theories to extremism they would be excluding a very vocal segment of its user base that has been living in FB for years.
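
(For illustration, a minimal sketch of how crude such keyword filters are, with a hypothetical blocklist rather than Facebook's actual rules: it flags innocent posts and misses trivially obfuscated ones.)

```python
import re

# Hypothetical blocklist -- not Facebook's actual rule set.
BLOCKED_WORDS = {"ho", "cock"}

def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted word appears, with zero context."""
    words = re.findall(r"[a-z']+", post.lower())
    return any(word in BLOCKED_WORDS for word in words)

print(naive_flag("Anyone selling HO scale locomotives?"))     # True: false positive
print(naive_flag("Live c0ckfight tonight, place your bets"))  # False: trivially evaded
```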
 
Upvote
10 (10 / 0)
Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

Executives find the idea of an AI that can do this sort of thing extremely appealing because the only alternative is paying lots and lots of people to do the moderation, so they succumb to wishful thinking, expecting the AI solution to succeed because they *want* it to succeed.

The only way moderation of this sort will work for the foreseeable future to any level of success is to use actual human eyeballs attached to actual human brains.

Well, that's not strictly true. There's a third way, which is the Facebook approach: don't pay lots and lots of people to do moderation, don't build an AI solution, and just let the world burn. That way they save on both people costs and development costs.

That's not necessarily a bad thing, as, let's be honest, the second Facebook builds a working AI moderation tool it'll go full Skynet and it'll be curtains for the human race.
 
Upvote
-1 (1 / -2)

wxfisch

Ars Scholae Palatinae
719
Subscriptor++
Still, Facebook’s leadership has been more concerned with taking down too many posts, company insiders told WSJ. As a result, they said, engineers are now more likely to train models that avoid false positives, letting more hate speech slip through undetected.

Those aren't the same things, are they? Reducing false positives would allow more legit speech through that was (previously) erroneously marked as hate speech. Their particular approach could still mean that their adjustments would also reduce true positives, but I'm not sure we can assume that.

I think based on the results we can all see it is a very safe assumption that in this case they are the same thing.
 
Upvote
1 (1 / 0)
That is true. But it is also true that FB's algorithms can effectively block content based on basic keyword matching and user reports. The issue is that these basic filters certainly do not seem to be enabled in the vast groups hosting extremist ideas, or in posts from users with some degree of connectivity in the network. Just check what happens if you report a casual user trying to sell something or posting something mildly inappropriate. Now try doing the same on content coming out of "influencer" groups and see what happens (the BBC had this experience when reporting some major groups illegally selling plots of land in the Amazon rainforest).

There is a good reason to be extra careful around the big groups/influencers. Those are bound to have a lot of people reporting them just to troll, and you certainly don't want an overzealous, underpaid moderator to ban the official Red Cross account (if there is one) for showing "drugs", for example. I am not saying that the same rules shouldn't apply to all accounts (big or small), just that there is certainly a reason to think twice or thrice before doing something that will affect potentially millions of people at once (the same goes for the recent Twitter "scandal").
 
Upvote
1 (1 / 0)

SwedBear

Seniorius Lurkius
23
Subscriptor++
Yeah it really seems like they are failing at this.

I've seen a group about 1:87 scale "HO scale" model trains go away because people are apparently using derogatory slurs in the posts (HO as in someone who sleeps with random people for fun) in talking about the HO scale models.

Another model trains (general any scale) group is freaking out because they are being told if they don't stop using slurs/hate language (again, HO scale trains) in some posts they will be removed.

I've seen a Star Trek group in full panic because people are apparently terrorists for discussing the omnipotent alien named "Q" (Star Trek: Next Generation, Star Trek: Voyager...maybe others).

I've seen people post stuff about how "we should eliminate people who infringe on any so-called rights so they can never do it again" but when I point out murdering people for their perceived "rights" is bad I'm then removed for encouraging violence by calling it what it is instead of trying to dance around the truth. (This is usually people talking about not wearing masks and wanting to have parties and encouraging "stopping" anyone who says they need to wear one or shouldn't have gatherings). When I report the people who dance around it, I'm told they aren't violating any standards.

The system is beyond broken. They allow violent, racist, derogatory, etc. content - you just have to say it without the keywords.

Yeah, I follow a bunch of model-related groups where people post images of their nice models of WWII airplanes. And guess what some of the German planes have... yup. Small swastikas on the tail fin. So now most groups require everyone to blur the swastika on a model plane, as the post can otherwise get deleted...

There doesn't seem to be any context check in their moderating/banning.
 
Upvote
17 (17 / 0)

Kjella

Ars Tribunus Militum
1,992
Their human moderators can't figure out quoting, satire, fair use, etc. either.
Well, content moderation is definitely seen as a pure expense and outsourced to the lowest bidder. So you have some non-native speaker in a third-world country who's passed some bare-minimum proficiency test and is paid by the click to determine in seconds whether this is a violation or not. And you have to dredge the sewers of human existence every day; I'd rather work first-tier call center support than that, so you're not exactly getting the best of the best.

And nobody's mentioned the other side: the data will be notoriously noisy, as people report stuff just to be vindictive, or because they're crazy, or as part of a trolling/activist campaign to get legal content removed from the site. So it's not like you want to escalate all the "I don't see any violation" results either. In fact, judging by the constant stream of people innocently banned, I'm guessing many moderators just go by "where there's smoke, there's fire" and ban stuff even if they don't understand the post, and fake it until they make it or get fired.

TL;DR pay peanuts, get monkeys
 
Upvote
13 (13 / 0)

Xepherys

Ars Scholae Palatinae
852
Subscriptor
Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

This to the nth degree!

The cockfighting example is great, but let's take it a step further. In a backyard chicken raising group, someone posts a picture of two of their hens fighting (which sometimes happens) asking about solutions. How would the AI contextually know that it was a legitimate request for help rather than people taking bets on two hens trying to gain the alpha status?

But even worse, when content is incorrectly reported, the appeals process is abysmal. Most appeals just get set to the same removed state, and the option to appeal to the board is almost always declined. My wife is on a month-long mute after she and a friend were teasing each other, precisely the way they do in real life. Since the AI can't detect sarcasm, and the humans who supposedly intervene on users' behalf don't bother, I'm really not entirely sure what the upside of Facebook is these days.
 
Upvote
12 (12 / 0)

graylshaped

Ars Legatus Legionis
61,664
Subscriptor++
Friends of mine think that Facebook is persecuting them when innocent content gets blocked and I have to tell them that Facebook's moderation algorithm is just getting confused.

I didn't know that it sucked West Nile-carrying mosquito-infested stagnant pond water though. Sounds like they need to re-engineer it from the ground up.

EDIT: My solution to Facebook for this isn't more AI. I don't think they can ever get that right based on the godawful UX of their Creator Tools.

The solution to FB is less FB. A lot less FB...

If they have grown to a scale where they can't effectively monitor their own domain, then it is past time for their demise.
 
Upvote
18 (18 / 0)

Inaksa

Ars Scholae Palatinae
736
I've reported groups a dozen times for basically saying "let's kill the poor because they are the source of crime," and they were never taken down. Within those groups there were posts encouraging other members to "put a stop to the mass of immigrants stealing our jobs," yet the automatic moderation tools NEVER flagged or removed those posts... I can't say the AI field is crap, because I am pretty sure it is not, but Facebook's implementation surely seems to be crappy, particularly outside of English-speaking countries... it even fails with Spanish, the language with the second-most native speakers...
 
Upvote
2 (2 / 0)
Yeah it really seems like they are failing at this.

I've seen a group about 1:87 scale "HO scale" model trains go away because people are apparently using derogatory slurs in the posts (HO as in someone who sleeps with random people for fun) in talking about the HO scale models.

Another model trains (general any scale) group is freaking out because they are being told if they don't stop using slurs/hate language (again, HO scale trains) in some posts they will be removed.

I've seen a Star Trek group in full panic because people are apparently terrorists for discussing the omnipotent alien named "Q" (Star Trek: Next Generation, Star Trek: Voyager...maybe others).

I've seen people post stuff about how "we should eliminate people who infringe on any so-called rights so they can never do it again" but when I point out murdering people for their perceived "rights" is bad I'm then removed for encouraging violence by calling it what it is instead of trying to dance around the truth. (This is usually people talking about not wearing masks and wanting to have parties and encouraging "stopping" anyone who says they need to wear one or shouldn't have gatherings). When I report the people who dance around it, I'm told they aren't violating any standards.

The system is beyond broken. They allow violent, racist, derogatory, etc. content - you just have to say it without the keywords.

Yeah, I follow a bunch of model-related groups where people post images of their nice models of WWII airplanes. And guess what some of the German planes have... yup. Small swastikas on the tail fin. So now most groups require everyone to blur the swastika on a model plane, as the post can otherwise get deleted...

There doesn't seem to be any context check in their moderating/banning.

You can be sure there is plenty of context checking. If such content is posted by an "influencer" or in a group that plays a significant role in FB's network in terms of content distribution, then nothing will happen. If the same content is posted by a casual user or in a small hobbyist group, then the content has a high likelihood of being automatically blocked, and the group seriously risks being shut down if the "offenses" are repeated. For years FB allowed a not-insignificant number of groups to disseminate misinformation and hate speech. Today, these groups are major content distribution hubs, and FB has absolutely zero motivation to change the status quo.
 
Upvote
5 (5 / 0)

DarkWolf77

Wise, Aged Ars Veteran
121
Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

Very true, I actually recently got a comment flagged because I said that pro-lifers only care about life before it's born - once they're born, just line 'em up and shoot 'em down.
 
Upvote
2 (6 / -4)

Kjella

Ars Tribunus Militum
1,992
Those aren't the same things, are they? Reducing false positives would allow more legit speech through that was (previously) erroneously marked as hate speech. Their particular approach could still mean that their adjustments would also reduce true positives, but I'm not sure we can assume that.
They're generally the same thing. When you evaluate a model, you plot a curve of (true acceptance rate, false acceptance rate), or in more human terms: if I want to block X% of the bad posts, how many good posts will be blocked too as collateral damage?

[Image: ROC curve]


Most of the time you evaluate AUC (area under the curve) - the higher, the better, and very often the better model is stronger at all thresholds. For very strong models they often measure TAR @ FAR 1e-N for a particular N, i.e. what's my true-accept rate if I accept 1 in 1,000 errors, or 1 in 1,000,000.
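
A minimal sketch of how those numbers come out of a model's scores, using synthetic data (a made-up 10% bad-post rate, not real moderation data) and scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic classifier scores: bad posts score higher on average.
y_true = np.concatenate([np.zeros(9000), np.ones(1000)])  # 10% bad posts (made up)
scores = np.concatenate([rng.normal(0.0, 1.0, 9000),      # good posts
                         rng.normal(1.5, 1.0, 1000)])     # bad posts

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC =", roc_auc_score(y_true, scores))

# TAR @ FAR 1e-3: true-accept rate at the first threshold where the
# false-accept rate reaches 1 in 1,000.
i = np.searchsorted(fpr, 1e-3)
print(f"TAR @ FAR 1e-3 = {tpr[i]:.3f} (threshold {thresholds[i]:.2f})")
```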
 
Upvote
8 (8 / 0)
> The posts removed by AI tools only accounted for 3–5 percent of views of hate speech and 0.6 percent of views of violence and incitement.

Not trying to defend FB, but wouldn't it be true that content removed by automated systems would be removed far more quickly and thus be less likely to get any views at all? Anything that survived long enough to need manual moderation would almost certainly have had time to attract more attention.
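
A toy simulation of that selection effect (made-up rates, purely illustrative): even when each channel removes the same number of posts, the slow channel soaks up nearly all the views.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
views_per_hour = rng.exponential(50.0, n)  # made-up per-post traffic rates

# Even split: half removed by AI within minutes, half by humans after hours.
is_ai = np.arange(n) % 2 == 0
removal_hours = np.where(is_ai,
                         rng.exponential(0.1, n),   # AI: ~6 minutes on average
                         rng.exponential(24.0, n))  # manual: ~1 day on average

views = views_per_hour * removal_hours
print(f"Share of views on AI-removed posts: {views[is_ai].sum() / views.sum():.1%}")
```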
 
Upvote
1 (1 / 0)
Those aren't the same things, are they? Reducing false positives would allow more legit speech through that was (previously) erroneously marked as hate speech. Their particular approach could still mean that their adjustments would also reduce true positives, but I'm not sure we can assume that.
They're generally the same thing. When you evaluate a model, you plot a curve of (true acceptance rate, false acceptance rate), or in more human terms: if I want to block X% of the bad posts, how many good posts will be blocked too as collateral damage?

[Image: ROC curve]


Most of the time you evaluate AUC (area under the curve) - the higher, the better, and very often the better model is stronger at all thresholds. For very strong models they often measure TAR @ FAR 1e-N for a particular N, i.e. what's my true-accept rate if I accept 1 in 1,000 errors, or 1 in 1,000,000.
Small thing to consider with that chart, though: I believe that's only for the case where you expect comparable amounts of good and bad content. If bad content is a low % of the overall (one hopes, with social media, but I don't know of course), then the random classifier would actually have a far worse impact in terms of false positives.
 
Upvote
5 (5 / 0)
Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

Yes, people have this view that AI is much more advanced than it really is - cf. Level 5 self-driving vehicles, due any day now.

And in the meantime, they treat the moderators stuffed into their centers like animals:

https://nymag.com/intelligencer/2019/02 ... ation.html

There is no hope of these people getting any break from AI (or a fair living wage while we're at it, Mark?)
 
Upvote
5 (5 / 0)

Happy Medium

Ars Tribunus Militum
2,024
Subscriptor++
They're generally the same thing. When you evaluate a model, you plot a curve of (true acceptance rate, false acceptance rate), or in more human terms: if I want to block X% of the bad posts, how many good posts will be blocked too as collateral damage?

[Image: ROC curve]


Most of the time you evaluate AUC (area under the curve) - the higher, the better, and very often the better model is stronger at all thresholds. For very strong models they often measure TAR @ FAR 1e-N for a particular N, i.e. what's my true-accept rate if I accept 1 in 1,000 errors, or 1 in 1,000,000.

Unfortunately, a receiver operating characteristic curve relies on a clear and well-defined "true positive" and "true negative". It generally doesn't work for non-dichotomous continuous variables, and it can be horribly misleading if there isn't a gold standard for what is actually positive or negative (or if the standard is subjective). For hate speech, both of those things are true, with the "gold standard" testers applying variable criteria themselves.

My bet is that Facebook has set the threshold on its AI detection so far to the left of the ROC curve that it basically only catches the most obvious of the obvious, and all other slightly obfuscated hate speech is falsely classified as negative. BTW, this is the exact OPPOSITE of what you want to do for a "screening algorithm", which should be tuned to catch most cases, even at the cost of a lot of false positives. These are then checked with gold-standard testing to identify true positives and true negatives. Screening lets you maximize your yield from gold-standard testing (which is often expensive). Once again, Facebook being cheap and trying to do the wrong thing with the wrong analytic testing.
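
A sketch of what that "screening" tuning looks like in practice, on synthetic scores with a hypothetical 95% recall target (not Facebook's actual setting): pick the loosest threshold that still catches most bad posts, accept the false positives, and route everything flagged to gold-standard human review.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_true = np.concatenate([np.zeros(9000), np.ones(1000)])  # synthetic labels
scores = np.concatenate([rng.normal(0.0, 1.0, 9000),
                         rng.normal(1.5, 1.0, 1000)])     # synthetic scores

fpr, tpr, thr = roc_curve(y_true, scores)

# Screening mode: lowest threshold that still catches 95% of bad posts.
i = np.argmax(tpr >= 0.95)
print(f"threshold={thr[i]:.2f}  recall={tpr[i]:.2f}  FPR={fpr[i]:.2f}")
# Everything scoring above thr[i] would go to human (gold-standard) review.
```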
 
Upvote
10 (10 / 0)
A person who makes a joke aimed at people who are normally like that, and feels sad that sarcasm never works on the internet unless a big flashing neon sign tells people it is (thereby spoiling the joke)?
You're absolutely right: it's everyone else's fault you're not good at sarcasm, and you're definitely not an asshole about it.
 
Upvote
14 (14 / 0)

Pariah

Ars Tribunus Militum
2,671
So, let them hire people to do it.
Insane, you say? Just why is it unreasonable to expect companies to hire enough people to control their own company? Facebook has over 1.5 billion members and makes tens of billions of dollars a year. If they need to go out and hire a million people to moderate FB, then so be it.
If a company grows to the point where it cannot be managed, that is corporate malpractice, and that is the case with FB now. We need to make them do better, and if that bankrupts them in the process? Well, someone else will come along with a replacement.
 
Upvote
4 (4 / 0)

Kjella

Ars Tribunus Militum
1,992
Small thing to consider with that chart though, I believe that's only in the case where you expect comparable amount of content. If bad content is a low % of the overall (one hopes with social media, but I don't know of course), then the random classifier would actually have a far worse impact in terms of false positives.

No, but for the curve to be valid the test distribution needs to match the training distribution. So if you're looking for the needle in the haystack, you need to train with a realistic ratio of needles to hay. If you train it 50-50 with needles and then use it on data with 0.1% needles, you're going to get a poor result. Maybe not a completely terrible result, since it should still learn "needle features" and "hay features", but definitely not an optimal decision boundary.
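
To make the base-rate point concrete with illustrative numbers (not measured Facebook figures): at a realistic needle-to-hay ratio, even a decent-looking operating point produces mostly false positives.

```python
# Illustrative arithmetic only -- not measured Facebook numbers.
prevalence = 0.001      # 0.1% of posts are "needles"
tpr, fpr = 0.90, 0.01   # a respectable-looking operating point
posts = 1_000_000

true_pos = posts * prevalence * tpr         # 900 bad posts caught
false_pos = posts * (1 - prevalence) * fpr  # 9,990 good posts flagged
print(f"precision = {true_pos / (true_pos + false_pos):.1%}")  # ~8%
```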
 
Upvote
2 (2 / 0)

Happy Medium

Ars Tribunus Militum
2,024
Subscriptor++
So, let them hire people to do it.
Insane, you say? Just why is it unreasonable to expect companies to hire enough people to control their own company? Facebook has over 1.5 billion members and makes tens of billions of dollars a year. If they need to go out and hire a million people to moderate FB, then so be it.
If a company grows to the point where it cannot be managed, that is corporate malpractice, and that is the case with FB now. We need to make them do better, and if that bankrupts them in the process? Well, someone else will come along with a replacement.

You forget that the goal of today's tech company leadership is to eliminate everyone from companies except for C-suite executives. That way all profit can go to the C-suite and hedge funds, which in their minds is the definition of a sustainable business model. Also, have you actually tried to talk to one of these "working class" people? I mean, you can't even chat about flying halfway around the world in their private jet for a round of golf and a (comped) $20k dinner with them! Why would you care to have any of them in your company!? /s

Edit: A reminder that most of these assholes consider the world of Atlas Shrugged to be their utopia, with everyone being "self-actualized" architects whose primary job is pontificating about how great they are and how everyone else is just mooching off of them, while their food/clothing/shelter somehow mysteriously gets created without anyone needing to be paid for it. A book written by a woman whose foundational philosophy was that selfishness is the ideal human state, and who died broke and living off Social Security and Medicare because she couldn't help herself; the only people willing to help her out when she fell on hard times were godless communist humanitarian social workers.
 
Upvote
9 (11 / -2)

magicland

Smack-Fu Master, in training
35
Friends of mine think that Facebook is persecuting them when innocent content gets blocked and I have to tell them that Facebook's moderation algorithm is just getting confused.

I didn't know that it sucked West Nile-carrying mosquito-infested stagnant pond water though. Sounds like they need to re-engineer it from the ground up.

EDIT: My solution to Facebook for this isn't more AI. I don't think they can ever get that right based on the godawful UX of their Creator Tools.

I literally spent 90 out of 100 days this summer in "Facebook jail" because crappy AI algorithms flagged posts that any human could easily see didn't violate a single one of their bogus "standards". And there's no longer any way to protest or get a review (other than by the same crappy AI that flagged you in the first place). There used to be a secret way, and every single time before, I was able to get them to drop the bans, yet they kept escalating the "punishment". I actually had to create a new profile so that the next time their crappy AI bans me (and it will), it'll start out again at 24 hours and not 30 days...
 
Upvote
3 (3 / 0)

Kjella

Ars Tribunus Militum
1,992
Unfortunately a receiver operating characteristic curve relies on a clear and well defined "true positive" and "true negative". It generally doesn't work for nondichotomous continuous variables, and it can be horribly misleading if there isn't a gold standard for what is actually positive or negative (or the standard is subjectively dependent). For hate speech, both of those things are true, with the "gold standard" testers having variable criterion themselves.
Well, you can't evaluate whether an algorithm is right or wrong without deciding what your desired outcome is. And by "correct" in this context I mean the binary variable of what Facebook wants to block, so as not to get lost in an ideological debate about what constitutes hate speech or what they should and shouldn't block. Even if they don't agree, you could hold some sort of majority vote to get a measuring stick to measure by.
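
A minimal sketch of that majority-vote measuring stick, with hypothetical annotations: collect several reviewers' verdicts per post and take the majority as the label to evaluate the model against.

```python
from collections import Counter

# Hypothetical reviewer verdicts per post: True = "violates policy".
annotations = {
    "post_1": [True, True, False],
    "post_2": [False, False, False],
    "post_3": [True, False, True],
}

# Majority vote per post becomes the "ground truth" for evaluation.
ground_truth = {post: Counter(votes).most_common(1)[0][0]
                for post, votes in annotations.items()}
print(ground_truth)  # {'post_1': True, 'post_2': False, 'post_3': True}
```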
 
Upvote
1 (1 / 0)

Steve-D

Ars Scholae Palatinae
1,151
Subscriptor++
I find it darkly hilarious that a company with ~$30 billion in annual revenue is so fixated on reducing the extremely small fraction of personnel and cost required to keep it from becoming even more of a toxic hellhole than it already is. I'm so sad that Facebook needs to employ actual people, and give them actual money, to make so much more money. F&#k Facebook and Zuckerberg; everyone should delete their Facebook accounts in protest, just to stick it to that awful sociopath.

The people who are employed to catch the bad stuff are affected by the bad stuff they catch. People who see death, child porn, Nazi propaganda, etc. all day long often end up with mental health problems. Sure, somebody has to do it, but it would be better if that somebody were a computer that wouldn't end up with nightmares when it goes to sleep at night.
This...
Facebook will pay $52 million in settlement with moderators who developed PTSD on the job.
So it's not just the cost of additional moderators... there's also the risk of more financial exposure to PTSD claims from the expanded group of moderators.
It's a lose-lose for Facebook, and God help the poor employees/contractors who have to review all that cr@p. It's no wonder they hope to deal with this with AI.
 
Upvote
5 (5 / 0)

fractl

Ars Praefectus
3,002
Subscriptor
Most people who think "AI will fix it" wildly underestimate the difficulty of the problem.

A bunch of people with their phones around chickens in a pen is a zoo. A bunch of people with their phones around chickens in a pen clawing each other is a cock fight. That's an awfully specific thing for an AI to attempt to properly understand.

Then there is quoting, satire, fair use, and a million other niche very specific things. Whoever thought AI could "solve" that problem quickly has clearly never actually used any AI before.

Well that's because it's not really a true AI. It's just a bunch of math formulas that follow a rule set made by the admins. Hence, no thinking, and no ability to tell the difference between a cock fight and a zoo, ever.

Plus, it is brittle: paint the cock metallic gray and the algorithm thinks it is a robot fight.
Or maybe robot porn? Is that allowed on FB? :D
 
Upvote
1 (1 / 0)

Perfectly Frank

Ars Centurion
295
Subscriptor
Yeah it really seems like they are failing at this.

I've seen a group about 1:87 scale "HO scale" model trains go away because people are apparently using derogatory slurs in the posts (HO as in someone who sleeps with random people for fun) in talking about the HO scale models.

Another model trains (general any scale) group is freaking out because they are being told if they don't stop using slurs/hate language (again, HO scale trains) in some posts they will be removed.

I've seen a Star Trek group in full panic because people are apparently terrorists for discussing the omnipotent alien named "Q" (Star Trek: Next Generation, Star Trek: Voyager...maybe others).

I've seen people post stuff about how "we should eliminate people who infringe on any so-called rights so they can never do it again" but when I point out murdering people for their perceived "rights" is bad I'm then removed for encouraging violence by calling it what it is instead of trying to dance around the truth. (This is usually people talking about not wearing masks and wanting to have parties and encouraging "stopping" anyone who says they need to wear one or shouldn't have gatherings). When I report the people who dance around it, I'm told they aren't violating any standards.

The system is beyond broken. They allow violent, racist, derogatory, etc. content - you just have to say it without the keywords.

Yeah, I follow a bunch of model-related groups where people post images of their nice models of WWII airplanes. And guess what some of the German planes have... yup. Small swastikas on the tail fin. So now most groups require everyone to blur the swastika on a model plane, as the post can otherwise get deleted...

There doesn't seem to be any context check in their moderating/banning.

How times change!

In the mid-'60s, plastic kits of these planes came with decals for the markings, but without the small swastikas, so you had to buy a separate sheet of swastika decals to complete your model. I think they were illegal in some countries.
 
Upvote
1 (1 / 0)
I would hope that if a tool or group only hit a 3-5% success rate with hopes of up to 20%, a sane company would bolster or replace it.

Ah well. "Artificial intelligence? I see no evidence of intelligence."

Tim De Chant said:
That year, Facebook made it a goal to “reduce $ cost of total hate review capacity by 15%,” one document says.
I see. The goal of the moderation is to lower costs, not lower hate.

Yoda said:
Hate leads to suffering.
 
Upvote
1 (1 / 0)

mmiller7

Ars Legatus Legionis
11,987
I have a hard time believing they can't design a more effective system, given some of the advanced AI today.

But they're *NOT* using AI for the content moderation. They just say they are. So let's stop it, OK?

They're using simple word-matching bolstered by some conditions that a human puts in place when enough people complain (or the media makes them look silly) that the word "breast" isn't "hate speech", etc.
Right, that word is clearly sexual/nudity according to their system...
 
Upvote
0 (0 / 0)