Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, researchers found over 100 instances of known CSAM across over 325,000 posts on Mastodon. The researchers also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
While the study itself is a good read and I agree with the conclusions—Mastodon and decentralized social media need better moderation tools—it's hard not to read the Verge headline as misleading. One of the study authors gives more context here: https://hachyderm.io/@det/110769470058276368. Basically, most of the hits came from a large Japanese instance that no one federates with; the author even calls out that the blunt instrument most Mastodon admins use is to blanket-defederate from instances hosted in Japan, whose laws around CSAM are more lax than the US's. But the headline seems to imply that there's a giant seedy underbelly to places like mastodon.social[1] that are rife with abuse material. I suppose that's a marketing problem of federated software in general.
[1] There is a seedy underbelly of mainstream Mastodon instances, but it's mostly people telling you how you're supposed to use Mastodon if you previously used Twitter.
In my opinion the biggest issue the author points out is that cached materials are sometimes retained even after moderator action, which honestly just sounds like a straight-up bug more than anything. Though if I were running an instance, the feds showing up at my door with a warrant because I'd been accidentally distributing CSAM would be my nightmare scenario. And of course jurisdiction plays a part, too: an American user on a Canadian server might see drawn depictions of sexualized minors, think "weird but not illegal," and now the admin is hosting content that's illegal in Canada on their Canadian server without ever knowing it.
IMO the best solution to this is something similar to what Renaud Chaput (Mastodon's resident infra boffin) described in his recent blog post: effectively, give admins a way to hand this off to pluggable third-party services. Admins who are worried about this sort of thing can then have some degree of safety via e.g. PhotoDNA, whereas others can take on additional risk and preserve additional privacy.
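A rough sketch of what that hand-off could look like (purely illustrative: the endpoint, payload shape, and "matched" field below are made up, not taken from Mastodon or Renaud's post):

```python
# Hypothetical pluggable media check: an instance POSTs each upload to a
# configured third-party scanning service and acts on the verdict. All
# names here (endpoint, API key, "matched" field) are invented for the sketch.
import requests

SCAN_ENDPOINT = "https://scanner.example.org/v1/check"  # hypothetical service
API_KEY = "instance-specific-key"

def check_upload(media_bytes: bytes, content_type: str) -> bool:
    """Return True if the external service flags the upload as a known match."""
    resp = requests.post(
        SCAN_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": ("upload", media_bytes, content_type)},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("matched", False)
```

An instance that opts in would call something like this during media processing and hold or reject the post on a match; an instance that opts out simply never configures the service.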
All that said: yeah the headline makes it sound like .social is some 8chan-esque hellhole, whereas in reality my feed is 99% German programmers sharing milquetoast political takes.
deleted by creator
The person outright rejects defederation as a solution when it IS the solution: if an instance is in favor of this kind of thing, you don't want to federate with them, period.
I also find the number of calls for a "Fediverse police" in that thread worrying. Scanning every image that gets uploaded to your instance with a third-party tool is an issue too: on one hand, you definitely don't want this kind of shit to even touch your servers; on the other, you don't want anybody dictating that, say, anti-union or similar memes get marked and denounced, and the person who made them marked, targeted, and treated to a nice Pinkerton visit.
This is a complicated problem.
Edit: I see somebody suggested checking the observations against the common and well-used Mastodon blocklists, to see if the shit is contained on defederated instances, and the author said this was something they wanted to check, so I hope there's a follow-up.
The person outright rejects defederation as a solution when it IS the solution
It's the solution in the sense that it removes it from view of users of the mainstream instances. It is not a solution to the overall problem of CSAM and the child abuse that creates such material. There is an argument to be made that that is the only responsibility of instance admins, and that past that point it is the responsibility of law enforcement. This is sensible, but it invites law enforcement to start overtly trawling the Fediverse for offending content and creates an uncomfortable situation for admins and users, as they will go after admins who simply do not have the tools to effectively monitor for CSAM.
Defederation also obviously does not prevent users of your own instance from posting CSAM. Even unknowingly hosting CSAM can easily lead to the admins being prosecuted and the instance being taken down. Section 230 does not apply to material that is illegal on the federal level, and SESTA requires removal of material that violates even state-level sex trafficking laws.
Yeah, I recall that the Japanese instances have a big problem with that shit. As for the rest of us, Facebook actually open-sourced some efficient hashing algorithms for dealing with CSAM; Fediverse platforms could implement these, which would just leave the issue of getting an image-hash database to check against. All the big platforms could probably chip in to get access to one of those private databases and then release a public service for the rest of the ecosystem to use.
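To illustrate the mechanic (hash the upload, compare against known-bad hashes by Hamming distance), here's a sketch using the Python imagehash library as a stand-in for Facebook's PDQ or Microsoft's PhotoDNA; the hash list and threshold below are placeholders, not a real database:

```python
# Sketch of hash-and-compare moderation. imagehash's perceptual hash stands in
# for PDQ/PhotoDNA; KNOWN_BAD_HEX stands in for a vetted hash database that a
# real deployment would query instead of keeping locally.
from PIL import Image
import imagehash

KNOWN_BAD_HEX = ["d1d1d1d1d1d1d1d1"]  # made-up placeholder hashes
KNOWN_BAD = [imagehash.hex_to_hash(h) for h in KNOWN_BAD_HEX]
MAX_DISTANCE = 8  # Hamming-distance threshold; lower means fewer false positives

def is_known_match(path: str) -> bool:
    """True if the image's perceptual hash is near any known-bad hash."""
    h = imagehash.phash(Image.open(path))
    return any((h - known) <= MAX_DISTANCE for known in KNOWN_BAD)
```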
That'd be useless though, because first, it'd probably be opt-in via configuration settings, and even if it wasn't, people would just fork and modify the code base or simply switch to another ActivityPub implementation.
We're not gonna fix society using tech unless we're all hooked up to some all-knowing AI under government control.
That's not the point. Yes, child porn sites can host child porn; other sites/instances can't stop that. But what other instances can stop is redistributing said child porn, and for that purpose, such technology would be useful.
researchers found 112 instances of known CSAM across 325,000 posts on the platform
So you're willing to vacuum up the hashes of every image file uploaded on thousands of decentralized systems into a centralized system (one that is out of "our" control and coupled with direct access for law enforcement and corporations) to prevent the distribution of the 0.034% of files that are CSAM and that could just as well be reported and deleted by admins and moderators? Remember how Snowden warned us about metadata?
If you think that's a wise tradeoff, I guess, go ahead. But then I'd have to question the entire goal of being decentralized in the first place. If it's all about "a billionaire can't wreak havoc upon my social network", then yeah, I guess decentralization helps a bit, but even that remains to be seen.
But if you're actually willing to do that, you'd probably also be in favor of government backdoors into chat encryption (thus rendering the entire concept moot, because you can't have backdoors that cannot also be discovered by other nefarious actors) and into even more censorship-resistant systems like Tor, because evil people use those to exchange CSAM anonymously as well?
If you read the article, there’s actually more. The problem also isn’t just that they post the material directly onto Mastodon, they also use the platform to network.
More… or less, given that US-centric CSAM detectors mark AI, CG and drawings at the same level as IRL images.
Preventing the networking is called defederation, and that's already there.
I don’t know how you get the impression that this increases censorship.
Instance admins already manually block content. And they are already able to do that to any extent they wish.
This tool would simply automate that process.
Admins would not gain or lose any ability to block content. Identifying child porn would simply be easier.
(Imagine an admin going to their database and doing a CTRL+F with the term “child porn”, and then going through the posts to find offending ones. But instead of CTRL+F it’s an AI.)
(For some reason I don’t get a notification when you answer my comment. Is that a known issue? Did you block me or something?)
I’m referring to the CSAM scanning systems that are outside of the control of almost anyone except governments, three letter agencies, other law enforcement and parts of the private sector.
These systems must be fed the hash of every file submitted to as many instances as possible to be effective, with close to no oversight or public scrutiny.
Pass.
Edit: I’m not blocking you but I noticed intermittent connectivity issues on lemmy.ml today, possibly around the time where I replied.
I don’t know how you get the impression that this increases censorship.
This tool would simply automate that process.
Well… precisely?
Censorship is any removal of material considered “undesirable”, whether you agree with why it is considered “undesirable” or not.
If you want more censorship of “material that you personally consider undesirable”, then just say so, don’t hide behind some disingenuous “but it isn’t censorship”. Then we can discuss the merits of that classification, and of the means proposed to achieve such censorship.
You seem to be missing my point.
This tool would not increase censorship.
Admins are already able to implement all censorship they want.
Admins are already able to block left-wing opinions, right-wing opinions, child porn, normal porn.
And that already happens.
Lots of instances (like feddit.de) block pornographic content.
Lots of instances (like lemmy.blahaj.zone) block right-wing content.
It is already possible, and it is already happening.
An AI which can detect CSAM (and potentially other content) won’t change that. It will simply make the admins’ job easier.
I think you’re missing the opposite point.
An AI trained on a given instance's admin decisions would apply the same censorship the admins already apply. We can agree on that.
An AI trained by a third party on unknown data (data that it would actually be illegal to possess) which can detect "CSAM (and potentially other content)" would increase censorship of both CSAM… and of "potentially other content", outside the control, preferences, or knowledge of the instance admins.
Using an external service to submit ALL content to a third-party-trained AI for a decision not only allows the external service to collect ALL the content (not just the flagged content), but also to change the decision parameters without prior notice or any kind of oversight, and apply them to ALL content.
The problem is a difference between:
- instance modlog -> instance content filtered by instance AI -> makes similar decisions as instance admins
- [illegal to know dataset] -> third-party captures all content, feeds to undisclosed AI -> makes unknown decisions in the name of removing CSAM
One is an AI that can make mistakes, but mostly follows whatever an admin would do. The other is a 100% surveillance-state nightmare in the name of filtering 0.03% of content.
That'd be useless though, because first, it'd probably be opt-in via configuration settings, and even if it wasn't, people would just fork and modify the code base or simply switch to another ActivityPub implementation.
No it wouldn’t, because it’d still be significantly easier for instances to deal with CSAM content with this functionality built into the platforms. And I highly doubt there’s going to be a mass migration from any Fediverse platform that implements such a feature (though honestly I’d be down to defederate with any instance that takes serious issue with this).
And the instances who want to engage with that material would all opt for the fork and be done with it. That’s all I meant.
Right, and the rest of us would be able to more effectively filter it out from our instances.
Of course, I didn’t say that though.
Pedos who get banned from one platform turn to other platforms that haven't done it yet.
In other news: the sky is blue
While white knights propose ways to control everyone, everywhere, all the time, in the name of catching the pedos, who will just hop to the next platform (or have already).
“massive child abuse material problem”
“112 instances of known CSAM across 325,000 posts”
While any instance is unacceptable, does 112/325,000 constitute a “massive problem”?
0.0000034462% of posts are unacceptable! Massive problem!
You moved the period in the wrong direction. It’s
0.034462%
That's just the material they knew was CSAM from previous investigations.
There were also 713 uses of the top 20 CSAM-related hashtags across the Fediverse on posts that contained media, as well as 1,217 text-only posts that pointed to “off-site CSAM trading or grooming of minors.” The study notes that the open posting of CSAM is “disturbingly prevalent.”
Hi, since Mastodon is no longer acceptable due to the 0.04 percent of posts found to contain abusive material, would someone please suggest the alternative social network with 0 percent of these incidents? Companies like Facebook and Twitter are driven by shareholders and greed. Mastodon is a community effort and you'll certainly find bad actors there, but I feel less dirty contributing to a community project, versus helping billionaires like Zuck and Elon line their pockets harvesting my data.
Is there any way Mastodon stands out from other self-hosted websites? Would the CSAM material be harder to distribute or easier to prosecute if they ran, say, a self-hosted bulletin board for it instead?
Privately hosted websites are only useful for established clients. Via social media and image-sharing platforms, the distributors try to reach new clientele. They often use more or less hidden tags and codes that can attract potential customers. When someone reacts to these, they carefully try to see if the person can be trusted with access to the private sharing. It's how online drug dealers and extremist groups sometimes work.
The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
I agree, but who’s going to pay for it? Those aren’t just freely available additions to any application that you only need to toggle on.
I agree, but who’s going to pay for it?
How about police/the tax payer?
If university researchers can find the stuff, then police can find it too. There should be an established way to flag the user (or even the entire instance) so that content can be removed from the fediverse while simultaneously asking for all data that is available to try to catch the criminals.
And of course, if regular users come across anything illegal they will report it too, and it should be removed quickly (I’d hope immediately in many cases, especially if the post was by a brand new/untrusted account).
A decentralised platform like the Fediverse won't easily work with nation states and their taxes. Even Wikipedia today is not funded directly via any government - but rather by certain universities giving some money to it plus all the private donors.
And even if we get that working, power politics will mess this up like so often when things actually get troublesome.
It might be interesting to explore cryptocurrencies for donations here, though. They do have international liquidity and they can't be misused for power politics.
I’m not suggesting Beehaw/etc should be government funded. Rather I’m suggesting it’s already possible for basically anyone in the fediverse to report a post as needing urgent moderator attention.
I think there will be taxpayer-funded efforts, donation-funded efforts, volunteers, etc. that are unaffiliated with any specific instance but go through major instances and hit the report button where they consider it appropriate — not just manually with people, but also with automated tools such as searching for images by a hash of their contents, or maybe even running messages through a Large Language Model to check whether they are, for example, a form of targeted harassment.
And yes, the report feature will be abused. That’s unavoidable and needs to be taken into account when deciding how to respond to a report. An algorithm could easily prioritise reports based on the history of past reports made by the same person / organisation.
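A toy version of that prioritisation might look like this (the fields and weighting are invented, just to show the shape of the idea):

```python
# Toy report-prioritisation sketch: weight a report's severity by the
# reporter's track record so serial false-reporters sink in the queue.
from dataclasses import dataclass

@dataclass
class Reporter:
    reports_upheld: int    # past reports moderators confirmed
    reports_rejected: int  # past reports moderators dismissed

def report_priority(reporter: Reporter, base_severity: float) -> float:
    total = reporter.reports_upheld + reporter.reports_rejected
    accuracy = (reporter.reports_upheld + 1) / (total + 2)  # Laplace-smoothed hit rate
    return base_severity * accuracy

# e.g. report_priority(Reporter(9, 1), 5.0) ~= 4.2, versus ~0.45 for Reporter(0, 9)
```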
Stack Exchange has a pretty good system - decisions by individuals are not trusted. Rather those trigger a review by a randomly selected (and trusted) individual to get a second opinion. And even after a decision has been made and an action has been taken (ban a user, etc) there’s often a third or even fourth review. And there are processes to appeal and question decisions.
It's not an easy problem to solve, but as the creator of Mastodon said, many hands make light work. The fediverse could someday have a billion people doing moderation tasks, where even simple acts like hitting the upvote button become part of the moderation system (an upvote would imply that this account holder tends to make valuable contributions to the community, and should make the moderation system less likely to come down with the ban hammer).
And I also think there is scope for some communities to be entirely government funded. For example, I'd love for every city in the world to run an official community, with official local government announcements as well as moderated discussions relevant to people who live in or are visiting the city.
That's exactly what I'm doubting about your suggestion of taxpayer funding. I don't think this will happen unless we actually come together and try to enforce it at the political level in various countries.
After all, open source software has been an essential and critical foundation for many decades - but I'm not sure whether any government has pledged to donate a certain amount of money per year to the development and funding of such general-purpose software. (Maybe I'm wrong though.)
Before the fediverse can get any public funding, we need to make some political efforts. The UN is the largest such institution - and it took the whole fiasco of the Second World War to get many countries to pledge to donate to it every year…
I'm not sure whether any government has pledged to donate a certain amount of money per year to the development and funding of such general-purpose software
Tor (The Onion Router) was invented at the United States Naval Research Laboratory. They released the source code as open source and handed control over to the EFF, but several US Government agencies continued to provide substantial funding (especially the Bureau of Democracy, Human Rights, and Labor). As far as I know they continue to fund it.
There are definitely examples of Governments funding open source software, especially things that are as valuable as a social network.
That's exactly what I'm doubting about your suggestion of taxpayer funding. I don't think this will happen unless we actually come together and try to enforce it at the political level in various countries.
I meant private donations, which are already happening.
I think tax revenue would be spent on government employees looking over content in search of evidence of crimes/etc, which I’m sure is also already happening. I hope they don’t just look - they should be reporting whatever they find.
The researchers can’t be taken seriously if they don’t acknowledge that you can’t force free software to do something you don’t want it to.
Even if we started way down the stack and added a CSAM hash scanner to the Linux kernel, people would just fork the kernel and use their own build without it.
Same goes for nginx or any other web server or web proxy. Same goes for Tor. Same goes for Mastodon or any other Fedi/ActivityPub implementation.
It. Does. Not*. Work.
* Please, prove me wrong, I’m not all knowing, but short of total surveillance, I see no technical solution to this.
Mastodon.art doesn’t.
And the beauty of Mastodon is you can block an entire instance, as can your admin, when something awful is posted. Mastodon even has a hashtag they use as an alert for this kind of thing. (#Fediblock)
Not surprised at all. This is a growing pain here too, because this was previously a thing handled invisibly by platforms, and federation makes it fall to individual sysadmins and whoever they have on staff. The tools for this stuff are, in general, not here yet, and as people have noted, there are potential conflicts with some of the principles of federation introduced by those tools that can't be totally handwaved.
I don't trust Stanford not to work on behalf of the CIA or other three-letter orgs. They kind of turn a blind eye to CSA in churches, but a federated medium? This sounds like a smear job.
I really don’t think the CIA cares to be honest…
I think some of the problematic instances have been defederated; IIRC there's a large Japanese instance that was defederated a long time ago due to child abuse content. But still, since I've been seeing increases in hate speech and dog-whistling misogyny and homophobia in some instances, I won't be surprised if CSAM stuff has been traded under our noses.
The main issue is that, with so many users nowadays and small moderation teams, especially on the larger instances, it's hard to moderate and tackle CSAM problems effectively. I really wish larger instances would limit user registrations or start splitting off into smaller, more manageable ones.
Also, since they are trading using certain hashtags, blocking those hashtags might not be a bad idea.
Removed by mod
This is a whataboutist counterpoint at best. Universities and their researchers are not a monolith.
This is just bad press. The actual study is quite good and offers good recommendations on how to improve moderation on the fediverse.
I'm not fully sure about the logic here, or the conclusions it perhaps hints at. The internet itself is a network with major CSAM problems (so maybe we shouldn't use it?).