Also, just to make this clear, Tumblr lied. I mean, we already know this, but I mean they liiiieeeeed. All that stuff they promised about what would or would not be censored? It cannot be delivered with a system this simple. Nude classical sculptures, political protests, male-presenting nipples (really, Tumblr?), nude art outside the context of sex, all of that? You cannot train a two-category, one-layer neural network to carve out those exceptions. It cannot be done. Tumblr never intended for those things to actually be permitted; they were just lying, because the system they have cannot do what they said it would and never will be able to.
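To give a sense of how little machinery we're talking about, here is roughly all that a one-layer, two-category classifier amounts to: one weight per input feature, a bias, and a sigmoid. This is a made-up illustrative sketch, not Tumblr's actual code, and the image size and training loop are arbitrary; the point is that the entire verdict is a single weighted sum, with nowhere to encode context like "this nude is a sculpture" or "this is a protest photo."

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OneLayerClassifier:
    def __init__(self, n_features):
        self.w = np.zeros(n_features)   # one weight per pixel/feature
        self.b = 0.0

    def nsfw_probability(self, x):
        # x: a flattened image; the output is literally sigmoid(w.x + b)
        return sigmoid(x @ self.w + self.b)

    def train_step(self, x, label, lr=0.01):
        # Standard logistic-regression update; label is 1 (nsfw) or 0 (sfw)
        err = self.nsfw_probability(x) - label
        self.w -= lr * err * x
        self.b -= lr * err

clf = OneLayerClassifier(n_features=64 * 64 * 3)           # e.g. a 64x64 RGB image
print(clf.nsfw_probability(np.random.rand(64 * 64 * 3)))   # one number; that's the whole verdict
```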
Also, this kind of system is extremely vulnerable to adversarial attacks. I bet you that before the end of the month someone hooks up their own open-source network that lays an imperceptible (to humans) layer of patterned static over arbitrary images, and trains it by having a bot post the static-covered images to Tumblr and reinforcing on whether they come back labeled nsfw or sfw. Seriously, within a month someone will have an input-output machine that can turn any image 'sfw' in Tumblr's eyes. A sketch of the idea follows.
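Here is a rough sketch of that kind of query-based attack, under stated assumptions: everything is hypothetical, and `query_nsfw_score` is a local toy model standing in for "bot-post the image and read back the label." The strategy is plain black-box random search: nudge one pixel at a time by a tiny amount and keep the change only if the classifier's nsfw score drops.

```python
import numpy as np

def query_nsfw_score(image):
    # HYPOTHETICAL stand-in so the sketch runs end to end: a fixed random
    # linear model squashed through a sigmoid. In the scenario above this
    # would be "post the image via a bot and see how it gets labeled."
    rng = np.random.default_rng(42)              # fixed seed -> same "model" every call
    w = rng.standard_normal(image.size)
    z = image.ravel() @ w / np.sqrt(image.size)  # normalized so the sigmoid isn't saturated
    return 1.0 / (1.0 + np.exp(-z))

def add_static_until_sfw(image, eps=2.0 / 255, steps=5000, threshold=0.5, seed=0):
    """Black-box random search: perturb one pixel at a time by at most +/- eps
    and keep the change only if the classifier's nsfw score goes down."""
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = query_nsfw_score(adv)
    for _ in range(steps):
        if best < threshold:                     # the classifier now says "sfw"
            break
        candidate = adv.copy()
        idx = tuple(rng.integers(0, s) for s in adv.shape)
        # Keep every pixel within eps of the original so the static stays invisible.
        lo = max(image[idx] - eps, 0.0)
        hi = min(image[idx] + eps, 1.0)
        candidate[idx] = np.clip(candidate[idx] + rng.choice([-eps, eps]), lo, hi)
        score = query_nsfw_score(candidate)
        if score < best:                         # keep only the static that helps
            adv, best = candidate, score
    return adv, best

# Usage: start from any image with values in [0, 1]; the result differs from
# the original by at most eps per pixel, which a human can't see.
original = np.random.rand(64, 64, 3)
fooled, score = add_static_until_sfw(original)
print(score)
```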
This is genuinely pathetic. Like, I have real pity for whoever implemented this, because it's clear Tumblr doesn't have any engineers with machine-learning expertise left at all, and they foisted the job off on some poor bastard who has no idea what they're doing and is going to catch all kinds of flak from management for a failure that was perfectly reasonable and predetermined.