Other than citing the entire training data set, how would this be possible?
The entire training set isn’t used for each generation. Your keywords build the sample from metadata tags tied back to the original images.
If you ask for “Iron Man in a cowboy hat”, the toolset reaches for some catalog of Iron Man images, some catalog of cowboy hat images, and some catalog of person-in-cowboy-hat images as a basis of comparison while it renders the image.
These would be the images attributed to the output.
Do you have a source for this? This sounds like fine-tuning a model, which doesn’t prevent data from the original training set from influencing the output. The method you described would only work if the AI were trained from scratch on nothing but images of Iron Man and cowboy hats, and I don’t think that’s how any of these models work.
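To make the fine-tuning point concrete, here’s a toy sketch (not any real image model, and not code from this thread): a tiny linear model is pretrained on one dataset, then briefly fine-tuned on a second one. The fine-tuned weights stay far closer to the pretrained solution than to a model trained from scratch on the new data, i.e. the original training data keeps shaping the output. All names and numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(w, X, y, lr=0.01, steps=500):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# "Original training set": targets follow weights [3, -2]
X_orig = rng.normal(size=(200, 2))
y_orig = X_orig @ np.array([3.0, -2.0])

# "Fine-tuning set": targets follow weights [0, 5]
X_ft = rng.normal(size=(20, 2))
y_ft = X_ft @ np.array([0.0, 5.0])

w_pre = sgd_fit(np.zeros(2), X_orig, y_orig)   # pretrain on original data
w_scratch = sgd_fit(np.zeros(2), X_ft, y_ft)   # train from scratch on new data
w_ft = sgd_fit(w_pre, X_ft, y_ft, steps=5)     # brief fine-tune from pretrained weights

# After a short fine-tune, the weights are still much closer to the
# pretrained solution than to the from-scratch one.
print(np.linalg.norm(w_ft - w_pre), np.linalg.norm(w_ft - w_scratch))
```

The point of the sketch: fine-tuning starts from the pretrained weights and only nudges them, so you can’t attribute the output to the fine-tuning set alone.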