This isn't social media. It's like an Apple police force. No thanks.
Point is that many companies have these sorts of teams already.
That sounds like you are hiding homophobic images?
I read somewhere that retention rates and staff wellbeing are awful within the departments that manually review abuse, beheadings and hate crimes on social media.
Pay is high, but most people don't last more than a few months. Plus they have to do things like mandatory counselling. I know someone who did it for the police for a while.
He made just enough to move house, pay off his mortgage, and then quit.
Sounds like a great job for me actually, but how much does it pay?
£60k for doing nothing but sitting on your arse looking at images and watching video? Where do I sign up?
How easy is it to fake an image hash?
Like, can I make a picture of something (or just random static) that has a similar hash to a known abuse image?
Could be an ace way to get access to anyone's phone, or to discredit them: just send them an innocuous image that flags the system.
Image hashing is the process of using an algorithm to assign a unique hash value to an image. Duplicate copies of the image all have the exact same hash value. For this reason, it is sometimes referred to as a 'digital fingerprint'.
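As an illustration of that "digital fingerprint" idea, here is a minimal Python sketch using an ordinary cryptographic hash (SHA-256). The filenames are placeholders; real CSAM detection relies on curated hash databases, not ad-hoc scripts like this.

```python
import hashlib
from pathlib import Path

def file_hash(path: str) -> str:
    """SHA-256 hex digest of a file's raw bytes.

    Byte-for-byte copies of an image always produce the same digest,
    which is what makes a hash usable as a 'digital fingerprint'.
    """
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# An exact copy matches; changing even a single byte gives a
# completely different digest (placeholder filenames):
# file_hash("photo.jpg") == file_hash("photo_copy.jpg")    # True
# file_hash("photo.jpg") == file_hash("photo_edited.jpg")  # False
```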
How is image hashing used in the fight against Child Sexual Abuse Material?
Hashing is a powerful tool used by hotlines, Law Enforcement, Industry and other child protection organisations in the removal of Child Sexual Abuse Material (CSAM). This is because it enables known items of CSAM to be detected and removed without requiring them to be assessed again by an analyst.
Because we know that once CSAM exists online it is often shared thousands of times, using hashing technology has an enormous impact. It reduces the workload and emotional stress for analysts and law enforcement of reviewing the same content repeatedly, and reduces the harm to the victim by minimizing the number of people who witness the abuse.
What about if the image is edited?
In earlier versions of hashing technology, if an image underwent very minor alterations, such as being cropped or changed to black and white, then each edited version of the image would be assigned a different hash value. This made using the technology to help remove known CSAM much less effective.
However, in 2009, Microsoft, in collaboration with Dartmouth College, developed PhotoDNA. PhotoDNA uses hash technology but with the added ability to 'recognise' when an image has been edited, so it still assigns the same hash value. Learn more about PhotoDNA here.
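PhotoDNA itself is proprietary, so as a stand-in, here is a hedged sketch of the classic "average hash", one of the simplest perceptual hashes. It needs the Pillow library, the filenames are placeholders, and it is far cruder than PhotoDNA, but it shows the key property: small edits change only a few bits of the hash rather than all of them.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to an 8x8 greyscale thumbnail, then
    set one bit per pixel according to whether it is brighter than the
    mean. Visually similar images produce similar bit patterns."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two hashes disagree (Python 3.10+)."""
    return (a ^ b).bit_count()

# A cropped or black-and-white copy typically differs in only a few of
# the 64 bits, while an unrelated image differs in roughly half:
# hamming(average_hash("original.jpg"), average_hash("edited.jpg"))
```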
Does image hashing affect my privacy?
No. Many platforms use hash technology to detect known CSAM in order to remove it from their platforms. This does not violate users' privacy because the technology only detects matching hashes and does not 'see' any images which don't match the hash. Hash values are also not reversible, so cannot be used to recreate an image.
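To make that concrete: a matcher of this kind only ever compares bit strings against a list of known hashes; it never inspects non-matching images and cannot run a hash backwards into a picture. A hedged sketch (the hash values and distance threshold below are purely illustrative):

```python
# Purely illustrative 64-bit hash values; real systems match against
# vetted databases maintained by hotlines and law enforcement.
KNOWN_HASHES = [0xF0F0_0F0F_AAAA_5555, 0x1234_5678_9ABC_DEF0]

def matches_known(candidate: int, max_distance: int = 5) -> bool:
    """Flag a hash only if it lies within max_distance bits of a known
    hash. Note what this function never sees: the image itself."""
    return any((candidate ^ h).bit_count() <= max_distance
               for h in KNOWN_HASHES)

# Images whose hashes match nothing in the list pass through unexamined.
```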
Learn more about how Artificial Intelligence is used in the fight against CSAM here.
Something something ... Googles 'infant circumcision' for my anti-circumcision campaign ... 'Oh my, should I be in jail now?'.
Have a read of this: https://inhope.org/EN/articles/what-is-image-hashing
I believe he’s implying that an image of a child circumcision, looked up by an activist against forced genital mutilation for a campaign against it, could have the activist labelled a pedophile.