face recognition etc

He had some interesting posts in the main Covid one, the Bird Flu one and the UFO Pentagon one.
Says some outlandish things but offers no proof, if I remember correctly, then has a go at everybody for asking.

This is true, just search it: face recognition is real and staff do get alerts. I don't get what your problem is. Bet you go running to feek now to try and get me banned because you don't like what you see. I have given proof, look at Facewatch etc. Best you stay clear of forums and the internet if it's affecting you this bad.
 
He had some interesting posts in the main Covid one, the Bird Flu one and the UFO Pentagon one.
Says some outlandish things but offers no proof, if I remember correctly, then has a go at everybody for asking.

At least I didn't fall for the Covid propaganda, and yes, bird flu etc. have been in the news quite a lot recently too, trying to scare the masses again.

Please note, admins: SGF brought this up, not me.
 
Your delivery needs to improve, but you do raise a legitimate concern, unfortunately.

The issue is, at the moment it's all wrapped up as helping to protect you, safety etc., but there is an underlying theme of control. I mean, just look at Palantir as an example.
 
At least I didn't fall for the Covid propaganda, and yes, bird flu etc. have been in the news quite a lot recently too, trying to scare the masses again.

Please note, admins: SGF brought this up, not me.
Yeah that propaganda of people dying, who'd fall for that?
 
It's not just face recognition now, they can do it with your Wang.

Get that out in a shop and boom, they have that stored and linked to you.

Fortunately for me, most shops don't have the storage capacity to save the footage as it's so huge.

Pretty concerning stuff though......
 
Please note, admins: SGF brought this up, not me.
 
At least I didn't fall for the Covid propaganda

What do you think about my colleagues still dealing with seriously ill Covid patients in wards now?
Or is it propaganda?
Tell me what you think they have really got so I can pass your expert clinical knowledge on to treat them.
 
It's complicated by the fact that shops/pubs have always had the right to refuse admission, as long as it's not based on race or ethnicity. This ability for a shop to "pass around" a face to other shops on the system is a bit concerning. If a shop or a bad-tempered manager wrongly takes umbrage with you, you could end up being refused entry to any number of places.

At what point is the image of the face deemed biometric, and what are the rules about commercial premises passing around biometric data?
 
This is true, just search it: face recognition is real and staff do get alerts. I don't get what your problem is. Bet you go running to feek now to try and get me banned because you don't like what you see. I have given proof, look at Facewatch etc. Best you stay clear of forums and the internet if it's affecting you this bad.
The Facewatch example was interesting, but the core issue remains that there could be bias in the training data. As always, there are two main concerns here:
  1. Poorly trained models can lead to false positives and are likely to be more hassle than they’re worth in the long run.
  2. Poor implementation and usage.
I have no issue with the police using this technology to actively catch wanted criminals, and, frankly, a store owner can ban me for any reason they like. I firmly believe they have no right to my business, nor I to theirs. However, using AI to extend that ban nationwide based on nothing more than a hunch feels like a step too far. That said, we are talking about edge cases here; the majority of the public, myself included, will likely never encounter the issues I’m raising.

In short, I agree that it’s a risk, but not a particularly big one at present, and there are clearly much more troubling issues around the erosion of our civil liberties in the name of protecting them...
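
To put rough numbers on the false-positive point: even a fairly accurate system will mostly flag innocent people when hardly anyone it scans is actually on a watchlist. A quick back-of-envelope sketch in Python, with every figure invented for illustration rather than taken from Facewatch or anywhere else:

```python
# Base-rate arithmetic for watchlist alerts. All numbers are made-up assumptions.
daily_shoppers = 5_000        # faces scanned per store per day (assumed)
watchlist_fraction = 0.0005   # fraction of shoppers actually on a watchlist (assumed)
true_positive_rate = 0.95     # chance a listed person triggers an alert (assumed)
false_positive_rate = 0.001   # chance an ordinary shopper triggers an alert (assumed)

listed = daily_shoppers * watchlist_fraction
ordinary = daily_shoppers - listed

true_alerts = listed * true_positive_rate
false_alerts = ordinary * false_positive_rate

print(f"True alerts per day:  {true_alerts:.1f}")    # ~2.4
print(f"False alerts per day: {false_alerts:.1f}")   # ~5.0
print(f"Share of alerts that are wrong: {false_alerts / (true_alerts + false_alerts):.0%}")  # ~68%
```

With those assumptions, roughly two out of three alerts point at the wrong person, which is exactly the "more hassle than they're worth" scenario.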
 
The Facewatch example was interesting, but the core issue remains that there could be bias in the training data. As always, there are two main concerns here:
  1. Poorly trained models can lead to false positives and are likely to be more hassle than they’re worth in the long run.
  2. Poor implementation and usage.
I have no issue with the police using this technology to actively catch wanted criminals, and, frankly, a store owner can ban me for any reason they like. I firmly believe they have no right to my business, nor I to theirs. However, using AI to extend that ban nationwide based on nothing more than a hunch feels like a step too far. That said, we are talking about edge cases here; the majority of the public, myself included, will likely never encounter the issues I’m raising.

In short, I agree that it’s a risk, but not a particularly big one at present, and there are clearly much more troubling issues around the erosion of our civil liberties in the name of protecting them...
Given the lighting, position and quality of the cameras, it's a huge one on that alone.

IIRC there have already been several instances of people basically being banned from multiple stores because of faulty information, with no way to rectify it, and being told publicly when entering other stores that they're banned for stealing etc.

IF the system is to be used then there needs to be an easy and legally enforceable way to get recourse when the system says you did something, especially given the rate of false positives.

I'm not even that happy about the police using it in general situations as the systems have so many flaws outside of tightly controlled conditions that it's basically putting the onus on the accused to prove they're not who the camera says they are.
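
For anyone wondering what "the system says you did something" actually means under the hood: as far as I understand it, these systems typically reduce each face to an embedding vector and raise an alert when its similarity to a stored watchlist entry clears a threshold. A minimal Python sketch with entirely made-up vectors and a made-up threshold; the point is that poor lighting or a bad angle just drags the score around, and the threshold is a dial trading false alerts against missed matches:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Dummy stored "watchlist" embeddings; a real system would get these from a face-recognition model.
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(3)}

# A noisy capture of person_1: the added noise stands in for bad lighting or an odd angle,
# which lowers the true-match score toward the threshold.
probe = watchlist["person_1"] + rng.normal(scale=0.8, size=128)

THRESHOLD = 0.5  # lower it and false alerts rise; raise it and genuine matches get missed

for name, reference in watchlist.items():
    score = cosine_similarity(probe, reference)
    verdict = "ALERT" if score > THRESHOLD else "no match"
    print(f"{name}: similarity {score:.2f} -> {verdict}")
```

The only real dial in this sketch is the threshold, so where it sits is as much a policy choice as a technical one.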
 
Given the lighting, position and quality of the cameras, it's a huge one on that alone.

IIRC there have already been several instances of people basically being banned from multiple stores because of faulty information, with no way to rectify it, and being told publicly when entering other stores that they're banned for stealing etc.

IF the system is to be used then there needs to be an easy and legally enforceable way to get recourse when the system says you did something, especially given the rate of false positives.

I'm not even that happy about the police using it in general situations as the systems have so many flaws outside of tightly controlled conditions that it's basically putting the onus on the accused to prove they're not who the camera says they are.
It would be interesting to FOI that info after the claimed success in London.
 