The AI is taking our jerbs thread

They've been using "AI" since long before the current hype around generative AI. (Granted, rudimentary systems from the 90s that just involve databases and keyword extraction arguably don't count.)

Ranking, parsing, classification, etc. (still largely keyword-driven) all involve machine learning, which is in turn a subset of AI. Modern "generative AI" from the past few years is a subset of machine learning - specifically deep learning with neural nets and the attention mechanism, used to build both diffusion models for image generation and LLMs (large language models) for text.

In fact (AFAIK) most ATS systems still predominantly use the "classic" ML techniques rather than the sort of generative AI being referred to in this thread. If they were to move to more recent models then keywords would matter less: those models have a much better grasp of semantic similarity, so can't be gamed so easily that way. Though I guess it also opens up other potential angles to game them; statements that are semantically similar to something highly desirable might boost a CV in the eyes of a newer ATS while avoiding an outright lie when read by a human.
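Rough illustration of the keyword vs semantic gap (just a toy sketch, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model; the requirement and CV line are made up):

```python
# Toy comparison: keyword overlap vs embedding similarity for a CV line
# that paraphrases a requirement without reusing its keywords.
from sentence_transformers import SentenceTransformer, util

requirement = "Experience building CI/CD pipelines for containerised services"
cv_line = "Automated build, test and deployment workflows for Docker-based apps"

# "Classic" ATS-style check: how many of the requirement's words appear in the CV line?
req_words = set(requirement.lower().split())
cv_words = set(cv_line.lower().split())
keyword_score = len(req_words & cv_words) / len(req_words)

# Embedding-based semantic similarity over the same pair.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([requirement, cv_line])
semantic_score = float(util.cos_sim(embeddings[0], embeddings[1]))

print(f"keyword overlap:      {keyword_score:.2f}")  # low - barely any shared words
print(f"embedding similarity: {semantic_score:.2f}")  # noticeably higher for the paraphrase
```

Which is also exactly the property the gaming angle above would exploit: you don't need the right keywords any more, just text that lands near the right place in embedding space.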

The issue with ATS is actually an information problem and it's one that's seen in cryptography.

The hiring manager gives away information in the public job spec.
The AI application bot uses that information to generate a higher-scoring application by matching the spec rather than describing what the applicant has actually done.
Interviewing is then cursory, and the mismatch is only detected further down the pipeline.

The way to prevent this is to limit the public job spec to the absolute bare minimum of information, have the applications filtered against it (they can't be tailored by AI if the detail isn't published), and only progress to job spec discussions once the applicant's public information has been checked.
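The gaming half of that is easy to show with a toy keyword screen (entirely made-up spec and scorer, just to illustrate the shape of the problem):

```python
# Toy keyword-based screen: score = fraction of the advertised requirements
# that appear verbatim in the application text.
ADVERTISED_SPEC = ["kubernetes", "terraform", "python", "postgresql", "grafana"]

def screen_score(application_text: str) -> float:
    text = application_text.lower()
    hits = sum(1 for keyword in ADVERTISED_SPEC if keyword in text)
    return hits / len(ADVERTISED_SPEC)

honest = "Six years running Linux services, some Ansible and MySQL."
tailored = ("Delivered Kubernetes and Terraform infrastructure, Python tooling, "
            "PostgreSQL tuning and Grafana dashboards.")  # generated straight from the public spec

print(screen_score(honest))    # 0.0 - filtered out
print(screen_score(tailored))  # 1.0 - sails through, whether or not any of it is true
```

The less of the spec you publish, the less there is for the bot to echo back.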
 
Not much of a paper. It's not written the way you'd expect an academic paper to be, and it credits a blog. Basically it says it needs to do better for people to have trust in it. I think.
 
Hiring is hit and miss anyway. How do you rate someone who keyword-matches on their CV and talks up their experience vs someone who perhaps doesn't keyword-match but has more rounded experience?
 
I think the problem is the volume: recruiters and hiring companies advertise a role and get 5,000 applicants for every job, and trying to filter them manually and evaluate each one fairly is almost impossible. This is just something to help narrow things down.
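Which in practice means scoring everything automatically and only sending a slice to a human - something like (toy sketch, made-up numbers and a placeholder scorer):

```python
# Shortlisting at volume: score every application, keep the top N for human review.
import random

random.seed(0)
applications = [f"applicant_{i}" for i in range(5000)]

def match_score(applicant: str) -> float:
    # Placeholder for whatever the ATS actually computes (keyword or embedding based).
    return random.random()

shortlist = sorted(applications, key=match_score, reverse=True)[:50]
print(f"{len(shortlist)} of {len(applications)} applications go to a human reviewer")
```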
 
The issue with ATS is actually an information problem and it's one that's seen in cryptography.

The hiring manager gives away information in the public job spec.
The AI application bot uses that information to generate a higher-scoring application by matching the spec rather than describing what the applicant has actually done.
Interviewing is then cursory, and the mismatch is only detected further down the pipeline.

The way to prevent this is to limit the public job spec to the absolute bare minimum of information, have the applications filtered against it (they can't be tailored by AI if the detail isn't published), and only progress to job spec discussions once the applicant's public information has been checked.

Not really; if you're just leaving out some skills that would be implied by the other parts of the job spec, then that's trivial for an LLM to infer.
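e.g. something like this (hypothetical spec text, assuming the OpenAI Python client; any chat-capable model would do much the same):

```python
# Ask a model to expand a deliberately terse job spec into the skills it implies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

terse_spec = "Backend engineer for a high-traffic payments platform. Kubernetes."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "List the skills this job almost certainly requires even though "
                   f"they aren't stated:\n\n{terse_spec}",
    }],
)
print(response.choices[0].message.content)
# Expect the obvious inferences: Docker, CI/CD, SQL, observability, PCI compliance, etc.
```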
 
I can see that being generic, though - it's usually the non-generic stuff they're actually interested in, the examples, and that comes too far into the process. In future the recruiter doing the qualifying call will probably find an AI agent on the call, working from a prompt from the hiring manager to maximise that initial contact.
 
Bit hard to do that with video calls! (And somewhat pointless anyway, as it's hardly a time-consuming element.) What is an issue these days, though, is candidates using AI in the background:

https://cluely.com/ for example.
 
University lecturer - I reckon I'm about 3 or 4 years out from having my teaching responsibilities replaced by AI tutors. Hopefully those of us who are left can lean more heavily into supervisory functions and get a bit more time for research.
 
To correct the mistakes of your replacement AI tutors :D
 

This is why..

The new boss is just like the old boss. There is nothing new or nothing real about AI that cannot be done by a human being in rational thought. I shall continue to ignore AI wherever possible.
 
Objective data analysis at huge scale - medical data and records that can be interpreted by AI will allow advances in medical care, and data correlation that we just can't currently do with doctors, which could/will provide much better early diagnosis, detection of genetic issues, and other things people actually care about.
AI isn't just about taking a job in a call centre, or replacing people's ability to think for themselves - if anything I'd hope it will give people more time to do that. Modern society is responsible for everything being in a hurry, and people are using AI tools to get ahead without realising they might do things better themselves - they just don't have the time.

Avoid/ignore AI if you like - but it isn't going to ignore you.
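Even a crude version of that is just a regression pass over more records than any clinician could read - e.g. (synthetic data, made-up column names, scikit-learn standing in for the "AI"):

```python
# Sketch: fit a risk model over a large synthetic patient table and see which
# recorded factors actually carry the signal for a diagnosis flag.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200_000  # far more records than any one doctor will ever read

records = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "bmi": rng.normal(27, 5, n),
    "systolic_bp": rng.normal(125, 15, n),
    "family_history": rng.integers(0, 2, n),
})
# Synthetic "diagnosis" driven mostly by age and family history.
risk = 0.04 * records["age"] + 1.5 * records["family_history"] + rng.normal(0, 1, n)
records["diagnosed"] = (risk > risk.quantile(0.9)).astype(int)

features = records.drop(columns="diagnosed")
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(features, records["diagnosed"])

# Age and family_history should come out with clearly larger weights than the rest.
for name, coefficient in zip(features.columns, model[-1].coef_[0]):
    print(f"{name:>15}: {coefficient:+.2f}")
```

Real medical data is obviously messier than this, but the scale argument is the same: the model reads every record, a human can't.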
 

This is why..

The new boss is just like the old boss. There is nothing new or nothing real about AI that cannot be done by a human being in rational thought. I shall continue to ignore AI wherever possible.


Given infinite human resources, sure - an army of people instructed to follow specific rules and act as a giant abacus is Turing-complete and can compute anything, eventually (perhaps some time after the heat death of the universe).

In reality, computers are millions or billions of times faster. For all the standard uses of AI, like data mining, regression and classification, there is simply no comparison.
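For a sense of scale (toy benchmark, scikit-learn on synthetic data):

```python
# Train and apply a classifier on a million synthetic rows - typically well under
# a minute on ordinary hardware, versus a lifetime of human effort to hand-score it.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000_000, n_features=20, random_state=0)

start = time.perf_counter()
clf = LogisticRegression().fit(X, y)
predictions = clf.predict(X)
print(f"1,000,000 rows fitted and classified in {time.perf_counter() - start:.1f}s")
```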
 
Yes, for the things computers are good at, sure - like massive data analysis. But I think the things humans are good at, like music, literature, art or architecture, may be equalled but cannot be beaten - and even then only by training on and replicating human achievements.

I feel that a lot of people are just going to stop thinking or achieving anything original, and that is sad.

Speed is useful but not everything.
 
I'm sure all the AI bosses building bunkers for themselves is a sign everything is going to turn out great and we're all going to be happy.
 
Whoever survives on the surface needs to make it their mission to weld the doors shut from the outside.

Also, the government is currently refurbishing nuclear bunkers, which the plebs won't get near without being shot once the apocalypse starts (this is a real policy).
 
I'm starting to think they're using the bunkers to protect themselves from the people rather than from AI.
 