They've been using "AI" since long before the current hype around generative AI. (Granted, rudimentary systems from the 90s that just involve databases and keyword extraction arguably don't count.)
Ranking, parsing, classification, etc. (still largely keyword-driven) all involve machine learning, which is in turn a subset of AI. Modern "generative AI" from the past few years is a subset of machine learning: specifically deep learning with neural nets and the attention mechanism, used to build both diffusion models for image generation and LLMs (large language models) for text.
In fact, as far as I know, most ATS systems still predominantly use "classic" ML techniques rather than the sort of generative AI being referred to in this thread. If they transitioned to more recent models, keywords would matter less: those models have a much better grasp of semantic similarity, so they can't be gamed so easily that way. Though I suppose it also opens up other potential angles of attack; statements that are semantically similar to something highly desirable might boost a CV in the eyes of a newer ATS while avoiding an outright lie when read by a human.
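
To make that concrete, here's a toy sketch of keyword matching vs. embedding-based semantic matching. The sentence-transformers library and the all-MiniLM-L6-v2 model are just illustrative choices on my part; real ATS scoring is proprietary and almost certainly differs:

    # Illustrative sketch only: a crude keyword score vs. embedding similarity.
    # Assumes the sentence-transformers library (pip install sentence-transformers).
    from sentence_transformers import SentenceTransformer, util

    spec = "Led cross-functional teams delivering large-scale distributed systems"
    cv   = "Managed engineers across several departments to ship high-volume backend services"

    def keyword_overlap(spec_text: str, cv_text: str) -> float:
        # Fraction of the spec's words that literally appear in the CV line.
        spec_words = set(spec_text.lower().split())
        return len(spec_words & set(cv_text.lower().split())) / len(spec_words)

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
    spec_emb, cv_emb = model.encode([spec, cv])

    # The CV line shares essentially no keywords with the spec, so a keyword
    # filter scores it near zero, while the embedding model typically rates
    # the pair as similar, since they describe much the same work.
    print("keyword :", keyword_overlap(spec, cv))
    print("semantic:", util.cos_sim(spec_emb, cv_emb).item())

The flip side is exactly the gaming angle above: a line optimised to sit close to a desirable statement in embedding space would score well with the second metric without literally claiming anything.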
The issue with ATS is actually an information problem, and it's one that's seen in cryptography: if you leak information about how inputs will be judged, an attacker can craft inputs that pass, much like an oracle attack.
The hiring manager gives away information in the public job spec.
An AI application bot uses that information to produce a higher-scoring application by matching the spec rather than reflecting what the applicant has actually done.
Interviewing is then cursory, and the mismatch is only detected further down the pipeline.
The way to prevent this is to limit the public job spec to the absolute bare minimum of information, filter the applications that come in (they can't be AI-tailored against a spec they never saw), and only progress to detailed job-spec discussions once the applicant's public information has been checked.
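
Not a real system, but as a sketch the staged disclosure could look something like this (the names, fields, and the public check are all made up for illustration):

    # Hypothetical two-stage pipeline; everything here is illustrative.
    from dataclasses import dataclass

    @dataclass
    class Application:
        candidate: str
        cv_text: str        # free text: easy for an AI bot to tailor
        public_record: str  # e.g. published work, references: harder to fake

    MINIMAL_PUBLIC_SPEC = "Backend engineer, distributed systems"    # all that's advertised
    DETAILED_SPEC = "Kafka, exactly-once delivery, on-call rotation"  # withheld until stage 2

    def passes_public_check(app: Application) -> bool:
        # Placeholder: verify claims against independently checkable sources.
        return "github.com" in app.public_record

    def filter_stage(applications: list[Application]) -> list[Application]:
        # Stage 1: applicants have only seen MINIMAL_PUBLIC_SPEC, so there is
        # very little signal for a bot to tailor the cv_text against.
        return [a for a in applications if passes_public_check(a)]

    def discussion_stage(shortlist: list[Application]) -> None:
        # Stage 2: DETAILED_SPEC is only disclosed to vetted candidates, in
        # live discussion where after-the-fact tailoring doesn't help.
        for app in shortlist:
            print(f"Discuss {DETAILED_SPEC!r} with {app.candidate}")

    discussion_stage(filter_stage([
        Application("Alice", "I build distributed systems", "github.com/alice"),
        Application("Bot-47", "Perfect match for everything you want", ""),
    ]))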