I know people like to dismiss LLMs as just predictive text systems, but describing the neural models being created inside them as mere text prediction doesn't do them justice at all. I read quite an interesting article the other day about an LLM that was being tested on some data and figured out it was probably being tested, based on the data it was being fed.

That said, I think it will take longer than people expect to actually replace developers. I believe it will be a long time before AI can genuinely think and reason; what we're seeing now is a very powerful predictive text system, not actual machine intelligence. Don't get me wrong: generative AI has already changed the way developers work, and it will keep making us more productive. Junior developers will still exist, but their job will look and feel different.
Actual AGI is also extremely scary if you really think about it.
Basically, just because the means of communicating with the NN is a natural-language interface that predicts the next word doesn't mean the NN itself can't continue to evolve and display intelligence as the network grows larger and more complex.
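To make that distinction concrete, here's a minimal sketch of greedy decoding using the Hugging Face transformers library, with GPT-2 as a stand-in for any causal language model (the model, prompt, and loop length are my choices, purely for illustration). The point is that the "next-word prediction" people dismiss is literally one argmax over the network's output; all the learned computation lives in the forward pass the loop queries.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 is just a small, convenient stand-in for any causal LM.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            # All of the interesting, learned computation happens inside this
            # forward pass through the network's weights.
            logits = model(input_ids).logits
            # The "text prediction" part is just this: pick the likeliest next token.
            next_id = logits[0, -1].argmax()
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

The sampling loop is trivially simple; whatever capability there is comes from the network it keeps calling, not from the loop itself.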
Ultimately, any NN has to have physical I/O, be it a text interface, a camera (an eye), or other senses such as temperature or sound.