Is ChatGPT/AI making kids stupid?

The reason I found it alarming is that AI constantly tells me inaccurate information; if university students are using it as a primary method of study, then I feel for them.

Which AI specifically? It isn't difficult to prompt it to cite its sources. Then you can easily check these to see if it has made things up.

Of course, if people are just prompting it with 'Answer this' and copying and pasting its output, they're in for a rough ride.
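
If you're calling a model from code rather than the chat app, it's just as easy to bake the "cite your sources" step into the request. A minimal sketch using the OpenAI Python SDK (the model name and the example question are only placeholders, not a recommendation):

```python
# Minimal sketch, not production code: ask for an answer plus the sources used,
# so you can go and check them yourself. Assumes the OpenAI Python SDK is
# installed and OPENAI_API_KEY is set; the model name is just an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer the question, then list the sources you relied on as URLs. "
                "If you are not confident in a source, say so explicitly."
            ),
        },
        {"role": "user", "content": "When was the first commercial SSD released, and by whom?"},
    ],
)

print(response.choices[0].message.content)
```

You still have to open the links and check them yourself, of course; the point is just that asking for sources is a one-line addition to the prompt.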
 
Claude and ChatGPT are the two I interact with, and I quite often have to prompt them to justify their answers or search for references because I can see their mistakes.

I’m sure a heck of a lot of users just trust what is presented to them.
 
This is 99% of users though.

In that case, it would be the user's fault rather than the technology itself. It would be similar to just using the first Google result you get when searching a topic and not cross-referencing it, etc.

Claude and ChatGPT are the two I interact with, and I quite often have to prompt them to justify their answers or search for references because I can see their mistakes.

I’m sure a heck of a lot of users just trust what is presented to them.

It sounds like they're doing what's required when prompted correctly?

If users are only using the initial output from a basic prompt, which is just generated text with no citations, then that is entirely their responsibility.
 
The company I work at recently sacked someone for using ChatGPT. Not only did she feed it confidential data, but she didn't bother to check the reports she asked it to generate. Lots of errors were picked up after she submitted them and eventually she had no choice but to admit what she had done. Madness...

I've not used AI once and don't think I will. I understand it can be helpful in some cases but I'd rather just search for myself.
 
In that case, it would be the user's fault rather than the technology itself. It would be similar to just using the first Google result you get when searching a topic and not cross-referencing it, etc.

Agreed, I’m not arguing against AI’s usefulness; it saves me time almost daily.
 
I’ve noticed it in older people too, but yes, I think AI is making people lazier in areas where it can be used.

Most people are lazy by default.

Look at the situation with Ozempic instead of eating better and exercising.
 
In that case, it would be the user's fault rather than the technology itself. It would be similar to just using the first Google result you get when searching a topic and not cross-referencing it, etc.
It's definitely the user's fault, but it's also driven by the fact that the LLM will state things as fact. Google might list results in some form of relevance order, but it returns several results instead of a single answer presented as "This is right, you can trust this 100%".

The LLMs aren't really at fault either. Sure, they can be improved and optimised, but they're providing an answer based on the context they decided was relevant or have access to. People are generally too lazy (or thick) to bother properly understanding a topic.
 
but it returns several results instead of a single answer presented as "This is right, you can trust this 100%".

It doesn't. Take ChatGPT, for example: it states this at the bottom of every new chat. "ChatGPT can make mistakes. Check important info."

Claude has a similar message, and I believe the other major ones do too. If users believe they can trust any output 100%, it's simply a skill issue on their part.
 
I wasn’t referring to the disclaimers, which people obviously don’t read or consider.

I’m talking about the way it responds with “This is the answer to your question because of these reasons”, which is what people fall for. If you’ve ever used one of these things in earnest, especially on a topic you’re familiar with, you’ll see its limitations.

End of the day, it’s down to the user to use it correctly; unfortunately, the majority of the population aren’t equipped for that.
 

The argument that “people fall for” structured explanations also misses the point. Presenting reasoning is not deception; it’s transparency. Yes, the reasoning can be wrong, but so can textbooks, teachers, or journalists. The difference is that AI can be iteratively improved with guardrails, fine-tuning, and clearer prompts. Saying the majority are “unequipped” is a low view of the public. We don’t expect everyone to understand the internals of a search engine to use Google responsibly, yet society adapted.

So instead of dismissing AI as a tool that most people “can’t use correctly,” it’s more accurate to see it as a literacy challenge. We improve education around it, build better interfaces and increase transparency, just as we did with every transformative technology before it.
 
You vastly overestimate the average person. Half of America voted for Trump; people are still falling for crypto scams; TikTok and influencer mind rot. People lack critical thinking skills.

The models are reasoning on available context, which isn’t always complete, non-conflicting or even factually correct. They will keep improving, and having more subject-specific models will help improve accuracy, but they’re fundamentally limited and will always require a level of scepticism, whereas most people are happy to just accept what they say and move on.

Literacy will naturally improve over time, and it’s in these companies’ best interest to assist with that. People are already becoming more sceptical vs the blind hype we saw before.
 
Users need to know something about the credibility of the sources. Most of the time, when I look at the references, the AI is stupidly using data from multiple sources which themselves share a common source, which it neither digs into nor explains. Caveat emptor.
 
Consider that 50% of the population have a below-average IQ, and even the smart ones can be lazy... too lazy to even fact-check.

 
You vastly overestimate the average person. Half of America voted for Trump; people are still falling for crypto scams; TikTok and influencer mind rot. People lack critical thinking skills.

The models are reasoning on available context, which isn’t always complete, non-conflicting or even factually correct. They will keep improving, and having more subject-specific models will help improve accuracy, but they’re fundamentally limited and will always require a level of scepticism, whereas most people are happy to just accept what they say and move on.

Literacy will naturally improve over time, and it’s in these companies’ best interest to assist with that. People are already becoming more sceptical vs the blind hype we saw before.

It’s an oversimplification to say “people lack critical thinking skills” as if that’s a fixed trait. History shows that societies adapt to disruptive technologies over time. When the printing press arrived, many feared people couldn’t handle the flood of information, yet literacy rates skyrocketed. When television became dominant, critics said it would rot minds; instead, we built media literacy frameworks.

Yes, some people fall for scams, propaganda, or influencer hype, but that’s not new, nor is it unique to the digital age. Humans have always been susceptible to persuasion and misinformation. The difference now is scale and speed, which makes the problem more visible, but it also means awareness campaigns, regulations, and cultural adaptations spread faster.

On the AI side, the idea that models will “always require scepticism” is true, but that doesn’t make them uniquely dangerous. All knowledge systems (news, textbooks, even experts) require healthy scepticism. The difference is that AI has the potential to amplify literacy. People who never had access to experts now get instant explanations, translations, or tutoring. That’s not just mind rot; it can be empowerment. Assuming the “average person” can’t or won’t improve is self-defeating.

In short, the doomer narrative underestimates people’s capacity to adapt. The trajectory of history suggests the opposite: we tend to rise to the challenge, given time and the right support.
 
I'm all for it; it's made my life easier and I absolutely see the value. I agree that it has the potential to enhance society, and I want it to keep improving and become more available. The conversational style vs a standard Google search is generally much better, especially when trying to learn something. Hopefully more focused models will make them more reliable.

Unfortunately, literacy/training is secondary to just getting people to use it, and none of these tech companies are in it with the primary goal of bettering humanity, so they can have a negative impact on less sceptical users. It's obviously still very early days and, like I mentioned previously, the recent scepticism due to inaccuracies, agents dropping production DBs, etc. is a healthy thing.
 