Calculators don't tend to be wrong.
Terrence Howard would strongly disagree.
As opposed to studying with.. Google or.. a book?
The reason I found it alarming is that AI is constantly telling me inaccurate information. If university students are using it as a primary method of study then I feel for them.
Of course, if people are just prompting it with 'Answer this' and copying and pasting its output, they're in for a rough ride.
This is 99% of users though.
Which AI specifically? It isn't difficult to prompt it to cite its sources. Then you can easily check these to see if it has made things up.
Claude and ChatGPT are the two I interact with, and I quite often have to prompt them to justify answers or search for references as I can see their mistakes.
I’m sure a heck of a lot of users just trust what is presented to them.
I've not used AI once and don't think I will. I understand it can be helpful in some cases but I'd rather just search for myself.
In that case, it would be the user's fault rather than the technology itself. It would be similar to just using the first Google result you get when searching a topic and not cross-referencing it, etc.
I’ve noticed it in older people too, but yes, I think AI is making people lazier in places where it can be used.
It's definitely user fault, but it's also driven by the fact that the LLM will state things as fact. Google might list results in some form of relevance order, but it's returning several results rather than a single answer presented as "This is right, you can trust this 100%".
It doesn't. Take ChatGPT, for example. It states this at the bottom of every new chat: "ChatGPT can make mistakes. Check important info."
Claude has a similar message and I believe that the other major ones do too. If users believe they can trust any output 100%, it's simply a skill issue on their behalf.
I wasn’t referring to the disclaimers, which people obviously don’t read or consider.
I’m talking about the way it responds with “This is the answer to your question because of these reasons”, which is what people fall for, whereas if you’ve ever used one of these things in earnest, especially on a topic that you’re familiar with, you’ll see its limitations.
End of the day, it’s down to the user to use it correctly; unfortunately, the majority of the population aren’t equipped for that.
The argument that “people fall for” structured explanations also misses the point. Presenting reasoning is not deception, it’s transparency. Yes, the reasoning can be wrong, but so can textbooks, teachers, or journalists. The difference is that AI can be iteratively improved with guardrails, fine-tuning, and clearer prompts. Saying the majority are “unequipped” is a low view of the public. We don’t expect everyone to understand the internals of a search engine to use Google responsibly, yet society adapts.
So instead of dismissing AI as a tool that most people “can’t use correctly,” it’s more accurate to see it as a literacy challenge. We improve education around it, build better interfaces and increase transparency, just as we did with every transformative technology before it.
You vastly overestimate the average person. Half of America voted for Trump; people still falling for crypto scams; TikTok and influencer mind rot. People lack critical thinking skills.
The models are reasoning over the available context, which isn’t always complete, consistent or even factually correct. They will keep improving, and more subject-specific models will help improve accuracy, but they’re fundamentally limited and will always require a level of scepticism, whereas most people are happy to just accept what they say and move on.
Literacy will naturally improve over time, and it’s in these companies’ best interests to assist with that. People are already becoming more sceptical vs the blind hype we saw before.
It’s an oversimplification to say “people lack critical thinking skills” as if that’s a fixed trait. History shows that societies adapt to disruptive technologies over time. When the printing press arrived, many feared people couldn’t handle the flood of information, yet literacy rates skyrocketed. When television became dominant, critics said it would rot minds; instead, we built media literacy frameworks.
Yes, some people fall for scams, propaganda, or influencer hype, but that’s not new, nor is it unique to the digital age. Humans have always been susceptible to persuasion and misinformation. The difference now is scale and speed, which makes the problem more visible, but it also means awareness campaigns, regulations, and cultural adaptations spread faster.
On the AI side, the idea that models will “always require scepticism” is true, but that doesn’t make them uniquely dangerous. All knowledge systems (news, textbooks, even experts) require healthy scepticism. The difference is that AI has the potential to amplify literacy. People who never had access to experts now get instant explanations, translations, or tutoring. That’s not just mind rot; it can be empowerment. Assuming the “average person” can’t or won’t improve is self-defeating.
In short, the doomer narrative underestimates people’s capacity to adapt. The trajectory of history suggests the opposite: we tend to rise to the challenge, given time and the right support.
I'm all for it; it's made my life easier and I absolutely see the value. I agree that it has the potential to enhance society and I want it to keep improving and be more available. The conversational style vs your standard Google search is generally much better, especially when trying to learn something. Hopefully more focused models will make them more reliable.