The AI is taking our jerbs thread

I do worry about young people that rely on it more than actually learning core information and skills
I have a somewhat controversial opinion, which is that learning information is getting less important in the modern world, and that it's more important to a) know what information you need and b) know how to retrieve (and ultimately harness) that information efficiently. To a lesser extent I even think we need to reimagine what skills are, if we have tools that can remove most of the need for a given skill. An example would be the rise of pocket calculators: it probably harmed mathematical skills, but the reality is it is very rare you need to do anything more than basic arithmetic unaided by technology. So it becomes more about framing the question you are trying to answer than actually being able to answer it unaided, and I see AI as going down this route somewhat. If you can prompt an AI effectively, and appraise the results it provides you with (challenging where appropriate), then it may replace the need for you to hold some information/skills.

I'm saying this as an AI luddite who hardly ever uses it aside from meeting summaries, and someone who is potentially at risk from AI because it is helping others do stuff that I've traditionally considered a strength of mine. Probably the biggest surprise for me is how much it is used by middle-aged people, not just younger people. I really ought to experiment a lot more with it; the problem I've had is that I ask it to do things and it does 80% of what I've asked, but the remaining 20% is not straightforward to fill in manually. An example: I had what I considered a fairly basic character recognition task (a screenshot with a big list in it), and I wanted Copilot to extract the list and put it in digital format so I could just dump it in a spreadsheet. But it kept missing items off the list despite numerous prompts and requests to include the full list, so in the end it was quicker to manually type them all in. I know there might be other tools to do that, but one of the selling points of these tools is that you should be able to give them a wide variety of tasks rather than needing an armoury of 50 different tools.
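One way to catch that "silently dropped items" failure mode is to diff the AI's output against whatever raw text you have for the source list. A minimal sketch (the function name is my own, not from Copilot or any OCR tool):

```python
def missing_items(source_items, extracted_items):
    """Return items from the source list that the AI/OCR output dropped.

    Comparison is case- and whitespace-insensitive, since chat output
    and OCR often differ from the source only in capitalisation or
    stray spacing.
    """
    def norm(s):
        return " ".join(s.lower().split())

    extracted = {norm(item) for item in extracted_items}
    return [item for item in source_items if norm(item) not in extracted]
```

So `missing_items(["Apples", "Brown Bread", "Cheese"], ["apples", "cheese "])` flags `["Brown Bread"]` as dropped, which at least tells you immediately whether the "full list" request actually worked.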
 

Intelligence has to come from somewhere. We will get to a point where IQ regresses over the generations, if it isn't happening already, because AI will do everything and we won't do anything taxing any more.

It wasn't so long ago that we were doing everything with DOS. Slipstreaming updates into a Windows XP installation and making a new bootable ISO. Making games in Visual Basic. Coding everything by hand. Even setting up a LAN was an effort. It is all just so easy now.

You can attribute this to many walks of life.

Take flying for example. You get taught the old-fashioned way using a CRP-5 flight computer, which is essentially a posh slide rule. You use it to calculate a flight track: the amount of drift correction needed for wind, the ground speed you are going to fly and how long the leg will take. It can also be used for many other things, but it requires a certain amount of brain power to use.

Now all you do with GPS is follow the purple line and that's it.
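For anyone curious what the CRP-5 is actually solving, the core of it is the classic wind triangle. A rough sketch of the standard formulas (function name my own; angles in degrees, wind direction is where the wind blows *from*):

```python
import math

def wind_triangle(tas, true_course, wind_dir, wind_speed):
    """Return (wind correction angle, ground speed) for the wind
    triangle a CRP-5 style flight computer solves graphically.

    tas: true airspeed (kt); true_course/wind_dir: degrees;
    wind_dir is the direction the wind blows FROM; wind_speed in kt.
    """
    rel = math.radians(wind_dir - true_course)  # wind angle off the nose
    # Crosswind component sets how far to crab into the wind.
    wca = math.asin(wind_speed * math.sin(rel) / tas)
    # Groundspeed: airspeed along track minus the headwind component.
    gs = tas * math.cos(wca) - wind_speed * math.cos(rel)
    return math.degrees(wca), gs
```

For example, 100 kt TAS on a due-north track with a 20 kt wind from the east gives roughly an 11.5° crab to the right and about 98 kt over the ground: exactly the sort of numbers you'd spin out of the slide rule.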

It does make you wonder: if we got to a certain point and a catastrophic world event happened, would we have the brain power to start all over again without the help of AI?

Everyone would have forgotten basic skills like agriculture or even building shelter.
 
Numerous skills had already regressed through technological advancements even before [modern] AI, but I think if there was a catastrophic event you'd get a subset of people capable of rebuilding. We wouldn't sustain a world population of 8 billion, but we might sustain 8 million, considering how many indigenous tribes etc. there are. Survival of the fittest and all that; over time knowledge would get transferred, and you'd have a new generation growing up learning from the few that did still hold the necessary skills. Leaving that to one side, I suspect that whilst the genpop might regress, you'd still have the smartest minds seeking out challenges, even if those challenges are things like "how can I make better AI".

You could make an argument that DOS, XP customisation, VB etc. are already a layer of abstraction, with stuff like vibe coding as another layer, just as programming languages are an abstraction layer over assembly code and in turn machine code. Admittedly there's a lot more variability going on there, but in general humans have striven to automate things for centuries: free up minds from doing mundane coding so they can tackle bigger problems.

Another factor to consider is that IQ scores today are so much higher than they were just 100 years ago, so even if we regressed significantly we would likely still be smarter than humans were in the pre-war period. Essentially, humans much "dumber" than the current batch were able to elevate themselves to our level over time, which may provide some hope. 50 years ago I doubt many people did computer programming, but probably many were capable of it with some training and trial and error.
 

I don't think you're wrong about the importance of tool fluency. Framing the question, picking the right tool and then checking what it gives you back does feel like a big part of what "skills" mean today. And I agree, calculators and now AI have already changed how we think about what people need to hold in their heads.

Where I’d gently push back is the idea that learning information itself is less important. Without at least some core knowledge it’s harder to know which tool to pick or whether the output makes sense. There’s also the risk of becoming a bit too dependent. Tools fail, they give patchy results, or they’re simply not there when you need them. If you don’t have that base to fall back on, you can get stuck.

I'd also say that some of those older skills give you a kind of intuition that's easy to lose. Doing maths yourself, even roughly, builds a feel for numbers that helps you spot when a result looks off. If you never build that intuition, you can end up trusting the tool blindly - I've seen that a lot at work, where people will accept answers without question. It's a bit like when we did exams or tests and had to show our working: even if we didn't get the answer right, the teacher could see our methodology and guide us on how to get the right answer.

So for me it’s not about rejecting your point, but about balance. Tools make knowledge more powerful, but knowledge makes the tools meaningful. I suppose the real challenge is how we strike that balance without leaning too much one way or the other. Interesting challenge though thanks for bringing it up! :)
 
Kids were telling me that some of their peers have forgotten how to read an analogue clock.

I never remember phone numbers, but since getting a mobile phone I've had no need to. Or to map read, though planning a route on GPS is kinda similar.

I've been testing AI at work. I don't think Copilot is a good example; I think other AIs might be better. It still gets things wrong a lot, can't find files, leaves things out, and can't do OCR very well. It's mostly good for getting you started, doing that 0-40% that might take you a while to get to. Transcription and summaries it does well. Coding I've not had much experience with, but it wasn't great. Others have said it's good, but they might be using different AIs to me.

I'm not seeing it replacing anyone soon. Another 2-5yrs who knows.
 
Had a good problem to test today. While doing some work on my pet project (a 3D navigation system for Unreal Engine), I found an issue where NPCs can path through walls in certain circumstances. Only certain circumstances mind, but the path could definitely jump through walls while the A* algorithm was exploring.

Using the new Claude Sonnet 4.5 and some Opus 4.1 :

1) Built a range of test cases to try and reproduce the failure.

Tests were good and added a bunch of coverage I didn't have before, but ultimately, they all passed. I couldn't reproduce the exact scenario I was seeing in the demo level.....

2) Set Opus on some Ultrathink analysis to explore the code.

It found two additional bugs in the pathfinding code that I hadn't noticed. Not major, but they would have affected the efficiency of the pathfinding. It didn't get to the root of the problem though.

3) Had Sonnet smash in some logging to throw errors on some edge cases

This threw up some errors when I was seeing the issue in my demo level.

4) Pasted the logging into Claude again, and it identified a pattern in the logs which was the root of the issue. It then correctly implemented the fix, which required some spatial understanding of how the 3D space is represented as voxels, traversals between nodes etc.

5) Then I got it to strip out all the extra logging and tidy things up.

Job done. Could have taken me forever to hunt this down doing it all manually. The real time saver? I just did this in the background while I was doing my day job :P
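For anyone following along, the usual cause of this bug class is neighbour expansion that never checks whether the edge between two nodes is actually traversable, so A* happily explores straight through geometry. A minimal 2D grid sketch of A* with that check, nothing to do with the actual Unreal Engine code above, just an illustration of where the guard lives:

```python
import heapq

def astar(blocked, start, goal, width, height):
    """A* on a 4-connected grid. `blocked` is a set of (x, y) wall
    cells. The crucial line is the traversability check in the
    neighbour loop: omit it and paths pass straight through walls.
    """
    def h(p):  # Manhattan distance, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:  # reconstruct path back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        if g > g_cost[node]:
            continue  # stale heap entry, a cheaper route was found
        x, y = node
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nbr
            # Traversability check: stay on the grid and out of walls.
            if not (0 <= nx < width and 0 <= ny < height) or nbr in blocked:
                continue
            if g + 1 < g_cost.get(nbr, float("inf")):
                g_cost[nbr] = g + 1
                came_from[nbr] = node
                heapq.heappush(open_heap, (g + 1 + h(nbr), g + 1, nbr))
    return None  # goal unreachable
```

In a voxel world the same guard becomes "is this traversal between voxels actually clear", which is presumably the kind of edge case the logging step above flushed out.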
 
Kids were telling me that some of their peers have forgotten how to read an analogue clock.
My dad has mentioned this to me in the past, kids not knowing how to tell the time using an analogue clock, as though it's some sort of travesty.

My view is that telling the time by an analogue clock is an anachronism. Basically, the reason clocks are designed the way they are is not because a clock face is a good way of telling the time per se; it's because the technology of the era relied on a clockwork mechanism rotating some hands at a steady pace, because we simply didn't have the technology to make a better timepiece. People still like the aesthetic of analogue timepieces, but in terms of pure utility I think digital is better, with much less wasted space.

Let's face it, they might be pretty, but analogue clocks suck when it comes to displaying the time. The face has to display every value all the time: 1-12 plus the smaller increments to track minutes/seconds. I'm writing this message literally as the clock turns 11pm, so all three hands are clustered around the 11 and 12. The other ten numbers are kinda redundant space until time moves on. It also doesn't tell me if it's 11am or 11pm, which I appreciate is rarely an issue, but even so.

Knowing how to read a clock face I don't see as a skill that's really worth that much. Being able to navigate a touchscreen device is more useful, whether people like it or not.

If we take it to extremes, one could ask if kids not knowing how to read a sundial is an issue. That's basically a crap version of an analogue clock, which is a crap version of a digital clock.
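To put numbers on how a clock face encodes the time: each hand is just a linear mapping from time to angle, which is also why the face can't distinguish am from pm. A quick sketch of the standard geometry (function name my own):

```python
def hand_angles(hour, minute):
    """Angles (degrees clockwise from 12) of the hour and minute hands.

    The minute hand sweeps 360 degrees per hour (6 deg/min); the hour
    hand sweeps 360 degrees per 12 hours (30 deg/hour), plus 0.5 deg
    for every elapsed minute. The 12-hour face folds am and pm onto
    the same positions.
    """
    minute_angle = minute * 6.0
    hour_angle = (hour % 12) * 30.0 + minute * 0.5
    return hour_angle, minute_angle
```

At 11:00 the hour hand sits at 330° and the minute hand at 0°, and 23:00 lands in exactly the same place, which is the am/pm ambiguity mentioned above.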
 
I'd argue it's not an anachronism. Ergonomically, the human brain can read a scale like a dial faster than bare digits because there is context and scale. It's why we still use dials even on digital displays; sometimes a vertical scale is used alongside the digits.

Digital clocks and watches have been around for over half a century, yet smartwatches still offer analogue displays. Most people around today grew up with digital watches, yet analogue displays are still here.
 
My work has gone from blocking AI to turning it on for everyone, which also means they now want analytics to justify the licences, to demonstrate people are using it. All in a few months.
 
My dad has mentioned this to me in the past, kids not knowing how to tell the time using an analogue clock, as though it's some sort of travesty.

That's up there with someone I knew complaining that kids don't know how to wire plugs any more, when most plugs you can't open unless you butcher them :rolleyes:

My work has gone from blocking AI to turning it on for everyone. Which also means they now want analytics to justify the licences. To demonstrate people are using it. All in a few months.

Same in my company; every month I have to show reports of people using Copilot.
 
Same in my company, every month I have to show reports of people using Co-Pilot.
The question is, are the people using Copilot a lot going on the naughty list, or the promotion list? :P
 
The question is, are the people using Copilot a lot going on the naughty list, or the promotion list? :P

It's not seen as a negative.

Looking at the analytics, it's not clear if people are just using it for search or actively working on prompts, so I'm a little unsure of the stats.
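If the analytics export ever includes per-event data, one way past that ambiguity is to bucket events by type before counting "active" users. A sketch against a made-up event schema (the field names and values below are invented for illustration, not Microsoft's actual report format):

```python
from collections import Counter

# Invented example export: one record per event, with a user and an
# event type. A real Copilot usage report will not look like this.
EVENTS = [
    {"user": "alice", "type": "prompt"},
    {"user": "alice", "type": "prompt"},
    {"user": "bob", "type": "search"},
    {"user": "carol", "type": "summary"},
    {"user": "bob", "type": "search"},
]

def active_users(events, active_types=("prompt", "summary")):
    """Count events per type, and list users who did anything beyond
    plain search - the distinction that raw usage totals hide."""
    by_type = Counter(e["type"] for e in events)
    active = sorted({e["user"] for e in events if e["type"] in active_types})
    return by_type, active
```

On the sample data this separates bob (search only) from alice and carol (actually prompting), which is the split the headline "people are using it" number glosses over.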
 
Ignore any usage stats companies promote. Many are mandating use of the AI tools they've paid lots of money for, to show how useful they are and how much people love using them...

They're handy, sometimes, but certainly not to the extent that they deserve the hype.
 
Had a good problem to test today. While doing some work on my pet project (a 3D navigation system for Unreal Engine), I found an issue where NPCs can path through walls in certain circumstances.

But you used multiple tools, went through several steps, already used some of your experience and intuition, and orchestrated it all.

None of this is feasible for non-coders, and junior engineers would also not get the same outcome.
 