The AI is taking our jerbs thread

The goal of AI agents isn't to become autonomous, so I am not sure what you are not understanding here.


To continue, I think the biggest misconception and false marketing is around so-called Large Reasoning Models. They don't reason, they are architecturally the same as an LLM, and they have mostly the same training methodology.

LLMs are nothing more than non-linear lossy data compression and approximate pattern matching to predict probabilities of tokens following a matched sequence. LRMs are exactly the same, but have additional data sets with intermediate tokens, and extended RL built on the old ideas of chain-of-thought and multi-shot prompting. During inference LRMs produce additional intermediate tokens, but nothing stops you doing that with a plain LLM and a CoT process.
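Just to make that concrete, here's a minimal sketch of CoT prompting against a plain LLM (assuming the OpenAI Python SDK; the model name is only a placeholder, not a recommendation):

```python
# Minimal sketch: eliciting intermediate "reasoning" tokens from a plain LLM
# via a chain-of-thought prompt. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

COT_INSTRUCTION = (
    "Think step by step. Write out your intermediate working first, "
    "then give the final answer on a new line starting with 'Answer:'."
)

def ask_with_cot(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": COT_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    # The reply contains the same kind of intermediate tokens an LRM emits by
    # default, just elicited through the prompt rather than through training.
    return response.choices[0].message.content

print(ask_with_cot("A bat and a ball cost 1.10 in total, and the bat costs "
                   "1.00 more than the ball. How much does the ball cost?"))
```

The difference with an LRM is simply that the intermediate tokens come out without being asked for, because they were baked in during training.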

This is borne out by the recent Apple paper showing that neither LLMs nor LRMs have any reasoning capability, and so there are many problems that current models will be useless at.

It is ironic because AI has a long history of reasoning and planning using symbolic processing. I don't think we will see a big threat to jobs like software engineering until we have models that properly combine symbolic and sub-symbolic processing to apply actual reasoning to language comprehension. Conceptually this doesn't seem hard, but I remember this problem being discussed 25 years ago when I was studying AI (and alcohol) as a wee nipper at undergrad level.
 
It is ironic because AI has a long history of reasoning and planning using symbolic processing. I don't think we will see a big threat to jobs like software engineering until we have models that properly combine symbolic and sub-symbolic processing to apply actual reasoning to language comprehension. Conceptually this doesn't seem hard, but I remember this problem being discussed 25 years ago when I was studying AI (and alcohol) as a wee nipper at undergrad level.
Software engineering won’t cease to be a required profession.

But I’ve gone from managing a team of programmers using what I would now consider legacy tools… to basically managing an array of AI agents instead.

I’d say I’m approximately as productive as a mixed team of four programmers of varying seniority using legacy tools.

By legacy, I mean actually sitting and typing code into an IDE character by character. I honestly can’t imagine going back to working like that; it would feel like trying to code with a pencil and paper.

I still need to watch what is going on and I’m frequently intervening and making adjustments, but the agent is doing all the typing. It’s smashing out boilerplate, testing, etc. in no time.

I need to evaluate some tech? I can literally have an agent smash out a working prototype of a whole application stack in 5 minutes and try it out. The sort of stuff that could end up eating weeks of time when done by hand.

Basically you don’t need amazing reasoning models to put vast numbers of software engineers out of a job, because the smart ones that are leveraging AI tools can be so much more productive with just the current generation of tools.
 
But I’ve gone from managing a team of programmers using what I would now consider legacy tools… to basically managing an array of AI agents instead.
Tbh, I think that says more about the team/project than it does AI agents. That isn't a dig, but it sounds like you were paying people to do basic tasks that could have been automated years ago.
 
Tbh, I think that says more about the team/project than it does AI agents. That isn't a dig, but it sounds like you were paying people to do basic tasks that could have been automated years ago.


Indeed, probably the specific work and team capabilities were simple and particularly amenable to automation.


In my experience we see maybe a 5-10% improvement overall, mostly from the likes of Copilot, which is only able to help with the most basic coding tasks. For the complex work, where the team spends 90% of its effort, there is literally zero benefit today. We also see areas of negative productivity where generated code contains subtle bugs, or more time is spent prompt tuning than simply coding.
 
Tbh, I think that says more about the team/project than it does AI agents. That isn't a dig, but it sounds like you were paying people to do basic tasks that could have been automated years ago.
It's partly the tools I was using in my last job (Anvil is ****), but mostly that it's simply vastly quicker to use an agent to write code than to have teams of people typing out code by hand. I don't need to spend 60% of my time in the various meetings that are required when you're coordinating a team of people. I don't have junior programmers I need to spend time mentoring and developing, I don't have senior programmers writing unnecessarily complex systems that need reworking, all the stuff that comes from having people involved.
Indeed, probably the specific work and team capabilities were simple and particularly amenable to automation.
It's a completely different domain and toolchain (JS/TS/Python) rather than C++, which has various productivity benefits. But I've been building an agent development team to build my personal stuff in Unreal which is coming together nicely as well.
In my experience we see maybe a 5-10% improvement overall, mostly from the likes of Copilot, which is only able to help with the most basic coding tasks. For the complex work, where the team spends 90% of its effort, there is literally zero benefit today. We also see areas of negative productivity where generated code contains subtle bugs, or more time is spent prompt tuning than simply coding.
Copilot is ****. There are massive differences between the different AI coding tools available today. Anthropic are way ahead in terms of coding agents.
 
To continue, I think the biggest misconception and false marketing is around so-called Large Reasoning Models. They don't reason, they are architecturally the same as an LLM, and they have mostly the same training methodology.

LLMs are nothing more than non-linear lossy data compression and approximate pattern matching to predict probabilities of tokens following a matched sequence. LRMs are exactly the same, but have additional data sets with intermediate tokens, and extended RL built on the old ideas of chain-of-thought and multi-shot prompting. During inference LRMs produce additional intermediate tokens, but nothing stops you doing that with a plain LLM and a CoT process.

This is borne out by the recent Apple paper showing that neither LLMs nor LRMs have any reasoning capability, and so there are many problems that current models will be useless at.

It is ironic because AI has a long history of reasoning and planning using symbolic processing. I don't think we will see a big threat to jobs like software engineering until we have models that properly combine symbolic and sub-symbolic processing to apply actual reasoning to language comprehension. Conceptually this doesn't seem hard, but I remember this problem being discussed 25 years ago when I was studying AI (and alcohol) as a wee nipper at undergrad level.

I would say that the LLM is also an associative ontology between the billions of inputs it's trained on.

RAG is (as Dowie has corrected me before) a method of simply compressing those relationships of documents into a vector map that can then be augmented into the LLM vector map. It's a kludge to reduce expensive computational resources - the same way a directory is stored on a hard drive, and why data is stored on a hard drive and not in CPU cache RAM.
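For what it's worth, the usual retrieve-then-augment flow looks roughly like this (a toy sketch using only numpy; the character-histogram embed() is a crude stand-in for whatever embedding model you'd actually use, and the corpus is made up):

```python
# Toy sketch of the usual RAG flow: embed documents once into vectors, find
# the nearest ones to a query, and splice them into the prompt. Uses only
# numpy; embed() is a crude stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: a normalised character histogram, NOT a real model.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Invoices are archived under /finance/2024 after approval.",  # made-up corpus
    "Holiday requests need sign-off from your line manager.",
]
doc_vectors = np.stack([embed(d) for d in documents])  # the "vector map"

def retrieve(query: str, k: int = 1) -> list[str]:
    # Cosine similarity between the query vector and every stored document vector
    # (all vectors are already unit length).
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def augmented_prompt(query: str) -> str:
    # The retrieved text is simply prepended to the prompt that goes to the LLM.
    context = "\n".join(retrieve(query))
    return f"Use the following context to answer.\n{context}\n\nQuestion: {query}"

print(augmented_prompt("Where do approved invoices end up?"))
```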

I think there is a vital missing element that is not present in an LLM or 'reasoning' machine.

Would I trust it to steer my company, no. It's not aware enough.
Would I trust it to summarise my tax return, no. It's not accountable.

Until an AI (and the provider) are held accountable for the AI, we're not going to see a reduction in the hype and lies about the ability of the AI.

Only when the AI is held accountable will it need to demonstrate to itself and others a certification to complete tax advisory work or a tax return, for example.
 
Software engineering won’t cease to be a required profession.

But I’ve gone from managing a team of programmers using what I would now consider legacy tools… to basically managing an array of AI agents instead.

I’d say I’m approximately as productive as a mixed team of four programmers of varying seniority using legacy tools.

By legacy, I mean actually sitting and typing code into an IDE character by character. I honestly can’t imagine going back to working like that; it would feel like trying to code with a pencil and paper.

I still need to watch what is going on and I’m frequently intervening and making adjustments, but the agent is doing all the typing. It’s smashing out boilerplate, testing, etc. in no time.

I need to evaluate some tech? I can literally have an agent smash out a working prototype of a whole application stack in 5 minutes and try it out. The sort of stuff that could end up eating weeks of time when done by hand.

Basically you don’t need amazing reasoning models to put vast numbers of software engineers out of a job, because the smart ones that are leveraging AI tools can be so much more productive with just the current generation of tools.

I'd really like to try and understand here.

Would you be able to describe exactly what job one of your AI agents is doing, why it displaces a person, and most importantly why your company is making money from it?

I've been trying to research AI agents, but everything I'm finding is focused on content creation. I'm not interested in content creation.

I've been using AI to help me with a degree course I'm doing, so I can see how powerful it is, but it can't do the work for me, only support me. I'm struggling to see how, if I put more effort into learning AI, it could make me money directly though.
 
It's partly the tools I was using in my last job (Anvil is ****), but mostly that it's simply vastly quicker to use an agent to write code than to have teams of people typing out code by hand. I don't need to spend 60% of my time in the various meetings that are required when you're coordinating a team of people. I don't have junior programmers I need to spend time mentoring and developing, I don't have senior programmers writing unnecessarily complex systems that need reworking, all the stuff that comes from having people involved.
Honestly, this just sounds like a poorly managed team and project. You can't replace productive engineers with AI agents.

Spending 60% of your time in meetings points to poor planning or just engineers that aren't up to scratch/lack critical thinking and need constant guidance on coding-related items.

Also, no decent senior engineer is writing unnecessarily complex code, so again, either bad engineers or poorly communicated requirements. LLMs are inherently bad at writing anything complex - there's a recent paper from Apple on this too.

Out of interest, is the codebase mostly Python?
 
Honestly, this just sounds like a poorly managed team and project. You can't replace productive engineers with AI agents.
You have a pipeline of work. A good programmer using good AI tools can be much more productive, hence you need fewer engineers to get the job done. It's not like you sack one engineer and put an agent in their place; it's an overall reduction in headcount requirements.

For smaller tasks, our business users can simply raise an issue and tag an agent, and it'll have a go at fixing it. For simple things it's perfectly competent and I can simply accept the PR. For more complex stuff it will get assigned to me for a more hands-on look at it.
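For a rough idea of the shape of that flow, here's a hedged sketch only: fetch_issue() and open_pull_request() are hypothetical placeholders for whatever issue tracker and VCS tooling you have, the Anthropic Python SDK call is the only real API used, and the model name is a placeholder.

```python
# Hedged sketch of the "tag an agent on an issue" flow. The Anthropic Python
# SDK call is the only real API here; fetch_issue() and open_pull_request()
# are hypothetical placeholders for whatever issue tracker / VCS tooling you use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def fetch_issue(issue_id: int) -> str:
    raise NotImplementedError("placeholder: pull the issue text from your tracker")

def open_pull_request(branch: str, patch: str) -> None:
    raise NotImplementedError("placeholder: push the patch and raise a PR for human review")

def handle_tagged_issue(issue_id: int) -> None:
    issue_text = fetch_issue(issue_id)
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": "Propose a minimal patch (unified diff) for this issue:\n" + issue_text,
        }],
    )
    # Nothing is merged automatically: the patch goes up as a PR for a human to review.
    open_pull_request(branch=f"agent/issue-{issue_id}", patch=reply.content[0].text)
```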
Spending 60% of your time in meetings points to poor planning or just engineers that aren't up to scratch/lack critical thinking and need constant guidance on coding-related items.
It would be highly unusual for a Lead Programmer to have 40% of their time available for coding; in many organisations there is zero expectation of a lead being hands-on with code.
Also, no decent senior engineer is writing unnecessarily complex code, so again, either bad engineers or poorly communicated requirements.
You're either not a manager, or every engineer you've ever worked with has been absolutely perfect and never needed any guidance or improvement?
LLMs are inherently bad at writing anything complex - there's a recent paper from Apple on this too.
Good thing an experienced programmer is in the driving seat then.
Out of interest, is the codebase mostly Python?
The parts I'm on at the moment are mainly Python or Typescript, although I'll be picking up some Go/Zig stuff soon, and am using it for C++ in my own time.
 
You have a pipeline of work. A good programmer using good AI tools can be much more productive, hence you need fewer engineers to get the job done. It's not like you sack one engineer and put an agent in their place; it's an overall reduction in headcount requirements.

For smaller tasks, our business users can simply raise an issue and tag an agent, and it'll have a go at fixing it. For simple things it's perfectly competent and I can simply accept the PR. For more complex stuff it will get assigned to me for a more hands-on look at it.
I don't disagree with this for very simple/repetitive work, largely akin to what a junior might do. It's perfectly reasonable to let it fix simple bugs.
It would be highly unusual for a Lead Programmer to have 40% of their time available for coding; in many organisations there is zero expectation of a lead being hands-on with code.
Really depends on the size of the company - if it's large enough that the lead doesn't need to code then it's also large enough that you can't replace the majority of your team with unpredictable agents. A lead might be in a lot of meetings, but they wouldn't all be handholding people while they code.
You're either not a manager, or every engineer you've ever worked with has been absolutely perfect and never needed any guidance or improvement?
I've worked with a ton of rubbish engineers, but that was my point, if you have rubbish engineers then of course you can replace them. If you have good engineers, an agent isn't going to replace them (assuming your team has been right-sized).
The parts I'm on at the moment are mainly Python or Typescript, although I'll be picking up some Go/Zig stuff soon, and am using it for C++ in my own time.
I figured as much (not judging); having used various agents across Python, Java and Rust, I find they get progressively worse with more complex languages. Python is quite forgiving and easily best-case for these tools. They're great for all of the boilerplate in languages like Java and Go, but they fall apart going beyond a class or 2.
 
Even in mid_gen's example, where AI assistants are effective due to the limited scope, complexity, process inefficiencies etc., that doesn't necessarily mean there needs to be a reduction in workforce. If mid_gen maintained the same team size and they could all leverage AI assistants to boost productivity, then the team could release software faster, develop software of higher complexity, or develop completely new products in parallel.

Increasing the value that a software developer can deliver opens up many new possibilities and can ultimately increase the demand for software engineers. The important thing about technology is that there isn't a fixed demand; there is essentially infinite demand, limited only by imagination and budgets. Reducing costs will permit many new products.

In mid_gen's case, maybe he lets go of 4 engineers, but they are now all empowered to own their own projects with AI assistants supporting the coding, or they join larger enterprises that are heavily investing in next-generation projects that are only now becoming feasible.
 
However, you can only do that if the demand is there for your business. Not all software development companies can grow exponentially and clients don't necessarily need more complex software or completely new products. A significant part of the work usually involves making the software less complex and easier to maintain, or building on existing systems.

You'll find that those who adopt its usage and employ people who are proficient in it will dominate. Those who are reluctant to adopt it or simply let their employees use it to become slightly more efficient will be left behind.

Many people use it just to get work done faster and then enjoy the additional downtime if they aren't micromanaged. They're reluctant to reveal how much it's helping them, because, let's be honest, salaried employees don't want more work, even if they can now complete it more efficiently.

Following the 80/20 rule, 20% of people in any given department or team usually produce the majority of the best output. If you take a step back from your own work, work ethic and specific sector, it's easy to see that, at the very least, there will be a reduction in headcounts in numerous low-skill areas. We've barely scratched the surface of these tools, but we're now starting to see the beginnings of true software development automation. If you went back and tried to use ChatGPT 3 to help with any of the things mid_gen is discussing, it would be laughable. Even the latest Copilot is pretty useless. As he said, Anthropic is miles ahead, and that's just with an open LLM (cost permitting). The bespoke stuff, or things that will never be used publicly, is likely already well ahead.
 
I'm just getting ChatGPT to review my latest uni assignment. For the first time I've subscribed (I kept running out of free allocation), and I've initiated the deep research option.

I've already done much of the work (getting help from AI on statistical interpretation along the way or fixing errors). Now I'm asking it to review my report.

It's given me a huge amount of feedback including helping me find references. Just going through it.

BUT - all this good stuff aside - I still don't know how I directly make money from AI. Which is of course the goal.
 
The goal of AI isn't just to directly make money from it... drop the sigma male grindset dude.

You had this same spiel the other day about a reduction in working hours only being a pay rise if you could then use that time productively.
 
The goal of AI isn't just to directly make money from it... drop the sigma male grindset dude.

You had this same spiel the other day about a reduction in working hours only being a pay rise if you could then use that time productively.
My goal is to make more money somehow, using these tools.

On your second point, off topic here really but it's true, especially if you're salaried.
 
My goal is to make more money somehow, using these tools.

On your second point, off topic here really but it's true, especially if you're salaried.

You might then need to change your perspective and realise that it could still be happening indirectly.

This isn't true unless you believe that every waking hour should be spent being productive.

Stop getting caught up in the nonsense of hustle culture and start living in the real world.
 
Exactly why I came into this thread, to try and understand more about it and how it can be used, precisely because all I see on YouTube etc is hustle culture crap which I'm not interested in.

Fair enough, but you've only just paid for the ChatGPT subscription that provides that deep research option and you've already seen some benefits.

It's not difficult to extrapolate this over time and realise that it could lead to some form of financial gain.

It's the old saying: 'Learn to walk before you can run'.
 
Fair enough, but you've only just paid for the ChatGPT subscription that provides that deep research option and you've already seen some benefits.

It's not difficult to extrapolate this over time and realise that it could lead to some form of financial gain.

It's the old saying: 'Learn to walk before you can run'.
In the sense that it's helping me get a qualification which could lead to a better job in the future, sure. But that's not really what I'm looking for. The 'knowledge' I'm gaining from using it in this way isn't helping me gain experience or credibility, so I don't think it will really have much influence in the long term. In other words, when it comes to actually getting a job, the use of AI means I won't really have the knowledge or experience I need, and that risks me being found out as a fraud.

So that's not what I'm looking for. I want to use AI almost passively, on something that I don't need to interact with others on, to generate some income.
 
Then you've failed at the first hurdle of understanding what ChatGPT does, and you aren't in a position like mid_gen is (who's using different models and tools btw), where you can leverage it appropriately.

The idea that you can just subscribe to ChatGPT and start generating an income almost immediately is laughable.
 