The AI is taking our jerbs thread

What've noted is that LLMs are good if you have a full specification first - then run through once to produce code.

I suspect the attention mechanisms fail to maintain the specification once it becomes an iterative definition. This latter limitation results in poor architecture and systems integrations from what I've seen so far.

Do you use more persona based rule sets in the context? For example - adding the security analyst, the operational monitoring, the financial cost management (for both cost visibility and savings etc)?
You need to understand and manage the context that is being ingested while you’re interacting with an LLM. There’s a finite window, and when you start iterating and going back and forth with it, that context window is filling up, and eventually the stuff you started with starts to fall out of the context.

Similarly with tools and MCP, there is a finite amount you can use and have it remain useful. 30 tools is about the upper limit that can be used.

There are various context management strategies that improve things. Using a scratch memory to track tasks etc. Claude does this when you ask it to plan something.

Also, smaller contexts are generally better. If you have 10 rules in a prompt, it’ll be pretty good at sticking to them. Have 100 rules, much less so.
 
You need to understand and manage the context that is being ingested while you’re interacting with an LLM. There’s a finite window, and when you start iterating and going back and forth with it, that context window is filling up, and eventually the stuff you started with starts to fall out of the context.

Similarly with tools and MCP, there is a finite amount you can use and have it remain useful. 30 tools is about the upper limit that can be used.

There are various context management strategies that improve things. Using a scratch memory to track tasks etc. Claude does this when you ask it to plan something.

Also, smaller contexts are generally better. If you have 10 rules in a prompt, it’ll be pretty good at sticking to them. Have 100 rules, much less so.
Then how is LLM meant to replace us if you need to hand over every single possible spec and edge case of something?

Even in the real world, u can get your design spec done on point before a single line of code is written to then find out midway into coding that the spec design has flaws etc and you pivot and fix and tweak. If LLM cant even do that then how is AI going to take our jobs?
 
I think we're a long way off reaching that stage i.e. 60%, but the idea of some form of universal basic income is one that gets bandied around a lot. However, it's difficult to imagine a stable society when there's an even bigger gap between the haves and the have-nots, with the majority of the population being given the same scraps to feed on.

A pessimistic take but eventually, I think people will mostly live in virtual worlds and take substances to cope with the monotony. At the moment, we're not really far off that, with people mostly living their lives through social media and with the rise in prescription drugs such as SSRIs.

Very much this.
Many people would probably just exist consuming social media now and taking those SSRIs.

I'm currently trying to figure out how to break out of software because it's just stressful now. Constantly looking over your shoulder at how close AI is to taking your job.

I'm OK with chaos in general. Much more than most. All I have been able to find is 1 year jobs. This is incredibly stressful for most people with mortgages and kids.


The level of insecurity is high and I don't see that going away.
 
Then how is LLM meant to replace us if you need to hand over every single possible spec and edge case of something?
You don’t NEED to hand over everything. You just need to provide it with sufficient information to do the task in hand.
Even in the real world, u can get your design spec done on point before a single line of code is written to then find out midway into coding that the spec design has flaws etc and you pivot and fix and tweak. If LLM cant even do that then how is AI going to take our jobs?
Agentic coding tools can and do adapt if they encounter problems while performing a task.
 
Very much this.
Many people would probably just exist consuming social media now and taking those SSRIs.

I'm currently trying to figure out how to break out of software because it's just stressful now. Constantly looking over your shoulder at how close AI is to taking your job.

I'm OK with chaos in general. Much more than most. All I have been able to find is 1 year jobs. This is incredibly stressful for most people with mortgages and kids.


The level of insecurity is high and I don't see that going away.


Get into security. There has never been a higher demand, and the more people that churn out fundamentally flawed code using the like of cursor then the security risks will exponentiate, combine that with people that actually know how to leverage LLMs rapidly developing ever more complex attacks.
Just this morning i read about a prompt attack in a companies automated email response that even without users doing anything, just receiving the email, would trigger the release of secure information from the host.


A guy i worked with for a couple of years is earning 7 figures working in securing companies against insecure LLM code.


Coding faster doesn't mean you can avoid code reviews with experts, infosec analysis , pen tests, gaming. And here is the paradox - if junior engineers are churning out more code with less direct supervision, then the senior engineers will busier than trying to review and analyze code that they had less input into designing.
 
Get into security. There has never been a higher demand, and the more people that churn out fundamentally flawed code using the like of cursor then the security risks will exponentiate, combine that with people that actually know how to leverage LLMs rapidly developing ever more complex attacks.
Just this morning i read about a prompt attack in a companies automated email response that even without users doing anything, just receiving the email, would trigger the release of secure information from the host.


A guy i worked with for a couple of years is earning 7 figures working in securing companies against insecure LLM code.


Coding faster doesn't mean you can avoid code reviews with experts, infosec analysis , pen tests, gaming. And here is the paradox - if junior engineers are churning out more code with less direct supervision, then the senior engineers will busier than trying to review and analyze code that they had less input into designing.

Security can be fun. I briefly worked with a guy who's job it was to be part of the "red team". He'd actually get jobs at the company they were assessing and infiltrate the offices as something low level (like a cleaner), then plug USB sticks in to machines to get in to the network :D
 
Last edited:


After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down


I'm not at all surprised TBH, but i would have more expected the reduction to be in long term software dev costs due to the added tech-debt of AI generated code.
 

I'm not at all surprised TBH, but i would have more expected the reduction to be in long term software dev costs due to the added tech-debt of AI generated code.
Not a surprising result given their test methodology, of just taking existing established repositories and simply asking developers to switch to using AI tools, most of whom have not used them before.

It takes time to establish pipelines and guardrails to make effective use of AI coding tools, and takes a significant amount of practical experience to learn how to make effective use of it.
 





I'm not at all surprised TBH, but i would have more expected the reduction to be in long term software dev costs due to the added tech-debt of AI generated code.
Many developers don’t fully grasp prompt semantics, as logically structured prompts, i.e. contextual logic etc... same way the majority of people dont really know how to use search engines sematics.
 
Last edited:
Many developers don’t fully grasp prompt semantics, as logically structured prompts, i.e. contextual logic etc... same way the majority of people dont really know how to use search engines sematics.

There's a side element - the turkey building the turkey processing machine.

One interesting point that seems to be occurring is that the news that people transforming companies into AI are being sidelined (ie redundant). It's not clear if that's an "AI doesn't give the ROI thus close the budget that pays for their position" or "AI works thus close the budget" (even if the AI doesn't the company needs to look like it's got a handle on AI).

The engineering cost of AI is starting to rise to the top and become visible. People are spending considerable amounts per month although less than a local onshore engineering team. The AI vendors are in a price war in a land grab but only Grok has taken the step of increasing prices to not leave money on the table in the replacement of the engineering teams.

AI companies can't loss lead forever and only volume pricing is going to present them a way to commodity sell AI. There's not a large enough differentiator between the models to warrant anything other than a commodity pricing. What's Grok's differentiator?

This has occurred time and time again - new software came to replace excel but in reality the cost to operate and change was too high, so companies elected to use excel. So is AI's pricing tied to a maximum before it becomes unviable and a human can be employed instead?

There will always be a cheap and and expensive option (with middling options along the way). AI's being specialised to differentiate to enable effective pricing.

It seems to me that economics may be the thorn in AI's side. And the time scale of long term datacenter investment ROI being too long to support agility in that space if it's a race to the bottom.

A time old tradition of economics ******* up the grand ideas of engineers.
 
AI companies can't loss lead forever and only volume pricing is going to present them a way to commodity sell AI. There's not a large enough differentiator between the models to warrant anything other than a commodity pricing. What's Grok's differentiator?
There are huge differences between the models.....and also within the models over time. Claude is the best of main models for coding, in practice, by a long shot, although they are constantly tinkering with it and it's regressed a bit in the last week or so.

I'm not sure what break even point for a service like Claude is. We have a Max subscription for every developer which is £90 a month each. Other than that we have piles of free credits for Gemini/OpenAI/
Bedrock so are constantly evaluating what the state of the art is.

Grok's USP seems to be that it's a mouthpiece for Musk. I don't know anyone working with LLMs that even considers Grok a competitor, it's a fundamentally compromised product.
 
I have a friend. He's a senior programmer at a VERY large global company.
he works on Tool paths etc.

He said to me:
AI is like a very cleaver intern. However, that intern has the memory of a goldfish.

Clearly it helps. But its useless on complex, not done before topics.
 
I have a friend. He's a senior programmer at a VERY large global company.
he works on Tool paths etc.

He said to me:
AI is like a very cleaver intern. However, that intern has the memory of a goldfish.

Clearly it helps. But its useless on complex, not done before topics.

It gets worse when you add coding standards etc, even with pre-processed vectors being injected directly to optimise the window use.
 
Last edited:
I have a friend. He's a senior programmer at a VERY large global company.
he works on Tool paths etc.

He said to me:
AI is like a very cleaver intern. However, that intern has the memory of a goldfish.

Clearly it helps. But its useless on complex, not done before topics.
Because today it may not be as complex as you like, but tomorrow it will be. This is what the average programmer does not seem to understand. The A.I evolution is completely different to all other evolutions in human technology history,

The A.I evolution is incredibly fast and we are now producing some serious self-taught A.I versions soon to be ready for cyber security meaning it will not need human involvement, the A.I are being developed are based on their environment. I would would not like to work in the cyber sec industry. The next industry will be the accounting industry and the one after that will be the legal industry. If you work in those get as much as you can now. The medical industry will be slightly protected from A.I when it comes to job losses.

It could collapse the world economy, I believe it would as everyone wants a competitive advantage. Governments are going to have a big headache with unemployment and poverty.
 
Last edited:
Because today it may not be as complex as you like, but tomorrow it will be. This is what the average programmer does not seem to understand. The A.I evolution is completely different to all other evolutions in human technology history,

The A.I evolution is incredibly fast and we are now producing some serious self-taught A.I versions soon to be ready for cyber security meaning it will not need human involvement, the A.I are being developed are based on their environment. I would would not like to work in the cyber sec industry. The next industry will be the accounting industry and the one after that will be the legal industry. If you work in those get as much as you can now. The medical industry will be slightly protected from A.I when it comes to job losses.

It could collapse the world economy, I believe it would as everyone wants a competitive advantage. Governments are going to have a big headache with unemployment and poverty.

Unfortunately - the current algorithms will need to make a considerable change.

Now unless you're going to pay an AI for every update to keep up with the AI changes for your business...

It's a pyramid scheme :D
 
"Where will big corporations get their money" is not the big problem you think it is, or what people are concerned about.

What most people are concerned about is what happens when big corporates *continue* to rake in vast amounts of money while increasingly automating tasks and reducing the demand for workers, leaving the state to provide a living for an increasingly large portion of the population.

Ultimately, that answer is either complete societal breakdown, or the large corporation siphoning off ever more money will have to start contributing more to the state.

To me, the reason to worry can be seen in today's politics. Big businesses making record profits. Stagnant wage growth. Unsustainable public finances.

AI is apparently going to take away many (most?) of our jobs within 5/10/20 years. And we are supposed to believe that when this happens, large companies are going to suddenly become philanthropic? Instead of large corporations and wealthy individuals threatening to leave the country whenever anyone mentions raising taxes upon them, they're going to pay enough to sustain the country, with a UBI on top?

Yeah, pull the other one :cry:
 
Last edited:
Is there even any point saving into a pension/hoping for a retirement now for those of us particularly exposed?
If there are vastly fewer desk jobs in 5 years and increased competition for physical jobs that's a huge net loss tax take and increase in welfare.


How can any of us possibly conceive what's going to happen?
It'll be like district 9. A few living in a utopia and everyone else with nothing.

Its impossible to plan life even a few years in the future now. I'm sure my career will be gone and thus I'm struggling to decide what to do about it.

I feel the UK is particularly exposed due to the large service sector, already high welfare bill, and privatised critical infrastructure.



Personally I'm living for the moment more now than I ever have as I literally have no idea what even a few years looks like.

Or AGI will come along and all bets are off! :D
 
Surely Govenments will at some point act to prevent mass unemployment and that kind of thing.

There has to be some kind of balance and order to things surely.
 
Surely Govenments will at some point act to prevent mass unemployment and that kind of thing.

What do you think governments are going to do?

If the UK limited the use of AI in business, it could save millions of jobs. And that would last months before UK businesses started folding due to a lack of competitiveness. And then we would lose those jobs anyway. The overall outcome would be worse as the businesses would be gone too

Limiting the use of AI is something which could only happen with honest, genuine global consensus. And that simply isn't going to happen.
 
Last edited:
Back
Top Bottom