The AI is taking our jerbs thread


It will replace jobs, just not yet. Those economists fail to understand that as each month passes the AI becomes more precise and more knowledgeable, but like everything it takes time. I believe there's a 5 to 10 year window, then you'll see fewer jobs in areas that require knowledge and information.
 
Well, I've been spending this week re-engineering a React application that the CEO of the startup I joined managed to build entirely using Claude.

It's definitely super powerful in that he was able to pull something together without any coding knowledge, sufficient to put a pitch together and get funding. Now it needs a software engineer to tidy it up. I have deleted a LOT of stuff :p

On the other hand, I've not touched JavaScript or UIs for over a decade, but using Gemini 2.5 Pro and Continue I've been able to get up to speed and fully productive in a couple of days, and part of that was building a Linux VM to work in. Genuine timesaver.

I'm going to build out the context docs so the LLM adheres to the patterns I've been implementing. I'm pretty confident I can make it so the CEO can adjust things without it totally ****ing the bed in future.
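For anyone wondering what I mean by context docs: it's basically a markdown file the tool reads before every request, stating the project conventions so the LLM stops inventing its own. Something along these lines (the file paths and rules here are illustrative, not from the actual project):

```markdown
# Project conventions (read before making any changes)

- All API calls go through `src/api/client.ts`; never call fetch directly.
- Components live in `src/components/<Feature>/`, one component per file.
- Shared state uses React context in `src/state/`; do not introduce Redux.
- Never edit anything under `src/generated/`.
```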
 
Well, I've been spending this week re-engineering a React application that the CEO of the startup I joined managed to build entirely using Claude.

It's definitely super powerful in that he was able to pull something together without any coding knowledge, sufficient to put a pitch together and get funding. Now it needs a software engineer to tidy it up. I have deleted a LOT of stuff :p

On the other hand, I've not touched JavaScript or UIs for over a decade, but using Gemini 2.5 Pro and Continue I've been able to get up to speed and fully productive in a couple of days, and part of that was building a Linux VM to work in. Genuine timesaver.

I'm going to build out the context docs so the LLM adheres to the patterns I've been implementing. I'm pretty confident I can make it so the CEO can adjust things without it totally ****ing the bed in future.


CEO...

 
AI slop for bug bounty reports now.


Example - https://hackerone.com/reports/3125832
 
Been making extensive use of Cascade in Windsurf the past few days. Starting to get a feel of what it's good and not so good for, trying out different models as well.

One thing it is absolutely brilliant for is refactoring. This application I'm working on had everything just smooshed into one big config file with a proper rat's nest of includes all over the place. I just told it how I wanted the code split up and where I wanted the files and folders putting, and it smashed it out in a few seconds: making the new files, moving the code, updating references.

Magic I tell thee. Saved me hours.
 
Been making extensive use of Cascade in Windsurf the past few days. Starting to get a feel of what it's good and not so good for, trying out different models as well.

One thing it is absolutely brilliant for is refactoring. This application I'm working on had everything just smooshed into one big config file with a proper rat's nest of includes all over the place. I just told it how I wanted the code split up and where I wanted the files and folders putting, and it smashed it out in a few seconds: making the new files, moving the code, updating references.

Magic I tell thee. Saved me hours.
Both Windsurf and Cursor are mostly just using Anthropic's Claude models under the covers, so we have Anthropic to thank for making such a powerful LLM. :D

I personally use Cursor Pro, although some folks in my company do use and like Windsurf. I've read that Cursor and Windsurf are very similar tools, although Cursor tends to come out with new features a bit faster than Windsurf. I haven't dug into using Cursor Rules yet, but that's next on my list.
 
Both Windsurf and Cursor are mostly just using Anthropic's Claude models under the covers, so we have Anthropic to thank for making such a powerful LLM. :D

I personally use Cursor Pro, although some folks in my company do use and like Windsurf. I've read that Cursor and Windsurf are very similar tools, although Cursor tends to come out with new features a bit faster than Windsurf. I haven't dug into using Cursor Rules yet, but that's next on my list.
I'm switching between models, using Gemini 2.5 Pro at the moment. I was actually using the Cascade Base free version for a while and it's not bad at all.
 
Been using Windsurf on this new AI startup stuff for a week or so now. Already I don't think I could go back to coding without it. Installed the Windsurf plugin on JetBrains Rider to give it a whirl on my Unreal projects yesterday evening.

I set Claude 3.7 (thinking) to implementing a feature I'd banged my head against for a few days previously.

It smashed it out in 5 minutes with a couple of prompts. I then got it to bosh out a load of refactoring and tidying, documentation etc I'd been putting off.

If this startup goes **** up I think I'm just going to start publishing my own games, it was already feasible to be a one-man developer if you slogged away at it, but with smart use of AI tools a solo developer will be able to smash stuff out really quick now.
 
@mid_gen I just tried Windsurf and I found that Windsurf's performance was much slower and the experience more laborious than I have with Cursor.

For me, Cursor's significant speed advantage and "terseness" in its output makes it a big winner over Windsurf. I really wish that Cursor would build a JetBrains IDE extension, but it's obvious from their docs that they have no desire to do that, so I'll have to keep switching back and forth between Cursor and JetBrains as needed.
 
@mid_gen I just tried Windsurf and I found that Windsurf's performance was much slower and the experience more laborious than I have with Cursor.

For me, Cursor's significant speed advantage and "terseness" in its output makes it a big winner over Windsurf. I really wish that Cursor would build a JetBrains IDE extension, but it's obvious from their docs that they have no desire to do that, so I'll have to keep switching back and forth between Cursor and JetBrains as needed.
Haven't tried Cursor, but it's going to depend a bit on what model you want to use. I'm using GPT-4.1 at the mo for most tasks as it's discounted.

If you want it to be more terse, then just give it a memory telling it to be more terse!

can you add a memory please, to make your replies less verbose. I don't need the justifications for your changes and offer to help more, I will take that as implied.

Memory added: replies will now be concise and direct, with no extra justifications or offers of further help unless you ask.

Auto-generated memory was updated
Created "USER prefers concise, non-verbose replies" memory.
 
Haven't tried Cursor, but it's going to depend a bit on what model you want to use. I'm using GPT-4.1 at the mo for most tasks as it's discounted.

If you want it to be more terse, then just give it a memory telling it to be more terse!
You can tell Cursor to use GPT 4.1, although I personally find Claude 3.7 to be better than the GPT models for what I want to achieve.

Also, it isn't that the Windsurf agent is verbose in its explanations, as I already add rules to make it less verbose. It's that the agent UI is overly "chatty" in its reporting of what is going on, and it's overly big and clunky (big UI blocks telling me what files it's looking at, etc). Cursor has struck a really good balance of having a diminutive UI that still provides useful feedback.
 
AI is going to be taking the job of a construction planning consultant I’m using very soon.

I’ve been feeding old engineering and architectural drawings into GPT and asking it to come up with timelines, build methodologies, sequencing, staging, etc, and comparing it with the old construction programmes I used in the past. It’s remarkable how consistent and accurate it is. Equally, it’s also remarkable how consistently incorrect my consultant is. In fact, for an upcoming project I’m purely going to be using GPT as I’m fairly confident about the actual timelines and GPT is coming up with the same thing, within spitting distance.

I’m expecting big things from AI in construction staging, planning, measuring, methodologies, etc.
 
AI is going to be taking the job of a construction planning consultant I’m using very soon.

I’ve been feeding old engineering and architectural drawings into GPT and asking it to come up with timelines, build methodologies, sequencing, staging, etc, and comparing it with the old construction programmes I used in the past. It’s remarkable how consistent and accurate it is. Equally, it’s also remarkable how consistently incorrect my consultant is. In fact, for an upcoming project I’m purely going to be using GPT as I’m fairly confident about the actual timelines and GPT is coming up with the same thing, within spitting distance.

I’m expecting big things from AI in construction staging, planning, measuring, methodologies, etc.

It's good if you are confident in the results it's giving you. But for people who are not, and who will take whatever ChatGPT gives them even when it may not be correct, it's a different story.
 
Someone made a good point this morning which is interesting:

Job applications are becoming not what you've done, or who you've worked for, but rather - what can you do with AI for the job role.

Now I would temper that - after all people operate on empathy you will gel with the team and be productive. Furthermore, every company has legacy and current digital estates. However it has made me think. Previously I'd relate to the company and its goals, with the experience I have and possibly some thinking (note I've been taken for a ride providing steerage before).

Thoughts?
 
You can't take your learned AI system from a previous employer with you when you change jobs, so innate capability still needs to be demonstrated.

---------

AI's word is gospel -> the redundant intelligence teams are miffed. Channel 4 had an interview last week suggesting the error tolerance in their AI was relaxed.

The report states that as many as 37,000 Palestinians were designated as suspected militants who were selected as potential targets. Lavender’s kill lists were prepared in advance of the invasion, launched in response to the Hamas attack of October 7, 2023, which left about 1,200 dead and about 250 hostages taken from Israel. A related AI program, which tracked the movements of individuals on the Lavender list, was called “Where’s Daddy?” Sources for the +972 Magazine report said that initially, there was “no requirement to thoroughly check why the machine made those choices (of targets) or to examine the raw intelligence data on which they were based.” The officials in charge, these sources said, acted as a “rubber stamp” for the machine’s decisions before authorizing a bombing. One intelligence officer who spoke to +972 admitted as much: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.”

It was already known that the Lavender program made errors in 10 percent of the cases, meaning that a fraction of the individuals selected as targets might have had no connection with Hamas or any other militant group. The strikes generally occurred at night while the targeted individuals were more likely to be at home, which posed a risk of killing or wounding their families as well.

A score was created for each individual, ranging from 1 to 100, based on how closely he was linked to the armed wing of Hamas or Islamic Jihad. Those with a high score were killed along with their families and neighbors despite the fact that officers reportedly did little to verify the potential targets identified by Lavender, citing “efficiency” reasons. “This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that his colleagues had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

The IDF had previously used another AI system called “The Gospel,” which was described in a previous investigation by the magazine, as well as in the Israeli military’s own publications, to target buildings and structures suspected of harboring militants. “The Gospel” draws on millions of items of data, producing target lists more than 50 times faster than a team of human intelligence officers ever could. It was used to strike 100 targets a day in the first two months of the Gaza fighting, roughly five times more than in a similar conflict there a decade ago. Those structures of political or military significance for Hamas are known as “power targets.”

 
It's good if you are confident in the results it's giving you. But for people who are not, and who will take whatever ChatGPT gives them even when it may not be correct, it's a different story.
Confidence in the results from AI is simply a matter of time. By their nature, they will improve the more they are used.

Anyone betting that AI won’t be good enough to do their job is going to have a rude awakening.
 
Confidence in the results from AI is simply a matter of time. By their nature, they will improve the more they are used.

Anyone betting that AI won’t be good enough to do their job is going to have a rude awakening.
They also inherently make up information, so while it will improve, without a confidence score/filter, there will always be doubt.

Obviously won’t matter for the large majority of people as they won’t bother to second guess it, but still matters in terms of industry applications.
 
They also inherently make up information, so while it will improve, without a confidence score/filter, there will always be doubt.

Obviously won’t matter for the large majority of people as they won’t bother to second guess it, but still matters in terms of industry applications.
You don't do away with the usual processes and guardrails you have in place to prevent mistakes; just as with any critical work, you don't rely on one single decision-maker.

Except now, the checking can be done by other agentic AI systems. I'm already working like this, using AI systems to create code, which is then pushed to other agentic systems to review it and send it back to the first AI for revision. Breaking systems down into networks of agents with very narrow, focused domains and strong guardrails, and generally knowing how to use LLMs and build a proper context for them to work in: this is the future. I've been a software engineering manager for a long time, and frankly, I just don't need a team of people to manage any more. I have AI agents do the work, and I oversee it.

Just tapping in a one line command to ChatGPT is so far away from the cutting edge of how these systems are being built and deployed right now.
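To make the generator/reviewer setup concrete, here's a minimal sketch of the loop. The model calls are stubbed out with placeholder functions; in a real system each would be an API call to a separate LLM with its own narrow prompt and guardrails:

```python
# Sketch of a generator/reviewer agent loop. Both "models" are stubs here;
# real implementations would call two independently prompted LLMs.

MAX_ROUNDS = 3  # hard guardrail: never loop forever

def generator(task, feedback=None):
    # Stub for the code-writing agent; incorporates reviewer feedback if any.
    return f"code for {task!r}" + (" (revised)" if feedback else "")

def reviewer(code):
    # Stub for the reviewing agent. Returns (approved, feedback).
    if "(revised)" in code:
        return True, None
    return False, "tighten error handling"

def run_pipeline(task):
    feedback = None
    for round_no in range(1, MAX_ROUNDS + 1):
        code = generator(task, feedback)
        approved, feedback = reviewer(code)
        if approved:
            return code, round_no
    # If the agents can't converge, escalate to the human overseer.
    raise RuntimeError("review loop did not converge; escalate to a human")

result, rounds = run_pipeline("parse config file")
print(result, rounds)
```

The bounded round count and the escalation path are the important bits: the human stays in the loop as the final backstop rather than as the line-by-line reviewer.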
 
You don't do away with the usual processes and guardrails you have in place to prevent mistakes; just as with any critical work, you don't rely on one single decision-maker.

Except now, the checking can be done by other agentic AI systems. I'm already working like this, using AI systems to create code, which is then pushed to other agentic systems to review it and send it back to the first AI for revision. Breaking systems down into networks of agents with very narrow, focused domains and strong guardrails, and generally knowing how to use LLMs and build a proper context for them to work in: this is the future. I've been a software engineering manager for a long time, and frankly, I just don't need a team of people to manage any more. I have AI agents do the work, and I oversee it.

Just tapping in a one line command to ChatGPT is so far away from the cutting edge of how these systems are being built and deployed right now.
What I mean is that you still require human interaction. It also depends heavily on the languages you're using and the general complexity of the work you're doing.

LLMs don't run out of answers, and I've had more than one situation, across a variety of models, where it'll just loop between the same 2 or 3 broken solutions, or keep putting forward old code. It might work for your specific use case, but that doesn't mean it works generally. They're all trained on the same data, so if the data is rubbish then they all give you rubbish answers, just in a different format.
 