The AI is taking our jerbs thread

Our ISP added AI as a blocked by default category and people lost their minds.

Not really surprised, that seems like a very bizarre and somewhat destructive decision to make.

The main concern I have with AI is its use in universities and education; there are tons upon tons of videos out there of AI answering student questions, writing reports and more.

The thing is that's mostly bookwork stuff - obvs it can help with programming assignments but that should be taken into account when setting them - really universities are going to have to make sure they place the bulk of the marks on end-of-term or end-of-year exams.

I'd agree, but the responses you will get on Reddit are usually along the lines of "OMG U need to prompt better dude"

They're partly right. It's not just how you prompt though, it's how you break down the problem too - people often get thrown by trying to one-shot big chunks of code in one go. There are also obvious limitations - if the knowledge cut-off is before a change to a library then you're going to get errors, and they're more familiar with some languages and frameworks than others too, or indeed some domains more than others.

People have been "vibe coding" 3D browser games fairly rapidly in recent days, for example - not using a game engine but just three.js, with adding multiplayer just being a case of prompting too, etc.

Also, boilerplate code is (and has been for a while) very straightforward for them - they're a good time saver in that respect alone.
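
Rough sketch of what I mean by breaking a task down rather than one-shotting it - purely illustrative, using the OpenAI Python SDK; the model name and the step list are made up for the example:

```python
# Illustrative only: decompose a feature into small prompts instead of one big one.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

steps = [
    "Write a Python dataclass for a 'Todo' item with id, title and done fields.",
    "Write a small in-memory repository class with add/list/complete methods for that dataclass.",
    "Write pytest unit tests for the repository class.",
]

context = ""  # feed each answer back in so later steps build on earlier code
for step in steps:
    reply = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": "You help write one small piece of a Python project at a time."},
            {"role": "user", "content": context + "\n\nNext step: " + step},
        ],
    )
    code = reply.choices[0].message.content
    context += "\n\n" + code
    print(f"--- {step}\n{code}")
```

Each step is small enough to actually review, which is also where it earns its keep on boilerplate.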
 
All these years they have been using Applicant Tracking Systems and applicants have been trying to find ways to get around them. Now we have AI to get around ATS, and recruiters and hiring managers don’t like it - they had the upper hand for so long :rolleyes:

The tables have now turned thanks to public AI platforms :)

Even at a base level they filter CVs on buzzwords; it's ridiculous for them to be upset that people are responding in kind to the very poor hiring processes we've seen for a decade plus.

The entire recruitment sector is a borderline scam of a self-insert industry at this point, depending on what you're hiring for; there are instances of "required experience" on platforms being longer than said platforms have existed.
 
You could always bring back the old-fashioned written exam.

Many of them still take vendor exams in addition to their programmes.

It is far more important for apprentices to be able to demonstrate skills and behaviours and to be able to discuss their work at end point assessment.

An exam would tell you very little about them, as the vast majority would pass it, unless you made it harder than it should be for the level they’re working at.
 
Not really surprised, that seems like a very bizarre and somewhat destructive decision to make.

For granularity's sake it makes perfect sense. Lots of institutions don't have proactive IT management. We have on-site filtering and just use their filter as a back-stop. The defaults are configurable.
 
For granularity's sake it makes perfect sense. Lots of institutions don't have proactive IT management. We have on-site filtering and just use their filter as a back-stop. The defaults are configurable.

I'm just commenting on the outcome being silly, whether the background config makes sense in theory and someone messed up by not catching it is another matter.

AI is here to stay so blocking academics(?) from using it would be a bit futile.

On the broader topic, I think it's going to make some people significantly more productive in addition to (hopefully) wiping out a bunch of white-collar jobs - someone mentioned earlier mortgage advisor being safe, but that seems like something that could almost be replaced now if regulators were to allow it, and that sort of thing certainly shouldn't need a human in future.
 
First week in my new job and I had a meeting about forming a new team dedicated only to AI - but to assist teams, not to replace them.

It will be very interesting to see what comes of this.
 
I think what's taken me a bit by surprise is how many 'normal' people are leveraging AI, like you hear about all these random people using AI for writing covering letters for jobs, for creating documents etc. I'd wrongly assumed it would be a lot more nerdy and used for specific tasks under direction from others rather than individuals just randomly using AI for their tasks. I've been a bit caught on the hop, as a nerd that hasn't embraced AI yet and 'normal people' have sort of jumped ahead of me in that regard.

One of the things I've been told I'm good at in the past is writing emails, and I see general prose creation no longer being a differentiator - basically a lot of people will be using AI to do 90% of it and then just tweak it a bit.

Basically what I foresee is that yes, in the long term AI will take some jobs. There will be a maturity curve to get to that point, however: AI being used to assist humans initially, then AI doing the lifting with humans supervising, before fully autonomous AI takes over. Different organisations will be at different stages of this lifecycle.

As for the AI in education piece, I guess I have mixed views. Part of me says being able to leverage AI effectively is a good skill to have. I mean, ultimately we want a workforce that is productive and it might be that someone who is good at piloting AI delivers more/better than a smarter academic that does everything manually. In some cases AI is just being used like a very fancy search engine that can draw inference from the findings. When I was at uni 25 years ago the fears were about the internet making plagiarism too easy or detracting from traditional research skills. When I was at secondary school, research meant going down the library to find stuff and that was just inefficient.
 
I think what's taken me a bit by surprise is how many 'normal' people are leveraging AI, like you hear about all these random people using AI for writing covering letters for jobs, for creating documents etc. I'd wrongly assumed it would be a lot more nerdy and used for specific tasks under direction from others rather than individuals just randomly using AI for their tasks. I've been a bit caught on the hop, as a nerd that hasn't embraced AI yet and 'normal people' have sort of jumped ahead of me in that regard.

GPT2 was indeed very much in line with your thinking there - released in 2019, weights made available publicly, it was there for nerds to experiment with etc.

Amusingly there were even fears about releasing GPT2, people worried it could be dangerous.

GPT3 was initially a bit nerdy too - for a short while it was only available via API, but as soon as they made it accessible in an easy way via the web in late 2022, as "ChatGPT", it just blew up: it was a more powerful model than anything seen until then and it could be easily accessed.

I guess as long as they make advanced tech simple to use then it just becomes normal - we've been using AI for decades in some form or another after all - my little nephews love shouting at Alexa to request songs to play, people have been telling others to "google it" for years and even simple kitchen appliances use it - Japanese rice cookers have used AI (fuzzy logic) for many years now to ensure you get perfect rice every time.

As for the AI in education piece, I guess I have mixed views. Part of me says being able to leverage AI effectively is a good skill to have. I mean, ultimately we want a workforce that is productive and it might be that someone who is good at piloting AI delivers more/better than a smarter academic that does everything manually. In some cases AI is just being used like a very fancy search engine that can draw inference from the findings. When I was at uni 25 years ago the fears were about the internet making plagiarism too easy or detracting from traditional research skills. When I was at secondary school, research meant going down the library to find stuff and that was just inefficient.

Yup indeed - I guess similar stuff re: Wikipedia etc.

Are the worries about too much reliance on AI more of the same mentality as "you're not going to walk around with a calculator in your pocket when you're older"?

Of course, you don't typically need a calculator when it comes to much of advanced mathematics (aside from calculating numerical results in some applied or stats exams).

I'm sure UK universities can just carry on placing emphasis on examinations and setting coursework appropriately while taking into account that LLMs might be used for assistance, US universities (which are typically coursework-heavy) might need to make more adjustments.
 
We’ve recently signed up to this at work and we’re encouraged to find good uses for it.

I’ve tried 2 things. The first it just couldn’t do anything meaningful with - it was processing a large amount of data and giving a result.

The second is that the field I work in requires programming constrained random test environments, which basically means writing a load of constraints so that when you randomize, all those constraints will be met. I wrote mine manually, as I’ve always done, put them into ChatGPT and asked it what they were doing. It worked it out (this part it is good at - it understands code and will explain what it does well), pointed out a flaw in my constraints, and wrote its own. Except mine weren’t flawed and the ones it produced were flawed.
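
For anyone not in this field, "constraints plus randomize" roughly means you describe the legal ranges and relationships, and the tool picks random values that satisfy all of them. A toy sketch of the idea in Python - not my actual environment or language, and the packet fields/rules are made up purely for illustration:

```python
# Toy constrained-random generator: draw random values until every constraint
# holds. Real verification tools solve the constraints properly rather than
# retrying like this, but the idea is the same.
import random

def randomize(ranges, constraints, max_tries=10_000):
    """Pick a random value per field from `ranges` such that all constraints pass."""
    for _ in range(max_tries):
        candidate = {name: random.randint(lo, hi) for name, (lo, hi) in ranges.items()}
        if all(check(candidate) for check in constraints):
            return candidate
    raise RuntimeError("no solution found - constraints may be contradictory")

# Made-up example "packet" fields and rules:
ranges = {"length": (1, 100), "header": (4, 20), "payload": (0, 96)}
constraints = [
    lambda p: p["header"] + p["payload"] == p["length"],  # parts must sum to the total
    lambda p: p["payload"] % 4 == 0,                      # payload aligned to 4 bytes
]

print(randomize(ranges, constraints))
```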

In its current state I don’t think it’s that good tbh. It will likely get better.

The line at work is it will make us better developers. But imo if you’re needing to use ChatGPT to write code for you, then you’re probably not good enough for your job anyway. And I don’t see how it would make you a better developer - instead of learning to write good code, you just learn how to prompt an AI assistant.
 
I think part of the issue is that firms are buying cheap (relatively) general purpose AIs and then being disappointed that it's not giving them results on really specific use cases.

It's entirely possible to build LLMs on specific topics (e.g. law) or for specific use cases (e.g. customer service chat), and you get much better quality outputs - but that requires more effort and cost on the part of the company.

I'm convinced that a lot of jobs (or at least large chunks of time within certain jobs) will be automated with the right AI model, but right now it hasn't bedded in well enough for many companies to see that.
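
Even the "cheap" version of doing that properly is a project in itself. As a rough illustration of the hosted-API route, kicking off a fine-tune on your own domain data looks something like this with the OpenAI Python SDK - the file name and model name are just placeholders, and curating the training data is the real work:

```python
# Rough sketch: fine-tune a hosted model on domain examples (e.g. past customer
# service transcripts). Assumes the OpenAI Python SDK and an API key in the
# environment; "support_chats.jsonl" and the model name are placeholders.
from openai import OpenAI

client = OpenAI()

# Each line of the JSONL is one training example, e.g.
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("support_chats.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # example of a fine-tunable base model
)
print(job.id, job.status)
```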
 
We’ve recently signed up to this at work and we’re encouraged to find good uses for it.

I’ve tried 2 things. The first it just couldn’t do anything meaningful with - it was processing a large amount of data and giving a result.

The second is that the field I work in requires programming constrained random test environments, which basically means writing a load of constraints so that when you randomize, all those constraints will be met. I wrote mine manually, as I’ve always done, put them into ChatGPT and asked it what they were doing. It worked it out (this part it is good at - it understands code and will explain what it does well), pointed out a flaw in my constraints, and wrote its own. Except mine weren’t flawed and the ones it produced were flawed.

In its current state I don’t think it’s that good tbh. It will likely get better.

The line at work is it will make us better developers. But imo if you’re needing to use ChatGPT to write code for you, then you’re probably not good enough for your job anyway. And I don’t see how it would make you a better developer - instead of learning to write good code, you just learn how to prompt an AI assistant.


I saw a quote recently that seems very true; it went something like:
I used to spend 8 hours a day coding and 2 hours debugging. Now I spend 2 hours a day prompting and 8 hours a day debugging.
 
I think part of the issue is that firms are buying cheap (relatively) general purpose AIs and then being disappointed that it's not giving them results on really specific use cases.

It's entirely possible to build LLMs on specific topics (e.g. law) or for specific use cases (e.g. customer service chat), and you get much better quality outputs - but that requires more effort and cost on the part of the company.

I'm convinced that a lot of jobs (or at least large chunks of time within certain jobs) will be automated with the right AI model, but right now it hasn't bedded in well enough for many companies to see that.


For sure, but you shift the problem from having good software engineers doing the coding to having more specialized ML engineers and MLOps engineers fine-tuning LLMs, adding and maintaining RAG and KAG etc., then realizing how complex a true production-quality RAG system is once you add guardrails, hallucination detection and prevention, model monitoring etc.
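
The core retrieval loop itself is simple enough - toy sketch below, where embed() is a deliberately dumb stand-in and generate() is a placeholder for whatever model you'd actually call - it's everything around it (chunking, evals, guardrails, monitoring) that turns it into a proper engineering effort:

```python
# Toy RAG loop: embed the query, pull the closest chunks, stuff them into the
# prompt. The embedding here is a hashed bag-of-words just so the example runs;
# a real system would call an embedding model and an LLM instead.
import math

def embed(text: str) -> list[float]:
    vec = [0.0] * 64
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def generate(prompt: str) -> str:
    return "[LLM answer would go here]\n" + prompt  # placeholder for the model call

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
    index = [(chunk, embed(chunk)) for chunk in chunks]  # built offline in reality
    q_vec = embed(question)
    best = sorted(index, key=lambda c: cosine(q_vec, c[1]), reverse=True)[:top_k]
    context = "\n---\n".join(chunk for chunk, _ in best)
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

docs = ["Refunds are processed within 5 working days.",
        "Premium support is available 24/7 by phone."]
print(answer("How long do refunds take?", docs))
```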

That may be OK for some very large software projects with many engineers doing basic glue code/KTLO bug fixes etc., but the economics might not work out, and it's outright prohibitive for smaller projects.

There is also the knowledge and skill risk. Sure, with a suitably complex genAI coder much basic day-to-day code could be semi-automated and owned by less skilled engineers. But without the knowledge and skills of experts there is a big risk when things inevitably go wrong.
 
For sure, but you shift the problem from having good software engineers doing the coding to having more specialized ML engineers and MLOps engineers fine-tuning LLMs, adding and maintaining RAG and KAG etc., then realizing how complex a true production-quality RAG system is once you add guardrails, hallucination detection and prevention, model monitoring etc.

That may be OK for some very large software projects with many engineers doing basic glue code/KTLO bug fixes etc., but the economics might not work out, and it's outright prohibitive for smaller projects.

There is also the knowledge and skill risk. Sure, with a suitably complex genAI coder much basic day-to-day code could be semi-automated and owned by less skilled engineers. But without the knowledge and skills of experts there is a big risk when things inevitably go wrong.
Yeah, my assumption is that we'll move to more specialised products - so rather than buying in Copilot or ChatGPT Enterprise and calling it job done, you'll see firms buying in LLMs specialising in their coding languages or in specific services. I've already seen a couple of examples of people buying in AI-powered chatbots that are trained on customer service. And obviously then it has been built by specialist ML engineers, but you as the end user just have to worry about how you design the user experience and at what point you get a human to step in.
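
That "at what point a human steps in" part is mostly plumbing you design around whatever the model gives you. A toy sketch of the escalation logic - the confidence score, topics and thresholds are all made up for illustration, real products expose different signals:

```python
# Toy escalation logic for a customer-service bot: answer automatically when the
# model is confident and the topic is safe, otherwise hand off to a human agent.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0 - 1.0, however the vendor surfaces it
    topic: str

ALWAYS_HUMAN = {"complaint", "cancellation", "legal"}   # example policy
MIN_CONFIDENCE = 0.75                                   # example threshold

def route(reply: BotReply) -> str:
    """Decide whether to send the bot's answer or escalate to a person."""
    if reply.topic in ALWAYS_HUMAN or reply.confidence < MIN_CONFIDENCE:
        return "ESCALATE: queue for a human agent"
    return f"SEND: {reply.text}"

print(route(BotReply("Your order ships tomorrow.", 0.92, "order_status")))
print(route(BotReply("I think you may be owed a refund...", 0.55, "complaint")))
```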
 
Had a new one today: a Director gave me a 16-page document for an image processing application and asked me to build it. The document was literally just his transcript from ChatGPT outlining what Python scripts to use, hardware requirements and a basic workflow. Needless to say it didn't make much sense.

Edit: should also point out I'm not a Developer, I'm a Systems Engineer :p
 
I tried AI with a fairly simple question.

It gave me back 5 points. Of the 5 points, one was legally incorrect, 2 were wrong and one was made up. Only one answer was correct.

Now that's OK because I knew the correct answer, but if someone didn't, they could think it was all correct. Consider how much fake stuff people don't fact-check - COVID, Trump etc. - obviously people will assume it's correct and spread this misinformation.

Another issue I have is with people like the Director in the previous post, who are throwing things into AI and can't validate the responses because they don't know the data themselves. They just assume it's right when it isn't.
 
It seems super limited for most use cases in technical IT roles but we are embracing it slowly.
My son - who is bright enough and could do A levels - wants to go down the trade route of carpenter or electrician. I've encouraged and supported that because I feel that those types of roles will be the last to be affected. We would literally be at the point where we have walking robots version 1001 before physical jobs like that are taken over imo.
Same with barbers/hairdressers. Nobody is trusting a robotic arm with blades.
 