The AI is taking our jerbs thread

What I mean is that you still need a human in the loop. It also depends heavily on the languages you're using and the general complexity of the work you're doing.

LLMs don't run out of answers, and I've had more than one situation, across a variety of models, where it'll just loop between the same two or three broken solutions, or keep putting forward old code. It might work for your specific use case, but that doesn't mean it works generally. They're all trained on the same data, so if the data is rubbish then they all give you rubbish answers, just in a different format.
All software engineering is an exercise in taking big problems and breaking them down into small solutions. It’s no different now AI is doing much of the work.

The models are being trained all the time and are continually improving. Sitting back and declaring they aren't perfect while they're advancing at this rate isn't going to keep you in a job.
 
That's not my point; I use an LLM daily and it has benefits in terms of productivity, but equally creates the most insane garbage. Anyone relying on it is either building incredibly simple solutions, or is putting out incredibly broken solutions.

Using an LLM doesn't require less skill from the engineer; I'd actually argue it requires more skill to write the right prompts and to ensure the results align with the bigger picture. Lower-skilled/junior engineers will be the ones who struggle the most, due to lack of experience.
 
Anyone know of any good videos / documentaries on Artificial General Intelligence (that aren’t AI generated slop)?
 
When I see stuff like this I'm like, we are cooked....


Unitree stuff isn't as good as Boston Dynamics in terms of its movement, but it's the price point that is crazy low. 16k is not a lot for something so advanced.
 
Yes, bouncers are a thing of the past when an AI robot can knock you into the sixth dimension if it deems you too drunk or aggressive to enter.
 
7 weeks into my new job and I've spoken to my boss about being part of the new AI Team they want to build.

This will be interesting........
 


I agree. In general, when trying to develop something more complex, today's LLMs require more skill and more time to get working than simply developing it directly, and the output will often carry significant technical debt. It is also generally bad for developers to rely heavily on LLMs, as they will lose the important skills that keep them employed. Currently the co-pilot concept is still accurate, and I honestly don't see that changing for a long time.
 
I work in the planning industry and I feel it is unlikely to replace my job, as planning is full of shades of grey / applying a "planning balance" in decision making, which is based on numerous qualitative (and quantitative) factors. Planning nowadays is also about taking the decision makers "on a journey". Not to mention planning is (for better or worse) political rather than a 100% evidence-based decision.

AI is quite useful for identifying and analysing baseline data, so overall it could well be a net positive for the industry. We just have to keep an eye on the people joining the industry who are very familiar with AI, to make sure they understand what the AI is actually producing and doing, which is where us older people who never had it come in.
 
Not doomsday, but future uses and developments.
Lots of the model providers give updates, but they're usually full of marketing speak and bigging themselves up.
As for developments, it's tricky: "AI" encompasses so much, but most people are thinking of LLMs and image/video generation.

In the LLM space people are always racing to get good scores on the various industry benchmarks to one-up each other, either by making bigger models or by making quantized models with a smaller footprint that are still 'almost' as good. For the everyday user it's probably not something you care about.
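To make "quantized" a bit more concrete, here's a toy Python sketch of what storing weights in 8 bits means. All the numbers are made up and real quantization schemes are far more involved; the point is just the size-versus-accuracy trade.

```python
import random

# Toy illustration of post-training quantization: store weights as 8-bit
# ints instead of full-precision floats, shrinking the footprint at a
# small accuracy cost. (Illustrative only; real schemes are more involved.)

weights = [random.gauss(0, 1) for _ in range(1000)]  # pretend model weights

scale = max(abs(w) for w in weights) / 127           # map value range onto int8
quantized = [round(w / scale) for w in weights]      # each value now fits in one byte
dequantized = [q * scale for q in quantized]         # what inference would see

max_error = max(abs(w - d) for w, d in zip(weights, dequantized))
print(f"worst-case rounding error: {max_error:.4f}")  # bounded by scale / 2
```

So you pay a small, bounded rounding error per weight in exchange for roughly a quarter (8 vs 32 bits) of the memory, which is why the quantized model is "almost" as good.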

Agents are the big thing at the moment (that I see, at least): software that makes decisions and interacts with other things to achieve a goal. Think of asking it to get a good price on a decent set of Bluetooth earphones: it'll go off and work out what 'decent earphones' means, so maybe it looks at reviews, then it'll fetch prices from various places, then present you with the options and, potentially, buy them for you - what could go wrong?! :D
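The earphone example boils down to a loop of decide, gather, pick, act. A minimal sketch of that loop, where every function, shop and price is a stubbed-out placeholder rather than a real API:

```python
# Minimal sketch of an agent loop for the earphone-shopping example.
# search_reviews, get_price, the shops and the prices are all invented
# stubs standing in for real tool calls.

def search_reviews():
    """Stub for step 1: decide what 'decent earphones' means via reviews."""
    return ["Model A", "Model B"]

def get_price(model, shop):
    """Stub for step 2: look up a price at one retailer."""
    fake_prices = {("Model A", "Shop 1"): 79, ("Model A", "Shop 2"): 85,
                   ("Model B", "Shop 1"): 99, ("Model B", "Shop 2"): 89}
    return fake_prices[(model, shop)]

def run_agent(budget):
    shortlist = search_reviews()                       # decide
    offers = [(get_price(m, s), m, s)                  # gather
              for m in shortlist for s in ("Shop 1", "Shop 2")]
    price, model, shop = min(offers)                   # pick the cheapest offer
    if price <= budget:                                # act, or hand back to the human
        return f"Buy {model} at {shop} for {price}"
    return "No offer within budget; asking the user"

print(run_agent(80))  # -> Buy Model A at Shop 1 for 79
```

The "what could go wrong" part is exactly that last `if`: the interesting design question is which actions the agent is allowed to take on its own and which get handed back to you.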
 
A lot of people are using AI in forums these days and not fact-checking the results, so it's mostly been wrong or badly out of date where I've seen it quoted. Which is weird, because the information it needed for these queries has been publicly available for years.

We've started using agents and AI at work, though most of us don't have licences. The people who do have licences don't understand the data they are using. They think that if they throw enough mud (data) at a wall, it will work it all out. We'll see.
 
AI is only good at sorting information you provide it. If you just let it go online and do things by itself you mostly get nonsense, because the source is mostly nonsense.
 
Most people don't realise that LLM responses can be confidently incorrect. It's just piecing together and regurgitating what it's been trained on, and it's entirely possible to get different answers to the exact same question.
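One reason the same question gives different answers is that the reply is sampled from a probability distribution, not looked up. A toy sketch of that idea, where the candidate answers and their weights are entirely invented:

```python
import random

# Toy illustration of why the exact same question can yield different
# answers: an LLM samples from a probability distribution over outputs
# rather than returning a fixed result. Candidates and weights are made up.

candidates = ["Answer A", "Answer B", "Answer C"]
weights    = [0.6, 0.3, 0.1]   # pretend these came out of the model

rng = random.Random()  # no fixed seed, so each 'run of the prompt' can differ

def ask(prompt):
    # the prompt text is ignored here; we just sample according to the weights
    return rng.choices(candidates, weights=weights, k=1)[0]

replies = [ask("the exact same question") for _ in range(1000)]
print({c: replies.count(c) for c in candidates})  # same question, mixed answers
```

Note the confident tone is independent of correctness: the low-weight answer comes out phrased just as assertively as the likely one.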
 

I've seen that: one answer from two-year-old data, the next from year-old data. Both wrong, as it happened.
 
In my experience people don't check facts enough, whether they get them from people, the internet, or AI. Because it's often wrong, sometimes intentionally.
 