The AI is taking our jerbs thread

I've seen it give one answer based on two-year-old data and the next based on year-old data. Both wrong, as it happened.
I've had numerous instances of various models recommending deprecated code, or just making it up - this has improved though.

There was one case where they all kept coming up with different versions of the wrong answer - I eventually found the 7-year-old repo that they had been trained on.
 
That's by design to a degree.
Not really. LLMs simply provide a probability distribution over tokens given a sequence of tokens; a token is then selected stochastically based on the distribution and the temperature parameter. Then the whole sequence is fed back in to generate one more token.

So it can provide slightly different text for the same input due to the random sampling, but it should also be consistent. E.g. if you ask what the capital of France is, it should always be confident and correct with Paris because of the probability distribution, though it could phrase the answer slightly differently due to the randomness.


When you get wildly different outputs, it is often due to hallucinations.
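
To make that concrete, here's a rough sketch of the sampling step in Python. Just an illustration: the logits are made-up numbers, not output from any real model, and the function name is my own.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick one token id from a vector of model scores (logits)."""
    rng = rng or np.random.default_rng()
    # Lower temperature sharpens the distribution, higher flattens it.
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    # Softmax turns the scores into a probability distribution over tokens.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Stochastic choice according to that distribution.
    return rng.choice(len(probs), p=probs)

# Toy example: the model is very confident in token 0 (think "Paris"),
# so it comes out almost every time despite the random sampling.
logits = [9.0, 2.0, 1.0, 0.5]
picks = [sample_next_token(logits) for _ in range(1000)]
print(picks.count(0) / len(picks))  # ~1.0
```

In the real loop the chosen token gets appended to the sequence and the whole thing is fed back in for the next token, exactly as described above.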
 
So... what I said was true then: it gives different answers by design.

Asking it a general question like "where is nice to go on my holidays?" works differently to "what is the capital of France?" The latter is where the hallucinations (mostly) occur, or are at least (mostly) noticed, as there is a factual answer that can be right or wrong. It telling you to go to Norway instead of Morocco isn't as much of an issue. But this is all something of a distraction when, yes, they hallucinate way too much and stopping it is a big headache.
 
There was a thread on here many years ago where the general consensus at the time was that there would be no loss of jobs. People would retrain and move sectors/skills.
Will be interesting to see how/if that view changes over time.
 
I've read plenty on the Internet about people losing their jobs to AI, but I've never physically met anyone who has.

But I've met plenty of people who lost their jobs to cheaper countries.
 

The Platonic representation conjecture is that once models get to a certain size they essentially become the same model.

So that has ramifications for businesses that use AI: they will struggle to differentiate and create value if they only use the standard large models.
 
That's been happening for years. The whole M&S debacle is also a good lesson in getting what you pay for.

That's what I mean.

For all this talk about "AI taking our jobs", we really haven't seen that in action. But we do still see companies moving teams to cheaper countries rather than moving them towards AI to reduce costs.
 
They're definitely trying - corporations are all about maximising profits even if it's at the expense of quality, so it will be the same thing as with outsourcing; some companies will push for it and maybe they're lucky enough to avoid issues, but over the years there'll definitely be a degradation in quality.
 