The AI is taking our jerbs thread

The reality behind Builder.ai is probably more widespread in the tech world than is being portrayed: "AI" branding powered by underpaid engineers in developing countries. The bubble can't burst quickly enough.
 
I think I would genuinely find it archaic and painfully slow to go back to typing out code by hand now, after working with Windsurf for a month or so.

It takes time to learn how to get the most out of it, what rules to use, how to prompt it. You absolutely 100% need to understand what it is doing and review everything, same as you would with any coder, but having what is basically an entire programming team at your beck and call to just get **** done is incredible.

In the time it would take me to walk over to another coder's desk and start explaining to them what I want done, Cascade has already done it. And it's made sure it has full test coverage. And updated all the documentation.
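For anyone wondering what "rules" means here: Windsurf can pick up persistent instructions from a rules file in the repo (a `.windsurfrules` file at the project root in the versions I've used - check the current docs, as this has been evolving). The contents below are purely illustrative, not prescriptive:

```
# Example .windsurfrules - every rule here is illustrative, not prescriptive
- Write unit tests for any new or changed function; aim for full branch coverage.
- Follow the existing module layout; don't introduce new top-level packages.
- Update the relevant docs and docstrings with every change.
- Ask before adding any new third-party dependency.
```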
 
Not sure how you avoid a local minimum where Windsurf & co are trained on code that doesn't represent the most efficient implementations (although efficiency is often traded for supportability).
The strength of many organisations is your knowledge of both your own work and your colleagues', but Windsurf would not know the underlying logic (algorithms where you are working almost at the architecture level - L1 cache exploitation, say).
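To make that concrete, here's a minimal sketch (plain NumPy, nothing Windsurf-specific, timings are machine-dependent) of the kind of cache effect I mean. Both functions compute the same sum; the only difference is memory-access order:

```python
import time
import numpy as np

a = np.random.rand(5000, 5000)  # C-order: each row is contiguous in memory

def sum_by_rows(m):
    # Sequential access: each cache line fetched is fully used.
    return sum(m[i, :].sum() for i in range(m.shape[0]))

def sum_by_cols(m):
    # Strided access: jumps 5000 * 8 bytes between consecutive elements,
    # so most of every cache line fetched is wasted.
    return sum(m[:, j].sum() for j in range(m.shape[1]))

for f in (sum_by_rows, sum_by_cols):
    start = time.perf_counter()
    f(a)
    print(f"{f.__name__}: {time.perf_counter() - start:.3f}s")
```

A model trained mostly on code that never had to care about this will happily generate either version.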

TODAY... it seems all the gloves are off for AI copying of music - Sheeran wasn't done for copying Marvin Gaye's song, so AI or human plagiarism, who cares now?
I wonder if the AI will itself match songs it thinks are similar.
 
AI is absolutely a threat. We are only scratching the surface of what it can do.

I don't think anyone can really know what's going to happen. But I do think it will play out fairly quickly compared with other shifts in the past.
 
After this purchase by Meta of Scale (Scale AI), it won't be long before Zuckerberg kicks Scale's founder and CEO, Wang, out for demanding AI regulation and safety.
 

Yes and no on the idea that it will play out quickly. Play out quickly now - not necessarily (really depends on what you mean); play out quickly if/when some other new approach is found - quite possibly.

AI has been around for nearly 70 years now (the term was coined in 1956), the perceptron (foundation of neural nets) was created in 1958, and there have been two AI winters so far following hype cycles - in the 70s and in the late 80s/early 90s.

Machine Learning (ML) started to dominate other approaches to AI in the early 2000s, then (within ML) - deep neural nets started to dominate across various tasks (including vision, NLP etc.) in 2012, and then in late 2022 LLMs started to dominate NLP and of course are used more generally too now.

By early 2023 the likes of ChatGPT (then running GPT-3.5) were getting hype - it absolutely was mind-blowing; we were finally able to seemingly communicate with a model, with this early version of some alien intelligence or silicon god etc., and some people were very much thinking that the world was about to change, that this would all play out quickly (one AI lab CEO even thought humanity was going to be wiped out by the end of the year), what will the next 5 years look like etc. But now we're over 2.5 years on from the release of ChatGPT; we've seen GPT-4 show a big improvement and we've seen some neat tricks with reinforcement learning/chain of thought etc., but it's not clear that scaling LLMs will get us to some advanced "AGI" or rather "ASI" (some might argue that by previous standards AGI is already here thanks to these models - but if that's AGI then it's not really replacing humans, nor is there a big threat from these models* so much as them helping humans become more productive). Not that this path isn't very useful in itself, but it might well be that we need some other approaches.

*There's all the usual "what if the AI gives instructions on making drugs/explosives" or "what if the AI is racist" etc., but no threat that a current LLM can somehow take over the world, say.
 
Geoffrey Hinton?

Watch this. He is worried, and he's been working in AI for over 40 years.
 

What about him? I'm quite familiar with who he is, but you've linked to a 1hr30min video interview with the grifter from Dragon's Den - is there something specific you wanted to highlight there?

He's not been great with his previous predictions about jobs - he's underestimated what these jobs entail - and he famously got this wrong when he treated radiology as though it was *just* a classification task back in 2016:

(short video - 1m25s)
Quote from video above:
Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't yet looked down, so doesn't realize there's no ground underneath him. Um, people should stop training Radiologists now. It's just completely obvious that within 5 years um deep learning is going to do better than Radiologists because it's going to be able to get a lot more experience. Um, it might be 10 years, but we've got plenty of Radiologists already. I said this at a hospital, and it didn't go down too well.


And sure enough, what has actually happened is that radiologists were very much still around in 2021 and are still around now - machine learning has augmented their work. Imagine a world where people had taken him seriously in 2016: the great AI expert declared the field obsolete, so we stopped training new ones - well, we'd have a big shortage of radiologists right now. Being charitable, he did also say it might be 10 years, but I don't see anything to suggest that they'll be gone next year either.

I remember telling a radiologist about Hinton's comments back in 2016; she scoffed and explained some of the things she did. The fact is that the typical radiologist would be more than happy to have some AI assistance in picking things out on imaging. That an ML/DL algo can spot some tumour or whatever better than a human doesn't mean the human is replaced - rather, the human's work has simply been made more efficient and more accurate.
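To spell out what the *just a classification task* framing looks like in practice, here's a minimal sketch - the model, labels, and file path are all hypothetical, it's purely the shape of the reduction:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# The "radiology as classification" framing: image in, label probabilities out.
# Hypothetical setup: an untrained ResNet stands in for a fine-tuned model.
FINDINGS = ["no finding", "nodule", "effusion", "pneumonia"]

model = models.resnet18(weights=None, num_classes=len(FINDINGS))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    return dict(zip(FINDINGS, probs.tolist()))

# classify("chest_xray.png")  # hypothetical image path
```

Everything else a radiologist does - protocolling studies, correlating with patient history, intervening under imaging guidance, communicating findings - falls outside that function signature, which is rather the point.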
 
Lots to digest in that podcast and worth a listen, but some highlights are:

Eventually it will take away most desk-type thinking jobs, and without safety measures in place it could destroy humanity.

And the last bit is to tell our kids to become plumbers instead.
 

Key word there is "eventually" - the current models don't look like they'll be doing that, as per my previous post. We'll need other approaches IMO, though absolutely things could accelerate very rapidly if/when we have those - but that could be 5 years or 10 years or 20 years etc.

I'd take the plumber bit about as seriously as his radiologist prediction - robotics is improving rapidly too, and if we're getting a superintelligence that can carry out desk jobs, then at the point we have that, why not manual ones too? I think he's made the same mistake as he made re: radiologists, condensing their existence/utility down to solving a classification task when in reality you'd need a lot more to actually replace them (including manual work in the case of interventional radiologists). FWIW, the radiologist I talked to in 2016 about Hinton's comments explained that they do a heck of a lot more than just look at images - she can do vascular surgery better than a vascular surgeon.
 


This kind of story is becoming more and more common. While AI can increase productivity and reduce headcount in some roles, it doesn't follow that companies will simply use that to lower costs. Increased productivity can be used to build bigger, better, more complex systems, enter new markets, and take on projects that were deemed too expensive or infeasible. I have never worked on a project that came close to being sufficiently staffed; largely it is a case of prioritising a tiny percentage of the roadmap and taking on only the most critical tech debt etc.

If people become more efficient and small teams can take on projects that previously only a much larger team could handle, then the total scope of work increases dramatically and you could see a large increase in demand for engineers.

A lot will depend on competitive markets, but there is plenty of scope for companies to push boundaries and develop whole new projects and technologies.
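A toy version of that argument in numbers (every figure below is made up, purely to show the mechanism):

```python
# Toy model: higher productivity doesn't have to mean fewer engineers.
# Every number here is a hypothetical assumption, not data.

backlog_units = 1000        # work the business would do if it could afford to
funded_units = 200          # work it can currently justify paying for
output_per_engineer = 10    # units per engineer per year, pre-AI

engineers_before = funded_units / output_per_engineer   # 20 engineers

# Assume AI doubles output per engineer, and cheaper delivery makes
# previously infeasible projects viable, tripling the funded scope.
output_with_ai = output_per_engineer * 2
funded_with_ai = min(backlog_units, funded_units * 3)

engineers_after = funded_with_ai / output_with_ai        # 30 engineers

print(engineers_before, "->", engineers_after)
```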
 
Too many companies are getting over-hyped about AI, thinking it will save costs by cutting staff - which I have yet to see.

That lasts until those AI computing bills start adding up. It's like when companies were rushing to move everything into the cloud and prices started creeping up; now they are careful about what they run with the cloud providers, as those prices aren't coming down.
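A back-of-envelope version of how those bills creep up (all prices and volumes below are made-up assumptions, not any provider's actual rates):

```python
# Back-of-envelope AI API cost model. Every number below is a
# hypothetical assumption for illustration, not a real price list.

requests_per_day = 50_000            # e.g. an internal coding assistant
tokens_per_request = 4_000           # prompt + completion combined
price_per_million_tokens = 5.00      # USD, assumed blended rate

daily_tokens = requests_per_day * tokens_per_request
daily_cost = daily_tokens / 1_000_000 * price_per_million_tokens
annual_cost = daily_cost * 365

print(f"${daily_cost:,.0f}/day, ${annual_cost:,.0f}/year")
# -> $1,000/day, $365,000/year under these assumptions
```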
 
Another issue with encouraging people to pursue a hands-on trade is that the market will become oversaturated and there won't be enough work - or at least not enough that you can pick and choose the jobs that earn you the most money.

At the moment, tradesmen can often choose whatever work they want and charge increasingly high fees due to demand. This won't be the case if the number of people working in that trade doubles, or increases even further as more people are attracted to these types of jobs.

They're also physically demanding jobs. I don’t know anyone who has worked in a trade for years and doesn't have bad knees, a bad back, or ongoing joint problems from all the manual labour. It's often even worse if you're in a trade where you frequently use power equipment like drills.
 

Just like the tech industry now - oversaturation is part of the reason we have all these layoffs.
 
Yeah, Hinton's radiologist prediction is kind of ridiculous. I know a radiologist whose primary role is physically inserting medical devices under x-ray.

I don't know whether you can give ChatGPT a grabber arm, but I'm not sure it's ready for prime time.

A lot of this is going to be about low-hanging fruit - removing the need for skilled people to engage in drudgery, not eliminating skilled roles altogether.
 
The failure point for these LLMs is always the same - confidence in the output.

Sure, it could do a health scan, but if it finds an issue, it's not 100% certain, and if it doesn't find an issue, that doesn't mean there isn't one. Same with software: it might compile and run, but that doesn't mean it's efficient, secure, not relying on incredibly old code, etc.
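That point about the health scan is just base rates, and it's worth seeing the numbers. A quick sketch where the sensitivity, specificity, and prevalence are all made-up assumptions:

```python
# Why "issue found" isn't certain and "no issue found" isn't an all-clear.
# Plain Bayes with made-up numbers: a screening model that is genuinely
# good (95% sensitivity, 95% specificity) on a rare condition (1% prevalence).

sensitivity = 0.95   # P(flagged | condition present)  - hypothetical
specificity = 0.95   # P(clear | condition absent)     - hypothetical
prevalence = 0.01    # P(condition present)            - hypothetical

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Positive predictive value: P(condition | flagged)
ppv = sensitivity * prevalence / p_flagged

# Negative predictive value: P(no condition | clear)
npv = specificity * (1 - prevalence) / (1 - p_flagged)

print(f"PPV: {ppv:.1%}")            # ~16.1% - most flags are false alarms
print(f"Miss rate: {1 - npv:.3%}")  # ~0.053% - rare, but not zero
```

Even a genuinely good model flags mostly false alarms on a rare condition, and the all-clear still isn't a guarantee - which is why a human stays in the loop.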

It's a great assistant that can benefit people who are already skilled in the industry. There's a risk to new/junior people in those industries, but if LLMs are improving output and helping build and support more complex systems, then you're still going to need a workforce.
 