AI is absolutely a threat. We are only tickling the surface of what it can do.
I don't think anyone can really know what's going to happen. But I do think it will play out fairly quickly vs other shifts in the past.
Hope you are right, but at the speed at which AI is going, it can't be long, surely?
Still a long way to go before good software engineers should be worried.
AI Models from Google, OpenAI, Anthropic Solve 0% of ‘Hard’ Coding Problems | AIM
Despite claims of AI models surpassing elite humans, 'a significant gap still remains, particularly in areas demanding novel insights.' (analyticsindiamag.com)
Geoffrey Hinton?
Yes and no. Play out quickly now - not necessarily (it really depends on what you mean); play out quickly if/when some other new approach is found - quite possibly.
AI has been around for nearly 70 years now (the term was coined in 1956), the perceptron (foundation of neural nets) was created in 1958, and there have been two AI winters so far following hype cycles - in the 70s and in the late 80s/early 90s.
Machine Learning (ML) started to dominate other approaches to AI in the early 2000s, then (within ML) - deep neural nets started to dominate across various tasks (including vision, NLP etc.) in 2012, and then in late 2022 LLMs started to dominate NLP and of course are used more generally too now.
By early 2023 the likes of ChatGPT (GPT-3.5) were getting hype - it absolutely was mind-blowing: we were finally able to seemingly communicate with a model, with this early version of some alien intelligence or silicon god etc. Some people were very much thinking that the world was about to change, that this would all play out quickly (one AI lab CEO even thought humanity was going to be wiped out by the end of the year), wondering what the next 5 years would look like etc. But now we're over 2.5 years on from the release of ChatGPT, we've seen GPT-4 show a big improvement and we've seen some neat tricks with reinforcement learning/chain of thought etc., but it's not clear that scaling LLMs will get us to some advanced "AGI" or rather "ASI" (some might argue that by previous standards AGI is already here thanks to these models - but if that's AGI then it's not really replacing humans, nor is there a big threat from these models*, so much as helping humans become more productive). Not that this path isn't very useful in itself, but it might well be that we need some other approaches.
*There's all the usual "what if the AI gives instructions on making drugs/explosives" or "what if the AI is racist" etc. - but no threat that a current LLM can somehow take over the world, say.
Geoffrey Hinton?
Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't yet looked down, so doesn't realize there's no ground underneath him. Um, people should stop training Radiologists now. It's just completely obvious that within 5 years um deep learning is going to do better than Radiologists because it's going to be able to get a lot more experience. Um, it might be 10 years, but we've got plenty of Radiologists already. I said this at a hospital, and it didn't go down too well.
Lots to digest in that podcast and worth a listen, but some highlights are.
What about him? I'm quite familiar with who he is, but you've linked to a 1hr30min video interview with the grifter from Dragon's Den - is there something specific you wanted to highlight there?
He's not been too good with his previous predictions about jobs - he's underestimated what those jobs entail - he famously got this wrong already when he treated radiology as though it were *just* a classification task back in 2016:
(short video - 1m25s)
Quote from video above:
And sure enough, what has actually happened is that radiologists were very much still around in 2021 and are still around now - machine learning has augmented their work. Imagine a world where people had taken him seriously in 2016 - the great AI expert declared the field obsolete, so we stopped training new ones - well, we'd have a big shortage of radiologists right now. Being charitable, he did also say it might be 10 years, but I don't see anything to suggest that they'll be gone next year either.
I remember telling a radiologist about Hinton's comments back in 2016; she scoffed and explained some of the things she did. The fact is that the typical radiologist would be more than happy to have some AI assistance in picking things out on imaging. That an ML/DL algo can spot some tumour or whatever better than a human doesn't mean the human is replaced - rather, the human's work has simply been made more efficient and more accurate.
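To make the "augment, don't replace" point concrete, here's a toy sketch (my own illustration, not any real radiology system): the model's score only prioritises and flags cases for the human; every case still gets human review, and the model never makes the final call. The function name, field names and threshold are all illustrative assumptions.

```python
# Toy "AI assist" triage: the model score reorders the worklist and attaches
# a 'flagged' hint, but a human reviews every case and makes every decision.

def triage(cases, flag_threshold=0.5):
    """Sort cases so the human reviews the highest model scores first.

    Nothing is auto-diagnosed: all cases stay in the worklist, and the
    score only changes the ordering plus a 'flagged' hint for the reader.
    """
    ranked = sorted(cases, key=lambda c: c["model_score"], reverse=True)
    return [
        {**c, "flagged": c["model_score"] >= flag_threshold}
        for c in ranked
    ]

scans = [
    {"id": "A", "model_score": 0.12},
    {"id": "B", "model_score": 0.91},
    {"id": "C", "model_score": 0.55},
]
worklist = triage(scans)
# The highest-scoring scan is read first, but all three are still read.
```

The design choice is the whole point: the model changes *ordering* and *attention*, not outcomes - which is roughly what has actually happened in radiology so far.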
Eventually it will take away most desk-based thinking jobs and, without safety measures in place, could eventually destroy humanity.
And last bit is to tell our kids to become plumbers instead
Another issue with encouraging people to pursue a hands-on trade is that the market will become oversaturated and there won't be enough work - or at least not enough that you can pick and choose the jobs that earn you the most money.
Yeah, that's kind of ridiculous. I know a radiologist whose primary role is physically inserting medical devices under x-ray.
The failing point for these LLMs is always the same - confidence in the output.
I don't know whether you can give ChatGPT a grabber arm, but I'm not sure it's ready for prime time.
A lot of this is going to be about low-hanging fruit - removing the need for skilled people to engage in drudgery, not eliminating skilled roles altogether.