ChatGPT - Seriously good potential (or just some Internet fun)

General public-facing LLMs, yes - but most companies with any level of common sense will be actively seeking how to give their employees access to LLMs.
Indeed. It's like banning Google or any academic research site. If you use it to complement your work rather than replace it, it becomes a really useful tool for getting going on a task. I spent some time with our IT team creating an AI usage policy. It's fairly open/relaxed, just with some caveats: don't treat the answers as necessarily correct, don't use it for commercial evaluation, and so on. Basic guidelines like that. We also did a post on Yammer to get people actively engaging with it, and we're trialling Copilot to help with some elements of data capture and basic automation. Very early days yet.

It's just a tool; it doesn't solve all your problems, nor should it, but ignoring a tool is as daft as not listening to other people's opinions. They don't have to be right, but they can help inform/challenge your thinking.
 
Basically any job that doesn't require face-to-face contact will be the first to go: customer service centres, helplines and support lines, then the slightly more involved jobs. Eventually many legal services will be handled by AI, and only roles that genuinely need face-to-face contact will be kept. Everything is at risk given time; fast food places will become automated with only a skeleton crew of staff.
 
General public-facing LLMs, yes - but most companies with any level of common sense will be actively seeking how to give their employees access to LLMs.

Of course, but I think widespread use for SMEs is still a few years away. We've toyed with a bespoke one for our company and it's miles away from where it needs to be. The cost can be huge and there's a lot of groundwork to be done to make it work in a way that doesn't create more work.

The argument may then be whether they can survive without the efficiency of AI when they're competing in sectors where it's rampant. As freefaller said, it's a tool, and like most tools, it's up to the user to know how to use it.

eventually many legal services will be handled with AI

That's a can of worms to be honest and a long way off.
 
I can see it becoming a very common tool for teaching; being able to have the AI explain exactly how something is done will help students learn, especially those that suck at reading.

This is my favourite part about this. I can't wait to use this when learning new concepts. I don't suck at reading, but in academic settings I always sucked at asking for help or speaking up in person, which meant I would struggle on my own to find a resource to figure something out instead of asking for help, and that at times seriously hindered my learning.

Being able to use this to clarify little things when learning something new, so they don't turn into roadblocks that derail the entire learning process for that topic, is huge.
 
Do you think they're sandbagging? Is it a coincidence that coders made something that can't code well, or is it just the nature and complexity of the work? My plan was to learn the 10% of coding it can't do (or didn't steal from Stack Overflow).

Law is just recalling existing info. In this country, it's gatekept through the training contract, the one thing you can't buy. It's also one of those prestige careers like game dev or vfx, where people jump through hoops to do it they wouldn't for other jobs.

Reality is there are too many law students and not enough jobs. Has been for a decade or two. I put it down to 60s parents encouraging law/medicine instead of business/finance/coding/trades. So I won't be sad to see it go.

Nor call centres. No amount of gaslighting will make me miss waiting on the phone for an hour only to get cut off. At least when it does happen, we won't hear about it taking jobs anymore. I'd love to see the talking heads get replaced, and the drones that fill their comments sections get heavenbanned.
 
Do you think they're sandbagging? Is it a coincidence that coders made something that can't code well, or is it just the nature and complexity of the work? My plan was to learn the 10% of coding it can't do (or didn't steal from Stack Overflow).
Writing code for a complete solution is far more complex than sticking together some code stubs. The current LLMs are basically just throwing out the next most likely bit of code, without actually knowing whether it's correct or performant.

I don't know if they'll ever replace a fully qualified and experienced individual, but I see their value as assistants. There's also a lot of value in fields like law and medicine, which are subject to a ton of inconsistency and human error.
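The "throwing out the next most likely bit of code" point can be sketched with a toy decoder. This is a minimal illustration, not how any real model is implemented; the candidate tokens and their scores are made up.

```python
import math
import random

# Toy illustration of next-token decoding: the model scores every candidate
# continuation, and decoding just picks from that distribution. Nothing in
# this loop checks whether the chosen code is correct or performant.
def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(logits, temperature=0.0, rng=random):
    """Greedy pick at temperature 0, otherwise sample proportionally."""
    if temperature == 0.0:
        return max(logits, key=logits.get)
    scaled = {tok: v / temperature for tok, v in logits.items()}
    probs = softmax(scaled)
    toks, weights = zip(*probs.items())
    return rng.choices(toks, weights=weights)[0]

# Hypothetical scores for continuing the text "for i in ":
logits = {"range(": 3.2, "xrange(": 1.1, "len(": 0.4}
print(next_token(logits))  # greedy decoding picks "range("
```

Whether the most likely continuation is also the *correct* one is exactly the gap a reviewing developer still has to close.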
 

Think again

I'm thinking more along the lines of complex behaviour that would start with the discussions about where and when to launch the craft, what to do during the flight, then returning to a landing place, plus the extra steps required to get the craft ready for flight again. That's a long way off automation.

Actually flying the plane isn't that difficult for AI; any video game with planes can attest to that.
 
I'm thinking more along the lines of complex behaviour that would start with the discussions about where and when to launch the craft, what to do during the flight, then returning to a landing place, plus the extra steps required to get the craft ready for flight again. That's a long way off automation.

Actually flying the plane isn't that difficult for AI; any video game with planes can attest to that.
That's very different and specific from "anything that requires manual dexterity and thinking on the fly"

If we leave AI to be completely autonomous and make decisions for itself, we'll end up in the Matrix or with Skynet, but nobody is suggesting that; it's obvious there will still be "some" humans at work making decisions and giving orders to the AI.

AI in video games is also not true AI like we're getting here (https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games). It doesn't learn, much as current tech like autopilot doesn't learn, and robots in a car factory don't learn.
 
Seems you can get 10 questions out of GPT-4o before it falls back to 3.5. That included a picture attachment. Looks like it's mostly an ad for GPT Plus. I'll stick with Chatbot-UI with an API key since it's way cheaper, although it struggles to upload non-picture files, so there's still some utility in 4o.
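The "API key is way cheaper" claim is easy to sanity-check with back-of-envelope arithmetic. The per-token prices below are illustrative assumptions, not current rates, so check the provider's pricing page before relying on them.

```python
# Rough comparison of a flat ChatGPT Plus subscription vs pay-per-token API use.
# All prices here are assumptions for illustration only.
PLUS_MONTHLY_USD = 20.00
API_INPUT_PER_1M = 5.00    # assumed input price per 1M tokens
API_OUTPUT_PER_1M = 15.00  # assumed output price per 1M tokens

def monthly_api_cost(chats_per_day, in_tokens=500, out_tokens=500, days=30):
    """Estimated monthly API spend for a given daily chat volume."""
    total_in = chats_per_day * in_tokens * days
    total_out = chats_per_day * out_tokens * days
    return (total_in / 1e6) * API_INPUT_PER_1M + (total_out / 1e6) * API_OUTPUT_PER_1M

# A light user (10 medium-length chats a day) comes out well under the
# subscription price; a very heavy user may not.
print(round(monthly_api_cost(10), 2))
```

The crossover point obviously depends on real prices and real usage, which is the whole trade-off the post is describing.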
 
Remember when Sam drew a website on a napkin and had GPT4 code a website from it? Haven't seen it since. Seems it can assist, but not replace coders.

That was GPT4 - what do you mean by "haven't seen it since"? It's been available for a year+

But yeah LLMs can improve productivity, it's not so much that they'll fully replace coders (at least not at the moment) but rather that a smaller number of coders + LLM can do the job of a larger number of coders without an LLM. A big thing they can handle in that respect is boilerplate code.
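The boilerplate use case above is often wrapped in a small helper so requests to the model stay consistent. A minimal sketch, where `call_llm` is a hypothetical stand-in for whatever client the team actually uses:

```python
# Sketch of a prompt builder for boilerplate generation. The shape of the
# prompt and the constraint format are illustrative choices, not a standard.
def boilerplate_prompt(language, description, constraints=()):
    """Build a prompt asking the model for boilerplate only, no business logic."""
    lines = [
        f"Write idiomatic {language} boilerplate for: {description}.",
        "Include imports, type stubs and TODO markers; leave business logic empty.",
    ]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

prompt = boilerplate_prompt(
    "Python",
    "a REST client for an internal inventory API",
    constraints=["no third-party dependencies"],
)
# response = call_llm(prompt)  # hypothetical call; output reviewed by a developer
print(prompt)
```

Keeping the generated code to scaffolding, with a human filling in the logic, is exactly the "smaller number of coders + LLM" workflow described above.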

General public-facing LLMs, yes - but most companies with any level of common sense will be actively seeking how to give their employees access to LLMs.

100%

Quite trivial to do: you can run nearly bleeding-edge models on a MacBook (if quantised) or a Mac Studio these days; you don't even need a big powerful server with loads of GPUs, thanks to Apple and their unified memory.

Best retail GPU - Nvidia RTX 4090: 24 GB of memory
Best commercial GPU - Nvidia H100: 80 GB of memory

Little Mac Studio... up to 192 GB of unified memory!
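The memory comparison is easy to make concrete with a weights-only estimate. This ignores the KV cache and runtime overhead, so treat it as a lower bound:

```python
# Rough memory footprint of a quantised model's weights. Real usage is
# higher once you add the KV cache and runtime overhead.
def model_size_gb(params_billion, bits_per_weight):
    """Weights-only size in GB for a model of the given parameter count."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 70B-parameter model at 4-bit quantisation needs ~35 GB for weights:
# too big for a single 24 GB retail GPU, but it fits easily in a Mac
# Studio's unified memory.
print(round(model_size_gb(70, 4), 1))
```

At 16-bit the same model would need ~140 GB, which is why quantisation is what makes laptop/desktop inference practical at all.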

That's a can of worms to be honest and a long way off.

Yeah, in terms of actually replacing lawyers, for sure; it could be a long time before people are comfortable with that. In terms of productivity, though, automated contract drafting has been around for a few years now and pre-dates the explosion of LLMs; GANs were suitable for that too last decade, as there are so many standard terms.

Essentially, just like programmers using LLMs to generate boilerplate code, you can automate the drafting of a custom contract on the fly using an LLM with some context/understanding of the terms required, with a solicitor overseeing it step by step.
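The workflow just described can be sketched as standard clauses assembled verbatim, with only the bespoke sections generated and flagged for review. The clause texts and the `draft_clause` helper here are hypothetical placeholders, not any real legal-tech product's API.

```python
# Sketch: standard terms are reused as-is; only bespoke sections would go
# through an LLM, and every generated section is flagged for a solicitor.
STANDARD_CLAUSES = {
    "confidentiality": "Each party shall keep Confidential Information secret...",
    "governing_law": "This Agreement is governed by the laws of England and Wales.",
}

def assemble_contract(bespoke_sections, standard=STANDARD_CLAUSES):
    """Combine fixed standard clauses with flagged, to-be-reviewed drafts."""
    parts = [f"[{name.upper()}]\n{text}" for name, text in standard.items()]
    for name, instructions in bespoke_sections.items():
        # draft = draft_clause(name, instructions)  # hypothetical LLM call
        draft = f"<DRAFT PENDING SOLICITOR REVIEW: {instructions}>"
        parts.append(f"[{name.upper()}]\n{draft}")
    return "\n\n".join(parts)

print(assemble_contract({"payment": "net-30 terms, GBP, late interest at 4%"}))
```

The key design point is that the LLM never silently edits the standard terms; it only proposes text in clearly marked slots that a human signs off on.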
 
I mean coding something like a website from scratch from just a rough doodle.

But it can do that; the model from the demo is the model that's been out for a while, not some vaporware thing they've demoed and then hidden away. Someone even got a complete working version of Flappy Bird in a single prompt from the new GPT-4o model. It's a popular game, so it will have seen lots of versions of it in training, which makes it quite easy for the model, just as basic website stuff should produce at least boilerplate code - so when Greg sketched a basic layout with a couple of buttons, it could quite easily generate the code for that.

 
AI might be a useful tool in the short term but I see the window of that usefulness being quite narrow - a few years, maybe less, until the point where the AI does everything.

Eventually the concept of learning is going to be a recreational activity i.e. literally 'recreating' an activity that used to be done as a means to an end such as hunting, horse riding, and sailing. Perhaps it will become a sport to pit transhumans against each other as they compete for who has the best memory chip implant.
 
That's very different and specific from "anything that requires manual dexterity and thinking on the fly"

Yup, and robotic surgery has been around for a while. The really cool thing, though, is that LLMs are actually good at modelling (or assisting with) some tasks requiring dexterity; I'm not sure that would have been predicted a few years ago. Obviously using deep learning to control a robot was an obvious step, but using a language model... well, it's got an internal world model and it helps.

edit - to be clear, it's the language model writing the code and its awareness of the world (which it only knows of via text) that's impressive; its inherent world model has enabled this sim-to-reality:

"I’m excited to announce DrEureka, an LLM agent that writes code to train robot skills in simulation, and writes more code to bridge the difficult simulation-reality gap. It fully automates the pipeline from new skill learning to real-world deployment. "

Like this is an insanely cool demo from Nvidia/UPenn - trained in sim and it works right away in the real world too:

If we leave AI to be completely autonomous and make decisions for itself, we'll end up in the Matrix or with Skynet, but nobody is suggesting that; it's obvious there will still be "some" humans at work making decisions and giving orders to the AI.

AI in video games is also not true AI like we're getting here (https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games). It doesn't learn, much as current tech like autopilot doesn't learn, and robots in a car factory don't learn.

Yeah, there's still got to be human involvement in most things at some level. It depends on the AI to some extent; some of RL overlaps a bit with control systems, and AI agents monitoring stuff, controlling things and carrying out basic repeated tasks is already a thing. For example, I saw a presentation by a vertical farm a few years ago, and lots of it was controlled by RL agents monitoring and optimising the growing conditions. Obviously that's not going to turn into Skynet; you can run basic agents on a Pi or something.
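The kind of lightweight monitoring agent described above can be sketched as a plain monitor/act loop. This is a simple threshold controller rather than RL, kept minimal to show the shape of the loop; the target bands are made-up numbers.

```python
# Sketch of a tiny monitor/act control loop, the sort of thing that could
# run on a Pi. Target bands below are illustrative, not real agronomy.
TARGETS = {"temp_c": (22.0, 26.0), "humidity": (60.0, 75.0)}

def control_step(readings, targets=TARGETS):
    """Return an actuator command nudging each reading back into its band."""
    actions = {}
    for key, value in readings.items():
        low, high = targets[key]
        if value < low:
            actions[key] = "increase"
        elif value > high:
            actions[key] = "decrease"
        else:
            actions[key] = "hold"
    return actions

print(control_step({"temp_c": 20.5, "humidity": 80.0}))
# -> {'temp_c': 'increase', 'humidity': 'decrease'}
```

An RL agent would replace the fixed thresholds with a learned policy, but the surrounding sense/decide/act loop looks much the same.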

Re: games - AFAIK classic ML/AI typically isn't learning (updating parameters) when deployed for inference tasks such as in games, but there's no reason why it couldn't be in some contexts; it's called online learning or online machine learning, and it can certainly be used to adapt the behaviour of NPCs or adjust things to a player's skill level. Typically lots of "AI" has been quite basic though.
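Adapting to a player's skill level, as mentioned above, can be done with a very small online update rather than offline retraining. A minimal sketch; the single difficulty parameter and the learning rate are illustrative choices.

```python
# Minimal online-learning sketch: one difficulty parameter nudged after
# every round by the player's result, with no offline retraining.
class DifficultyAdapter:
    def __init__(self, level=0.5, lr=0.1):
        self.level = level  # 0 = trivial, 1 = brutal
        self.lr = lr        # how fast difficulty reacts to results

    def update(self, player_won):
        """Nudge difficulty up when the player wins, down when they lose."""
        target = 1.0 if player_won else 0.0
        self.level += self.lr * (target - self.level)
        self.level = min(1.0, max(0.0, self.level))
        return self.level

adapter = DifficultyAdapter()
for result in [True, True, True, False]:
    adapter.update(result)
print(round(adapter.level, 3))
```

Each call is an exponential-moving-average step toward the latest result, which is about the simplest form online learning can take.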

Beyond the classic stuff, for sure: if deploying LLMs, the underlying model isn't being updated by each interaction, though I guess there is some scope for "learning" in so far as there is a context window (and a developer could also try other tricks, like saving some details of earlier interactions beyond the context window, or updating some external parameters that are then fed to the LLM at the start of a new interaction, so in a sense some degree of "learning" can occur).
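The "tricks" mentioned above amount to the application, not the model, doing the remembering: save notes from a session, then prepend them to the next prompt. A minimal sketch; the file format and fields are assumptions.

```python
import json
import os
import tempfile

# Sketch of app-side "memory": the model never learns between chats, but
# the app can persist facts and feed them back in at the start of a new one.
def save_memory(path, facts):
    """Persist a list of remembered facts as JSON."""
    with open(path, "w") as f:
        json.dump({"facts": facts}, f)

def build_prompt(path, user_message):
    """Prepend stored facts (if any) so a fresh context 'remembers' them."""
    facts = []
    if os.path.exists(path):
        with open(path) as f:
            facts = json.load(f).get("facts", [])
    memory = "\n".join(f"- {fact}" for fact in facts)
    preamble = f"Known from earlier sessions:\n{memory}\n\n" if facts else ""
    return preamble + user_message

path = os.path.join(tempfile.gettempdir(), "llm_memory_demo.json")
save_memory(path, ["user prefers concise answers"])
print(build_prompt(path, "Summarise today's notes."))
```

Production systems dress this up (summarisation, retrieval over embeddings), but the principle is the same: the "learning" lives outside the frozen model.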
 
AI might be a useful tool in the short term but I see the window of that usefulness being quite narrow - a few years, maybe less, until the point where the AI does everything.

Eventually the concept of learning is going to be a recreational activity i.e. literally 'recreating' an activity that used to be done as a means to an end such as hunting, horse riding, and sailing. Perhaps it will become a sport to pit transhumans against each other as they compete for who has the best memory chip implant.

Ah, missed this - I guess it's rather unknown at this point. Last year it certainly seemed that way, as there had already been big leaps from GPT-2 to GPT-3, and GPT-4 was another big leap... but it seems like we're maybe running into some limitations now, both in the amount of (real) data available and possibly in general with this type of model*. Things could plateau a bit, with more multimodal models being released at the same level, improvements to latency, inventive third-party applications and so on. Either way, I don't think we're going to see an AI winter, as these things are already incredibly useful as they stand right now and we're perhaps barely scratching the surface in terms of their deployment and use.

*There's plenty of debate re: the scope of these things, ranging from more sceptical/grounded takes by the likes of Yann LeCun at FAIR to Ilya Sutskever and others at OpenAI, who suggested that LLMs even a couple of years ago were slightly conscious.
 