ChatGPT - Seriously good potential (or just some Internet fun)

Our current production is not enough to sustain our current population. The measure for this is affordability.

We are a long way from UBI.

Basically you'd need to be able to support a family on the average salary, or median, whatever. And unemployment would need to be high, 30% or more.
Why do you feel that our current production is not enough? Outside of housing and maybe medical care I cannot think of any area that has a lack of production.
 
Well, some kind of backlash seems to have started. I'm sitting here in a hotel in Italy and have just tried to log on to ChatGPT, only to be greeted with a message that access to ChatGPT is disabled for users in Italy.
 

That's more an issue with housing and NIMBYism than technological progress.

You can of course go and live in a super cheap area if you like and work just a few hours a week (especially if you work remotely) or indeed if you're in the US or Australia then you can just go and live in the middle of nowhere at low cost too and still have a rather nice life (relatively) compared to previous generations.

Of course, if you want a load of nice things (like a flashy car or a home in a desirable area) then you might find yourself competing with others for those things!

ChatGPT can certainly speed up things in various areas and make people way way more productive but that isn't going to change say competition among humans... there are only so many people who can own the big desirable houses or whatever else becomes desirable but limited.
 
I still don't think you're being very clear.

We can agree that the end goal is that all humans' basic needs are met; in that world of hyper-efficiency and productivity, money does take a less important role.

But the path towards that is a massive dislocation in available jobs for humans, while goods still cost money. The problem is that crossover period: it's not a restructuring of the job market, where a job lost here is a job gained elsewhere, but a continuous path of net negative jobs.

The crossover you refer to is far further out into the future than you are making it out to be, is what I am saying. UBI will be required in the scenario you describe, yes, but I'm saying you are being extremely premature about that.

I don't know if you are expecting superintelligent AGI, or whether you over-estimate our current position in terms of the world economy and so on.

Why do you feel that our current production is not enough? Outside of housing and maybe medical care I cannot think of any area that has a lack of production.

You don't think energy or food are expensive?
 
Last edited:
The crossover you refer to is far further out into the future than you are making it out to be, is what I am saying. UBI will be required in the scenario you describe, yes, but I'm saying you are being extremely premature about that.

I don't know if you are expecting superintelligent AGI, or whether you over-estimate our current position in terms of the world economy and so on.

I know it's pretty far out. But your original post didn't put any timeframe on it. The way you worded your post sounded like you believe that even with super intelligent AI and automation, new jobs will replace the lost jobs.
 
I know it's pretty far out. But your original post didn't put any timeframe on it. The way you worded your post sounded like you believe that even with super intelligent AI and automation, new jobs will replace the lost jobs.

To go from ChatGPT-4 to superintelligent AGI is like expecting Neanderthals to be able to build a base on the moon within a decade.
 
To go from ChatGPT-4 to superintelligent AGI is like expecting Neanderthals to be able to build a base on the moon within a decade.

What has that got to do with anything?

Yes, if you needed 10 people to do a job and now you need only 1, the other 9 can do something else.

It's restructuring.

This is the post I quoted: the idea that this AI and automation journey just leads to a 'restructuring' of jobs.
It's just not the case. The journey is a massive dislocation in the jobs available for humans to do.
 
You dont think energy or food are expensive?
Food isn't expensive because of a lack of production. I could be wrong, but I thought the margins on food are really tight, so adding more production won't bring the cost down. Also, looking at the metrics of this country, I don't think we can really say we have a problem with a lack of food.

Energy does need more production but that’s just a lack of investment from the government coupled with the war.
 
What has that got to do with anything?



This is the post I quoted: the idea that this AI and automation journey just leads to a 'restructuring' of jobs.
It's just not the case. The journey is a massive dislocation in the jobs available for humans to do.

How do you keep asking what it has to do with anything? What you are saying is very far into the future; I'm telling you why I think
Food isn't expensive because of a lack of production. I could be wrong, but I thought the margins on food are really tight, so adding more production won't bring the cost down. Also, looking at the metrics of this country, I don't think we can really say we have a problem with a lack of food.

Energy does need more production but that’s just a lack of investment from the government coupled with the war.

It's not just production but the cost of that production that lowers prices.
 
How do you keep asking what it has to do with anything? What you are saying is very far into the future; I'm telling you why I think

Your original post didn't put any timeframe on it. How am I meant to know that your restructuring comment was only for the next 5 to 10 years? The post you were replying to gave the impression that you were talking end-game stuff.
 
To go from ChatGPT-4 to superintelligent AGI is like expecting Neanderthals to be able to build a base on the moon within a decade.


It very much depends on your personal definition of "superintelligent"

Pure LLMs like GPT-4 will not directly lead to AGI, as they are simply statistical pattern matchers (although their emergent behaviors will likely expedite many of the subsequent steps). However, they fundamentally solve natural language comprehension and generation to native expert level. Similar advances in DL models for vision are close to human level, so the sub-symbolic requirements have largely been met. We also see from the performance of systems like AlphaGo that symbolic processing is at human expert level too, albeit within more confined domains. The remaining steps to AGI are more a matter of systems integration and hybrid symbolic/sub-symbolic processing, with a broadening of the domain training.

AGI is about 5 years away at this rate; 10 years would be quite slow.
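To illustrate what "statistical pattern matcher" means in the simplest possible terms, here is a toy bigram model in Python. This is my own illustrative sketch, not how GPT-4 actually works (real LLMs learn neural representations over subword tokens rather than counting raw word pairs), but it captures the core idea of predicting the next token purely from statistics of the training text:

```python
# Toy "statistical pattern matcher": a bigram model that predicts the
# next word purely from counts observed in training text.
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequently observed continuation of `word`."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" - seen twice after "the"
```

Scaling this idea up (longer contexts instead of single words, learned embeddings instead of counts, billions of parameters) is what separates the toy from GPT-4, which is exactly why the "just pattern matching" framing both is and isn't fair.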
 
It very much depends on your personal definition of "superintelligent"

Pure LLMs like GPT-4 will not directly lead to AGI, as they are simply statistical pattern matchers (although their emergent behaviors will likely expedite many of the subsequent steps). However, they fundamentally solve natural language comprehension and generation to native expert level. Similar advances in DL models for vision are close to human level, so the sub-symbolic requirements have largely been met. We also see from the performance of systems like AlphaGo that symbolic processing is at human expert level too, albeit within more confined domains. The remaining steps to AGI are more a matter of systems integration and hybrid symbolic/sub-symbolic processing, with a broadening of the domain training.

AGI is about 5 years away at this rate; 10 years would be quite slow.

Will they be conscious though?
 
To go from ChatGPT-4 to superintelligent AGI is like expecting Neanderthals to be able to build a base on the moon within a decade.

How can you put a timeline on it when ML researchers themselves don't seem to know? It may well take a few decades, or it may well take a few years. A bottleneck for models beyond GPT-4 at the moment seems to be simply data and hardware (or, for other organisations, a lack of ML researchers and engineers capable of building them - it's only really the US and UK that have organisations with capability at that level at the moment, though that will change). Synthetic data, LLMs training LLMs etc. may solve the data problem. We've got a tool now that we know can scale, and we can certainly try to scale it much further; some combination of LLM(s) and an agent may well result in some emerging sentience.

As for AGI, I'm not sure even humans are really "GI" in the first place. We already have models that can exceed humans in a wide range of tasks, and now we have a model with a very broad/huge range of knowledge simply from its training, which is frankly pretty impressive. People can dismiss it as just predicting the next string, just matrix multiplication etc., but then again there isn't necessarily anything magical about biological neurons either. We don't really fully understand sentience/consciousness/intelligence, and those are things we may well see emerge in future.

Conversely, things could plateau. I don't think there will be an AI winter as in previous decades, though, as this is already such a huge leap forwards and the applications of LLMs have barely scratched the surface, so I'd expect the economy to be radically affected by this tech over the next few years even if the progress from GPT-2 -> GPT-3 -> GPT-4 ends up plateauing... and that's the pessimistic (in terms of AI progress) view; the optimistic one is that things could become radically different very quickly!
 
What are the channels to watch for AI? I started with LTT's WAN Show - a great introduction, but if you want to get ahead of the headlines, I found AI Explained useful, and they're seeing explosive growth. This video about GPT-4 improving itself without leading prompts really shows the difference between 4 and 3.


As for foreign adversaries building their models faster, I don't see it happening. The way the US kneecapped China with the October 2022 sanctions will have devastating effects in the future if they are not lifted. I know the official reason was to "stop them modelling nukes", but that seems like an excuse from 40 years ago, not today, with them being the 3rd biggest nuclear power. If AI can double your productivity, the gap between developed and AI-developed economies will be like that between developed and developing countries. We still have countries in the agrarian phase that rely on rainwater for irrigation. The alternative is Cerebras' full-wafer AI chips, where yield and the smallest process node don't matter, but I haven't heard much from them, and I suspect it's because Nvidia has the software on lock. Omniverse is just better integrated into the ecosystem.
 
Will they be conscious though?

TBH, it is not a very interesting question and mostly comes down to your own personal definition of consciousness.

Artificial consciousness is its own area of research, although there is some overlap with AGI. One interpretation of consciousness is that it is simply an emergent behavior of complexity in symbolic processing.

To start with, it would be useful to understand whether a dog has consciousness or not, and if not, why not.


It is interesting to note that GPT-4 already passes Theory of Mind tests, yet it is hard to suggest it is conscious.
 
Will they be conscious though?
It's largely a metaphysical discussion.

If something can sense the world around it, and process and store that information in a way that shapes its future outputs... then fundamentally it does everything a human does; it's just a matter of complexity.
 