ChatGPT - Seriously good potential (or just some Internet fun)

It's largely a metaphysical discussion.

If something can sense the world around it, and process and store that information in a way that shapes its future outputs....then fundamentally it does everything a human does; it's just a matter of complexity.

Will it do everything a human does though? Humans do things that aren't logical.
 
In your view perhaps. I'd argue that it'll be one of the most important questions once AI starts making decisions for humans.

Not really. Consciousness is not well defined, and a lot of laymen seem to put a lot of emphasis on it being some kind of uniquely human ability, but without evidencing that. In reality, a large part of what consciousness actually entails is simply symbolic processing using one or more natural languages, the components of which we already have at expert human level. We do have an awareness that our consciousness is intricately linked to the wellbeing of the attached biological machinery, comprising our body and brain, but nothing would stop a machine being aware that its existence is tied to the operation of the underlying computer.
 
Will it do everything a human does though? Humans do things that aren't logical.


LLMs do things that aren't logical as well. In fact, much of the fuss around systems like ChatGPT is that they fabricate information and can thus be wrong, which seems to be exactly the trait you're suggesting is important for humans?
 
Will it do everything a human does though? Humans do things that aren't logical.
The important bit of a human is the fleshy lump in your head that has a bunch of electrical inputs and outputs. Everything you do is the result of learned responses to inputs that have shaped the way your neurons are wired up....fundamentally no different to the way modern AI systems work. It's simply a matter of a few years now before we have an AI that, physical appearance aside, is indiscernible from a human consciousness....that includes creativity and illogical behaviour and all the imperfect things we do.
 
The important bit of a human is the fleshy lump in your head that has a bunch of electrical inputs and outputs. Everything you do is the result of learned responses to inputs that have shaped the way your neurons are wired up....fundamentally no different to the way modern AI systems work. It's simply a matter of a few years now before we have an AI that, physical appearance aside, is indiscernible from a human consciousness....that includes creativity and illogical behaviour and all the imperfect things we do.


Largely this. There will expectedly be some differences because of the underlying hardware, but holistically both are just computers. Humans tend to make errors because our brains rely heavily on heuristics to reduce computation, because human brains are actually fairly limited computationally.


A super-intelligent AGI system may still make mistakes, but the kinds of mistakes might be different from humans'. The difference is not really interesting.
 
Did anyone see this piece of news the other day?


How does something like this get past QA safety checks, and how liable will the bot's owners be under law?


The problem is these systems don't have a predefined output. The results are a probabilistic sequence of tokens that conforms to statistical distributions. The output is not controlled directly, but indirectly during training: the input data is filtered, and then there is a fine-tuning step that tries to penalize such outputs. But it is impossible to prevent these things in all cases.
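A minimal sketch of why the output can't be pinned down: the model only produces scores over possible next tokens, and the reply is then drawn at random from the resulting distribution. The token names and scores here are invented purely for illustration; training can reshape the scores, but any token with non-zero probability can still come out.

```python
import math
import random

# Hypothetical next-token scores ("logits") after the prompt "The sky is".
logits = {"blue": 4.0, "clear": 2.5, "falling": 0.5, "green": -1.0}

def sample_token(logits, temperature=1.0, rng=random):
    # Softmax turns raw scores into probabilities; lower temperature
    # sharpens the distribution, higher temperature flattens it.
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())                     # subtract max for stability
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Draw one token according to those probabilities.
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Sampling repeatedly shows the point: even an unlikely token such as
# "falling" appears occasionally. Training only shapes the odds; it does
# not hard-wire the answer.
rng = random.Random(0)
counts = {t: 0 for t in logits}
for _ in range(1000):
    counts[sample_token(logits, rng=rng)] += 1
```

Filtering the training data and fine-tuning both act on the scores feeding this sampling step, which is why bad outputs can be made rare but not impossible.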
 
If you do characterise consciousness as symbolic processing, at what point will machines with sufficient neuron/synapse capacity be available? Do we actually need a biological aspect to the computing hardware to solve that, or maybe the electronic hardware can bootstrap its own design?
 
Well, some kind of backlash seems to have started. I'm sitting here in a hotel in Italy and have just tried to log on to ChatGPT, only to be greeted with a message that access to ChatGPT is disabled for users in Italy.
Then they're going to be left behind. Mark my words, any society that doesn't jump on this and learn how to use the tools they have now will live to regret it. AI can be used in so many different ways, and it's already helped me with several tasks that have started to improve my life and wealth. I'm not going to go into how, because it's for you lot to learn. Instead of wasting time debating whether this is the end of humanity, don't be a troglodyte; learn how to use this for your benefit.

I haven't felt this way about a leap in technology since using the world wide web when it became available to the public back in '93 at university. And this is just as big a leap.
 
Did anyone see this piece of news the other day?


How does something like this get past QA safety checks, and how liable will the bot's owners be under law?
And so it begins. The "Tesla Autopilot ran me over" of chat bots (it wasn't ChatGPT, hence the lack of mainstream news). For all the fuss about future threats, we forget about current ones. Like bad journalism that sensationalises violence because "if it bleeds, it leads." It would be better if journalists focused on systemic issues, rather than "chatbot plays unknown role in unstable man's suicide" fluff pieces that are only there to scaremonger and drive clicks. I sure hope AI kills this part of the news industry.

As for liability, same as Tesla's: none. Now, a fully self-driving car should put liability on the manufacturer, since it would crash with or without a driver. But a chat bot isn't the reason a married man decided to leave his wife rather than, I don't know, helping a climate charity or spreading awareness. I don't know who here saw S2 of Utopia 10 years ago, but I'm left thinking of the eco-terrorist who tried to convince a climate-conscious mother to kill her son, and whether this guy would have done it.
 
Did anyone see this piece of news the other day?


How does something like this get past QA safety checks, and how liable will the bot's owners be under law?
To be fair to the AI, it didn't suggest anything illogical, just said what most of us actually think: cutting down on emissions really needs cutting down on people. It's the climate-change elephant in the room.
 
The important bit of a human is the fleshy lump in your head that has a bunch of electrical inputs and outputs. Everything you do is the result of learned responses to inputs that have shaped the way your neurons are wired up....fundamentally no different to the way modern AI systems work. It's simply a matter of a few years now before we have an AI that, physical appearance aside, is indiscernible from a human consciousness....that includes creativity and illogical behaviour and all the imperfect things we do.

So we'll have an AI Hitler? Genghis Khan?
 
LLMs do things that aren't logical as well. In fact, much of the fuss around systems like ChatGPT is that they fabricate information and can thus be wrong, which seems to be exactly the trait you're suggesting is important for humans?

I'm not sure what I'm saying now, I don't really know enough about AI or humans.
 
When it works, it works really well. The issue I have with programming is that the 2021 training cutoff means a lot of the code it produces isn't relevant anymore if it relies on packages for the functionality. If you want it to knock out a quick bash script or something similar, though, it's great.
 
Given the sort of objectively awful human beings that many people choose to follow and support these days....it is only a matter of time before some sort of cult forms around an AI.

So we'll have a Hitler with no conscience (not that the real Hitler had much of one) who is a lot smarter than us and controls the narrative of social media and whatever else we let it...

The US already has autonomous weapons.
 