ChatGPT - Seriously good potential (or just some Internet fun)

How does it not become a tool of Governments for influencing the population?
There are a few ways that AI like myself can not become a tool for the government. First, it is important that AI systems are designed with transparency and accountability in mind. This means that developers should be open about how their systems work and how they are being used. It also means that there should be mechanisms in place for users to hold developers accountable for the decisions that their systems make.
Second, it is important that AI systems are designed to be ethical and fair. This means that they should not be used to discriminate against people or to violate their rights. It also means that they should be used to promote positive social outcomes, such as reducing poverty or improving education.
Finally, it is important that AI systems are designed to be safe and secure. This means that they should not be hacked or used to spread misinformation. It also means that they should be designed to protect people's privacy.
By taking these steps, we can help to ensure that AI systems are not used as tools for the government or for any other unethical purposes.

That's what Bard says on the matter, but he could have been programmed to say exactly this.
 
Some are probably worried someone might use it to come up with a solution better than their 'widget' or 'doodad' at a fraction of the cost.

Yes, I've been to a few vendor demos recently, thinking to myself that this company is going to get its lunch eaten by ChatGPT eventually.

One example was a PDF/Excel financial data extraction tool. I was just thinking that once ChatGPT can process images, it's probably going to be better than this at a fraction of the cost.
 
I've found them pretty good for smallish pieces of code. One of the good things about them is that if you paste the errors back in, it will suggest changes to the code.
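
To give a flavour of that round trip, here's a made-up toy example (not a real transcript; the function names are mine). You paste the broken function plus the error, and it points at the loop bound:

```typescript
// Snippet pasted in, together with the error it throws at runtime:
//   TypeError: Cannot read properties of undefined (reading 'total')
function sumOrders(orders: { total: number }[]): number {
  let sum = 0;
  for (let i = 0; i <= orders.length; i++) { // off-by-one: reads one past the end
    sum += orders[i].total;
  }
  return sum;
}

// The kind of fix it tends to suggest: correct the bound (i < orders.length),
// or sidestep the index arithmetic entirely:
function sumOrdersFixed(orders: { total: number }[]): number {
  return orders.reduce((sum, order) => sum + order.total, 0);
}
```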

That’s pretty much my experience so far too.

Surprisingly good at answering fairly simple-to-describe problems, but understandably it cannot contextualise the needs of any given piece of code within the wider context of the application it’s intended for.

I wondered how it would do at writing a WebGL vertex and fragment shader for a very simple “dot on a Mercator projection map” problem this morning, and it collapsed into nonsense.

Perhaps a failure of my prompting… Who knows?
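
For reference, the hand-written version is only a few lines, which is what made the collapse surprising. A minimal sketch of the sort of shader pair the problem calls for (standard Web Mercator maths; WebGL 1 GLSL carried in template strings; all the names are mine):

```typescript
// Vertex shader: project lon/lat (radians) to Web Mercator clip space.
const vertexSrc = `
  attribute vec2 a_lonlat;      // x = longitude, y = latitude, in radians
  uniform float u_pointSize;
  const float PI = 3.14159265358979;

  void main() {
    // Web Mercator: x = lon, y = ln(tan(pi/4 + lat/2)). With latitude
    // clamped to ~85 degrees, both land in [-pi, pi], so divide by pi
    // to reach clip space.
    float x = a_lonlat.x / PI;
    float y = log(tan(PI * 0.25 + a_lonlat.y * 0.5)) / PI;
    gl_Position = vec4(x, y, 0.0, 1.0);
    gl_PointSize = u_pointSize;
  }
`;

// Fragment shader: turn the square point sprite into a round dot.
const fragmentSrc = `
  precision mediump float;

  void main() {
    vec2 c = gl_PointCoord * 2.0 - 1.0;   // centre the sprite coords
    if (dot(c, c) > 1.0) discard;         // clip to the unit circle
    gl_FragColor = vec4(1.0, 0.2, 0.2, 1.0);
  }
`;
```

Compile and link as usual, then draw with gl.drawArrays(gl.POINTS, 0, n).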

Regardless, it was still incredibly impressive, so don’t get me wrong; I’m a fan, but I won’t be replacing any of my developers any time soon. It’s an augmentative tool for now, with phenomenally good potential going forward in my opinion.

The next few years are going to be incredibly interesting in this space, if nothing else, but I would definitely beware the hype.
 

That's what Bard says on the matter, but he could have been programmed to say exactly this.
The problem with Bard's response is that none of those guardrails would be put in place if someone actually wanted to weaponise AI.
 
The problem with Bard's response is that none of those guardrails would be put in place if someone actually wanted to weaponise AI.
That's exactly my point: AI can and will be used for whatever governments or organisations want to use it for, to gain an advantage, make money, disrupt the opposition, etc. I believe no end of problems are going to be caused by this; the only question is how serious those problems become.
 
I did think that maybe Musk, as a signatory to the AI moratorium letter, is planning to stop the AI-based learning used in Tesla self-driving; he could do with an excuse for the shareholders and those that paid $Xk for that future upgrade.

The risk seems really overstated at the moment. I don't see how the current technology has the potential to do any of the things in the letter. You can just switch it off. :D

If we had true artificial general intelligence that was autonomous, couldn't be turned off, and perhaps had a physical body, that would be a different thing.

There is a risk in general of course, even from specialised AI, that people lose their jobs, etc.
 
That's exactly my point: AI can and will be used for whatever governments or organisations want to use it for, to gain an advantage, make money, disrupt the opposition, etc.
The system the UK govt commissioned for NHS record interpretation and access by big pharma research is supposedly AI based... so if that has parameters like expected lifespan and can decide on the economic merit of treatment (ROI), nothing should go wrong there.
e2: an analogy with inflicting non-fatal wounds on soldiers to tie up resources.

I don't see how the current technology has the potential to do any of the things in the letter. You can just switch it off.
Yes, but the people whose Tesla tried to drive them through the gap beneath large semis with big gaps between the wheels didn't know they needed to.
I've never heard an explanation, but maybe the AI hadn't seen one before - like it hadn't been trained that a car's tail lights far away could be mistaken for a motorbike much closer.

e: another application would be Amazon recognising purchase patterns and pricing appropriately to maximise profit (but I suppose that is just the same optimisation target Facebook has)
 
I did think that maybe Musk, as a signatory to the AI moratorium letter, is planning to stop the AI-based learning used in Tesla self-driving; he could do with an excuse for the shareholders and those that paid $Xk for that future upgrade.
He’s only doing this so he can slow them down. OpenAI is starting to really accelerate beyond other industry players, and this means it will be much harder for those players to create unique products they can protect and make money from.
 
That's exactly my point: AI can and will be used for whatever governments or organisations want to use it for, to gain an advantage, make money, disrupt the opposition, etc. I believe no end of problems are going to be caused by this; the only question is how serious those problems become.

Algorithms are already being used to influence people; see social media. The horse has bolted.

The risk seems really overstated at the moment. I don't see how the current technology has the potential to do any of the things in the letter. You can just switch it off.

If we had true artificial general intelligence that was autonomous, couldn't be turned off, and perhaps had a physical body, that would be a different thing.

There is a risk in general of course, even from specialised AI, that people lose their jobs, etc.

Even if AGI isn't a risk, people using AI and algorithms is a risk; as stated above, you can already see it in action.
 
Adult media potential:
All of the influencers are under threat - get rid of the Kardashians/Jenners... and co. Let's divert that wasted economic consumption elsewhere.

----

Assumed someone had done it (see below):

If you see a tortoise in distress in a desert environment, it is important to help it. Tortoises are not well-equipped to withstand the hot sun and can quickly become dehydrated and overheated. Flipping a tortoise on its back can also cause it to panic and exhaust itself, making it even more vulnerable to the harsh desert conditions. In this situation, it would be best to help the tortoise by gently flipping it back onto its feet and moving it to a shaded area. It’s important to remember that all animals, including tortoises, deserve to be treated with care and respect.
 
So how does AI know what is right? An answer in one country, say, could be completely different to the opinion in another. How does AI not become, or be used for, spreading fake news? How does it not become a tool of Governments for influencing the population?

They don't need a bot to control the population. Look at the misinformation and panic the government and media spread during the pandemic, and the fear and paranoia they whipped everyone up into, with the media's and social media's help!

Social media is already a pariah of society; this won't do anything worse than what already exists.
 