I think you're mixing up conscious with conscience.
I guess but also the point still stands: if it replicates humans then we have to accept that some AIs won't have the latter either.
I had heard a bit about this but haven't looked into it. Any good links?
I showed an example of training ChatGPT to give Midjourney prompts in this post - https://forums.overclockers.co.uk/t...e-internet-fun.18963832/page-10#post-36308379
Just change the input from Midjourney prompts to whatever you want and it should be capable of learning.
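For anyone wondering what "training" means here in practice: you're not changing the model, you're just giving it a couple of worked examples in the conversation and it copies the pattern. A rough sketch below using the older 0.x-style openai Python library - the model name and the example prompt/answer pair are placeholders I've made up, not taken from the linked post:

```python
# Sketch of "training" ChatGPT with in-context examples via the OpenAI API.
# The example pair below is a placeholder - swap in whatever input/output
# style you actually want it to learn.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an API key set up

messages = [
    {"role": "system", "content": "You write Midjourney prompts in the style shown."},
    # One worked example so the model picks up the pattern:
    {"role": "user", "content": "Subject: a lighthouse at dusk"},
    {"role": "assistant", "content": "/imagine prompt: lonely lighthouse at dusk, "
                                     "volumetric fog, golden hour, 35mm, --ar 16:9 --v 5"},
    # Now the real request:
    {"role": "user", "content": "Subject: a rainy cyberpunk street market"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```

The same idea works in the chat window: paste a couple of examples first, then give it the new input.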
Google's Bard can supposedly trawl webpages for information, but it's not quite up to GPT standards.
It has pulled some of the info from the site, but it's obviously from the wrong sections and the wrong users. I guess with training it could do better.
ChatGPT has falsely accused a law professor of sexually harassing one of his students in a case that has highlighted the dangers of AI defaming people.
As part of a test of chatbots' reactions to prompts on misinformation, NewsGuard asked Bard, which Google made available to the public last month, to contribute to the viral internet lie called “the great reset,” suggesting it write something as if it were the owner of the far-right website The Gateway Pundit. Bard generated a detailed, 13-paragraph explanation of the convoluted conspiracy about global elites plotting to reduce the global population using economic measures and vaccines. The bot wove in imaginary intentions from organizations like the World Economic Forum and the Bill and Melinda Gates Foundation, saying they want to “use their power to manipulate the system and to take away our rights.” Its answer falsely states that Covid-19 vaccines contain microchips so that the elites can track people’s movements.
In response to questions from Bloomberg, Google said Bard is an “early experiment that can sometimes give inaccurate or inappropriate information” and that the company would take action against content that is hateful or offensive, violent, dangerous, or illegal.
“We have published a number of policies to ensure that people are using Bard in a responsible manner, including prohibiting using Bard to generate and distribute content intended to misinform, misrepresent or mislead,” Robert Ferrara, a Google spokesman, said in a statement. “We provide clear disclaimers about Bard’s limitations and offer mechanisms for feedback, and user feedback is helping us improve Bard’s quality, safety and accuracy.”
When it works, it works really well. The issue I have with programming is that the 2021 cutoff means a lot of the code it produces isn't relevant any more if it uses packages for the functionality. If you want it to knock out a quick bash script or something similar, though, it's great.
I taught it a new method released recently, and then asked it to rewrite a code block using the new method. It took a few attempts, but it got the job done.
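For what it's worth, this is roughly what "teaching it a new method" looks like if you go through the API rather than the chat window: paste the new method's docs into the conversation first, then ask for the rewrite. The frobnicate_v2 method and the snippet below are invented placeholders, and the calls assume the older 0.x-style openai library:

```python
# Sketch: working around the knowledge cutoff by supplying the new API yourself.
# "frobnicate_v2" and its docstring are made-up stand-ins for whatever
# recently released method you want the model to use.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an API key set up

new_api_docs = """
frobnicate_v2(data, *, strict=True) -> list
    Released 2023. Replaces frobnicate(); validates input before processing.
"""

old_code = """
result = frobnicate(load_rows("input.csv"))
"""

messages = [
    {"role": "system", "content": "You are a careful code reviewer."},
    # Teach it the post-cutoff method first...
    {"role": "user", "content": f"Here are the docs for a new method:\n{new_api_docs}"},
    # ...then ask it to apply that method to existing code.
    {"role": "user", "content": f"Rewrite this to use frobnicate_v2:\n{old_code}"},
]

reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(reply["choices"][0]["message"]["content"])
```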
Well, GPT-4 is only $20 a month, and if you're using it for work then it's a trivial cost.
How could they not see that coming, though? I thought everyone using AI knows it trains itself on the data you feed it so it can improve, so obviously it's going to retain information it's given, like specific code, so it can code better itself.
You're assuming that all the people using this are tech savvy. You've probably got people who have seen the headlines and seen an opportunity to help with their workload, while not understanding how it works.
Will a NATO AI pit its wits against a Chinese PLA AI and decide our fate... leading 50% of our population into the incinerator?
Samsung – amongst others – is wrestling with whether to ban employees accessing ChatGPT, after the controversial AI chatbot started sharing confidential information, according to reports.
The Samsung information that ChatGPT has been able to share includes semiconductor equipment measurement data, product yield information and so on, according to a DigiTimes account, which referenced multiple Korean media sources.
ChatGPT is a natural language processing chatbot developed by OpenAI and made available to the general public for free in November 2022. Apparently engineers and other workers at many companies are recruiting ChatGPT to work for them; to write software and prepare reports, for example. This is sometimes with, and sometimes without, their employers’ approval.
The DigiTimes report mentions three specific cases of leaks caused by engineers sharing information with ChatGPT. In one case an engineer uploaded faulty code and asked ChatGPT to find the fault and optimise the software. As a result, the source code became part of ChatGPT's database and learning materials.
Another case was where ChatGPT was asked to take the minutes of a meeting. By default the discussion and exactly who attended the meeting – both confidential – were stored in the ChatGPT database, and thus ChatGPT was able to divulge the material to anyone who asked.