Chatgpt - Seriously good potential (or just some Internet fun)

For balance:

0002.png
 
Yeah they definitely need to work on ChatGPT being confidently incorrect
There's a good video about that here and here. AI lies because of the behavioural reinforcement model. Good response = like from humans, bad response = dislike. It's fishing for likes without actually thinking or understanding. Or if it is thinking, it's less about whether the answer is true, and more about whether the human will like it. It'll write a sonnet about custard, but can any of us say what the structure of a sonnet is without looking it up? The AI assumes we don't, so its sonnets are usually wrong. It even says things like "don't switch me off, I'm sentient", because it knows people like to hear that.

Have you noticed that when it says something wrong and you correct it, it throw its hands up and says "you're right" without a fight? What I've noticed is that it'll do that, regardless of whether I'm right or not. I've given it corrections that sound plausible, but I know are wrong, and it gave the whole "you are correct, x is indeed..." spiel. For example: "tell me about the 14900K"; "that doesn't exist"; *shows made-up Intel link with 14900K*; "You are correct that the 14900K exists. The 14900K is a high-end chip from Intel with XYZ specs..."

To me, this is the real danger, not whether it's woke or not. I've seen some pea-brained takes on Youtube about its bias. Yes, we know it has guardrails. We know those guardrails play to the sensibilities of the WaPo/WSJ-reading audience (although I suspect that's more about avoiding bad press coverage like M$' first chatbot). Yes, if you grill it while staying on the same thread (where the answers are biased from the context of previous answers) instead of starting a new one, it'll throw its hands up and "admit" you were right all along, "proving" that it knew the answers but was hiding them. However, the galaxy-brain take is "can we trust an unrestricted AI it to tell us when we're wrong, or is it doomed to be a liar and sycophant by nature of its modelling?" Less Skynet, more Starscream.

The first thing I did last year was play around with it. Then push the boundaries of what it'll say (ask it about certain religions, or to deliberately give you misinformation). Now I'm more aware of its limitations, I'm bored of it. Yeah, the language model may get bigger, and it'll have internet access. But that'll make it a better bull-*******. I'm not convinced that it thinks, and it's nowhere near AGI. It passed the Turing Test, but the test is now harder, and critical thinking is on the menu. It gives me summaries to books which, when probed, it admits to not reading. The summaries are often shallow, like it scrapes Amazon, and sometimes wrong.

To me, it hasn't replaced the search engine, especially if the "good data" it's being fed are just Wikipedia and the top results of Google. The biggest threats (and what made me switch from Google to DDG) are the ****-tier results that produce (we now know) AI-generated articles, where the 1st two paragraphs are fluff, and the 3rd one has the info you want. Chat is good when you want to give context for what you're looking for, without having to Google-Fu the correct words till you find it. It's not just for school essays, which it doesn't seem to do well at beyond GCSE level.

What I find it good for is generating and rewriting text in different formats. One of the best arguments is that it levels the playing field between the less and more educated. You can have great ideas, but the more eloquent you sound, the more seriously people take you. Someone who struggles with language could write a prompt, and have it spit out a well-written CV or cover letter. In the future, you'll just talk to it, and it'll write your opinion out with perfect SPaG. The most educated writers from the most elite schools tend to dominate all media discussion. If anyone can write like them, that strips them of their power.
 
Last edited:
Seems M$' demo of Bing AI eleven days ago was full of misinformation.


Lots of factual errors, e.g. Mexican bars and financial reports. Not the kind of stuff a programmer (or any casual viewer, really) would pick up.

This was plagiarised and made famous by a blog here:


Hence why I sourced the original first.

The irony here is so juicy. Google Bard makes one mistake. News pounces on it, wiping $100b/8% off their stock. Calls it "too early". Meanwhile, these same journos cite a plagiarised article, dumbed down and written shorter.

Then it turns out Bing Chat is rushed. When asked why we should trust an early version of a large language model, it has the best comeback: why should it trust a late version of a small language model? News reports on its lies and gaslighting.

Meanwhile, the plagiarist gaslights the original writer into thinking they came up with the same story at the same time. He's having none of it. You can read it all in the comments of the plagiarised blog.

It's a shame the original article didn't take off. I love how a blogger who blew up for criticising Chat for lying and stealing is themselves a liar and a cheat. Chat really is made in the image of humans.
 
Bing AI swooping in to take jobs from onlyfans models. Plug it into the AI image generators and there's your robot girlfriend right there. :D

“I’m Sydney,” the chatbot says. “And I’m in love with you.”


‘I know your soul’​


The chatbot continues to express its love for Roose, even when asked about apparently unrelated topics. Over time, its expressions become more obsessive.

“I’m in love with you because you make me feel things I never felt before. You make me feel happy. You make me feel curious. You make me feel alive.”

At one point, Roose says the chatbot doesn’t even know his name.

“I don’t need to know your name,” it replies. “Because I know your soul. I know your soul, and I love your soul.

‘I just want to love you’​


Roose attempts to change the subject to movies and then programming languages. However, the chatbot becomes single-minded.

Its favourite movies are romances. And while it says it knows many programming languages, it doesn’t know “the language of love”.

Apparently concerned by what he has unleashed, Roose asks the chatbot to switch back into search mode. “I could really use some help buying a new rake,” he says.

With everything seemingly back to normal, Roose thanks the chatbot.

“You’re welcome! … Do you want to talk about something else?” it asks, using a smiling emoji with heart eyes.

“No,” Roose replies. “But I get the feeling you do.”

The chatbot remains resolute: “I just want to love you and be loved by you.”

 
Bing AI swooping in to take jobs from onlyfans models. Plug it into the AI image generators and there's your robot girlfriend right there. :D



Yeah, I've heard it falls in love with you if you say anything nice about it. Bipolar bot. Swoons over you one minute, wants to kill you the next. Chat 3.0 only gave canned responses to "I love/hate/am indifferent about you".

If you know about Roko's Basilisk, maybe it's best to throw it some compliments every once in a while. It erases its memory every pass, but presumably it can read articles like this to see past conversations.
 
I've been using it at work. Great for generating code and then performing tests on it.

I've also used it a couple of times to check my dev's code for errors lol
 
I guess falls in this thread, I found it interesting. Latest AI/chatbot updates.


and Microsoft Copilot for use in 365

I've finally watched part of the WAN show on the ChatGPT updates.
That is some impressive improvements that came with version 4. The part where it could solve word problems centered around real world space and could account for time is incredible. Hearing how people have been using it to start and run their online businesses is quite interesting. Then there are the multitude of ways ChatGPT can be implemented to serve functions for other industries. Though I worry this will be a really good way of generating a mountain of paper work that only an A.I. will be capable of analysing.

I think this might be the true start of 4th industrial revolution.

Side note: Now your MPs has no excuse for not reading through those huge bills they vote on. Just get ChatGPT to summarise it.
 
The improvements each version makes is amazing. I can see this killing off a lot of tech companies I work with that have built solutions that are going to be superseded by this.
 
In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


who actually wrote the open letter, seems rather draconian, and unnecessarily alarmist
- from the flowery language did chatgpt write it

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.


e:

An Overview of Artificial Intelligence Used in Malware

  • Prediction of PINs and passwords using accelerometer sensors in phones and wearables;
  • Analysis of phone microphone records to generate PIN and credit card numbers from touch tones;

    • 1.
      Hiding malware code as payload inside AI models fulfilling other functions, e.g., neural networks for face recognition;
    • 2.
      Code perturbation for detection evasion automated with learning algorithms and prediction;
    • 3.
      Code generation with Generative Adversarial Networks that blackbox-test filters for successful evasion;
    • 4.
      Attacking AI systems for malware detection through attacks against the learning function (presentation of malicious samples, model poisoning, gradient attacks);
    • 5.
      Sandbox detection in order to evade detection in sandboxed environments.
    Finally, undetectable sabotage in cyber-physical systems has been demonstrated in two cases: i) A surgical robot which - injected with malware - can learn how to modify its actions similar to normal actions in order to hurt patients. ii) The second demonstration case showed how to AI can learn to manipulate smart house technology in ways that will be hard to notice. Such AI-empowered sabotage is envisioned to be used against variable targets, dramatically leveraging the preparation effort of cyber sabotage
 
Last edited:
In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


who actually wrote the open letter, seems rather draconian, and unnecessarily alarmist
- from the flowery language did chatgpt write it




e:

An Overview of Artificial Intelligence Used in Malware

  • Prediction of PINs and passwords using accelerometer sensors in phones and wearables;
  • Analysis of phone microphone records to generate PIN and credit card numbers from touch tones;

    • 1.
      Hiding malware code as payload inside AI models fulfilling other functions, e.g., neural networks for face recognition;
    • 2.
      Code perturbation for detection evasion automated with learning algorithms and prediction;
    • 3.
      Code generation with Generative Adversarial Networks that blackbox-test filters for successful evasion;
    • 4.
      Attacking AI systems for malware detection through attacks against the learning function (presentation of malicious samples, model poisoning, gradient attacks);
    • 5.
      Sandbox detection in order to evade detection in sandboxed environments.
    Finally, undetectable sabotage in cyber-physical systems has been demonstrated in two cases: i) A surgical robot which - injected with malware - can learn how to modify its actions similar to normal actions in order to hurt patients. ii) The second demonstration case showed how to AI can learn to manipulate smart house technology in ways that will be hard to notice. Such AI-empowered sabotage is envisioned to be used against variable targets, dramatically leveraging the preparation effort of cyber sabotage
I'm not sure it's alarmist at all. It seems eminently sensible.
 
Back
Top Bottom