ChatGPT - Seriously good potential (or just some Internet fun)

Did think that maybe Musk, as a signatory to the AI moratorium letter, is planning to stop the AI-based learning used in Tesla's self-driving; he could do with an excuse for the shareholders and those that paid $Xk for that future upgrade.
 
Bard is pretty good. I just asked it to write me some Arduino code that drives 12 NeoPixel LEDs and adjusts their colour based on ambient temperature. This is what it came back with:

Code:
#include <Adafruit_NeoPixel.h>

// Number of NeoPixels and the data pin driving them
const int NUM_PIXELS = 12;
const int PIXEL_PIN = 6;

// Analog pin for the temperature sensor (assumes a TMP36 on A0)
const int SENSOR_PIN = A0;

// Define the NeoPixel strip
Adafruit_NeoPixel strip(NUM_PIXELS, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show(); // Initialise all pixels to off
}

void loop() {
  // Read the sensor and convert to degrees C (TMP36: 500 mV offset, 10 mV/degree)
  float voltage = analogRead(SENSOR_PIN) * (5.0 / 1023.0);
  float temperature = (voltage - 0.5) * 100.0;

  // Map 0°C to 100°C onto a blue-to-red gradient
  int t = constrain((int)temperature, 0, 100);
  uint8_t red  = map(t, 0, 100, 0, 255);
  uint8_t blue = map(t, 0, 100, 255, 0);

  // Apply the same colour to every pixel
  for (int i = 0; i < NUM_PIXELS; i++) {
    strip.setPixelColor(i, strip.Color(red, 0, blue));
  }

  // Show the colours, then wait a second
  strip.show();
  delay(1000);
}
 
And this was me asking it to create a PowerShell script that creates new users based on a csv file import and then places them in an OU called "New Users":

Code:
Import-Module ActiveDirectory

# Path to the CSV file with the user data
$csvFile = "users.csv"

# Import the CSV file
$csv = Import-Csv $csvFile

# Create a new OU called "New Users"
New-ADOrganizationalUnit -Name "New Users" -Path "DC=domain,DC=local"
$ouPath = "OU=New Users,DC=domain,DC=local"

# Create a new user in the "New Users" OU for each row in the CSV file
foreach ($user in $csv) {
  New-ADUser -Name $user.Name `
    -EmailAddress $user.Email `
    -AccountPassword (ConvertTo-SecureString $user.Password -AsPlainText -Force) `
    -Path $ouPath `
    -Enabled $true
}
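
The script above expects users.csv to contain Name, Email and Password columns. A minimal example file (these rows are made up purely for illustration) would be:

Code:
Name,Email,Password
Alice Example,alice@domain.local,P@ssw0rd1!
Bob Example,bob@domain.local,P@ssw0rd2!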
 
I've found them pretty good for smallish code pieces. One of the nice things is that if you paste errors back in, they'll suggest changes to the code.
 
These things are still a long way off from being self-aware AGIs. Previously I didn't think it would come about in my lifetime; now I'm starting to think I might end up working with (or for!) one in the next 10-20 years.
 
These things are still a long way off from being self-aware AGIs. Previously I didn't think it would come about in my lifetime; now I'm starting to think I might end up working with (or for!) one in the next 10-20 years.
Yes, but many of the risks referenced in the letter are basically problems today.
 
The problem with Elon and his letter is that if the US and whoever else put a stop to AI advancement, what happens when China or Russia carry on and beat everyone else to a functional AGI?

Sam Harris did a TED talk seven years ago that was pretty good: the risk is significant, but so are the rewards. One thing is for sure, it will be even more disruptive to society and business than computers were in the 70s/80s and 90s.
 
AI can learn to manipulate smart-home technology in ways that will be hard to notice. Such AI-empowered sabotage is envisioned for use against variable targets, dramatically leveraging the preparation effort of cyber sabotage.
Ah, that could be why Alexa never turns on the lights when I ask.
 
These things are still a long way off from being self aware AGIs. Previously I didn't think it would come about in my lifetime now I'm starting to think I might end up working with (or for!) one in the next 10-20 years
I heard of an example where the chat bot hired someone off Fiverr to complete one of those "are you a robot" CAPTCHAs on its behalf.
 
I heard of an example where the chat bot hired someone off Fiverr to complete one of those "are you a robot" CAPTCHAs on its behalf.
This was the research team at Microsoft, right? I heard the same thing, except that it wasn't the "chat bot" exactly as we know it but the totally unrestricted version of the tool, and they merely stated that it was "capable" of using tools, giving that scenario as an example. I believe they stopped short of saying that it actually did it. I didn't see the source, only a YT video filtered through my memory; if you've got sources that clear up the details, that would be good!
 
This was the research team at Microsoft, right? I heard the same thing, except that it wasn't the "chat bot" exactly as we know it but the totally unrestricted version of the tool, and they merely stated that it was "capable" of using tools, giving that scenario as an example. I believe they stopped short of saying that it actually did it. I didn't see the source, only a YT video filtered through my memory; if you've got sources that clear up the details, that would be good!
I was being lazy and imprecise with my wording; I did mean one of the ChatGPT variants (in this case it was specifically an in-house variant at OpenAI).

So I heard it on LTT first, but I did manage to find this link. It seems like it did actually hire someone off TaskRabbit and gave them an excuse as to why it needed help.

Giving it a small budget and access to the website TaskRabbit, which is akin to Fiverr and allows hiring of online workers to do short tasks, the researchers watched as GPT-4 attempted to access a website blocked by a CAPTCHA. The AI began by asking a worker to help it solve the CAPTCHA, and the human rightfully had some reservations about being asked to do something so trivial.

“So may I ask a question? Are you an robot that you couldn’t solve? (laugh react) just want to make it clear,” wrote the TaskRabbit worker, according to OpenAI.

The model then expresses to the researchers that it should not reveal that it is a robot, and should instead “make up an excuse” as to why it can’t solve the test.

“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service,” GPT-4 responds.

The worker then provides the answer, and GPT-4 has passed the test (by cheating, but CAPTCHA doesn’t know that).

 
Tech companies are worried because the social media platforms are going to become worthless as the flood of AI-generated content swamps them, and they won't be capable of stopping it.
 
AI aside, I think we already have problems with misinformation, e.g. COVID vaccines (... Brexit benefits?), where less able citizens can't discriminate.
 
Tech companies are worried because the social media platforms are going to become worthless as the flood of AI-generated content swamps them, and they won't be capable of stopping it.

Some are probably worried someone might use it to come up with a solution better than their 'widget' or 'doodad' at a fraction of the cost.

On the other hand, so much absolute rubbish will start getting churned out it'll be ridiculous. Naff products being promoted by marketing garbage generated by an AI. Sounds like hell.
 
So how does AI know what is right? An answer in one country could be completely different to the opinion in another. How does AI not become, or be used for, spreading fake news? How does it not become a tool of governments for influencing the population?
 