How long until we have our first major AI controversey in the UK?

It might sound like a plot for a movie, it's only a matter of time before someone uses AI (and social media) to create mass hysteria on a national or even global level.

Imagine the internet being flooded with images, videos and first hand reports of an alien invasion, all AI generated. An AI of an AI as it were lol. People are stupid and will do stupid things as has happened in the past (War of the Worlds, 1938 radio show, the recent UK riots).

Sadly it'll take something like this for people to stop believing things at first glance, put down their phones and look out the window.
 
It might sound like a plot for a movie, it's only a matter of time before someone uses AI (and social media) to create mass hysteria on a national or even global level.

Imagine the internet being flooded with images, videos and first hand reports of an alien invasion, all AI generated. An AI of an AI as it were lol. People are stupid and will do stupid things as has happened in the past (War of the Worlds, 1938 radio show, the recent UK riots).

Sadly it'll take something like this for people to stop believing things at first glance, put down their phones and look out the window.
I assume that if aliens invade they'll just send us an app directing us where we need to go to be disintergrated, that should account for the youngest 50% of the population.
 
I just read the following article.


And whilst I take the article at face value plainly once positions harden people don't always change there mind when presented with evidence to the contrary. I was just wondering how long until we truely or falsely have a major incident in the UK where someone uses AI wither to commit a major fraud on the public or as an excuse for some percieved wrongdoing? I'm also wondering if it has alrady happned without us notcing.

I've had a lot of fun with midjourney making images and a microsoft product at work has been useful for finding badly stored and collated data. But the risks of AI are plainly moving much faster than society's ability to come to terms with them. I mean social media has been around for more than a decade and we still haven't as a society come to terms with bad actors poisoning the well of public discourse.

I think, perhaps, the more interesting point is, is the internet dying?
 
We'll have far more positives coming from the use of AI than negatives.

Not sure I agree with this.. given it's primary use is to make big companies more money.. no one is doing it to solve our problems, or help in any way.. it's who can make the most dosh out of it.. hence the huge investment it's receiving..
 
There was a cracking TV programme on the other week called Can AI Change Your Vote.
They took 10 people who were Labour voters and 10 who were Tory voters.
They aimed AI technology at them where their leaders were saying stuff they didn't want to hear etc.
They even had Stephen Fry's permission to use him as a spokesperson for both parties but it was AI talking.
All but two voted the opposite way.

It's on You Tube, go to 35 minutes, that's not Stephen Fry talking.

 
Last edited:
Not sure I agree with this.. given it's primary use is to make big companies more money.. no one is doing it to solve our problems, or help in any way.. it's who can make the most dosh out of it.. hence the huge investment it's receiving..

this is the scary bit, its another cynical shareholder bubble but its goal within those companies is to render as many people unemployed as possible.
 
We've found the bot lads, because that's something AI would say :D.
You’ve highlighted a genuine issue - would we know when an AI posts on this forum? Over the past couple of years there have been a few strange posts from members who rarely posted more than 20 times. There are also a couple of accounts who rarely miss an opportunity to cast negative light on current affairs. They could, of course, just be people being people but the question us - are they?
 
You’ve highlighted a genuine issue - would we know when an AI posts on this forum? Over the past couple of years there have been a few strange posts from members who rarely posted more than 20 times. There are also a couple of accounts who rarely miss an opportunity to cast negative light on current affairs. They could, of course, just be people being people but the question us - are they?

Like below and for me it would be obvious and others that I used AI as it is just not my style of writing, but if someone did as below from post1 and continued doing so, it could imo be very difficult to spot.


The integration of AI into various aspects of our lives offers tremendous potential for innovation, efficiency, and problem-solving. However, it's crucial to acknowledge the hidden dangers that may accompany its advancement.

Arguments for AI:​

  1. Efficiency and Productivity: AI can automate repetitive tasks, freeing up human workers to focus on more complex and creative aspects of their jobs. This can lead to increased productivity and economic growth.
  2. Enhanced Decision-Making: AI systems can analyse vast amounts of data quickly and identify patterns that might elude human analysts. This capability can improve decision-making in fields like healthcare, finance, and environmental science.
  3. Accessibility: AI technologies can provide support for people with disabilities, improving accessibility and inclusion. For example, AI-driven tools can assist in communication or navigation.
  4. Innovation in Research: AI is revolutionizing research by enabling simulations and predictive modelling in fields such as medicine, climate science, and materials engineering, potentially leading to breakthroughs that benefit society.

Hidden Dangers of AI:​

  1. Bias and Inequality: AI systems can inadvertently perpetuate or even exacerbate existing biases if they are trained on flawed data. This can lead to discriminatory outcomes in hiring, lending, law enforcement, and more.
  2. Job Displacement: While AI can create new jobs, it can also lead to significant job losses in sectors that rely heavily on routine tasks. The transition may disproportionately affect vulnerable populations, leading to economic inequality.
  3. Lack of Accountability: As AI systems become more autonomous, it can be challenging to determine responsibility when things go wrong. Issues arise in sectors like autonomous driving, healthcare, and criminal justice, where accountability is crucial.
  4. Manipulation and Misinformation: AI can be used to create deepfakes and manipulate information, undermining trust in media and institutions. This can exacerbate societal polarization and misinformation.
  5. Surveillance and Privacy Concerns: AI technologies are often employed in surveillance systems, raising ethical concerns about privacy and civil liberties. The potential for misuse by governments or corporations is significant.
  6. Dependence on Technology: Overreliance on AI can lead to a decline in critical thinking and problem-solving skills. If individuals and organizations come to depend on AI for decision-making, they may lose the ability to think independently.

Conclusion:​

While AI offers substantial benefits, it is essential to approach its development and implementation with caution. Awareness of its hidden dangers can inform more responsible practices, guiding policymakers, developers, and society as a whole to harness AI’s potential while mitigating its risks. Balancing innovation with ethical considerations is key to ensuring that AI serves humanity positively and equitably.
 
Like below and for me it would be obvious and others that I used AI as it is just not my style of writing, but if someone did as below from post1 and continued doing so, it could imo be very difficult to spot.

No, if someone went with default ChatGPT style answers like that to everything that would be quite easy to spot.

But people have had AI Twitter accounts etc.. for a while now and they're not necessarily easily distinguishable by their outputs, at least not the tweets they post, replies can catch some of them out though. The spammy ones can be caught, especially when posting something out of context or blatant propaganda and sometimes can get caught out with a reply/prompt to ignore previous instructions etc.. also the Russians had an issue recently where they'd not paid their OpenAI subscription IIRC and the bot accounts revealed themselves.
 
No, if someone went with default ChatGPT style answers like that to everything that would be quite easy to spot.

But people have had AI Twitter accounts etc.. for a while now and they're not necessarily easily distinguishable by their outputs, at least not the tweets they post, replies can catch some of them out though. The spammy ones can be caught, especially when posting something out of context or blatant propaganda and sometimes can get caught out with a reply/prompt to ignore previous instructions etc.. also the Russians had an issue recently where they'd not paid their OpenAI subscription IIRC and the bot accounts revealed themselves.
That's an interesting take, and I agree that it's not always easy to spot AI-generated content, especially with how much more sophisticated they've become. I think it’s more about the subtle inconsistencies, particularly in follow-up interactions or replies, where an AI might miss the mark.

When it comes to the ‘spammy’ accounts, I think a big giveaway can be the context of posts, as you mentioned—posting things that don’t quite align with the conversation or just seem like they're hitting keywords without understanding. Also, the ‘ignore previous instructions’ trick is one of the more amusing ways people test AI responses. I hadn’t heard about the Russian OpenAI subscription issue, though—it highlights how reliant some operations have become on these tools. Definitely something to keep an eye on in the future!
 
Back
Top Bottom