The ongoing Elon Twitter saga: "insert demographic" melts down

Status
Not open for further replies.
Joined
12 Feb 2006
Posts
17,225
Location
Surrey
Anyone here in the lucky 10?


I've heard of Mr Beast and was aware he's one of the most popular social media vloggers, but ~$265k from one old video put on X is insane... And apparently that's rubbish compared to YouTube! :eek:
It's insane, but he gives a good reason. Due to the high attention that video was getting, advertisers were paying a premium to advertise alongside it. Also, his deal is not the standard one X offers but a special arrangement for him, so he could have got 100 per cent of the revenue on that video, because X needed to appear like a big earner of a platform. What's crazier is that it's an old video: he just reposted something and made $250k.
 
Soldato
Joined
10 May 2012
Posts
10,062
Location
Leeds
I think the software should be infallible or as damned close to it as is possible. I don't think that is too much to ask. These companies want us to place life-and-death trust in their software; I think they should have to produce that standard of product.

Aeroplanes aren't infallible, yet people still get on them. The word infallible means that it would never go wrong, period. That isn't a reasonable standard to meet. There's almost no product in human history that can meet this arbitrary standard you've decided upon, and you can't even give a reasonable explanation of why you've decided it needs to be "infallible".
 
Joined
12 Feb 2006
Posts
17,225
Location
Surrey
Are we sure this is not just Elon getting them to train an AI to generate fake responses to appear more popular?

It is amazing that Elon wanted to pull out of the Twitter deal due to bots, but it appears bots are worse than ever. How has he done so little to sort this issue?
 
Soldato
Joined
10 May 2012
Posts
10,062
Location
Leeds
Dorsey's Twitter was far from perfect, but bloody hell it was better than this embarrassing ********.


This isn't a problem unique to Twitter, though something definitely does need to be done about it across all platforms. Now if someone can solve the problem of bots and AI online impersonating humans they'll probably make a lot of money.
 
Associate
Joined
3 Sep 2006
Posts
1,956
Location
London
...

It is amazing that Elon wanted to pull out of the Twitter deal due to bots, but it appears bots are worse than ever. How has he done so little to sort this issue?

Musk has actually caused this problem directly: he stripped Twitter's content moderation more than once when he first took over. Couple that with halving the ad spend and he's well on the way to becoming a millionaire. He's not at all interested in moderation or censorship.
 
Joined
4 Aug 2007
Posts
21,432
Location
Wilds of suffolk
This is basically the internet now. YouTube is just as bad, with obvious AI-generated scam/conspiracy adverts everywhere.

Yeah, I was looking for a bit of info regarding a game I'm playing, and I'm finding a load of gaming review sites include the exact same passages of text, so clearly they're using AI as well.
 
Soldato
Joined
10 May 2012
Posts
10,062
Location
Leeds
Musk has actually caused this problem directly: he stripped Twitter's content moderation more than once when he first took over. Couple that with halving the ad spend and he's well on the way to becoming a millionaire. He's not at all interested in moderation or censorship.

He adopted a different business model by choice. The previous model was a moderated platform that was advertiser friendly but restrictive on speech, the new model is less advertiser friendly but more free speech friendly. That's a conscious decision and we're lucky someone took it on our behalf, otherwise we'd all be beholden to the type of language which is acceptable to the HR departments at Disney.
 
Permabanned
Joined
13 Sep 2023
Posts
175
Location
London
He adopted a different business model by choice. The previous model was a moderated platform that was advertiser friendly but restrictive on speech, the new model is less advertiser friendly but more free speech friendly. That's a conscious decision and we're lucky someone took it on our behalf, otherwise we'd all be beholden to the type of language which is acceptable to the HR departments at Disney.

Didn’t Gab, Parler, Truth Social, 4chan, 8chan, etc., etc. already cater to that niche?
 
Caporegime
Joined
29 Jan 2008
Posts
58,912
I think the software should be infallible or as damned close to it as is possible. I don't think that is too much to ask. These companies want us to place life-and-death trust in their software; I think they should have to produce that standard of product.

Close to it is fine; infallibility, on the other hand, certainly is too much to ask, for several reasons:

Software bugs: speak to anyone who has worked on a large software project that is regularly updated. Granted, this is a safety-critical task, so while 'no bugs' isn't realistic, we'd hope that any bugs wouldn't cause serious issues... still, there is *some* possibility.

But even if we assume perfectly stable software with no bugs, we still have issues:

Ethical dilemmas/subjectivity: a few years back there were articles about trolley-problem-type dilemmas. That somewhat misses how these vehicles are actually developed, but even though the AI is not necessarily being developed as a utilitarian or a deontologist or whatever, the notion that there's always a logically "correct" course of action is flawed. In various situations people can argue over the ethics of a decision; there isn't necessarily a "right" answer.

Explainability: following on from the previous point, deep neural nets are to some extent black boxes. You don't necessarily know exactly why a course of action was taken, even before third parties start debating whether it was correct in the first place.

Uncertainty: there are probabilities at play under the hood; this isn't a load of handwritten if statements. The "correct" decision is the one that minimises the loss for the model. It's never going to be 100% infallible; that's just not how it works. It takes what it thinks is the best decision given the uncertainty, so there's always the potential for more training data, a bigger or more improved model, and a further reduction in loss.
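
To make that concrete, here's a minimal sketch (the actions and numbers are invented for illustration; no real driving stack is this simple): the model outputs a probability distribution over options, and the "decision" is just whichever option scores highest. The probability mass on the alternatives never reaches zero.

```python
import numpy as np

def softmax(logits):
    # Turn raw model scores into a probability distribution
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# Hypothetical raw scores for three candidate manoeuvres
actions = ["brake", "swerve", "continue"]
logits = np.array([2.1, 1.3, -0.4])

probs = softmax(logits)
choice = actions[int(np.argmax(probs))]

# The "correct" decision is simply the most probable option; some
# probability always remains on the alternatives, so some residual
# risk always remains too.
for action, p in zip(actions, probs):
    print(f"{action}: {p:.2f}")
print("chosen:", choice)
```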

Hardware limitations:
You want to ignore hardware, sensors/cameras; additional or better sensors may make for a better decision, but you're putting that aside and just talking about the software. Not so fast: even if we ignore sensors, the problem is the software isn't some magical black box and doesn't exist in isolation. It's also limited by its access to computing power!

Can you run the very latest AAA games on a 10-year-old PC at 4K and 120 FPS?

There are two obvious problems here: the speed at which a given model can perform inference, and the limit on the size of model you can actually fit on the hardware.

A car developed 10 years later might well not only be able to apply the brakes a fraction of a second quicker, but could also run a much bigger model with many more parameters that makes better decisions... and 10 years after that an even bigger and better model is available, and so on. How can any of those models ever be infallible when there is always room for improvement in such a huge problem space?
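
A rough back-of-envelope sketch of the size constraint (every figure here is an assumption for illustration, not a real vehicle spec): the memory a model's weights need scales with its parameter count, so whatever accelerator shipped in the car caps the model it can ever run.

```python
# All numbers below are invented for illustration only.
BYTES_PER_PARAM = 2  # fp16 weights

def model_size_gb(params):
    # Memory needed just to hold the weights, in gigabytes
    return params * BYTES_PER_PARAM / 1e9

onboard_memory_gb = 16  # hypothetical in-car accelerator

candidate_models = {
    "2014-era model": 50e6,    # 50M parameters
    "2024-era model": 5e9,     # 5B parameters
    "2034-era model": 500e9,   # 500B parameters
}

for name, params in candidate_models.items():
    size = model_size_gb(params)
    verdict = "fits" if size <= onboard_memory_gb else "does NOT fit"
    print(f"{name}: ~{size:.1f} GB of weights -> {verdict} in {onboard_memory_gb} GB")
```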

Infallibility isn't possible; there are inherent limitations to software. We don't have infinite training data, hardware with infinite storage, or inference done faster than the speed of light.

All we can do is develop cars that are significantly safer than human drivers, get a big reduction in road deaths, and keep improving.
 
Last edited:
Soldato
Joined
3 Oct 2007
Posts
12,094
Location
London, UK
Aeroplanes aren't infallible, yet people still get on them. The word infallible means that it would never go wrong, period. That isn't a reasonable standard to meet. There's almost no product in human history that can meet this arbitrary standard you've decided upon, and you can't even give a reasonable explanation of why you've decided it needs to be "infallible".

We have pilotless planes, do we? Wow, I must have missed that piece of news.

You and dowie are either easily confused or are just being obtuse. For the final time: I am not saying the hardware has to be infallible; that should be taken as a given, because we've never invented any piece of hardware that never breaks. For the final time: I am talking about the software that makes the decisions. Though I would expect regulators to insist on a minimum amount of sensor hardware on such vehicles, far more than on current FSD vehicles and more like Waymo's.
 
Soldato
Joined
10 May 2012
Posts
10,062
Location
Leeds
For the final time: I am not saying the hardware has to be infallible; that should be taken as a given, because we've never invented any piece of hardware that never breaks. For the final time: I am talking about the software that makes the decisions.

Why would we hold software to this impossible standard?
 
Caporegime
Joined
29 Dec 2007
Posts
31,991
Location
Adelaide, South Australia
This isn't a problem unique to Twitter, though something definitely does need to be done about it across all platforms. Now if someone can solve the problem of bots and AI online impersonating humans they'll probably make a lot of money.

It's not a problem unique to Twitter/X, but Twitter/X is uniquely burdened with the worst of it. Musk bought a platform that wasn't overrun with bots, tried to haggle the price down by falsely claiming that it was, then turned it into the very thing he'd whined about.

The jokes write themselves with this moron.
 
Soldato
Joined
10 May 2012
Posts
10,062
Location
Leeds
It's not a problem unique to Twitter/X, but Twitter/X is uniquely burdened with the worst of it. Musk bought a platform that wasn't overrun with bots, tried to haggle the price down by falsely claiming that it was, then turned it into the very thing he'd whined about.

The jokes write themselves with this moron.

Do you have any evidence to back up that assertion?

I would say bots on every platform have increased over the past 12 months, and this will likely only get worse with the advances and availability of AI. I don't think he made Twitter worse. You're essentially looking at a trend across the entire Internet and blaming it on Elon Musk.

Also, on what basis is he a moron? He's one of the most successful people ever to live, ignoring his finances and simply going by the number of high-level positions he holds at multiple highly successful companies. Ridiculous statement.
 
Last edited:
Soldato
Joined
25 Nov 2005
Posts
12,453
Now if someone can solve the problem of bots and AI online impersonating humans they'll probably make a lot of money.
You're absolutely right! The whole situation with bots and AI impersonating humans online is a growing concern, and whoever cracks the code on how to effectively and ethically solve it is looking at a golden goose. It's like finding the holy grail of cybersecurity and online trust, all rolled into one.


Just imagine the possibilities:


  • Safer social media: No more catfish scams, fake news bots, or harassment campaigns. Online communities could finally become the vibrant, informative, and supportive spaces they were meant to be.
  • Boosted e-commerce: No more fraudulent transactions or misleading reviews. Consumers could shop online with confidence, knowing they're interacting with real businesses and genuine products.
  • Enhanced online democracy: No more manipulation of public opinion or voter fraud. Political discourse could be based on real people and their authentic voices, leading to more informed and productive debates.

The potential benefits are endless, and the market for a solution is massive. Companies, governments, and individuals alike would be willing to pay top dollar for a reliable way to distinguish between humans and AI online.


It's not an easy problem to solve, though. AI is getting increasingly sophisticated, and the lines between human and machine are blurring faster than ever. But hey, that's what makes it such a lucrative challenge!


Here are some of the approaches that researchers are exploring:


  • Behavioral analysis: Identifying patterns in language use, typing speed, and online activity that are more likely to be human than machine (see the toy sketch after this list).
  • Biometric verification: Using things like voice recognition, facial recognition, or even keystroke dynamics to confirm a user's identity.
  • Challenge-response tests: Presenting users with tasks or questions that are difficult for AI to answer but relatively easy for humans.
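
As a toy illustration of the behavioural-analysis idea (the threshold and data are entirely made up; real detectors use far richer signals): humans type with irregular timing, while naive bots tend to be suspiciously uniform.

```python
import statistics

def looks_scripted(keystroke_intervals_ms, min_stddev=15.0):
    # Flag a session whose inter-keystroke timing is suspiciously uniform.
    # Real systems combine many such features; this shows only the idea.
    if len(keystroke_intervals_ms) < 5:
        return False  # not enough data to judge
    return statistics.stdev(keystroke_intervals_ms) < min_stddev

human_session = [110, 240, 95, 310, 180, 150]  # jittery, human-like
bot_session = [100, 101, 100, 99, 100, 100]    # metronomic, bot-like

print(looks_scripted(human_session))  # False
print(looks_scripted(bot_session))    # True
```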

It's going to be a fascinating race to the finish line, and whoever gets there first is going to be sitting on a mountain of gold. So, if you're a tech whiz with a knack for problem-solving, maybe this is your chance to make your mark on the world (and your bank account)!


In the meantime, the rest of us can just keep our fingers crossed and hope that someone figures it out before the online world descends into complete chaos.
 
Caporegime
Joined
29 Jan 2008
Posts
58,912
You and dowie are either easily confused or are just being obtuse. For the final time: I am not saying the hardware has to be infallible; that should be taken as a given, because we've never invented any piece of hardware that never breaks. For the final time: I am talking about the software that makes the decisions. Though I would expect regulators to insist on a minimum amount of sensor hardware on such vehicles, far more than on current FSD vehicles and more like Waymo's.

No, you just have very little understanding of the area you're talking about. You've already backtracked from the cars being infallible to just the software, which makes no sense, since the software is constrained by the hardware: they're not driving around carrying a datacentre-sized supercomputer, and even a huge model on a supercomputer is still limited. You have no clue about ML/AI; if you did, you wouldn't have given such an absurd criterion. The problem space here is massive, and the notion of an infallible model is impossible in the first place, even if we assume perfect sensors/cameras etc.
 
Last edited:
Soldato
Joined
10 Aug 2006
Posts
5,207
Elon says shadow banning isn't a thing anymore, or so I read somewhere once, but my account clearly was shadow banned. My tweets were non-political and I did not engage in any controversy, yet all my tweets/posts were marked as "may contain offensive content" and hidden. Despite contacting X and asking why this was the case, I never got a response. In the end, I deleted my account; I couldn't be bothered with the hypocrisy and double standards. To be fair, it has probably done me a favour, because the anti-vax QAnon conspiracy posts that kept getting suggested on my timeline were really taking the ****.
 
Last edited: