Neuralink

Ergo the AI would have to conclude that its simulation could not be verified, nor the results trusted.
Again, convergence. Even when you have the correct answer, you rerun the simulation; in fact you run it N times, as dictated by the user (there is no user here except the AI, so it decides when it's accurate enough), until the last N sims are within a tolerance of error of each other.

Then, as a final test, it simulates an action beyond the present day that it itself causes. It records the effect, then compares it with the same action it takes in the real world. If these are aligned, I am sure it would be satisfied with the results (speaking on its behalf, of course).
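For concreteness, a minimal sketch of the convergence loop being described, in Python; run_sim(), the window size n, and the tolerance are invented stand-ins for illustration, not anything specified above:

import random

def run_sim():
    # Hypothetical stand-in for one full simulation run: here just a
    # noisy estimate of some fixed quantity, so the loop below has
    # something to converge on.
    return 42.0 + random.gauss(0, 0.001)

def run_until_stable(n=5, tol=0.01):
    # Rerun the sim until the last n results agree within tol of each
    # other: the "within a tolerance of error" test described above.
    results = []
    while True:
        results.append(run_sim())
        window = results[-n:]
        if len(window) == n and max(window) - min(window) <= tol:
            return window[-1]

print(run_until_stable())

The final test described above is then a separate check: predict one further step, act, and compare the prediction against what actually happens.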
 
Not really, not for what you're talking about.
Please be more specific about what is "not" really. And if you mean knowledge of the universe as it is, please show how you come to the conclusion that a future AI would not, or could not, have enough information.

You'd be rewinding many, many times; in fact you'd be uncertain re: other events - it's not like all these human actions are even known. Again, you're completely glossing over the sensitivity of the model here.
Non-issue. Time and quality of the model are factors for the AI to consider.

No it wouldn't; it makes the search space vast, and the point is that even one simulation isn't necessarily feasible, let alone the huge number required here.
Again with the adverbs of uncertainty. You have no constructive argument or rebuttal to make here.

In fact we're going round in circles. You two are trying to pick the same holes and failing, while offering absolutely nothing positive to the discussion.
 
Again, convergence. Even when you have the correct answer, you rerun the simulation; in fact you run it N times, as dictated by the user (there is no user here except the AI, so it decides when it's accurate enough), until the last N sims are within a tolerance of error of each other.

Then, as a final test, it simulates an action beyond the present day that it itself causes. It records the effect, then compares it with the same action it takes in the real world. If these are aligned, I am sure it would be satisfied with the results (speaking on its behalf, of course).
How does the AI know it has the correct answer (absolutely 100% correct)?

Where does it get the comparison data to make that determination?

Why would you run the simulation again with the same inputs? Why would you change the inputs if you thought you had the correct answer?

How do you know that your simulation won't diverge in future, even if it is accurate (that you know of) up to the present?

How does the AI get data about everybody's actions? How does it become all-seeing?
 
How does the AI know it has the correct answer (absolutely 100% correct)?

Where does it get the comparison data to make that determination?

Why would you run the simulation again with the same inputs? Why would you change the inputs if you thought you had the correct answer?

How do you know that your simulation won't diverge in future, even if it is accurate (that you know of) up to the present?

How does the AI get data about everybody's actions? How does it become all-seeing?
If you can't infer answers to those from my last post, I'm not holding your hand. Make some assumptions of your own, fgs man; have a little think before you punch the keyboard in anger. You are like a child that constantly asks why. I think the correct response from a tired parent is "because".
 
If you can't infer answers to those from my last post, I'm not holding your hand. Make some assumptions of your own, fgs man; have a little think before you punch the keyboard in anger. You are like a child that constantly asks why. I think the correct response from a tired parent is "because".
Aaaaaand we're done here. It's been occasionally fun.
 
Aaaaaand we're done here. It's been occasionally fun.
I've offered answers to questions you didn't ask, and answered questions in multiples when you did.

Your ambition is to keep asking the simplest, most mundane questions in a bid to catch me out.

Let's continue, but rephrase each of those lines in an "x isn't possible because y" fashion. Let's see if you can actually provide substance to your naysaying.

edit: Also... just keeping this here, since I missed it due to the edit:
e: Also... **** you, future omnipotent AI. You wanna start something? Yeah? Come at me bro...

 
Please be more specific about what is "not" really. And if you mean knowledge of the universe as it is, please show how you come to the conclusion that a future AI would not, or could not, have enough information.

Because it doesn't exist anymore. Every human that has ever existed... not just that, but all their interactions, down to the smallest decisions, etc.

Non-issue. Time and quality of the model are factors for the AI to consider.

Not at all - it's something rather obvious to consider when looking at the feasibility of the idea in the first place.

Again with the adverbs of uncertainty. You have no constructive argument or rebuttal to make here.

In fact we're going round in circles. You two are trying to pick the same holes and failing, while offering absolutely nothing positive to the discussion.

You have no constructive argument to make other than "magic"/hand-waving as a reason to overcome the obvious objections.
 
Because it doesn't exist anymore. Every human that has ever existed... not just that, but all their interactions, down to the smallest decisions, etc.
This isn't a story you're telling where you reveal more of the plot as time goes by. Please explain yourself better. "It" is what? Why is every human that ever existed consequential?

Think about this:
I can walk to town via the country road, or I can walk to town via the suburban road. I get to the same place at the same time, and everything that happens after is the same. There are two universes: one where I walked the country road, the other where I walked the suburban route. Those two existences have not converged, but they follow the exact same set of events until the death of the universe. The day after I walked to town, the AI was born. And it began simulating. Given it wants to kill the humans who didn't support it, it doesn't care about the quantum changes of the wind on the suburban route vs the country road.

Are you saying that the above is NOT possible? Why?

If you are not saying that, then why do you mandate that every single thing must be the same every time, when our only concern is that the AI assesses the loyalty of the people alive at the time it decided to judge mankind? (Remember, that's the point of Roko's Basilisk.)

If every permutation of existence is possible, then every single permutation must have some chance to reach our modern-day world. Let's say the Big Bang false-started in another universe - it exploded, then imploded, then finally exploded again. It is now behind our universe in time. Now let's say a few factors in the formation of the Milky Way, Sol, Earth, and everything on it allowed it to catch up to us perfectly. Is this not possible? Why not?

There are myriad permutations in the middle of those two extremes where we end up with unlimited identical Earths, all doing the same thing at the same time, but which came to the modern existence in slightly different ways. Now the AI sims 30 seconds into the future and compares against the 30 seconds that just happened.

edit:
You have no constructive argument to make other than "magic"/hand-waving as a reason to overcome the obvious objections.
Everything in this thread we are discussing is because I brought it to the table. :rolleyes:
 
This isn't a story you're telling where you reveal more of the plot as time goes by. Please explain yourself better. "It" is what? Why is every human that ever existed consequential?

Think about this:
I can walk to town via the country road, or I can walk to town via the suburban road. I get to the same place at the same time, and everything that happens after is the same.
Is it? Maybe on the other road you witness a murder. Maybe you find £5 on the road that you wouldn't find taking the other route.

Etc, etc, ad infinitum.

The decision of which route to take could quite literally be the difference between your life ending on that day or carrying on as normal.

If the AI has to "abstract" away your route to process the simulation quicker, then you've got no hope of any kind of accuracy at all.
 
This isn't a story you're telling where you reveal more of the plot as time goes by. Please explain yourself better. "It" is what? Why is every human that ever existed consequential?

Because they each have an impact, or potential impact, on the model. Remove one human that reproduced and you change the entire chain below.

[...]
There are myriad permutations in the middle of those two extremes where we end up with unlimited identical Earths, all doing the same thing at the same time, but which came to the modern existence in slightly different ways. Now the AI sims 30 seconds into the future and compares against the 30 seconds that just happened.

And there are many more where you don't... and it's way more complicated than just approximating some solution - every approximation you make introduces errors, yet this thing is supposed to be at the level of granularity of being able to tell who did what re: supporting it...

You're just putting forth another argument for approximations while ignoring how sensitive this sort of model potentially is.
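As an aside, the sensitivity both sides keep circling can be demonstrated with a toy chaotic system: in the logistic map, a perturbation of one part in a trillion, standing in for one "abstracted away" detail, swamps the state within a few dozen steps. A rough Python sketch, purely illustrative:

def logistic(x, steps, r=4.0):
    # Iterate the chaotic logistic map x -> r * x * (1 - x).
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.3, 50)          # "true" initial condition
b = logistic(0.3 + 1e-12, 50)  # same run with a tiny approximation error
print(a, b, abs(a - b))        # the two trajectories no longer agree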
 
Is it? Maybe on the other road you witness a murder. Maybe you find £5 on the road that you wouldn't find taking the other route.

Etc, etc, ad infinitum.
Lol, you're a nightmare. Respond directly to my other posts rather than confirming my points about you in the above post.

So the implication in my scenario was that nothing at all was different other than the route taken:

No people or animals were affected by me - meaning nothing saw, heard, or smelled me.
No footprints or other traces were left on my route that could affect anything.

Is that possible? Yes

Will the universes be exactly the same? No.

Could they re-converge? (Clue: Yes)
 
Lol, you're a nightmare. Respond directly to my other posts rather than confirming my points about you in the above post.

So the implication in my scenario was that nothing at all was different other than the route taken:

No people or animals were affected by me - meaning nothing saw, heard, or smelled me.
No footprints or other traces were left on my route that could affect anything.

Is that possible? Yes

Will the universes be exactly the same? No.

Could they re-converge? (Clue: Yes)
Key point: the AI doesn't know ahead of time whether its abstraction will cause errors/divergence, and neither do you.

Is it even likely that there will be no difference whichever route you took? From your perspective there might be no difference.

But think of this. You might by taking the alternative route step on a snail, killing it. The snail might not then get eaten by a hedgehog, which finds itself looking for food elsewhere. That causes the hedgehog to be run over by an animal lover. This causes the animal lover to take the animal to a vet, missing his date with his new girlfriend.

From your perspective nothing at all was different. You witnessed no murder. You found no £5. You got to your destination and carried on as you would have otherwise (had you taken the other route).

However your actions in taking the alternative route completely changed someone else's day.

Now you say, "No people or animals were affected by me." You just aren't thinking small enough. You will have affected the lives of countless micro-organisms whichever route you took.

Ever heard of the "Butterfly Effect"? Something inconsequential to you - perhaps not even observable by you - can be the cause of a chain reaction which sets in place an entirely different outcome on the other side of the world.

e: OK let's remove animals and other life entirely.

By taking route B instead of route A, a wet leaf gets stuck to your shoe. It falls off at the top of a flight of steps near that building you were heading to. Five minutes later a woman treads on the leaf, slips down the stairs and breaks her neck.
 
Because they each have an impact, or potential impact, on the model. Remove one human that reproduced and you change the entire chain below.
Absolutely, but only within a small enough time span. Nothing is stopping other universes reconverging on the same storyline as ours, even if Hitler and Genghis Khan were swapped around.
And there are many more where you don't... and it's way more complicated than just approximating some solution - every approximation you make introduces errors, yet this thing is supposed to be at the level of granularity of being able to tell who did what re: supporting it...

You're just putting forth another argument for approximations while ignoring how sensitive this sort of model potentially is.
You keep dismissing the benchmark. It doesn't matter how many errors occur during one simulation. It resets and continues until it gets the same result as the observations it's making in its present day.
 
Absolutely, but only within a small enough time span. Nothing is stopping other universes reconverging on the same storyline as ours, even if Hitler and Genghis Khan were swapped around.

But we're not talking about a general storyline - you're talking about actual individual people, getting down to the level of their decisions and determining whether they sufficiently helped... there are plenty of people descended from Khan, for example...

You keep dismissing the benchmark. It doesn't matter how many errors occur during one simulation. It resets and continues until it gets the same result as the observations it's making in its present day.

That's flawed - you don't need a complicated example to demonstrate what overfitting is...
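Since the term gets queried further down, a toy illustration of overfitting: a model can be adjusted until it matches every past observation exactly and still badly miss the next one. A sketch in Python (numpy assumed; the data is invented for illustration):

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + np.array([0.3, -0.2, 0.4, -0.1, 0.2])  # trend 2x plus noise

# A degree-4 polynomial through 5 points fits the past perfectly,
# i.e. it "gets the same result as the observations" so far.
model = np.polyfit(x, y, 4)
print(np.polyval(model, x) - y)  # residuals on past data: ~zero

# Extrapolated one step ahead, it misses the underlying trend badly
# (roughly 17 predicted here versus 10 from the trend).
print(np.polyval(model, 5.0), 2 * 5.0)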
 
SNIP....

By taking route B instead of route A, a wet leaf gets stuck to your shoe. It falls off at the top of a flight of steps near that building you were heading to. Five minutes later a woman treads on the leaf, slips down the stairs and breaks her neck.
I am not relevant in this context. I used myself as a vehicle in this analogy. If that woman was not alive in the AI's sim, np. If she was... rewind, resolve.
 
Absolutely, but only within a small enough time span. Nothing is stopping other universes reconverging on the same storyline as ours, even if Hitler and Genghis Khan were swapped around.
You keep dismissing the benchmark. It doesn't matter how many errors occur during one simulation. It resets and continues until it gets the same result as the observations it's making in its present day.
Here's a question for you.

How many potential universes have the same "observed" state at any one point in time, but go on to diverge in future?

Assuming the AI cannot observe every single sub-atomic particle and energy source in the real universe, it cannot know that the state is the same (and therefore cannot rule out future divergence).

The AI's observation will be limited by the data it can receive from the sensors it is linked to.
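That question can be made concrete with the same toy map used earlier: two underlying states that a finite-precision sensor reports as identical can end up observably different later. Again a rough, purely illustrative Python sketch:

def step(x, r=4.0):
    # One step of the chaotic logistic map.
    return r * x * (1 - x)

def sensor(s):
    # A sensor that only resolves six decimal places.
    return round(s, 6)

a, b = 0.3, 0.3 + 1e-9         # two distinct microstates...
assert sensor(a) == sensor(b)  # ...with the same "observed" state

for _ in range(40):
    a, b = step(a), step(b)
print(sensor(a), sensor(b))    # now observably divergent futures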
 
But we're not talking about a general storyline - you're talking about actual individual people, getting down to the level of their decisions and determining whether they sufficiently helped... there are plenty of people descended from Khan, for example...
I expect FoxEye to dig into an analogy, but not you.

But ok. I didn't say Genghis Khan died. Nor that he didn't sleep with the same women he did in the other universe. Wanna continue?

That's flawed - you don't need a complicated example to demonstrate what overfitting is...
I need any example to understand what overfitting is, because it's a new word to me. Is this what you forum boys do to make your arguments work, or something?
 
I am not relevant in this context. I used myself as a vehicle in this analogy. If that woman was not alive in the AI's sim, np. If she was... rewind, resolve.
How can anyone not be in the sim?

Argh, I should have quit this thread earlier, it's nonsense.

It's magic. It's a McGuffin, and I'm stupid for carrying on with this conversation.
 
Here's a question for you.

How many potential universes have the same "observed" state at any one point in time, but go on to diverge in future? I'll take the easy option and say: infinite.

Assuming the AI cannot observe every single sub-atomic particle and energy source in the real universe, it cannot know that the state is the same (and therefore cannot rule out future divergence).

The AI's observation will be limited by the data it can receive from the sensors it is linked to.
The last part has been addressed more than twice now.
 