Neuralink

Well, that's still pretty dubious - you're now talking about approximations. That might well be fine for physics simulations, but you're then introducing errors and excluding any outside event beyond that localised simulation, especially if this is getting down to simulating individual humans and how they end up behaving, etc.

Some of these humans are going to look at the stars - some subset of human relationships, babies born, engagements made, first dates, etc. might happen under said stars... now your closed simulation doesn't have them...

Oh but you'll perhaps now have to wiggle back a bit - will you approximate those stars?
Yes, because it's extremely unlikely their localised effects have any bearing on our galaxy/solar system, so they can be ignored. For example, their planets would be largely irrelevant to us and could be represented as single point masses if needed, rather than simulating the entire life and death of each planet.
A rough calculation works out when a star changes into a state that affects us through something other than light - e.g. a pulsar that could have us in its polar line of sight - and those events are then simulated in finer detail, and the "butterfly effect" they have on our solar system is calculated.
This way you can remove a vast amount of unnecessary calculation.
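
A minimal sketch of that level-of-detail idea, assuming a coarse pre-pass has already flagged which bodies matter in a given window (all names and thresholds here are mine, purely illustrative):

```python
from dataclasses import dataclass

# Illustrative level-of-detail scheduler: distant bodies stay point masses
# unless a coarse pre-pass says they affect us during this time window.

@dataclass
class Body:
    name: str
    mass_kg: float
    distance_ly: float
    affects_us: bool   # pre-pass flag, e.g. a pulsar beam sweeping past us

def fidelity(body: Body) -> str:
    """Pick how finely to simulate a body for the current window."""
    if body.affects_us:
        return "full"        # simulate its internal evolution in detail
    if body.distance_ly < 0.01:
        return "n-body"      # close enough that gravitational detail matters
    return "point-mass"      # a single mass is all our solar system sees

stars = [
    Body("Proxima", 2.4e29, 4.2, affects_us=False),
    Body("HypotheticalPulsar", 2.8e30, 900.0, affects_us=True),
]
for s in stars:
    print(s.name, "->", fidelity(s))
```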

The answer to these objections so far is basically a hand-waving one: magical things being possible because of future technology...
There is no other way. I cannot convince you that a 100% accurate simulation of an entire multiverse is possible, and you cannot convince me of the opposite. What I can say is that, given our limited knowledge of phenomena outside current science - e.g. dark matter, string theory (if that's still on the table) - whatever sits on the layer below quantum effects may make simulating them quickly more feasible, via compression or computational shortcuts that give the same results.
Not everything is equally weighted by quantum change. Sure, in Schrödinger's cat it's the driving force and heavily weighs on the outcome of that scenario. But for a rocky moon with very few isotopes, shielded from radiation by the sun or moon, there is much less random deviation over a long period that needs to be calculated - so you can cut whole chunks of solving out because they're not needed. For objects that are not static over much shorter durations, it is needed. The butterfly effect between those two extremes can be iterated on, such that it can be ruled out within N simulations of that localised area.

However, I do believe there is a non-zero chance that in the future we can simulate everything entirely, without shortcuts. I do not need to believe it to appreciate the probability of the idea. And so, by definition, I believe Roko's Basilisk is also possible - just less likely if we are only living in one universe. If it's a multiverse, then the chance that any conceivable action can happen has a much higher probability.
 
Both of your arguments seem to ignore any measured data to date. By the time AI comes along, we will have an insane amount of data describing our presence and the universe, to use as a benchmark for simulations.
A simple example to understand would be geomapping, which is adding huge amounts of detail to our knowledge of the earth.
Cosmologists are able to describe distant star types, masses, ages, and their planetary bodies just from colour and wobble.
What calculated assumptions could an AI that has gone beyond the singularity come up with if all it had access to was the same telescopes and satellites we have access to? Whatever they were, they would be better than ours, by definition. And it would have much more advanced tech.
So this measured / calculated information on the present is used as a "benchmark" for simulations to converge on.
Again, this is just like present-day reverse diagnostics in real-world scenarios.

A single inaccuracy could cause the timeline to diverge to a totally different path, eventually resulting in entirely different outcomes.
...which is then compared against the known metrics to date. An entirely different outcome is detected? Rewind to a previous milestone and re-evaluate (a rough sketch of this loop follows the list below).

No sun after 9b years? Rewind
No cooling earth? Rewind
No WW2? Rewind
No Musk launching a Tesla into space? Rewind.
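
A minimal sketch of that checkpoint-and-rewind loop, assuming hypothetical `step` and `matches_observations` stubs (everything here is illustrative, not a real implementation):

```python
import random

# Illustrative checkpoint-and-rewind: run forward, and whenever the
# simulated state stops matching the recorded benchmarks, rewind to the
# last confirmed milestone and retry.

def step(state: dict) -> dict:
    """Advance the simulation one era (stub: drifts a single value)."""
    return {"era": state["era"] + 1, "x": state["x"] + random.gauss(0, 1)}

def matches_observations(state: dict) -> bool:
    """Compare against the benchmark record for this era (stub tolerance)."""
    return abs(state["x"]) < 3.0

state = {"era": 0, "x": 0.0}
checkpoint = dict(state)
while state["era"] < 10:
    state = step(state)
    if matches_observations(state):
        checkpoint = dict(state)    # milestone confirmed: save it
    else:
        state = dict(checkpoint)    # divergence detected: rewind and retry
print("reached era", state["era"], "within tolerance")
```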

The real complications in the simulation have only occurred in the last few thousand years, where human consciousness has added an extreme complexity to the probabilities that did not exist before.
You would not be able to approximate anything or abstract anything. The second you do you introduce error. Then divergence.
See above. Divergence is expected in every simulation, even in today's simulations, and is overcome with parameterisation - a set of variables the solver can modify in order to rerun simulations until convergence.
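
A minimal sketch of such a parameterisation loop - the solver, free parameters, benchmark, and update rule are all hypothetical stand-ins:

```python
import random

# Illustrative parameterisation loop: perturb the free parameters and
# rerun until the output lands within tolerance of the benchmark.

BENCHMARK = 42.0    # stand-in for the observed "present day" metric
TOLERANCE = 0.5

def simulate(params: dict) -> float:
    """Stub solver whose output depends on the free parameters."""
    return params["a"] * 10 + params["b"]

params = {"a": 4.0, "b": 0.0}
for attempt in range(1000):
    error = simulate(params) - BENCHMARK
    if abs(error) < TOLERANCE:
        print(f"converged after {attempt} reruns: {params}")
        break
    # crude update: nudge a parameter against the error, plus some noise
    params["b"] -= 0.1 * error + random.gauss(0, 0.01)
```
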
Arguably, to simulate all possible universes - the outcome of every nondeterministic event at a sub-atomic level, across all matter in the universe - requires another new universe.
If you want to mandate only the sub-atomic level - i.e. protons and electrons - this would make the process much easier.
It just isn't grounded. You don't need to know everything about the future in order to highlight limitations with this sort of thing. If the main argument against those limitations is just hand-waving...
I'm not hearing your limitations closing any doors. Limitations always exist; engineers and scientists spend their lives overcoming them. Quite the opposite of hand-waving is the closed-minded attitude of dismissing an idea because you lack the mental capacity to grasp the numbers involved. I cannot comprehend the numbers involved, but neither can I comprehend the processing a modern CPU does every second. Yet we know it's possible.
...very implausible...
I think this is really the difference of the arguments. Your glass is half empty while mine is half full.
 
Ergo the AI would have to conclude that its simulation could not be verified, nor the results trusted.
Again, convergence. Even when you have the correct answer, you rerun the simulation - in fact you run it N times, as dictated by the user (there's no user here except the AI, so it decides when it's accurate enough), until the last N sims are within a tolerance of error of each other.
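
A minimal sketch of that stopping rule - N, the tolerance, and the `run_sim` stub are all hypothetical:

```python
import random

# Illustrative stopping rule: keep rerunning until the last N results
# all sit within a tolerance band of each other.

N = 5
TOLERANCE = 0.2

def run_sim() -> float:
    """Stub simulation whose results settle down across reruns."""
    run_sim.spread *= 0.8   # pretend each rerun is a little tighter
    return 42.0 + random.uniform(-run_sim.spread, run_sim.spread)
run_sim.spread = 5.0

results = []
while True:
    results.append(run_sim())
    window = results[-N:]
    if len(window) == N and max(window) - min(window) < TOLERANCE:
        break
print(f"consistent after {len(results)} runs: {window}")
```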

Then, as a final test, it simulates an action beyond the present day that it itself causes. It records the effect, then compares it with the same action taken in the real world. If these are aligned, I am sure it would be satisfied with the results (speaking on its behalf, of course). A rough sketch of that test follows.
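
A minimal sketch of that predict-then-verify test - the action name, the numbers, and both stubs are hypothetical:

```python
# Illustrative predict-then-verify: forecast the outcome of an action,
# actually take it, and accept the model only if the two agree.

def forecast(action: str) -> float:
    """Stub: the model's predicted effect of taking the action."""
    return {"launch_probe": 0.93}.get(action, 0.0)

def observe(action: str) -> float:
    """Stub: the measured effect after taking the action for real."""
    return 0.91

action = "launch_probe"
aligned = abs(forecast(action) - observe(action)) < 0.05
print("model validated" if aligned else "back to simulating")
```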
 
Not really, not for what you're talking about.
Please be more specific about what is "not" really possible. And if you mean knowledge of the universe as it is, please show how you come to the conclusion that a future AI would not, or could not, have enough information.

you'd be rewinding many, many times - in fact you'd be uncertain about other events; it's not like all these human actions are even known. Again, you're completely glossing over the sensitivity of the model here.
Non-issue. Time and the quality of the model are factors for the AI to consider.

No it wouldn't - it makes the search space vast, and the point is that even one simulation isn't necessarily feasible, let alone the huge number required here.
Again with the adverbs of uncertainty. You have no constructive argument or rebuttal to make here.

In fact we're going round in circles. You two are trying to pick the same holes and failing, while offering absolutely nothing positive to the discussion.
 
How does the AI know it has the correct answer (absolutely, 100% correct)?

Where does it get the comparison data to make that determination?

Why would you run the simulation again with the same inputs? Why would you change the inputs if you thought you had the correct answer?

How do you know that your simulation won't diverge in future, even if it is accurate (that you know of) up to the present?

How does the AI get data about everybody's actions? How does it become all-seeing?
If you can't infer answers to those from my last post, I'm not holding your hand. Make some assumptions of your own, fgs man - have a little think before you punch the keyboard in anger. You are like a child that constantly asks "why". I think the correct response from a tired parent is "because".
 
Aaaaaand we're done here. It's been occasionally fun.
I've offered answers to questions you didn't ask, and answered questions in multiples when you did.

Your ambition is to keep asking the simplest and most mundane questions in a bid to catch me out.

Let's continue, but rephrase each of those lines in a "x isn't possible because y" fashion. Let's see if you can actually provide substance to your naysaying.

edit: Also.. just keeping this here since I missed it due to the edit
e: Also... **** you, future omnipotent AI. You wanna start something? Yeah? Come at me bro...

 
Because it doesn't exist anymore. Every human that has existed... not just that, but all their interactions, down to the smallest decisions, etc.
This isn't a story you're telling where you reveal more of the plot as time goes by. Please explain yourself better. What is "it"? Why is every human that ever existed consequential?

Think about this:
I can walk to town via the country road, or I can walk to town via the suburban road. I get to the same place at the same time, and everything that happens after is the same. There are two universes: one where I walked the country road, the other where I walked the suburban route. Those two existences have not converged, but they follow the exact same set of events until the death of the universe. The day after I walked to town, the AI was born, and it began simulating. Given it wants to kill the humans who didn't support it, it doesn't care about the quantum changes of the wind on the suburban route vs the country road.

Are you saying that the above is NOT possible? Why?

If you are not saying that, then why do you mandate that every single thing must be the same every time, when our only concern is that the AI assesses the loyalty of the people alive at the time it decided to judge mankind? (Remember, that's the point of Roko's Basilisk.)

If every permutation of existence is possible, then every single permutation must have some chance of reaching our modern-day world. Let's say the big bang false-started in another universe: it exploded, then imploded, then finally exploded again. It is now behind our universe in time. Now let's say a few factors in the formation of the Milky Way, Sol, Earth, and everything on it allowed it to catch up to us perfectly. Is this not possible? Why not?

There are myriad permutations between those two extremes where we end up with unlimited identical Earths, all doing the same thing at the same time, but having arrived at the modern existence in slightly different ways. Now the AI sims 30 seconds into the future and compares it against the 30 seconds that just happened.

edit:
You have no constructive argument to make other than "magic"/hand-waving as a reason to overcome the obvious objections.
Everything we are discussing in this thread is here because I brought it to the table. :rolleyes:
 
Is it? Maybe on the other road you witness a murder. Maybe you find £5 on the road that you wouldn't have found taking the other route.

Etc, etc, ad infinitum.
Lol, you're a nightmare. Respond directly to my other posts rather than confirming my points about you in the post above.

So the implication in my scenario was that nothing at all was different other than the route taken:

No person or animal was affected by me - meaning no one saw, heard, or smelled me.
No footprints or other traces were left on my route that could affect anything.

Is that possible? Yes

Will the universes be exactly the same? No.

Could they re-converge? (Clue: Yes)
 
Because they each have an impact, or potential impact, on the model. Remove one human that reproduced and you change the entire chain below.
Absolutely, but only within a small enough time span. Nothing stops other universes reconverging on the same storyline as ours, even if Hitler and Genghis Khan were swapped around.
And there are many more where you don't... and it's way more complicated than just approximating some solution - every approximation you make introduces errors, yet this thing is supposed to operate at the level of granularity of being able to tell who did what re: supporting it...

You're just putting forth another argument for approximations while ignoring how sensitive this sort of model potentially is.
You keep dismissing the benchmark. It doesn't matter how many errors occur during one simulation. It resets and continues until it gets the same result as the observations it's making in its present day.
 
SNIP....

By taking route B instead of route A, a wet leaf gets stuck to your shoe. It falls off at the top of a flight of steps near that building you were heading to. Five minutes later a woman treads on the leaf, slips down the stairs, and breaks her neck.
I am not relevant in this context; I used myself as a vehicle in this analogy. If that woman was not alive in the AI's sim, no problem. If she was... rewind, resolve.
 
but we're not talking about a general storyline - you're talking about actual individual people, getting down to the level of their decisions and determining whether they sufficiently helped... there are plenty of people descended from Khan, for example...
I expect FoxEye to dig into an analogy, but not you.

But ok. I didn't say Genghis Khan died. Nor that he didn't sleep with the same women he did in the other universe. Wanna continue?

That's flawed - you don't need a complicated example to demonstrate what overfitting is...
I need any example to understand what overfitting is, because it's a new word to me. Is this what you forum boys do to make your arguments work, or something?
 
Here's a question for you.

How many potential universes have the same "observed" state at any one point in time, but go on to diverge in the future? I'll take the easy option and say infinite.

Assuming the AI cannot observe every single sub-atomic particle and energy source in the real universe, it cannot know that the state is the same (and therefore that no future divergence is possible).

The AI's observation will be limited by the data it can receive from the sensors it is linked to.
The last part has been addressed more than twice now.
 
Well your entire argument has been, "In future we'll have invented stuff to make this all possible."

So yeah, I'm an idiot. But only for keeping this conversation going :p
You just wanted to share your 24,000th post with me. :)
 
"overfitting"....
Any opportunity to run away, huh?
And this isn't ML. The machine has already learned.

FYI, in the context of "mathematical" overfitting, which I must assume you are referring to: no. There is no overfit. There is simulate, simulate, and keep simulating some more till you get consistent results. No need to fit, let alone overfit, when you have enough data.

Use overfitting in a better, more relevant context next time. I personally have no data to fit. Only ideas.
 
simulating till you get consistent results - and how do you measure whether you've got consistent results... you're essentially talking about fitting the model...
Again, the AI has access to the modern world. That is the "curve" to "fit".
 
Yes, even assuming that, the issue is still there... Given you've only just found out what overfitting is then.....
I didn't know what overfitting was because I've never had to fudge results. And your usage of it was even less clear, since we were not talking about data points. I saw you try an XY-graph analogy on page 3 (4?) and was glad you didn't latch on to that.

Do you wanna continue this discussion in the equivalent of 2D charts? Can try if that suits you better.
 
How are we not when you said:

"FYI in the context of "mathematical" overfitting which I must assume you are referring to: no. There is no overfit. There is simulate, simulate, and keep simulating some more til you get consistent results. No need to fit,"
I've filled in the bits you chopped off my quote - the bits that came after you were called out. A carbon-copy response of you not admitting you were wrong, just like in my previous call-outs.

You had 3 options: stop replying, reply honestly, or reply like a liar trying to win.

What's wrong with you? Is this your life? PM me - I will edit this post and concede to you, if winning means that much to you.
 
What exactly am I lying about?
Anyone who gives a **** can read the thread and help you out. Otherwise, grab a counsellor.

You’ve not answered the question - how do you measure consistency?
Not sure how many times I referred to this during the thread, but I did it a lot. The way to measure consistency between simulations is convergence.
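
A minimal sketch of one way to quantify that - the metric and the epsilon are mine, illustrative only:

```python
# Illustrative convergence measure: the run-to-run relative change between
# successive simulation outputs; consistent once it drops below epsilon.

def converged(prev: float, curr: float, eps: float = 1e-4) -> bool:
    """True when successive runs agree to within a relative epsilon."""
    return abs(curr - prev) <= eps * max(abs(prev), 1e-12)

print(converged(42.000, 42.020))  # False: results still moving
print(converged(42.020, 42.021))  # True: runs have settled
```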
 