Does Oculus Rift have an underlying issue?

Caporegime
Joined
30 Jul 2013
Posts
28,943
Hand movement is way faster than head movement. The linear speed of the Vive's laser sweep at the edge of its tracking range is 15 ft × 2π ÷ (1 s / 60) ≈ 3,856 mph. You can't move your hand fast enough to change that by a meaningful percentage. You could basically tie a Vive controller to a string, whip it around as fast as you can, and not lose tracking.
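For reference, here's that back-of-the-envelope number checked in a few lines of Python (the 15 ft range and 60 Hz sweep rate are the post's own figures):

Code:
import math

RANGE_FT = 15        # tracking range quoted in the post
SWEEP_HZ = 60        # Lighthouse rotor: one full sweep per 1/60 s

# Tangential speed of the laser line at the edge of the tracking volume
circumference_ft = 2 * math.pi * RANGE_FT      # ~94.2 ft per revolution
speed_ft_s = circumference_ft * SWEEP_HZ       # ~5,655 ft/s
speed_mph = speed_ft_s * 3600 / 5280           # ft/s -> mph

print(f"{speed_mph:,.0f} mph")                 # ~3,856 mph, as stated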
The Rift's tracking system was initially optimized around tracking only a headset; even at fast head-movement speeds it loses optical lock and falls back purely to the IMU. Fast hand speeds are giving them a lot of trouble. Two forward-facing cameras let them re-identify the LEDs quickly and give them more signal-to-noise to work with in the edge-pixel data, which is why they are stuck with that layout for fast hand movements. By lowering the emit time of the LEDs they get a shorter exposure with less smear, but lose signal versus noise; they then make up for it by having two cameras in front instead of one. With opposing cameras you can slowly walk around the room and play a point-and-click-style adventure game in Oculus's opposing-sensor mode, as long as you don't need to grab things off the ground (for FOV reasons), but you can't do things like swing swords unless you are in the small area covered by both cameras.
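To see why the exposure time matters, here's a rough smear estimate. Every parameter is an assumption for illustration, not a published CV1 spec:

Code:
import math

# Rough smear estimate for an LED blob during one camera exposure.
# All figures below are illustrative assumptions, not CV1 specs.
H_RES_PX  = 1280      # assumed horizontal resolution
H_FOV_DEG = 100       # assumed horizontal field of view
DIST_M    = 2.0       # controller distance from the camera
HAND_M_S  = 5.0       # a fast hand swing

def smear_px(exposure_s):
    scene_width_m = 2 * DIST_M * math.tan(math.radians(H_FOV_DEG / 2))
    m_per_px = scene_width_m / H_RES_PX   # metres covered by one pixel
    return HAND_M_S * exposure_s / m_per_px

for exp_ms in (4.0, 0.5):
    print(f"{exp_ms} ms exposure -> {smear_px(exp_ms / 1000):.1f} px smear")
# Shorter exposure cuts the smear ~8x, but the LED also delivers ~8x less
# light per frame -- the signal-versus-noise trade the post describes.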
The vertical FOV is also low enough that you have to tilt the camera to switch from seated to standing.
Photodiodes in Lighthouse don't have the reacquisition problem: each photodiode knows which photodiode it is, whereas the Rift's Constellation system has to encode each LED's identifier in pulses over multiple frames. By having Touch visible from two offset front camera views, they can reacquire faster.
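A toy illustration of the difference (the 4-bit IDs and blink scheme here are hypothetical; the real Constellation encoding isn't public):

Code:
# Toy model: each Constellation LED blinks a unique bit pattern, one bit
# per camera frame, so re-identifying a blob takes several frames.
# (Hypothetical 4-bit IDs; the real encoding isn't public.)
LED_IDS = {0b1010: "LED A", 0b0110: "LED B"}   # pattern -> which LED
ID_BITS = 4

def reacquire(observed_bits):
    """Accumulate one bit per frame until the full pattern is seen."""
    acc = 0
    for frame, bit in enumerate(observed_bits, start=1):
        acc = (acc << 1) | bit
        if frame == ID_BITS:
            return LED_IDS.get(acc), frame

print(reacquire([1, 0, 1, 0]))   # ('LED A', 4): 4 frames at 60 Hz = ~67 ms
# A Lighthouse photodiode needs none of this: each diode is its own input
# channel, so its identity is known the instant the laser hits it.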
Source:
https://www.youtube.com/watch?v=asduqdRizqs&t=10m48s
Touch was delayed to put lots of computer vision engineers on the range problem caused by the above factors ("panic piled" on Touch "increasing the scale [range]")
Source:
https://www.youtube.com/watch?v=dRPn_LK2Hkc&t=4m30s
(edit: two forward-facing sensors were billed as a way of improving occlusion resistance for close hand-on-hand interaction, but with opposing sensors that have real range, like Lighthouse, you can simply stand in a corner without a sensor and look towards the middle of the room: bam, you now have two forward-facing sensors and all the same occlusion resistance.)

Thoughts?

https://www.reddit.com/r/oculus/com...ey_notch_have_you_tried_anything_from/d0hdhpt
 
Caporegime
OP
Joined
30 Jul 2013
Posts
28,943
Well, to be honest, I was only really going to use mine as a seated experience. It does sound a bit worrying that merely standing up (and, I assume, crouching down) might make the IR camera lose your position.

It also gives me concerns about how accurate the Touch controllers will be if you move them around quickly. Just because you're seated doesn't mean you won't need to move them around at speed.
 
Don
Joined
18 Oct 2002
Posts
22,775
Location
Wargrave, UK
This is why I think the Vive is the better solution. Having sensors on the devices and headset that detect when the IR from the Lighthouse base stations hits them is a much better approach than having IR LEDs on the devices and a camera that tracks them. It's also why the Rift needs so many USB 3 ports: each camera is essentially streaming HD video of the LEDs' locations, and the PC then has to process it.
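A rough bandwidth estimate shows why (the sensor resolution, frame rate and bit depth below are assumptions; Oculus hasn't published exact figures):

Code:
# Rough raw-video bandwidth for one tracking camera.
# Resolution/frame rate/bit depth are assumptions, not published specs.
WIDTH, HEIGHT = 1280, 960     # assumed sensor resolution
FPS           = 60
BYTES_PER_PX  = 1             # 8-bit greyscale

raw_mbit_s = WIDTH * HEIGHT * FPS * BYTES_PER_PX * 8 / 1e6
print(f"~{raw_mbit_s:.0f} Mbit/s per camera")   # ~590 Mbit/s

# USB 2.0 tops out at 480 Mbit/s (less in practice), so even one camera
# streaming raw frames pushes you into USB 3.0 territory.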
 
Don
Joined
24 Feb 2004
Posts
11,933
Location
-
Well, to be honest, I was only really going to use mine as a seated experience. It does sound a bit worrying that merely standing up (and, I assume, crouching down) might make the IR camera lose your position.

It also gives me concerns about how accurate the Touch controllers will be if you move them around quickly. Just because you're seated doesn't mean you won't need to move them around at speed.

I didn't have those issues when using the DK2; I think it's more of an issue for room-scale.
 
Associate
Joined
11 Nov 2003
Posts
1,696
Location
South Yorkshire
This is why I think the Vive is a better solution. Having sensors on the devices and headset that measure when the IR from the lighthouses hit them is a much better solution than having IR LEDs on the devices and a camera that tracks them. This is why the Rift needs so many USB3 ports because the camera is essentially taking a whole bunch of HD video of the location of the LEDs and then the PC has to process it.
Both solutions have their drawbacks. Lighthouse has moving parts (so it will wear out over time), has complexity in the sensor components, and has been known to suffer with reflective materials. Anecdotally it works better as a standing/moving-around solution than a seated one. FOV is limited by the arc that the laser sweeps.

Constellation has simpler components on the HMD and accessories: IR LEDs that flash an identifier. It has increased complexity in detection and pose tracking, as it's susceptible to IR light interference, but it suffers less from reflections (the software first looks at the pose, then looks at the identifiers to confirm orientation). Its biggest issue is the quality of the camera technology, and they have to overcome persistence/smear. That's also their get-out-of-jail-free card: they can improve the camera technology over time and even reduce the USB load through camera iterations, by doing the processing on board each camera and communicating pose either wirelessly or over USB 2.0. Anecdotally, the Rift works better as a seated solution. We don't yet know how well Touch will perform.
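To put a number on that get-out-of-jail-free card (blob count and payload layout below are my assumptions, not anything Oculus has described):

Code:
# If the camera found LED blobs on-board and sent only centroids, the
# USB load collapses. Payload layout here is assumed, not Oculus's.
NUM_BLOBS  = 40          # assumed visible LEDs across HMD + controllers
BYTES_EACH = 9           # e.g. 1-byte LED id + two 4-byte float coords
FPS        = 60

centroid_kbit_s = NUM_BLOBS * BYTES_EACH * FPS * 8 / 1e3
print(f"~{centroid_kbit_s:.0f} kbit/s")   # ~173 kbit/s vs ~590 Mbit/s raw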

Neither is perfect. Inside-out vs outside-in will be a debate that goes on way beyond first generation VR. There will always be FUD and misinformation about both systems, and it's only when we get them into our hands that we'll know the full story.
 
Soldato
Joined
12 Jan 2004
Posts
5,406
Location
London
Guess we'll find out soon.

Palmerluckey said:
Anything CB can do, CV1 can do better. Touch works with a single sensor, the additional sensor is to reduce occlusion and enable all kinds of interactions that just can't work with a single line of sight, no matter what system.
I am not playing 20 questions with someone who has an agenda. Too many times, I give perfectly straight answers, and it leads to people accidentally or maliciously misrepresenting what I say to support whatever their personal opinion is. Most of your questions are going to be answered or rendered irrelevant in the near future, I am not going to give you fuel for your crusade.

Quite the argument raging; I have no idea what's going on.

You have a fundamental misunderstanding of how sensor fusion works. Both the Rift AND the Vive use the IMU as the primary position-tracking system. It responds extremely quickly and updates at several hundred Hz (1000 Hz sampling, 500 Hz reporting). However, IMUs drift due to double integration of error; the drift is on the order of metres per second. So what both tracking systems do is squelch that error 60 times per second (both have a 60 Hz global position update rate), using their optical sensors to provide an absolute position reference.
For BOTH systems, high-speed position-tracking performance comes down ENTIRELY to IMU performance. It wouldn't be possible at all without another absolute reference system (optical, magnetic or otherwise), but it's the IMU that's doing the grunt work.
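A minimal 1-D sketch of that fusion loop. The rates match the post; the bias, noise and blend gain are made up, and both real systems use proper Kalman-style filters that also correct velocity and bias estimates, so their residual error is far smaller than this toy's:

Code:
import random

# Toy 1-D fusion: dead-reckon a biased, noisy IMU at 1000 Hz, then
# squelch the accumulated drift with a 60 Hz absolute optical fix.
IMU_HZ, OPTICAL_HZ = 1000, 60
DT = 1.0 / IMU_HZ

def read_imu_accel():
    # Stub: device is stationary, but the IMU has bias + noise (made up)
    return 2.0 + random.gauss(0.0, 0.5)

def read_optical_position():
    # Stub: the optical system reports the true position (0.0)
    return 0.0

pos = vel = raw_pos = raw_vel = 0.0
for i in range(IMU_HZ):                    # simulate one second
    a = read_imu_accel()
    vel += a * DT                          # first integration
    pos += vel * DT                        # second integration -> drift
    raw_vel += a * DT
    raw_pos += raw_vel * DT                # same IMU, never corrected
    if i % (IMU_HZ // OPTICAL_HZ) == 0:    # ~60 Hz absolute fix
        pos += 0.8 * (read_optical_position() - pos)

print(f"IMU alone drifted {raw_pos:.2f} m; fused error is {abs(pos):.3f} m")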
However, the IMU is even more important for the Vive than for the Rift. The Rift's Constellation cameras are genlocked: they capture a frame at the same point in time, so all marker positions are known at exactly the same moment. Lighthouse, though, is a scanning system: not only are the marker positions not known at the same point in time, you don't even get the X and Y positions at the same time; there is a 4 ms gap (4 scans per ~16 ms) between laser strikes for each sensor. If a controller is moving at a modest 1 m/s, then between laser strikes it moves 4 mm! While throwing a controller like a cricket ball is extremely ill-advised, a ~100 mph throw is about 45 m/s, or 180 mm between scans. Using the IMU data lets you update parts of the position (X or Y coordinates, or polar coordinates relative to the base station, depending on how Valve are doing their maths) independently of each other.
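Those between-sweep displacements check out (a sketch using the post's own figures):

Code:
# Displacement of a controller between successive Lighthouse sweeps,
# using the post's 4 ms sweep spacing.
SWEEP_GAP_S = 0.004

for speed_m_s, label in [(1.0, "modest move"), (45.0, "~100 mph throw")]:
    print(f"{label}: {speed_m_s * SWEEP_GAP_S * 1000:.0f} mm between sweeps")
# modest move: 4 mm; ~100 mph throw: 180 mm -- hence the IMU fills the gap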
As for Constellation having a 'smearing' issue: commercial optical mocap systems generally don't use active markers (though some do), but retroreflective markers with an illumination system adjacent to the camera lens. These relatively dim markers are still easily discriminable in all but the harshest conditions (e.g. outdoors in direct sunlight). If you're being clever with your blob tracking, you can even use the blob shape from the smear to provide an instantaneous velocity measurement, though it's generally easier to just shorten the exposure and make your markers brighter.
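The smear-as-velocity trick is just displacement over exposure time (a sketch; the streak length and exposure below are illustrative numbers):

Code:
# Instantaneous speed from a smeared blob: streak length in the scene
# divided by the exposure time. Figures are illustrative only.
def speed_from_smear(streak_len_m, exposure_s):
    return streak_len_m / exposure_s

print(speed_from_smear(0.01, 0.002))   # 10 mm streak over 2 ms = 5.0 m/s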

TL;DR: you aren't going to notice the faults in either HMD's tracking method.
 