Stereo 3D stuff

Advanced OpenGL source port fork from ZDoom, picking up where ZDoomGL left off.

Moderator: Graf Zahl

Graf Zahl
GZDoom Developer
Posts: 7148
Joined: Wed Jul 20, 2005 9:48
Location: Germany
Contact:

Stereo 3D stuff

Post by Graf Zahl »

Looking at the code to check how to implement colormaps as a postprocessing step, I stumbled over this.

I cannot shake the feeling that this doesn't really work as intended. For example, with invulnerability on, the colors do not seem correct; there is always a strong emphasis on the color for the left eye. I also suspect that other color manipulation, e.g. the tonemap, clobbers the intended setup.

But since I do not have any means to test this, I have to ask someone else to check whether this handles everything correctly. I'm not really sold on the implementation anyway; to do everything correctly, I believe this should create both views in separate render buffers and compose the final image after everything else has been done.
Rachael
Developer
Posts: 3646
Joined: Sat May 13, 2006 10:30

Re: Stereo 3D stuff

Post by Rachael »

Does Biospud even post here? He really should, because this code is pretty much his brainchild.
biospud
Developer
Posts: 31
Joined: Fri Oct 11, 2013 0:49

Re: Stereo 3D stuff

Post by biospud »

Sorry your post did not come to my attention until just this moment.

I confess I probably never carefully tested the interaction between the colored anaglyph stereo 3D modes and effects such as invulnerability and the radiation suit. I'll try testing these with one of the latest builds.

My current implementation of the anaglyph 3D stereo modes uses glColorMask() to render the left- and right-eye views into a single framebuffer. Other implementations are possible. For example, two full-color views could be rendered into separate textures, one for each eye. Then these two textures could be combined in a separate pass, using a shader customized to the particular stereoscopic 3D mode. This latter approach would be more compartmentalized and easier to reason about, but might have poorer rendering performance. The idea of using a custom shader for each stereoscopic 3D mode is very appealing.
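As an illustration of the glColorMask() approach, here is a small sketch (not GZDoom code; the mode names and the left/right channel assignments are illustrative and may not match the actual eye assignments) of the per-eye write masks that would be handed to glColorMask() before drawing each eye's view:

```cpp
#include <array>

// Hypothetical sketch: per-eye RGBA write masks for a few anaglyph modes.
// In the real renderer these four booleans would be passed to glColorMask()
// before rendering that eye's view into the shared framebuffer.
enum class Anaglyph { RedCyan, GreenMagenta, BlueYellow };

// Returns {writeR, writeG, writeB, writeA} for the given eye (0 = left, 1 = right).
std::array<bool, 4> EyeColorMask(Anaglyph mode, int eye)
{
    switch (mode) {
    case Anaglyph::RedCyan:      // left eye keeps red; right eye keeps green+blue
        return eye == 0 ? std::array<bool, 4>{true, false, false, true}
                        : std::array<bool, 4>{false, true, true, true};
    case Anaglyph::GreenMagenta: // left eye keeps green; right eye keeps red+blue
        return eye == 0 ? std::array<bool, 4>{false, true, false, true}
                        : std::array<bool, 4>{true, false, true, true};
    default:                     // BlueYellow: left keeps red+green (amber); right keeps blue
        return eye == 0 ? std::array<bool, 4>{true, true, false, true}
                        : std::array<bool, 4>{false, false, true, true};
    }
}
```

Because both eyes write into the same buffer, anything that later remaps colors uniformly across the whole image (invulnerability, tonemapping) destroys the per-channel separation, which is exactly the problem discussed here.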
biospud
Developer
Posts: 31
Joined: Fri Oct 11, 2013 0:49

Re: Stereo 3D stuff

Post by biospud »

Graf Zahl wrote: ...But since I do not have any means to test this, I have to ask someone else to check if this handles everything correctly. I'm not really sold on the implementation anyway, to do everything correctly I believe this should create both views in a separate render buffer and compose the final image after everything has been done.
OK I tested with build http://devbuilds.drdteam.org/gzdoom/gzd ... 8b7a87f.7z and found several problems with stereoscopic 3D.
* The most useful 3D mode, quad-buffered, is broken. There is something wrong with the setting and clearing of the GL_BACK, GL_BACK_LEFT, and GL_BACK_RIGHT buffers. So not only does this mode not work correctly, it leaves a persistent static overlay/underlay showing, even after returning to other 3D modes.
* The colored anaglyph modes do not interact correctly with the invulnerability effect. I'm unsure whether this ever worked correctly. The final composed view should display colored fringes where the two eye views differ, but I only see the grayscale invulnerability colors. The stereoscopic composition needs to happen AFTER the invulnerability effect has been applied.

You are correct that the implementation needs to change. The general 3D rendering flow would need to proceed as follows:

For each display frame:
1- Activate the left-eye render buffer
a- Render the scene using left-eye-specific adjustments to the projection matrix, view matrix, cross-hair position, and weapon position
b- Apply all post-processing steps, including color maps, blending, ambient occlusion, etc.
2- Activate the right-eye render buffer
a- Render the full scene again, using right-eye-specific adjustments to the projection matrix, view matrix, cross-hair position, and weapon position
b- Apply all post-processing steps, including color maps, blending, ambient occlusion, etc.
3- Activate the primary display (in the case of quad-buffered stereo, it probably needs to be the real hardware display. Other modes could render to a render buffer, if there is a real use case for this)
a- Compose the left and right eye views using a method specific to the particular 3D mode

In the case of the 3D modes "Mono", "Left eye only", and "Right eye only" we should avoid the extra composition render pass, and render directly to the primary display, for best performance.
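The flow above can be sketched as a loop; logging stubs stand in for the real rendering work, and the function and message names are made up for illustration:

```cpp
#include <string>
#include <vector>

// Sketch of the proposed per-eye flow. Each eye renders into its own buffer
// with eye-specific matrices, gets fully postprocessed, and only then are the
// eyes composed. Single-viewpoint modes skip the composition pass entirely.
std::vector<std::string> RenderFrame(int eyeCount)
{
    std::vector<std::string> log;
    for (int eye = 0; eye < eyeCount; ++eye) {
        log.push_back("bind eye buffer " + std::to_string(eye));
        log.push_back("render scene for eye " + std::to_string(eye)); // eye-specific projection/view/crosshair/weapon
        log.push_back("postprocess eye " + std::to_string(eye));      // colormaps, blending, ambient occlusion, etc.
    }
    if (eyeCount > 1)
        log.push_back("compose eyes to display");  // mode-specific composition pass
    else
        log.push_back("present directly");         // Mono / Left only / Right only fast path
    return log;
}
```

Note that the loop naturally generalizes beyond two eyes, which matters for the lenticular poster idea below.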

Because we are rethinking the architecture here, I'll mention some forward-looking possibilities that we might want to leave room for:
* I'd like to eventually have an advanced screenshot mode for creating lenticular 3D posters. This would involve rendering and composing about 18 slightly different viewpoints at the same moment in the playsim. So the number of viewpoints might not always be "1" or "2".
* For efficiency and correctness, some 3D modes might be best rendered into per-eye render buffers with unusual pixel widths, heights, and aspect ratios. So the size and shape of the buffer into which the scene is rendered might not always match the official final display geometry.

I'll study the current architecture and think about how to proceed.
Graf Zahl
GZDoom Developer
Posts: 7148
Joined: Wed Jul 20, 2005 9:48
Location: Germany
Contact:

Re: Stereo 3D stuff

Post by Graf Zahl »

The invulnerability effect at the moment cannot possibly work with how this is implemented because it's part of the postprocessing. But since this needs to be changed anyway I did not bother.

My idea would be, in addition to your list

- 3D stereo REQUIRES active renderbuffers. No renderbuffers, no 3D stereo. Otherwise it'd become too messy.
- The renderbuffer framework will provide as many buffers as are needed here. This needs to be checked if it is capable of that.

Once that is done, the views can be composed in the main screenbuffer; this can be done by a special present shader which masks out the unwanted color channels.
For one-viewpoint modes, the loop will only run once and the regular present shader will be used.
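Per pixel, such a channel-masking present shader amounts to picking each output channel from whichever eye owns it. A minimal sketch of the red/cyan case in C++ (the real thing would be GLSL sampling the two eye textures; the function name is made up):

```cpp
// Per-pixel sketch of a red/cyan anaglyph compose: the output red channel
// comes from the left eye, green and blue from the right eye. A real present
// shader would apply the same selection to two full-screen eye textures.
struct RGB { float r, g, b; };

RGB ComposeRedCyan(RGB left, RGB right)
{
    return RGB{ left.r, right.g, right.b };
}
```

Because this selection happens after all postprocessing, effects like invulnerability tint each eye view first and the colored fringes between the views survive into the final image.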


(BTW, seeing how this eats video memory, I have to wonder how this will affect mods which are creating 1+GB of model vertex buffer data... ;) )
biospud
Developer
Posts: 31
Joined: Fri Oct 11, 2013 0:49

Re: Stereo 3D stuff

Post by biospud »

Thank you for the thoughtful feedback.
Graf Zahl wrote: - 3D stereo REQUIRES active renderbuffers. No renderbuffers, no 3D stereo. Otherwise it'd become too messy.
Yes; and not just to avoid mess, renderbuffers would seem to be required for correct rendering now. By the way, is there a precedent for dynamically changing the options menu items, to only display options that are available at the moment?
Graf Zahl wrote: - The renderbuffer framework will provide as many buffers as are needed here. This needs to be checked if it is capable of that.
Do you mean that the hardware capabilities need to be checked at runtime? Or do you mean rather that we developers need to study the current gzdoom renderbuffer architecture to ensure it is capable of providing renderbuffers for this purpose?
Graf Zahl wrote: Once that is done they can be composed in the main screenbuffer, this can be done by a special present shader which masks out the unwanted color channels.
For one-viewpoint modes, the loop will only run once and the regular present shader be used.

(BTW, seeing how this eats video memory, I have to wonder how this will affect mods which are creating 1+GB of model vertex buffer data... ;) )
In the case of quad-buffered 3D stereo, the final present shader would need to be invoked twice, once for the GL_BACK_LEFT buffer and once for the GL_BACK_RIGHT buffer, to make use of this old OpenGL API. Or perhaps GL_LEFT and GL_RIGHT, in case double buffering is a toggleable option in gzdoom.

Done cleverly, stereo 3D should require just one more renderbuffer than would be used without it. But I'm not sure I want to be so clever, at least not until it's working correctly. Using two additional renderbuffers keeps things cleaner, especially if we use buffers that are not the same size as the main display.
Graf Zahl
GZDoom Developer
Posts: 7148
Joined: Wed Jul 20, 2005 9:48
Location: Germany
Contact:

Re: Stereo 3D stuff

Post by Graf Zahl »

biospud wrote:Thank you for the thoughtful feedback.
Graf Zahl wrote: - 3D stereo REQUIRES active renderbuffers. No renderbuffers, no 3D stereo. Otherwise it'd become too messy.
Yes; and not just to avoid mess, renderbuffers would seem to be required for correct rendering now. By the way, is there a precedent for dynamically changing the options menu items, to only display options that are available at the moment?
Yes, you can tie options to other CVARs and gray them out if that CVAR is off.

biospud wrote:
Graf Zahl wrote: - The renderbuffer framework will provide as many buffers as are needed here. This needs to be checked if it is capable of that.
Do you mean that the hardware capabilities need to be checked at runtime? Or do you mean rather that we developers need to study the current gzdoom renderbuffer architecture to ensure it is capable of providing renderbuffers for this purpose?
I mean the renderbuffer code needs to provide sufficient buffers for such a case. How it does that needs to be decided.
biospud wrote: Done cleverly, stereo 3D should require just one more renderbuffer than what would be used without stereo 3D. But I'm not sure I want be so clever, at least until after it's working correctly. Using two additional renderbuffers keeps things cleaner. Especially if we use buffers that are not the same size as the main display.
Yes, one more should be enough. The actual rendering can happen in the same MSAA-buffer, but we will need one more per eye position to keep the separate images for the present shader.
dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: Stereo 3D stuff

Post by dpJudas »

The way the code is right now in master, the Set3DViewport call binds the scene frame buffer and then everything is rendered there. Then the post processing is done in those buffers too. When we reach the end, at eye->TearDown() in RenderViewpoint, the full scene for an eye can be copied to any destination we desire. All we have to do there is either bind it to a texture unit (mBuffers->BindCurrentTexture(texunit)) or bind it to a read frame buffer and glBlitFramebuffer it (effectively today's mBuffers->BindCurrentFB()).

The way I see it, we have a couple of options. The first is to say that today's EyePose is responsible for doing this copy. This has the advantage that an eye for quad rendering could copy it directly to GL_BACK_LEFT or GL_BACK_RIGHT (probably using a present shader to get the gamma applied). In the 3D poster case each of the 18 eyes would copy it to its own texture. For the simplest case, where we have only one eye, we just do nothing and keep the scene image in the renderbuffer.

The other option is to require that FGLRenderBuffers handles all this. In this variant FGLRenderBuffers owns all the eye buffers. The only bigger change here is who is responsible for allocating and maintaining the final destination buffers for the render. Here it is all concentrated in one "manager", but there are pros and cons to that.

I think the big elephant in the room is the 2D. I'm not sure how the old rendering handled all that, but essentially Begin2D needs to know where it is rendering things to. I'm assuming that for HMD rendering we'd want this to end up in some texture that is then overlaid on top of each eye's scene rendering. For a 3D poster thing I'm not sure where it should end up. The current eye code doesn't seem to concern itself with it at all from what I can tell.
biospud
Developer
Posts: 31
Joined: Fri Oct 11, 2013 0:49

Re: Stereo 3D stuff

Post by biospud »

dpJudas wrote:...I think the big elephant in the room is the 2D. I'm not sure how the old rendering handled all that, but essentially Begin2D needs to know where it is rendering things to. I'm assuming that for HMD rendering we'd want this to end up in some texture that is then overlaid on top of each eye's scene rendering. For a 3D poster thing I'm not sure where it should end up. The current eye code doesn't seem to concern itself with it at all from what I can tell.
This is a good point. This is one reason I have not even begun to implement side-by-side or HMD modes in gzdoom (such modes ARE present in my gz3doom fork). There probably needs to be an additional render texture to receive the HUD elements, status messages, menus, title screens, and intermission animations, so these can be separately placed for each eye view. The currently implemented 3D modes in gzdoom are sorta OK with having these 2D elements painted once over the whole screen, because the correct depth for these items is probably "in the plane of the screen". But in general, such elements should be drawn separately for each eye, especially in the side-by-side and HMD 3D modes.
biospud
Developer
Posts: 31
Joined: Fri Oct 11, 2013 0:49

Re: Stereo 3D stuff

Post by biospud »

I've been studying the current gzdoom architecture. Here is my current understanding:

Stereoscopic 3D rendering will require up to TWO additional render textures (ignoring the 18-view poster case for the moment): 1) one to store the left-eye view while the right-eye view is being rendered, and 2) one to store the 2D elements rendered outside the FGLRenderer::RenderViewpoint() method (automap, menus, messages, HUD elements, console, intermission animations, title screens). For video memory efficiency, these additional render textures should be lazily created: i.e. not instantiated until the player attempts to invoke a relevant 3D mode. At that point, the stereo 3D manager should instantiate the render textures, possibly using FGLRenderBuffers::CreateFrameBuffer(...) and friends. But those methods are private to FGLRenderBuffers, so I might need to get access to them, or reimplement similar functionality in the stereo 3D system.

Any such 3D modes will be unavailable as options as long as the new CVAR "gl_renderbuffers" is false, indicating that render buffers are not supported/desired; in that case these options will be greyed out in the OpenGL display options menu.

The FGLRenderBuffers class already contains two "pipeline" render textures, apparently used in ping-pong fashion, to support an unlimited series of post-processing effects. The special 3D render textures described above must be distinct from these two pipeline textures.
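The ping-pong scheme can be sketched as follows (indices only; this is not the actual FGLRenderBuffers code):

```cpp
// Sketch of the two-buffer ping-pong scheme: each postprocessing pass reads
// from the "current" pipeline texture and writes into the other one, then the
// roles swap so the output becomes the next pass's input.
class PingPong {
public:
    int ReadIndex() const { return current; }       // texture the pass samples from
    int WriteIndex() const { return 1 - current; }  // framebuffer the pass renders into
    void Swap() { current = 1 - current; }          // call after each pass completes
private:
    int current = 0;
};
```

This is why the per-eye storage has to live in separate textures: whichever pipeline buffer holds the finished left-eye image would be overwritten as soon as the right-eye postprocessing chain starts.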

So in the case where multisampling is ON, and a two-eyed stereo 3D mode is ON, and postprocessing is ON, the sequence of events in the render loop would be:

1. Clear the scene buffer
2. Render the left eye view into the multisampled scene buffer
3. Apply all postprocessing effects, resulting in a final left-eye image in one of the pipeline buffers
4. Blit the resulting left eye image into the 3D-specific "LeftEye" buffer
5. Compose the 2D HUD elements onto the left-eye view
6. Clear the scene buffer
7. Render the right eye view into the scene buffer
8. Apply all postprocessing effects, resulting in a final right-eye image in one of the pipeline buffers
9. Compose the 2D HUD elements onto the right-eye view
10. Compose the left and right eye images and present them to the screen, using a 3D-mode-specific process

I believe the way I compose the 2D HUD elements in GZ3Doom causes them to appear one frame later than they should. However, I'm unaware of any actual gameplay problem caused by this delay.
Graf Zahl
GZDoom Developer
Posts: 7148
Joined: Wed Jul 20, 2005 9:48
Location: Germany
Contact:

Re: Stereo 3D stuff

Post by Graf Zahl »

You have to be careful with the 2D stuff. First, the weapon sprite is special. It's not part of the HUD overlay but of the scene; it will be drawn before postprocessing.
Second, the way the vertex buffer works, you have to wait with the 2D pass until the 2D drawer gets flushed, which is right before presentation. At that point it needs to be rendered into both stereo screens, which is just a matter of executing its list twice with different targets. You should not need a separate buffer for this. Of course that also depends on what is faster: running the list of commands twice or doing two internal buffer copies.
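Executing the 2D list twice with different targets might look like this sketch, with simplified stand-ins for the command list and the render targets (none of these names are from the actual 2D drawer):

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch of replaying a queued 2D draw list once per eye target. Strings
// stand in for render targets; the callbacks stand in for draw commands.
struct Drawer2D {
    std::vector<std::function<void(std::string&)>> commands;

    void Queue(std::function<void(std::string&)> cmd)
    {
        commands.push_back(std::move(cmd));  // buffered until flush, as in the real drawer
    }

    // Flush right before presentation: run the whole list once per eye buffer.
    void Flush(std::vector<std::string>& eyeTargets)
    {
        for (auto& target : eyeTargets)
            for (auto& cmd : commands)
                cmd(target);
        commands.clear();
    }
};
```

The trade-off Graf mentions is visible here: the command loop runs once per eye, versus rendering the 2D once and blitting it into each eye buffer.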
dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: Stereo 3D stuff

Post by dpJudas »

biospud wrote:At that point, the stereo 3D manager should instantiate the rendertextures, possibly using FGLFramebuffers::CreateFrameBuffer(...) and friends. But those methods are private to FLGRenderbuffers. So I might need to get access to those methods, or reimplement similar functionality in the stereo 3D system.
For the upcoming release I'd probably let FGLRenderbuffers manage those buffers. Something à la FGLRenderbuffers::BlitToEyeTexture(index) + FGLRenderbuffers::BindEyeTexture(index), and then some TArray<GLint> thing internally in FGLRenderbuffers handling them.

Ideally, in the longer run, it would probably be better to have some FGLHardwareFramebuffer class that works together with FGLHardwareTexture, but that'd require refactoring of things.
biospud wrote:So in the case where multisampling is ON, and a two-eyed stereo 3D mode is ON, and postprocessing is ON, the sequence of events in the render loop would be:
I'd recommend it be something like this:

1. Clear the scene buffer
2. Render the left eye view into the multisampled scene buffer
3. Apply all postprocessing effects, resulting in a final left-eye image in one of the pipeline buffers
4. Blit the resulting left eye image into the 3D-specific "LeftEye" buffer (mBuffers->BlitToEyeTexture(0))
5. Clear the scene buffer
6. Render the right eye view into the scene buffer
7. Apply all postprocessing effects, resulting in a final right-eye image in one of the pipeline buffers (mBuffers->BlitToEyeTexture(1))
8. Compose the 2D HUD elements onto the left-eye view (mBuffers->BindEyeFB(0))
9. Compose the 2D HUD elements onto the right-eye view (mBuffers->BindEyeFB(1))
10. Compose the left and right eye images and present them to the screen, using a 3D-mode-specific process (mBuffers->BindEyeTexture(0) + BindEyeTexture(1))

If I understood Graf correctly, 8 and 9 can be done by executing the same series of draw calls twice. Steps 1-7 will be done in FGLRenderer::RenderViewpoint and 8-10 in FGLRenderer::CopyToBackbuffer (or the place that calls this function).
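The ten steps can be expressed as a call-order sketch; the strings stand in for the real FGLRenderer / FGLRenderBuffers operations, and BlitToEyeTexture / BindEyeFB / BindEyeTexture are the proposed (not yet existing) methods:

```cpp
#include <string>
#include <vector>

// Call-order sketch of the revised two-eye flow: render and postprocess each
// eye, park the result in its eye texture, then draw the 2D list into each
// eye buffer and compose both for presentation.
std::vector<std::string> StereoFrame()
{
    std::vector<std::string> calls;
    for (int eye = 0; eye < 2; ++eye) {
        calls.push_back("clear scene buffer");
        calls.push_back("render eye " + std::to_string(eye));
        calls.push_back("postprocess eye " + std::to_string(eye));
        calls.push_back("BlitToEyeTexture(" + std::to_string(eye) + ")");
    }
    for (int eye = 0; eye < 2; ++eye)
        calls.push_back("draw 2D onto eye " + std::to_string(eye));  // BindEyeFB(eye), same draw list twice
    calls.push_back("present composed eyes");  // BindEyeTexture(0) + BindEyeTexture(1), mode-specific shader
    return calls;
}
```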
biospud
Developer
Posts: 31
Joined: Fri Oct 11, 2013 0:49

Re: Stereo 3D stuff

Post by biospud »

I see from the forums here that Graf Zahl feels almost ready for a new release, and that this Stereo 3D stuff is considered a major blocker. I think I might be able to put a couple of days work into it this weekend, but I'm unsure whether that will be enough time to resolve these issues. @Graf Zahl and @dpJudas: will you be available to consult with me this Friday, Saturday and Sunday while I work on this? @dpJudas: Do you have anaglyph glasses and nvidia 3D vision glasses you can use to help test the stereo 3D functionality?

In the short term, I'd like to create a feature branch to repair the functionality of the currently exposed 3D modes: Mono, Left, Right, Green/Magenta, Red/Cyan, Blue/Yellow, and Quad-buffered. @dpJudas, if you'd like to get started on this while I'm occupied with my day job the rest of this week, I'd be pleased for you to do so.

Longer term, I'd also like to create an additional feature branch for an OpenVR (HTC Vive) stereo mode. This OpenVR branch will take significant time and effort to mature to a production-quality state, but I think it would be worthwhile to sketch it out now, while I have the new 3D architecture considerations in my mental L1 cache. The renderbuffer gymnastics turn out to be rather different in the HMD case, so I'd like to think this through while we're pondering the responsibilities of the FGLRenderBuffers API.
dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: Stereo 3D stuff

Post by dpJudas »

I have an Oculus DK2, although I'm not sure if it even works anymore after they released the consumer version. That's about the only thing I have. But I can prepare FGLRenderBuffers and the eye handling code to run like I described. I don't have any plans this weekend, so sure, I'll be around. :)
Graf Zahl
GZDoom Developer
Posts: 7148
Joined: Wed Jul 20, 2005 9:48
Location: Germany
Contact:

Re: Stereo 3D stuff

Post by Graf Zahl »

I can't tell yet. I'm most likely not available Friday-Sunday afternoons (13:00-18:00 CET), but later in the evening should be fine, though not continuously.