SSAA Discussion

Advanced OpenGL source port fork from ZDoom, picking up where ZDoomGL left off.

Moderator: Graf Zahl

Locked
Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

SSAA Discussion

Post by Rachael » Tue Sep 13, 2016 19:36

(Split from: http://forum.drdteam.org/viewtopic.php?f=22&t=7148)

Also - since render buffers are now a thing in GZDoom, it is possible to implement Super-Sampling Anti-Aliasing (aka SSAA). It is similar in concept to MSAA, but the execution is different: the *entire screen* is rendered at a different resolution, then scaled down to the target resolution. If you were looking for something better than MSAA, Graf - this is what you're looking for.

An example of 4xSSAA on a 1024x768 monitor would be this: The render buffer creates a texture the size of 2048x1536. It draws the screen at the higher resolution as if the user selected that. Once the pass is done, the post processing effects are applied, and then the texture is internally resized to fit the target screen or window (using a resizing filter - bilinear usually works but some find bicubic looks better).

After that, the texture can be applied to the screen and the result is often very good looking - however, given the double resolution, and the requirement to resize the entire texture for every frame - it usually murders your FPS on lower-end graphics cards.
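The downscaling step can be sketched in plain C++ as a box filter. This is purely illustrative (a single greyscale channel, 2x scale per axis) and not GZDoom's actual code - in practice the resize happens on the GPU via a filtered blit or shader:

```cpp
#include <cstdint>
#include <vector>

// Average each 2x2 block of a 2x-supersampled buffer (w*2 x h*2) down to
// w x h - the core idea behind "4x" SSAA (2x per axis). Single channel
// for brevity; a real implementation works per color component on the GPU.
std::vector<uint8_t> DownsampleSSAA(const std::vector<uint8_t> &src, int w, int h)
{
	std::vector<uint8_t> dst(w * h);
	const int srcW = w * 2;
	for (int y = 0; y < h; y++)
	{
		for (int x = 0; x < w; x++)
		{
			const int sx = x * 2, sy = y * 2;
			const int sum = src[sy * srcW + sx] + src[sy * srcW + sx + 1] +
			                src[(sy + 1) * srcW + sx] + src[(sy + 1) * srcW + sx + 1];
			dst[y * w + x] = static_cast<uint8_t>(sum / 4); // average of the 4 samples
		}
	}
	return dst;
}
```

A jagged black/white edge in the source comes out as intermediate greys in the result, which is exactly the anti-aliasing effect.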

Something that's useful about this technology, and great for LCD screens, is that it also works in reverse: you can render at a lower resolution and scale the texture up - an option that lets people increase their framerate.

This is often combined with FXAA (applied before the texture is resized) to create an image that is extremely crisp, with jaggies virtually eliminated.

If your graphics card is of the professional variety, it can probably handle being combined with MSAA instead of FXAA - for even better results - but that combination is usually enough to cripple home-user graphics cards.

The main advantage of this AA method is *everything* gets resampled - including textures. That makes texture blending at a distance much less noticeable.
Spoiler: Zen Sarcasm

Graf Zahl
GZDoom Developer
Posts: 7148
Joined: Wed Jul 20, 2005 9:48
Location: Germany
Contact:

Re: FXAA Test

Post by Graf Zahl » Tue Sep 13, 2016 20:06

Yes, it's definitely worth a try to implement that and see how it performs.

Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

Re: FXAA Test

Post by Rachael » Tue Sep 13, 2016 20:51

I would try for this implementation myself but I do not know how to program OpenGL at all. I haven't even programmed my own pot and kettle demo yet. My experience in 3D comes strictly from CPU rendering.

dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: FXAA Test

Post by dpJudas » Wed Sep 14, 2016 0:00

Branch implementing SSAA and render at lower resolutions: https://github.com/dpjudas/zdoom/tree/viewport_scale

It has a few outstanding issues before it is PR-worthy - mainly some bug fixes I discovered I have to make to ZDoom's video mode handling - but it should give a good idea of what such a thing looks like. I must say that SSAA looked far more crisp than I had expected. My card could even do it at 60 fps at 4K with 8x MSAA (7680x4320 render buffers!), but at 16x MSAA it dropped to 6 fps. :D

The branch has two cvars: gl_viewport_scale and gl_viewport_filter. The first sets the scale of the render buffers relative to the screen size, so 0.5 means half-resolution buffers and 2.0 means SSAA. The second is a boolean controlling whether upscaling uses nearest or linear filtering. That way, someone going for a super low-res "I miss the 90's" palette look can enable the palette tonemap and get a non-blurred, pixelated image.
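As a rough sketch of what those two cvars imply for the buffer setup (illustrative names and code, not the actual branch):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical mapping of the described cvars to render-buffer dimensions.
// gl_viewport_scale: 0.5 = half-resolution buffers, 2.0 = SSAA.
// gl_viewport_filter: true = linear upscaling, false = nearest (pixelated).
struct ViewportScale
{
	float scale;
	bool linearFilter;

	int BufferWidth(int screenWidth) const
	{
		return std::max(1, (int)std::round(screenWidth * scale));
	}
	int BufferHeight(int screenHeight) const
	{
		return std::max(1, (int)std::round(screenHeight * scale));
	}
};
```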

Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

Re: FXAA Test

Post by Rachael » Wed Sep 14, 2016 0:05

Well, I was attempting this but you beat me to the punch. I hadn't even managed to get a scaling viewport working. :P But... I did manage to make the screen go black and my FPS to drop to 6! Progress!

I will check this out tonight. I am very eager to see what this looks like. I'm tempted to try 32x multisampling and 4xSSAA. ;) (Time to get some ice packs for the GPU...)

dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: FXAA Test

Post by dpJudas » Wed Sep 14, 2016 0:16

If you had the viewport scaling working already you were pretty close. :) I did have the advantage of knowing that SetOutputViewport could already do it if I just slightly adjusted it.

Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

Re: FXAA Test

Post by Rachael » Wed Sep 14, 2016 0:30

I think I hit the wrong location. I mostly attached code to the function FGLRenderBuffers::Setup - I created a float CVAR and put multipliers on some of these lines:
Code:

	if (width == mWidth && height == mHeight && mSamples != samples)
	{
		CreateScene(mWidth, mHeight, samples);
		mSamples = samples;
	}
	else if (width != mWidth || height != mHeight)
	{
		CreatePipeline(width, height);
		CreateScene(width, height, samples);
		mWidth = width;
		mHeight = height;
		mSamples = samples;
	}

Obviously, it didn't work, but it was interesting seeing the effects. At 1.0 everything worked as expected, but when I started using other values, something somewhere became invalid (my guess, anyway) and it didn't like that. It was still rendering at the resolution I wanted, but ultimately nothing actually showed.

When JPL reported the particle crash bug, I reverted the changes and started compiling a debug build with the intent to test remote debugging with my laptop, since there has already been more than one instance now where it could have been useful, and that was where I stopped playing with it.

dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: FXAA Test

Post by dpJudas » Wed Sep 14, 2016 0:50

Changing it at that location did indeed increase the buffer sizes, but what was missing was that OpenGL always renders into a viewport rectangle. So while the buffers got larger there, the viewport remained the same size. If you look at the places in the code where it calls Setup you'll see that it always uses mScreenViewport for width/height. So you were very close - had you adjusted the variables used one level further up it would have worked. :)
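The fix described above - scaling the width/height before they reach both the buffer setup and the viewport - can be sketched like this (hypothetical helper, not the actual GZDoom code):

```cpp
// OpenGL always draws into a viewport rectangle, so enlarging the buffers
// alone is not enough: the rectangle handed to glViewport must be scaled
// by the same factor. Illustrative struct and helper only.
struct Viewport
{
	int left, top, width, height;
};

Viewport ScaleViewport(const Viewport &screen, float scale)
{
	return { (int)(screen.left * scale), (int)(screen.top * scale),
	         (int)(screen.width * scale), (int)(screen.height * scale) };
}
```

The scaled rectangle would then feed both FGLRenderBuffers::Setup and the glViewport call, keeping buffer size and draw area in sync.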

Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

Re: FXAA Test

Post by Rachael » Wed Sep 14, 2016 1:12

Ah, I see.

Well, I've had a chance to try it out. It does indeed work - at 2 my video card is okay; at 4, though, it chokes. I had gl_multisample set to 8, though, so I guess that would explain it. :P (After turning multisampling off, the framerate smoothed out a lot.)

There is one problem, though - the screen flash now occupies the top corner of the screen. It no longer covers the whole screen as it should.

Also - any value beyond 2 does not scale properly. It appears the texture filter used cannot handle more than double-size textures.

dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: SSAA Discussion

Post by dpJudas » Wed Sep 14, 2016 11:16

Yes, anything above scale 2 would require a special shader that does the filtering itself. Probably best to lock the cvar to a 0.1-2.0 range for a final version of this feature.

Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

Re: SSAA Discussion

Post by Rachael » Wed Sep 14, 2016 11:19

Considering how big displays are getting these days, I think a more flexible 0.01-2.0 would be better - keeping in mind, of course, that some people are going to want to emulate "ye olde 320x200". Ultra HD displays are already locked out of that by a 0.1 lower limit.

dpJudas
Developer
Posts: 798
Joined: Sat Jul 23, 2016 7:53

Re: SSAA Discussion

Post by dpJudas » Wed Sep 14, 2016 11:29

Maybe it is simpler to make the lower bound 0, and then clamp the final used value to a minimum height of 200 lines.

I generally see four common usages for this cvar: palette-mode pixelated users who want 320x200; low-end hardware users who want something in the 0.5-1.0 range; retina Mac/4K/5K users who want hidpi disabled, which they get with 0.5; and SSAA users who want 2.0. The big question for me here is how best to present this in the menus?
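The clamp suggested above could look like this (an illustrative sketch, assuming a 2.0 upper bound):

```cpp
#include <algorithm>

// Allow the cvar all the way down to 0, but never let the effective scale
// produce a scene buffer shorter than 200 lines. Illustrative only.
float ClampViewportScale(float cvarValue, int screenHeight)
{
	const float minScale = 200.0f / screenHeight; // e.g. ~0.185 on a 1080-line display
	return std::max(std::min(cvarValue, 2.0f), minScale);
}
```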

Rachael
Developer
Posts: 3623
Joined: Sat May 13, 2006 10:30

Re: SSAA Discussion

Post by Rachael » Wed Sep 14, 2016 11:42

I would say a slider. The slider can have a range of 0.1 to 2.0 (a simple 20 steps), or even 0.2 to 2.0 (an even simpler 10 steps).

If the user wants to fine tune the final resolution they can use the console. They'll probably have to use a bit of math for that, anyway.

If possible, the menu could also show what the new "effective" resolution will be. As a complement, a console command could be added - say, gl_set_lines X - which takes GZDoom's current "actual" vertical resolution and sets gl_viewport_scale to X divided by that resolution. Then all someone has to do is type "gl_set_lines 200" - done. (Sorry if it seems I am getting a bit ahead of myself.)
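The proposed gl_set_lines command boils down to a one-line computation (hypothetical sketch of the idea):

```cpp
// Derive a gl_viewport_scale value from a desired vertical line count and
// the current actual screen height. Hypothetical helper name.
float ViewportScaleForLines(int desiredLines, int actualHeight)
{
	return (float)desiredLines / (float)actualHeight;
}
// "gl_set_lines 200" on a 1000-line display would set the scale to 0.2.
```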

