Distance fields for font rendering (Part 1)

Pushing myself forward again, I will publish some entries about implementing font rendering into a texture atlas, encoded with distance fields and with support for Unicode characters (the possibility of reducing the texture size will probably allow large amounts of glyphs in a single texture), to finally show proper labels and text boxes in Engendro3D.

While checking information about how to render fonts in OpenGL, I found this question, which led me to Valve's paper (.pdf).

The results, along with the need to pre-render huge amounts of characters into a font atlas for certain languages, caught my interest.

Since distance fields look slightly like blurred images, I made a quick PNG texture in GIMP containing 6 letters (using the Eufm10 font) and applied the Gaussian Blur filter (radius 10) to produce this faked distance field:

[Image: wtx_blured]
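
For anyone who wants to reproduce the fake without GIMP, Pillow can do the same thing. This is only a quick sketch with placeholder file names and roughly the same blur radius, not the real generator I will show in the next part:

# Sketch: fake a distance field by Gaussian-blurring a glyph texture with Pillow.
# 'letters.png' stands for the 6-letter image described above; the names are made up.
from PIL import Image, ImageFilter

glyphs = Image.open('letters.png').convert('L')                  # grayscale glyphs
fake_field = glyphs.filter(ImageFilter.GaussianBlur(radius=10))  # radius mirrors the GIMP setting
fake_field.save('letters_fake_df.png')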

Then, with a very simple GLSL fragment shader, this is the result:

[Image: dfblured]

No bilinear interpolation:

[Image: gblured_noint]

Zoomed into the ‘B’:

[Image: zoomblur]

No bilinear interpolation:

[Image: zoomblur_noint]

It has an outline and a glow, and while the result is not the best, the logic works.

The shader I wrote can probably be improved a lot, but I will use the same one for all the tests:

#version 110

uniform float time;
uniform sampler2D tex;

void main()
{
    // Sample the (faked) distance field stored in the red channel.
    float color = texture2D(tex, gl_TexCoord[0].st).r;

    if (color <= 0.5)
        // Values at or below 0.5: solid blue fill.
        gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
    else if (color <= 0.6)
        // Between 0.5 and 0.6: white band (the outline).
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    else if (color < 1.0)
        // Between 0.6 and 1.0: orange glow that pulses with time.
        gl_FragColor = max(0.0, 1.0 - color) *
            vec4(0.9, 0.4, 0.2, 1.0) * 2.0 * max(sin(-time), sin(time));
    else
        // Fully white texels: transparent black.
        gl_FragColor = vec4(0.0);
}
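
As a side note, feeding the two uniforms from Python boils down to something like the sketch below; program, texture_id and elapsed_seconds are placeholder names for whatever your setup provides, not the engine's actual code:

# Sketch: update the 'time' and 'tex' uniforms each frame with PyOpenGL.
# 'program' and 'texture_id' are assumed to be created elsewhere.
from OpenGL.GL import (glUseProgram, glGetUniformLocation, glUniform1f,
                       glUniform1i, glActiveTexture, glBindTexture,
                       GL_TEXTURE0, GL_TEXTURE_2D)

def bind_distance_field(program, texture_id, elapsed_seconds):
    glUseProgram(program)
    glUniform1f(glGetUniformLocation(program, "time"), elapsed_seconds)
    glActiveTexture(GL_TEXTURE0)                   # texture unit 0
    glBindTexture(GL_TEXTURE_2D, texture_id)
    glUniform1i(glGetUniformLocation(program, "tex"), 0)
    # ...then draw the textured quad as usual...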

 

In the next part I will show a distance field generator written in Python with Pillow (PIL).

PyOpenGL performance 1

The first concern about rewriting the engine in Python + OpenGL was having to learn one more language that, at first, I didn't want to learn. The speed of OpenGL was never a worry, mainly because I trusted the findings of Valve.

The second concern was Python's speed. Being an interpreted language, it seemed it was going to be inherently slower than C++ or, even worse, slower than C#. Having worked with Basic.net before, I was accustomed to the idea of slow code, but SharpDX is said to be not that far from raw C++ in speed, and that proved to be true in my initial tests. So, what to expect from PyOpenGL?

The first test I made was to load the Stanford dragon (after converting my mesh loader to Python, with Pyassimp), and the result was not satisfactory (or so I thought): the FPS was around 40 (up to 60 without recording), which seemed slow, even on my slow laptop. Still, I could not say anything yet until running a test on 'the big one' and comparing against SharpDX.
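
For context, the converted loader boils down to something like this Pyassimp sketch; the file name and variable names are illustrative, not the engine's actual code:

# Sketch: loading a mesh with Pyassimp ('dragon.obj' is a placeholder path).
import pyassimp

scene = pyassimp.load('dragon.obj')
try:
    for mesh in scene.meshes:
        vertices = mesh.vertices   # per-vertex positions
        faces = mesh.faces         # triangle indices
        # ...upload to vertex/index buffers here...
finally:
    pyassimp.release(scene)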

After some days, here is the comparison. Amazing!

PyOpenGL 2/3.1 is faster than SharpDX (DX9), just like the C++ versions. Even better, I have not made any optimizations to the Python version: I'm not using PyOpenGL-Accelerate or Cython, so it is possible to get even better speed with those tools 😀.

The only real difference between the two tests is the shader used, but I doubt its impact is that big.

For reference, the test setup:

Graphics card: Nvidia GTX 560 SE

Processor: Intel Core 2 Duo, 2.6 GHz

OS: Windows 7, 32-bit

Memory: 3 GB

DirectX 9 without fixed-function pipeline, shader model 2

OpenGL 2 without fixed-function pipeline, GLSL 1.2

Max FPS without recording:

– DirectX: 570 (first)

– OpenGL: 649 (second)

[Image: dx9 ogl2]

Something interesting is that the OpenGL app will reduce its speed if the DirectX one is running, as seen in the “Both” part of the video. Also, the OGL window will slow down with any movement of another window or any interaction with the desktop, while the DX window remains steady. Does that mean that Windows gives preference to DX tasks over OGL, or was it my card?

On the PyOpenGL site, they admit the wrapper is slow, but only for some deprecated features that I'm not using anyway, so I'm safe from that. With these results and the Cython option, I'm really happy, so the conversion continues.

Postprocessing shaders using FX Composer

If you are reading this, I suppose you already know something about writing post-processing shaders, but if you don't, a very good place to start is this page:

http://blog.josack.com/2011/07/my-first-2d-pixel-shaders-part-1.html

Actually, that's the page where I learned what I'm showing here. Now, if you know nothing about shaders and you are curious, one of the best-explained intros to the shader world is here:

http://www.conitec.net/shaders/

Here, I'm basically just pasting some screen captures of my first results with post-processing shaders. I'm still learning how to do this, so the material is very basic.

Note: the toroid has a “normals” shader applied that I am trying out, and I skipped the background in all captures but the last one.

Normal Scene

[Image: default]

 