Engendro 3D™ on GitHub

Finally, Engendro 3D, the Python-based OpenGL game engine, arrives on GitHub. If you want to try the engine so far, to criticize, help, or whatever, visit

https://github.com/jr-garcia/Engendro3D

to clone or fork and try it.

While the published version is still a very early and incomplete alpha, I needed to upload the work I have been doing on this engine and the supporting projects, at least as a backup… and I’m very glad I did, since just yesterday my disk failed, taking a lot of work with it. But now I have an almost up-to-date copy!

I’m not a friend of backups, and whenever I suffer a failure, that slows down progress on the many little things I have started (and never finished) in the past, including the engine. But now I can just clone from the repos 😀

Engendro 3D Progress Report (1 year, 7 months)

This little engine, not yet born, is still developing well. Right now, there is some news to share with anybody who wants to hear it:

  • GPU Animations are up and running.
  • I improved the performance by 10x (especially good for animated meshes) after converting two lines of the uniform upload code to Cython.
  • When adding a ‘fullscreen effects’ framework, I placed the code in the wrong place and it ended up being capable of overriding the entire rendering path, so I implemented an easy-to-override rendering pipeline without even planning to (I’ll use it to add deferred rendering one day).

Sometimes the progress feels very slow on this, but then it speeds up. I hope it will be faster now.

And next, the funny pictures:

sun

Dwarf model downloaded from animium.com, but originally belongs to http://www.psionicgames.com

Small blog edition

After more than a year with this little blog, I have noticed that almost all the visits go to the entry SharpDX multiple material performance and effects. While there is no problem with the narrow interest in the blog’s other topics, most people reach that post through the wrong search terms, so the ‘article’ just disappoints them (visitors often expect high-level shader material, which the article lacks). Since I do not want people wasting time on things they don’t care about (and since no one cares about stories of rotating ducks and such), I have decided to edit the entries to remove the useless parts. From now on, everything I post will have the one thing all of us care about: code snippets.

Or else I will (try to) avoid publishing it.

Distance fields for font rendering (Part 2)

To properly test the font rendering from distance fields, I adapted chunks of code from Here and Here to make a Python signed distance field creator:

from PIL import Image
import os
from subprocess import call
from time import time
from math import sqrt


def check(limit, ox, oy, px, py, current):
    # Compare pixel (px, py) against the origin (ox, oy): if it lies on the
    # other side of the edge (value != limit), keep the smaller squared distance.
    lp = getpixel(px, py)
    if lp == -1:  # outside the image
        return current
    if lp != limit:
        nw = float(ox - px)
        nh = float(oy - py)
        new = (nw * nw) + (nh * nh)
        if new < current:
            current = new
    return current


def getField(limit, searchLen):
    # Brute-force pass: for every pixel, scan a (2*searchLen)^2 window and
    # keep the squared distance to the nearest pixel crossing the `limit` edge.
    print('Calculating field {0}...'.format(str(limit + 1)))
    if limit == 1:
        limit = 255
    distances = []
    for w in range(oim.size[0]):
        distances.append([])
        for h in range(oim.size[1]):
            current = limit if limit > 0 else (searchLen * searchLen)
            for x in range(w - searchLen, w + searchLen):
                for y in range(h - searchLen, h + searchLen):
                    current = check(limit, w, h, x, y, current)
            current = int(sqrt(current))
            current *= int(g)
            distances[w].append(current)
    return distances


def getpixel(x, y):
    try:
        p = inpixels[x, y][0]
        return p
    except IndexError:
        return -1


inPath = os.path.dirname(__file__)
outPath = os.path.join(inPath, 'out.png')
inPath = os.path.join(inPath, 'in.png')
oim = Image.open(inPath).convert('RGBA')
img = Image.new('RGBA', oim.size, (0, 0, 0, 0))

ct = time()

inpixels = oim.load()
outpixels = img.load()

search = 8
last = 0.0
g = 255 / search

field1 = getField(0, search)
ct = time() - ct
print('Took: {0}'.format(str(round(ct, 1))))
ct = time()
field2 = getField(1, search)
ct = time() - ct
print('Took: {0}'.format(str(round(ct, 1))))
ct = time()
print('Mixing fields...')

for cw in range(oim.size[0]):
    for ch in range(oim.size[1]):
        fc1 = 255 - field1[cw][ch]
        fc2 = field2[cw][ch]
        fc = int((fc1 + fc2)/2)
        outpixels[cw, ch] = (fc, fc, fc, 255)

ct = time() - ct
print('Took: {0}'.format(str(round(ct, 1))))
ct = time()

print('Resizing and saving output image (128 width)...')
rf = float(oim.size[0]) / 128
img = img.resize((128, int(oim.size[1] / rf)), Image.BILINEAR)

img.save(outPath)

ct = time() - ct
print('Took: {0}'.format(str(round(ct, 1))))

# Open the result with the platform's default viewer.
# (os.name is 'posix' on macOS too, so check sys.platform for Darwin.)
import sys
if sys.platform == 'darwin':
    call(('open', outPath))
elif os.name == 'nt':
    os.startfile(outPath)
elif os.name == 'posix':
    call(('xdg-open', outPath))

(Since this is only a test, the input and output file paths must be changed manually as needed).

The resulting image is certainly better suited for the font rendering:

SDF:

Signed Distance Field Test

Result (same shader shown in previous part):

SDF font render

Zoomed in (no weird artifacts like with the faked DF):

SDF font render zoomed

Unfortunately, the creation time for the whole DF is around 2 minutes even for small images (256×256), and that is too much for the usage I have in mind. Now, that usage already involves the GPU, so could this be faster on the graphics card? Isn’t the purpose of the GPU to do per-pixel work anyway?
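The brute-force cost is easy to account for: both passes visit every pixel and scan a (2 × searchLen)² window around each one. A quick back-of-the-envelope check in Python (sizes taken from the script above: a 256×256 image with search = 8):

```python
# Neighborhood checks performed by one brute-force pass of the script above.
width = height = 256          # small test image
search = 8                    # searchLen used in the script
window = (2 * search) ** 2    # pixels scanned around every origin pixel
checks_per_field = width * height * window
# Two fields (inner and outer) are computed, so double this for the full run.
print(checks_per_field)
```

With nearly 17 million distance checks per field in pure Python, the two-minute runtime is no surprise.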


In the next part I will publish a complete tool to calculate the distance fields on the GPU, hopefully in real time.

Edit:

For Python 3 compatibility, the data passed to ‘outpixels’ needs to be sent as int. The code has been updated to reflect it (and the file paths improved).

Distance fields for font rendering (Part 1)

Pushing myself forward again, I will publish some entries about implementing font rendering into a texture atlas, encoded with distance fields and with support for Unicode characters (the possibility of reducing the texture size will probably allow large amounts of glyphs in a single texture), to finally show proper labels and text boxes in Engendro3D.
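As a rough illustration of the atlas idea (the grid layout and cell count here are my assumptions, not a decided design), mapping a glyph slot to its normalized UV rectangle is simple integer math:

```python
def glyph_uv(slot, cols=16, rows=16):
    """Return the (u0, v0, u1, v1) rectangle of glyph number `slot`
    in a cols x rows atlas, in normalized [0, 1] texture coordinates."""
    col = slot % cols
    row = slot // cols
    w, h = 1.0 / cols, 1.0 / rows
    return (col * w, row * h, (col + 1) * w, (row + 1) * h)
```

A 16×16 grid holds 256 glyphs per texture; smaller distance-field cells would let one texture hold far more.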


 

While checking information about how to render fonts in OpenGL, I found this question, which led me to Valve’s paper (.pdf).

The results, along with the need to pre-render huge amounts of characters to a font atlas for certain languages, caught my interest.

Since distance fields look slightly like blurred images, I made a quick PNG texture in Gimp containing 6 letters (with the Eufm10 font) and used the Gaussian Blur filter (radius 10) to produce this faked distance field:
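The same faked distance field can be produced without Gimp; this is a small Pillow sketch of the idea (the input here is a synthetic white square on black, standing in for the actual letter texture):

```python
from PIL import Image, ImageFilter

# Stand-in for the glyph texture: a 16 px white square on a black background.
img = Image.new('L', (64, 64), 0)
for x in range(24, 40):
    for y in range(24, 40):
        img.putpixel((x, y), 255)

# A large-radius Gaussian blur fakes a distance field: values fade smoothly
# from bright inside the shape to dark far outside it.
fake_df = img.filter(ImageFilter.GaussianBlur(radius=10))
fake_df.save('fake_df.png')
```

It is only an approximation (the falloff is Gaussian, not linear in distance), which is what causes the artifacts shown below.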

wtx_blured

Then, with a very simple GLSL fragment shader, this is the result:

dfblured

No Bilinear interpolation:

gblured_noint

Zoomed into the ‘B’:

zoomblur

No Bilinear interpolation:

zoomblur_noint

It has outline and glow, and while the result is not the best, the logic works.

The shader I wrote can probably be improved a lot, but I will use the same one for all the tests I do:

#version 110

uniform float time;
uniform sampler2D tex;

void main()
{
    float color = texture2D(tex, gl_TexCoord[0].st).r;
    if (color <= 0.5)
        gl_FragColor = vec4(0, 0, 1, 1);
    else if (color <= 0.6)
        gl_FragColor = vec4(1, 1, 1, 1);
    else if (color < 1.0)
        gl_FragColor = max(0.0, 1.0 - color) *
            vec4(.9, .4, .2, 1) * 2.0 * max(sin(-time), sin(time));
    else
        gl_FragColor = vec4(0.0);
}
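The branching may be easier to follow as a plain-Python sketch; the labels are mine, standing in for the RGBA constants above (the glow band additionally pulses with `time` in the real shader):

```python
def shade(color):
    """Map a distance-field sample in [0, 1] to the band the shader draws."""
    if color <= 0.5:
        return 'fill'        # vec4(0,0,1,1): inside the glyph
    elif color <= 0.6:
        return 'outline'     # vec4(1,1,1,1): thin white border band
    elif color < 1.0:
        return 'glow'        # pulsing orange halo, fading with distance
    else:
        return 'background'  # vec4(0.0): fully outside
```

The 0.5 threshold is the glyph edge itself; the 0.5–0.6 band gives the outline its thickness.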


In the next part I will show a Distance Field generator in Python and Pillow (PIL).

GUIs quirks and quacks


In the search for a ready-to-use GUI system, I came across several options, but in the end I decided to make my own, simply because none of them fully fit my needs, specifically for the following reasons:

Maybe one of the most famous options. It looks very good and has official Python bindings.
I never managed to make it work on either Linux or Windows. On Linux, after four hours of compiling, and a previously failed attempt due to missing dependencies, I gave up.
On Windows, the compile took only two hours, but only after I fixed some errors thanks to these instructions:

http://knightforged.com/cegui/

After finishing, I realized that the Python bindings had not been selected, so I started again, only to find several errors raised by the lack of Boost. Unfortunately, I never managed to compile the Python bindings for Boost either.


It is recommended, looks promising, and appears to be usable from Python, but… it needs Boost, so no further research on that one.

Maybe not as well known as the others, but they are completely Python-based, so no need for Boost. I tried to convert both to plain PyOpenGL, since SimplUI uses Pyglet’s event system (very different from mine) and BGEUI is based on the Blender Game Engine and the fixed-function pipeline (which I won’t use), but after some days I realized that the effort needed for a conversion and integration into my engine would take too much time. Maybe close to the time needed to make my own. So I gave up on pre-made solutions.


The result, after three days of work: the first screenshot of the first ‘Panel control’ of my GUI:

gui

Right now, the widgets support opacity and are stretched automatically. In the picture, the panel is shown over a 3D duck with specular reflection; I already have one directional light. Current cost of the panel: 1 ms.

It only uses four vertices in a VBO plus an index buffer, then three uniforms for each control’s position/size, color, and background image. This setup makes it possible to render the GUI controls either as normal 2D ‘widgets’ or as 3D objects attached to the scene models, and with a little more work, controls could be rendered using instancing where available (the whole GUI in one draw call).
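That per-control setup can be sketched as plain data plus a loop that would feed the three uniforms before each quad draw (the names and structure here are hypothetical, not the engine's actual API):

```python
from dataclasses import dataclass

@dataclass
class Panel:
    # The three per-control uniforms described above.
    rect: tuple     # (x, y, width, height) in screen space
    color: tuple    # RGBA; the alpha channel gives the opacity support
    texture: int    # background image handle; 0 = plain color

def uniform_batches(controls):
    """Yield the uniform set for each control. The real renderer would bind
    the shared 4-vertex quad VBO once, then issue one draw per batch (or a
    single instanced draw where available)."""
    for c in controls:
        yield {'rect': c.rect, 'color': c.color, 'texture': c.texture}
```

Because the geometry never changes, per-control state stays tiny, which is what makes the instanced single-draw-call path plausible.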

and the gestation continues.