Different OpenGL issues

While working on Ramble in the Sky I ran into some interesting OpenGL issues. They are not that obvious, especially considering the variety of Android devices out there: if something works on your test phone (or even on five of them!), that doesn't mean it will work on every other one. So if you are interested in using GLES 2.0 directly, you will probably find something useful below.

1. GLES20.glDrawElements doesn't work with GL_UNSIGNED_INT
When I started implementing sprite batching (stacking the vertex data of many sprites into one buffer and drawing them all at once - it is faster than issuing several draw calls), I used INT indices. It was fine on the devices I had at that moment - an LG510 and one of Motorola's. But when we started testing on other devices, all those batches rendered as a beautiful nothing: core GLES 2.0 only guarantees GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT indices, while GL_UNSIGNED_INT needs the OES_element_index_uint extension. It's not that I really needed those 4294967296 indices - that was "just in case" - but still... At the moment we are working on The Grid puzzle, and its level-selection screen needs something like 40k indices; GL_UNSIGNED_SHORT is fine for now, but I guess I will have to split the batches up in order to add more levels.
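A minimal sketch of the short-index path, in case it helps - spriteCount and the quad layout are assumptions here, not the engine's actual code:

// Build 16-bit indices for a batch of quads (2 triangles = 6 indices per sprite).
// With GL_UNSIGNED_SHORT a single batch can address at most 65536 vertices.
short[] indices = new short[spriteCount * 6];
for (int i = 0; i < spriteCount; i++)
{
  int v = i * 4; // 4 vertices per sprite quad
  int o = i * 6;
  indices[o] = (short) v;
  indices[o + 1] = (short) (v + 1);
  indices[o + 2] = (short) (v + 2);
  indices[o + 3] = (short) (v + 2);
  indices[o + 4] = (short) (v + 3);
  indices[o + 5] = (short) v;
}

ShortBuffer indexBuffer = ByteBuffer.allocateDirect(indices.length * 2)
    .order(ByteOrder.nativeOrder()).asShortBuffer();
indexBuffer.put(indices).position(0);

// One call draws the whole batch.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indexBuffer);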

2. Using "for" in fragment shader
That one did not work on one of the ASUS tablets; I don't remember the exact model. The alpha-blur shader had to be rewritten with a hard-coded blur depth, so the "blur-over-dozens-of-pixels" option was dropped - to support as many devices as possible, it is now limited to blending 2 adjacent pixels.
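For reference, here is a minimal sketch of what that kind of fallback can look like (not the actual shader from the game; u_texelWidth is an assumed uniform holding 1.0 / texture width): the dynamic loop is replaced by a fixed, unrolled pattern over the two adjacent pixels, blurring only the alpha channel.

// Sketch only: hard-coded 2-neighbour alpha blur, no "for" loop at all.
private static final String FIXED_ALPHA_BLUR_FS =
    "precision mediump float;\n" +
    "uniform sampler2D tex;\n" +
    "uniform float u_texelWidth;\n" + // 1.0 / texture width, assumed uniform
    "varying vec2 texcoord;\n" +
    "void main()\n" +
    "{\n" +
    "  vec4 c = texture2D(tex, texcoord);\n" +
    "  float a = c.a * 0.5;\n" +
    "  a += texture2D(tex, texcoord - vec2(u_texelWidth, 0.0)).a * 0.25;\n" +
    "  a += texture2D(tex, texcoord + vec2(u_texelWidth, 0.0)).a * 0.25;\n" +
    "  gl_FragColor = vec4(c.rgb, a);\n" +
    "}\n";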

3. Issues with copy-Framebuffer-to-Texture
If you need to pre-render some textures and use them later, this one is for you. The method I used:
- prepare framebuffer
- render something in it
- copy that thing to a new texture
- delete framebuffer
Again, it worked fine on all the devices I could get my hands on (mine, friends', some outside testers). Still, when one of my friends downloaded Ramble in the Sky from Google Play, some entities were not rendering. It was one of the older Samsung models, and the culprit turned out to be GLES20.glCopyTexSubImage2D. I don't know why I used that function when GLES20.glCopyTexImage2D would have done the job (there was no need to copy only part of the texture). So the bug was right there: GLES20.glCopyTexSubImage2D failed to work correctly with portrait-oriented images (256x512 in our case), while GLES20.glCopyTexImage2D did everything fine.
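In case the whole flow is unclear, here is a rough sketch of it in Java (width, height and drawPrerenderedContent() are placeholders, not the engine's real code):

int[] ids = new int[1];

// 1. Prepare a framebuffer with a color renderbuffer attached.
GLES20.glGenFramebuffers(1, ids, 0);
int fbo = ids[0];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);

GLES20.glGenRenderbuffers(1, ids, 0);
int colorBuffer = ids[0];
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, colorBuffer);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_RGBA4, width, height);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_RENDERBUFFER, colorBuffer);

// 2. Render something into it.
GLES20.glViewport(0, 0, width, height);
drawPrerenderedContent(); // placeholder for the actual rendering

// 3. Copy the whole framebuffer into a new texture (full copy, no sub-rectangle).
GLES20.glGenTextures(1, ids, 0);
int texture = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 0, 0, width, height, 0);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// 4. Delete the framebuffer and go back to the default one.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
ids[0] = colorBuffer;
GLES20.glDeleteRenderbuffers(1, ids, 0);
ids[0] = fbo;
GLES20.glDeleteFramebuffers(1, ids, 0);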

4. Alpha-channel bug
I'll get to the point right here. Look at this code:

Bitmap bitmap = BitmapFactory.decodeResource(pContext.getResources(), resourceId, options);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

That simple way of loading a texture hides some dangerous things if the image contains half-transparent pixels, especially if you want to use the RGB and alpha channels independently (for example, bump-map data in RGB and a specular map in A). Every pixel with a zero alpha channel also gets zeros in RGB! That is what BitmapFactory does.
And that's not all - GLUtils.texImage2D has its own issues, also tied to how BitmapFactory works: the bitmap holds pixel data premultiplied by alpha (R*A, G*A, B*A), and GLUtils.texImage2D uploads it as if it were plain RGB. So a half-transparent red dot with 0.5 alpha - 0x80ff0000 as ARGB in the source - turns into 0x80800000 in your texture! It is not that obvious when the image is used as diffuse data, but for normals stored in a bump map it causes a huge mess.
The fix I came up with:

int bitmapW = pBitmap.getWidth();
int bitmapH = pBitmap.getHeight();

int pixels[] = new int[bitmapW * bitmapH];
pBitmap.getPixels(pixels, 0, bitmapW, 0, 0, bitmapW, bitmapH);

byte[] pixelComponents = new byte[pixels.length * 4];
int byteIndex = 0;
int p;
for (int i = 0; i < pixels.length; i++)
{
  p = pixels[i];
  pixelComponents[byteIndex++] = (byte) ((p >> 16) & 0xff); // red
  pixelComponents[byteIndex++] = (byte) ((p >> 8) & 0xff); // green
  pixelComponents[byteIndex++] = (byte) (p & 0xff); // blue
  pixelComponents[byteIndex++] = (byte) (p >> 24); // alpha
}
ByteBuffer pixelBuffer = ByteBuffer.allocateDirect(byteIndex);
pixelBuffer.order(ByteOrder.nativeOrder());
pixelBuffer.put(pixelComponents);
pixelBuffer.position(0);

GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmapW, bitmapH, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);

So the point is to read the RGBA values directly and upload them with GLES20.glTexImage2D. Personally I think GLUtils could do this itself, but the fact is - it does not. The problem with zero alpha is still there, though: in my case I simply rescaled the alpha channel from 0.00-1.00 to 0.05-1.00 and wrote the corresponding shader with that in mind. But I guess there should be a more elegant way to fix this.
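Just to make that workaround concrete: assuming the source art is authored so alpha stays in the 0.05-1.00 range (no pixel is fully transparent, so BitmapFactory has nothing to zero out), the fragment shader then maps that floor back to full transparency. A sketch, not the actual shader from the game:

// Sketch only: treat alpha <= 0.05 as fully transparent again.
private static final String ALPHA_REMAP_FS =
    "precision mediump float;\n" +
    "uniform sampler2D tex;\n" +
    "varying vec2 texcoord;\n" +
    "void main()\n" +
    "{\n" +
    "  vec4 c = texture2D(tex, texcoord);\n" +
    "  float a = clamp((c.a - 0.05) / 0.95, 0.0, 1.0);\n" +
    "  gl_FragColor = vec4(c.rgb, a);\n" +
    "}\n";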

5. Some mystery with large floats passed between vertex and fragment shaders
This is one I have no explanation for - just the code that works and the code that doesn't.
I'll be talking about the shader that draws our team name in the game intro - you can see how it works in the Ramble in the Sky gameplay video (the first 5 seconds, though the whole thing is worth watching :) ):


The purpose of the shader is to draw "ScintiArts" inside the explosion wave - it is clipped by a circle whose radius grows over time. Here is the working code:

// Vertex shader
uniform mat4 mvp;
attribute vec4 vPos;
attribute vec2 vTex;
uniform vec2 cPos;       // center of the circle
uniform float expRange;  // clipping radius
varying vec2 fPos;
varying vec2 texcoord;
void main()
{
  texcoord = vTex;
  fPos = (vPos.xy - cPos.xy) / expRange; // pay attention here
  gl_Position = mvp * vec4(vPos.xy, 0.0, 1.0);
}

// Fragment shader
precision mediump float;
uniform sampler2D tex;
varying vec2 fPos;
varying vec2 texcoord;
void main()
{
  vec4 vClr = texture2D(tex, texcoord);
  float dr = (fPos.x * fPos.x + fPos.y * fPos.y); // and some attention here
  float a = vClr.a * (1.0 - step(1.0, dr));
  gl_FragColor = vec4(vClr.rgb, a);
}
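
For completeness, this is roughly how such a shader gets its uniforms from the Java side each frame; program, the explosion center and waveSpeed are assumed names here, not the actual engine code:

// Sketch: feed the circle center and the growing clipping radius to the shader.
int cPosHandle = GLES20.glGetUniformLocation(program, "cPos");
int expRangeHandle = GLES20.glGetUniformLocation(program, "expRange");

GLES20.glUseProgram(program);
GLES20.glUniform2f(cPosHandle, explosionCenterX, explosionCenterY);
GLES20.glUniform1f(expRangeHandle, elapsedTime * waveSpeed); // radius grows over time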

And now the code that does not work on some of the devices:

// Vertex shader
uniform mat4 mvp;
attribute vec4 vPos;
attribute vec2 vTex;
uniform vec2 cPos;
varying vec2 fPos;
varying vec2 texcoord;
void main()
{
  texcoord = vTex;
  fPos = (vPos.xy - cPos); // <= expRange scaling moved to the fragment shader
  gl_Position = mvp * vec4(vPos.xy, 0.0, 1.0);
}

// Fragment shader
precision mediump float;
uniform sampler2D tex;
uniform float expRange;
varying vec2 fPos;
varying vec2 texcoord;
void main()
{
  vec4 vClr = texture2D(tex, texcoord);
  float dr = sqrt(fPos.x * fPos.x + fPos.y * fPos.y) / expRange; // scaling
  float a = vClr.a * (1.0 - step(1.0, dr));
  gl_FragColor = vec4(vClr.rgb, a);
}

The first version is better anyway, even leaving the does-not-work issue aside - it is one calculation less in the fragment shader. But the question remains open: why does the second one fail on some devices? The only difference is that fPos is scaled by the explosion radius in the fragment shader instead of the vertex shader; mathematically the result should be the same. Playing around with the precision qualifier does not help either, so I have a solution but no explanation.

So that is some of the hardship of using pure GLES 2.0. You probably won't face these things when building a game on an external engine, but for me it is more interesting this way, and what we can implement is not limited by someone else's engine rules. I hope the things described here will help if you also want to follow that path - or at least that it was interesting to read!
by Ricane at 03.08.2014 18:06

