This is a demonstration of issue #2 I articulated here (https://www.toradex.com/community/questions/3630/a-more-complete-egl-implementation-for-imx6-wec-20.html)
I’m trying to find a way to get OpenGL ES to render directly into my application’s memory, so I can use the pixel data in other APIs without resorting to slow copies between the GPU and the CPU. This demonstration tries to accomplish that by using Pixmap Surfaces (i.e. eglCreatePixmapSurface) and rendering directly into a Windows HBITMAP.
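For reference, here is a minimal sketch of the pixmap-surface path described above (the actual demo code isn’t reproduced here). It assumes EGLNativePixmapType maps to an HBITMAP on this platform, a 32-bit top-down DIB section, and an OpenGL ES 2.0 config; the helper names, dimensions, clear colour, and missing error handling are purely illustrative.

```c
/* Sketch only: pixmap-surface rendering into a DIB section.
 * Assumes the vendor EGL accepts an HBITMAP as the native pixmap and
 * that an EGLConfig with EGL_PIXMAP_BIT support exists. */
#include <windows.h>
#include <EGL/egl.h>
#include <GLES2/gl2.h>

static void *g_bits = NULL;   /* CPU-visible pixel memory of the DIB section */

static HBITMAP create_dib(int width, int height)
{
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;     /* negative height = top-down DIB */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;          /* matches an 8888 EGLConfig */
    bmi.bmiHeader.biCompression = BI_RGB;
    return CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &g_bits, NULL, 0);
}

int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint cfg_attrs[] = {
        EGL_SURFACE_TYPE,    EGL_PIXMAP_BIT,   /* config must support pixmap rendering */
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

    HBITMAP bmp = create_dib(256, 256);
    /* Whether an HBITMAP is accepted here depends on the vendor EGL. */
    EGLSurface surf = eglCreatePixmapSurface(dpy, cfg, (EGLNativePixmapType)bmp, NULL);

    const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
    eglMakeCurrent(dpy, surf, surf, ctx);

    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);      /* render something easy to recognise */
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();                                 /* expectation: g_bits now holds red pixels */

    /* ... inspect g_bits here, or hand the HBITMAP to GDI / other CPU-side APIs ... */
    return 0;
}
```

The expectation is that after glFinish() returns, the pointer returned by CreateDIBSection holds the rendered pixels, which is exactly what does not happen in my case.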
The problem I’m having is that, while the code compiles and executes, I can’t get any pixel data to be rendered into the HBITMAP (i.e. the underlying Pixmap). I thought this might be because I was trying to use the data before OpenGL was finished, so I added a call to glFinish() to sync up with the GPU. That causes the application to behave very strangely: sometimes it crashes without any error, and sometimes it returns to the same glFinish() call repeatedly without ever getting past it.
I’ve also tried glReadPixels, eglCopySurface, and eglLockSurfaceKHR, and while they do work, they are just far too slow. Other resources (like here and here) suggest using EGL Images, but I have not been able to get that working on this platform (I’ll be posting another issue about that later). That is why I’m trying to use Pixmap Surfaces.
I believe the call to glFinish illustrates a bug in the OpenGL ES library. I would also like to know if I’m using Pixmap Surfaces incorrectly, and if I can use Pixmap Surfaces as a way to share pixel data between the CPU and GPU.