OpenGL Without GUI
Let's say I run Linux with no desktop environment installed. I boot up my system and all I have is my shell.

Is it possible to compile a program that uses the OpenGL libraries or directly uses the GPU driver to draw to the screen?

As far as I understand, I would always need some kind of desktop environment to provide a window that I can draw on. To keep it simple, let's say I just want to draw a simple 2D shape, like a triangle in the middle of the screen.

And if that's possible, how can I do it, and where can I read more about the topic? If I am able to draw directly over my terminal, does this mean I could run my app on a system that has a desktop environment and still see my triangle?
Tags: linux, opengl, glut, xorg, glx
asked Nov 19 '18 at 19:16
Kaloyan Manev
  • You need a running X server, not necessarily a desktop.

    – httpdigest
    Nov 19 '18 at 19:18











  • Can I do it without an X server, doing the job that the X server normally does myself?

    – Kaloyan Manev
    Nov 19 '18 at 19:21











  • Does mesa3d.org/osmesa.html help?

    – tink
    Nov 19 '18 at 19:36











  • @KaloyanManev: I just have to point this out, for you (and the rest of the readers), for future reference. This (again…) is a prime example of an XY problem. You thought that somehow getting direct access to graphics memory would allow you to draw an overlay over the rest of the GUI, so you asked about talking to the GPU directly without a windowing system, omitting what your actual goal is. Ergo you got an answer about how to use the GPU without a windowing system, which didn't solve your actual problem.

    – datenwolf
    Nov 20 '18 at 19:47











  • @KaloyanManev: Take this as a lesson for the future: ALWAYS ask about the actual task you are trying to accomplish. If you have to ask about it, that means you also lack understanding of how the problem can actually be addressed in the first place. You might have a guess at what could be part of the solution, and that guess might even be sensible in a different context, but it doesn't help you here. So always ask about what it really is you're trying to do. Then you'll get to your goal much faster.

    – datenwolf
    Nov 20 '18 at 19:51
1 Answer
Is it possible to compile a program that uses the OpenGL libraries or directly uses the GPU driver to draw to the screen?




Yes. With the EGL API this has been formalized, and it currently works best with NVidia GPUs and their proprietary drivers. NVidia describes it on their dev blog: https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/
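
On a truly headless machine, eglGetDisplay(EGL_DEFAULT_DISPLAY) may return EGL_NO_DISPLAY; the NVidia post linked above instead enumerates GPUs through the EGL_EXT_device_enumeration / EGL_EXT_platform_device extensions. Below is a minimal sketch of that lookup (my own condensation of the blog's approach, error handling mostly elided):

#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLDisplay getDeviceDisplay(void)
{
    // Load the extension entry points at run time.
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!eglQueryDevicesEXT || !eglGetPlatformDisplayEXT)
        return EGL_NO_DISPLAY;

    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);
    if (numDevices < 1)
        return EGL_NO_DISPLAY;

    // Take the first device; a real program might iterate and pick one.
    return eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
}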



Essentially the steps are:



Create an OpenGL context for a PBuffer:



#include <EGL/egl.h>

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static const int pbufferWidth = 9;
static const int pbufferHeight = 9;

static const EGLint pbufferAttribs[] = {
    EGL_WIDTH, pbufferWidth,
    EGL_HEIGHT, pbufferHeight,
    EGL_NONE,
};

int main(int argc, char *argv[])
{
    // 1. Initialize EGL
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    // 2. Select an appropriate configuration
    EGLint numConfigs;
    EGLConfig eglCfg;
    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    // 3. Create a surface
    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg,
                                                 pbufferAttribs);

    // 4. Bind the API
    eglBindAPI(EGL_OPENGL_API);

    // 5. Create a context and make it current
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT,
                                         NULL);
    eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);

    // from now on use your OpenGL context

    // 6. Terminate EGL when finished
    eglTerminate(eglDpy);
    return 0;
}


and then go about the rest as usual. Or you can even ditch the PBuffer completely and use only OpenGL-managed resources, i.e. render to framebuffer objects. To that end you can omit creating the surface and just make the context current with no surface bound. (On most Linux distributions such a program links with -lEGL -lGL, assuming the EGL and OpenGL development packages are installed.)



Here's an example of using EGL without a display and without an EGL surface, rendering into an OpenGL-managed framebuffer object.



#include <GL/glew.h>
#include <EGL/egl.h>

#include <unistd.h>
#include <stdlib.h>
#include <assert.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#include <stdio.h>

namespace render
{
    void init();
    void display();

    int const fbo_width = 512;
    int const fbo_height = 512;

    GLuint fb, color, depth;

    void *dumpbuf;
    int dumpbuf_fd;
}

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

int main(int argc, char *argv[])
{
    // 1. Initialize EGL
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    // 2. Select an appropriate configuration
    EGLint numConfigs;
    EGLConfig eglCfg;
    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    // 3. Bind the API
    eglBindAPI(EGL_OPENGL_API);

    // 4. Create a context and make it current -- no surface needed
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT,
                                         NULL);
    eglMakeCurrent(eglDpy, EGL_NO_SURFACE, EGL_NO_SURFACE, eglCtx);

    glewInit();
    // from now on use your OpenGL context
    render::init();
    render::display();

    // 5. Terminate EGL when finished
    eglTerminate(eglDpy);
    return 0;
}

void CHECK_FRAMEBUFFER_STATUS()
{
    GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
    switch(status) {
    case GL_FRAMEBUFFER_COMPLETE:
        break;

    case GL_FRAMEBUFFER_UNSUPPORTED:
        /* choose different formats */
        break;

    default:
        /* programming error; will fail on all hardware */
        throw "Framebuffer Error";
    }
}

namespace render
{
    void init()
    {
        glGenFramebuffers(1, &fb);
        glGenTextures(1, &color);
        glGenRenderbuffers(1, &depth);

        glBindFramebuffer(GL_FRAMEBUFFER, fb);

        // color attachment: a plain RGB8 texture
        glBindTexture(GL_TEXTURE_2D, color);
        glTexImage2D(GL_TEXTURE_2D,
            0,
            GL_RGB8,
            fbo_width, fbo_height,
            0,
            GL_RGBA,
            GL_UNSIGNED_BYTE,
            NULL);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);

        // depth attachment: a renderbuffer
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, fbo_width, fbo_height);
        glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);

        GLint red_bits, green_bits, blue_bits, alpha_bits;
        glGetIntegerv(GL_RED_BITS, &red_bits);
        glGetIntegerv(GL_GREEN_BITS, &green_bits);
        glGetIntegerv(GL_BLUE_BITS, &blue_bits);
        glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);

        fprintf(stderr, "FBO format R%dG%dB%dA%d\n",
            (int)red_bits,
            (int)green_bits,
            (int)blue_bits,
            (int)alpha_bits);

        CHECK_FRAMEBUFFER_STATUS();

        dumpbuf_fd = open("/tmp/fbodump.rgb", O_CREAT|O_SYNC|O_RDWR, S_IRUSR|S_IWUSR);
        assert(-1 != dumpbuf_fd);
        dumpbuf = malloc(fbo_width*fbo_height*3);
        assert(dumpbuf);
    }

    void display()
    {
        glBindTexture(GL_TEXTURE_2D, 0);
        glEnable(GL_TEXTURE_2D);
        glBindFramebuffer(GL_FRAMEBUFFER, fb);

        glViewport(0, 0, fbo_width, fbo_height);

        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-1, 1, -1, 1, -1, 1);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // immediate-mode triangle, one colored vertex per corner
        glBegin(GL_TRIANGLES);
        glColor3f(1, 0, 0);
        glVertex3f(1, 0, 0);

        glColor3f(0, 1, 0);
        glVertex3f(0, 1, 0);

        glColor3f(0, 0, 1);
        glVertex3f(0, 0, 1);
        glEnd();

        // read the FBO back and dump it to a raw RGB file
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        glReadPixels(0, 0, fbo_width, fbo_height, GL_RGB, GL_UNSIGNED_BYTE, dumpbuf);
        lseek(dumpbuf_fd, 0, SEEK_SET);
        write(dumpbuf_fd, dumpbuf, fbo_width*fbo_height*3);
    }
}
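
One caveat with the raw dump: glReadPixels returns rows bottom-up (OpenGL's window origin is the lower left), so most viewers will show the file upside down. A hypothetical helper (flip_rows is my name, not part of the answer) that swaps scanlines in place before the write() call could look like this:

#include <string.h>
#include <stdlib.h>

// Hypothetical helper: swap scanlines so the dump reads top-down.
// Assumes tightly packed RGB, 3 bytes per pixel, as in the code above.
static void flip_rows(unsigned char *buf, int width, int height)
{
    size_t stride = (size_t)width * 3;
    unsigned char *tmp = (unsigned char*)malloc(stride);
    for (int y = 0; y < height/2; ++y) {
        unsigned char *top = buf + (size_t)y * stride;
        unsigned char *bot = buf + (size_t)(height - 1 - y) * stride;
        memcpy(tmp, top, stride);
        memcpy(top, bot, stride);
        memcpy(bot, tmp, stride);
    }
    free(tmp);
}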

answered Nov 19 '18 at 21:28 (edited Nov 19 '18 at 21:35)
datenwolf
  • As far as I can understand, this creates a buffer, draws into it, and finally saves it to /tmp/fbodump.rgb?

    – Kaloyan Manev
    Nov 19 '18 at 22:31











  • @KaloyanManev: Yes, that's exactly what it does. You can display the file generated by this example with ImageMagick's display tool like this: display -size 512x512 -depth 8 RGB:/tmp/fbodump.rgb – I just cobbled this example together from various code lying around: the EGL code from the NVidia blog, some ancient framebuffer tutorial code I wrote ages ago, and some immediate-mode stuff thrown in, because I couldn't be bothered to go full VBO. But I tested it with the NVidia proprietary and Mesa intel_dri drivers and it works (at least here).

    – datenwolf
    Nov 20 '18 at 8:50











  • Yeah, I see; I also found some tutorials on how to render into a buffer. About the displaying part: I was asking whether I can render the buffer on top of my running GUI without creating a window.

    – Kaloyan Manev
    Nov 20 '18 at 15:46













  • @KaloyanManev: On Linux you can in fact use EGL to do that, though I haven't tried it so far. If you can live with not supporting NVidia GPUs, you may have a look at GBM, which lets you access the GPU directly without going through a display system. There's a demo called kmscube which shows how to do that. Also, if you want to bypass the windowing system for a specific device (say a VR headset or some non-desktop display), there are now DRM leases in the works, just for doing that.

    – datenwolf
    Nov 20 '18 at 16:54
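
For reference, opening the GPU through GBM as the kmscube demo does might start like the following sketch. The render-node path is an assumption (it varies per machine), and this needs Mesa's libgbm plus EGL 1.5 or the EGL_KHR_platform_gbm extension; error handling is elided:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <gbm.h>
#include <fcntl.h>

static EGLDisplay openGbmDisplay(void)
{
    // Render nodes allow GPU access without any display server running.
    int fd = open("/dev/dri/renderD128", O_RDWR);   // path is machine-dependent
    struct gbm_device *gbm = gbm_create_device(fd);
    return eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
}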











  • I just tried both kmscube and another DRM-leases example called jesse-cube; both work in console mode. When tried with a GUI, only jesse-cube works, and it runs inside a window. I think it is impossible to access the video memory and just draw over the desktop environment; that would be too hacky and would probably require a custom driver. Thank you very much!

    – Kaloyan Manev
    Nov 20 '18 at 19:22