Reading from GL_TEXTURE_EXTERNAL_OES into GL_TEXTURE_2D has performance issues and glitches



























I need to send data from a GL_TEXTURE_EXTERNAL_OES to a plain GL_TEXTURE_2D (render the image from an Android player into a Unity texture), and I currently do it by reading pixels from a framebuffer with the source texture attached. This works correctly on my OnePlus 5, but on phones like the Xiaomi Note 4, Mi A2 and others the image has glitches (it looks very green). There are also performance problems, because the process runs every frame and the more pixels there are to read, the worse the performance gets (even my phone drops to a low FPS at 4K resolution). Any idea how to optimize this process or do it some other way?



Thanks and best regards!



GLuint FramebufferName;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);

// Attach the external (SurfaceTexture) image as the read source
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    LOGD("%s", "Error: Could not setup frame buffer.");
}

// Read the pixels back to the CPU, then upload them into the Unity texture
unsigned char* data = new unsigned char[g_SourceWidth * g_SourceHeight * 4];
glReadPixels(0, 0, g_SourceWidth, g_SourceHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);

glBindTexture(GL_TEXTURE_2D, g_TexturePointer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, g_SourceWidth, g_SourceHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

glDeleteFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);

delete[] data; // array allocations must be released with delete[]


UPDATE.
The function that contains this code, and the function that exposes it to the Unity side:



static void UNITY_INTERFACE_API OnRenderEvent(int eventID) { ... }

extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UMDGetRenderEventFunc()
{
return OnRenderEvent;
}


It is called from the Unity side like this:



[DllImport("RenderingPlugin")]
static extern IntPtr UMDGetRenderEventFunc();

IEnumerator UpdateVideoTexture()
{
    while (true)
    {
        ...
        androidPlugin.UpdateSurfaceTexture();
        GL.IssuePluginEvent(UMDGetRenderEventFunc, 1);
    }
}


And the Android plugin does this on its side (surfaceTexture is the SurfaceTexture that wraps the external texture ExoPlayer renders the video to):



public void exportUpdateSurfaceTexture() {
    synchronized (this) {
        if (this.mIsStopped) {
            return;
        }
        surfaceTexture.updateTexImage();
    }
}









  • Please show the function that's calling the code above. Also, post the C# side of the code too.

    – Programmer
    Nov 20 '18 at 8:58













  • @Programmer I've added more information about the process.

    – Urbanovich Andrew
    Nov 20 '18 at 9:16
















android unity3d opengl-es framebuffer exoplayer






2 Answers
On the C++ side:



You're allocating and freeing the pixel buffer every frame with new unsigned char[g_SourceWidth * g_SourceHeight * 4]; and delete data, and that's expensive depending on the texture size. Create the buffer once, then re-use it.



One way to do this is to have static variables on the C++ side hold the texture information, plus a function to initialize those variables:



static void* pixelData = nullptr;
static int _x;
static int _y;
static int _width;
static int _height;

void initPixelData(void* buffer, int x, int y, int width, int height) {
    pixelData = buffer;
    _x = x;
    _y = y;
    _width = width;
    _height = height;
}


Then your capture function should be re-written to remove new unsigned char[g_SourceWidth * g_SourceHeight * 4]; and delete data and use the static variables instead.



static void UNITY_INTERFACE_API OnRenderEvent(int eventID)
{
    if (pixelData == nullptr) {
        //Debug::Log("Pointer is null", Color::Red);
        return;
    }

    GLuint FramebufferName;
    glGenFramebuffers(1, &FramebufferName);
    glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        LOGD("%s", "Error: Could not setup frame buffer.");
    }

    glReadPixels(_x, _y, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);

    glBindTexture(GL_TEXTURE_2D, g_TexturePointer);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _width, _height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);

    glDeleteFramebuffers(1, &FramebufferName);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
}

extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UMDGetRenderEventFunc()
{
    return OnRenderEvent;
}


On the C# side:



[DllImport("RenderingPlugin", CallingConvention = CallingConvention.Cdecl)]
public static extern void initPixelData(IntPtr buffer, int x, int y, int width, int height);

[DllImport("RenderingPlugin", CallingConvention = CallingConvention.StdCall)]
private static extern IntPtr UMDGetRenderEventFunc();


Create the Texture information, pin it and send the pointer to C++:



int width = 500;
int height = 500;

//Where Pixel data will be saved
byte[] screenData;
//Where the handle that pins the Pixel data will stay
GCHandle pinHandler;

//Used to test the color
public RawImage rawImageColor;
private Texture2D texture;

// Use this for initialization
void Awake()
{
    Resolution res = Screen.currentResolution;
    width = res.width;
    height = res.height;

    //Allocate the array to be used
    screenData = new byte[width * height * 4];
    texture = new Texture2D(width, height, TextureFormat.RGBA32, false, false);

    //Pin the array so that it doesn't move around
    pinHandler = GCHandle.Alloc(screenData, GCHandleType.Pinned);

    //Get the address of the pinned array that will receive the pixels
    IntPtr arrayPtr = pinHandler.AddrOfPinnedObject();

    initPixelData(arrayPtr, 0, 0, width, height);

    StartCoroutine(UpdateVideoTexture());
}


Then update the texture as in the sample below. Note that there are two ways to update the texture, shown in the code below. If you run into issues with Method 1, comment out the two lines that use texture.LoadRawTextureData and texture.Apply and un-comment the Method 2 code, which uses the ByteArrayToColor, texture.SetPixels and texture.Apply functions:



IEnumerator UpdateVideoTexture()
{
    while (true)
    {
        //Take a screenshot of the screen
        GL.IssuePluginEvent(UMDGetRenderEventFunc(), 1);

        //Update Texture Method1
        texture.LoadRawTextureData(screenData);
        texture.Apply();

        //Update Texture Method2. Use this if Method1 above crashes
        /*
        ByteArrayToColor();
        texture.SetPixels(colors);
        texture.Apply();
        */

        //Test it by assigning the texture to a raw image
        rawImageColor.texture = texture;

        //Wait for a frame
        yield return null;
    }
}

Color[] colors = null;

void ByteArrayToColor()
{
    if (colors == null)
    {
        colors = new Color[screenData.Length / 4];
    }

    for (int i = 0; i < screenData.Length; i += 4)
    {
        //Color expects components in 0..1, so normalize the raw bytes
        colors[i / 4] = new Color(screenData[i] / 255f,
            screenData[i + 1] / 255f,
            screenData[i + 2] / 255f,
            screenData[i + 3] / 255f);
    }
}


Unpin the array when done or when the script is about to be destroyed:



void OnDisable()
{
    //Unpin the array when disabled
    pinHandler.Free();
}





  • Wow, thanks for the full answer with clarification, I'm going to check it in a few hours! The only thing (which I clearly didn't mention before, my bad) is that the data I read from the Android player is DASH streaming video, and it can change resolution at any time depending on the internet connection; this is why I always re-init the array with the g_Source width and height.

    – Urbanovich Andrew
    Nov 20 '18 at 11:49











  • If the resolution can change, add a callback function on the C# side that you can call to resize the array when the width and height no longer match: free the current handle with pinHandler.Free(), resize the array and the texture, pin the new array again with pinHandler = GCHandle.Alloc(screenData, GCHandleType.Pinned), get the address again with pinHandler.AddrOfPinnedObject(), and update the information on the C++ side by calling the initPixelData function. You're basically re-doing what you did before in the Awake function.

    – Programmer
    Nov 20 '18 at 11:56
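(For illustration only: one hypothetical way for the C# side to notice a size change is to poll a tiny native getter like the sketch below each frame before deciding whether to re-pin and call initPixelData again. UMDGetSourceSize is an invented name and is not part of the original plugin; it only reads the g_SourceWidth/g_SourceHeight globals from the question.)

// Hypothetical helper, not in the original plugin: exposes the current video size
// so the C# side can detect a resolution change and re-run the pin + initPixelData setup.
extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UMDGetSourceSize(int* width, int* height)
{
    if (width != nullptr)  *width  = g_SourceWidth;
    if (height != nullptr) *height = g_SourceHeight;
}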













  • Thanks! It got better, but it's still not enough for 4K resolution, even on the OnePlus 5. Maybe you know why this kind of image bug can show up? It doesn't happen on all phones, but still: link

    – Urbanovich Andrew
    Nov 20 '18 at 16:15













  • Where are you getting the video texture, and does the image you linked happen every frame or once in a while?

    – Programmer
    Nov 20 '18 at 18:54











  • I found out that this effect exists only on Xiaomi phones (checked four different Xiaomi models); Razer and OnePlus are OK. I link the video texture from ExoPlayer (Android) to g_ExtTexturePointer only once at start, and after that only everything you saw in the code: update the texture on the Android side and then update it in Unity.

    – Urbanovich Andrew
    Nov 20 '18 at 19:57



















Calling glReadPixels is always going to be slow; CPUs are not good at bulk data transfer.



Ideally you'd manage to convince Unity to accept an external image handle and do the whole process zero-copy, but failing that I would use a GPU render-to-texture pass and a shader to copy from the external image to the RGB surface.
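A minimal GLES 2 sketch of that shader-copy idea, written in the same native-plugin style as the question, follows. It reuses the question's g_ExtTexturePointer, g_TexturePointer, g_SourceWidth and g_SourceHeight; everything else (function names, the one-time setup) is assumed for the example, and the destination GL_TEXTURE_2D is assumed to already have storage of the right size (allocated once with glTexImage2D and a null data pointer).

// Sketch: copy GL_TEXTURE_EXTERNAL_OES into g_TexturePointer entirely on the GPU.
static const char* kVS =
    "attribute vec2 aPos;\n"
    "varying vec2 vUV;\n"
    "void main() {\n"
    "  vUV = aPos * 0.5 + 0.5;\n"
    "  gl_Position = vec4(aPos, 0.0, 1.0);\n"
    "}\n";

static const char* kFS =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES uTex;\n"
    "varying vec2 vUV;\n"
    "void main() { gl_FragColor = texture2D(uTex, vUV); }\n";

static GLuint sProgram, sFbo, sVbo;

static void InitBlit() // run once on the render thread
{
    // Fullscreen triangle
    const GLfloat tri[] = { -1.f, -1.f,  3.f, -1.f,  -1.f, 3.f };
    glGenBuffers(1, &sVbo);
    glBindBuffer(GL_ARRAY_BUFFER, sVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &kVS, nullptr);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kFS, nullptr);
    glCompileShader(fs);

    sProgram = glCreateProgram();
    glAttachShader(sProgram, vs);
    glAttachShader(sProgram, fs);
    glBindAttribLocation(sProgram, 0, "aPos");
    glLinkProgram(sProgram);

    glGenFramebuffers(1, &sFbo);
}

static void BlitExternalToTexture2D() // run per frame instead of glReadPixels
{
    // Render into the destination GL_TEXTURE_2D...
    glBindFramebuffer(GL_FRAMEBUFFER, sFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, g_TexturePointer, 0);
    glViewport(0, 0, g_SourceWidth, g_SourceHeight);

    // ...while sampling from the external image in the fragment shader.
    glUseProgram(sProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer);
    glUniform1i(glGetUniformLocation(sProgram, "uTex"), 0);

    glBindBuffer(GL_ARRAY_BUFFER, sVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDisableVertexAttribArray(0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // restore Unity's GL state as needed
}

With this approach the pixels never leave the GPU, so the per-frame cost is one small draw call instead of a full-resolution glReadPixels plus glTexImage2D upload.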






  • Not so long ago I found some information about pixel buffers and reading from them, but I never tested it. Maybe you have some experience with it?

    – Urbanovich Andrew
    Nov 21 '18 at 11:39













  • You can use a pbuffer to do an asynchronous glReadPixels, but it's still going to be horribly slow. As I said, CPUs are not designed for fast bulk data transfer. Use the GPU; that's what it's exceptionally good at ...

    – solidpixel
    Nov 22 '18 at 23:01
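For reference, a rough sketch of what double-buffered asynchronous readback with pixel buffer objects could look like inside the plugin's render event is below. It assumes OpenGL ES 3.0 headers (<GLES3/gl3.h>) and <cstring> for memcpy; the function and variable names are made up for the example. The mapped data is one frame old, and the CPU copy plus texture re-upload still remain, so this mostly hides latency rather than removing the cost.

// Sketch: double-buffered PBO readback (requires OpenGL ES 3.0).
static GLuint sPbo[2];
static int    sFrame = 0;

static void InitPbos(int width, int height)
{
    glGenBuffers(2, sPbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, sPbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

static void ReadPixelsAsync(int width, int height, void* dst)
{
    const int writeIdx = sFrame % 2;       // PBO that receives this frame's pixels
    const int readIdx  = (sFrame + 1) % 2; // PBO that was filled one frame ago

    // Start the transfer into the PBO; this returns without waiting for the GPU.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, sPbo[writeIdx]);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // Map last frame's PBO; by now that transfer has usually finished.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, sPbo[readIdx]);
    void* src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
                                 GL_MAP_READ_BIT);
    if (src != nullptr) {
        memcpy(dst, src, (size_t)width * height * 4); // still a full CPU copy
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++sFrame;
}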










