
Hi,

I am using the NXP IMXRT1170 and Embedded Wizard v13 with the Platform Package VGLite RGBA8888.

I am currently displaying a 720 x 576 pixel PAL camera output on the LCD display by triggering an Extern Bitmap update every 10 ms in Embedded Wizard.

Within the Extern Bitmap function I perform some PXP operations for YUV-to-RGB conversion with my BSP function. Then I feed this buffer to the bitmap as follows:

bitmap = EwCreateBitmap( EW_PIXEL_FORMAT_NATIVE, frameSize, 0, 1 );

if ( bitmap != (XBitmap*)NULL )
{
  /* lock the entire first frame for write access */
  lock               = EwLockBitmap( bitmap, 0, EwNewRect( 0, 0, frameSize.X, frameSize.Y ), 0, 1 );
  destinationAddress = (U_8*)lock->Pixel1;
  pitchBytes         = ( lock->Pitch1Y >> 2 );  /* Pitch1Y is the row pitch in bytes; >> 2 converts it to 32-bit words */
  BytePerPixel       = lock->Pitch1X;           /* offset between adjacent pixels in bytes */

  .....
}
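For reference, the part elided by "....." copies the PXP-converted frame row by row into destinationAddress and then releases the lock again with the counterpart call, roughly (a sketch; the copy itself is our BSP-specific PXP code):

EwUnlockBitmap( lock );   /* release the lock so the Graphics Engine can use the bitmap again */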

Up to now this method has worked fine, and we can show the camera output in any area of the Embedded Wizard GUI. However, we now need to draw some images directly on top of the camera output in the GUI. Think of a rear camera in a car that shows the rear view while parking, with distance lines/signs from the parking sensor drawn on top of it.

After implementing this requirement, our frame rate dropped to 7-8 FPS, so we can barely view the camera image. Even adding only one image on top of the camera output makes the system much slower.

This pushed us to look for another way to realize it, so we explored the Applet interface. Method 1 is nearly the same as the Extern Bitmap approach we had already implemented; we nevertheless tried it and got exactly the same results as with the Extern Bitmap.

Method 3 with a graphics hardware layer is still on our to-do list. Next we tried to implement Method 2, but we could not get it to work. I tested Method 2 with the debugger: my code runs, and I can hit breakpoints in the OnDraw method and in the BSP function it calls, shown below:

void CameraPlayerDraw( CameraPlayerObject* aObject, XBitmap* aBitmap, XRect aClip, XRect aDstRect )
{
  U_32 GetWidth  = ( aDstRect.Point2.X - aDstRect.Point1.X );
  U_32 GetHeight = ( aDstRect.Point2.Y - aDstRect.Point1.Y );
  static U_32 frameNumber;

  /* BSP call: write the latest camera frame to the destination buffer at the Applet
     object position (destination assumed to be 1024 x 600 with a pitch of 1024*4 bytes) */
  bspAccessHandle.cameraAccessHandle->cameraGetLatestFrame( &frameNumber, 0, 0,
    bspAccessHandle.cameraAccessHandle->cameraGetFrameWidth(),
    bspAccessHandle.cameraAccessHandle->cameraGetFrameHeight(),
    aBitmap, 1024, 600, 1024 * 4,
    aDstRect.Point1.X, aDstRect.Point1.Y, GetWidth, GetHeight,
    CAMERA_OUTPUT_NO_RATATION, CAMERA_OUTPUT_NO_FLIP );
}

Here the aDstRect parameter gives the Applet object coordinates and aBitmap gives an address. When I look at it in the debugger, I see that aBitmap is an address on the Embedded Wizard heap, not the actual display frame-buffer address. So when I run the code above, my function works and returns from OnDraw, but shortly afterwards the program drops into an exception; apparently I am writing to a wrong address. I also tried several variants, such as passing aBitmap->Surfaces[0]->Handle instead of aBitmap to my function; the function then runs without error, but some time later the program crashes again.

In Method 1 the function above works well, because it gives me the lock->Pixel1 pointer to which I can copy my camera output. In Method 2, however, no such address is given to me, and the address of aBitmap is completely different from the address of the actual frame buffer.

Do you perform several operations in your heap memory before copying the full GUI frame into the actual display/screen frame-buffer address? What is your method of displaying a full UI frame: do you work directly in the actual frame buffer, or somewhere else in memory until the full frame image is prepared?

Also, how can I feed my camera output to the display frame buffer in one step? As you explain very well in your documentation, in Method 1 / Extern Bitmap the data we copy to lock->Pixel1 is later copied again to another place in memory. We want to avoid copying the same image over and over. If we can do this, we expect to improve overall system performance and reach higher FPS values again, even when several images are combined on top of the camera output in the UI design.

1 Answer


Hello,

Since I have no experience with the NXP target, I'm not sure whether my answers will help you.

"However, we now need to draw some images directly on top of the camera output in the GUI."

Assuming the images should be displayed by Embedded Wizard, the Extern Bitmap approach should work in this case. Just arrange the other image and text views in front of the image view intended to display the Extern Bitmap content.

"After implementing this requirement, our frame rate dropped to 7-8 FPS, so we can barely view the camera image. Even adding only one image on top of the camera output makes the system much slower."

Possibly the images are large and semi-transparent. In that case all images need to be alpha-blended, which may impact performance. Without knowing your implementation it is difficult to deduce what is wrong.

"Next we tried to implement Method 2, but we could not get it to work. I tested Method 2 with the debugger."

...

"So when I run the code above, my function works and returns from OnDraw, but shortly afterwards the program drops into an exception."

I suppose your implementation ignores the values passed in the parameter aClip. However, the resulting modification of the frame buffer may affect ONLY the area determined by aClip. Trying to modify frame-buffer contents lying outside this area has two effects: in the best case you get artefacts on the screen; in the worst case the application crashes because non-existing frame-buffer areas are accessed.
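For illustration, a minimal sketch (not taken from your project) of how the draw area could be restricted to aClip before any pixel is written; it only uses the XRect/XPoint fields already present in your CameraPlayerDraw():

/* intersect the destination rectangle with the clip rectangle */
XRect dst = aDstRect;

if ( dst.Point1.X < aClip.Point1.X ) dst.Point1.X = aClip.Point1.X;
if ( dst.Point1.Y < aClip.Point1.Y ) dst.Point1.Y = aClip.Point1.Y;
if ( dst.Point2.X > aClip.Point2.X ) dst.Point2.X = aClip.Point2.X;
if ( dst.Point2.Y > aClip.Point2.Y ) dst.Point2.Y = aClip.Point2.Y;

/* nothing to draw if the intersection is empty */
if (( dst.Point1.X >= dst.Point2.X ) || ( dst.Point1.Y >= dst.Point2.Y ))
  return;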

"Additionally, the address of aBitmap is completely different from the address of the actual frame buffer."

Embedded Wizard uses (depending on the target system) different strategies to manage and update the frame buffer. It may be that what you see as the 'frame buffer' during the execution of the OnDraw() method is a small off-screen buffer and not the real frame buffer. The size of this off-screen buffer corresponds to aClip.

"Also, how can I feed my camera output to the display frame buffer in one step?"

Try Method 2 again with the aClip parameter taken into account correctly. I hope it helps you further.
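One possible way to respect both points above (the clipping and the unknown buffer address) is to let the Graphics Engine hand out the destination pointer instead of guessing it: lock only the clipped area of aBitmap with EwLockBitmap(), exactly as you already do in the Extern Bitmap case, and feed the camera data into lock->Pixel1. Below is a sketch under these assumptions; the cameraGetLatestFrame() parameters are interpreted from your original call (source area, destination address/size/pitch, destination position/size, rotation/flip), and whether aBitmap may be locked this way inside OnDraw() depends on the Applet integration on your target:

void CameraPlayerDraw( CameraPlayerObject* aObject, XBitmap* aBitmap, XRect aClip, XRect aDstRect )
{
  static U_32  frameNumber;
  XRect        dst = aDstRect;
  XBitmapLock* lock;

  /* restrict the destination to the clip area (see the sketch above) */
  if ( dst.Point1.X < aClip.Point1.X ) dst.Point1.X = aClip.Point1.X;
  if ( dst.Point1.Y < aClip.Point1.Y ) dst.Point1.Y = aClip.Point1.Y;
  if ( dst.Point2.X > aClip.Point2.X ) dst.Point2.X = aClip.Point2.X;
  if ( dst.Point2.Y > aClip.Point2.Y ) dst.Point2.Y = aClip.Point2.Y;

  if (( dst.Point1.X >= dst.Point2.X ) || ( dst.Point1.Y >= dst.Point2.Y ))
    return;

  /* let the Graphics Engine provide the pixel pointer for exactly this area */
  lock = EwLockBitmap( aBitmap, 0, dst, 0, 1 );

  if ( lock != (XBitmapLock*)NULL )
  {
    /* hypothetical BSP call: the destination parameter is assumed to accept a raw
       pixel address; use the pitch reported by the lock instead of a hard-coded
       1024*4, and write at offset (0,0) because the lock covers only 'dst' */
    bspAccessHandle.cameraAccessHandle->cameraGetLatestFrame( &frameNumber, 0, 0,
      bspAccessHandle.cameraAccessHandle->cameraGetFrameWidth(),
      bspAccessHandle.cameraAccessHandle->cameraGetFrameHeight(),
      (U_8*)lock->Pixel1,
      dst.Point2.X - dst.Point1.X, dst.Point2.Y - dst.Point1.Y, lock->Pitch1Y,
      0, 0, dst.Point2.X - dst.Point1.X, dst.Point2.Y - dst.Point1.Y,
      CAMERA_OUTPUT_NO_RATATION, CAMERA_OUTPUT_NO_FLIP );

    EwUnlockBitmap( lock );
  }
}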

Best regards

Paul Banach
