Instantaneous read of color/depth images under RF 136 vs RF 220

Since we moved from RecFusion 136 to RecFusion 220, we have seen a huge performance loss in our application (around 3x: from 22 fps to 7 fps on the same hardware, with three Orbbec Astra S sensors). The only code that has changed is the part related to scanning and reconstruction (basically, the RecFusion part). I managed to track down exactly where this difference comes from: the Sensor::readImage() function.

On RecFusion 220, readImage() takes around 30 ms for each sensor. On RecFusion 136 with three sensors, however, only one sensor takes around 30 ms while the other two take <1 ms. Not surprisingly, if I use only two sensors, one takes around 30 ms while the other takes <1 ms. Finally, a single sensor alone takes around 30 ms. From this behaviour it seems that data from all three sensors are obtained in parallel. But is that even possible on the RF 136/OpenNI2/driver side? What has changed between RF 136 and RF 220 that could explain such behaviour?

Example measurements for RF 136:

Example measurements for RF 220:

To test this behaviour once more with simpler code, I have created this small project: As you can see, the code for RF 136 and RF 220 differs minimally, but the timing is vastly different.

Could you guide me in getting to the bottom of this and the possible reasons for such behaviour? Thanks!

Hello Patryk,

we will take a look into this issue. It might be that some laser-management functionality changed between the versions. In general, though, we would recommend not using readImage, but rather subscribing as a listener and implementing onSensorData. This way you don't lose any frames, and you don't have to block your application until a new frame is available.

In any case we will check the performance issue and get back to you when we have some conclusions.

Best regards,

Hello Olga,

I was actually going to use onSensorData and the listener. However, I had a lot of problems with parallel acquisition/reconstruction. I am using a lock-free queue for each sensor, into which I put one frame after another. This happens on separate threads, one per sensor (usually up to three). Then, in a "master" thread, I run the collection loop, which takes images from these queues. Nevertheless, with this approach I always get a memory access violation inside the SDK. At first, I thought the problem was allocating memory inside the listener in the onSensorData method and deallocating it outside, in the reconstruction thread. But I changed it so that allocation/deallocation happens on the same thread, and it still didn't work. I have some code examples here: (parallel-listener doesn't work; parallel-read-image does work).

In the end, I switched to readImage and have had no such issues. I guess I am doing something unsafe (or, less likely, there is something wrong in the SDK). Overall, I am not planning to debug this approach further for the moment. I will come back if something changes :slight_smile: