We are using RecFusion Pro to perform a multi-sensor calibration with three Orbbec Astra Series sensors. The calibration works well when all sensors use the maximum available resolution for both the depth and color streams.
However, when we lower the resolution of the depth and/or color stream, the calibration performs very poorly. It is almost as if no calibration was applied at all: the reconstructed segments are offset from each other by about half a meter. We have tested many different settings, and none of them give sensible results.
I can see why a lower resolution might perform somewhat worse, for example because it is harder to locate the calibration marker precisely in a pixelated image. However, I don't understand why it would result in such a grossly wrong calibration. Is there a particular reason for this, or an assumption behind the calibration algorithm that no longer holds at lower resolutions?
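One hypothesis we considered (purely an assumption on our part, not something confirmed by the RecFusion documentation): if the pipeline keeps using the full-resolution camera intrinsics after the stream resolution changes, instead of rescaling them, every back-projected marker point lands in the wrong 3D position, which would produce exactly this kind of large, consistent offset between sensors. The sketch below uses made-up pinhole intrinsics for an Astra-like 640x480 depth stream to show the size of the error from stale intrinsics:

```python
import numpy as np

def scale_intrinsics(K, old_size, new_size):
    """Rescale a 3x3 pinhole intrinsic matrix for a new stream
    resolution. old_size and new_size are (width, height)."""
    sx = new_size[0] / old_size[0]
    sy = new_size[1] / old_size[1]
    K_scaled = K.copy()
    K_scaled[0, 0] *= sx  # fx
    K_scaled[1, 1] *= sy  # fy
    K_scaled[0, 2] *= sx  # cx
    K_scaled[1, 2] *= sy  # cy
    return K_scaled

def backproject(K, u, v, depth_m):
    """Back-project pixel (u, v) at a given depth into a 3D
    point in camera coordinates (meters)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical full-resolution (640x480) intrinsics.
K_full = np.array([[570.0,   0.0, 320.0],
                   [  0.0, 570.0, 240.0],
                   [  0.0,   0.0,   1.0]])

# Correctly rescaled intrinsics after switching to 320x240.
K_low = scale_intrinsics(K_full, (640, 480), (320, 240))

# A marker corner seen at pixel (40, 30) in the 320x240 stream, 2 m away.
p_correct = backproject(K_low, 40, 30, 2.0)
# The same pixel back-projected with the stale full-resolution intrinsics.
p_wrong = backproject(K_full, 40, 30, 2.0)

print(np.linalg.norm(p_correct - p_wrong))  # ~0.175 m of error at 2 m depth
```

The error grows linearly with depth and with distance from the image center, so marker observations spread across the working volume would be inconsistently wrong, which could plausibly wreck the estimated sensor poses. Again, this is only a guess at the failure mode; we don't know whether RecFusion handles intrinsics this way.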