We are trying to use the SDK to implement the “Align to marker” function, and we found the static function getMarkerPose in the class RecFusion::Calibration.
We are using two sensors, and the result differs from what we get in RecFusion Pro.
Could you explain how RecFusion Pro uses the marker pose after obtaining it? Thanks so much for your support.
Thanks, Olga. I have tried that before; the problem is that I have two sensors. I can get the marker pose from my second camera, but the reconstruction is based on the first one. Do I need to transform the pose using the calibration parameters? How can I solve this? Thanks for your support.
In the case of multiple sensors, the volume transformation is the following:
T = sensorT[j] * depthToColorT[j].inverse() * T * TT;
where j is the ID of the sensor that sees the marker, and sensorT[j] is the transformation from that sensor to the reference sensor (obtained during the multi-sensor calibration procedure).
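As a minimal sketch, the composition above is just a chain of 4x4 matrix products. The names below (volumeTransform, depthToColorInv, markerT) are illustrative, not the actual SDK API, and depthToColorInv is assumed to already hold the inverse of depthToColorT[j]:

```cpp
#include <array>

// Plain 4x4 row-major matrix; RecFusion's own matrix class will differ.
using Mat4 = std::array<std::array<double, 4>, 4>;

// 4x4 matrix product a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Compose the volume transformation for the sensor that sees the marker:
// T = sensorT[j] * depthToColorT[j]^-1 * markerT * TT
Mat4 volumeTransform(const Mat4& sensorT, const Mat4& depthToColorInv,
                     const Mat4& markerT, const Mat4& TT) {
    return mul(mul(mul(sensorT, depthToColorInv), markerT), TT);
}
```

Note that matrix multiplication is not commutative, so the order of the factors must be kept exactly as in the formula.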
Computing the inverse of a matrix is more involved; you can refer to this page for an explanation of how it is computed. Another option is to use a math library that already provides an implementation, for example Eigen. We will add an inverse function to our SDK in its next release, but I cannot give you a timeline at this point.
However, for this particular problem you can also simply remove depthToColorT from the formula, since it is usually very close to the identity. The formula then becomes:
T = sensorT[j] * T * TT;