Align automatically

We are trying to use the SDK to support the “Align to marker” function, and we found that there is a static function getMarkerPose in the class RecFusion::Calibration.

We are using two sensors, and the result seems different from what we get in RecFusion Pro.

Could you please explain how RecFusion Pro uses the marker pose after obtaining it? Thanks so much for your support.

Hello,

There is an example of how to do this in the processFrame function of the provided QtReconstruction sample. Here is the code:

    Mat3 K = m_sensor->colorIntrinsics();
    Mat4 T;
    bool ok = Calibration::getMarkerPose(100, 190, *m_imgColor, K, T);
    if (ok)
    {
        Vec3 volSize = m_params.volumeSize();
        // Rotate by -90 degrees around x and translate so that the volume stands on the marker (row-major order)
        double dataTT[] = { 1, 0, 0, 0,
                            0, 0, -1, 0,
                            0, 1, 0, 0,
                            0, 0, volSize[1] / 2, 1 };
        Mat4 TT(dataTT);

        T = T * TT;

        // Extract rotation and translation and apply them to the reconstruction volume
        Mat3 R;
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                R(r, c) = T(r, c);
        Vec3 t(T(0, 3), T(1, 3), T(2, 3));
        m_params.setVolumeRotation(R);
        m_params.setVolumePosition(t);
    }

Best regards,
Olga

Thanks, Olga. I had tried that before; the problem is that I have two sensors. I can get the marker pose from my second camera, but the reconstruction is based on the first one. Do I need to transform the pose using the calibration parameters? How can I solve this? Thanks for your support.

In the case of multiple sensors, the volume transformation will be the following:

    T = sensorT[j] * depthToColorT[j].inverse() * T * TT;

where j is the index of the sensor that sees the marker, sensorT[j] is the transformation from that sensor to the reference sensor (obtained during the multi-sensor calibration procedure), and depthToColorT[j] is that sensor's depth-to-color calibration transformation.

Best regards,
Olga

Great, thanks Olga.
My depthToColor is a RecFusion::Mat4 object and it has no inverse function. Does inverse just mean inverting all the elements?

The inverse of a matrix is more complicated than that; you can refer to this page for an explanation of how it is computed. Another option is to use one of the math libraries that already implement it, for example Eigen. We will add an inverse function to our SDK in the next release, but I cannot give you a timeline at this point.

However, for this particular problem you can also simply remove depthToColorT from the formula, since it is usually very close to the identity. The formula then becomes:

    T = sensorT[j] * T * TT;

Best regards,
Olga