I am reconstructing ultrasound volumes from freehand tracked ultrasound frames using the sweep compounding algorithm, and then exporting them in the MetaImage (mhd) format. I have a question concerning the homogeneous transform associated with my reconstructed volume. I assume the “Position” field in my mhd header holds the translation parameters associated with my volume. However, when loading the same volume in ImFusion and looking at its associated transformation (by clicking on “Edit transformation”), the translation parameters don’t match the “Position” parameters from my mhd header file.

Is there another transformation applied to my volume when loading it in ImFusion? I looked at the user documentation and I didn’t see anything relevant to my problem.

Hi Remi,
Within the ImFusion Suite and SDK, image matrices are always expressed with respect to the image (or volume) center. That makes algorithms such as image registration numerically more robust. MHD and other formats store the matrix with respect to one of the corners, and during loading and saving the matrix is automatically converted.
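To make the corner-versus-center difference concrete, here is a small self-contained sketch (plain C++ without Eigen; the struct and function names are mine, not the SDK's). For a volume with identity orientation, the world position of the volume center is simply the corner position plus half the physical extent (dimensions times spacing):

```cpp
#include <cassert>
#include <cmath>

// Illustration only (names are hypothetical, not the SDK's): for a volume
// with identity orientation, the world position of the volume center is the
// corner position plus half the physical extent (dimensions * spacing).
struct Vec3 { double x, y, z; };

inline Vec3 cornerToCenter(const Vec3& corner, const int dims[3], const Vec3& sp) {
    return { corner.x + 0.5 * dims[0] * sp.x,
             corner.y + 0.5 * dims[1] * sp.y,
             corner.z + 0.5 * dims[2] * sp.z };
}
```

With a rotation involved the offset has to be rotated as well, which is why just comparing translation components of the two matrices won't line up.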
If you have access to the SDK documentation, there’s an elaborate page on this.
Let me otherwise know if you have further questions.
Best,
Oliver

Thank you for your help. I suspected that it had something to do with the transformations being expressed relative to the center of the image. However, I am still confused about how the transformation is converted; I must be getting the order of the applied transformations wrong. As for the SDK documentation, I think you are referring to the “General Design Documentation / Coordinate Systems” page, but it’s not available in my version of the SDK documentation.

As far as I understand, the new center of my volume should be [78x0.41x0.5, 555x0.41x0.5, 500x0.41x0.5]. I tried to apply this new center in different ways (taking into account different direction conventions), but I am still not able to retrieve the converted ImFusion translation parameters.

First of all, that’s the user documentation. If you have the SDK, the SDK documentation would be next to it in the ImFusionLib folder, but it’s rather technical and describes how the matrices are stored in the different data structures.

On the matrix: It’s important to notice the dropdown for the coordinate convention: MHD matrices are “Data to World”, i.e. the matrix maps a point in the image to the world coordinate system. You selected “World to Data” in the “Edit Transform” widget, so it will show you the inverse.

Since you have a rotation, just adding the translations won’t be sufficient. You need to assemble the full 4x4 matrix, and then multiply in a translation matrix of half the image’s physical extent, plus half a pixel on top, to end up at the image center.
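Here is a purely illustrative sketch of that recipe in plain C++ (helper names are mine; the sign of the offset and which side you multiply on depend on whether your matrix is data-to-world or world-to-data, so treat this as one possible convention, not the SDK's exact code). It assembles the full 4x4 data-to-world matrix from the MHD Orientation and Position, then post-multiplies a translation moving the reference point from the first voxel to the volume center:

```cpp
#include <cassert>
#include <cmath>

// Sketch under assumed conventions (names are hypothetical): build the 4x4
// data-to-world matrix from the MHD Orientation R and Position p, then shift
// the reference point to the volume center by half the extent, corrected by
// half a pixel (MHD positions refer to the center of the first voxel).
struct Mat4 { double a[4][4]; };

Mat4 makeDataToWorld(const double R[3][3], const double p[3]) {
    Mat4 M{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) M.a[i][j] = R[i][j];
        M.a[i][3] = p[i];
    }
    M.a[3][3] = 1.0;
    return M;
}

Mat4 mul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) C.a[i][j] += A.a[i][k] * B.a[k][j];
    return C;
}

Mat4 centerOffset(const int dims[3], const double sp[3]) {
    Mat4 T{};
    for (int i = 0; i < 4; ++i) T.a[i][i] = 1.0;
    for (int i = 0; i < 3; ++i)
        T.a[i][3] = 0.5 * dims[i] * sp[i] - 0.5 * sp[i];  // ext/2 minus half pixel
    return T;
}

// Data-to-world expressed w.r.t. the volume center: M_center = M_corner * T.
// The rotation rotates the offset too, so the translations differ by R * offset.
Mat4 cornerToCenterMatrix(const double R[3][3], const double p[3],
                          const int dims[3], const double sp[3]) {
    return mul(makeDataToWorld(R, p), centerOffset(dims, sp));
}
```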

Thank you again for your help. I must be missing something, because I am still not able to retrieve the “WorldToImageCenter” transformation I see in ImFusion from the “WorldToImageCorner” transform saved in my mhd header file. Earlier, when discussing applying the translation to the center of the volume, I did assemble a 4x4 homogeneous matrix.

In my understanding, if I want to obtain the “WorldToImageCenter” transformation, I need to multiply the inverse of the mhd DataToWorld transform with the ImageCornerToImageCenter transform:

where ImageCornerToImageCenter corresponds to the homogeneous matrix containing the translation parameters that move the origin from the top-left corner to the center of the image in world coordinates (mm). However, I am still obtaining a completely different matrix.

The reason I am asking is that I want to load my ImFusion reconstructed volume and my tracked ultrasound data in 3DSlicer and preserve the alignment between the two volumes.

Hi,
It’s indeed a bit tricky, because you need to consider the image center offset correctly. Maybe it’s best if I share the actual code showing how the matrices are computed:

Saving a MetaImage file:

// m_matrix is the matrix in ImFusion SDK convention, i.e. world to image center.
// spacing is the voxel spacing as a vec3.
vec3 ext = desc.extent(); // that's dimensions * spacing
const vec3 pos = m_matrix.topRightCorner<3, 1>() - spacing / 2;
const mat3 rotShearScale = m_matrix.topLeftCorner<3, 3>();
const vec3 trans = rotShearScale.inverse() * (-pos - ext * 0.5);
file << "Position = " << trans.transpose() << std::endl;
file << "Orientation = ";
// m_matrix is stored column-major, so ptr[i + 4 * j] is element (i, j).
const double* ptr = m_matrix.data();
for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
        file << ptr[i + 4 * j] << " ";
file << std::endl;

Loading a MetaImage file:

// Position and orientation are dumped into m_matrix, and then
// the following corrections are made:
vec3 ext = {spacing[0] * dimensions[0], spacing[1] * dimensions[1], spacing[2] * dimensions[2]};
vec3 trans = m_matrix.topRightCorner<3, 1>();
mat3 rot = m_matrix.topLeftCorner<3, 3>();
m_matrix.topRightCorner<3, 1>() = -(rot * trans + ext * 0.5);
if (!ignoreHalfPixelOffset)
    m_matrix.topRightCorner<3, 1>() += vec3(spacing[0], spacing[1], spacing[2]) / 2.0;
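To convince yourself that the two snippets are mutually consistent, here is a minimal round-trip check in plain C++ (no Eigen; names are mine, and for simplicity the 3x3 block is assumed to be a pure rotation, so its inverse is its transpose). Saving a world-to-center translation to an MHD Position and loading it back should recover the original translation exactly:

```cpp
#include <cassert>
#include <cmath>

// Round-trip sanity check of the save/load formulas above, re-expressed
// with plain arrays. Assumes the 3x3 block R is a pure rotation, so that
// its inverse equals its transpose.
struct V3 { double v[3]; };
struct M3 { double m[3][3]; };

V3 mulMV(const M3& R, const V3& x) {
    V3 y{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) y.v[i] += R.m[i][j] * x.v[j];
    return y;
}

M3 transpose(const M3& R) {
    M3 T{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) T.m[i][j] = R.m[j][i];
    return T;
}

// "Position" as written by the save path: R^-1 * (-(t - s/2) - e/2),
// with t the world-to-center translation, s the spacing, e the extent.
V3 savePosition(const M3& R, const V3& t, const V3& s, const V3& e) {
    V3 arg{};
    for (int i = 0; i < 3; ++i)
        arg.v[i] = -(t.v[i] - 0.5 * s.v[i]) - 0.5 * e.v[i];
    return mulMV(transpose(R), arg);  // rotation: inverse == transpose
}

// Translation reconstructed by the load path: -(R * p + e/2) + s/2.
V3 loadTranslation(const M3& R, const V3& p, const V3& s, const V3& e) {
    V3 Rp = mulMV(R, p);
    V3 t{};
    for (int i = 0; i < 3; ++i)
        t.v[i] = -(Rp.v[i] + 0.5 * e.v[i]) + 0.5 * s.v[i];
    return t;
}
```

The half-pixel terms cancel in the round trip, which is why the conversion is lossless even though neither matrix matches the raw header values directly.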