Problem with Hybrid Ultrasound Simulation

Hello,
I’m running some tests with Hybrid Ultrasound Simulation to generate synthetic US frames from my label map and ultrasound sweeps.

As inputs, I’m using:

  • a .csv imported as a Tracking Sequence
  • a tissue label volume, as indicated in your documentation.

However, when I select both datasets, I’m not able to run or even see the algorithm as expected.
This is the documentation I read:

Am I supplying the wrong inputs or formats?
Also, what exactly do you mean by “Ultrasound Sweep” in this context? Do you mean a .csv file with info about x,y,z,qx,qy,qz,qw?
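
For reference, each row of my .csv currently looks roughly like this (the values below are purely illustrative):

x,y,z,qx,qy,qz,qw
245.03,-181.16,-182.63,0.0,0.0,0.0,1.0
246.10,-179.80,-182.63,0.0,0.0,0.0,1.0
247.21,-178.42,-182.63,0.0,0.0,0.0,1.0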

Thank you and have a great day!

Hi Chiararipiemo,

In order to use the Hybrid Ultrasound Simulation you need a label volume and, optionally, a single Ultrasound Sweep. With the sweep as input, the resulting simulated data can have the same geometry and dimensional properties. With only a label volume and a list of positions, the algorithm cannot know what output you are expecting, so it does not appear in the Suite.

I understand that you already have the sweep from which you extracted this .csv file with the tracking data, so you should be able to use it directly. If this is not the case, and you need help generating a synthetic sweep, or you want to change some physical properties (resolution, number of lines, geometry, etc.), let me know and I can assist you with that.

Best,
Alejandro

Hi Alejandro,
what I would like to do is generate synthetic ultrasound images from:

  • organ segmentations (using labels consistent with the Hybrid Ultrasound Simulation module)
  • a tracking sequence obtained from a simulated setup in MoveIt!.

Let me explain my workflow more precisely.

In MoveIt!, I simulate a scene with a robot, an ultrasound probe and a point cloud of the patient’s torso. I perform a linear scanning motion along the patient’s back and record the probe poses and relevant information into a .csv file. Then I import this file into ImFusion as a tracking sequence.

At the beginning, I assumed I could directly use this setup with the Hybrid Ultrasound Simulation, but it seems that the “Ultrasound Sweep” referenced in your documentation is not equivalent to a generic tracking sequence.

My goal is not to manually create an artificial sweep, but to use exactly the tracking sequence generated by my MoveIt! simulation.
To satisfy the input requirements of the Hybrid Ultrasound Simulation, would it be a viable approach to derive, via Python code, the two required splines (transducer center spline and direction spline) from my tracking sequence path and then simulate the US frames based on those?

Do you have any recommendations or best practices on how to correctly integrate such an externally generated tracking sequence into your Hybrid Ultrasound Simulation pipeline?

Thank you

Hi Chiararipiemo,

Thanks for the clarification. If you must use the positions that you obtain from MoveIt!, you will need to create an ultrasound sweep from those positions using Python.

The Suite has no conversion from a Tracking Sequence to an Ultrasound Sweep, because a tracking sequence is only a set of 3D positions, whereas a sweep additionally carries image and sensor information.

I am preparing and testing a small script for you to convert any tracking present in the Suite into a sweep, but you will have to fill in the missing parameters: frame geometry, expected image dimensions, and the orientations for the two directions that best fit your case.

In the meantime, I would recommend creating synthetic sweeps using the Suite, so you can play with the frame geometry settings and the simulation right away.

You can generate a synthetic sweep by defining two splines (one for the direction and one for the center of the sweep): select no data > Algorithms > “Synthetic Ultrasound Sweep”. Alternatively, you can run the Hybrid simulation with only the label data selected, and the Suite will automatically launch the algorithm for you.

Best,
Alejandro

Good afternoon,
thank you for the clarification.

I have already tested the generation of ultrasound frames by manually defining the sweeps. What I am now trying to achieve is a way to avoid manually defining the two splines and to work directly with my tracking sequence instead, if possible.

I am looking forward to the code you mentioned; it would be extremely helpful for my project.

In the meantime, I am trying to write a Python script to convert my tracking sequence into two splines (direction and center of the sweep). However, I have not yet succeeded in using these splines as inputs to the Hybrid Ultrasound Simulation. In theory, these splines should correspond to the ones defined manually via “Add Center Spline” and “Add Direction Spline”, but I am struggling to integrate them correctly into the algorithm.
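
For context, the core of my current attempt looks roughly like this (just a sketch; “poses.csv” and its column order come from my MoveIt! export, and the [0, -60, 0] offset is my guess for the probe axis):

import numpy as np

# Load the exported probe poses (columns: x,y,z,qx,qy,qz,qw)
poses = np.loadtxt("poses.csv", delimiter=",", skiprows=1)

# Transducer center spline points: the recorded probe positions
centers = poses[:, :3]

# Direction spline points: the same points shifted along my assumed probe axis
directions = centers + np.array([0.0, -60.0, 0.0])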


The two red lines visible in the screenshot are the splines I derived from my tracking sequence. I imported them as annotations in order to reproduce what happens when I manually define the two splines for the sweep.

However, when I run “Simulate Sweep”, I get the following log messages:

[SyntheticUltrasoundSweepAlgorithm] Transducer center spline is not defined yet.

[UltrasoundSimulationHybrid] Splines are not defined correctly.

This suggests that my approach or assumptions are incorrect and that the splines I am providing are not recognized as valid inputs for the Hybrid Ultrasound Simulation.
Thank you again for your support.

Hi Chiararipiemo,

The Synthetic Ultrasound Sweep algorithm looks for annotations attached to the input data, and it seems the splines you loaded from the workspace are not attached to any data in particular.

See:

These two ellipses are saved almost identically in the workspace; the only difference is the parentDataUid tag:

<property name="Annotations">
	<property name="GlEllipse">
		<param name="editable">1</param>
		...
		<param name="init">1</param>
		<param name="poseLinked">0</param>
	</property>
	<property name="GlEllipse">
		<param name="editable">1</param>
		<param name="color">1 1 0 1 </param>
		....
		<param name="parentDataUid">data0</param>
		<param name="poseLinked">0</param>
		<param name="frame">0</param>
	</property>
</property>

You can add this tag to your workspace file manually, and the Synthetic algorithm should then pick up your splines correctly. But keep in mind that the synthetic algorithm does not use the provided spline points as sampling points for the simulation. Instead, it samples n frames along the spline generated from these points, so the resulting frames will likely not match the poses your MoveIt! application exported.
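
If you prefer not to edit the XML by hand, a small script can patch the workspace for you. This is only a sketch: it assumes your annotations are stored as properties like the ones above, and “workspace.iws” and the UID “data0” are placeholders for your own file name and the UID of your label volume:

import xml.etree.ElementTree as ET

tree = ET.parse("workspace.iws")
for prop in tree.iter("property"):
	# Adjust the annotation type name to whatever your splines are stored as
	if prop.get("name") == "GlSpline":
		has_uid = any(p.get("name") == "parentDataUid" for p in prop.findall("param"))
		if not has_uid:
			param = ET.SubElement(prop, "param", name="parentDataUid")
			param.text = "data0"
tree.write("workspace_patched.iws")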

If this change works for you, we can simplify the approach. It will be easier and more robust to generate the sweep following your approach and then modify any component of it that is not exposed in the controller (such as the long radius of the convex geometry) via the Python console or the properties inspector.
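
For example, something along these lines in the Python console (a sketch; it assumes a convex sweep is already in the data model, and 120.0 is just a placeholder value in mm):

import imfusion as imf
import imfusion.ultrasound as us

# Grab the first ultrasound sweep from the data model
sweep = [el for el in imf.app.data_model if type(el) == us.UltrasoundSweep][0]

# Access its frame geometry metadata component and change the long radius
fgm = [c for c in sweep.components if type(c) == us.FrameGeometryMetadata][0]
fg = fgm.frame_geometry
fg.long_radius = 120.0  # placeholder value in mm
fgm.frame_geometry = fg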

Best,
Alejandro

Hi,
yes, I’m experiencing the same issue even when my annotations are correctly attached to the input data. It’s not visible in the screenshot I sent previously, but the behavior is identical. Following your confirmation, I’ll keep trying to understand and resolve this problem.


Just to double-check my understanding: do you also agree that the following approach could be valid?

  1. Derive the two sweeps (transducer center spline and direction spline) from my external tracking sequence
  2. Attach them correctly to the corresponding labeled volume used for the Hybrid Ultrasound Simulation
  3. Set the simulation parameters (frame geometry, depth, probe width, …) and use these splines as inputs in order to generate the synthetic US images along that trajectory.

Can you confirm whether this workflow is in line with how the Hybrid Ultrasound Simulation is designed to work, or if there is a conceptual limitation that would prevent this from functioning as intended?

Thank you again,
Chiara

Hi,

Yes, I can confirm that your workflow is good, with some caveats:

  • You will not generate data at exactly the points you exported. These points define a spline, and the simulation samples along that spline. You can use this to your advantage to generate a different number of frames without re-running your previous workflow.
  • You have less control over the sweep: some frame geometry configurations are missing from the controller inputs (sector & circular), as are some parameters (the long radius for the convex geometry).

If neither of these is a problem, I think your workflow is correct. I do not know why the Synthetic Ultrasound Sweep is not working for you; if you share your workspace file, I can investigate (no need to send the labeled data, I can create a mockup for that).

Best,
Alejandro

Good evening,
yes, it’s strange for me as well. Here is what happens when I try to use the Synthetic Ultrasound Sweep.


In any case, I am attaching the .iws file so you can take a look: https://drive.google.com/drive/folders/1KpkOYTyn4_8tER7b5YiCITb7ZtlgCBYd?usp=drive_link.
I wasn’t able to attach the .iws file directly, so I created a shared Drive folder that you can access. I hope this is helpful.
P.S. I’m aware that the sweep shape is not ideal at the moment. My goal for now is just to test that the algorithm works correctly; once this is confirmed, I will go back and optimize the trajectory.
Thanks again for your help!
Best,
Chiara

Little update:
this link points to the workspace with a better generated spline: https://drive.google.com/file/d/1dNl0uRaF4Ilt2mndRIdVkAQd9SDTuAAv/view?usp=drive_link
Have a nice evening,
Chiara

Hi Chiara,

You are trying to use instances of GlPolyLine and the algorithm expects GlSpline.

You can adjust your workspace file by changing the property to a GlSpline and adding the following tags inside the property:

			<param name="labelPixelOffset">7.82771971445489 -74.0752003653169 </param>
			<param name="isClosed">0</param>
			<param name="renderMode2d">0</param>
			<param name="renderMode3d">0</param>
			<param name="tubeThickness">1</param>
			<param name="tubeEndT">1</param>
			<param name="xrayTubeInnerRadius">0.7</param>

For example:

<property name="GlPolyLine">
	<param name="editable">1</param>
	<param name="color">1 1 0 1 </param>
	<param name="lineWidth">1</param>
	<param name="labelVisible">1</param>
	<param name="labelBackgroundVisible">0</param>
	<param name="labelBackgroundColor">0.3 0.3 0.3 0.7 </param>
	<param name="labelBackgroundMargin">3</param>
	<param name="labelDepthTest">1</param>
	<param name="labelColor">0 1 1 </param>
	<param name="labelText">909.1</param>
	<param name="name">Polyline</param>
	<param name="points">245.030746459961 -181.163219928741 -182.627990722656 
210.030746459961 -42.163219928741 -182.627990722656 
377.030746459961 -118.163219928741 -182.627990722656 
164.030746459961 -154.163219928741 -182.627990722656 
324.030746459961 -34.163219928741 -182.627990722656 
245.030746459961 -178.163219928741 -182.627990722656 
245.030746459961 -180.163219928741 -182.627990722656 
</param>
	<param name="poseLinked">0</param>
</property>

to

<property name="GlSpline">
	<param name="editable">1</param>
	<param name="color">1 1 0 1 </param>
	<param name="lineWidth">1</param>
	<param name="labelVisible">1</param>
	<param name="labelBackgroundVisible">0</param>
	<param name="labelBackgroundColor">0.3 0.3 0.3 0.7 </param>
	<param name="labelBackgroundMargin">3</param>
	<param name="labelDepthTest">1</param>
	<param name="labelColor">0 1 1 </param>
	<param name="labelText">909.1</param>
	<param name="name">Polyline</param>
	<param name="points">245.030746459961 -181.163219928741 -182.627990722656 
210.030746459961 -42.163219928741 -182.627990722656 
377.030746459961 -118.163219928741 -182.627990722656 
164.030746459961 -154.163219928741 -182.627990722656 
324.030746459961 -34.163219928741 -182.627990722656 
245.030746459961 -178.163219928741 -182.627990722656 
245.030746459961 -180.163219928741 -182.627990722656 
</param>

<param name="labelPixelOffset">7.82771971445489 -74.0752003653169 </param>
<param name="isClosed">0</param>
<param name="renderMode2d">0</param>
<param name="renderMode3d">0</param>
<param name="tubeThickness">1</param>
<param name="tubeEndT">1</param>
<param name="xrayTubeInnerRadius">0.7</param>

<param name="poseLinked">0</param>
</property>

This allows synthesizing a sweep, but the algorithm still fails because some of your points are too close to each other, and it cannot detect the tangent direction there. You can try generating new splines with points spaced further apart.
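
For instance, you could thin out the points before building the splines with something like this (a sketch; min_dist is a hypothetical threshold in millimeters that you should tune for your data):

import numpy as np

def thin_points(points, min_dist=2.0):
	# Keep a point only if it is at least min_dist away from the last kept one,
	# so the tangent estimation has enough separation between samples
	points = np.asarray(points, dtype=float)
	kept = [points[0]]
	for p in points[1:]:
		if np.linalg.norm(p - kept[-1]) >= min_dist:
			kept.append(p)
	return np.array(kept)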

Have a nice weekend,
Alejandro

Good evening Alejandro,
I’ve understood the issue now!
I’ll work on fixing all the points you mentioned and will let you know how it goes.

In the meantime, have a great weekend!

Hi Chiara,

Can you test this?

import imfusion as imf
import imfusion.ultrasound as us
import numpy as np

def sliceSizeFromGeometry(frameGeometry):
	# Extent (in mm) of the image plane for the given frame geometry
	if frameGeometry.is_linear:
		extent = [frameGeometry.width / 2.0, frameGeometry.depth / 2.0]
	elif frameGeometry.is_convex:
		sx = 2 * frameGeometry.long_radius * np.sin(np.deg2rad(frameGeometry.opening_angle))
		sy = frameGeometry.long_radius - frameGeometry.short_radius * np.cos(np.deg2rad(frameGeometry.opening_angle))
		extent = [sx.item(), sy.item()]
	else:
		raise ValueError("Unsupported frame geometry")

	return extent

def createConvexGeometry(probe_width, long_radius, depth, opening_angle):
	# Set up the desired frame geometry for the ultrasound sweep:
	fg = us.FrameGeometryConvex(us.CoordinateSystem.IMAGE)
	# The sweep requires a valid image descriptor for the frame geometry
	fg.img_desc = imf.ImageDescriptor(imf.PixelType.UBYTE,128,128,1,1)

	# Set up the frame geometry dimensions:
	fg.opening_angle = opening_angle
	fg.short_radius = probe_width  / (2 * np.sin(np.deg2rad(opening_angle))).item()
	fg.long_radius = long_radius

	fg.depth = depth
	fg.top_down = True
	return fg


def createLinearGeometry(probe_width, depth, opening_angle):
	# Set up the desired frame geometry for the ultrasound sweep:
	fg = us.FrameGeometryLinear(us.CoordinateSystem.IMAGE)
	# The sweep requires a valid image descriptor for the frame geometry
	fg.img_desc = imf.ImageDescriptor(imf.PixelType.UBYTE,128,128,1,1)

	# Set up the frame geometry dimensions:
	fg.width = probe_width
	fg.depth = depth
	fg.top_down = True
	return fg


def setCenter(fg, extent):
	# Probe center is horizontally centered
	if fg.is_convex:
		sign = 1 if fg.top_down else 0
		y = -extent[0] + sign * fg.short_radius * (1 - np.cos(np.deg2rad(fg.opening_angle))).item()
		fg.offset = np.array([0,y])
	if fg.is_linear:
		fg.offset = np.array([0,-extent[1]])

def createEmptyImage(extent):

	img_desc = imf.ImageDescriptor(imf.PixelType.UBYTE,128,128,1,1)
	img_desc.spacing = np.array([extent[0] * 2 / 128, extent[1] * 2 / 128,1])
	img_desc.is_metric = True
	im = imf.SharedImage(imf.MemImage(img_desc))
	return im



# Retrieve the tracking from the suite data model
trackings = [el for el in imf.app.data_model if type(el) == imf.TrackingSequence]
if len(trackings) == 0:
	# Abort early instead of failing later with an IndexError
	raise RuntimeError("Error, no trackings found in the data model")
tracking = trackings[0]


# Generate a sweep 
sweep = us.UltrasoundSweep()
sweep.name = "Synthetic Sweep from csv poses"



# Add an empty image to the sweep for every position in the tracking
# Create a copy 
tracking_copy = imf.TrackingSequence()

opening_angle = 30
probe_width = 0      # transducer footprint at the skin; 0 yields a short radius of 0
long_radius = 100
depth = 50
top_down = True

fg = createConvexGeometry(probe_width, long_radius, depth, opening_angle)
extent = sliceSizeFromGeometry(fg)
setCenter(fg,extent)


# Duration of the sweep, in seconds
sweep_time = 3
nframes = tracking.size
timestep = sweep_time / (nframes - 1)
perpendicularSlices = True

# Skip the first and last pose: the central-difference tangent below needs both neighbors
for i in range(1,nframes-1):
	m = tracking.raw_matrix(i)
	t = i * timestep
	# Vector of sensor pointing
	d = np.array([0,-60,0])
	depth_axis = d / np.linalg.norm(d)

	# Very simple tangent computation without splines
	sweep_tangent = tracking.raw_matrix(i+1)[:3,3] - tracking.raw_matrix(i-1)[:3,3]
	sweep_tangent /= np.linalg.norm(sweep_tangent)

	normalSlice = np.cross(sweep_tangent, depth_axis)
	if(perpendicularSlices):
		normalSlice = np.cross(depth_axis,normalSlice)

	outMat = np.eye(4)
	outMat[:3,0] = np.cross(depth_axis,normalSlice)
	outMat[:3,1] = depth_axis
	outMat[:3,2] = normalSlice
	outMat[:3,3] = m[:3,3] + (depth_axis * extent[1])

	print(outMat)

	tracking_copy.add(outMat,t,1.0)
	sweep.add(createEmptyImage(extent))
	sweep.set_timestamp(t,i)

# Add the tracking
sweep.add_tracking(tracking_copy)
sweep.properties.set_param("topDown", True)

# Change this sweep frame geometry through its metadata component
fgm = [el for el in sweep.components if type(el) == us.FrameGeometryMetadata][0]
fgm.frame_geometry = fg

# Add to the data model
imf.app.data_model.add(sweep)

Run it in the Suite Python console and it will generate a sweep with garbage image data at the positions from the tracking sequence. You can use this sweep to run the simulation. I put a [0,-60,0] difference between the transducer and the direction, taken from the workspace file you shared. If you want a custom scanning direction, or want to use the orientations in the tracking sequence, you will have to redo the maths for outMat at the end. For other geometries you will have to fill in the if cases; I think they are straightforward, but if you need help do not hesitate to ask.
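
For example, switching to the linear geometry helper already defined in the script could look like this (the width and depth are placeholder values in millimeters):

# Use the linear helper instead of the convex one; the last argument
# (opening_angle) is not used by the linear geometry
probe_width = 40.0
depth = 50.0
fg = createLinearGeometry(probe_width, depth, 0)
extent = sliceSizeFromGeometry(fg)
setCenter(fg, extent)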

Have a great weekend!
Alejandro

Hi Alejandro,
I tested your code with a few modifications and everything worked well. Now I have the segmentations and the Ultrasound Sweep. Your work has been really helpful.
However, when I try to run the Hybrid Simulation I get the following problem:

Couldn’t create OpenCL image (-59)
Ultrasound Simulation: Sweep simulation failed! Couldn’t create OpenCL image
Couldn’t create OpenCL image (-59)

I get the same issue when I generate the two GlSplines from my own tracking sequence and try to run the Hybrid Simulation, so it seems more likely to be related to my machine/configuration than to incorrect inputs. I am currently running everything on CPU only (no dedicated GPU). Can you confirm that this could be the reason for my problem?
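
To double-check, I listed the OpenCL devices visible on my machine with a small pyopencl snippet (assuming pyopencl is installed; this is independent of ImFusion):

import pyopencl as cl

# Print every OpenCL platform/device pair to see whether a GPU shows up
for platform in cl.get_platforms():
	for device in platform.get_devices():
		print(platform.name, "|", device.name, "|", cl.device_type.to_string(device.type))
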
Here is the link to the workspace: https://drive.google.com/drive/folders/109zRZUufa2gH9leZc6aeId0cZevh4PbF?usp=drive_link

Best,
Chiara