The Best Ways to Combine Custom Diagnostic Imaging Hardware with ImFusion Suite

Hello everyone :hugs:,

I’m currently working on a project to integrate a custom medical imaging hardware setup with ImFusion Suite. Our goal is to enhance the images we acquire from our hardware using ImFusion Suite’s powerful image processing capabilities.

Because both the hardware and software components are quite complex, I’m looking for advice on how to make the integration as smooth and efficient as possible.

In particular, I’m looking for guidance on the following key points:

Use of SDK and API: What are the most important things to keep in mind when using the ImFusion SDK and API to communicate with custom hardware? :thinking: Are there any specific modules or features I should be aware of, or that are particularly helpful? :thinking:

Data Formats & Compatibility: Our hardware outputs imaging data in a proprietary format. What steps are recommended to ensure compatibility with ImFusion Suite? :thinking: Are there any particular tools or methods for data conversion that the community has found successful? :thinking:

Real-Time Processing: One of our goals is to process images in real time. Which techniques work best for optimising real-time performance in ImFusion Suite? :thinking: Are there any particular configurations or settings that can be used to lower latency and increase processing speed? :thinking:

Troubleshooting & Debugging: What are typical obstacles during the integration process, and which approaches or resources are recommended for efficient troubleshooting and debugging? :thinking:

Community Resources: For someone new to integrating ImFusion Suite with custom hardware, are there any tutorials, documentation, or community-shared resources you would recommend? :thinking:

I also checked this :point_right: https://resources.nvidia.com/en-us-medical-imaging/imfusion-is-taking-minitab but it didn’t clarify these points for me.

Thank you :pray: in advance for your support and assistance.

Hello,
the short answer is “it depends”, mainly on what exactly you are trying to achieve.

The example plugin from our public demos repo should provide a good starting point. Adding a custom plugin allows you to extend the functionality of the Suite for your specific use case. There are also some other examples in this repo you might find interesting.
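Schematically, such a plugin is a small shared library that registers your custom algorithms with the framework and exposes a factory function the Suite loads at startup. The sketch below is deliberately simplified and the base classes and entry point shown here are placeholders, so please take the exact interface from the ExamplePlugin in the demos repo:

```cpp
// MyHardwarePlugin.h -- simplified sketch, not the verbatim SDK interface.
// The base classes, header paths and the exported entry point are placeholders;
// copy the real structure from the ExamplePlugin in the public demos repo.
#include <ImFusion/Base/ImFusionPlugin.h>      // assumed header path
#include <ImFusion/Base/AlgorithmFactory.h>    // assumed header path

namespace MyCompany
{
    // Factory that tells the framework which algorithms this plugin provides.
    class MyAlgorithmFactory : public ImFusion::AlgorithmFactory
    {
    public:
        MyAlgorithmFactory();   // register e.g. "MyCompany;Hardware Stream" here
    };

    // Plugin entry object returned to the application when the library is loaded.
    class MyHardwarePlugin : public ImFusion::ImFusionPlugin
    {
    public:
        const ImFusion::AlgorithmFactory* getAlgorithmFactory();
    private:
        MyAlgorithmFactory m_factory;
    };
}

// Exported C entry point the Suite looks for when scanning its plugin folder.
extern "C" ImFusion::ImFusionPlugin* createPlugin()
{
    return new MyCompany::MyHardwarePlugin();
}
```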

If your data source provides a continuous stream of 2D frames, like an ultrasound device or a video camera, I would also recommend checking out the ImageStream class in the Stream module of our SDK. You will most likely want to implement a class derived from ImageStream that communicates with your hardware API and feeds the image data into our framework in real time. Use a custom Algorithm class to create your stream and return it to the application.
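To make that structure more concrete, here is a rough sketch; apart from ImageStream and Algorithm the names below are placeholders, and the exact virtual interface to override should be taken from the Stream module headers and the demos repo:

```cpp
// Sketch only: apart from ImageStream and Algorithm, all names below are
// placeholders -- check the Stream module headers for the actual interface.
#include <ImFusion/Base/Algorithm.h>       // assumed header path
#include <ImFusion/Stream/ImageStream.h>   // assumed header path

struct MyDeviceHandle;   // placeholder for your vendor's device handle type

// Stream that talks to the hardware API and feeds frames into the framework.
class MyDeviceStream : public ImFusion::ImageStream
{
public:
    explicit MyDeviceStream(MyDeviceHandle* device);

    // Typical responsibilities of this class:
    //  - connect to the device and start acquisition when the stream is started
    //  - run a worker thread that converts incoming frames and emits them as
    //    framework images so downstream algorithms and views update
    //  - stop acquisition and release the device when the stream is closed

private:
    MyDeviceHandle* m_device = nullptr;
};

// Algorithm whose only job is to create the stream and return it to the
// application, so that it appears in the Suite like any other data set.
class MyDeviceStreamAlgorithm : public ImFusion::Algorithm
{
public:
    void compute() override;   // create MyDeviceStream and set it as output here
};
```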

Such a stream class usually involves setting up a dedicated processing thread and either feeding it via a callback from your hardware API or having it continuously poll the hardware for new data. The former is generally more efficient, but depends on support from your hardware API. If possible, avoid making copies of the data and just pass on pointers to existing buffers to reduce delays. Control flow should pass back from the callback to the hardware API as quickly as possible.
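The callback variant then boils down to: take the pointer, hand it over to the processing thread, return. A minimal sketch in plain C++ (the hardware frame type and callback signature are placeholders for whatever your vendor API provides):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Placeholder for whatever your hardware API delivers in its callback.
struct HardwareFrame
{
    const void* pixels;   // pointer into the driver's buffer, not a copy;
                          // the buffer must stay valid (e.g. ring buffer)
                          // until the frame has been consumed
    int width, height;
    long long timestampUs;
};

// Small thread-safe handoff queue between the hardware callback (producer)
// and the stream's processing thread (consumer).
class FrameQueue
{
public:
    void push(HardwareFrame frame)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_frames.push_back(frame);   // only a pointer + metadata, no pixel copy
        }
        m_condition.notify_one();
    }

    HardwareFrame waitAndPop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [this] { return !m_frames.empty(); });
        HardwareFrame frame = m_frames.front();
        m_frames.pop_front();
        return frame;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::deque<HardwareFrame> m_frames;
};

// The hardware callback should do nothing more than enqueue and return,
// so that control flow goes back to the driver as quickly as possible.
void onNewFrame(const void* pixels, int width, int height, long long timestampUs, void* userData)
{
    auto* queue = static_cast<FrameQueue*>(userData);
    queue->push(HardwareFrame{pixels, width, height, timestampUs});
}
```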

You can then do your processing on this background thread without stalling either the hardware or the UI, and have the stream emit the processed frames for rendering. Note that you can use GPU computations on background threads as long as you create an OpenGL context on that thread. A lot of our more complex processing operations make use of OpenGL and will crash without this context.
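The consumer side is then a loop on that dedicated thread: make an OpenGL context current once at thread start, then pop frames, process and emit. The sketch below reuses the FrameQueue from above; the GL-context and processing calls are placeholders, since those details depend on your setup and SDK version:

```cpp
#include <atomic>
#include <thread>

// Placeholders -- replace with your actual GL setup and ImFusion-based processing.
void createSharedGlContextForThisThread();
void processAndEmit(const HardwareFrame& frame);

// Sketch of the processing loop running on the stream's background thread.
void processingLoop(FrameQueue& queue, std::atomic<bool>& running)
{
    // Create and make current an OpenGL context once for this thread;
    // GPU-based operations will crash without it (see above).
    createSharedGlContextForThisThread();

    while (running.load())
    {
        // (a real implementation also needs a way to wake this wait on shutdown)
        HardwareFrame frame = queue.waitAndPop();

        // Convert, filter, etc. on this thread, then emit the processed
        // frame from the stream so the renderer picks it up.
        processAndEmit(frame);
    }
}

// Started e.g. when the stream starts:
// std::thread worker(processingLoop, std::ref(queue), std::ref(running));
```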

For the data format, our image containers need a contiguous array of pixel values, ideally with pixel type uint8, uint16 or float and up to 4 color channels. For color images, an RGB channel layout is assumed. If the conversion from your proprietary format involves more work than just exposing an existing buffer with such a layout, I would recommend also doing the conversion on the background processing thread.
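Purely as an illustration, re-packing a hypothetical planar proprietary layout (all R values, then all G, then all B) into the contiguous interleaved uint8 layout described above could look like this:

```cpp
#include <cstdint>
#include <vector>

// Example only: assume the proprietary format stores the three color planes
// one after another. The framework expects one contiguous, interleaved array
// (R,G,B,R,G,B,...), so we re-interleave into a uint8 buffer here.
std::vector<uint8_t> planarToInterleavedRgb(const uint8_t* planar, int width, int height)
{
    const size_t pixelCount = static_cast<size_t>(width) * height;
    std::vector<uint8_t> interleaved(pixelCount * 3);

    const uint8_t* r = planar;                    // first plane
    const uint8_t* g = planar + pixelCount;       // second plane
    const uint8_t* b = planar + 2 * pixelCount;   // third plane

    for (size_t i = 0; i < pixelCount; ++i)
    {
        interleaved[3 * i + 0] = r[i];
        interleaved[3 * i + 1] = g[i];
        interleaved[3 * i + 2] = b[i];
    }
    return interleaved;
}
```

If your format already matches one of the supported pixel types and is contiguous, you can skip the copy entirely and expose the existing buffer directly.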

Typical pitfalls you will encounter with this kind of setup mainly come from multi-threading, in particular race conditions, mutex deadlocks and missing OpenGL synchronization when passing GPU data between threads. The latter should work out of the box if the only point where GPU data changes threads is when the processing result is emitted out of the stream and into our rendering framework. If your setup does end up with additional background threads, however, use SyncObject.h from our SDK for the exchanges there.
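Of these pitfalls, mutex deadlocks are the easiest to rule out up front by always acquiring multiple locks through one mechanism that fixes their order; in plain C++ that can be as simple as:

```cpp
#include <mutex>

std::mutex acquisitionMutex;   // guards the incoming frame queue
std::mutex resultMutex;        // guards the latest processed result

void swapBuffers()
{
    // std::scoped_lock acquires both mutexes with a deadlock-avoidance
    // algorithm, so two threads locking them in different orders cannot
    // end up waiting on each other forever.
    std::scoped_lock lock(acquisitionMutex, resultMutex);

    // ... exchange/swap the shared buffers here ...
}
```

For GPU data that does cross additional thread boundaries, go through SyncObject.h as mentioned above rather than relying on CPU-side locks alone.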