I have created a virtual environment and copied the imfusion folder from /usr/lib/python3/dist-packages/imfusion to the site-packages folder of my virtual environment. I have also installed PyTorch 1.8 and can successfully import it and, for example, check its version when I launch the ImFusion Suite from the virtual environment.
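For reference, this is roughly the check I run from the Python console once the Suite is started (the exact output is just what my setup reports):

    # minimal verification that the torch I installed in the venv is visible
    import torch
    print(torch.__version__)  # prints a 1.8.x version string in my case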
However, when I select 2 US sweeps and want to run the plugin by clicking “compute”, the following error appears:
[ML.ModelConfiguration] Loading config file cones.yaml
[ML.Factory] torch is not a registered type.
Could not create engine ‘torch’.
It seems that the TorchPlugin was not loaded at start-up.
If it is part of your installer, then there was some issue during its loading; my hypothesis is that its dependencies conflict with the ones you have in your virtual environment.
(Note that you do not need to install torch in Python to run our Torch models in the ImFusion Suite, since we already integrate and ship our own version)
Can you try to start the ImFusion Suite outside of your Python environment and see if the Torch plugin gets loaded? It should be printed in the log window (click on View > Show Log Window if you don’t see it).
Thank you for your answer! I started the ImFusion Suite outside the environment, but the plugin does not get loaded. These are the logs:
[Python] No python version specified, please configure one in the settings.
[AlgorithmFactory] Cannot register algorithm CT;Reconstruction. Algorithm with same name has already been registered.
[Base.Framework] libcuda.so.1: cannot open shared object file: No such file or directory
[Base.Framework] Available Plugins: ImFusionStream, ImFusionML, ImFusionPython, ImFusionLiveUS, ImFusionDicom, ImFusionAS, ImFusionUS, ImFusionClarius, ImFusionVision, ImFusionImageMath, ImFusionAtracsys, ImFusionSeg, ImFusionRGB-D, ImFusionCT, ImFusionNDITracking, ImFusionReg.
OpenGL: 4.6 (Core Profile) Mesa 21.2.6
Vendor: Intel
GPU: Mesa Intel(R) UHD Graphics 620 (WHL GT2)
[GL] Neither GL_NVX_gpu_memory_info nor GL_ATI_meminfo extensions available, can not query total GPU memory information.
Memory: 0 bytes of 0 bytes available
[GL] Neither GL_NVX_gpu_memory_info nor GL_ATI_meminfo extensions available, can not query total GPU memory information.
Could it be that the shipped Torch version requires a CUDA-capable GPU, which would prevent the TorchPlugin from loading at start-up since my machine does not have one?
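The log above shows that libcuda.so.1 cannot be opened; a quick diagnostic I could run to confirm that the CUDA driver library is simply not present on this machine (just my own check, nothing official):

    # try to load the library the Base.Framework message complains about
    import ctypes
    try:
        ctypes.CDLL("libcuda.so.1")
        print("libcuda.so.1 loaded")
    except OSError as err:
        print("libcuda.so.1 not available:", err)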
Alternatively, if I set the Python path in the settings, e.g. to:
/usr/lib/x86_64-linux-gnu/libpython3.8.so
I get the Python console, but torch cannot be imported.
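To narrow this down, the only thing I can think of checking from that console is which interpreter and search paths it actually uses (my guess is that my virtual environment’s site-packages is not among them):

    # inspect what the embedded interpreter sees
    import sys
    print(sys.executable)   # which Python the Suite is pointed at
    print(sys.version)
    for p in sys.path:      # torch would have to be somewhere on these paths
        print(p)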