View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0004174||Slicer4||Module ModelMaker||public||2016-04-20 13:33||2017-07-25 01:06|
|Product Version||Slicer 4.5.0-1|
|Target Version||backlog|
|Fixed in Version||
|Summary||0004174: Model maker crashes immediately after generating model (from large size volume labels)|
I am trying to produce models from segmented regions of a CT set (quite large, 30 GB uncompressed NRRD). Model Maker does appear to complete without errors (see log); however, Slicer crashes immediately, before I can work with or export the part. When I look in /private/var/folder/dp/.../slicer I can see that a 10 GB .vtk file was indeed created, although trying to load this file in Slicer again promptly crashes it. The file doesn't seem to be corrupt, as ParaView opens it without trouble. I am wondering if perhaps Slicer immediately tries to render the file on the GPU and runs out of memory?
|Steps To Reproduce|
1) Produce a segmentation of a part
1alt) Alternatively, import the .vtk file generated above
The attached log was from the run that generated the geometry. Please let me know what other diagnostics I can provide.
|Tags||No tags attached.|
Slicer_25012_20160419_160124.log (9,132 bytes)
I'm having problems opening the log file, will try again once our network connectivity issues settle down.
How much RAM does your Mac have? This sounds more like a general Slicer problem (loading large data sets) than a Model Maker one: the model has been generated; it is the subsequent loading that fails.
Hi Nicole. It might be a large-dataset issue, but so far I have yet to kill it that way, strictly speaking (at least with volumes and label maps). The Mac Pro has 12 cores, 128 GB of RAM, and dual AMD FirePro D700 GPUs (6144 MB each). With dynamic addition of swap as needed it doesn't seem to run out of memory (and in this case I believe I hadn't even exhausted physical memory yet). The reason I was wondering about GPU rendering is that when I use it for volume rendering I quite often run out of graphics memory and crash, whereas CPU rendering, which can take advantage of the much larger RAM on the computer, works just fine.
In case it is more of a corruption issue, here is the text from that log file as well, if that helps:
[DEBUG][Qt] 19.04.2016 16:01:24  (unknown:0) - Session start time .......: 2016-04-19 16:01:24
/Applications/slicer_4-18-2016/Slicer.app/Contents/lib/Slicer-4.5/cli-modules/ModelMaker --color /var/folders/dp/pcdkgh614sjfkqjcgscz43zw000s01/T/Slicer/ICJA_vtkMRMLColorTableNodeFileGenericColors.txt.ctbl --modelSceneFile /var/folders/dp/pcdkgh614sjfkqjcgscz43zw000s01/T/Slicer/ICJA_AxBGIBIfFGA.mrml#vtkMRMLModelHierarchyNode1 --name whatevs --generateAll --start -1 --end -1 --skipUnNamed --smooth 5 --filtertype Laplacian --decimate 0.25 --splitnormals --pointnormals --pad --debug /var/folders/dp/pcdkgh614sjfkqjcgscz43zw000s01/T/Slicer/ICJA_vtkMRMLLabelMapVolumeNodeB.nrrd
The input volume is: /var/folders/dp/pcdkgh614sjfkqjcgscz43zw000s01/T/Slicer/ICJA_vtkMRMLLabelMapVolumeNodeB.nrrd
GenerateAll: there are 0 models to be generated.
It would be nice to investigate and fix this, because Slicer should not crash on any input, but data sets this large are rare and it is difficult to optimize the software to handle this kind of data properly.
A workaround is to crop/resample the input data to make its size more manageable.
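The memory saving from resampling can be sketched in plain NumPy. This is only an illustration, not what Slicer's Crop Volume module actually does internally: striding a label map by 2 along each axis keeps label values exact (no interpolation, which would blend label IDs) and cuts voxel count, and hence memory, eightfold. The function name is hypothetical.

```python
import numpy as np

def downsample_labelmap(labels: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor downsampling of a 3-D label volume by integer
    striding. Preserves label IDs exactly and reduces memory by
    roughly factor**3."""
    return labels[::factor, ::factor, ::factor]

# Example: an 8x8x8 volume shrinks to 4x4x4, i.e. 1/8 of the voxels.
volume = np.zeros((8, 8, 8), dtype=np.uint8)
volume[2:6, 2:6, 2:6] = 1  # a labeled cube in the middle
small = downsample_labelmap(volume, factor=2)
print(small.shape)  # (4, 4, 4)
```

In practice one would resample with proper spacing adjustment (e.g. via Crop Volume) so the model geometry stays at the correct physical scale; the stride here only shows why the memory footprint drops so quickly.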
To address the crash, we might consider displaying a warning popup when the file size is too big, offering the user the option of discarding the data or attempting to load it anyway (explaining that attempting this may crash the application).
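The warning-popup idea boils down to a size check before loading. A minimal sketch in plain Python, assuming a hard-coded threshold (a real implementation would likely derive the limit from available physical memory, and the function name here is hypothetical, not Slicer API):

```python
import os
import tempfile

# Hypothetical threshold; a real implementation might derive this from
# the amount of free physical memory instead of hard-coding it.
MAX_SAFE_BYTES = 4 * 1024**3  # 4 GiB

def should_warn_before_loading(path: str, limit: int = MAX_SAFE_BYTES) -> bool:
    """Return True if the file is large enough that loading it may
    exhaust memory, so the user should be asked to confirm first."""
    return os.path.getsize(path) > limit

# Usage sketch: a 1 KiB temp file is well under the limit.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
    name = f.name
print(should_warn_before_loading(name))  # False
os.remove(name)
```

The interesting design question is what "too big" means: a fixed threshold is simple, but tying it to free RAM (and warning that swapping or a crash may follow) matches the reporter's 128 GB machine better than any single constant.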
|2016-04-20 13:33||matrimcauthon||New Issue|
|2016-04-20 13:33||matrimcauthon||Status||new => assigned|
|2016-04-20 13:33||matrimcauthon||Assigned To||=> nicole|
|2016-04-20 13:33||matrimcauthon||File Added: Slicer_25012_20160419_160124.log|
|2016-04-20 14:09||nicole||Note Added: 0013859|
|2016-04-20 15:11||matrimcauthon||Note Added: 0013860|
|2017-07-25 01:06||lassoan||Target Version||=> backlog|
|2017-07-25 01:06||lassoan||Note Added: 0014960|