mmv_im2im.utils package¶
Submodules¶
mmv_im2im.utils.basic_losses module¶
- class mmv_im2im.utils.basic_losses.PixelWiseCrossEntropyLoss(class_weights: List | None = None, ignore_index=None, one_hot_gt=False)[source]¶
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(input, target, weights=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
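Example (a minimal usage sketch; shapes are assumed from the one-hot expansion helper below, with input as NxCxSPATIAL logits and target as NxSPATIAL integer labels):
import torch
from mmv_im2im.utils.basic_losses import PixelWiseCrossEntropyLoss

# assumed shapes: logits N x C x H x W, target N x H x W with integer class ids
loss_fn = PixelWiseCrossEntropyLoss(class_weights=None, ignore_index=None)
logits = torch.randn(2, 3, 64, 64)
target = torch.randint(0, 3, (2, 64, 64))
loss = loss_fn(logits, target)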
- mmv_im2im.utils.basic_losses.expand_as_one_hot(input, C, ignore_index=None)[source]¶
Converts an NxSPATIAL label image to NxCxSPATIAL, where each label gets converted to its corresponding one-hot vector. It is assumed that the batch dimension is present.
- Parameters:
input (torch.Tensor) – 3D/4D input image
C (int) – number of channels/labels
ignore_index (int) – ignore index to be kept during the expansion
- Returns:
4D/5D output torch.Tensor (NxCxSPATIAL)
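Example (a small sketch expanding a batch of 2D label images):
import torch
from mmv_im2im.utils.basic_losses import expand_as_one_hot

labels = torch.randint(0, 3, (2, 64, 64))   # N x H x W integer labels
one_hot = expand_as_one_hot(labels, C=3)    # expected shape: 2 x 3 x 64 x 64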
mmv_im2im.utils.embedding_loss module¶
- class mmv_im2im.utils.embedding_loss.SpatialEmbLoss_2d(grid_y=1024, grid_x=1024, pixel_y=1, pixel_x=1, n_sigma=2, foreground_weight=10, use_costmap=False)[source]¶
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(prediction, instances, labels, center_images, costmaps=None, w_inst=1, w_var=10, w_seed=1)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class mmv_im2im.utils.embedding_loss.SpatialEmbLoss_3d(grid_z=32, grid_y=1024, grid_x=1024, pixel_z=1, pixel_y=1, pixel_x=1, n_sigma=3, foreground_weight=10, use_costmap=False)[source]¶
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(prediction, instances, labels, center_images, costmaps=None, w_inst=1, w_var=10, w_seed=1)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
mmv_im2im.utils.embedseg_utils module¶
- mmv_im2im.utils.embedseg_utils.fill_label_holes(lbl_img, **kwargs)[source]¶
Fill small holes in label image.
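Example (a hedged sketch on a toy 2D label image):
import numpy as np
from mmv_im2im.utils.embedseg_utils import fill_label_holes

lbl = np.zeros((64, 64), dtype=np.uint16)
lbl[10:30, 10:30] = 1
lbl[18:22, 18:22] = 0           # a small hole inside object 1
filled = fill_label_holes(lbl)  # the hole is expected to be re-filled with label 1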
- mmv_im2im.utils.embedseg_utils.generate_center_image(instance, center, ids, anisotropy_factor=1, speed_up=1)[source]¶
- mmv_im2im.utils.embedseg_utils.generate_center_image_2d(instance, center, ids)[source]¶
Generates a center_image which is one (True) for all center locations and zero (False) otherwise.
- Parameters:
instance (numpy array) – instance image containing unique ids for each object (YX), or present in a one-hot encoded style where each object is one in its own slice and zero elsewhere.
center (string) – One of ‘centroid’, ‘approximate-medoid’ or ‘medoid’.
ids (list) – Unique ids corresponding to the objects present in the instance image.
one_hot (boolean) – True (in this case, instance has shape DYX) or False (in this case, instance has shape YX).
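Example (a hedged sketch with a toy YX instance mask; ids excludes the background label 0):
import numpy as np
from mmv_im2im.utils.embedseg_utils import generate_center_image_2d

instance = np.zeros((64, 64), dtype=np.uint16)
instance[5:20, 5:20] = 1
instance[30:50, 30:50] = 2
ids = [1, 2]
centers = generate_center_image_2d(instance, "centroid", ids)
# `centers` is expected to be True only at one center pixel per object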
- mmv_im2im.utils.embedseg_utils.generate_center_image_3d(instance, center, ids, anisotropy_factor, speed_up)[source]¶
- mmv_im2im.utils.embedseg_utils.prepare_embedseg_cache(data_path: str | Path, cache_path: str | Path, data_cfg)[source]¶
- mmv_im2im.utils.embedseg_utils.prepare_embedseg_tensor(instance_batch: MetaTensor, spatial_dim: int, center_method: str = 'centroid')[source]¶
Parameters:¶
instance: instance segmentation masks of shape BYX or BZYX
spatial_dim: 2 or 3
crop_size: values from cropping of shape YX or ZYX
Return:¶
class_labels: MetaTensor
center_images: MetaTensor
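Example (a hedged sketch, assuming instance_batch is a MONAI MetaTensor of shape BYX with spatial_dim=2):
import torch
from monai.data import MetaTensor
from mmv_im2im.utils.embedseg_utils import prepare_embedseg_tensor

masks = torch.zeros(4, 128, 128, dtype=torch.int16)
masks[:, 20:40, 20:40] = 1   # one square object per image
instance_batch = MetaTensor(masks)
class_labels, center_images = prepare_embedseg_tensor(
    instance_batch, spatial_dim=2, center_method="centroid"
)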
mmv_im2im.utils.for_transform module¶
mmv_im2im.utils.gan_losses module¶
- class mmv_im2im.utils.gan_losses.pix2pix_HD(gan_loss, fm_loss, weights, **kwargs)[source]¶
Bases: pix2pix_HD_original
mmv_im2im.utils.gan_utils module¶
mmv_im2im.utils.lovasz_losses module¶
Lovasz-Softmax and Jaccard hinge loss in PyTorch. Maxim Berman, 2018, ESAT-PSI KU Leuven (MIT License).
- class mmv_im2im.utils.lovasz_losses.StableBCELoss[source]¶
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(input, target)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- mmv_im2im.utils.lovasz_losses.binary_xloss(logits, labels, ignore=None)[source]¶
- Binary cross entropy loss
logits: [B, H, W] Variable, logits at each pixel (between -infty and +infty)
labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
ignore: void class id
- mmv_im2im.utils.lovasz_losses.flatten_binary_scores(scores, labels, ignore=None)[source]¶
Flattens predictions in the batch (binary case), removing labels equal to ‘ignore’.
- mmv_im2im.utils.lovasz_losses.flatten_probas(probas, labels, ignore=None)[source]¶
Flattens predictions in the batch
- mmv_im2im.utils.lovasz_losses.iou(preds, labels, C, EMPTY=1.0, ignore=None, per_image=False)[source]¶
Array of IoU for each (non-ignored) class
- mmv_im2im.utils.lovasz_losses.iou_binary(preds, labels, EMPTY=1.0, ignore=None, per_image=True)[source]¶
IoU for the foreground class. Binary: 1 foreground, 0 background.
- mmv_im2im.utils.lovasz_losses.lovasz_grad(gt_sorted)[source]¶
Computes the gradient of the Lovasz extension w.r.t. sorted errors. See Alg. 1 in the paper.
- mmv_im2im.utils.lovasz_losses.lovasz_hinge(logits, labels, per_image=True, ignore=None)[source]¶
- Binary Lovasz hinge loss
logits: [B, H, W] Variable, logits at each pixel (between -infty and +infty)
labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
per_image: compute the loss per image instead of per batch
ignore: void class id
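Example (a minimal sketch with random per-pixel logits and binary masks):
import torch
from mmv_im2im.utils.lovasz_losses import lovasz_hinge

logits = torch.randn(2, 64, 64)                # B x H x W, unbounded
labels = (torch.rand(2, 64, 64) > 0.5).long()  # B x H x W, values 0 or 1
loss = lovasz_hinge(logits, labels, per_image=True)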
- mmv_im2im.utils.lovasz_losses.lovasz_hinge_flat(logits, labels)[source]¶
- Binary Lovasz hinge loss
logits: [P] Variable, logits at each prediction (between -infty and +infty)
labels: [P] Tensor, binary ground truth labels (0 or 1)
ignore: label to ignore
- mmv_im2im.utils.lovasz_losses.lovasz_softmax(probas, labels, only_present=False, per_image=False, ignore=None)[source]¶
- Multi-class Lovasz-Softmax loss
probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1)
labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)
only_present: average only on classes present in ground truth
per_image: compute the loss per image instead of per batch
ignore: void class labels
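Example (a minimal sketch; probabilities are obtained by applying softmax to raw logits first):
import torch
import torch.nn.functional as F
from mmv_im2im.utils.lovasz_losses import lovasz_softmax

logits = torch.randn(2, 4, 64, 64)         # B x C x H x W
probas = F.softmax(logits, dim=1)          # class probabilities in [0, 1]
labels = torch.randint(0, 4, (2, 64, 64))  # B x H x W, values in [0, C - 1]
loss = lovasz_softmax(probas, labels, only_present=True)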
- mmv_im2im.utils.lovasz_losses.lovasz_softmax_flat(probas, labels, only_present=False)[source]¶
- Multi-class Lovasz-Softmax loss
probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
labels: [P] Tensor, ground truth labels (between 0 and C - 1)
only_present: average only on classes present in ground truth
mmv_im2im.utils.metrics module¶
mmv_im2im.utils.misc module¶
- mmv_im2im.utils.misc.generate_dataset_dict(data: str | Path | Dict) List[Dict] [source]¶
Different options for “data”:
- one CSV (columns: source, target, cmap), then split
- one folder (_IM.tiff, _GT.tiff, _CM.tiff), then split
- a dictionary of two or three folders (Im, GT, CM), then split
- Returns:
a list of dicts; each dict contains 2 or 3 keys: “source_fn”, “target_fn”, and optionally “costmap_fn”
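Example (a hedged sketch; the folder path and file names are hypothetical):
from mmv_im2im.utils.misc import generate_dataset_dict

# a folder containing e.g. img_001_IM.tiff / img_001_GT.tiff / img_001_CM.tiff triplets
dataset = generate_dataset_dict("/path/to/train_data")
# e.g. [{"source_fn": ".../img_001_IM.tiff", "target_fn": ".../img_001_GT.tiff",
#        "costmap_fn": ".../img_001_CM.tiff"}, ...]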
- mmv_im2im.utils.misc.generate_dataset_dict_monai(data: str | Path | Dict) List[Dict] [source]¶
Different options for “data”:
- one CSV (columns: source, target, cmap), then split
- one folder (_IM.tiff, _GT.tiff, _CM.tiff), then split
- one folder with two or three subfolders (Im, GT, CM), then split
- a dictionary of train/val
- Returns:
a list of dicts; each dict contains 2 or more keys, such as “IM”, “GT”, and more (optional, e.g. “LM”, “CM”, etc.)
- mmv_im2im.utils.misc.generate_test_dataset_dict(data: str | Path, data_type: str | None = None) List [source]¶
Different options for “data”:
- one CSV
- one folder
- Returns:
a list of filenames
- class mmv_im2im.utils.misc.monai_bio_reader(**kwargs)[source]¶
Bases: ImageReader
- get_data(img) Tuple[ndarray, Dict] [source]¶
Extract data array and metadata from the loaded image and return them. This function must return two objects: the first is a numpy array of image data, the second is a dictionary of metadata.
- Args:
img: an image object loaded from an image file or a list of image objects.
- read(data: Sequence[str | PathLike] | str | PathLike)[source]¶
Read image data from specified file or files. Note that it returns a data object or a sequence of data objects.
- Args:
data: file name or a list of file names to read.
kwargs: additional args for the actual read API of 3rd-party libs.
- verify_suffix(filename: Sequence[str | PathLike] | str | PathLike) bool [source]¶
Verify whether the specified filename is supported by the current reader. This method should return True if the reader is able to read the format suggested by the filename.
- Args:
filename: file name or a list of file names to read. If a list of files, verify all the suffixes.
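Example (a hedged sketch of plugging the reader into MONAI’s LoadImaged transform; the file names are hypothetical):
from monai.transforms import LoadImaged
from mmv_im2im.utils.misc import monai_bio_reader

loader = LoadImaged(keys=["IM", "GT"], reader=monai_bio_reader())
sample = loader({"IM": "img_001_IM.tiff", "GT": "img_001_GT.tiff"})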
mmv_im2im.utils.model_utils module¶
- mmv_im2im.utils.model_utils.init_weights(net, init_type='kaiming', init_gain=0.02)[source]¶
Initialize network weights.
- Parameters:
net (network) – network to be initialized
init_type (str) – the name of an initialization method: normal | xavier | kaiming | orthogonal
init_gain (float) – scaling factor for normal, xavier and orthogonal.
‘normal’ was used in the original pix2pix and CycleGAN paper, but xavier and kaiming might work better for some applications. Feel free to experiment.
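Example (a minimal sketch applying Kaiming initialization to a small network):
import torch.nn as nn
from mmv_im2im.utils.model_utils import init_weights

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
init_weights(net, init_type="kaiming", init_gain=0.02)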