# Command line interface documentation
## Training

```
yapic train <network> <image_path> <label_path> [options]
```
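For example, to train a new `unet_2d` model on a set of tif images with labels from an Ilastik project (all paths here are placeholders):

```bash
yapic train unet_2d "my_images/*.tif" my_project.ilp
```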
## Prediction

```
yapic predict <network> <image_path> <output_path> [options]
```
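For example, to apply a trained model to new images and write the resulting probability maps to a results folder (paths are placeholders):

```bash
yapic predict my_model.h5 "new_images/*.tif" predictions/
```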
## Deployment: Export to DeepImageJ

```
yapic deploy <network> <image_path> <output_path> [options]
```
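For example, to export a trained `unet_2d` model together with a single example image (paths are placeholders):

```bash
yapic deploy my_model.h5 example_image.tif deployed_model/
```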
## Parameters

### network
Either a model file in h5 format to use a pretrained model, or a specific string to initialize a new model.

- Choose `unet_2d` or `unet_multi_z` to initialize a new model.
  - `unet_2d`: The original U-Net network as described by Ronneberger et al., with a zxy size of 1x572x572. You can train 2D images as well as 3D multichannel data with this model (e.g. z-stacks acquired with a confocal microscope). However, the model will be trained with single 2D slices of your 3D data.
  - `unet_multi_z`: A combination of 5 `unet_2d` models to process 3D data. It takes 5 z-slices as input to predict the slice in the middle.
- Use `path/to/my/pretrained_model.h5` to continue training a pretrained Keras model (see the example below).
- Only `unet_2d` models can be deployed to DeepImageJ.
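To continue training from an existing model file rather than initializing a new network, pass the h5 file in place of the network string (file names are placeholders):

```bash
yapic train path/to/my/pretrained_model.h5 "my_images/*.tif" "my_labels/*.tif"
```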
### image_path

Path to image files. You can use wildcards, e.g. `my_data/*.tif`. In deploy mode, define one specific tif image file as the example image.
#### Input image format

YAPiC supports tif and tiff files:

- RGB images
- Multichannel images
- Z-stacks
Especially in the case of multidimensional images: make sure to always convert your pixel images with Fiji before using YAPiC. Large amounts of image data can be conveniently converted with Fiji using batch processing.
#### train and predict mode

Define a folder with tiff or tif images (`path/to/my/images`) or a wildcard (`"path/to/my/images/*.tif"`). Don't forget the double quotes in case of wildcards!
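The quotes matter because an unquoted wildcard is typically expanded by the shell into many separate arguments before YAPiC runs, while a quoted wildcard is passed through as a single pattern. A minimal illustration (paths are placeholders):

```bash
# Quoted: yapic receives the pattern "my_images/*.tif" and resolves it itself.
yapic train unet_2d "my_images/*.tif" "my_labels/*.tif"
```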
#### deploy mode

`path/to/a/single/example_image.tif`

The example image is packed into the DeepImageJ bundled model. If you share your model with others, they can easily apply the model within DeepImageJ to this test image.
### label_path

Define a path to an Ilastik Project File (`.ilp`):

`path/to/my/ilastik_project.ilp`

or to label masks in tif format:

`path/to/my/labelfiles/`
`"path/to/my/labelfiles/*.tif"`
#### Ilastik Project Files

The images associated with your Ilastik project have to be identical to the tif images you define in the `image_path` argument.
#### Label masks in tif format

- The label images have to have identical dimensions in z, x, and y as the corresponding pixel images. They always have one channel. Pixel integer values define the class labels:
  - 0: no label
  - 1: class 1
  - 2: class 2
  - 3: class 3
  - etc.
- The label images have to have identical or similar names to the original pixel images defined in `image_path`, as shown in the examples below.
This works well: pixel and label images are located in different folders and have identical names:

```
pixel_image_data/
├── leaves_1.tif
├── leaves_2.tif
├── leaves_3.tif
└── leaves_4.tif

label_image_data/
├── leaves_1.tif
├── leaves_2.tif
├── leaves_3.tif
└── leaves_4.tif
```
This also works: pixel and label images are located in different folders and have similar names:

```
pixel_image_data/
├── leaves_1.tif
├── leaves_2.tif
├── leaves_3.tif
└── leaves_4.tif

label_image_data/
├── leaves_1_labels.tif
├── leaves_2_labels.tif
├── leaves_3_labels.tif
└── leaves_4_labels.tif
```
Especially in the case of multidimensional images: make sure to always convert your label masks in tif format with Fiji before using YAPiC. Large amounts of image data can be conveniently converted with Fiji using batch processing.
## Optional parameters
`-n --normalize=NORM`

Set pixel normalization scope [default: `local`].

- For minibatch-wise normalization choose `local_z_score` or `local`.
- For global normalization use `global_<min>+<max>` (e.g. `global_0+255` for 8-bit images and `global_0+65535` for 16-bit images).
- Set to `off` to deactivate.
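For example, to train with global normalization suitable for 8-bit images (paths are placeholders):

```bash
yapic train unet_2d "my_images/*.tif" "my_labels/*.tif" --normalize=global_0+255
```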
`--cpu`

Train using the CPU (not recommended, very slow).
`--gpu=VISIBLE_DEVICES`

Use specific GPUs. To use GPU 0, set `--gpu=0`. To use GPUs 2 and 3, set `--gpu=2,3`.
`-h --help`

Show documentation.

`--version`

Show version.
## Train options
`-e --epochs=MAX_EPOCHS`

Maximum number of epochs to train [default: `5000`].
`-a --augment=AUGMENT`

Set augmentation method for training [default: `flip`].

- Choose `flip` and/or `rotate` and/or `shear`.
- Use `+` to specify multiple augmentations (e.g. `flip+rotate`).
`-v --valfraction=VAL`

Fraction of images to be used for validation, between 0 and 1 [default: `0.2`].
`-f --file=CLASSIFIER`

Path to trained model [default: `model.h5`].
`--steps=STEPS`

Steps per epoch [default: `50`].
`--equalize`

Equalize label weights to promote the influence of less frequent labels.
`--csvfile=LOSSDATA`

Path to csv file for training loss data [default: `loss.csv`].
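Several train options can be combined in one call. A sketch of a fuller training run (all file names are placeholders):

```bash
yapic train unet_2d "pixel_image_data/*.tif" "label_image_data/*.tif" \
    --epochs=200 --augment=flip+rotate --valfraction=0.1 \
    --file=leaves_model.h5 --csvfile=leaves_loss.csv
```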
## Deploy options
`-s --size=MODELSIZE`

Size of the network to be exported. Larger networks are applied faster in DeepImageJ, but consume more RAM. There are three options [default: `middle`]:

- `small` (112 x 112 pixels)
- `middle` (224 x 224 pixels)
- `large` (368 x 368 pixels)
`--skip-predict`

Skip computation of the prediction image. By default, the model is applied to the example image and the resulting probability map is packed into the DeepImageJ bundled model. You can skip this step to make the deployment process faster.
### Metadata

There are several optional parameters to add metadata to the model (e.g. author information). This is of particular interest if you would like to publish the model on the model repository.

- `--author=AUTHOR`: Name of the model's authors [default: `n/a`]
- `--url=URL`: URL to the model publication [default: `http://`]
- `--credit=CREDIT`: [default: `n/a`]
- `--mdversion=MDVERSION`: Model version [default: `n/a`]
- `--reference=REFERENCE`: Publication reference of the model [default: `n/a`]
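A deployment call that bundles metadata might look like this (the author name, reference, and paths are placeholders):

```bash
yapic deploy leaves_model.h5 example_image.tif deployed_model/ \
    --size=large --author="Jane Doe" --reference="Doe et al. 2021"
```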