Processing step by step: start to finish with micapipe

In this section, you will find two examples covering the necessary steps to process a dataset with micapipe.

1. Download an open access dataset

The first step is to identify a dataset and to ensure that you have the necessary computational resources, including storage and processing power. In this example, we will focus on two datasets: the Human Connectome Project (HCP) and MICA-MICs. Other widely used repositories for BIDS-compliant datasets include OpenNeuro and the Canadian Open Neuroscience Platform (CONP).


To download the dataset, you first need to create an account and accept the open access terms. Additionally, HCP uses a third-party software called Aspera Connect to boost data transfer speed; further information about this software and its installation can be found on the HCP website. Once all the requirements are fulfilled, log in to the HCP database and select the data you would like to download, in this case the WU-Minn HCP Retest Data with the processing filter set to “Unprocessed”. We will use micapipe here to process the structural, resting-state fMRI, and diffusion (acq-dir97) data. This dataset requires about 350 GB of storage.
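Before starting the transfer, it is worth confirming that the destination drive actually has the required free space. A minimal sketch (the destination path and the 350 GB figure from above are the only inputs; adjust both to your setup):

```shell
# Check free space at the download destination against the ~350 GB the
# unprocessed HCP data requires (TARGET_DIR is an example path).
TARGET_DIR="."
REQUIRED_GB=350
AVAIL_GB=$(df -Pk "$TARGET_DIR" | awk 'NR==2 {print int($4/1024/1024)}')
echo "Available: ${AVAIL_GB} GB (need ${REQUIRED_GB} GB)"
[ "$AVAIL_GB" -ge "$REQUIRED_GB" ] || echo "Warning: not enough space for the HCP download"
```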

2. Converting to BIDS

The HCP dataset was created before the rise of BIDS. Here, we provide a custom-built script that transforms an HCP directory into BIDS format, using the metadata provided in the HCP S1200 release reference manual.

  1. Clone the micapipe-supplementary GitHub repository:

git clone
  2. Change into the functions directory:

cd micapipe-supplementary/functions
  3. Run the script, specifying the full path to the HCP data and the output BIDS directory:

./hcp2bids -in <full_path_to>/HCP_data -out <full_path_to>/HCP_bids
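After conversion, each subject folder should contain the standard BIDS modality subfolders (anat, dwi, func). A quick sanity check is to list them (a sketch; HCP_bids is the output directory from the command above):

```shell
# Print the modality subfolders created for each converted subject.
BIDS_DIR=HCP_bids
for sub in "$BIDS_DIR"/sub-*/; do
  [ -d "$sub" ] || continue
  echo "$(basename "$sub"): $(ls "$sub" | tr '\n' ' ')"
done
```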

3. Validating BIDS

At this point, both MICs and HCP are BIDS-conformant. However, any newly acquired dataset that you wish to make BIDS-compliant (see the BIDS specification) should be validated with the tools provided by BIDS, such as the BIDS-validator. Another example can be found in the tutorial From DICOMs to BIDS: mic2bids.
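As a lightweight illustration of the kind of rule the validator enforces, the sketch below tests filenames against a simplified version of the BIDS naming pattern (subject entity, optional key-value entities, suffix, extension). This is a rough approximation for intuition only, not a substitute for the official BIDS-validator:

```shell
# Simplified BIDS filename check: sub-<label>[_key-value...]_<suffix>.<ext>
# The real specification is far richer; this covers only the basic shape.
looks_bids() {
  echo "$1" | grep -Eq '^sub-[a-zA-Z0-9]+(_[a-z]+-[a-zA-Z0-9]+)*_[a-zA-Z0-9]+\.(nii|nii\.gz|json|bval|bvec|tsv)$'
}

looks_bids "sub-250932_acq-dir97_dir-LR_dwi.nii.gz" && echo "valid"
looks_bids "T1_weighted.nii" || echo "invalid (missing sub- entity)"
```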

4. Running micapipe

Once micapipe has been installed (see Installation), you can run the pipeline. From the main directory of the dataset, the command would be:

Running HCP for subject 250932

micapipe -bids HCP_bids -out derivatives -sub 250932 \
       -proc_structural \
       -proc_surf -freesurfer \
       -post_structural \
       -proc_dwi -dwi_acq dir97 \
            -dwi_main sub-250932/dwi/sub-250932_acq-dir97_dir-LR_dwi.nii.gz \
       -dwi_rpe sub-250932/dwi/sub-250932_acq-dir97_dir-RL_sbref.nii.gz \
       -SC -tracts 20M \
       -proc_func \
       -MPC -regSynth \
       -mainScanStr task-rest_dir-LR_run-2_bold \
       -func_rpe sub-250932/func/sub-250932_task-rest_dir-RL_run-1_bold.nii.gz \
       -NSR -noFIX \
       -GD \
       -QC_subj
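To process an entire dataset, the same flags can be reused in a loop over the subject folders. A batch sketch (the loop only prints each command; remove the leading `echo` to actually launch the jobs, and append the remaining module flags from the example above as needed):

```shell
# Print a micapipe call for every subject in the BIDS directory.
# Only -proc_structural is shown; add further module flags as required.
BIDS_DIR=HCP_bids
for sub_dir in "$BIDS_DIR"/sub-*/; do
  [ -d "$sub_dir" ] || continue
  sub=${sub_dir#"$BIDS_DIR"/sub-}   # strip the "sub-" prefix
  sub=${sub%/}                      # strip the trailing slash
  echo micapipe -bids "$BIDS_DIR" -out derivatives -sub "$sub" -proc_structural
done
```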

5. Visualize the QC report

The individual QC tool generates a PDF report with detailed information on each processing module, which can be used for rapid visualization of processing status, core registrations, and data matrices by parcellation scheme and module. The reports can be found under each subject's QC directory and opened with any browser.


The group-level QC generates a report of all completed and processed modules by subject. The report consists of a color-coded table with subjects as rows and pipeline modules as columns. The file is located under the micapipe output directory.
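Once several subjects have finished, the group report can be generated with the group-level QC flag. A sketch (here the flag is assumed to be `-QC`, in contrast to the per-subject `-QC_subj` above; check `micapipe -help` to confirm, and remove the leading `echo` to actually run it):

```shell
# Group-level QC: builds the color-coded subjects-by-modules table
# described above (echo lets the sketch run without micapipe installed).
echo micapipe -bids HCP_bids -out derivatives -QC
```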