An end-to-end tutorial, with code, for extracting fMRI timeseries from the ADNI dataset.
By the end of this tutorial, you will be able to extract fMRI timeseries at desired voxel locations. We will extract fMRI timeseries at the 160 Dosenbach ROIs spanning 6 classical brain networks, i.e. Default-mode (34), Fronto-parietal (21), Sensorimotor (33), Cingulo-opercular (32), Cerebellum (18), Occipital (22).
Link to scripts
Download the fMRI data along with its corresponding sMRI image. For this tutorial, I have used the ADNI dataset and have downloaded resting-state fMRI and MPRAGE images. A subject can have multiple visits, each contributing images. The downloaded images will be in a specific file structure and in DICOM format.
Convert the DICOM files to NIfTI files using the dicom2nifti Python library and save the images in another folder, given by outpath.
#script name: dcmtonii_CN.py
import dicom2nifti
import os

path = 'path/CN/ADNI'
outpath = 'path/CNnifti'
for subject in os.listdir(path):
    for modality in os.listdir(os.path.join(path, subject)):
        for visit in os.listdir(os.path.join(path, subject, modality)):
            for someid in os.listdir(os.path.join(path, subject, modality, visit)):
                inpath = os.path.join(path, subject, modality, visit, someid)
                out_path = os.path.join(outpath, subject, modality, visit)
                print(subject)
                if not os.path.exists(out_path):
                    os.makedirs(out_path)
                dicom2nifti.convert_directory(inpath, out_path, compression=True, reorient=True)
Alternatively, you can use the dcm2niix command-line package to achieve the same result.
module load pigz-2.4
out_dir="path/to/CN_nifti"
mkdir -p "$out_dir"
dcm2niix -f %i_%t -o "$out_dir" -z y path/CN/ADNI
# change the output folder accordingly; this will store all the NIfTI files
# inside out_dir, named with the subject id followed by the visit time, e.g. 002_S_0295_20110602075850
# check what the output folder structure will be when you do not specify the path
We will reorganise the files from CNnifti into another folder, CNStandard_name, with one folder per subject visit, each containing 2 NIfTI images: subjectid_visitdate_f and subjectid_visitdate_s, referring to the functional and structural MRI images respectively. There are multiple files for both the resting-state fMRI and the sMRI (MPRAGE), and the two scans are not acquired on the same day. The sMRI is only utilised for linear registration and to obtain masks of the white matter and CSF in the brain, so we use only the sMRI from the earliest visit by a subject, along with the fMRI from all visits. The first script handles the functional data and the second script handles the structural data.
#!/bin/bash
#script name : rename.sh
base_dir="/path/CNnifti"
out_dir="path/CNStandard_name"
for subject_dir in "$base_dir"/*; do
    if [[ -d "$subject_dir" ]]; then
        for subject_modality in "$subject_dir"/*; do
            if [[ -d "$subject_modality" ]]; then
                for subject_visit in "$subject_modality"/*; do
                    if [[ -d "$subject_visit" ]]; then
                        visit_date=$(basename "$subject_visit")
                        subject_name=$(basename "$subject_dir")
                        nifti_files=("$subject_visit"/*.nii.gz)
                        for nifti_file in "${nifti_files[@]}"; do
                            nifti_filename=$(basename "$nifti_file" .nii.gz)
                            if [[ $nifti_filename == *"resting"* ]]; then
                                newdirname=${subject_name}_${visit_date:0:10}
                                mkdir -p "$out_dir/$newdirname"
                                trimmed_filename="${newdirname}_f"
                                cp "$nifti_file" "$out_dir/$newdirname/$trimmed_filename.nii.gz"
                            fi
                        done
                    fi
                done
            fi
        done
    fi
done
#!/bin/bash
#script name : renames.sh
base_dir="/path/CNnifti"
out_dir="path/CNStandard_name"
for subject_dir in "$base_dir"/*; do
    if [[ -d "$subject_dir" ]]; then
        for subject_modality in "$subject_dir"/*; do
            if [[ -d "$subject_modality" ]]; then
                for subject_visit in "$subject_modality"/*; do
                    if [[ -d "$subject_visit" ]]; then
                        visit_date=$(basename "$subject_visit")
                        subject_name=$(basename "$subject_dir")
                        echo "$subject_name"
                        nifti_files=("$subject_visit"/*.nii.gz)
                        for nifti_file in "${nifti_files[@]}"; do
                            nifti_filename=$(basename "$nifti_file" .nii.gz)
                            if [[ $nifti_filename == *"mprage"* ]]; then
                                # copy this sMRI into every fMRI visit folder of the subject
                                for i in "$out_dir/$subject_name"*; do
                                    final_subject_name=$(basename "$i")
                                    trimmed_filename="${final_subject_name}_s"
                                    cp "$nifti_file" "$i/$trimmed_filename.nii.gz"
                                done
                                break
                            fi
                        done
                    fi
                done
            fi
        done
    fi
done
We will process the sMRI images stored as subjectid_visitdate_s using two tools from FSL, {link}
1. robustfov → reduces the field of view (FOV) of the image to remove the lower head and neck
2. bet2 → Brain Extraction Tool
The output of robustfov is subjectid_visitdate_rbrain, which acts as the input to bet2 and produces subjectid_visitdate_sbrain. A lower fractional intensity value (-f) is preferred, as it gives a larger brain-outline estimate; set it according to your dataset. {Script}
#!/bin/bash
#script name : bet_CN.sh
base_dir="path/CNStandard_name"
for subject_dir in "$base_dir"/*; do
    if [[ -d "$subject_dir" ]]; then
        subject_name=$(basename "$subject_dir")
        in_img="$subject_dir/${subject_name}_s.nii.gz"
        robust_img="$subject_dir/${subject_name}_rbrain"
        out_img="$subject_dir/${subject_name}_sbrain"
        robustfov -i "$in_img" -r "$robust_img"
        bet2 "$robust_img" "$out_img" -f 0.3
    fi
done
We will process the fMRI images stored as subjectid_visitdate_f using first-level (single-subject) preprocessing through the FEAT tool from FSL. Through FEAT we will delete the initial 10 volumes, use MCFLIRT motion correction, change the slice-timing correction for the ADNI3 dataset, perform brain extraction, perform 5 mm spatial smoothing, and perform highpass temporal filtering. We will also register (linear, 12 DOF) the fMRI data to its corresponding structural image, i.e. subjectid_visitdate_sbrain, and also to the MNI152 space. Using the FEAT GUI, perform this on one subject and click Save/Go. You will find a design.fsf file in the output.feat directory. This file has all the parameters required to run the FEAT analysis of that subject and allows you to run FEAT from the terminal. With a few changes to the design file, we can run the FEAT analysis on the other subjects from the terminal. We will copy this design.fsf file to each subject folder in CNStandard_name and change a few parameters in design.fsf through the sed command.
Change 002_S_2010_2011-01-22 to the subjectid_visitdate that you used to run FEAT to get the design file. Check a few design files manually to verify the substitutions.
#!/bin/bash
#script name : prepare.sh
designfile='path/CNStandard_name/design.fsf'  # path to the design file
base_dir="path/CNStandard_name"
for subject_dir in "$base_dir"/*; do
    if [[ -d "$subject_dir" ]]; then
        subject_name=$(basename "$subject_dir")
        echo "Processing subject: $subject_name"
        new_designfile="${subject_dir}/design.fsf"
        cp "$designfile" "$new_designfile"
        sed -i "s/002_S_2010_2011-01-22/$subject_name/g" "$new_designfile"
        sed -i "s/002_S_2010_2011-01-22_f/${subject_name}_f/g" "$new_designfile"
        sed -i "s/002_S_2010_2011-01-22_sbrain/${subject_name}_sbrain/g" "$new_designfile"
        echo "Design file copied and modified for $subject_name"
    fi
done
The FEAT analysis provides the preprocessed fMRI as the filtered_func_data.nii.gz file. The script to run the FEAT analysis for each subject is below.
#!/bin/bash
#script name : feat_CN.sh
base_dir="path/CNStandard_name"
for subject_dir in "$base_dir"/*; do
    if [[ -d "$subject_dir" ]]; then
        subject_name=$(basename "$subject_dir")
        echo "$subject_name"
        cd "$subject_dir" || continue
        if [ -e *sbrain.nii.gz ] && [ -e *f.nii.gz ]; then
            echo "found $subject_name"
            feat design.fsf
        else
            echo "NOT FOUND $subject_name"
        fi
    fi
done
We know the brain consists of white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF). Grey matter is composed of the soma, which houses cell organelles like mitochondria. Most studies consider only grey matter to contribute to neural activity, as it contains the synaptic junctions that account for the larger part of the brain's energy consumption detected by fMRI. The script below therefore treats the WM and CSF signals as noise: it segments the structural image, builds conservative WM and CSF masks in functional space, extracts their mean timeseries, regresses them out together with the motion parameters, and moves the residuals to standard space.
#!/bin/bash
#script name : afterfeat_CN.sh
base_dir="path/CNStandard_name"
for subject_dir in "$base_dir"/*.feat; do
    if [[ -d "$subject_dir" ]]; then
        subject_name=$(basename "$subject_dir")
        echo "$subject_name"
        cd "$base_dir/$subject_name" || continue
        # step 1: segment the structural image into CSF/GM/WM partial volume maps
        fast -t 1 -n 3 -H 0.1 -I 4 -l 20.0 -o reg/highres2standard reg/highres2standard.nii.gz
        # step 2: invert the func->standard transform and bring the CSF and WM maps into functional space
        convert_xfm -omat reg/invfunc2standard.mat -inverse reg/example_func2standard.mat
        flirt -in reg/highres2standard_pve_0.nii.gz -ref filtered_func_data.nii.gz -applyxfm -init reg/invfunc2standard.mat -out reg/highres2standard_csf_reg.nii.gz
        flirt -in reg/highres2standard_pve_2.nii.gz -ref filtered_func_data.nii.gz -applyxfm -init reg/invfunc2standard.mat -out reg/highres2standard_wm_reg.nii.gz
        # step 3: threshold the partial volume maps into conservative masks
        fslmaths reg/highres2standard_csf_reg.nii.gz -thr 0.95 reg/csf_mask_95.nii.gz
        fslmaths reg/highres2standard_wm_reg.nii.gz -thr 0.95 reg/wm_mask_95.nii.gz
        # step 4: extract the mean CSF and WM timeseries
        fslmeants -i filtered_func_data.nii.gz -o csf_with_noise.txt -m reg/csf_mask_95.nii.gz
        fslmeants -i filtered_func_data.nii.gz -o wm_with_noise.txt -m reg/wm_mask_95.nii.gz
        # step 5: combine the CSF, WM and motion parameters into one design matrix
        paste csf_with_noise.txt wm_with_noise.txt mc/prefiltered_func_data_mcf.par | tr -d "\t" > paraorig.txt
        Text2Vest paraorig.txt paraorig.mat
        # step 6: regress out the nuisance signals and keep the residuals
        fsl_glm -i filtered_func_data.nii.gz -d paraorig.mat --out_res=res_brain.nii.gz
        # step 7: register the residuals to standard space
        flirt -in res_brain.nii.gz -ref reg/standard.nii.gz -applyxfm -init reg/example_func2standard.mat -out res_brain_std.nii.gz
        # step 8: create an empty template image in standard space for building ROI masks
        fslmaths reg/standard.nii.gz -mul 0 -Tmin -bin roi_template.nii.gz
    fi
done
There are 2 spaces that we will be working in: the functional space (reference: fMRI → filtered_func_data.nii.gz) and the structural space (reference: sMRI → reg/highres2standard.nii.gz).
You can get the MNI coordinates of each of the 160 Dosenbach ROIs from the script below.
from nilearn import datasets

dosenbach = datasets.fetch_coords_dosenbach_2010()
rois = dosenbach['rois']
labels = dosenbach['networks']  # which network each ROI belongs to
We have the MNI coordinates of the ROIs belonging to each brain network in text files inside the DoschenbachROI folder.
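One caveat: fslmaths -roi (used below) expects voxel indices, while the Dosenbach coordinates from nilearn are in MNI millimetres. If your text files hold mm coordinates, convert them first using the template's affine; a minimal sketch of the maths (the affine below is a simplified stand-in for a 2 mm standard template, not the real MNI152 affine; read the real one from your image, e.g. with nibabel):

```python
import numpy as np

# Hypothetical mm -> voxel conversion: voxel = inv(affine) @ [x, y, z, 1].
# The affine is a simplified stand-in for a 2 mm standard-space template.
affine = np.array([[2., 0., 0., -90.],
                   [0., 2., 0., -126.],
                   [0., 0., 2., -72.],
                   [0., 0., 0., 1.]])
mni_mm = np.array([0., -52., 26.])  # an example posterior-cingulate coordinate
voxel = np.linalg.inv(affine) @ np.append(mni_mm, 1.0)
print(np.round(voxel[:3]).astype(int))  # [45 37 49]
```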
Based on the coordinates, mark a point in the roi_template image that we have created. Then create a sphere of 5 mm radius at this point and get the mean timeseries across the fMRI using fslmeants. Append the timeseries to a CSV file. This CSV file will have one column, with the timeseries from each ROI following one after another.
#!/bin/bash
#script name : ts_extract_CN.sh
base_dir="path/CNStandard_name"
for i in CB DMN FP OP CO SM; do
    file_path="path/DoschenbachROI/$i.txt"
    for subject_dir in "$base_dir"/*.feat; do
        if [[ -d "$subject_dir" ]]; then
            subject_name=$(basename "$subject_dir")
            echo "$subject_name"
            cd "$base_dir/$subject_name" || continue
            subject_name1=$(basename "$subject_dir" .feat)
            output_file="${subject_name1}_${i}.csv"
            counter=0
            while read -r line; do
                IFS=',' read -r centerx centery centerz <<< "$line"
                counter=$((counter+1))
                # mark a single voxel at the ROI centre
                fslmaths roi_template.nii.gz -mul 0 -add 1 -roi "$centerx" 1 "$centery" 1 "$centerz" 1 0 1 ACCpoint -odt float
                # grow it into a 5 mm sphere and binarise
                fslmaths ACCpoint -kernel sphere 5 -fmean ACCsphere_$counter -odt float
                fslmaths ACCsphere_$counter.nii.gz -bin ACCsphere_bin_$counter.nii.gz
                # mean timeseries within the sphere, appended to the network CSV
                roi_timeseries=$(fslmeants -i res_brain_std -m ACCsphere_bin_$counter)
                echo "$roi_timeseries" >> "$output_file"
            done < "$file_path"
            rm -r ACC*
        fi
    done
done
How can you get the timeseries for all ROIs at once? 🤔
All the CSV files containing the timeseries are in CNStandard_name; move them to another folder, RAWtime.
#!/bin/bash
base_dir="path/CNStandard_name"
for i in CB DMN FP OP CO SM; do
    out_path="path/RAWtime/CN/$i"
    mkdir -p "$out_path"
    for subject_dir in "$base_dir"/*.feat; do
        if [[ -d "$subject_dir" ]]; then
            subject_name=$(basename "$subject_dir")
            subject_name1=$(basename "$subject_dir" .feat)
            echo "$subject_name"
            cd "$base_dir/$subject_name" || exit 1
            output_file="${subject_name1}_${i}.csv"
            cp "$output_file" "$out_path"
            new_file_name="$subject_name1.csv"
            mv "${out_path}/${output_file}" "${out_path}/${new_file_name}"
        fi
    done
done
Reshape the CSV files from a single column into a matrix with as many rows as ROIs and as many columns as timepoints.
import pandas as pd
import numpy as np
import os

# number of ROIs in each network
networks = {
    'CB': 18,
    'CO': 32,
    'DMN': 34,
    'FP': 21,
    'OP': 22,
    'SM': 33,
}
for net, roi in networks.items():
    path = f'path/RAWtime/CN/{net}'
    if os.path.exists(path):
        final_path = f'path/FMRtimeseries/CN/{net}'
        os.makedirs(final_path, exist_ok=True)
        for csv_path in os.listdir(path=path):
            print(csv_path)
            try:
                df = pd.read_csv(os.path.join(path, csv_path), header=None)
                data = np.reshape(df.to_numpy(), (roi, -1))
                new_df = pd.DataFrame(data)
                new_df.to_csv(f"{final_path}/sub_{csv_path[:-4]}.csv", header=False, index=False)
            except ValueError:
                print('Missing', csv_path)
If you have reached this far, give yourself a pat on the back!!