====== Data processing, analysis and visualization with mrTools ======
This tutorial will guide you through data processing, analysis and visualization of your MRI data using [[:mrtools:overview|mrTools]].
====== Learning goals ======
After completing the tutorial you should be able to use mrTools to...
//...examine raw data (epi and T1 weighted anatomies)//\\
//...determine whether motion-compensation worked//\\
//...examine the alignment between different data sets//\\
//...examine gray-matter / white-matter segmentations made by FreeSurfer and view surfaces//\\
//...find the calcarine sulcus and make flat patches of visual cortex//\\
//...average data and run pRF analyses//\\
//...determine the location of V1//\\
//...concatenate data and run an event-related analysis//\\
====== Computer setup ======
You will need to download the git repositories for [[:mrtools:download|mrTools]] and [[:mgl:download|mgl]] and make sure they are in your Matlab path. Do the following (make sure to specify the full pathname to mrTools and mgl in the commands below if they are not in your current directory):
addpath(genpath('mrTools'));
addpath(genpath('mgl'));
If you are running the latest Matlab / MacOS version, there is some possibility that compiled files (.mexmaci64) will not be recognized and will cause problems. For this tutorial we do not need most of these files, so this should not be a big issue. You can check whether you have a problem by running the following:
>> mydisp('test')
test>>
If instead of printing out test (without a newline) it crashes, then you have two choices. The easiest is to just remove the precompiled file:
>> delete(which('mydisp'))
>> which mydisp
/Users/justin/proj/mgl/utils/mydisp.m
That will remove the compiled file and allow the m file to run instead, which should work on any system.
Your other option is to recompile mydisp.c. You will need to have [[https://developer.apple.com/xcode/|XCode]] (Apple's developer tools, which include various compilers and other tools) installed, and then you can do the following:
mglMake mydisp.c
mlrMake
====== Examine raw data ======
No matter what kind of experiment you do, you always want to look at the data you collect in as raw a format as you can. With neuroimaging this is even more important, because data are often processed with complex multi-step analyses, and you can lose track of which features of your data drive a result and get into real trouble. Luckily, with imaging data you can just look at your data, because, well, it's a bunch of images. So let's do that first!
First, start Matlab, cd to your data directory, and run the program mrLoadRet. It will bring up a viewer. The first thing to note is the "Group" drop-down at the top-left. We store data in different "groups" (folders on the hard drive) depending on what stage of analysis the data is in. We want to look at the "Raw" group - so:
//**Step 1: Select the "Raw" group**//
In MRI there are different images made with different contrasts (for example, T1, T2, or T2*-weighted images). For present purposes, just know that a typical anatomical image is "T1-weighted": gray matter is gray, white matter is white, and cerebrospinal fluid is black. You can take a look at that image by loading it as a "Base Anatomy". A note here: we can display summary statistics and data on many different kinds of base anatomies (we will see surfaces and flat maps in a second). When we do so, it is just for visualization - we transform the statistics on the fly to display on these base anatomies and always keep the raw data in its original format (resampling and interpolating only when necessary for visualization, so the original data are never altered). We collect a T1-weighted anatomical image every session; it is used for visualization and for alignment with a canonical anatomy and surfaces (as we will see below).
//**Step 2: Load the T1 anatomy**// Go to the menu item: File/Base Anatomy/Load and select the T1 weighted anatomy. It should be called something like "anat01_11954_2_1_T1w_9mm_sag.nii"
You should see the name of the anatomy appear in the Base drop-down and a picture of the brain appear. A few fun things to do here:
//**Step 3: Examine T1 anatomy**// Click on the "Multi" radio button. You should see 4 different views of the brain. If you click (make sure that Plots/Interrogate Overlay is **unchecked**) in any of the views (except the bottom-right one), it will update the other slices to show you that location in the brain. Explore!
===== Getting data from event-related analysis =====
If you run the eventRelatedPlot interrogator on a voxel (with Plots/Interrogate Overlay checked), it will save its data to a variable in the workspace:
(eventRelatedPlot) Saving data from figure to variable eventRelated
>> eventRelated
eventRelated =
d: [1x1 struct]
voxel: [1x1 struct]
stimvol: {1x3 cell}
roi: [1x1 struct]
This has data from both the voxel and the region of interest (if you had clicked on an ROI). The fields in the data structure should be pretty straightforward. For the voxel it looks like this:
>> eventRelated.voxel
ans =
vox: [61 64 15]
time: [0.75 2.25 3.75 5.25 6.75 8.25 9.75 11.25 12.75 14.25 15.75 17.25 18.75 20.25 21.75 23.25 24.75]
ehdr: [3x17 single]
ehdrste: [3x17 single]
r2: 0.3391862
tSeries: [760x1 double]
tSeriesByStim: {[19x17 double] [17x17 double] [18x17 double]}
vox is the voxel's location in the data. ehdr contains the estimated hemodynamic response for each condition (in this case 3 conditions) - i.e. your event-related analysis output for this voxel. ehdrste contains the standard error of those estimates, computed over repetitions of the stimulus. r2 is the variance accounted for by the fit. tSeries contains the time series from which this was computed. tSeriesByStim is that same time series sorted by condition (in this case there were 3 conditions, with 19, 17, and 18 repetitions each; tSeriesByStim has the time series broken up so that each cell contains the response from the start of each trial out to 17 volumes - the length of the computed event-related hdr). Note that these sorted time series do not deal with response overlaps - each is just the raw time series after each event. They will also contain NaNs for events without complete data (i.e. the scan ended before the response was finished).
If you have a region of interest, it will contain similar fields, looking like this:
>> eventRelated.roi
ans =
scm: [760x51 double]
hdrlen: 17
nhdr: 3
dim: [2 2 2 760]
volumes: [1x760 double]
ehdr: [3x17 double]
ehdrste: [3x17 double]
r2: 0.60908429155102
time: [0.75 2.25 3.75 5.25 6.75 8.25 9.75 11.25 12.75 14.25 15.75 17.25 18.75 20.25 21.75 23.25 24.75]
covar: [51x51 double]
roiName: 'lV4'
roi: [1x1 struct]
meanTimecourse: [1x760 double]
If you want the time series, it will be in the field roi:
>> eventRelated.roi.roi
ans =
branchNum: []
color: 'black'
coords: [4x636 double]
createdBy: ''
createdFromSession: 's037120180521'
createdOnBase: 's0371_left_occipital'
date: '23-May-2018 16:26:13'
displayOnBase: 's0371_left_occipital'
name: 'lV4'
notes: ''
sformCode: 1
subjectID: []
viewType: 'Volume'
vol2mag: [4x4 double]
vol2tal: []
voxelSize: [1 1 1]
xform: [4x4 double]
scanCoords: [3x46 double]
scanNum: 2
groupNum: 4
scan2roi: [4x4 double]
n: 46
tSeries: [46x760 double]
tSeriesByStim: {[19x46x17 double] [17x46x17 double] [18x46x17 double]}
Same format as the voxel, except you will see that tSeriesByStim has dimensions numRepeats x numVoxels x numTimePoints - where in this case the roi had 46 voxels.
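Given those dimensions, you can collapse tSeriesByStim yourself, e.g. to get one mean time course per condition for the whole ROI. A sketch (assuming the eventRelated.roi structure from above; nanmean is used because trials cut short at the end of a scan contain NaNs):

```matlab
% Average the ROI's sorted time series over repeats (dim 1) and voxels
% (dim 2) to get a conditions x timepoints matrix of mean responses
r = eventRelated.roi;
nConds = numel(r.tSeriesByStim);
meanResp = nan(nConds, r.hdrlen);
for iCond = 1:nConds
  % tSeriesByStim{iCond} is numRepeats x numVoxels x numTimePoints
  meanResp(iCond,:) = squeeze(nanmean(nanmean(r.tSeriesByStim{iCond},1),2));
end
plot(r.time, meanResp);
xlabel('Time (sec)'); ylabel('Response');
```

Remember that these are raw event-triggered averages, so unlike the deconvolved ehdr they do not account for overlap between successive responses.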
===== Get behavioral data =====
If you want to analyze the behavioral data, you can get it into the workspace for the scan the mrLoadRet viewer is currently displaying (i.e. go to the group and scan in the GUI that you want the data for) and do the following:
>> stimfile = viewGet(getMLRView,'stimfile')
stimfile =
[1x1 struct] [1x1 struct] [1x1 struct]
viewGet is a function that can get various things about the scan you are viewing (type viewGet without any arguments to see everything it can get), getMLRView accesses the current view being displayed, and stimfiles are the files that we save about your experiment. Note that it returned an array of 3 of these for this particular experiment, since there are 3 scans that were concatenated together, each with its own stimulus file. You can convert these into a format that is easier to parse by running getTaskParameters:
>> e = getTaskParameters(stimfile{1})
e =
nTrials: 20
trialVolume: [1 14 26 39 53 69 85 100 113 129 145 160 175 191 206 220 235 251 267 NaN]
trialTime: [1x20 double]
trialTicknum: [1909 3078 4157 5326 6585 8024 9463 10812 11981 13420 14860 16210 17559 18998 20348 21607 22956 24392 25832 31319]
trials: [1x20 struct]
blockNum: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20]
blockTrialNum: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
response: [1 1 1 1 2 1 1 1 1 2 2 1 1 1 2 2 1 1 1 NaN]
reactionTime: [1x20 double]
originalTaskParameter: [1x1 struct]
originalRandVars: [1x1 struct]
responseVolume: [10 23 35 48 62 78 94 109 122 138 154 169 184 200 215 229 244 260 276 NaN]
responseTimeRaw: [1x20 double]
randVars: [1x1 struct]
parameter: [1x1 struct]
>> e.parameter
ans =
numJitter: [2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
>> e.randVars
ans =
match: [1 1 1 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 0 0]
orientation: {1x20 cell}
orientationJitter: {1x20 cell}
baseOrientation: {1x20 cell}
Note that variables used in your experiment will show up in either e.parameter or e.randVars, depending on how they were coded. You can see the subject's response (which button was pressed) and the value each variable took on each trial. A NaN indicates no response. reactionTime is in seconds relative to the beginning of the trial segment in which the subject could respond. Remember that if you want to compute something like percent correct, you probably want to accumulate the data across all the scans (i.e. do a for loop over each stim file). To interpret the particular parameters and what constitutes a correct answer, consult with us about how your particular experiment was coded.
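The accumulate-across-scans loop might look like the following sketch. The mapping from the match variable to the "correct" button is purely hypothetical here (match=1 means button 1 is correct, match=0 means button 2) - check how your own experiment was actually coded before trusting the numbers:

```matlab
% Accumulate responses across all stim files to compute percent correct
stimfile = viewGet(getMLRView,'stimfile');
nCorrect = 0; nTrials = 0;
for iScan = 1:length(stimfile)
  e = getTaskParameters(stimfile{iScan});
  % HYPOTHETICAL coding: match=1 -> button 1 correct, match=0 -> button 2
  correctResponse = 2 - e.randVars.match;
  valid = ~isnan(e.response);   % drop trials with no response
  nCorrect = nCorrect + sum(e.response(valid) == correctResponse(valid));
  nTrials = nTrials + sum(valid);
end
percentCorrect = 100 * nCorrect / nTrials;
```

Looping over the full stimfile cell array like this avoids the common mistake of reporting performance from only the first scan.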
===== Displaying contrasts between conditions =====
You may want to display "contrasts" between conditions - i.e. color the brain according to which condition gave a larger response. One way to do that is to run a GLM and choose particular contrasts to display. It's point and click, but a bit involved because there are a lot of relevant settings. See [[:mrTools:documentationglm_v2|here]].
A simpler way is to compute something yourself and try to overlay it using the function mrDispOverlay. Here's a quick example, which takes your event related analysis and computes the difference in response for two conditions and displays that.
- **Get event-related data** Follow steps [[#Getting_data_from_event-related_analysis|above]] to get the eventRelated structure.
- **Compute difference for every voxel**
% First get the estimated responses for all voxels out of the structure
ehdr = eventRelated.d.ehdr;
size(ehdr)
ans =
92 92 54 3 17
The array is xDim x yDim x nSlices x nConditions x hdrlen in size (in this case a 92x92x54 epi image and 3 conditions computed for 17 time points). Let's just take the difference between the 1st and 2nd condition.
respDiff = nanmean(ehdr(:,:,:,1,:)-ehdr(:,:,:,2,:),5);
Note that we use nanmean, since some timepoints will have NaNs in them if the motion compensation had to interpolate from outside the volume (we also avoid calling the variable diff, which would shadow Matlab's built-in function). Also, this just averages across the whole response - you could try different things.
- **Display in the current open viewer**
mrDispOverlay(respDiff,2,'Concatenation',getMLRView);
Note that the 2 is for scan 2 and the group is 'Concatenation'. Once it is installed, it will show up in the viewer as the overlay mrDispOverlay.
- **Set overlay properties** You may wish to choose how to display it (i.e. what colormap and so forth) by going to the menu item Edit/Overlay/Edit Overlay and choosing parameters. In particular, it may be helpful to set the clip boundaries - i.e. so that only values within a range affect the display. I'll set these for this example to -2 and 2: